Grid Cell Path Integration For Movement-Based Visual Object Recognition

Created: 2022-09-08T23:43:20-05:00

Return to the Index

This card pertains to a resource available on the internet.

Path Integration: the position of a sensor, encoded via grid modules, is combined with features from the sensor. A movement of the sensor is predicted and performed, which yields a new set of sensory inputs. The loop alternates between issuing a "move in this direction" command and receiving the new encoded position plus the new sensory data, so the network learns features at positions.
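A minimal sketch of one path-integration step, assuming each grid module tracks the sensor's location as a 2D phase (the position modulo the module's period); the module scales are invented values and module orientations are omitted for simplicity:

```python
import numpy as np

SCALES = np.array([0.5, 0.7, 1.1])  # hypothetical grid module periods

def encode(position, scales=SCALES):
    """Encode a 2D position as one phase per grid module."""
    return (position[None, :] / scales[:, None]) % 1.0

def path_integrate(phases, movement, scales=SCALES):
    """Update each module's phase from the movement vector alone,
    without re-observing the absolute position."""
    return (phases + movement[None, :] / scales[:, None]) % 1.0

position = np.array([0.2, 0.9])
movement = np.array([0.3, -0.1])           # "move in this direction"
phases = encode(position)                  # current encoded position
phases = path_integrate(phases, movement)  # predicted new position
# The prediction matches a direct encoding of the new location:
assert np.allclose(phases, encode(position + movement))
```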

Grid cells with different scales and orientations can encode complex 3D positions in space as a sparse set of column activations (Fiete et al. 2008).
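A toy 1D illustration of why this code is both sparse and high-capacity: each module discretizes its phase into a few cells and activates exactly one, so only a handful of cells are active at once while the number of distinguishable positions grows with the product of the module sizes. The scales and cell counts here are invented:

```python
import numpy as np

SCALES = np.array([0.5, 0.7, 1.1])  # hypothetical module periods
CELLS = 10                          # cells per module (invented)

def active_cells(x):
    """Index of the single active cell in each module for position x."""
    phases = (x / SCALES) % 1.0
    return (phases * CELLS).astype(int)

# Only len(SCALES) cells are active at a time, yet the joint code can
# distinguish up to CELLS ** len(SCALES) positions.
print(active_cells(0.20))  # [4 2 1]
print(active_cells(0.25))  # [5 3 2]
```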

Extends path integration from the toy object recognition examples of previous papers to visual feature recognition. The sensor being moved is now an eyeball receiving visual features, and the sensory changes come from "saccades" in which the eye is moved around.
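A sketch of saccades as known displacements of a small fovea-like window over an image: the patch contents change with every move, while the displacement itself is what drives the path integration above. The array shapes and patch size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))  # stand-in for a visual scene
PATCH = 8                     # fovea-like window size

def glimpse(eye):
    """Extract the visual input at the current eye position."""
    r, c = eye
    return image[r:r + PATCH, c:c + PATCH]

eye = np.array([10, 20])
for _ in range(3):
    features = glimpse(eye)           # new sensory input
    saccade = rng.integers(-5, 6, 2)  # "move in this direction"
    eye = np.clip(eye + saccade, 0, 64 - PATCH)
```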

A convolutional neural network (an artificial neural network) is trained to detect visual features. The visual features are re-encoded into an SDR to feed the cortical network, which makes the predictions.
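One common way to re-encode a dense feature vector as an SDR is top-k sparsification: project the features and keep the k strongest responses as active bits. Whether the paper uses exactly this scheme is an assumption on my part, and the sizes below are arbitrary:

```python
import numpy as np

def to_sdr(features, n_bits=1024, k=20):
    """Project dense CNN features into n_bits and keep the top k
    responses as the active bits of a binary SDR."""
    rng = np.random.default_rng(42)  # fixed random projection
    proj = rng.standard_normal((n_bits, features.size))
    scores = proj @ features
    sdr = np.zeros(n_bits, dtype=np.uint8)
    sdr[np.argsort(scores)[-k:]] = 1  # winner-take-all over projections
    return sdr

features = np.random.default_rng(1).random(256)  # stand-in CNN output
sdr = to_sdr(features)
assert sdr.sum() == 20  # sparse: about 2% of bits active
```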

The network learns to identify objects by moving its eye sensor around the visual field until it is satisfied that it has identified the object.
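A schematic of that recognition-by-elimination loop, assuming each learned object is a set of (location, feature) pairs: every saccade's observation prunes the candidate set, and sensing stops once a single object remains. The object models here are toy stand-ins, not the paper's representations:

```python
objects = {
    "mug":    {((0, 0), "handle"), ((1, 0), "rim")},
    "kettle": {((0, 0), "handle"), ((1, 0), "spout")},
}

def recognize(observations):
    """Narrow candidates with each (location, feature) observation."""
    candidates = set(objects)
    for obs in observations:  # one saccade per observation
        candidates = {name for name in candidates
                      if obs in objects[name]}
        if len(candidates) == 1:  # satisfied: unique match remains
            return candidates.pop()
    return candidates  # still ambiguous after all saccades

# The first glimpse is ambiguous; the second disambiguates.
print(recognize([((0, 0), "handle"), ((1, 0), "rim")]))  # -> "mug"
```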