Understanding how cortical networks process information to encode features of the external world, and in turn influence behavioral performance, is a fundamental problem in systems neuroscience. When we look at a visual scene, cells in the visual cortex respond to information streaming in along millions of nerve fibers that carry a pixelated image of the world, from which the cortex constructs an internal representation.
At the first stages of cortical processing, primary visual areas create a fragmented picture of the world, dominated, for instance, by responses to small oriented line segments that highlight edges.
This representation is subsequently passed on to higher-level areas of the visual cortex, e.g., the inferotemporal cortex, where neurons typically respond to more complex image features, such as shapes and objects.
It has long been suggested that the visual cortex is a passive filter that creates a static, spatial representation of a visual scene via hierarchical processing of sensory inputs from the two eyes. However, visual perception is dynamic, not static. Indeed, when we look at a visual scene, we move our eyes several times a second, continually updating the visual information transmitted to the visual cortex and possibly creating a dynamic representation of the visual world.