A detailed protocol to analyze object selectivity of parieto-frontal neurons involved in visuomotor transformations is presented.
Previous studies have shown that neurons in parieto-frontal areas of the macaque brain can be highly selective for real-world objects, disparity-defined curved surfaces, and images of real-world objects (with and without disparity) in a manner similar to that described in the ventral visual stream. In addition, parieto-frontal areas are believed to convert visual object information into appropriate motor outputs, such as the pre-shaping of the hand during grasping. To better characterize object selectivity in the cortical network involved in visuomotor transformations, we provide a battery of tests intended to analyze the visual object selectivity of neurons in parieto-frontal regions.
Human and non-human primates share the capacity to perform complex motor actions, including object grasping. To perform these tasks successfully, the brain must transform intrinsic object properties into motor commands. This transformation relies on a sophisticated network of dorsal cortical areas located in parietal and ventral premotor cortex1,2,3 (Figure 1).
From lesion studies in monkeys and humans4,5, we know that the dorsal visual stream – originating in primary visual cortex and directed towards posterior parietal cortex – is involved in both spatial vision and the planning of motor actions. However, the majority of dorsal stream areas are not devoted to a unique type of processing. For instance, the anterior intraparietal area (AIP), one of the end stage areas in the dorsal visual stream, contains a variety of neurons that fire not only during grasping6,7,8, but also during the visual inspection of the object7,8,9,10.
Similar to AIP, neurons in area F5, located in the ventral premotor cortex (PMv), also respond during visual fixation and object grasping, which is likely to be important for the transformation of visual information into motor actions11. The anterior portion of this region (subsector F5a) contains neurons responding selectively to three-dimensional (3D, disparity-defined) images12,13, while the subsector located in the convexity (F5c) contains neurons characterized by mirror properties1,3, firing both when an animal performs or observes an action. Finally, the posterior F5 region (F5p) is a hand-related field, with a high proportion of visuomotor neurons responsive to both observation and grasping of 3D objects14,15. Next to F5, area 45B, located in the inferior ramus of the arcuate sulcus, may also be involved in both shape processing16,17 and grasping18.
Testing object selectivity in parietal and frontal cortex is challenging, because it is difficult to determine which features these neurons respond to and what the receptive fields of these neurons are. For example, if a neuron responds to a plate but not to a cone, which feature of these objects is driving this selectivity: the 2D contour, the 3D structure, the orientation in depth, or a combination of many different features? To determine the critical object features for neurons that respond during object fixation and grasping, it is necessary to employ various visual tests using images of objects and reduced versions of the same images.
A sizeable fraction of the neurons in AIP and F5 responds not only to the visual presentation of an object, but also when the animal grasps this object in the dark (i.e., in the absence of visual information). Such neurons may not respond to an image of an object that cannot be grasped. Hence, the visual and motor components of the response are intimately connected, which makes it difficult to investigate the neuronal object representation in these regions. Since visuomotor neurons can only be tested with real-world objects, we need a flexible system for presenting different objects at different positions in the visual field and at different orientations if we want to determine which features are important for these neurons. This can only be achieved by means of a robot capable of presenting different objects at different locations in visual space.
This article intends to provide an experimental guide for researchers interested in the study of parieto-frontal neurons. In the following sections, we will provide the general protocol used in our laboratory for the analysis of grasping and visual object responses in awake macaque monkeys (Macaca mulatta).
All technical procedures were performed in accordance with the National Institutes of Health's Guide for the Care and Use of Laboratory Animals and EU Directive 2010/63/EU and were approved by the Ethical Committee of KU Leuven.
1. General Methods for Extracellular Recordings in Awake Behaving Monkeys
2. Investigating Object Selectivity in Dorsal Areas
Figure 5 plots the responses of an example neuron recorded from area F5p, tested with four objects: two different shapes (a sphere and a plate) shown in two different sizes (6 and 3 cm). This particular neuron responded not only to the large sphere (optimal stimulus; upper left panel), but also to the large plate (lower left panel). In comparison, the responses to the smaller objects were weaker (upper and lower right panels).
Figure 6 shows an example neuron recorded in AIP tested during both VGG and passive fixation. This neuron was responsive not only during grasping (VGG task, panel A) but also to the visual presentation of 2D images of objects on a screen (passive fixation including a picture of the objects used in the grasping task; Figure 6B). Note that the preferred stimulus in the passive fixation task is not necessarily the object-to-be-grasped, but another 2D picture with which the animal had no previous grasping experience (a tangerine). Figure 6C shows the RF of this cell when tested with the preferred and nonpreferred images. An example of the responses obtained in the reduction test is shown in Figure 6D. This example neuron responded to the smallest fragments in the test (1–1.5°).
Figure 1. Parieto-frontal network involved in visual object processing and motor planning and execution. Posterior parietal area AIP projects to areas PFG, 45B and F5a, and then to F5p, M1 and, finally, to the spinal cord.
Figure 2. Decision tree for testing object selectivity: experimental protocol used to test visuomotor responses in our neuronal populations. The VGG task can be followed by either a MGG or a visual task (passive fixation). Two different passive fixation tasks can be considered, depending on the region of interest: passive fixation of real-world objects and passive fixation of 2D images of objects. The primate visuomotor system has evolved to support the manipulation of real objects, not images of objects6,13; therefore, regions with a motor-dominant component are predicted to respond significantly more strongly to the vision of real, graspable objects. However, shape selectivity can only be explored in detail using a reduction approach, which is more easily implemented with images of the objects. In the 2D passive fixation task, a positive answer (indicating visual selectivity for the images of the objects) signifies that the neuronal response can be refined even further, which leads us to run a new experimental task exploring lower-level features of the stimulus. In contrast, a negative response indicates the end of the experiment.
Figure 3. Visuomotor setups. (A). Carousel setup. Left panel: carousel design (invisible to the monkey). Right panel: detail of the carousel plate showing the object-to-be-grasped and the monkey's hand approaching it. With a vertical rotating carousel containing up to six objects, we can present different objects to the monkey. (B). Robot setup. Left panel: front view of the robot setup. Right panel: detail of the four different objects presented by the robot (small/large plate; small/large sphere). A second, more sophisticated way to present the objects during single-cell recordings is by means of a commercial robot arm equipped with a gripper. For A and B, the sequence of events during visual fixation is identical, except that in the carousel setup the object is illuminated from above, whereas in the robot setup the object is illuminated from within. In the grasp phase, the tasks differ slightly: in the carousel setup, the GO cue is indicated by the dimming of the laser, whereas in the robot setup, the fixation LED switches off completely. Another difference concerns the specific functionality of the two setups. While the carousel setup can mainly be used to test object selectivity at one unique position in visual space, with the robot setup we can program the distance at which the object-to-be-grasped is presented, its position in the frontoparallel plane, or even induce perturbations in object orientation during grasping (e.g., a rapid 45° rotation of the object during the reaching phase). Both systems allow the presentation of different target objects with different grasping properties (size, volume, etc.), requiring different grasping strategies (power grip versus precision grip). (C). Example of a VGG task (carousel setup). 1. Fixation: In our carousel VGG task, the monkey places its contralateral hand on a resting position device to initiate the sequence.
Next, a laser is projected on the object-to-be-grasped, which remains in total darkness. 2. Light on: If the animal maintains stable fixation within an electronically defined window surrounding the object for a specific duration, the object is illuminated by an external light source (visual phase of the task). Finally, after a variable delay, the laser dims, serving as a visual GO cue and indicating to the monkey to initiate the grasping movement. The animal is rewarded for reaching, grasping, and lifting the object (detected by fiber-optic cables).
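The trial sequence described above can be summarized as a simple state machine. The sketch below is purely illustrative (timings, event names, and hardware-polling callables are assumptions, not the actual laboratory control software):

```python
import random

# Hypothetical timings (ms); the actual values are set per experiment.
FIX_HOLD_MS = 500             # stable fixation required before light onset
DELAY_RANGE_MS = (500, 1500)  # variable delay before the GO cue

def run_vgg_trial(hand_on_rest, fixation_held, grasp_detected):
    """Run one visually guided grasping (VGG) trial.

    The three arguments are callables polling the behavioral hardware
    (resting-position device, eye tracker, fiber-optic grasp detectors).
    Returns (outcome, list of events in order).
    """
    events = ["laser on object (total darkness)"]
    if not hand_on_rest():
        return "aborted: hand off resting position", events
    # 1. Fixation: gaze must stay within the electronic window.
    if not fixation_held(FIX_HOLD_MS):
        return "aborted: fixation break", events
    events.append("light on (visual phase)")
    # After a variable delay, the laser dims: the GO cue.
    delay = random.randint(*DELAY_RANGE_MS)
    events.append(f"GO cue after {delay} ms delay (laser dims)")
    # Reach, grasp, and lift, detected by the fiber-optic cables.
    if not grasp_detected():
        return "aborted: no grasp", events
    events.append("reward")
    return "correct", events
```

In a real control system, each callable would block on hardware input with a timeout; here they simply stand in for the behavioral contingencies of the task.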
Figure 4. Visual stimuli. (A). Example of the stimulus set used to assess visual shape selectivity. (B). From the original surface images in A, we produce progressively simplified versions of the visual stimuli (3D surfaces, 2D surfaces, silhouettes, outlines, and fragments). By dividing the outline into smaller segments, we search for the Minimum Effective Shape Feature (MESF) evoking visual selectivity.
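Fragment stimuli for the MESF search can be generated by splitting the outline into equal arcs and retaining only part of each arc at its original position on the contour. The following is a hypothetical sketch; the sampling density and visible fraction are assumptions, not the published stimulus parameters:

```python
import numpy as np

def outline_fragments(contour, n_fragments, visible_fraction=0.5):
    """Split a closed contour into n_fragments arcs and keep only a
    visible portion of each arc, at its original outline position.

    contour: (N, 2) array of x, y points sampled along the outline.
    Returns a list of point arrays, one per visible fragment.
    """
    n = len(contour)
    seg = n // n_fragments                       # points per arc
    keep = max(1, int(seg * visible_fraction))   # visible points per arc
    fragments = []
    for i in range(n_fragments):
        start = i * seg
        fragments.append(contour[start:start + keep])
    return fragments

# Example: a circular outline divided into 4, 8, and 16 fragments,
# mirroring the progressive reduction in the standard test.
theta = np.linspace(0, 2 * np.pi, 160, endpoint=False)
circle = np.column_stack([np.cos(theta), np.sin(theta)])
fragment_sets = {n: outline_fragments(circle, n) for n in (4, 8, 16)}
```

Keeping each fragment at its original contour position (rather than recentering it) matches the "standard" reduction test described in Figure 6D, where position and shape feature are confounded by design.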
Figure 5. VGG task tested with the robot setup (robot setup in Figure 3B). We presented four different objects at the same position in depth: large sphere (top left), large plate (bottom left), small plate (bottom right), and small sphere (top right). The neuronal response is aligned to light onset on the object (bin size of 20 ms).
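Peristimulus-time histograms such as those in Figure 5 are built by aligning spike times to the event of interest (here, light onset) and binning at 20 ms. A minimal sketch, assuming spike times in seconds and a hypothetical analysis window:

```python
import numpy as np

def psth(spike_trains, align_times, window=(-0.5, 1.0), bin_ms=20):
    """Peristimulus-time histogram: trial-averaged firing rate.

    spike_trains: list of 1-D arrays of spike times (s), one per trial.
    align_times: alignment event time (e.g., light onset) per trial (s).
    window: analysis window around the event (s); an assumed value here.
    Returns (bin_centers, rate_hz).
    """
    bin_s = bin_ms / 1000.0
    edges = np.arange(window[0], window[1] + bin_s, bin_s)
    counts = np.zeros(len(edges) - 1)
    for spikes, t0 in zip(spike_trains, align_times):
        # Re-reference each trial's spikes to its own event time.
        counts += np.histogram(np.asarray(spikes) - t0, bins=edges)[0]
    # Convert summed counts to spikes/s, averaged across trials.
    rate = counts / (len(spike_trains) * bin_s)
    centers = edges[:-1] + bin_s / 2
    return centers, rate
```

The same function serves for grasping-aligned activity by passing movement-onset times as `align_times` instead of light onset.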
Figure 6. AIP neuron recorded using VGG (grasping on the carousel) and passive fixation tasks. (A). Activity during grasping. Peristimulus-time histogram showing the response of an AIP neuron (neuronal response aligned to light onset on the object). (B). Visual response of the same neuron when tested with a wide set of 2D images of real-world objects, including an image of the object-to-be-grasped (in two different orientations: horizontal versus vertical). (C). Receptive field mapping. 2D interpolated maps representing the average response to the preferred (left) and nonpreferred (right) stimuli for the neuron in A and B when tested with 3° images of objects. To build the maps, we quantified the net neuronal response (by subtracting the baseline activity) obtained at 35 different positions on the screen (indicated by the intersections of the dashed grid lines; [0,0]: central position; +6° azimuth: contralateral), spaced 2° apart and covering both the ipsi- and contralateral visual hemifields. Color indicates the strength of the neural response (varying between 0 and the maximum response of the cell). (D). Color tree plot representing the normalized net responses (firing rate minus baseline activity) of the same neuron as in Figure 6A-C to the preferred and nonpreferred stimulus (outlines of the preferred and nonpreferred image) in the standard reduction test (reduction test with the fragments located at the positions occupied in the original outline shape; 4-fragment stimuli, first row; 8-fragment stimuli, second row; 16-fragment stimuli, third row). The color in each circle indicates the response magnitude (1 = 28 spikes/s).
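An interpolated receptive-field map of the kind shown in Figure 6C can be built from the net responses at the 35 grid positions. The sketch below is a hypothetical illustration (the 5 x 7 grid layout, bilinear interpolation, and upsampling factor are assumptions, not the published analysis):

```python
import numpy as np

def rf_map(raw_rates, baseline, grid_shape=(5, 7), upsample=10):
    """Smoothed receptive-field map from net responses on a screen grid.

    raw_rates: firing rates (spikes/s) at the 35 positions, ordered
        row by row on the 5 x 7 grid (2 deg spacing assumed).
    baseline: pre-stimulus firing rate to subtract (net response).
    Returns a 2-D array, bilinearly upsampled and normalized to the
    maximum net response of the cell (0 to 1).
    """
    net = np.asarray(raw_rates, float).reshape(grid_shape) - baseline
    net = np.clip(net, 0, None)  # display range: 0 to max response
    # Simple bilinear interpolation onto a finer grid.
    rows = np.linspace(0, grid_shape[0] - 1, grid_shape[0] * upsample)
    cols = np.linspace(0, grid_shape[1] - 1, grid_shape[1] * upsample)
    r0 = np.floor(rows).astype(int)
    c0 = np.floor(cols).astype(int)
    r1 = np.minimum(r0 + 1, grid_shape[0] - 1)
    c1 = np.minimum(c0 + 1, grid_shape[1] - 1)
    fr = (rows - r0)[:, None]
    fc = (cols - c0)[None, :]
    top = net[r0][:, c0] * (1 - fc) + net[r0][:, c1] * fc
    bot = net[r1][:, c0] * (1 - fc) + net[r1][:, c1] * fc
    m = top * (1 - fr) + bot * fr
    return m / m.max() if m.max() > 0 else m
```

In practice one would plot the result with a perceptually uniform colormap and overlay the dashed grid lines marking the tested positions; a scattered-data interpolator (e.g., from SciPy) is an alternative when the tested positions are not on a regular grid.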
A comprehensive approach to the study of the dorsal stream requires a careful selection of behavioral tasks and visual tests: visual and grasping paradigms can be employed either in combination or separately, depending on the specific properties of the region.
In this article, we provide examples of neural activity recorded in both AIP and F5p in response to a subset of visual and motor tasks, but very similar responses can be observed in other frontal areas such as area 45B and F5a.
We propose two experimental setups to investigate the neural representation of objects during grasping. With a vertical rotating carousel (Figure 3A) containing up to six objects, we can present different objects to the monkey. The rotating carousel permits the presentation of different target objects (differing in shape, size, volume, etc.), requiring different grasping strategies (power grip versus precision grip).
A second and more sophisticated way to present objects during single-cell recordings is by means of a commercial robot arm and gripper (Figure 3B). In this case, the robot initiates the trial by grasping an object (Figure 3B) and moving it to a specific position in space in total darkness, while the monkey's hand remains on the resting position device. Apart from this, the sequence of events is identical in the two setups. However, the use of a robot allows a wide manipulation of experimental parameters (the distance at which the object is presented, its position in the frontoparallel plane, or the orientation of the object). Finally, as shown in the right panel of Figure 3B, the robot can also be programmed to grasp different objects (a plate and a sphere in our case).
This experimental approach allows us to determine which object features drive visuomotor neurons that respond to object observation during grasping. However, this approach also has limitations. With every test, some neurons will be excluded from further testing (e.g., no responses to images of the objects, no contour selectivity), so that the conclusions of the experiment can only pertain to a subset of all the neurons showing task-related activity during grasping. However, in our previous studies8, the large majority (83%) of neurons showing visual responses to object observation during grasping also responded selectively to images of the objects, and the large majority of the latter neurons (90%) were also selective for contour versions of these images. Therefore, our testing protocol may be appropriate for a very large fraction of all visually responsive neurons in parietal and frontal cortex.
Some visuomotor neurons, most likely in more motor-related subsectors in the frontal cortex such as area F5p, may only respond to objects in the context of a grasping task, and never respond to the images of the objects (even with binocular disparity) presented on a display. We can nevertheless investigate the properties of this subpopulation of neurons using the robot. With this experimental setup, we can present the objects at different locations in the frontoparallel plane during passive fixation (analogous to a RF test), at different 3D orientations and at different distances from the animal, and we can combine saccadic eye movements towards the object with object grasping21.
Our intention is not to provide a single or rigid experimental protocol for the study of parieto-frontal neurons, but to underline the necessity of a comprehensive and dynamic approach, with the tasks and tests specifically designed for the neurons under study. Regarding visual selectivity, for instance, our protocol can be easily adapted for the study of other visual properties of neurons responding to objects. For example, we followed a very similar approach when investigating 3D selectivity in F5a12 and AIP neurons13 during grasping. We also combined grasping execution and detailed visual testing with videos of actions when investigating action observation responses in AIP22. In the same way, many other experimental tasks, not included here, could also be added to our protocol depending on the scientific question to be addressed. These tasks include the study of both purely physical characteristics of the stimulus (e.g., stimulus size) and cognitive aspects such as stimulus familiarity23 or biological relevance (a preference for biologically relevant shapes such as faces24).
Further studies in these areas will provide a better understanding of the network and will allow us to refine the type of protocols to be used.
The authors have nothing to disclose.
We thank Inez Puttemans, Marc De Paep, Sara De Pril, Wouter Depuydt, Astrid Hermans, Piet Kayenbergh, Gerrit Meulemans, Christophe Ulens, and Stijn Verstraeten for technical and administrative assistance.
Name | Company | Catalog Number | Comments |
Grasping robot | GIBAS Universal Robots | UR-6-85-5-A | Robot arm equipped with a gripper |
Carousel motor | Siboni | RD066/20 MV6, 35×23 F02 | Motor to be implemented in a custom-made vertical carousel. It allows the rotation of the carousel. |
Eye tracker | SR Research | EyeLink II | Infrared camera system sampling at 500 Hz |
Filter | Wavetek Rockland | 852 | Electronic filters remove a signal's unwanted frequency components. |
Preamplifier | BAK ELECTRONICS, INC. | A-1 | The Model A-1 reduces input capacitance and noise pickup and allows impedance testing of metal microelectrodes |
Electrodes | FHC | UEWLEESE*N4G | Metal microelectrodes (* = Impedance, to be chosen by the researcher) |
CRT monitor | Vision Research Graphics | M21L-67S01 | The CRT monitor is equipped with a fast-decay P46-phosphor operating at 120 Hz |
Ferroelectric liquid crystal shutters | Display Tech | FLC Shutter Panel; LV2500P-OEM | The shutters operate at 60 Hz in front of the monkeys and are synchronized to the vertical retrace of the monitor |