Investigating Object Representations in the Macaque Dorsal Visual Stream Using Single-unit Recordings

Published: August 01, 2018
doi: 10.3791/57745

Summary

A detailed protocol to analyze object selectivity of parieto-frontal neurons involved in visuomotor transformations is presented.

Abstract

Previous studies have shown that neurons in parieto-frontal areas of the macaque brain can be highly selective for real-world objects, disparity-defined curved surfaces, and images of real-world objects (with and without disparity), in a manner similar to that described in the ventral visual stream. In addition, parieto-frontal areas are believed to convert visual object information into appropriate motor outputs, such as the pre-shaping of the hand during grasping. To better characterize object selectivity in the cortical network involved in visuomotor transformations, we provide a battery of tests intended to analyze the visual object selectivity of neurons in parieto-frontal regions.

Introduction

Human and non-human primates share the capacity to perform complex motor actions, including object grasping. To perform these actions successfully, the brain has to transform intrinsic object properties into motor commands. This transformation relies on a sophisticated network of dorsal cortical areas located in the parietal and ventral premotor cortex1,2,3 (Figure 1).

From lesion studies in monkeys and humans4,5, we know that the dorsal visual stream – originating in the primary visual cortex and directed towards the posterior parietal cortex – is involved in both spatial vision and the planning of motor actions. However, most dorsal stream areas are not devoted to a single type of processing. For instance, the anterior intraparietal area (AIP), one of the end-stage areas of the dorsal visual stream, contains a variety of neurons that fire not only during grasping6,7,8, but also during the visual inspection of an object7,8,9,10.

Similar to AIP, neurons in area F5, located in the ventral premotor cortex (PMv), also respond during visual fixation and object grasping, which is likely to be important for the transformation of visual information into motor actions11. The anterior portion of this region (subsector F5a) contains neurons responding selectively to three-dimensional (3D, disparity-defined) images12,13, whereas the subsector located on the convexity (F5c) contains neurons with mirror properties1,3, firing both when an animal performs an action and when it observes one. Finally, the posterior F5 region (F5p) is a hand-related field, with a high proportion of visuomotor neurons responsive to both the observation and the grasping of 3D objects14,15. Next to F5, area 45B, located in the inferior ramus of the arcuate sulcus, may also be involved in both shape processing16,17 and grasping18.

Testing object selectivity in parietal and frontal cortex is challenging, because it is difficult to determine which features these neurons respond to and what the receptive fields of these neurons are. For example, if a neuron responds to a plate but not to a cone, which feature of these objects is driving this selectivity: the 2D contour, the 3D structure, the orientation in depth, or a combination of many different features? To determine the critical object features for neurons that respond during object fixation and grasping, it is necessary to employ various visual tests using images of objects and reduced versions of the same images.

A sizeable fraction of the neurons in AIP and F5 responds not only to the visual presentation of an object, but also when the animal grasps this object in the dark (i.e., in the absence of visual information). Such neurons may not respond to an image of an object that cannot be grasped. Hence, the visual and motor components of the response are intimately connected, which makes it difficult to investigate the neuronal object representation in these regions. Since visuomotor neurons can only be tested with real-world objects, a flexible system for presenting different objects at different positions in the visual field and at different orientations is needed to determine which features are important for these neurons. The latter can only be achieved by means of a robot capable of presenting different objects at different locations in visual space.

This article intends to provide an experimental guide for researchers interested in the study of parieto-frontal neurons. In the following sections, we will provide the general protocol used in our laboratory for the analysis of grasping and visual object responses in awake macaque monkeys (Macaca mulatta).

Protocol

All technical procedures were performed in accordance with the National Institutes of Health Guide for the Care and Use of Laboratory Animals and EU Directive 2010/63/EU, and were approved by the Ethical Committee of KU Leuven.

1. General Methods for Extracellular Recordings in Awake Behaving Monkeys

  1. Train the animals to perform the visual and motor tasks required to address your specific research question. Ensure that the animal is able to flexibly switch between tasks within the same recording session in order to test each neuron extensively and obtain a better understanding of the features driving the neural response (Figures 2 and 3). 
    1. Train the animal in Visually-Guided Grasping (VGG; grasping ‘in the light’) to evaluate the visuomotor components of the response. Note: Regardless of the task chosen, gradually restrict fluid intake starting at least three days before the training phase. 
      1. Restrain the monkey’s head for the whole duration of the experimental session.
      2. In the first sessions, hold the hand contralateral to the recording chamber at the resting position and help the animal to reach and grasp the object, giving manual reward after each attempt.
      3. Place the monkey’s hand back on the resting position at the end of each trial.
      4. Every few trials, release the hand of the monkey, and wait a few seconds to observe if the animal initiates the movement spontaneously.
      5. Apply manual reward whenever the monkey reaches towards the object.
      6. When the animal has acquired the reaching phase correctly, help it to lift (or pull) the object and reward manually.
      7. As in 1.1.1.4 and 1.1.1.5, release the monkey’s hand, and wait a few seconds to observe if the animal initiates the movement spontaneously. Give reward whenever the movement is performed correctly.
      8. Correct the reaching, hand position, and wrist orientation as many times as necessary during the procedure.
      9. Repeat the steps above until the animal performs the sequence automatically.
      10. Load the automatic task. The animal gets rewarded automatically when it performs the reach and grasp movements for a predetermined time.
      11. Gradually increase the holding time of the object.
      12. Introduce the laser that projects the fixation point at the base of the object. Then add the eye tracker to monitor the eye position around the object-to-be-grasped. 
    2. Train the animal in Memory-Guided Grasping (MGG) to investigate the motor component of the response, unaffected by the visual response to the stimulus.
      1. Restrain the monkey’s head.
      2. Follow the same steps as described for the VGG task, making sure that the animal maintains fixation on the laser within an electronically defined window during the task. In this version of the task, the light goes off at the end of the fixation period.
    3. Train the monkey in Passive Fixation to address visual responsiveness and shape selectivity. 
      1. Restrain the monkey’s head.
      2. Present the visual stimuli to the monkey using either a CRT (Passive fixation of 3D stimuli) or an LCD monitor (Passive Fixation of 2D stimuli).
      3. Present a fixation spot at the center of the screen, superimposed on the visual stimuli.
      4. Reward the animal after each presentation of the stimulus and gradually increase the fixation period until it reaches the duration required by the task.
  2. Perform surgery using sterile tools, drapes, and gowns.
    1. Anesthetize the animal with ketamine (15 mg/kg, intramuscularly) and medetomidine hydrochloride (0.01-0.04 mL/kg intramuscularly) and confirm the anesthesia regularly by checking the animal’s response to stimuli, heart rate, respiration rate and blood pressure. 
    2. Maintain general anesthesia (propofol, 10 mg/kg/h intravenously) and administer oxygen through a tracheal tube. Use a lanolin-based ointment to prevent eye dryness while the animal is under anesthesia.
    3. Provide analgesia using 0.5 mL of buprenorphine (0.3 mg/mL, intravenously). If the heart rate increases during surgery, an extra dose can be administered. 
    4. Implant an MRI-compatible head post with ceramic screws and dental acrylic. Perform all survival surgeries under strict aseptic conditions. For adequate maintenance of the sterile field, use disposable sterile gloves, masks, and sterile instruments. 
    5. Guided by anatomical magnetic resonance imaging (MRI; Horsley-Clarke coordinates), make a craniotomy above the area of interest and implant the recording chamber on the monkey’s skull. Use a standard recording chamber for single-unit extracellular recordings, or a multielectrode microdrive for the simultaneous recording of multiple neurons.
    6. After the surgery, discontinue the intravenous administration of propofol and wait until spontaneous breathing resumes. Do not leave the animal unattended until it has regained consciousness, and return the animal to its social group only after complete recovery.
    7. Provide post-operative analgesia as recommended by the institutional veterinarian; for example, meloxicam (5 mg/mL, intramuscularly).
    8. Wait 6 weeks after the surgery before starting the experiments. This allows better anchorage of the head post to the skull and ensures that the animal has fully recovered from the intervention.
  3. Localize the recording area using MRI (for single unit extracellular recordings) and computed tomography (CT; for multielectrode recordings).
    1. Fill glass capillaries with a 2% copper sulfate solution and insert them into a recording grid.
    2. Perform structural MRI (slice thickness: 0.6 mm). 
  4. Monitor the neural activity.
    1. Use tungsten microelectrodes with an impedance of 0.8-1 MΩ.
    2. Insert the electrode through the dura using a 23 G stainless steel guide tube and a hydraulic microdrive. 
    3. For spike discrimination, amplify the neural activity and band-pass filter it between 300 and 5,000 Hz. 
    4. For local field potential (LFP) recordings, amplify the signal and band-pass filter it between 1 and 170 Hz (a filtering sketch follows this list).
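The band-pass settings above can be prototyped offline before being implemented in the acquisition hardware. Below is a minimal sketch in Python (SciPy), assuming a hypothetical wideband sampling rate of 25 kHz; the zero-phase Butterworth design and all variable names are illustrative choices, not the filters used by the actual recording system.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 25_000  # Hz; hypothetical wideband sampling rate (not specified in the protocol)

def bandpass(signal, low_hz, high_hz, fs=FS, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs)
    return filtfilt(b, a, signal)

wideband = np.random.randn(2 * FS)           # placeholder: 2 s of raw signal
spike_band = bandpass(wideband, 300, 5_000)  # 300-5,000 Hz band for spike discrimination
lfp_band = bandpass(wideband, 1, 170)        # 1-170 Hz band for LFP analyses
```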
  5. Monitor the eye signal.
    1. Adjust an infrared camera in front of the animal’s eyes to obtain an adequate image of the pupil and of the corneal reflection. 
    2. Use the infrared-based camera system to sample the pupil position at 500 Hz (a sketch of the fixation-window test used in the tasks below follows this list).
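To make the electronically defined fixation window used throughout the following tasks concrete (a +/- 2.5° window held for 500 ms, with the pupil sampled at 500 Hz), here is a minimal offline sketch; the function and array names are hypothetical, and real task-control software would evaluate this sample-by-sample in a real-time loop.

```python
import numpy as np

SAMPLE_RATE_HZ = 500    # pupil position sampling rate (from the protocol)
WINDOW_HALF_DEG = 2.5   # half-width of the square fixation window (+/- 2.5°)
REQUIRED_MS = 500       # how long fixation must be held before the task advances

def fixation_held(eye_x_deg, eye_y_deg, fix_x_deg, fix_y_deg):
    """Return True if gaze stayed inside the window for the last REQUIRED_MS."""
    n_required = int(REQUIRED_MS * SAMPLE_RATE_HZ / 1000)
    inside = (np.abs(eye_x_deg - fix_x_deg) <= WINDOW_HALF_DEG) & \
             (np.abs(eye_y_deg - fix_y_deg) <= WINDOW_HALF_DEG)
    return inside.size >= n_required and bool(inside[-n_required:].all())
```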

2. Investigating Object Selectivity in Dorsal Areas 

  1. Perform Visually-Guided Grasping (VGG).
    1. Choose the right grasping setup depending on the goal of research: carousel setup or robot setup (Figure 3). 
    2. For the carousel setup, run the VGG task:
      1. Let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
      2. After a variable time (intertrial interval: 2,000-3,000 ms), project a red laser (fixation point) at the base of the object (distance: 28 cm from the monkey’s eyes). If the animal maintains its gaze inside an electronically-defined fixation window (+/- 2.5°) for 500 ms, illuminate the object from above with a light source. 
      3. After a variable delay (300-1500 ms), dim the laser (visual GO cue), instructing the monkey to lift the hand from the resting position and to reach, grasp, and hold the object for a variable interval (holding time: 300-900 ms). 
      4. Whenever the animal performs the whole sequence correctly, reward it with a drop of juice.
    3. Use a similar task sequence for the robot setup.
      1. As for the carousel setup, let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
      2. After a variable time (intertrial interval: 2,000-3,000 ms), turn on the LED (fixation point) on the object (visible from within; distance: 28 cm from the monkey’s eyes). Again, if the animal maintains its gaze inside an electronically-defined fixation window (+/- 2.5°) for 500 ms, illuminate the object from within with a white light source.
      3. After a variable delay (300-1500 ms), switch off the LED (visual GO cue), instructing the monkey to lift the hand from the resting position and to reach, grasp, and hold the object for a variable interval (holding time: 300-900 ms). 
      4. Whenever the animal performs the whole sequence correctly, reward it with a drop of juice.
    4. During the task, quantify the performance of the monkey, paying special attention to the timing. Measure both the time elapsed between the GO signal and the onset of the hand movement (reaction time) and the time between the start of the movement and the lifting of the object (grasping time), as shown in the sketch below.
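As an illustration, both measures can be derived from the event timestamps logged by the task controller. The sketch below assumes hypothetical event names ('go_cue', 'hand_off_rest', 'object_lift'); the actual event codes depend on the recording system.

```python
def trial_timing(events):
    """Compute reaction time and grasping time (s) from one trial's event timestamps.

    `events` maps hypothetical event names to timestamps: 'go_cue' (dimming of the
    laser or LED offset), 'hand_off_rest' (hand leaves the resting position, detected
    by fiber-optic cables), and 'object_lift' (object lifted or pulled).
    """
    reaction_time = events["hand_off_rest"] - events["go_cue"]
    grasping_time = events["object_lift"] - events["hand_off_rest"]
    return reaction_time, grasping_time

# Example trial: reaction time = 0.240 s, grasping time = 0.380 s
rt, gt = trial_timing({"go_cue": 1.250, "hand_off_rest": 1.490, "object_lift": 1.870})
```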
  2. Perform Memory-Guided Grasping (MGG; ‘Grasping in the dark’). Use the MGG task to determine if neurons are visuomotor or motor-dominant. 
    Note: The sequence is similar to that described for the VGG task, but the object is grasped in total darkness.
    1. Identical to the VGG task, let the monkey place the hand contralateral to the recorded hemisphere in the resting position in complete darkness to initiate the sequence. 
    2. After a variable time (intertrial interval: 2,000-3,000 ms), turn on a red laser/LED to indicate the fixation point (at the base of the object for the carousel setup, at the center of the object for the robot setup; distance: 28 cm from the monkey’s eyes). If the animal maintains its gaze inside an electronically-defined fixation window (+/- 2.5°) for 500 ms, illuminate the object.
    3. After a fixed time (400 ms), switch off the light. 
    4. After a variable delay period (300-1500 ms) following light offset, dim/switch off the fixation point (GO cue) to instruct the monkey to lift the hand and reach, grasp, and hold the object (holding time: 300-900 ms). 
    5. Whenever the animal performs the whole sequence correctly, give a drop of juice as a reward. 
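The MGG trial structure can be summarized by its timing parameters. The sketch below draws the randomized epoch durations for one trial using the values given in the steps above; it is a schematic of the trial schedule under those assumptions, not the actual task-control code.

```python
import random

# Timing parameters from the protocol (ms)
ITI_MS = (2000, 3000)    # intertrial interval
FIXATION_MS = 500        # required fixation before the object is illuminated
LIGHT_MS = 400           # fixed object-illumination epoch (MGG only)
DELAY_MS = (300, 1500)   # delay in total darkness before the GO cue
HOLD_MS = (300, 900)     # required object-holding time

def draw_mgg_trial_schedule():
    """Draw the (partly randomized) epoch durations for one MGG trial."""
    return {
        "intertrial": random.uniform(*ITI_MS),
        "fixation": FIXATION_MS,
        "light_on": LIGHT_MS,        # object visible, then darkness again
        "delay": random.uniform(*DELAY_MS),
        "holding": random.uniform(*HOLD_MS),
    }
```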
  3. Perform Passive fixation. As for the VGG task, choose the most appropriate setup (carousel or robot setup) depending on the goal of the research. 
    Note: Two different passive fixation tasks can be performed: passive fixation of real-world objects (using the objects-to-be-grasped in the carousel and robot setups) and passive fixation of 3D/2D images of objects.
    1. Perform passive fixation of real-world objects. 
      1. Present the fixation point (a red laser projected at the base of the object in the carousel setup, or a red LED in the robot setup).
      2. If the animal maintains its gaze inside an electronically-defined fixation window (+/- 2.5°) for 500 ms, illuminate the object for 2,000 ms. 
      3. If the animal maintains its gaze within the window for 1,000 ms, reward it with a drop of juice. 
    2. Perform passive fixation of 3D/2D images of objects.
      1. Present all visual stimuli on a black background (luminance of 8 cd/m²) using a monitor (resolution of 1,280 × 1,024 pixels) equipped with a fast-decay P46 phosphor and operated at 120 Hz (viewing distance: 86 cm).
      2. In the 3D tests, present the stimuli stereoscopically by alternating the left- and right-eye images on the CRT monitor, in combination with two ferroelectric liquid crystal shutters. Position these shutters in front of the monkey’s eyes, operating at 60 Hz and synchronized to the vertical retrace of the monitor. 
      3. Start the trial by presenting a small square in the center of the screen (fixation point; 0.2° × 0.2°). If the eye position remains within an electronically defined 1° square window (much smaller than that used for real-world objects) for at least 500 ms, present the visual stimulus on the screen for a total time of 500 ms. 
      4. When the monkey maintains a stable fixation until the stimulus offset, reward it with a drop of juice. 
      5. For an adequate study of shape selectivity, run a comprehensive battery of tests with 2D images during the passive fixation task, in the following sequence.
      6. Run a Search test. Test the visual selectivity of the cell using a wide set of images (surface images; Figure 4A), including pictures of the object that is grasped in the VGG task. For this and all subsequent visual tasks, compare the image evoking the strongest response (termed the ‘preferred image’) to a second image to which the neuron responds weakly (termed the ‘nonpreferred image’). If the neuron under study also responds to the images of objects, search for the specific stimulus components driving the cell’s responsiveness (Contour test, Receptive Field test, and Reduction test).
      7. Run a Contour test. From the original surface images of real objects (2D or 3D images containing texture, shading, and perspective), obtain progressively simplified versions of the same stimulus shape (silhouettes and outlines; Figure 4B). Collect at least 10 trials per condition in order to determine whether the neuron prefers the original surface, the silhouette, or the outline of the original shape.
      8. Run a Receptive Field (RF) test. To map the RF of a neuron, present the images of objects at different positions on a display (in this experiment, 35 positions; stimulus size of 3°), covering the central visual field19,20. To collect enough stimulus repetitions at all possible positions in a reasonable time, reduce the stimulus duration (flashed stimuli; stimulus duration: 300 ms, intertrial interval: 300 ms).
      9. Run a Reduction test with contour fragments presented at the center of the RF to identify the Minimum Effective Shape Feature (MESF). Generate the set of stimuli in Photoshop by cropping the contour of each of the original contour shapes along the main axes (Figure 4B). Define the MESF as the smallest shape fragment evoking a response that is at least 70% of the intact outline response and not significantly smaller than that response8 (see the analysis sketch after this list). 
      10. For a better estimation of position dependency (the effect of stimulus position on fragment selectivity), run two different tests: a Reduction test with the fragments located at the position they occupied in the original outline shape, and a Reduction test with the fragments at the center of mass of the shape. 
      11. At this stage, run a new RF mapping using the MESF.
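The MESF criterion above can be applied offline to the trial-by-trial firing rates. The sketch below is one possible implementation; the unpaired t-test used for the 'not significantly smaller' requirement is an assumption, since the protocol does not prescribe a specific statistical test.

```python
import numpy as np
from scipy.stats import ttest_ind

def find_mesf(outline_trials, fragment_trials, fragment_sizes_deg, alpha=0.05):
    """Identify the Minimum Effective Shape Feature (MESF).

    outline_trials: per-trial firing rates for the intact outline.
    fragment_trials: dict mapping fragment id -> per-trial firing rates.
    fragment_sizes_deg: dict mapping fragment id -> fragment size (deg).
    """
    outline_mean = np.mean(outline_trials)
    qualifying = []
    for frag_id, rates in fragment_trials.items():
        meets_70_percent = np.mean(rates) >= 0.7 * outline_mean
        _, p_value = ttest_ind(rates, outline_trials)
        not_significantly_smaller = p_value >= alpha or np.mean(rates) >= outline_mean
        if meets_70_percent and not_significantly_smaller:
            qualifying.append(frag_id)
    if not qualifying:
        return None  # the neuron requires the intact outline
    return min(qualifying, key=lambda f: fragment_sizes_deg[f])  # smallest qualifying fragment
```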

Representative Results

Figure 5 plots the responses of an example neuron recorded from area F5p, tested with four objects: two different shapes (a sphere and a plate) shown in two different sizes (6 and 3 cm). This particular neuron responded not only to the large sphere (optimal stimulus; upper left panel), but also to the large plate (lower left panel). In comparison, the responses to the smaller objects were weaker (upper and lower right panels).

Figure 6 shows an example neuron recorded in AIP and tested during both VGG and passive fixation. This neuron was responsive not only during grasping (VGG task, panel A) but also to the visual presentation of 2D images of objects on a screen (passive fixation, including a picture of the object used in the grasping task; Figure 6B). Note that the preferred stimulus in the passive fixation task is not necessarily the object-to-be-grasped, but can be another 2D picture with which the animal has no previous grasping experience (a tangerine). Figure 6C shows the RF of this cell when tested with the preferred and nonpreferred images. An example of the responses obtained in the Reduction test is shown in Figure 6D. This example neuron responded to the smallest fragments in the test (1-1.5°).
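For reference, interpolated RF maps like those in Figure 6C can be reproduced offline from the net responses at the 35 tested positions. The sketch below assumes a hypothetical 7 × 5 grid spaced 2° apart (consistent with the caption of Figure 6) and uses SciPy's griddata for the interpolation; the actual grid layout and interpolation method may differ.

```python
import numpy as np
from scipy.interpolate import griddata

# Hypothetical 7 x 5 grid of tested positions spaced 2 deg apart (35 positions)
azimuths = np.repeat(np.arange(-6, 7, 2), 5)   # deg; +6° azimuth = contralateral
elevations = np.tile(np.arange(-4, 5, 2), 7)   # deg
net_responses = np.random.rand(35) * 30        # placeholder net responses (spikes/s),
                                               # i.e., firing rate minus baseline

# Interpolate onto a fine grid to obtain a smooth 2D RF map, as in Figure 6C.
fine_az, fine_el = np.meshgrid(np.linspace(-6, 6, 121), np.linspace(-4, 4, 81))
rf_map = griddata((azimuths, elevations), net_responses,
                  (fine_az, fine_el), method="cubic")
```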

Figure 1
Figure 1. Parieto-frontal network involved in visual object processing and motor planning and execution. Posterior parietal area AIP projects to areas PFG, 45B and F5a, and then to F5p, M1 and, finally, to the spinal cord.

Figure 2
Figure 2. Decision tree for testing object selectivity: experimental protocol used to test visuomotor responses in our neuronal populations. The VGG task can be followed by either an MGG task or a visual task (passive fixation). Two different passive fixation tasks can be considered depending on the region of interest: passive fixation of real-world objects and passive fixation of 2D images of objects. The primate visuomotor system has evolved to support the manipulation of real objects, not images of objects6,13; therefore, regions with a motor-dominant component are predicted to be significantly more responsive to the sight of real, graspable objects. However, shape selectivity can only be explored in detail using a reduction approach, which is more easily implemented with images of the objects. In the 2D passive fixation task, a positive outcome (indicating visual selectivity for the images of the objects) signifies that the neuronal response can be refined even further, leading to a new experimental test exploring lower-level features of the stimulus. In contrast, a negative outcome indicates the end of the experiment.

Figure 3
Figure 3. Visuomotor setups. (A). Carousel setup. Left panel: carousel design (invisible to the monkey). Right panel: detail of the carousel plate showing the object-to-be-grasped and the monkey’s hand approaching it. With a vertical rotating carousel containing up to six objects, we can present different objects to the monkey. (B). Robot setup. Left panel: front view of the robot setup. Right panel: detail of the four different objects presented by the robot (small/large plate; small/large sphere). A second and more sophisticated way to present the objects during single-cell recordings is by means of a commercial robot arm equipped with a gripper. For A and B, the sequence of events during visual fixation is identical, except that in the carousel setup the object is illuminated from above, whereas in the robot setup the object is illuminated from within. In the grasp phase, the tasks differ slightly: in the carousel setup, the GO cue is indicated by the dimming of the laser, whereas in the robot setup, the fixation LED switches off completely. Another difference concerns the specific functionality of the two setups. While the carousel setup is mainly used to test object selectivity at one unique position in visual space, with the robot setup we can program the distance at which the object-to-be-grasped is presented and its position in the frontoparallel plane, or even induce perturbations in object orientation during grasping (e.g., a rapid 45° rotation of the object during the reaching phase). Both systems allow the presentation of different target objects with different grasping properties (size, volume, etc.), requiring different grasping strategies (power grip versus precision grip). (C). Example of a VGG task (carousel setup). 1. Fixation: In our carousel VGG task, the monkey places its contralateral hand on a resting-position device to initiate the sequence. Next, a laser is projected on the object-to-be-grasped, which remains in total darkness. 2. Light on: If the animal maintains stable fixation within an electronically defined window surrounding the object for a specific duration, the object is illuminated by an external light source (visual phase of the task). Finally, after a variable delay, the laser dims, serving as the visual GO cue and indicating to the monkey to initiate the grasping movement. The animal is rewarded for reaching, grasping, and lifting the object (detected by fiber-optic cables).

Figure 4
Figure 4. Visual stimuli. (A). Example of the stimulus set used to assess visual shape selectivity. (B). From the original surface images in A, we produce progressively simplified versions of the visual stimuli (3D surfaces, 2D surfaces, silhouettes, outlines, and fragments). By dividing the outline into smaller segments, we search for the Minimum Effective Shape Feature (MESF) evoking visual selectivity.

Figure 5
Figure 5. VGG task tested with the robot setup (Figure 3B). We presented four different objects at the same position in depth: large sphere (top left), large plate (bottom left), small plate (bottom right), and small sphere (top right). The neuronal response is aligned to light onset in the object (bin size of 20 ms).

Figure 6
Figure 6. AIP neuron recorded using the VGG (grasping on the carousel) and passive fixation tasks. (A). Activity during grasping. Peristimulus-time histogram showing the response of an AIP neuron (neuronal response aligned to light onset on the object). (B). Visual response of the same neuron when tested with a wide set of 2D images of real-world objects, including an image of the object-to-be-grasped (in two different orientations: horizontal versus vertical). (C). Receptive field mapping. 2D interpolated maps representing the average response to the preferred (left) and nonpreferred (right) stimuli for the neuron in A and B when tested with 3° images of objects. To build the maps, we quantified the net neuronal response (by subtracting the baseline activity) obtained at 35 different positions on the screen (indicated by the intersections of the dashed grid lines; [0,0]: central position; +6° azimuth: contralateral), spaced 2° apart and covering both the ipsi- and contralateral visual hemifields. Color indicates the strength of the neural response (varying between 0 and the maximum response of the cell). (D). Color tree plot representing the normalized net responses (firing rate minus baseline activity) of the same neuron as in Figure 6A-C to the preferred and nonpreferred stimuli (outlines of the preferred and nonpreferred images) in the standard Reduction test (fragments located at the position they occupied in the original outline shape; 4-fragment stimuli, first row; 8-fragment stimuli, second row; 16-fragment stimuli, third row). The color in each circle indicates the response magnitude (1 = 28 spikes/s).

Discussion

A comprehensive approach to the study of the dorsal stream requires a careful selection of behavioral tasks and visual tests: visual and grasping paradigms can be employed either combined or separately depending on the specific properties of the region.

In this article, we provide examples of the neural activity recorded in both AIP and F5p in response to a subset of visual and motor tasks, but very similar responses can be observed in other frontal areas, such as areas 45B and F5a.

We propose two experimental setups to investigate the neural representation of objects during grasping. With a vertical rotating carousel (Figure 3A) containing up to six objects, we can present different objects to the monkey. The rotating carousel permits the presentation of different target objects (differing in shape, size, volume, etc.), requiring different grasping strategies (power grip versus precision grip).

A second and more sophisticated way to present objects during single-cell recordings is by means of a commercial robot arm and gripper (Figure 3B). In this case, the robot initiates the trial by grasping an object (Figure 3B) and moving it to a specific position in space in total darkness, while the monkey's hand remains on the resting position. Apart from this, the sequence of events is identical in the two setups. However, the use of a robot allows a wide manipulation of experimental parameters (the distance at which the object is presented, its position in the frontoparallel plane, or the orientation of the object). Finally, as shown in the right panel of Figure 3B, the robot can also be programmed to grasp different objects (a plate and a sphere in our case).

This experimental approach allows us to determine the object features driving visuomotor neurons that respond to object observation during grasping. However, this approach also has limitations. With every test, some neurons will be excluded from further testing (e.g., no responses to the images of the objects, no contour selectivity), so that the conclusions of the experiment can only pertain to a subset of all the neurons showing task-related activity during grasping. However, in our previous studies8, the large majority (83%) of neurons showing visual responses to object observation during grasping also responded selectively to the images of the objects, and the large majority of the latter neurons (90%) were also selective for contour versions of these images. Therefore, our testing protocol may be appropriate for a very large fraction of all visually responsive neurons in the parietal and frontal cortex.

Some visuomotor neurons, most likely in more motor-related subsectors in the frontal cortex such as area F5p, may only respond to objects in the context of a grasping task, and never respond to the images of the objects (even with binocular disparity) presented on a display. We can nevertheless investigate the properties of this subpopulation of neurons using the robot. With this experimental setup, we can present the objects at different locations in the frontoparallel plane during passive fixation (analogous to a RF test), at different 3D orientations and at different distances from the animal, and we can combine saccadic eye movements towards the object with object grasping21.

Our intention is not to provide a single or rigid experimental protocol for the study of parieto-frontal neurons, but to underline the necessity of a comprehensive and dynamic approach, with the tasks and tests specifically designed for the neurons under study. Regarding visual selectivity, for instance, our protocol can easily be adapted for the study of other visual properties of neurons responding to objects. For example, we followed a very similar approach when investigating 3D selectivity in F5a12 and AIP13 neurons during grasping. We also combined grasping execution and detailed visual testing with videos of actions when investigating action observation responses in AIP22. In the same way, many other experimental tasks not included here could also be added to our protocol, depending on the scientific question to be addressed. These tasks include the study of both purely physical characteristics of the stimulus (e.g., stimulus size) and cognitive aspects such as stimulus familiarity23 or biological relevance (a preference for biologically relevant shapes such as faces24).

Further studies in these areas will provide a better understanding of the network and will allow us to refine the type of protocols to be used.

Disclosures

The authors have nothing to disclose.

Acknowledgements

We thank Inez Puttemans, Marc De Paep, Sara De Pril, Wouter Depuydt, Astrid Hermans, Piet Kayenbergh, Gerrit Meulemans, Christophe Ulens, and Stijn Verstraeten for technical and administrative assistance.

Materials

Name | Company | Catalog number | Comments
Grasping robot | GIBAS Universal Robots | UR-6-85-5-A | Robot arm equipped with a gripper
Carousel motor | Siboni | RD066/20 MV6, 35×23 F02 | Motor implemented in a custom-made vertical carousel; it allows the rotation of the carousel
Eye tracker | SR Research | EyeLink II | Infrared camera system sampling at 500 Hz
Filter | Wavetek Rockland | 852 | Electronic filters perform a variety of signal-processing functions with the purpose of removing a signal's unwanted frequency components
Preamplifier | BAK Electronics, Inc. | A-1 | The Model A-1 reduces input capacitance and noise pickup and allows impedance testing of metal microelectrodes
Electrodes | FHC | UEWLEESE*N4G | Metal microelectrodes (* = impedance, to be chosen by the researcher)
CRT monitor | Vision Research Graphics | M21L-67S01 | CRT monitor equipped with a fast-decay P46 phosphor operating at 120 Hz
Ferroelectric liquid crystal shutters | Display Tech | FLC Shutter Panel; LV2500P-OEM | The shutters operate at 60 Hz in front of the monkey's eyes and are synchronized to the vertical retrace of the monitor

References

  1. Gallese, V., Fadiga, L., Fogassi, L., Rizzolatti, G. Action recognition in the premotor cortex. Brain. 119 (2), 593-609 (1996).
  2. Fogassi, L., Gallese, V., Buccino, G., Craighero, L., Fadiga, L., Rizzolatti, G. Cortical mechanism for the visual guidance of hand grasping movements in the monkey: a reversible inactivation study. Brain. 124 (3), 571-586 (2001).
  3. Rizzolatti, G., Camarda, R., Fogassi, L., Gentilucci, M., Luppino, G., Matelli, M. Functional organization of inferior area 6 in the macaque monkey. II. Area F5 and the control of distal movements. Exp. Brain Res. 71 (3), 491-507 (1988).
  4. Mishkin, M., Ungerleider, L. G. Contribution of striate inputs to the visuospatial functions of parieto-preoccipital cortex in monkeys. Behav. Brain Res. 6 (1), 57-77 (1982).
  5. Goodale, M. A., Milner, A. D. Separate visual pathways for perception and action. Trends Neurosci. 15 (1), 20-25 (1992).
  6. Baumann, M. A., Fluet, M. C., Scherberger, H. Context-specific grasp movement representation in the macaque anterior intraparietal area. J. Neurosci. 29 (20), 6436-6448 (2009).
  7. Murata, A., Gallese, V., Luppino, G., Kaseda, M., Sakata, H. Selectivity for the shape, size, and orientation of objects for grasping neurons of monkey parietal area AIP. J. Neurophysiol. 83 (5), 2580-2601 (2000).
  8. Romero, M. C., Pani, P., Janssen, P. Coding of shape features in the macaque anterior intraparietal area. J. Neurosci. 34 (11), 4006-4021 (2014).
  9. Sakata, H., Taira, M., Kusunoki, M., Murata, A., Tanaka, Y., Tsutsui, K. Neural coding of 3D features of objects for hand action in the parietal cortex of the monkey. Philos. Trans. R. Soc. Lond. B. Biol. Sci. 353 (1373), 1363-1373 (1998).
  10. Taira, M., Mine, S., Georgopoulos, A. P., Murata, A., Sakata, H. Parietal cortex neurons of the monkey related to the visual guidance of the hand movement. Exp Brain Res. 83 (1), 29-36 (1990).
  11. Janssen, P., Scherberger, H. Visual guidance in control of grasping. Annu. Rev. Neurosci. 38, 69-86 (2015).
  12. Theys, T., Pani, P., van Loon, J., Goffin, J., Janssen, P. Selectivity for three-dimensional contours and surfaces in the anterior intraparietal area. J. Neurophysiol. 107 (3), 995-1008 (2012).
  13. Goffin, J., Janssen, P. Three-dimensional shape coding in grasping circuits: a comparison between the anterior intraparietal area and ventral premotor area F5a. J. Cogn. Neurosci. 25 (3), 352-364 (2013).
  14. Raos, V., Umiltá, M. A., Murata, A., Fogassi, L., Gallese, V. Functional properties of grasping-related neurons in the ventral premotor area F5 of the macaque monkey. J. Neurophysiol. 95 (2), 709-729 (2006).
  15. Umilta, M. A., Brochier, T., Spinks, R. L., Lemon, R. N. Simultaneous recording of macaque premotor and primary motor cortex neuronal populations reveals different functional contributions to visuomotor grasp. J. Neurophysiol. 98 (1), 488-501 (2007).
  16. Denys, K., et al. The processing of visual shape in the cerebral cortex of human and nonhuman primates: a functional magnetic resonance imaging study. J. Neurosci. 24 (10), 2551-2565 (2004).
  17. Theys, T., Pani, P., van Loon, J., Goffin, J., Janssen, P. Selectivity for three-dimensional shape and grasping-related activity in the macaque ventral premotor cortex. J. Neurosci. 32 (35), 12038-12050 (2012).
  18. Nelissen, K., Luppino, G., Vanduffel, W., Rizzolatti, G., Orban, G. A. Observing others: multiple action representation in the frontal lobe. Science. 310 (5746), 332-336 (2005).
  19. Janssen, P., Srivastava, S., Ombelet, S., Orban, G. A. Coding of shape and position in macaque lateral intraparietal area. J. Neurosci. 28 (26), 6679-6690 (2008).
  20. Romero, M. C., Janssen, P. Receptive field properties of neurons in the macaque anterior intraparietal area. J. Neurophysiol. 115 (3), 1542-1555 (2016).
  21. Decramer, T., Premereur, E., Theys, T., Janssen, P. Multi-electrode recordings in the macaque frontal cortex reveal common processing of eye-, arm- and hand movements. Program No. 495.15/GG14. Neuroscience Meeting Planner (2017).
  22. Pani, P., Theys, T., Romero, M. C., Janssen, P. Grasping execution and grasping observation activity of single neurons in macaque anterior intraparietal area. J. Cogn. Neurosci. 26 (10), 2342-2355 (2014).
  23. Turriziani, P., Smirni, D., Oliveri, M., Semenza, C., Cipolotti, L. The role of the prefrontal cortex in familiarity and recollection processes during verbal and non-verbal recognition memory. Neuroimage. 52 (1), 469-480 (2010).
  24. Tsao, D. Y., Schweers, N., Moeller, S., Freiwald, W. A. Patches of face-selective cortex in the macaque frontal lobe. Nat. Neurosci. 11 (8), 877-879 (2008).

Cite This Article
Caprara, I., Janssen, P., Romero, M. C. Investigating Object Representations in the Macaque Dorsal Visual Stream Using Single-unit Recordings. J. Vis. Exp. (138), e57745, doi:10.3791/57745 (2018).