Presented here is a protocol to build an automatic apparatus that guides a monkey to perform a flexible reach-to-grasp task. The apparatus combines a 3D translational device and a turning table to present multiple objects at arbitrary positions in 3D space.
Reaching and grasping are highly coupled movements, and their underlying neural dynamics have been widely studied in the last decade. To distinguish reaching and grasping encodings, it is essential to present different object identities independently of their positions. Presented here is the design of an automatic apparatus assembled from a turning table and a three-dimensional (3D) translational device to achieve this goal. The turning table switches between objects corresponding to different grip types, while the 3D translational device transports the turning table in 3D space. Both are driven independently by motors, so that target position and object can be combined arbitrarily. Meanwhile, wrist trajectory and grip types are recorded via a motion capture system and touch sensors, respectively. Furthermore, representative results demonstrating a monkey successfully trained with this system are described. It is expected that this apparatus will help researchers study the kinematics, neural principles, and brain-machine interfaces related to upper limb function.
Various apparatuses have been developed to study the neural principles underlying reaching and grasping movements in non-human primates. In reaching tasks, touch screens1,2, screen cursors controlled by a joystick3,4,5,6,7, and virtual reality technology8,9,10 have all been employed to present 2D and 3D targets, respectively. To introduce different grip types, differently shaped objects fixed in one position or rotating around an axis have been widely used in grasping tasks11,12,13. An alternative is to use visual cues that instruct subjects to grasp the same object with different grip types14,15,16,17. More recently, reaching and grasping movements have been studied together (i.e., subjects reach to multiple positions and grasp with different grip types within an experimental session)18,19,20,21,22,23,24,25,26,27,28,29. Early experiments presented objects manually, which inevitably led to low temporal and spatial precision20,21. To improve experimental precision and save manpower, automatic presentation devices controlled by programs have been widely adopted. To vary target position and grip type, experimenters have exposed multiple objects simultaneously, but the relative (or absolute) positions of the targets and the grip types are bound together, which induces rigid firing patterns through long-term training22,27,28. Objects are usually presented in a 2D plane, which limits the diversity of reaching movements and neural activity19,25,26. Recently, virtual reality24 and robotic arms23,29 have been introduced to present objects in 3D space.
Presented here are detailed protocols for building and using an automated apparatus30 that can achieve any combination of multiple target positions and grip types in 3D space. We designed a turning table to switch objects and a 3D translational device to transport the turning table in 3D space. Both the turning table and the translational device are driven by independent motors. Meanwhile, the 3D trajectory of the subject's wrist and neural signals are recorded simultaneously throughout the experiment. The apparatus provides a valuable platform for the study of upper limb function in the rhesus monkey.
All behavioral and surgical procedures conformed to the Guide for the Care and Use of Laboratory Animals (China Ministry of Health) and were approved by the Animal Care Committee at Zhejiang University, China.
1. Assembling the 3D translational device
2. Assembling the turning table
3. Setup of the control system
4. Preparation of the experimental session
The complete workspace of the apparatus measures 600 mm, 300 mm, and 500 mm along the x-, y-, and z-axes, respectively. The maximum load of the 3D translational device is 25 kg, while the turning table (including the stepping motor) weighs 15 kg and can be transported at speeds of up to 500 mm/s. The kinematic precision of the 3D translational device is better than 0.1 mm, and the noise of the apparatus is below 60 dB.
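These specifications can be sketched as a simple pre-flight check. The following is an illustrative sketch (not the authors' control code; the function and variable names are ours) that validates a commanded target against the workspace limits and bounds the transport time, using only the dimensions and speed quoted above:

```python
# Workspace limits and maximum transport speed quoted in the text.
WORKSPACE_MM = {"x": 600.0, "y": 300.0, "z": 500.0}  # effective travel per rail
MAX_SPEED_MM_S = 500.0                               # turning-table transport speed

def in_workspace(target):
    """Return True if the (x, y, z) target in mm lies within the rail travel limits."""
    return all(0.0 <= target[ax] <= WORKSPACE_MM[ax] for ax in ("x", "y", "z"))

def min_transport_time(start, target):
    """Lower bound on transport time: the three rails move simultaneously,
    so the axis with the largest displacement dominates."""
    longest = max(abs(target[ax] - start[ax]) for ax in ("x", "y", "z"))
    return longest / MAX_SPEED_MM_S

home = {"x": 0.0, "y": 0.0, "z": 0.0}
corner = {"x": 600.0, "y": 300.0, "z": 500.0}
print(in_workspace(corner))              # True
print(min_transport_time(home, corner))  # 1.2 (600 mm at 500 mm/s)
```

Note that the actual 2 s "motor run" phase (described below in the representative results) leaves ample margin over this lower bound, so every position in the workspace is reachable within the phase.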
To demonstrate the utility of the system, a monkey (previously trained in a reaching task) was trained to perform a delayed reach-to-grasp task with the system30. Using the procedure presented above, the paradigm software automatically runs the behavioral experiment trial by trial (~500 trials per session). Specifically, the monkey must start a trial (Figure 4) by pushing the button and holding it until the "go" cue. As a first step ("motor run" phase), the 3D translational device transports the turning table to a pseudorandomly chosen position, and at the same time, the turning table rotates to present a pseudorandomly chosen object. This motor run phase lasts 2 s, and all four motors (three in the 3D translational device and one in the turning table) start and stop at the same time. The motor run phase is followed by a "planning" phase (1 s), during which the monkey plans the upcoming movement. Once the green LED ("go" cue) turns on, the monkey must release the button, reach into the turning table, and grasp the object with the corresponding grip type as quickly as possible (maximum reaction time = 0.5 s; maximum movement time = 1 s). The monkey receives a water reward after a minimum hold time of 0.5 s. A trial is aborted, and the blue LED turns on, if the monkey releases the button before the "go" cue or does not release it within the maximum reaction time after the cue.
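The trial logic above can be summarized as a small outcome classifier. The following is a minimal sketch using the timings from the text; the function and constant names are ours and do not reflect the paradigm software's (LabView) implementation:

```python
# Phase durations and limits quoted in the text, in seconds.
GO_DELAY = 2.0 + 1.0   # "motor run" (2 s) + "planning" (1 s) before the "go" cue
MAX_REACTION = 0.5     # button must be released within 0.5 s of the cue
MAX_MOVEMENT = 1.0     # object must be touched within 1 s of button release
MIN_HOLD = 0.5         # grip must be held at least 0.5 s before reward

def classify_trial(button_off, touch_on, hold_duration):
    """All times are seconds from trial start (button press). Returns the outcome."""
    go_cue = GO_DELAY
    if button_off < go_cue:
        return "abort: released before go cue"
    if button_off - go_cue > MAX_REACTION:
        return "abort: reaction too slow"
    if touch_on - button_off > MAX_MOVEMENT:
        return "abort: movement too slow"
    if hold_duration < MIN_HOLD:
        return "abort: hold too short"
    return "success: reward delivered"

print(classify_trial(button_off=3.3, touch_on=3.9, hold_duration=0.6))
# success: reward delivered
print(classify_trial(button_off=2.5, touch_on=3.0, hold_duration=0.6))
# abort: released before go cue
```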
The synchronization software receives event labels (e.g., Button On, Go Cue, Button Off, etc., Figure 4) from the paradigm software and a "start-record" label from the motion capture system, then sends them to the neural signal acquisition system in real time during the experiment. All labels are saved with the neural signals, whereas the wrist trajectory is stored in a separate file. To align the neural signals and trajectory in time, the timestamp of the "start-record" label was assigned to the first sample of the trajectory, and incremental timestamps were then assigned to the remaining samples according to the frame rate of the motion capture system. Figure 4 shows the time-aligned event labels, wrist trajectory, and example neuronal activity.
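The alignment step amounts to anchoring the first camera frame at the "start-record" timestamp and spacing subsequent frames by the camera period. A minimal sketch (illustrative names; the actual frame rate depends on the motion capture configuration):

```python
def trajectory_timestamps(start_record_ts, n_samples, frame_rate_hz):
    """Assign a neural-clock timestamp (in seconds) to each trajectory sample.
    The first sample inherits the "start-record" label's timestamp; later
    samples are offset by multiples of the camera frame period."""
    return [start_record_ts + i / frame_rate_hz for i in range(n_samples)]

# Example: "start-record" arrived at t = 12.000 s on the neural clock,
# with a hypothetical 100 Hz capture rate.
ts = trajectory_timestamps(start_record_ts=12.000, n_samples=4, frame_rate_hz=100.0)
print([round(t, 3) for t in ts])  # [12.0, 12.01, 12.02, 12.03]
```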
Wrist trajectories during the reaching phase in all successful trials were extracted and divided into eight groups based on target position (Figure 5). For each group of trajectories, average values and 95% confidence intervals at each timepoint were calculated. The trajectory plot in Figure 5 shows that the endpoints of the eight trajectory groups form a cuboid, which has the same size as the predefined cuboid workspace (step 4.3.4). The peristimulus time histogram (PSTH) for each single neuron was plotted with respect to reaching position and object, respectively. The spike trains in successful trials were binned with a sliding window of 50 ms and smoothed with a Gaussian kernel (σ = 100 ms). The average values and 95% confidence intervals for each group were calculated by the bootstrap method (n = 2,000). Figure 6 shows the PSTHs of two example neurons tuned to both reaching position and object. The neuron in Figure 6A shows significant selectivity during the reaching and holding phases, while the neuron in Figure 6B starts to tune to positions and objects from the middle of the "motor run" phase.
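The PSTH analysis above can be reconstructed in a few lines of NumPy. This is an illustrative sketch, not the authors' analysis code: it uses the parameters quoted in the text (50 ms bins, Gaussian smoothing with σ = 100 ms, bootstrap 95% confidence intervals with 2,000 resamples), but the function names, kernel truncation at ±3σ, and toy input data are our assumptions:

```python
import numpy as np

def psth(spike_trains, t_start, t_stop, bin_s=0.05, sigma_s=0.1, n_boot=2000, seed=0):
    """PSTH with mean and bootstrap 95% CI. spike_trains: list of per-trial
    arrays of spike times in seconds."""
    n_bins = int(round((t_stop - t_start) / bin_s))
    edges = t_start + bin_s * np.arange(n_bins + 1)
    # Trials x bins matrix of firing rates (spikes per second)
    rates = np.array([np.histogram(st, bins=edges)[0] / bin_s for st in spike_trains])
    # Gaussian kernel truncated at +/- 3 sigma, normalized to unit sum
    half = int(np.ceil(3 * sigma_s / bin_s))
    k = np.exp(-0.5 * (np.arange(-half, half + 1) * bin_s / sigma_s) ** 2)
    k /= k.sum()
    smoothed = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, rates)
    mean = smoothed.mean(axis=0)
    # Bootstrap over trials (resample trials with replacement, n_boot times)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(spike_trains), size=(n_boot, len(spike_trains)))
    boot_means = smoothed[idx].mean(axis=1)
    lo, hi = np.percentile(boot_means, [2.5, 97.5], axis=0)
    return mean, lo, hi

# Toy usage: 20 trials of an ~40 Hz Poisson spike train over 1 s
rng = np.random.default_rng(1)
trains = [np.sort(rng.uniform(0.0, 1.0, rng.poisson(40))) for _ in range(20)]
mean, lo, hi = psth(trains, 0.0, 1.0)
print(mean.shape)  # (20,) — one value per 50 ms bin
```

In the actual analysis, trials would first be grouped by target position or object (as in Figures 5 and 6) and one PSTH computed per group.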
Figure 1: Step-by-step instructions for the 3D translational device assembly.
I-I X-rail, I-III Y-rail, I-II Z-rail, II connecting shafts, III stepping motors, IV planetary gear reducers, V connecting rings, VI diaphragm couplings, VII pedestals, VIII T-shaped connecting boards, IX right triangle frames. (A) The materials for the translational device assembly. (B) Building the frame and installing the Y-rails (steps 1.1–1.4). (C) Fixing two Z-rails onto the Y-rails (steps 1.5–1.7). (D) Fixing the X-rail onto the Z-rails (steps 1.8 and 1.9). (E) Installing the stepping motor and gear reducer (steps 1.10 and 1.11). (F) Completely assembled 3D translational device (steps 1.12 and 1.13). Please click here to view a larger version of this figure.
Figure 2: Step-by-step instructions for the turning table assembly.
(A) Materials for turning table assembly. (B) Assembling objects and installing touch sensors (step 2.2). (C) Securing objects onto the rotator (step 2.3). (D) Connecting wires of sensors to the electric slip ring (step 2.4). (E) Installing the base onto the 3D translational device and placing the locating bar and bearing (step 2.5). (F) Putting the rotator into the case (step 2.6). (G) Installing the shaft and electric slip ring (steps 2.7 and 2.8). (H) Installing the stepping motor (step 2.9). Please click here to view a larger version of this figure.
Figure 3: The graphical user interface of the paradigm and synchronization software.
(A) A custom-made LabView program to control the behavioral task. (B) A custom-made C++ program to communicate with the paradigm software, neural signal acquisition system, and motion capture system. Please click here to view a larger version of this figure.
Figure 4: Time aligned data in a successful trial.
All the event timings, wrist trajectories (X, Y, and Z), and neuronal activity (example units 1–3) were recorded simultaneously. The short black lines in the top row are the event labels. "Button On" indicates the time when the monkey pressed the button down; "Position Index" is a number from 1–8 indicating which reaching position is presented; "Object Index" is a number from 1–6 indicating which object is presented; "Motor On" indicates the start time of the four motors; "Motor Off" indicates their stop time; "Go Cue" indicates the moment when the green LED turns on; "Button Off" indicates the moment when the monkey releases the button; "Touch On" indicates the moment when the touch sensors in the object detect the hand; "Reward On" indicates the moment when the pump begins to deliver the water reward and represents the end of a trial. The "Button On", "Position Index", and "Object Index" labels are saved in quick succession at the beginning of a trial. Rows 2–4 (labeled X, Y, and Z) plot the trajectory of the wrist in 3D recorded by the motion capture system. Rows 5–7 (labeled Unit 1, 2, and 3) show the spike trains of three example neurons recorded by the neural signal acquisition system. The bottom row shows the timeline of a complete trial, which is divided into six phases based on the event labels. Please click here to view a larger version of this figure.
Figure 5: Trajectories of wrist recorded by motion capture system.
All successful trials are divided into eight groups according to target position (labeled with letters A to H). Each solid line is the average trajectory of one group, and the shaded area represents the variance of the trajectories. This figure has been modified from a previous study30. Please click here to view a larger version of this figure.
Figure 6: PSTHs of two example neurons (A and B).
The vertical dashed lines, from left to right, are Motor On, Motor Off, Go Cue On, Button Off, and Touch On. Each solid line (in a different color) in a PSTH represents the average firing rate across trials for one target position or object, and the shaded area represents the 95% confidence interval (bootstrap; 2,000 resamples). For both A and B, the upper and lower panels show the PSTHs with respect to different positions and objects, respectively. Please click here to view a larger version of this figure.
Supplementary Files. Please click here to download the files.
The behavioral apparatus described here enables a trial-wise combination of different reaching and grasping movements (i.e., the monkey can grasp differently shaped objects at arbitrary 3D locations in each trial). This is accomplished by combining a custom turning table, which switches between different objects, with a linear translational device, which transports the turning table to multiple positions in 3D space. In addition, the neural signals from the monkey, the wrist trajectory, and the hand shapes can be recorded and synchronized for neurophysiological research.
The apparatus, which includes a separately driven 3D translational device and turning table, presents multiple target positions and objects independently. That is, all predefined positions and objects can be combined arbitrarily, which is important in studying multivariable encoding14,25,28. In contrast, if the object to be grasped is linked to its position (for instance, when the object is fixed on a panel), it is difficult to determine whether a single neuron is tuned to object or to position18,27,32. Moreover, the apparatus presents objects in 3D space instead of on a 2D plane19,27, which engages more neurons with spatial modulation.
Bolted connections are used widely between subcomponents of the apparatus, which provides high expansibility and flexibility. By designing the shape of the objects and the placement of the touch sensors, a large number of grip types can be precisely induced and identified. The 3D translational device can move any subcomponent lighter than 25 kg in 3D space and is suitable for most tasks involving spatial displacement. Moreover, although the apparatus was designed to train rhesus monkeys (Macaca mulatta), the adjustable range of the 3D translational device also makes it suitable for other primates with similar or larger body sizes, or even humans.
One major concern with a behavioral task combining reaching and grasping movements is whether hand posture differs across reaching positions even when the monkey grasps the object with the same grip type. Although reaching and grasping are generally regarded as two different movements, their effectors (arm and hand) are connected. Thus, it is inevitable that the reaching movement interacts with grasping. According to observations in this experiment, the monkey's wrist angle changed slightly when grasping the same object at different positions, but no significant differences in hand posture were observed.
One potential limitation of the apparatus is that the experimental room is not completely dark because of infrared light from the motion capture system. The monkey may therefore see the target object throughout the whole trial, which leads to undesired tuning before the planning period. To control visual access to the object, a switchable glass controlled by the paradigm software can be placed between the head and the apparatus. The switchable glass is opaque during the baseline and planning phases and turns transparent after the "go" cue. In this way, visual information is precisely controlled. Similarly, white noise can be employed to mask the sound of the running motors, which prevents the monkey from identifying the object's location by sound. Another limitation of the apparatus is that the motion of the fingers cannot be tracked. This is because the monkey must reach its hand into the turning table to grasp the object, which blocks the cameras from capturing the markers on the hand.
The authors have nothing to disclose.
We thank Mr. Shijiang Shen for his advice on apparatus design and Ms. Guihua Wang for her assistance with animal care and training. This work was supported by National Key Research and Development Program of China (2017YFC1308501), the National Natural Science Foundation of China (31627802), the Public Projects of Zhejiang Province (2016C33059), and the Fundamental Research Funds for the Central Universities.
Active X-rail | CCM Automation technology Inc., China | W50-25 | Effective travel, 600 mm; Load, 25 kg |
Active Y-rail | CCM Automation technology Inc., China | W60-35 | Effective travel, 300 mm, Load 35 kg |
Active Z-rail | CCM Automation technology Inc., China | W50-25 | Effective travel, 500 mm; Load 25 kg |
Bearing | Taobao.com | 6004-2RSH | Acrylic |
Case | Custom mechanical processing | TT-C | Acrylic |
Connecting ring | CCM Automation technology Inc., China | 57/60-W50 | |
Connecting shaft | CCM Automation technology Inc., China | D12-700 | Diam., 12 mm;Length, 700 mm |
Diaphragm coupling | CCM Automation technology Inc., China | CCM 12-12 | Inner diam., 12-12mm |
Diaphragm coupling | CCM Automation technology Inc., China | CCM 12-14 | Inner diam., 12-14 mm |
Electric slip ring | Semring Inc., China | SNH020a-12 | Acrylic |
Locating bar | Custom mechanical processing | TT-L | Acrylic |
Motion capture system | Motion Analysis Corp. US | Eagle-2.36 | |
Neural signal acquisition system | Blackrock Microsystems Corp. US | Cerebus | |
NI DAQ device | National Instruments, US | USB-6341 | |
Object | Custom mechanical processing | TT-O | Acrylic |
Passive Y-rail | CCM Automation technology Inc., China | W60-35 | Effective travel, 300 mm; Load 35 kg |
Passive Z-rail | CCM Automation technology Inc., China | W50-25 | Effective travel, 500 mm; Load 25 kg |
Pedestal | CCM Automation technology Inc., China | 80-W60 | |
Peristaltic pump | Longer Inc., China | BT100-1L | |
Planetary gearhead | CCM Automation technology Inc., China | PLF60-5 | Flange, 60×60 mm; Reduction ratio, 1:5 |
Right triangle frame | CCM Automation technology Inc., China | 290-300 | |
Rotator | Custom mechanical processing | TT-R | Acrylic |
Servo motor | Yifeng Inc., China | 60ST-M01930 | Flange, 60×60 mm; Torque, 1.91 N·m; for Y- and Z-rail |
Servo motor | Yifeng Inc., China | 60ST-M01330 | Flange, 60×60 mm; Torque, 1.27 N·m; for X-rail |
Shaft | Custom mechanical processing | TT-S | Acrylic |
Stepping motor | Taobao.com | 86HBS120 | Flange, 86×86 mm; Torque, 1.27 N·m; Driving turning table |
Touch sensor | Taobao.com | CM-12X-5V | |
Tricolor LED | Taobao.com | CK017, RGB | |
T-shaped connecting board | CCM Automation technology Inc., China | 110-120 |