Rodent skilled reaching is commonly used to study dexterous skills, but requires significant time and effort to implement the task and analyze the behavior. We describe an automated version of skilled reaching with motion tracking and three-dimensional reconstruction of reach trajectories.
Rodent skilled reaching is commonly used to study dexterous skills, but requires significant time and effort to implement the task and analyze the behavior. Several automated versions of skilled reaching have been developed recently. Here, we describe a version that automatically presents pellets to rats while recording high-definition video from multiple angles at high frame rates (300 fps). The paw and individual digits are tracked with DeepLabCut, a machine learning algorithm for markerless pose estimation. This system can also be synchronized with physiological recordings, or be used to trigger physiologic interventions (e.g., electrical or optical stimulation).
Humans depend heavily on dexterous skill, defined as movements that require precisely coordinated multi-joint and digit movements. These skills are affected by a range of common central nervous system pathologies including structural lesions (e.g., stroke, tumor, demyelinating lesions), neurodegenerative disease (e.g., Parkinson’s disease), and functional abnormalities of motor circuits (e.g., dystonia). Understanding how dexterous skills are learned and implemented by central motor circuits therefore has the potential to improve quality of life for a large population. Furthermore, such understanding is likely to improve motor performance in healthy people by optimizing training and rehabilitation strategies.
Dissecting the neural circuits underlying dexterous skill in humans is limited by technological and ethical considerations, necessitating the use of animal models. Nonhuman primates are commonly used to study dexterous limb movements given the similarity of their motor systems and behavioral repertoire to humans1. However, nonhuman primates are expensive and have long generation times, limiting the number of study subjects and possible genetic interventions. Furthermore, while the neuroscientific toolbox applicable to nonhuman primates is larger than for humans, many recent technological advances are either unavailable or significantly limited in primates.
Rodent skilled reaching is a complementary approach to studying dexterous motor control. Rats and mice can be trained to reach for, grasp, and retrieve a sugar pellet in a stereotyped sequence of movements homologous to human reaching patterns2. Because rodents have relatively short generation times and low housing costs, and acquire skilled reaching over days to weeks, large numbers of subjects can be studied during both learning and skill consolidation phases. The use of rodents, especially mice, also facilitates the use of powerful modern neuroscientific tools (e.g., optogenetics, calcium imaging, genetic models of disease) to study dexterous skill.
Rodent skilled reaching has been used for decades to study normal motor control and how it is affected by specific pathologies like stroke and Parkinson’s disease3. However, most versions of this task are labor- and time-intensive, offsetting the benefits of studying rodents. Typical implementations involve placing rodents in a reaching chamber with a shelf in front of a narrow slot through which the rodent must reach. A researcher manually places sugar pellets on the shelf, waits for the animal to reach, and then places another one. Reaches are scored as successes or failures either in real time or by video review4. However, simply scoring reaches as successes or failures ignores rich kinematic data that can provide insight into how (as opposed to simply whether) reaching is impaired. This problem was addressed by implementing detailed review of reaching videos to identify and semi-quantitatively score reach submovements5. While this added some data regarding reach kinematics, it also significantly increased experimenter time and effort. Further, high levels of experimenter involvement can lead to inconsistencies in methodology and data analysis, even within the same lab.
More recently, several automated versions of skilled reaching have been developed. Some attach to the home cage6,7, eliminating the need to transfer animals. This both reduces stress on the animals and eliminates the need to acclimate them to a specialized reaching chamber. Other versions allow paw tracking so that kinematic changes under specific interventions can be studied8,9,10, or have mechanisms to automatically determine if pellets were knocked off the shelf11. Automated skilled reaching tasks are especially useful for high-intensity training, as may be required for rehabilitation after an injury12. Automated systems allow animals to perform large numbers of reaches over long periods of time without requiring intensive researcher involvement. Furthermore, systems that allow paw tracking and automated outcome scoring reduce researcher time spent performing data analysis.
We developed an automated rat skilled reaching system with several specialized features. First, by using a movable pedestal to bring the pellet into “reaching position” from below, we obtain a nearly unobstructed view of the forelimb. Second, a system of mirrors provides multiple simultaneous views of the reach with a single high-resolution, high-speed (300 fps) camera, allowing three-dimensional (3-D) reconstruction of reach trajectories. With the recent development of robust machine learning algorithms for markerless motion tracking13, we now track not only the paw but individual knuckles to extract detailed reach and grasp kinematics. Third, a frame-grabber that performs simple video processing allows real-time identification of distinct reaching phases. This information is used to trigger video acquisition (continuous video acquisition is not practical due to file size), and can also be used to trigger interventions (e.g., optogenetics) at precise moments. Finally, individual video frames are triggered by transistor-transistor logic (TTL) pulses, allowing the video to be precisely synchronized with neural recordings (e.g., electrophysiology or photometry). Here, we describe how to build this system, train rats to perform the task, synchronize the apparatus with external systems, and reconstruct 3-D reach trajectories.
All methods involving animal use described here have been approved by the Institutional Animal Care and Use Committee (IACUC) of the University of Michigan.
1. Setting up the reaching chamber
NOTE: See Ellens et al.14 for details and diagrams of the apparatus. Part numbers refer to Figure 1.
2. Setting up the computer and hardware
3. Behavioral training
4. Training rats using the automated system
5. Analyzing videos with DeepLabCut
NOTE: Different networks are trained for each paw preference (right paw and left paw) and for each view (direct view and left mirror view for right-pawed rats, direct view and right mirror view for left-pawed rats). The top mirror view is not used for 3-D reconstruction; it serves only to detect when the nose enters the slot, which may be useful to trigger interventions (e.g., optogenetics). Each network is then used to analyze a set of videos cropped for the corresponding paw and view.
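In practice, this step can be scripted with DeepLabCut’s Python interface. The sketch below batch-analyzes cropped videos with one trained network per paw-preference/view combination; the project paths and directory layout are hypothetical placeholders, not part of the published protocol.

```python
# Minimal sketch: batch DeepLabCut analysis with one trained network per
# paw-preference/view combination. All paths are hypothetical placeholders.
import glob
import deeplabcut

networks = {
    ("right_paw", "direct"): "/data/dlc/rightpaw_direct/config.yaml",
    ("right_paw", "left_mirror"): "/data/dlc/rightpaw_leftmirror/config.yaml",
    ("left_paw", "direct"): "/data/dlc/leftpaw_direct/config.yaml",
    ("left_paw", "right_mirror"): "/data/dlc/leftpaw_rightmirror/config.yaml",
}

for (paw, view), config_path in networks.items():
    # Each folder holds videos already cropped to the corresponding view.
    videos = glob.glob(f"/data/videos/{paw}/{view}/*.avi")
    if videos:
        deeplabcut.analyze_videos(config_path, videos, videotype=".avi")
```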
6. Box calibration
NOTE: These instructions are used to determine the transformation matrices that convert points identified in the direct and mirror views into 3-D coordinates. For the most up-to-date version and more details on how to use the boxCalibration package, see the Leventhal Lab GitHub: https://github.com/LeventhalLab/boxCalibration, which includes step-by-step instructions for its use.
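The boxCalibration package itself is written in MATLAB. As a conceptual illustration of what the calibration computes, the Python/OpenCV sketch below estimates the geometric relationship between the direct view and one mirror view from matched checkerboard corners; the file name, crop coordinates, and checkerboard size are assumptions for illustration only.

```python
# Conceptual sketch of box calibration: matched checkerboard corners seen
# directly and in a mirror constrain the geometry relating the two views.
# This is NOT the boxCalibration code; all values are illustrative.
import cv2

img = cv2.imread("calibration_cube.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
if img is None:
    raise FileNotFoundError("calibration image not found")

# Hypothetical crops containing the same checkerboard pattern in the
# direct view and in its reflection in one side mirror.
direct_roi = img[400:800, 700:1100]
mirror_roi = img[400:800, 0:400]

pattern = (4, 4)  # interior corner count of the checkerboard
ok1, pts_direct = cv2.findChessboardCorners(direct_roi, pattern)
ok2, pts_mirror = cv2.findChessboardCorners(mirror_roi, pattern)

if ok1 and ok2:
    # The mirror flips corner ordering left-right; a real implementation
    # must reorder points so correspondences match before estimation.
    F, mask = cv2.findFundamentalMat(
        pts_direct.reshape(-1, 2), pts_mirror.reshape(-1, 2), cv2.FM_8POINT
    )
```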
7. Reconstructing 3-D trajectories
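The published analysis code for this step is written in MATLAB (see Table of Materials). For orientation, the core operation is standard two-view triangulation: matched points from the direct and mirror views, combined with the projection matrices from box calibration, yield 3-D coordinates. A minimal Python/OpenCV sketch, with made-up matrices and pixel coordinates in place of real calibration output:

```python
# Illustrative two-view triangulation. P_direct and P_mirror stand in for
# the projection matrices obtained from box calibration (section 6); the
# rotation, translation, and pixel coordinates below are made up.
import cv2
import numpy as np

P_direct = np.hstack([np.eye(3), np.zeros((3, 1))])      # reference view
R, _ = cv2.Rodrigues(np.array([0.0, np.pi / 6, 0.0]))    # example rotation
t = np.array([[-0.2], [0.0], [0.05]])                    # example translation
P_mirror = np.hstack([R, t])                             # virtual mirror camera

# Matched 2-D points (pixels) for one body part across two frames, shape 2 x N.
pts_direct = np.array([[512.0, 515.0], [300.0, 298.0]])
pts_mirror = np.array([[120.0, 118.0], [305.0, 303.0]])

# Homogeneous triangulation, then normalization to (x, y, z).
X_h = cv2.triangulatePoints(P_direct, P_mirror, pts_direct, pts_mirror)
X = (X_h[:3] / X_h[3]).T    # N x 3 array of 3-D points
print(X)
```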
Rats acquire the skilled reaching task quickly once acclimated to the apparatus, with performance plateauing in terms of both numbers of reaches and accuracy over 1–2 weeks (Figure 5). Figure 6 shows sample video frames indicating structures identified by DeepLabCut, and Figure 7 shows superimposed individual reach trajectories from a single session. Finally, in Figure 8, we illustrate what happens if the paw detection trigger (steps 4.3.4–4.3.6) is not accurately set. There is significant variability in the frame at which the paw breaches the reaching slot. This is not a major problem in terms of analyzing reach kinematics. However, it could lead to variability in when interventions (e.g., optogenetics) are triggered during reaching movements.
Figure 1: The skilled reaching chamber.
Clockwise from top left are a side view, a view from the front and above, the frame in which the actuator is mounted (see step 1.8), and a view from the side and above. The skilled reaching chamber (1) has a door (2) cut into one side to allow rats to be placed into and taken out of the chamber. A slit is cut into the ceiling panel (12) to allow the animal to be tethered and holes are cut into the floor panel (13) to allow litter to fall through. Two infrared sensors (3) are aligned on either side of the back of the chamber. A mirror (4) is mounted above the reaching slot (14) at the front of the reaching chamber and two other mirrors (6) are mounted on either side of the reaching chamber. The skilled reaching chamber sits atop a support box (5). The high-definition camera (7) is mounted onto the support box in front of the reaching slot. Two pieces of black paper (18) are mounted on either side of the camera (7) to enhance contrast of the paw in the side mirrors (6). Below the support box is a frame (8) that supports the linear actuator (16) and pellet reservoir (9). A guide tube encasing the pellet delivery rod (10) is fit into the pellet reservoir and controlled by the linear actuator. Holes are cut into the actuator frame (17) and support box (15) above the pellet reservoir to allow the pellet delivery rod to move up and down freely. The box is illuminated with light panels (11) mounted to the cabinet walls and ceiling.
Figure 2: Single trial structure.
(A) A trial begins with the pellet delivery rod (controlled by a linear actuator) positioned at the “ready” position (position 2 – midway between floor and bottom of reaching slot). (B) The rat moves to the back of the chamber to break the infrared (IR) beam, which causes the pellet delivery rod to rise to position 3 (aligned with bottom of reaching slot). (C) The rat reaches through the reaching slot to grasp the pellet. Reaches are detected in real-time using an FPGA framegrabber that detects pixel intensity changes within a region of interest (ROI) in the side mirror view directly in front of the slot. When enough pixels match the user defined “paw intensity”, video acquisition is triggered. (D) Two seconds later the pellet is lowered to position 1, picking up a new pellet from the pellet reservoir before resetting to position 2.
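The trigger logic itself runs in LabVIEW on the FPGA frame grabber; the Python sketch below conveys the idea in a few lines. The intensity range, pixel count, and ROI coordinates are illustrative, not the values used in the protocol.

```python
# Sketch of the ROI trigger: count pixels in a region of interest whose
# intensity falls in the user-defined "paw intensity" range, and fire when
# enough pixels match. All thresholds and coordinates are illustrative.
import numpy as np

PAW_MIN, PAW_MAX = 180, 255   # grayscale range matching the (white) paw
MIN_PIXELS = 150              # matching pixels required to declare a reach

def reach_triggered(frame: np.ndarray) -> bool:
    """Return True when enough ROI pixels fall in the paw-intensity range."""
    roi = frame[200:260, 40:120]          # region just outside the slot
    n_match = np.count_nonzero((roi >= PAW_MIN) & (roi <= PAW_MAX))
    return n_match >= MIN_PIXELS
```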
Figure 3: Sample calibration image.
A helping hand is placed inside the skilled reaching chamber. An alligator clip pokes through the reaching slot to hold the calibration cube in place outside of the reaching chamber. The three checkerboard patterns are entirely visible in the direct view and the corresponding mirror views (green: left; red: top; and blue: right).
Figure 4: Learning algorithm marker positions.
Left column: direct view; right column: mirror view. Markers 1–4: MCP joints; 5–8: PIP joints; 9–12: digit tips; 13: dorsum of reaching paw; 14: nose; 15: dorsum of non-reaching paw. Marker 16 (pellet) is not visible.
Figure 5: Rats rapidly acquire the automated skilled reaching task.
Average first reach success rate (green, left axis) and average total trials (blue, right axis) over the first 20 training sessions in the automated skilled reaching task (n = 19). Each training session lasted 30 min. Error bars represent standard error of the mean.
Figure 6: Sample video frames marked by the learning program.
Left column: mirror view; right column: direct view. Cyan, red, yellow, and green dots mark digits 1–4, respectively. The white dot marks the nose, and the black dot marks the pellet. Filled circles were identified by DeepLabCut. Open circles mark object positions estimated by where that object appeared in the opposite view. X’s are points re-projected onto the video frames from the estimates of their 3-D locations. This video was triggered at frame 300, as the paw passed through the slot. Top images are from the first frame in which the reaching paw was detected. Bottom images are from the frame at which the second digit was maximally extended. These frames were identified by the image processing software.
Figure 7: Sample 3-D trajectories from a single test session.
Both panels show the same data, rotated for ease of presentation. Black lines indicate mean trajectories. Cyan, red, yellow, and green are individual trajectories of the tips of digits 1–4, respectively. Blue lines indicate the trajectory of the paw dorsum. The large black dot indicates the sugar pellet located at (0,0,0). Only the initial paw advancement is shown (including retractions and multiple reaches would make the figure almost uninterpretable); however, all kinematic data are available for analysis.
Figure 8: Histograms of frame numbers in which specific reaching phases were identified for 2 different sessions.
In one session (dark solid lines), the ROI trigger values were carefully set, and the paw was identified breaching the slot within the same few frames in each trial. In the other session (light dashed lines), the nose was often misidentified as the reaching paw, triggering video acquisition prematurely. Note that this would have little effect on off-line kinematic analyses unless the full reach was not captured. However, potential interventions triggered by the reaching paw would be poorly timed.
Rodent skilled reaching has become a standard tool to study motor system physiology and pathophysiology. We have described how to implement an automated rat skilled reaching task that allows: training and testing with minimal supervision, 3-D paw and digit trajectory reconstruction (during reaching, grasping, and paw retraction), real-time identification of the paw during reaching, and synchronization with external electronics. It is well-suited to correlate forelimb kinematics with physiology or to perform precisely-timed interventions during reaching movements.
Since we initially reported this design14, our training efficiency has improved so that almost 100% of rats acquire the task. We have identified several important factors that lead to consistently successful training. As with many tasks motivated by hunger, rats should be carefully monitored during caloric restriction to maintain 80–90% of their anticipated body weight. Handling the rats daily, even prior to training, is critically important to acclimate them to humans. Rats should be trained to reach before learning to return to the back of the chamber to request pellets; this greatly reduces training time and improves the likelihood that rats acquire the task. Finally, rats often perform fewer reaches when transferred between seemingly identical chambers. This was especially true when chambers were used for the first time. We speculate that this is due to differences in scent between chambers. Whatever the reason, it is important to maintain as stable a training environment as possible, or to acclimate the rats to all boxes in which testing might occur.
The apparatus described here is readily adaptable to specific needs. We described a rat version of the task, but have also implemented a mouse version (though it is difficult to identify individual digits with DeepLabCut in mice). Because individual video frames are marked with TTL pulses, videos can be synchronized with any recording system that accepts digital or analog inputs (e.g., electrophysiology amplifiers or photometry). Finally, head-fixed mice readily perform skilled reaching9, and a head-fixed version of this task could be implemented for 2-photon imaging or juxtacellular recordings. Importantly, we have only used this system with Long-Evans rats, whose nose and paw fur (black and white, respectively) differ enough in color that nose pokes are not mistaken for reaches (with appropriate ROI settings, Figure 8). This may be a problem for rats with similar coloration on their paws and noses (e.g., albino rats), but could be solved by coloring the paw with ink, nail polish, or tattoos.
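As a sketch of how this synchronization works in practice: the TTL pulse that triggers each frame is also recorded on a digital input of the recording system, so the pulse times directly index frame acquisition times. In the illustration below, the variable names and the 30 kHz sampling rate are assumptions, not values from the protocol.

```python
# Sketch: aligning video frames with an electrophysiology recording via
# the per-frame TTL pulses. Sample rate and edge times are illustrative.
import numpy as np

ephys_fs = 30000.0                      # ephys sampling rate (Hz), example
# Sample indices of TTL rising edges on the ephys digital input,
# one per acquired video frame (fake values for illustration).
ttl_edges = 12345 + 100 * np.arange(1200)

frame_times = ttl_edges / ephys_fs      # seconds; entry i = time of frame i

def frame_for_event(event_time_s: float) -> int:
    """Index of the video frame acquired closest to an ephys event."""
    return int(np.argmin(np.abs(frame_times - event_time_s)))
```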
The presented version of skilled reaching has several distinct features, which may be advantageous depending on the specific application. The relatively complicated hardware and need for real-time video processing make it poorly suited to home cage training6,7. On the other hand, home cage training makes it difficult to acquire high-speed, high-resolution video from multiple angles, or to tether the animals for physiologic recordings/interventions. The data acquisition cards and the requirement for one computer per chamber make each chamber relatively expensive, and the videos require significant digital storage space (~200 MB per 4 s video). We have implemented a simpler microcontroller-based version costing about $300 per chamber, though it lacks real-time feedback or the ability to synchronize with external devices. These boxes are essentially identical to those described here, but use a commercial camcorder and do not require a computer except to program the microcontroller (details of this set-up and associated software are available upon request). Real-time video processing on the FPGA frame-grabber is especially useful; we find that it identifies reaches in real time more robustly than infrared beams or proximity sensors (which may mistake the rat’s snout for the reaching paw). Furthermore, multiple triggers may be used to identify the paw at different reaching phases (e.g., approach to the slot, paw lift, extension through slot). This not only allows reproducible, precisely-timed neuronal perturbations, but can be used to trigger storage of short high-speed videos.
While our automated version of skilled reaching has several advantages for specific applications, it also has some limitations. As noted above, the high-speed, high-resolution camera is moderately expensive, but necessary to include mirror and direct views in a single image and capture the very quick reaching movement. Using one camera eliminates the need to synchronize and record multiple video streams simultaneously, or to purchase multiple cameras and frame grabbers. The paw in the reflected view is effectively about twice as far from the camera (by ray-tracing) as in the direct view. This means that one of the views is always out of focus, though DLC still robustly identifies individual digits in both views (Figure 4, Figure 6). In addition, we used a color camera because, prior to the availability of DLC, we tried color-coding the digits with tattoos. While it is possible that this learning-based program would be equally effective on black-and-white (or lower-resolution) video, we can only verify the effectiveness of the hardware described here. Finally, our analysis code (other than DLC) is written primarily in a commercial software package (see Table of Materials) but should be straightforward to adapt to open-source programming languages (e.g., Python) as needed.
There are several ways in which we are working to improve this system. Currently, the mirror view is partially occluded by the front panel. We have therefore been exploring ways to obtain multiple simultaneous views of the paw while minimizing obstructions. Another important development will be to automatically score the reaches (the system can track kinematics, but a human must still score successful versus failed reaches). Methods have been developed to determine whether pellets were knocked off the shelf/pedestal, but these cannot determine whether the pellet was grasped or missed entirely11. By tracking the pellet with DLC, we are exploring algorithms to determine the number of reaches per trial, as well as whether the pellet was grasped, knocked off the pedestal, or missed entirely (one candidate approach is sketched below). Along those lines, we are also working to fully automate the workflow from data collection through video conversion, DLC processing, and automatic scoring. Ultimately, we envision a system in which multiple experiments can be run in one day, and by the next morning the full forelimb kinematics and reaching scores for each experiment have been determined.
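As one example of the kind of algorithm under consideration (a sketch of the reasoning, not a validated scoring method), reaches within a trial can be counted as outward crossings of the slot plane by the paw dorsum’s reconstructed trajectory. The axis convention (slot plane at z = 0, z increasing outward) is an assumption for illustration.

```python
# Sketch: count reaches in a trial as outward crossings of the slot plane
# by the paw dorsum's 3-D trajectory. Axis convention is an assumption.
import numpy as np

def count_reaches(paw_z: np.ndarray, slot_z: float = 0.0) -> int:
    """Count frames where the paw moves from inside (z < slot_z) to outside."""
    outside = paw_z >= slot_z
    return int(np.count_nonzero(~outside[:-1] & outside[1:]))
```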
The authors have nothing to disclose.
The authors would like to thank Karunesh Ganguly and his laboratory for advice on the skilled reaching task, and Alexander and Mackenzie Mathis for their help in adapting DeepLabCut. This work was supported by the National Institute of Neurological Disorders and Stroke (grant number K08-NS072183) and the University of Michigan.
Name | Company | Catalog Number | Comments
clear polycarbonate panels | TAP Plastics | cut to order (see box design) | |
infrared source/detector | Med Associates | ENV-253SD | 30" range |
camera | Basler | acA2000-340kc | 2046 x 1086 CMV2000 340 fps Color Camera Link |
camera lens | Megapixel (computar) | M0814-MP2 | 2/3" 8mm f1.4 w/ locking Iris & Focus |
camera cables | Basler | #2000031083 | Cable PoCL Camera Link SDR/MDR Full, 5 m – Data Cables |
mirrors | Amazon | ||
linear actuator | Concentrics | LACT6P | Linear Actuator 6" Stroke (nominal), 110 Lb Force, 12 VDC, with Potentiometer |
pellet reservoir/funnel | Amico (Amazon) | a12073000ux0890 | 6" funnel |
guide tube | ePlastics | ACREXT.500X.250 | 1/2" OD x 1/4" ID Clear. Extruded Plexiglass Acrylic Tube x 6ft long |
pellet delivery rod | ePlastics | ACRCAR.250 | 0.250" DIA. Cast Acrylic Rod (2' length) |
plastic T connector | United States Plastic Corp | #62065 | 3/8" x 3/8" x 3/8" Hose ID Black HDPE Tee |
LED lights | Lighting EVER | 4100066-DW-F | 12V Flexible Waterproof LED Light Strip, LED Tape, Daylight White, Super Bright 300 Units 5050 LEDS, 16.4Ft 5 M Spool |
Light backing | ePlastics | ACTLNAT0.125X12X36 | 0.125" x 12" x 36" Natural Acetal Sheet |
Light diffuser films | inventables | 23114-01 | .007×8.5×11", matte two sides |
cabinet and custom frame materials | various (Home Depot, etc.) | 3/4" fiber board (see protocol for dimensions of each structure) | |
acoustic foam | Acoustic First | FireFlex Wedge Acoustical Foam (2" Thick) | |
ventilation fans | Cooler Master (Amazon) | B002R9RBO0 | Rifle Bearing 80mm Silent Cooling Fan for Computer Cases and CPU Coolers |
cabinet door hinges | Everbilt (Home Depot) | #14609 | continuous steel hinge (1.4" x 48")
cabinet wheels | Everbilt (Home Depot) | #49509 | Soft rubber swivel plate caster with 90 lb. load rating and side brake
cabinet door handle | Everbilt (Home Depot) | #15094 | White light duty door pull (4.5")
computer | Hewlett Packard | Z620 | HP Z620 Desktop Workstation |
Camera Link Frame Grabber | National Instruments | #781585-01 | PCIe-1473R Virtex-5 LX50 Camera Link – Full
Multifunction RIO Board | National Instruments | #781100-01 | PCIe-7841R
Analog RIO Board Cable | National Instruments | SCH68M-68F-RMIO | Multifunction Cable |
Digital RIO Board Cable | National Instruments | #191667-01 | SHC68-68-RDIO Digital Cable for R Series |
Analog Terminal Block | National Instruments | #782536-01 | SCB-68A Noise Rejecting, Shielded I/O Connector Block |
Digital Terminal Block | National Instruments | #782536-01 | SCB-68A Noise Rejecting, Shielded I/O Connector Block |
24 position relay rack | Measurement Computing Corp. | SSR-RACK24 | Solid state relay backplane (Gordos/OPTO-22 type relays), 24-channel |
DC switch | Measurement Computing Corp. | SSR-ODC-05 | Solid state relay module, single, DC switch, 3 to 60 VDC @ 3.5 A |
DC Sense | Measurement Computing Corp. | SSR-IDC-05 | solid state relay module, single, DC sense, 3 to 32 VDC |
DC Power Supply | BK Precision | 1671A | Triple-Output 30V, 5A Digital Display DC Power Supply |
sugar pellets | Bio Serv | F0023 | Dustless Precision Pellets, 45 mg, Sucrose (Unflavored) |
LabVIEW | National Instruments | LabVIEW 2014 SP1, 64 and 32-bit versions | 64-bit LabVIEW is required to access enough memory to stream videos, but FPGA coding must be performed in 32-bit LabVIEW |
MATLAB | Mathworks | Matlab R2019a | box calibration and trajectory reconstruction software is written in Matlab and requires the Computer Vision toolbox |