This protocol describes how to use frame-by-frame video analysis to quantify idiosyncratic reach-to-grasp movements in humans. A comparative analysis of reaching in sighted versus unsighted healthy adults is used to demonstrate the technique, but the method can also be applied to the study of developmental and clinical populations.
Prehension, the act of reaching to grasp an object, is central to the human experience. We use it to feed ourselves, groom ourselves, and manipulate objects and tools in our environment. Such behaviors are impaired by many sensorimotor disorders, yet our current understanding of their neural control is far from complete. Current technologies for investigating human reach-to-grasp movements often utilize motion tracking systems that can be expensive, require the attachment of markers or sensors to the hands, impede natural movement and sensory feedback, and provide kinematic output that can be difficult to interpret. While generally effective for studying the stereotypical reach-to-grasp movements of healthy sighted adults, many of these technologies face additional limitations when attempting to study the unpredictable and idiosyncratic reach-to-grasp movements of young infants, unsighted adults, and patients with neurological disorders. Thus, we present a novel, inexpensive, and highly reliable yet flexible protocol for quantifying the temporal and kinematic structure of idiosyncratic reach-to-grasp movements in humans. High-speed video cameras capture multiple views of the reach-to-grasp movement. Frame-by-frame video analysis is then used to document the timing and magnitude of pre-defined behavioral events such as movement start, collection, maximum height, peak aperture, first contact, and final grasp. The temporal structure of the movement is reconstructed by documenting the relative frame number of each event, while the kinematic structure of the hand is quantified using the ruler or measure function in photo editing software to measure calibrated 2-dimensional linear distances between two body parts or between a body part and the target.
Frame-by-frame video analysis can provide a quantitative and comprehensive description of idiosyncratic reach-to-grasp movements and will enable researchers to expand their area of investigation to include a greater range of naturalistic prehensile behaviors, guided by a wider variety of sensory modalities, in both healthy and clinical populations.
Prehension, the act of reaching to grasp an object, is used for many daily functions including acquiring food items for eating, grooming, manipulating objects, wielding tools, and communicating through gesture and written word. The most prominent theory concerning the neurobehavioral control of prehension, the Dual Visuomotor Channel theory1,2,3,4, proposes that prehension consists of two movements – a reach that transports the hand to the location of the target and a grasp that opens, shapes, and closes the hand to the size and shape of the target. The two movements are mediated by dissociable but interacting neural pathways from visual to motor cortex via the parietal lobe1,2,3,4. Behavioral support for the Dual Visuomotor Channel theory has been ambiguous, largely due to the fact that the reach-to-grasp movement appears as a single seamless act and unfolds with little conscious effort. Nonetheless, prehension is almost always studied in the context of visually-guided prehension in which a healthy participant reaches to grasp a visible target object. Under these circumstances the action does appear as a single movement that unfolds in a predictable and stereotypical fashion. Prior to reach onset the eyes fixate on the target. As the arm extends the digits open, preshape to the size of the object, and subsequently start to close. The eyes disengage from the target just prior to target contact and final grasp of the target follows almost immediately afterwards5. When vision is removed, however, the structure of the movement is fundamentally different. The movement dissociates into its constituent components such that an open-handed reach is first used to locate the target by touching it and then haptic cues associated with target contact guide shaping and closure of the hand to grasp6.
Quantification of the reach-to-grasp movement is most often achieved using a 3-dimensional (3D) motion tracking system. These can include infrared tracking systems, electromagnetic tracking systems, or video-based tracking systems. While such systems are effective for acquiring kinematic measures of prehension in healthy adult participants performing stereotypical reach-to-grasp movements towards visible target objects, they do have a number of drawbacks. In addition to being very expensive, these systems require the attachment of sensors or markers onto the arm, hand, and digits of the participant. These are usually attached using medical tape, which can impede tactile feedback from the hand, alter natural motor behavior, and distract participants7. As these systems generally produce numerical output related to different kinematic variables such as acceleration, deceleration, and velocity, they are also not ideal for investigating how the hand contacts the target. When using these systems, additional sensors or equipment are required to determine what part of the hand makes contact with the target, where on the target contact occurs, and how the configuration of the hand might change in order to manipulate the target. In addition, infrared tracking systems, which are the most commonly employed, require the use of a specialized camera to track the location of the markers on the hand in 3D space6. This requires a direct line of sight between the camera and the sensors on the hand. As such, any idiosyncrasies in the movement are likely to obscure this line of sight and result in the loss of critical kinematic data. There are, however, a large number of instances in which idiosyncrasies in the reach-to-grasp movement are actually the norm.
These include during early development when infants are just learning to reach and grasp for objects; when the target object is not visible and tactile cues must be used to guide the reach and the grasp; when the target object is an odd shape or texture; and when the participant presents with any one of a variety of sensorimotor disorders such as stroke, Huntington's disease, Parkinson's disease, or cerebral palsy. In all of these cases, the reach-to-grasp movement is neither predictable nor stereotypical, nor is it necessarily guided by vision. Consequently, the capability of 3D motion tracking systems to reliably quantify the temporal and kinematic structure of these movements can be severely limited due to disruptions in sensory feedback from the hand, changes in natural motor behavior, loss of data, and/or difficulties interpreting the idiosyncratic kinematic output from these devices.
The present paper describes a novel technique for quantifying idiosyncratic reach-to-grasp movements in various human populations that is affordable, does not impede sensory feedback from the hand or natural motor behavior, and is reliable but can be flexibly modified to suit a variety of experimental paradigms. The technique involves using multiple high-speed video cameras to record the reach-to-grasp movement from multiple angles. The video is then analyzed offline by progressing through the video frames one at a time and using visual inspection to document key behavioral events that, together, provide a quantified description of the temporal and kinematic organization of the reach-to-grasp movement. The present paper describes a comparative analysis of visually- versus nonvisually-guided reach-to-grasp movements in healthy human adults6,8,9,10 in order to demonstrate the efficacy of the technique; however, modified versions of the technique have also been used to quantify the reach-to-grasp actions of human infants11 and non-human primates12. The comprehensive results of the frame-by-frame video analysis from these studies are among the first to provide behavioral evidence in support of the Dual Visuomotor Channel theory of prehension.
All procedures involving human participants have been approved by the University of Lethbridge Human Subjects Research Committee and the Thompson Rivers University Research Ethics for Human Subjects Board.
1. Participants
2. Experimental Setup
3. Data Collection
4. Prepare Videos for Frame-by-Frame Video Analysis
5. Frame-by-Frame Video Analysis: Temporal Organization
6. Frame-by-Frame Video Analysis: Kinematic Calibration Scale
7. Frame-by-Frame Video Analysis: Kinematic Structure
8. Frame-by-Frame Video Analysis: Topographical Measures
This section provides examples of the results that can be obtained when using frame-by-frame video analysis to investigate idiosyncratic reach-to-grasp movements under nonvisual sensory guidance. The primary finding is that when participants can use vision to preemptively identify both the extrinsic (location/orientation) and intrinsic (size/shape) properties of a target object they integrate the reach and the grasp into a single seamless prehensile act in which they preshape the hand to the size and shape of the target before touching it (Figure 2A). When vision is unavailable, however, they dissociate the two movements so that tactile feedback can be used to first direct the hand in relation to the extrinsic and then in relation to the intrinsic properties of the target, in what has been termed a generalized touch-then-grasp strategy (Figure 2B). The results derived from frame-by-frame video analysis are comparable to those of traditional motion tracking systems without the expense, hassle, and other drawbacks of attaching sensors to the participant's hands. The results also provide support for the postulate of the Dual Visuomotor Channel theory of prehension that the reach and the grasp are separable movements that appear as one when integrated together under visual guidance.
All key behavioral events are generally present in both the Vision and No Vision conditions. However, there is a noticeable change in the No Vision condition, such that a significantly greater amount of time is required to transition from peak aperture to first contact and again from first contact to final grasp (Figure 3). Review of the kinematic results from the frame-by-frame video analysis provides a number of explanations for this increase in movement duration in the No Vision condition.
The hand takes a more elevated approach to the target and thus, achieves a greater maximum height in the No Vision condition compared to the Vision condition (Figure 4). This greater maximum height is a consistent feature of the No Vision reach-to-grasp movement, even after 50 trials of practice. The use of a more elevated reaching trajectory, in which the hand is raised above the target and then lowered down onto it from above, likely contributes to the increased amount of time required to transition from peak aperture to first contact in the No Vision compared to Vision conditions.
In the No Vision condition, the hand maintains a neutral posture, in which the digits remain open and extended during transport towards the target. This differs from the Vision condition in which the digits flex and close into a configuration that matches the size of the target on approach towards it. Consequently, in the No Vision condition the aperture of the hand does not preshape to the size of the target at either peak aperture (Figure 5, top) or at first contact (Figure 5, middle). This lack of preshaping in the No Vision condition means that additional time is required to modify the hand's configuration after first contact in order to match that of the target. This contributes to the increased amount of time required to transition from first contact to final grasp in the No Vision condition. Despite differences in hand aperture prior to and at first contact with the target, hand aperture at final grasp is identical in the Vision and No Vision conditions (Figure 5, bottom).
In the No Vision condition, the locations at which the thumb (red) or index finger (blue) makes first contact with the target are scattered haphazardly across the dorsal surface of the target object, indicating the absence of a preferred digit-thumb orientation (Figure 6, bottom left). This differs from the Vision condition, in which the index finger and thumb consistently establish first contact with opposing sides of the target, indicating the presence of a preferred digit-thumb orientation prior to first contact (Figure 6, top left). The absence of a preferred digit-thumb orientation prior to first contact in the No Vision condition means that additional time is required after first contact to re-adjust the configuration and position of the digits towards appropriate grasp points that are conducive to actually grasping the target. This is eventually achieved by the time of final grasp (Figure 6, bottom right) with a consistency similar to that observed in the Vision condition (Figure 6, top right).
In the No Vision condition, participants generally make at least one adjustment after first contact with the target (Figure 7), usually to re-direct the digits to more appropriate grasp points on the target. In contrast, in the Vision condition, participants never adjust hand-to-target contact after first contact. Thus, the adjustments made by participants in the No Vision condition likely contribute to the increased amount of time required to transition from first contact to final grasp.
Figure 8 illustrates the part of the hand used to make first contact with the target in the Vision condition (Figure 8A, left) and in the No Vision condition (Figure 8B, left). In the Vision condition, participants generally use the index finger and/or thumb to make first contact with the target. In contrast, the part of the hand that makes first contact with the target is much more variable in the No Vision condition, with participants often using any of the digits or the palm to make first contact. Notably, in the Vision condition the digits that make first contact with the target are the same ones that make contact during the final grasp. In contrast, the parts of the hand used to make first contact in the No Vision condition are usually different from the parts of the hand used during the final grasp (Figure 8A & Figure 8B, right).
Figure 9 illustrates the proportion of trials on which participants use a pincer or precision grip to acquire the target object. Participants in the No Vision condition use a precision grip significantly more often than a pincer grip, in contrast to participants in the Vision condition, who prefer a pincer grip.
In the Vision condition, participants consistently use a preshaping strategy in which the hand shapes and orients to the target prior to first contact in order to facilitate immediate grasping of the target. In the No Vision condition, the hand does not shape or orient to the target prior to first contact. Rather, in the No Vision condition the preferred grasp strategy is a touch-then-grasp strategy. This strategy is characterized by initial contact with the target, followed by a release of contact during which the hand re-shapes and re-orients, resulting in altered digit-to-target contact locations that ultimately facilitate successful grasping of the target (Figure 10A). Depending on the configuration of the hand at the time of first contact, variations of the touch-then-grasp strategy could be observed. In the first variation (Figure 10B), the hand is semi-shaped at first contact and first contact is made with the index finger or thumb, but at an inappropriate contact location, resulting in modifications in both hand shape and contact location prior to establishment of the final grasp posture. In the second variation (Figure 10C), the hand does not shape at all prior to first contact, but first contact is made with an appropriate part of the hand at an appropriate location on the target. Thus, a simple flexion of the remaining digits allows for successful capture of the target between the digits and thumb in an effective grasping posture. In the third variation (Figure 10D), the hand does not shape at all prior to first contact and first contact is made at an inappropriate location on the target, but with an appropriate part of the hand. Thus, the digit that makes first contact maintains contact while adjacent digits manipulate the target into a position that more readily facilitates grasping of the target between the index/middle finger and the thumb.
Figure 1: Six behavioral events. Still frames illustrating the 6 key behavioral events that constitute a stereotypical visually-guided reach-to-grasp movement in healthy human adults. White arrows indicate the aspects of the hand/action that are most relevant for identifying each behavioral event. Participants reached with their dominant hand.
Figure 2: Grasping strategies used by adults in the Vision and No Vision conditions. Still frames illustrating the preshaping strategy (A) that was favored by participants in the Vision condition and the general touch-then-grasp strategy (B) that was favored by participants in the No Vision condition. Participants reached with their dominant hand. This figure has been modified from Karl et al.6 and Whishaw et al.11.
Figure 3: Temporal organization of the reach-to-grasp movement. Time (mean ± standard error (SE)) to peak aperture (light grey), first contact (medium grey), and final grasp (black) of the reach-to-grasp movement of participants (n = 12) in the Vision (top) and No Vision conditions (bottom). This figure has been modified from Karl et al.6.
Figure 4: Maximum height. Maximum height (mean ± SE) of the reach-to-grasp trajectory for the first five versus the last five trials of each participant (n = 20) in the Vision and No Vision conditions (A). These results were confirmed by a repeated measures analysis of variance (ANOVA) that found a main effect of Condition F(1,17) = 35.673, p < 0.001 but no main effect of Trial F(9,153) = 1.173, p > 0.05 (*** = p < 0.001). Representative still frames of the arm and hand at the moment of maximum height on the first and last experimental trials in the Vision and No Vision condition (B). Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.8.
Figure 5: Aperture. Peak aperture (mean ± SE; top), aperture at first contact (mean ± SE; middle), and aperture at final grasp (mean ± SE; bottom) of participants (n = 12) reaching in the Vision (gray) and No Vision (black) conditions. These results were confirmed by repeated measures ANOVAs that found a significant Condition X Target interaction for peak aperture F(2,20) = 101.088, p < 0.001 and aperture at first contact F(2,20) = 114.779, p < 0.001, but not for aperture at final grasp F(2,20) = 0.457, p > 0.05 (*** = p < 0.001). Note that the aperture measures shown in the graphs were derived using both a traditional 3D motion tracking system and frame-by-frame video analysis. Participants reached with their dominant hand. B = blueberry, D = donut ball, O = orange slice. This figure has been modified from and presents data originally published in Karl et al.6.
Figure 6: First contact points and grasp contact points. Location of contact points at the moment of first contact with the target (left) and final grasp of the target (right). Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.6.
Figure 7: Adjustments. Number of adjustments (mean ± SE) between first contact and final grasp for all participants (n = 18) in the No Vision and Vision conditions. These results were confirmed by a repeated measures ANOVA that gave a significant effect of Condition F(1,17) = 55.987, p < 0.001 (*** = p < 0.001). Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.10.
Figure 8: Part of the hand to make contact with the target. Part of the hand to make first contact (left) and final grasp contacts (right) with the target object on the first five and last five experimental trials in the Vision (top) and No Vision (bottom) conditions. Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.8.
Figure 9: Grip type. Proportion of trials (mean ± SE) for which the participants (n = 12) utilized either a pincer or precision grip to acquire the target in the Vision (A) and No Vision (B) conditions. These results were confirmed by a repeated measures ANOVA that found a significant effect of Condition X Grip F(1,11) = 32.301, p < 0.001 (*** = p < 0.001). Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.6.
Figure 10: Grasping strategies. Representative still frames illustrate the general touch-then-grasp strategy (A), as well as 3 variations of it (B-D) by participants in the No Vision condition. Participants reached with their dominant hand. This figure has been modified from and presents data originally published in Karl et al.6.
Key Behavioral Event | Description | Record |
1. Movement Start | Defined as the first visible lifting of the palm of the hand away from the dorsum of the upper thigh | > Frame number |
2. Collection | Defined as the formation of a closed hand posture in which the digits maximally flex and close. Collection may be very obvious or very subtle | > Frame number > Distance between the central tip of the index finger and the central tip of the thumb |
3. Maximum Height | Defined as the maximum height of the most proximal knuckle of the index finger | > Frame number > Vertical distance between the top of the pedestal and the top of the index knuckle |
4. Peak Aperture | Defined as the maximum opening of the hand, as measured between the two digits used to secure the final grasp of the object, usually the index finger and thumb. In some cases the digits will re-open after target contact and it will be necessary to record a second peak aperture after target contact | > Frame number > Distance between the central tip of the index finger and the central tip of the thumb |
5. First Contact | Defined as the moment of first contact between the hand and the target | > Frame number > Distance between the central tip of the index finger and the central tip of the thumb > Part of the hand to make first contact with the target (Figure 8) > First contact points (Figure 6) |
6. Final Grasp | Defined as the moment at which all manipulation of the target is complete and the participant establishes a firm hold on the target | > Frame number > Distance between the central tip of the index finger and the central tip of the thumb > Grasp contact points (Figure 6) > Grip type > Part of the hand to make contact with the target at final grasp (Figure 8) |
Table 1: Description of key behavioral events. Lists the 6 key behavioral events that can be acquired using frame-by-frame video analysis (first column). Each behavioral event is accompanied by a description (second column) as well as a list of the temporal and kinematic information that should be recorded for each (third column).
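Because every event in Table 1 is logged as a frame number, the temporal organization follows from simple arithmetic: at a known capture rate, the elapsed time between two events is their frame difference divided by the frames per second. A minimal sketch (the capture rate and all frame numbers below are hypothetical illustrations, not data from the study):

```python
# Reconstruct the temporal organization from logged frame numbers.
# FPS and the frame numbers below are hypothetical illustrations.
FPS = 300  # capture rate of the high-speed camera, frames per second

events = {
    "movement_start": 112,
    "collection": 145,
    "maximum_height": 190,
    "peak_aperture": 231,
    "first_contact": 264,
    "final_grasp": 310,
}

def ms_from_start(frame, fps=FPS):
    """Elapsed time in milliseconds from movement start to the given frame."""
    return (frame - events["movement_start"]) / fps * 1000.0

# Duration of a single transition, e.g., peak aperture -> first contact
pa_to_fc_ms = (events["first_contact"] - events["peak_aperture"]) / FPS * 1000.0
```

At 300 frames per second each frame spans roughly 3.3 ms, so event timing is resolved far more finely than standard 30 frames-per-second video would allow.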
Topographical Measure | Description | Record |
Part of Hand to Make First Contact | Describes what part of the hand was used to make first contact with the target (1 = thumb, 2 = index finger, 3 = middle finger, 4 = ring finger, 5 = pinky finger, 6 = palm, 7 = dorsum of hand) | > Which part of the hand was used to make first contact with the target |
Contact Points | Illustrates where on the target first contact with the hand occurred | > See step 8.1.2. |
Grasp Points | Illustrates where on the target the hand made contact while establishing final grasp of the target | > See step 8.1.3. |
Adjustments | A reach-to-grasp movement is considered to contain an adjustment if, between first contact and final grasp, the participant releases and re-establishes contact with the target | > Number of adjustments per trial |
Grip Type | Describes the grip configuration used to acquire the target object | > See step 8.1.5. |
Grasp Strategy | Refers to the use of different digit-to-target manipulations after first contact in order to facilitate successful grasping of the target | > Type of grasp strategy used (Figure 10) |
Table 2: Description of topographical measures. Lists the topographical measures that can be acquired using frame-by-frame video analysis (first column). Each measure is accompanied by a description (second column) as well as a list of the types of information that should be recorded for each (third column).
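For tallying purposes, the topographical measures in Table 2 can be encoded as one record per trial. The sketch below shows one possible encoding; the field names and example values are hypothetical, with the part-of-hand coding taken from Table 2:

```python
from collections import Counter
from dataclasses import dataclass

# One record per trial for the topographical measures in Table 2.
# Part-of-hand coding follows Table 2: 1 = thumb, 2 = index finger, ...,
# 6 = palm, 7 = dorsum of hand. All trial values below are hypothetical.
@dataclass
class TopographyRecord:
    participant: str
    condition: str           # "Vision" or "No Vision"
    trial: int
    first_contact_part: int  # 1-7, per Table 2 coding
    adjustments: int         # contacts released and re-established before final grasp
    grip_type: str           # e.g., "pincer" or "precision"

trials = [
    TopographyRecord("P01", "No Vision", 1, 6, 2, "precision"),
    TopographyRecord("P01", "No Vision", 2, 3, 1, "precision"),
    TopographyRecord("P01", "Vision", 1, 2, 0, "pincer"),
]

# Tally grip types and average adjustments within the No Vision condition
no_vision = [t for t in trials if t.condition == "No Vision"]
grips = Counter(t.grip_type for t in no_vision)
mean_adjustments = sum(t.adjustments for t in no_vision) / len(no_vision)
```

Keeping one row per trial in this form maps directly onto the single-spreadsheet layout of Supplemental Table 1.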
Supplemental Table 1: Spreadsheet for data collection. A template for organizing the temporal, kinematic, and topographical measures (not including contact points and grasp points) collected from frame-by-frame video analysis in a single spreadsheet.
The present paper describes how to use frame-by-frame video analysis to quantify the temporal organization, kinematic structure, and a subset of topographical features of human reach-to-grasp movements. The technique can be used to study typical visually-guided reach-to-grasp movements, but also idiosyncratic reach-to-grasp movements. Such movements are difficult to study using traditional 3D motion tracking systems, but are common in developing infants, participants with altered sensory processing, and patients with sensorimotor disorders such as blindness, Parkinson's disease, stroke, or cerebral palsy. Thus, the use of frame-by-frame video analysis will allow researchers to expand their area of investigation to include a greater range of prehensile behaviors, guided by a wider variety of sensory modalities, in both healthy and clinical populations. Specific advantages of frame-by-frame video analysis include its relative affordability, ease of implementation, lack of sensors or markers that hinder the sensory and motor abilities of the hands, compatibility with other motion tracking systems, and ability to describe subtle changes in the reach-to-grasp movement that are often hard to interpret from the kinematic output provided by most traditional 3D motion tracking systems. Together, these features of frame-by-frame video analysis have made it possible to advance our theoretical understanding of the neurobehavioral control of prehension.
While there are many instances in which frame-by-frame video analysis may be the only reliable option for analyzing idiosyncratic reach-to-grasp movements, it is important to note that the technique does face some limitations. First, the distance measures (e.g., peak aperture) acquired using frame-by-frame video analysis are 2D and less precise than those obtained from traditional 3D motion tracking systems. Nonetheless, if necessary, additional cameras could be focused on the region of interest. This would allow the experimenter to select the camera view that provides the clearest fronto-parallel view of the behavioral event of interest, and thus increase the precision of the distance measure for that particular event. Furthermore, if very high precision is required for the distance measures then frame-by-frame video analysis can easily be combined with traditional 3D motion tracking techniques (see Figure 4, 5, and 10) as it does not impede data collection from the traditional system. Second, the ultimate success of the technique is critically dependent on the integrity of the video record. Choosing filming views that adequately capture the behavior, using a shutter speed of 1/1000th of a second with a strong light source, and ensuring that the focus of the camera remains stabilized on the action of interest will all help to ensure that individual frames in the video record are crisp, free of motion artifacts, and easy to analyze. Finally, when first learning to implement the technique, researchers may wish to utilize multiple blind raters to ensure high inter-rater reliability for scoring of the various behavioral events. Once trained, however, scoring is highly reliable and inter-rater reliability can be easily established using only a small subset of sample data.
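The 2D distance measures discussed above rest on a single scale conversion: a reference object of known length, filmed in the same fronto-parallel plane as the hand, yields a millimeter-per-pixel factor that converts any on-screen pixel distance to real units. A minimal sketch, assuming pixel coordinates have been read off a still frame (all coordinate values below are hypothetical):

```python
import math

# Calibrate a millimeter-per-pixel scale from a reference object of known
# length (e.g., a 10 cm ruler) lying in the same fronto-parallel plane as
# the hand. All pixel coordinates below are hypothetical example values.
REF_LENGTH_MM = 100.0
ref_p1, ref_p2 = (412, 880), (790, 884)  # pixel endpoints of the reference

def pixel_distance(p, q):
    """Euclidean distance between two pixel coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

mm_per_px = REF_LENGTH_MM / pixel_distance(ref_p1, ref_p2)

# Aperture: distance between the index fingertip and thumb tip on one frame
index_tip, thumb_tip = (605, 412), (598, 560)
peak_aperture_mm = pixel_distance(index_tip, thumb_tip) * mm_per_px
print(f"peak aperture: {peak_aperture_mm:.1f} mm")
```

The conversion is only valid for distances lying near the calibrated plane, which is why the protocol recommends selecting the camera view with the clearest fronto-parallel view of the event being measured.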
Frame-by-frame video analysis, in contrast to traditional 3D motion tracking systems, can provide a more ethologically valid description of natural reaching and grasping behavior as it does not require the placement of markers or sensors onto the participant's arms or hands. Additionally, many 3D motion tracking systems require a constant and direct line of sight between the camera and sensors/markers placed on the hands. To ensure this, most users of this technology ask participants to begin the reach-to-grasp movement with the hand shaped in an unnatural configuration with the index finger and thumb pinched together. They also instruct the participant to grasp the target object in a pre-defined way (usually a pincer grip) with a pre-defined orientation. These directives are required to ensure that the reach-to-grasp movement unfolds in a predictable and stereotypical manner as traditional recording systems can suffer significant data loss when the trajectory of the arm and configuration of the hand do not follow a predictable course that maintains the line of sight between the camera and sensors/markers. Nonetheless, imposing these constraints severely limits the ethological validity of the task and can even alter the organization of the movement; for example, it is not possible to observe the key behavioral event of 'collection' when the initial hand configuration is that of a pinch between the thumb and index finger13,14. These limitations are largely overcome when using frame-by-frame video analysis as variations in reach trajectory and hand configuration are much less likely to result in a complete loss of data in the video record so there is no need to impose these unnatural constraints on the reach-to-grasp movement.
Frame-by-frame video analysis also makes it possible to observe subtle modifications of the reach-to-grasp movement beyond what is generally possible with traditional 3D motion tracking systems, especially when the modification is not a specific prediction of the study. An example will illustrate: Figure 5 (top) shows measures of peak aperture acquired from participants reaching to grasp three different-sized objects either with or without vision. The results suggest that participants preemptively scale peak aperture to match the size of the target in the Vision condition, but not in the No Vision condition. In the No Vision condition participants use a consistent peak aperture despite reaching for targets of varying size. If one were to consider only the type of data available from a traditional 3D motion tracking system, similar to that shown in Figure 5 (top left), there are two possible explanations for this discrepancy. First, it could be that in the No Vision condition participants shape the hand into a grasping posture that matches the "average" or "middle" size of the three possible targets. Alternatively, they may not form a grasping posture at all, but rather a slightly more open hand during transport towards the target, to increase the chances of making tactile contact with the target, that coincidentally matches the size of the "middle" target. To differentiate between these two possibilities, it is necessary to review the data from the frame-by-frame video analysis, a sample of which is given in Figure 5 (top right), which clearly indicates that the participants are not shaping their hand into a grasping posture that matches the "middle"-sized object in the No Vision condition; rather, they are forming an open but neutral hand shape that could serve either to locate the target through tactile feedback and/or to grasp the target.
Thus, frame-by-frame video analysis can provide clarification when data from traditional 3D motion tracking systems are ambiguous and can enable a more accurate interpretation of the results.
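The peak aperture measure discussed above can be computed directly from coordinates digitized on individual video frames. The following is a minimal sketch, not part of the published protocol: it assumes the (x, y) pixel positions of the thumb tip and index-finger tip have been recorded for each frame (all variable names and values are hypothetical), and that a pixels-per-centimeter calibration factor has been obtained from an object of known size in the movement plane.

```python
# Sketch: compute grip aperture on each frame and locate peak aperture.
# Coordinates and the calibration factor below are illustrative values,
# not data from the study.
import math

def aperture_cm(thumb, index, pixels_per_cm):
    """2D Euclidean distance between thumb and index tips, in cm."""
    dx = index[0] - thumb[0]
    dy = index[1] - thumb[1]
    return math.hypot(dx, dy) / pixels_per_cm

def peak_aperture(thumb_xy, index_xy, pixels_per_cm):
    """Return (frame_index, aperture_cm) at maximum hand opening."""
    apertures = [aperture_cm(t, i, pixels_per_cm)
                 for t, i in zip(thumb_xy, index_xy)]
    frame = max(range(len(apertures)), key=apertures.__getitem__)
    return frame, apertures[frame]

# Hypothetical digitized coordinates (pixels) for a short trial:
thumb = [(100, 200), (105, 198), (110, 190), (118, 185)]
index = [(120, 200), (140, 195), (165, 188), (150, 184)]
frame, peak = peak_aperture(thumb, index, pixels_per_cm=25.0)
```

Because the aperture is computed on every frame, the same record yields both the magnitude of peak aperture and the frame (and hence time) at which it occurs, which is what allows hand shape at that moment to be inspected directly in the video.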
The use of frame-by-frame video analysis to study the reach-to-grasp movements of unsighted adults6,8,9,10, human infants11, non-human primates12, and rodents15 has already greatly advanced our understanding of the neurobehavioral control of prehension. Specifically, the results of these studies have consistently shown that in the early stages of prehensile development and evolution the touch-then-grasp strategy, in which the Reach and the Grasp are temporally dissociated to capitalize on tactile cues, is preferred over the preshaping strategy, in which the two movements are integrated into a single seamless act under visual guidance. These results provide substantial behavioral support for the Dual Visuomotor Channel theory and further suggest that the theory should be revised to account for the fact that separate reach and grasp movements likely originate under tactile control long before they are integrated under visual guidance1,2.
The authors have nothing to disclose.
The authors would like to thank Alexis M. Wilson and Marisa E. Bertoli for their assistance with filming and preparing the video for this manuscript. This research was supported by the Natural Sciences and Engineering Research Council of Canada (JMK, JRK, IQW), Alberta Innovates-Health Solutions (JMK), and the Canadian Institutes of Health Research (IQW).
Name | Company | URL | Comments |
High Speed Video Cameras | Casio | http://www.casio-intl.com/asia-mea/en/dc/ex_f1/ or http://www.casio-intl.com/asia-mea/en/dc/ex_100/ | Casio EX-F1 High Speed Camera or Casio EX-100 High Speed Camera used to collect high speed video records |
Adobe Photoshop | Adobe | http://www.adobe.com/ca/products/photoshop.html | Software used to calibrate and measure distances on individual video frames |
Adobe Premiere Pro | Adobe | http://www.adobe.com/ca/products/premiere.html | Software used to perform Frame-by-Frame Video Analysis |
Height-Adjustable Pedestal | Sanus | http://www.sanus.com/en_US/products/speaker-stands/htb3/ | A height adjustable speaker stand with a custom made 9 cm x 9 cm x 9 cm triangular top plate attached to the top with a screw is used as a reaching pedestal |
1 cm Calibration Cube | Learning Resources (Walmart) | https://www.walmart.com/ip/Learning-Resources-Centimeter-Cubes-Set-500/24886372 | A 1 cm plastic cube is used to transform distance measures from pixels to centimeters |
Studio Light | Dot Line | https://www.bhphotovideo.com/c/product/1035910-REG/dot_line_rs_5620_1600w_led_light.html | Strong lamp with cool LED light used to illuminate the participant and testing area |
3 Dimensional (3D) Sleep Mask | Kfine | https://www.amazon.com/Kfine-Sleeping-Contoured-lightweight-Comfortable/dp/B06W5CDY78?th=1 | Used as a blindfold to occlude vision in the No Vision condition |
Orange Slices | N/A | N/A | Orange slices served as the large sized reaching targets |
Donut Balls | Tim Hortons | http://www.timhortons.com/ca/en/menu/timbits.php | Old Fashioned Plain Timbits from Tim Hortons served as the medium sized reaching targets |
Blueberries | N/A | N/A | Blueberries served as the small sized reaching targets |
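The 1 cm calibration cube listed above is what allows pixel measurements taken in photo editing software to be expressed in centimeters, and the high-speed video record allows event frame numbers to be expressed as times. The sketch below illustrates both conversions under stated assumptions: the measured cube width in pixels, the frame numbers, and the 300 frames/s recording rate are illustrative values, not figures from the protocol.

```python
# Sketch: calibrate pixel distances with an object of known size (the
# 1 cm cube) and convert event frame numbers to times relative to
# movement start. All numeric values below are illustrative assumptions.

def pixels_per_cm(cube_width_px, cube_size_cm=1.0):
    """Calibration factor from an object of known size in the movement plane."""
    return cube_width_px / cube_size_cm

def px_to_cm(distance_px, calib):
    """Convert a measured pixel distance to centimeters."""
    return distance_px / calib

def frame_to_ms(frame, movement_start_frame, fps):
    """Time of a behavioral event (ms) relative to movement start."""
    return (frame - movement_start_frame) / fps * 1000.0

calib = pixels_per_cm(cube_width_px=28.0)   # cube spans 28 px in this view
aperture = px_to_cm(70.0, calib)            # 70 px between thumb and index
t_peak = frame_to_ms(frame=450, movement_start_frame=300, fps=300)
```

Because the calibration factor depends on camera distance and zoom, the cube must be measured in the same plane of movement and the same camera setup as the hand; a fresh calibration per camera view keeps the 2D distance measures comparable across trials.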