Applying Incongruent Visual-Tactile Stimuli during Object Transfer with Vibro-Tactile Feedback

Published: May 23, 2019
doi: 10.3791/59493

Summary

We present a protocol to apply incongruent visual-tactile stimuli during an object transfer task. Specifically, during block transfers, performed while the hand is hidden, a virtual presentation of the block shows random occurrences of false block drops. The protocol also describes adding vibrotactile feedback while performing the motor task.

Abstract

The application of incongruent sensory signals that involve disrupted tactile feedback is rarely explored, specifically in the presence of vibrotactile feedback (VTF). This protocol aims to test the effect of VTF on the response to incongruent visual-tactile stimuli. The tactile feedback is acquired by grasping a block and moving it across a partition. The visual feedback is a real-time virtual presentation of the moving block, acquired using a motion capture system. The congruent feedback is the reliable presentation of the movement of the block, so that the subject feels that the block is grasped and sees it move along the path of the hand. The incongruent feedback appears as the movement of the block diverts from the actual movement path, so that it seems to drop from the hand while it is actually still held by the subject, thereby contradicting the tactile feedback. Twenty subjects (mean age 30.2 ± 16.3 years) performed 16 block transfers while their hand was hidden. These were repeated with and without VTF (a total of 32 block transfers). Incongruent stimuli were presented randomly twice within the 16 repetitions in each condition (with and without VTF). Each subject was asked to rate the difficulty level of performing the task with and without the VTF. There were no statistically significant differences in the lengths of the hand paths or in the transfer durations between transfers recorded with congruent and incongruent visual-tactile signals, either with or without the VTF. The perceived difficulty level of performing the task with the VTF significantly correlated with the normalized path length of the block with VTF (r = 0.675, p = 0.002). This setup can be used to quantify the additive or reductive value of VTF during motor function that involves incongruent visual-tactile stimuli. Possible applications are prosthetics design, smart sportswear, or any other garments that incorporate VTF.

Introduction

Illusions exploit the limitations of our senses: we mistakenly perceive information that deviates from objective reality. Our perceptual inference is based on our experience in interpreting sensory data and on our brain's calculation of the most reliable estimate of reality in the presence of ambiguous sensory input1.

A sub-category in the research of illusions is one that combines incongruent sensory signals. The illusion that results from incongruent sensory signals originates from the constant multisensory integration performed by our brain. While there are numerous studies concerning incongruence in visual-auditory signals, incongruence in other sensory pairs is less frequently reported. This difference in the number of reports might be attributed to the relative simplicity of designing a setup that incorporates visual-auditory incongruence. However, studies that report results for other pairs of sensory modalities are of particular interest. For example, the effect of incongruent visual-haptic signals on visual sensitivity2 was studied using a system in which the visual and haptic stimuli were matched in spatial frequency, while their orientations were either identical (congruent) or orthogonal (incongruent). In another study, the effect of incongruent visual-tactile motion stimuli on the perceived visual direction of motion was investigated using a visual-tactile cross-modal integration stimulator, with a lighted panel that presents visual stimuli and a tactile stimulator that presents tactile motion stimuli with arbitrary motion direction, speed, and indentation depth in the skin3. It has been suggested that we internally represent both the statistical distribution of the task and our sensory uncertainty, combining them in a manner consistent with a performance-optimizing Bayesian process4.

Virtual reality has made it easy to deceive the visual feedback provided to the subject. Several studies used multisensory virtual reality to misalign visual and somatosensory information. For example, virtual reality was recently used to induce embodiment in a child's body, with or without activation of a child-like voice distortion5. In another example, the visual presentation of the walking distance during self-motion was extended and was therefore incongruent with the travel distance felt through body-based cues6. A similar virtual reality setup was designed for a cycling activity7. None of the aforementioned studies, however, combined an interference with one of the senses in addition to the incongruent signal. We chose the tactile sense to receive such a disturbance.

Our tactile sensory system provides direct evidence as to whether an object is being grasped. We therefore expect that when direct visual feedback is distorted or unavailable, the role of the tactile sensory system in object manipulation tasks will be prominent. However, what would happen if the tactile sensory channel was also disturbed? This is a possible outcome of using vibrotactile feedback (VTF) for sensory augmentation, as it captures the attention of the individual8. Today, augmented feedback of different modalities is used as an external tool, meant to enhance our internal sensory feedback and improve performance during motor learning, in sport and in rehabilitation settings9.

The study of incongruent visual-tactile stimuli may enhance our understanding of how sensory input is perceived. In particular, quantification of the additive or reductive value of VTF during motor function that involves incongruent visual-tactile stimuli can assist in future prosthetics design, smart sportswear, or any other garments that incorporate VTF. Since amputees are deprived of tactile stimuli at the distal aspect of their residuum, their daily use of VTF, embedded in the prosthesis to convey, for example, knowledge of grasping, might influence how they perceive visual feedback. Understanding the mechanism of perception under these conditions will allow engineers to refine VTF modalities and reduce their negative effect on VTF users.

We aimed to test the effect of VTF on the response to incongruent visual-tactile stimuli. In the presented setup, the tactile feedback is acquired by grasping a block and moving it across a partition; the visual feedback is a real-time virtual presentation of the moving block and the partition (acquired using a motion capture system). Since the subject is prevented from seeing the actual hand movement, the only visual feedback is the virtual one. The congruent feedback is the reliable presentation of the movement of the block, so that the subject feels that the block is grasped and sees it move along the path of the hand. The incongruent feedback appears as the movement of the block diverts from the actual movement path, so that it seems to drop from the hand when it is actually still held by the subject, thereby contradicting the tactile feedback. Three hypotheses were tested: when moving an object from one place to another using virtual visual feedback, (i) the path and duration of the object's transfer motion will increase when incongruent visual-tactile stimuli are presented, (ii) this change will increase when incongruent visual-tactile stimuli are presented and VTF is activated on the moving arm, and (iii) a positive correlation will be found between the perceived difficulty level of performing the task with the VTF activated and the path and duration of the object's transfer motion. The first hypothesis originates from the aforementioned literature reporting that various modalities of incongruent feedback affect our responses. The second hypothesis relates to the previous findings that VTF captures the attention of the individual. For the third hypothesis, we assumed that subjects who were more disturbed by the VTF would trust the virtual visual feedback more than their tactile sense.

Protocol

The following protocol follows the guidelines of the university's human research ethics committee. See the Table of Materials for references to the commercial products.

NOTE: After receiving approval of the university ethics committee, 20 healthy individuals (7 males and 13 females, mean and standard deviation [SD] of age 30.2 ± 16.3 years) were recruited. Each subject read and signed an informed consent form before the trial. Inclusion criteria were right-handed individuals aged 18 or above. Exclusion criteria were any neurological or orthopaedic impairment affecting the upper extremities or uncorrected sight impairment. The subjects were naïve to the occurrences of incongruent visual-tactile feedback.

1. Pre-trial preparation

  1. Use the wooden box from the box and blocks test10. The dimensions of the box are 53.7 cm x 26.9 cm x 8.5 cm, with a 15.2 cm high partition in its middle. Place a soft sponge layer on both sides of the partition. Place six passive reflective markers on the aspect opposite to the screen, at the four corners and on both ends of the partition (Figure 1a).
  2. Use a 3D-printer to manufacture a cube with the dimensions of 2.5 cm x 2.5 cm x 2.5 cm, attached to a base with the dimensions of 4.5 cm x 4.5 cm x 1 cm. Before printing, cut each corner of the base to create a square of size 1 cm x 1 cm at each corner (Figure 1a). Attach passive reflective markers on the four corners of the base.
  3. Place a large screen approximately 1.5 m in front of a table, so that a subject, standing behind the table, is approximately 2 m from the screen. Place the box on the table, 10 cm from the edge opposite to the screen.
  4. Use a 6-camera motion capture system, activated at 100 Hz, with a plug-in to visualize the partition and the movement of the block in real-time (Figure 1). Calibrate the motion capture system, according to the guidelines of the manufacturer, so that the block and partition of the box are recognized as rigid bodies.
    NOTE: Proper calibration of the motion capture system and usage of small markers that are firmly attached to the block and partition are required to maintain the illusion.
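
The real-time virtual presentation only requires the pose of the block, which can be derived from its four base markers. Below is a minimal Python sketch of that derivation, assuming the motion capture software streams labeled 3D marker coordinates as arrays; the function names and tolerance values are illustrative and are not part of the commercial real-time plug-in.

```python
import numpy as np

def block_centroid(markers_mm):
    """Estimate the block position as the centroid of its four base markers.
    markers_mm: (4, 3) array of labeled 3D marker coordinates in mm."""
    m = np.asarray(markers_mm, dtype=float)
    if m.shape != (4, 3) or np.isnan(m).any():
        return None  # occluded or mislabeled frame; skip this display update
    return m.mean(axis=0)

def pairwise_distances(markers_mm):
    """The six inter-marker distances of the block's base."""
    m = np.asarray(markers_mm, dtype=float)
    d = np.linalg.norm(m[:, None, :] - m[None, :, :], axis=-1)
    return d[np.triu_indices(4, k=1)]

def rigid_body_ok(markers_mm, reference_distances_mm, tol_mm=3.0):
    """Compare the current inter-marker distances to those captured at
    calibration; a large deviation suggests a loose or swapped marker,
    which would break the illusion."""
    current = np.sort(pairwise_distances(markers_mm))
    reference = np.sort(np.asarray(reference_distances_mm, dtype=float))
    return bool(np.all(np.abs(current - reference) < tol_mm))
```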

2. Placing the vibrotactile feedback system on the subject

NOTE: The VTF system described herein was previously published11,12,13,14.

  1. Instruct the subject to remove wristwatch, bracelets and rings. Attach the VTF system controller to the forearm of the subject (Figure 2, left image).
  2. Attach two thin and flexible force sensors to the palmar aspect of the thumb and index fingers over a thin spongy layer (Figure 2, right image).
  3. Place a cuff on the skin of the upper arm of the subject (Figure 2, left image) and use the fastener to close the cuff comfortably. The cuff contains three vibrotactile actuators, activated via an open-source electronic prototyping platform at a frequency of 233 Hz and in linear relation to the force measured by the force sensors. The force sensors and vibrotactile actuators are connected to the open-source electronic prototyping platform via shielded electric wires.
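
The force-to-vibration mapping can be illustrated with a short sketch. The protocol only states that the actuators are driven at 233 Hz in linear relation to the sensed force; the Python sketch below assumes this linear relation maps the raw force reading to vibration intensity, expressed here as a PWM duty cycle, and all numeric thresholds are illustrative rather than the calibration of the published system (which runs on the microcontroller listed in the Table of Materials).

```python
def force_to_vibration_duty(force_raw, force_min=5, force_max=1023,
                            duty_min=0.2, duty_max=1.0):
    """Map a raw force-sensor reading to a PWM duty cycle that drives the
    233 Hz vibrotactile actuators. The mapping is linear between a detection
    threshold (force_min) and sensor saturation (force_max). All numeric
    values here are illustrative, not the calibration of the published system."""
    if force_raw < force_min:
        return 0.0  # no contact, no vibration
    span = force_max - force_min
    duty = duty_min + (duty_max - duty_min) * (force_raw - force_min) / span
    return min(max(duty, 0.0), duty_max)

# Example: a mid-range pinch produces a mid-level vibration intensity.
print(force_to_vibration_duty(500))  # ~0.59
```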

3. VTF activation

  1. Press the button to activate the battery attached to the controller (Figure 2, left image).
  2. Ask the subject to lightly press together the fingers instrumented with the force sensors (i.e., the thumb and index finger). Verify that the subject reports a sense of vibration in the area under the cuff.
  3. Instruct the subject to train for 10 min in grasping the block as lightly as possible, using only the two instrumented fingers. Ask the subject to lift the block, move it, and place it back on the table several times, attempting to apply a minimal amount of force on the block. Encourage the subject to attempt to reduce the applied force, even if the block is dropped during grasping.

4. Positioning and preparing the subject

  1. Instruct the subject to stand close to the table (up to 10 cm from it), where the box and partition are placed.
  2. Place a divider at the edge of the table near the subject and above the box, so that the subject is unable to see the box but can easily see the screen in front of him or her (Figure 1a). For the divider, use a hard, non-reflective material, preferably wood, fixed on four height-adjustable legs to accommodate subjects of different heights.
  3. Instruct the subject to place the earphones on his or her head.
  4. Place the block in the middle of the right compartment of the box and guide the hand of the subject to it.

5. Commencing trial

NOTE: The described trial is repeated twice, with and without the VTF (a cross-over design is recommended to rule out a learning effect; a counterbalancing sketch is given after the steps of this section). To perform the trial without the VTF, turn off the battery attached to the controller (Figure 2).

  1. Activate the software controlling the cameras of the motion capture system.
  2. In the control panel of the visual feedback software (Figure 1b), select with/without VTF, type the code of the subject, click Run, Connect, Open and Start.
  3. Instruct the subject to perform 16 repetitions of transferring the block with the hand instrumented with the force sensors while viewing the movement of the virtual block on the screen (Figure 1b). After each transfer, return the block across the partition to its starting location.
  4. After the subject completes the 16 repetitions, click Stop.
  5. Ask the subject to rate the difficulty level of performing the 16-transfer task twice, once with and once without the VTF, according to the following scale: '0' (not difficult at all), '1' (slightly difficult), '2' (moderately difficult), '3' (very difficult), and '4' (extremely difficult).
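
As noted above, a cross-over design is recommended for ordering the two conditions. A minimal sketch of one possible counterbalanced order assignment is given below; the helper name and assignment scheme are illustrative, not part of the original protocol software.

```python
import random

def assign_condition_order(subject_ids, seed=None):
    """Randomly assign half of the subjects to start with the VTF condition
    and half to start without it (one possible cross-over assignment)."""
    rng = random.Random(seed)
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {sid: ("with VTF first" if i < half else "without VTF first")
            for i, sid in enumerate(ids)}

# Example for 20 subjects coded S01..S20.
order = assign_condition_order([f"S{i:02d}" for i in range(1, 21)], seed=0)
```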

6. Post analysis

  1. Use the 3D coordinate data of the block to compute the path of the block and its transfer time (see the analysis sketch after this list). Mark the onset and offset time of each transfer manually as the time when the block is at the height of the rims of the right (onset) and then left (offset) sides of the box. Calculate the path length of each transfer according to the following equation:
    (1) $L = \sum_{i=1}^{N-1} \left\| \mathbf{p}_{i+1} - \mathbf{p}_{i} \right\| = \sum_{i=1}^{N-1} \sqrt{(x_{i+1}-x_{i})^{2} + (y_{i+1}-y_{i})^{2} + (z_{i+1}-z_{i})^{2}}$
    where $\mathbf{p}_{i}$ and $\mathbf{p}_{i+1}$ are the 3D coordinates of the block at two subsequent time points.
  2. For both conditions, with and without VTF, average the path length and transfer time once for the two transfers with incongruent visual-tactile signals and once for the 14 transfers with the congruent visual-tactile signals.
  3. Normalize the path length and transfer time recorded during block transfers in the presence of incongruent visual-tactile signals by the path length and transfer time recorded during block transfers in the presence of congruent visual-tactile signals. Perform the normalization separately for the two conditions (with and without VTF).
  4. Perform a within-subject repeated-measures ANOVA with two factors: VTF (with and without) and incongruent visual-tactile feedback (with and without).
  5. If there are no statistically significant differences when analyzing the results following the instructions in subsection 6.4, use Bayesian repeated-measures ANOVAs with the same two factors15.
  6. Use Spearman's correlation test to test the correlation between the perceived difficulty level of performing the task with the VTF activated and the normalized path length and duration of the motion.
  7. Set the statistical significance to p < .05.
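
A minimal post-analysis sketch is given below. The original analysis used SPSS and JASP (see the Table of Materials); the Python helpers, data-frame column names, and variable names here are illustrative only, and the Bayesian repeated-measures ANOVAs of step 6.5 are left to JASP.

```python
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.anova import AnovaRM

def path_length(xyz):
    """Equation 1: sum of Euclidean distances between subsequent 3D block
    coordinates within one transfer. xyz is an (N, 3) array (e.g., in cm)."""
    xyz = np.asarray(xyz, dtype=float)
    return float(np.linalg.norm(np.diff(xyz, axis=0), axis=1).sum())

def normalized_measure(incongruent_vals, congruent_vals):
    """Step 6.3: mean over the 2 incongruent transfers divided by the mean
    over the 14 congruent transfers, computed per condition (with/without VTF)."""
    return float(np.mean(incongruent_vals) / np.mean(congruent_vals))

def run_statistics(cell_means, difficulty_with_vtf, norm_path_with_vtf):
    """cell_means: a long-format pandas DataFrame with one row per
    subject x VTF x congruence cell, holding the mean path length of that
    cell (columns 'subject', 'vtf', 'congruence', 'path_cm' are assumed)."""
    # Step 6.4: within-subject two-factor repeated-measures ANOVA.
    anova = AnovaRM(cell_means, depvar="path_cm", subject="subject",
                    within=["vtf", "congruence"]).fit()
    print(anova)
    # Step 6.5 (Bayesian repeated-measures ANOVA) is performed in JASP.
    # Step 6.6: Spearman correlation between the perceived difficulty with the
    # VTF and the normalized path length with the VTF (significance at p < .05).
    rho, p = spearmanr(difficulty_with_vtf, norm_path_with_vtf)
    print(f"Spearman r = {rho:.3f}, p = {p:.3f}")
```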

Representative Results

We used the described technique to test the three hypotheses that when moving an object from one place to another using virtual visual feedback: (i) the path and duration of the object's transfer motion will increase when incongruent visual-tactile stimuli are presented; (ii) this change will increase when incongruent visual-tactile stimuli are presented and VTF is activated on the moving arm; and (iii) a positive correlation will be found between the perceived difficulty level of performing the task with the VTF activated and the path and duration of the object's transfer motion.

The results support the third hypothesis. The reported difficulty levels of performing the task with and without the VTF are presented in Figure 3. According to Spearman’s correlation test, the perceived difficulty level (from ‘0’ = not difficult at all, to ‘4’ = extremely difficult) of performing the task with the VTF significantly correlated with the normalized path length of the block with VTF (r = 0.675, p = 0.002; Figure 4). In other words, the normalized path length in the presence of incongruent visual-tactile signals was longer for subjects who perceived the task as more difficult when using the VTF. There was no significant correlation between the perceived difficulty level of performing the task with the VTF and the normalized path length of the block without VTF (r = 0.132, p = 0.589). Also, there were no significant correlations between the perceived difficulty level of performing the task with the VTF and the normalized block transfer time, with and without the VTF (r = -0.056, p = 0.825 and r = -0.066, p = 0.788, respectively).

We suggest normalizing the path length and time, since the absolute values depended on each subject's movement speed and strategy, so that non-normalized values would not have reflected the individual's change in movement pattern due to the appearance of the incongruent visual-tactile signals. Since, after each transfer, we repositioned the block back to the exact starting position, the path length of the block was not affected by the starting position. For the within-subject repeated-measures ANOVA with two factors, VTF (with and without) and incongruent visual-tactile feedback (with and without), no statistically significant main effects were found in the length of the hand paths during a block transfer for trials with VTF compared to trials without VTF (F(1,15) = 0.029, p = 0.866) or for trials with congruent visual-tactile feedback compared with trials with incongruent visual-tactile feedback (F(1,15) = 0.031, p = 0.863). Also, no statistically significant main effects were found in the times to transfer a block for trials with VTF compared to trials without VTF (F(1,15) = 0.354, p = 0.561) or for trials with congruent visual-tactile feedback compared with trials with incongruent visual-tactile feedback (F(1,15) = 1.169, p = 0.297).

During the trials without VTF, there were no statistically significant differences in the length of the hand paths during a block transfer between transfers recorded with incongruent and congruent visual-tactile signals (27.3 ± 13.1 cm and 25.9 ± 12.2 cm, respectively) and between the times to transfer a block recorded with incongruent and congruent visual-tactile signals (1.18 ± 0.56 s and 1.20 ± 0.57 s, respectively). Similarly, when adding VTF, there were no statistically significant differences in the lengths of the hand paths during a block transfer, recorded with incongruent and congruent visual-tactile signals (24.7 ± 7.4 cm and 26.1 ± 11.1 cm, respectively) and between the time to transfer a block recorded with incongruent and congruent visual-tactile signals (1.21 ± 0.38 s and 1.06 ± 0.41 s, respectively). According to Bayesian statistics, the absence of difference involving the group factors can only be taken as anecdotal evidence given that none of the fitted models are substantially better than the null model (2.769 < all BF01 < 33.573) with a maximal error of 2.72%.

Figure 1: The trial setup. (a) Markers placed on the box (6 markers in red, 2 of which were placed on the partition) and the block (4 markers in blue), hidden from the subject's eyes. The markers were tracked by the motion capture system and the 3D coordinates of all markers were recorded in real time. (b) The partition and the movement of the block were presented on a screen situated in front of the subject. The software activation steps are described in the research protocol.

Figure 2: The vibrotactile feedback system. The system controller is attached to the forearm of the subject and the cuff is wrapped around the upper arm (left image). The force sensors are placed on the palmar aspect of the thumb and index fingers (right image).

Figure 3: The reported difficulty levels (0 = not difficult at all, 1 = slightly difficult, 2 = moderately difficult, 3 = very difficult, 4 = extremely difficult) of performing the task with and without the vibrotactile feedback (VTF).

Figure 4: A scatter plot of the perceived difficulty level (0 = not difficult at all, 1 = slightly difficult, 2 = moderately difficult, 3 = very difficult, 4 = extremely difficult) of performing the task with VTF in relation to the normalized path length of the block when transferred with VTF. The normalized path length (the path length in the presence of incongruent visual-tactile signals divided by the path length in the presence of congruent visual-tactile signals) was significantly longer for subjects who perceived the task as more difficult when using the VTF.

Discussion

In this study, a protocol that quantifies the effect of adding VTF on the object-transfer kinematics in the presence of incongruent visual-tactile stimuli was presented. To the best of our knowledge, this is the only protocol available to test the effect of VTF on the response to incongruent visual-tactile stimuli. The critical steps involved in the application of incongruent visual-tactile stimuli during object transfer with VTF include the following: attaching the VTF system to the subject, activating the VTF, preparing the motion capture system and movement task, and activating the visual feedback. It is critical that the subject is not aware of the possibility of misleading feedback during the trial. To ensure this, the bottom of the box is lined with a soft sponge layer and the subjects wear earphones to eliminate the auditory feedback of the block falling and hitting the wooden box. Also, the two transfers with incongruent feedback are chosen randomly out of the 16 transfers and are programmed to simulate a fall in the start compartment of the box after the block reaches 2 cm below the partition's height, so that the hand is in midair. The two random misleading visual signals were programmed so that they would not occur in either the first or last two block transfers and with at least one non-misleading transfer between them.
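
The constraint on where the two misleading transfers may occur can be expressed as a short sketch. The helper below is hypothetical, assuming the trial software simply redraws until the constraints are met; it is not the actual trial code.

```python
import random

def draw_incongruent_transfers(n_transfers=16, n_incongruent=2, seed=None):
    """Choose which transfers show the false block drop: never the first or
    last two transfers, and with at least one congruent transfer separating
    the two misleading ones."""
    rng = random.Random(seed)
    eligible = list(range(3, n_transfers - 1))  # transfers 3..14 out of 1..16
    while True:
        picks = sorted(rng.sample(eligible, n_incongruent))
        if all(b - a >= 2 for a, b in zip(picks, picks[1:])):
            return picks

print(draw_incongruent_transfers())  # e.g., [5, 12]; varies per run
```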

One advantage of this protocol is that the misleading visual feedback appears randomly and only a couple of times during the 16-transfer trial. This prevents the subject from mistrusting the virtual presentation. Since the conflict between the two signals provided to the subject in this trial is very high, the presented protocol aims to preserve the credibility of the misleading visual feedback by presenting only a small number of block drops. Shams16 discussed how the interaction between auditory and visual signals is affected by the degree of conflict between the two and by prior knowledge regarding the trial setup. This might also be the case in the interaction between tactile and visual signals, as designed herein. Another advantage of the system is that, since the motion capture system already calculates the 3D coordinates of the block in order to present its location in real time, the analysis of the movement time and path of the block in each repetition can be performed efficiently and precisely at the end of the trial.

Using this protocol, our preliminary results suggest that when analyzing incongruent visual-tactile signals, one acquired directly by light touch and the other acquired indirectly by vision (virtual representation), the subject may have ignored the indirect visual feedback and responded to the direct tactile signal. This was also confirmed in the presence of VTF, thereby rejecting the second hypothesis. We expected that the VTF would distract the attention of the subject from the light direct tactile feedback, thereby compelling the subject to respond with hesitation to the incongruent stimuli. The hesitation was expected to be expressed by a longer path and duration of the block transfer. This assumption was based on results of previous research showing that novel vibrotactile stimuli capture attention away from an ongoing visual task8. The path length and duration were chosen as indicators of hesitation, differentiating between two conditions: when the misleading visual feedback occurs, the subject can either trust the tactile feedback or the visual feedback. If the subject trusts the tactile feedback, then we expect that he or she will continue the smooth movement path. Conversely, if the subject trusts the false visual feedback, we expect that he or she will move the hand back to grasp the fallen block and retransfer it (increasing the path length and duration). One possible explanation for the lack of effect of VTF on the block-moving kinematics is that, since the VTF was applied in conjunction with the direct tactile feedback, the VTF did not function as a disturbance but rather as an indirect amplifier of the direct tactile feedback. This allowed the subjects to keep trusting the tactile feedback, both direct and indirect, over the virtual visual feedback. In addition to the potential for distraction by a vibrotactile signal, another aspect of applying vibration during a motor task should be considered: its effect on our perception of the tactile and proprioceptive senses. Studies have shown that tendon vibration causes illusions involving tactile perception and the sense of body dimension or position17,18,19. For example, vibrations applied to the biceps or triceps muscle tendons produced a proprioceptive illusion which had an effect on the sense of touch20. However, the effect of VTF (i.e., vibration in conjunction with the tactile stimuli) on tactile perception itself has not been researched, so one cannot theorize whether the vibrations applied to the upper arm during the trial affected the perception of the proprioceptive or tactile senses of the subject. Finally, a possible explanation for the rejection of the second hypothesis is the individual differences in the ability of the subjects to process the VTF, as discussed below.

In this study, the strong correlation between the perceived difficulty level of performing the task with the VTF and the normalized path of the block with VTF supports the third hypothesis and suggests that subjects who felt that the VTF disturbed them trusted the virtual visual feedback more than their tactile senses. Similar reports of individual differences in the perception of sensory illusions are documented in the literature. For example, subjects with higher Sensory Suggestibility Scale (SSS) scores rated the sense of ownership of a rubber hand in the rubber hand illusion as higher compared with subjects with lower SSS scores21. Another aspect of individual differences in the perception of illusions may arise from differences in the temporal perceptual binding window, which causes alterations when integrating multisensory cues22. It was found that individuals with narrower perceptual binding windows were less likely to perceive an illusion, suggesting that they are more likely to dissociate temporally asynchronous inputs. It should be noted that there was no significant correlation between the perceived difficulty level of performing the task with the VTF and the normalized transfer time of the block with VTF. This might be explained by the hand velocity. Specifically, when the subjects increased their hand path length in the presence of the misleading virtual signal, they might have sped up their hand movement, so that the total duration for completing the task remained similar to that of transfers performed without the misleading feedback.

One limitation of this protocol is that the instruction given to the subjects to apply minimal force to the block was probably executed differently by different subjects, so that some applied higher forces than others, thereby perceiving a stronger direct tactile signal, which might have affected the results. Unfortunately, the forces detected by the force sensors were not recorded, so this assumption cannot be confirmed. Recording the forces is an optional feature for future studies that might contribute information regarding the tactile information perceived by the subject during the trial. Also, the virtual representation designed herein provided only the block movement and the location of the box partition; there was no virtual representation of the hand of the subject. As previous research showed that visual information regarding apparent hand position may have crossmodal influences on tactile judgments, even when vision conflicts with proprioception23,24,25, adding a hand representation to this trial might have altered the results of this study. Additionally, the chosen task of block transfer might have been too quick. Future modifications to this protocol might include a more complicated task, so that the differences in the durations for completing the task, with and without VTF, would be more pronounced. Also, individual differences might be controlled for by using the SSS. Last, only a minimal number of repetitions of the incongruent visual-tactile feedback are possible in this protocol, so as not to alert the subject to the misleading visual feedback. The reliability of the protocol would be compromised if the subjects suspected that they were being deceived by the visual presentation. Therefore, the proportion of misleading feedback out of the total number of transfers should be kept minimal. Unfortunately, the small number of incongruent instances may limit the statistical power.

In summary, a new protocol, which presents misleading virtual visual feedback of movement, was tested with and without VTF. The preliminary results show that we trust direct and indirect tactile signals over an indirect visual signal. Furthermore, differences between subjects influence the response to incongruent signals, so that subjects who felt more disturbed by the VTF trusted the misleading visual signal over the tactile signal. This protocol could be further explored in upper-limb amputees who use prostheses equipped with VTF.

Disclosures

The authors have nothing to disclose.

Acknowledgements

This study was not funded.

Materials

3D printer Makerbot https://www.makerbot.com/
Box and Blocks test Sammons Preston https://www.performancehealth.com/box-and-blocks-test
Flexiforce sensors (1lb) Tekscan Inc. https://www.tekscan.com/force-sensors
JASP JASP Team https://jasp-stats.org/
Labview National Instruments http://www.ni.com/en-us/shop/labview/labview-details.html
Micro Arduino Arduino LLC https://store.arduino.cc/arduino-micro
Motion capture system Qualisys https://www.qualisys.com
Shaftless vibration motor Pololu https://www.pololu.com/product/1638
SPSS IBM https://www.ibm.com/analytics/spss-statistics-software

References

  1. Aggelopoulos, N. C. Perceptual inference. Neuroscience and Biobehavioral Reviews. 55, 375-392 (2015).
  2. van der Groen, O., van der Burg, E., Lunghi, C., Alais, D. Touch influences visual perception with a tight orientation-tuning. PloS One. 8 (11), e79558 (2013).
  3. Pei, Y. C., et al. Cross-modal sensory integration of visual-tactile motion information: instrument design and human psychophysics. Sensors. 13 (6), 7212-7223 (2013).
  4. Kording, K. P., Wolpert, D. M. Bayesian integration in sensorimotor learning. Nature. 427 (6971), 244-247 (2004).
  5. Tajadura-Jimenez, A., Banakou, D., Bianchi-Berthouze, N., Slater, M. Embodiment in a Child-Like Talking Virtual Body Influences Object Size Perception, Self-Identification, and Subsequent Real Speaking. Scientific Reports. 7 (1), (2017).
  6. Campos, J. L., Butler, J. S., Bulthoff, H. H. Multisensory integration in the estimation of walked distances. Experimental Brain Research. 218 (4), 551-565 (2012).
  7. Sun, H. J., Campos, J. L., Chan, G. S. Multisensory integration in the estimation of relative path length. Experimental Brain Research. 154 (2), 246-254 (2004).
  8. Parmentier, F. B., Ljungberg, J. K., Elsley, J. V., Lindkvist, M. A behavioral study of distraction by vibrotactile novelty. Journal of Experimental Psychology: Human Perception and Performance. 37 (4), 1134-1139 (2011).
  9. Sigrist, R., Rauter, G., Riener, R., Wolf, P. Augmented visual, auditory, haptic, and multimodal feedback in motor learning: a review. Psychonomic Bulletin & Review. 20 (1), 21-53 (2013).
  10. Hebert, J. S., Lewicke, J., Williams, T. R., Vette, A. H. Normative data for modified Box and Blocks test measuring upper-limb function via motion capture. Journal of Rehabilitation Research and Development. 51 (6), 918-932 (2014).
  11. Raveh, E., Portnoy, S., Friedman, J. Adding vibrotactile feedback to a myoelectric-controlled hand improves performance when online visual feedback is disturbed. Human Movement Science. 58, 32-40 (2018).
  12. Raveh, E., Friedman, J., Portnoy, S. Evaluation of the effects of adding vibrotactile feedback to myoelectric prosthesis users on performance and visual attention in a dual-task paradigm. Clinical Rehabilitation. 32 (10), 1308-1316 (2018).
  13. Raveh, E., Portnoy, S., Friedman, J. Myoelectric Prosthesis Users Improve Performance Time and Accuracy Using Vibrotactile Feedback When Visual Feedback Is Disturbed. Archives of Physical Medicine and Rehabilitation. , (2018).
  14. Raveh, E., Friedman, J., Portnoy, S. Visuomotor behaviors and performance in a dual-task paradigm with and without vibrotactile feedback when using a myoelectric controlled hand. Assistive Technology: The Official Journal of RESNA. , 1-7 (2017).
  15. Dienes, Z. Using Bayes to get the most out of non-significant results. Frontiers in Psychology. 5, 781 (2014).
  16. Shams, L., Murray, M. M., Wallace, M. T. Early Integration and Bayesian Causal Inference in Multisensory Perception. The Neural Bases of Multisensory Processes. , (2012).
  17. D’Amour, S., Pritchett, L. M., Harris, L. R. Bodily illusions disrupt tactile sensations. Journal of Experimental Psychology: Human Perception and Performance. 41 (1), 42-49 (2015).
  18. Tidoni, E., Fusco, G., Leonardis, D., Frisoli, A., Bergamasco, M., Aglioti, S. M. Illusory movements induced by tendon vibration in right- and left-handed people. Experimental Brain Research. 233 (2), 375-383 (2015).
  19. Fuentes, C. T., Gomi, H., Haggard, P. Temporal features of human tendon vibration illusions. The European Journal of Neuroscience. 36 (12), 3709-3717 (2012).
  20. de Vignemont, F., Ehrsson, H. H., Haggard, P. Bodily illusions modulate tactile perception. Current Biology. 15 (14), 1286-1290 (2005).
  21. Marotta, A., Tinazzi, M., Cavedini, C., Zampini, M., Fiorio, M. Individual Differences in the Rubber Hand Illusion Are Related to Sensory Suggestibility. PloS One. 11 (12), e0168489 (2016).
  22. Stevenson, R. A., Zemtsov, R. K., Wallace, M. T. Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. Journal of Experimental Psychology: Human Perception and Performance. 38 (6), 1517-1529 (2012).
  23. Maravita, A., Spence, C., Driver, J. Multisensory integration and the body schema: close to hand and within reach. Current Biology. 13 (13), R531-R539 (2003).
  24. Carey, D. P. Multisensory integration: attending to seen and felt hands. Current Biology. 10 (23), R863-R865 (2000).
  25. Tsakiris, M., Haggard, P. The rubber hand illusion revisited: visuotactile integration and self-attribution. Journal of Experimental Psychology: Human Perception and Performance. 31 (1), 80-91 (2005).

Cite This Article
Friedman, J., Raveh, E., Weiss, T., Itkin, S., Niv, D., Hani, M., Portnoy, S. Applying Incongruent Visual-Tactile Stimuli during Object Transfer with Vibro-Tactile Feedback. J. Vis. Exp. (147), e59493, doi:10.3791/59493 (2019).
