Creating Virtual-hand and Virtual-face Illusions to Investigate Self-representation

Published: March 01, 2017
doi: 10.3791/54784

Summary

Here, we describe virtual-hand and virtual-face illusion paradigms that can be used to study body-related self-perception/-representation. They have already been used in various studies to demonstrate that, under specific conditions, a virtual hand or face can be incorporated into one's body representation, suggesting that body representations are rather flexible.

Abstract

Studies investigating how people represent themselves and their own body often use variants of "ownership illusions", such as the traditional rubber-hand illusion or the more recently discovered enfacement illusion. However, these paradigms require rather artificial experimental setups, in which the artificial effector needs to be stroked in synchrony with the participants' real hand or face—a situation in which participants have no control over the stroking or the movements of their real or artificial effector. Here, we describe a technique to establish ownership illusions in a setup that is more realistic, more intuitive, and of presumably higher ecological validity. It creates the virtual-hand illusion by having participants control the movements of a virtual hand presented on a screen or in virtual space in front of them. If the virtual hand moves in synchrony with the participants' own real hand, they tend to perceive the virtual hand as part of their own body. The technique also creates the virtual-face illusion by having participants control the movements of a virtual face in front of them, again with the effect that they tend to perceive the face as their own if it moves in synchrony with their real face. Studying the circumstances under which illusions of this sort can be created, increased, or reduced provides important information about how people create and maintain representations of themselves.

Introduction

According to Western philosophy, the human self consists of two aspects1: For one, we perceive our own body and our activities in the here and now, which creates a phenomenal self-representation (often called the minimal self). For another, we create more enduring representations of ourselves by storing information about our personal history, integrating new information into the emerging self-concept, and presenting ourselves to our social environment accordingly, which amounts to the creation of a so-called narrative self. The minimal or phenomenal self has been argued to emerge from two sources of information. One is top-down information about longer-lasting aspects of our body, such as information about the effectors we own or the shape of our face. The other is bottom-up information provided by self-perception in the current situation.

Investigations of the latter were strongly inspired by a clever study by Botvinick and Cohen2. These authors presented human participants with a rubber hand lying in front of them, close to one of their real hands, which however was hidden from view. When the real hand and the rubber hand were stroked in synchrony, so as to create synchronous intermodal input, participants tended to perceive the rubber hand as part of their own body: the rubber-hand illusion. Further studies revealed that perceived ownership even went so far that participants would start sweating and try to withdraw their real hand when the rubber hand was being attacked by a knife or otherwise being "hurt"3.

While Botvinick and Cohen interpreted their findings as demonstrating that self-perception arises from the processing of bottom-up information, other authors have argued that the rubber-hand illusion results from the interaction between intermodal synchrony of input, a bottom-up source of information, and stored representations of one's own hands, a top-down source of information4. The idea is that the stimulus synchrony creates the impression that the real and the rubber hand are one and the same thing, and given that the rubber hand looks like a real hand, this impression is taken to reflect reality.

Later research by Kalckert and Ehrsson5 added a visuo-motor component to the rubber hand paradigm, which allows for the investigation of both perceived ownership (the impression that the artificial effector belongs to one's own body) and perceived agency (the impression that one is producing observed movements oneself). Participants were able to move the index finger of the rubber hand up and down by moving their own index finger, and the synchrony between real and rubber hand finger movements, the mode of movement (passive vs. active mode), and the positioning of the rubber hand (incongruous vs. congruous with regard to the participant's hand) were manipulated. The findings were taken to provide support for the notion that agency and ownership are functionally distinct cognitive phenomena: while asynchrony of movement abolished both the sense of ownership and the sense of agency, mode of movement affected agency only, and congruency of the rubber hand position had an effect on ownership only. The latter two results were replicated in a follow-up study in which the distance between real and rubber hand in the vertical plane varied6: ownership for the rubber hand decreased as its position increasingly mismatched the participant's real hand. However, agency was not affected by misplacements of the rubber hand in any condition.

However, recent research using virtual-reality techniques, which provide the participant with active control over the artificial effector, suggests that the role of top-down information and the distinction between ownership and agency may have been overestimated7,8. These techniques have replaced the rubber hand by a virtual hand presented to participants on a screen in front of them or by means of virtual-reality glasses9. Participants commonly wear a dataglove that translates the movements of the participant's real hand into movements of the virtual hand, either synchronously or asynchronously (e.g., with a noticeable delay). Similar to the rubber-hand illusion, synchronous translation strongly increases the participant's impression that the virtual hand becomes part of his or her own body10.

Employing virtual-reality techniques to create the rubber-hand illusion has several advantages over both the traditional rubber-hand paradigm and its combination with visuo-motor components11. Moving one's hand and seeing an effector moving in synchrony with it creates a much more natural situation than facing a rubber hand and being stroked by an experimenter. Moreover, the virtual manipulation provides the experimenter with much more experimental flexibility and much more control over the perceptual relation between perceiving and moving one's real hand and one's perception of the event created by the artificial effector. In particular, using virtual techniques facilitates the manipulation of factors that are likely to influence perceived ownership and agency. For instance, the shape of the virtual hand can be modified much more easily and quickly than the shape of a rubber hand, and the movements of the virtual hand can be of any kind, including biologically impossible movements. Among other things, this facilitates exploring the limits of the illusion, as the artificial effector need not look like a hand but may be replaced by any kind of static or dynamic event. Of both practical and theoretical interest, a virtual effector is arguably a lot more immersive and feels a lot more real than a rubber hand, which is likely to reduce the necessity to invoke top-down interpretations to make sense of the present situation.

Ownership illusions have, however, not been restricted to hands. Tsakiris12 was the first to use the stroking technique to create in participants the impression that a static face in a picture presented in front of them was their own. Sforza et al.13 have also found evidence for this phenomenon, to which they refer as enfacement: participants incorporated facial features of a partner when their own and their partner's face were touched in synchrony. The neural mechanism underlying the enfacement illusion has recently been investigated by various researchers; for a comprehensive commentary and interpretation of the findings see Bufalari et al.14. We have recently turned the regular enfacement illusion design into a virtual-reality version (the virtual-face illusion), in which participants control the movements of a virtual face in front of them by moving their own head15.

Here, we describe two experiments that used the virtual-hand illusion7 and the virtual-face illusion15 paradigms, respectively, to investigate self-representation. The virtual-hand experiment included three completely crossed experimental factors: (a) the synchrony between (felt) real-hand and (seen) virtual-effector movements, with a delay that was either close to zero (to induce ownership and agency) or three seconds (as a control condition); (b) the appearance of the virtual effector, which looked either like a human hand or like a rectangle (so as to test the effect of real-virtual effector similarity on the ownership illusion); and (c) the opportunity to control the behavior of the virtual effector, which was either nonexistent in a passive condition or direct in an active condition. The virtual-face experiment included two completely crossed experimental factors: (a) the synchrony between real-face and virtual-face movements, with a delay that was either close to zero (to induce ownership and agency) or three seconds (as a control condition); and (b) the facial expression of the virtual face, which was either neutral or smiling, to test whether enfacing a smiling face would lift the mood of the participant and improve his or her performance in a mood-sensitive creativity task.

Protocol

All studies conformed to the ethical standards of the Declaration of Helsinki, and the protocols were approved by the Leiden University human research ethics committee. Each condition tested about 20 participants.

1. Virtual-hand Illusion

  1. Experimental Setup
    1. Welcome the participant and collect additional information, like age, gender, etc.
    2. Establish an experimental setup that includes a virtual reality programming environment; a right-handed dataglove with six programmable vibration stimulators attached to the middle of the palm and to the outside of the medial (second) phalanges of each of the five fingers (see Materials List); a 3-Degrees of Freedom (DOF) orientation tracker; SCR (skin conductance response) measurement equipment; a black box (depth: 50 cm; height: 24 cm; width: 38 cm) with a computer screen lying on top horizontally (serving to present the virtual reality environment); and a cape to cover the participant's hand.
    3. Ask the participant to put the dataglove on his or her right hand and the orientation tracker on the right wrist. Attach an SCR remote transmitter with a strap to the left wrist. Put the SCR electrodes on the medial (second) phalanges of the index and middle fingers of the left hand (see Figure 1A and B for an illustration of the setup).
    4. Seat the participant in front of the desk on which the box with the computer screen on top is placed. Ask the participant to put his or her right hand into the box along the depth axis, so as to shield it from view.
    5. Put a cape over the participant's right shoulder and cover the space between screen and participant. Ask the participant to rest his or her left hand on an empty part of the desk.
    6. Connect the cables of the dataglove and orientation tracker to the computer, and start the virtual reality programming environment. Run the pre-written command script in the command window by clicking the "run" button in the virtual reality environment interface, so that the virtual reality environment starts. Monitor that the participant follows the instructions shown on the computer screen in front of them. Wait until the pre-written command script quits automatically.

Figure 1: (A) Participants wore an orientation tracker and a dataglove on their right hand, and an SCR remote transmitter on their left hand. (B) Setup of the virtual-hand illusion experiment. (C) Setup of the virtual-face illusion experiment. (D) A screenshot of the computer screen.

  2. Virtual Hand Design
    NOTE: Use Python command scripts in the command window of the virtual reality software and save them. Make sure that the main command script, the import commands, the module scripts, and other commands described below are part of the same script file. For the complete Python script and necessary files see the attached "Virtual Hand Illusion.zip" file (NB: the zip-file is a supplemental material of the manuscript and not part of the software package. Furthermore, it excludes the required plugins for the dataglove and orientation tracker, and any other Python modules used throughout the script). To execute the experiment, first unpack the contents of this file to any folder (e.g., the desktop) and then double click the "virtual-hand illusion_54784_R2_052716_KM.py" file to start the experiment. Note that the script is designed to work with the stated virtual reality programming environment and will not work using other programs.
    1. Import a pre-made virtual hand model and a pre-written hand script module (which can be found in the installation folder of the virtual-reality-environment software package) into the virtual reality environment. The hand script module tracks the finger-joint gestures and angles of the dataglove and feeds the information into the virtual hand model, which allows controlling the movements of the virtual hand by moving the real hand wearing the dataglove.
      1. Manually change the size and appearance of the virtual hand if necessary by specifying its parameters in the script, such as its x, y, and z scaling to change its size or change the mapped image.
      2. For synchrony conditions, use no transformation, so that the virtual hand moves the same way as the real hand and at (about) the same time. To create asynchrony, add a delay of 3 s, so that the virtual hand moves like the real hand but with a noticeable delay (see the sketch at the end of this subsection).
    2. Identify a suitable pre-made orientation tracker plugin in the installation folder of the virtual-reality-environment software package and import it into the command scripts. Note that running the command scripts makes the orientation tracker module track the orientation changes of the real hand (provided by the orientation tracker participants wear on their right wrist), which can then be used to control the orientation changes of the virtual hand by setting the yaw, pitch, and roll data of the virtual hand in the command window. Channel the data tracked by the orientation tracker directly into the virtual-hand model for synchrony conditions, but insert a delay of 3 s for asynchrony.
    3. Design the required additional virtual objects and their movement trajectories, so that they move to and from the virtual hand (here, design and import additional models for a stick, rectangle, ball, and knife, to be used during various parts of the experiment; see "Experimental Conditions"). Manually change the size, appearance and position for each of these objects in the command script in the same way as the parameters for the virtual hand are set. Set the required movement trajectories using the appropriate commands to set the start and end position of the movement trajectories for an object and the speed at which it should move.
    4. Determine the vibration strength and timing of each vibration stimulator in the command script; either without a delay for synchrony conditions (i.e., vibration starts exactly when the virtual hand is being contacted by the other virtual object) or with a delay of 3 s for asynchrony. All vibrators vibrate at the same time as the virtual hand is touched by the other virtual object (or at the delayed time point). Set the vibration strength to a medium level (i.e. to 0.5 on a scale of 0-1). Note that the actual strength of vibration depends on the programming environment and vibrators used for the experiment, and that a medium level of vibration in our experiment does not necessarily match the actual strength of vibration when different hardware (i.e. vibrators/dataglove) or software is used.
    5. Add a second part to the experiment script that is identical to the previous steps except for the following changes:
      1. Replace the virtual hand model with a virtual rectangle of a similar size as the virtual hand (so as to realize the appearance factor of the experiment).
      2. Make sure that the rotation of the real hand as picked up by the orientation tracker is translated into rotational movements of the rectangle.
      3. Make sure that the opening and closing of the real hand as picked up by the dataglove is translated into color changes of the rectangle, using the appropriate command for changing the color of an object in your programming environment (e.g., present the rectangle in green when the hand is completely closed, in red when it is completely open, and let the color change gradually between red and green as the hand opens or closes; see the sketch below).
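The two central manipulations of this design step, the 3 s visuo-motor delay and the closure-dependent color of the rectangle, can be expressed in a few lines of Python. The sketch below is a minimal, hardware-independent illustration of the logic only, not the actual experiment script: the names read_tracker_sample, apply_to_virtual_hand, and closure_to_color are hypothetical placeholders for the calls provided by the dataglove/orientation-tracker plugins and the virtual reality environment, which differ between setups.

```python
import time
from collections import deque

DELAY_S = 3.0  # delay for asynchrony conditions; use 0.0 for synchrony

# Time-stamped buffer of tracker samples; each sample would hold the
# finger-joint angles from the dataglove plus yaw/pitch/roll from the
# orientation tracker.
_buffer = deque()

def update_virtual_hand(read_tracker_sample, apply_to_virtual_hand, delay=DELAY_S):
    """Call once per frame: store the newest sample and release samples that
    are at least 'delay' seconds old to the virtual hand (or rectangle)."""
    now = time.time()
    _buffer.append((now, read_tracker_sample()))
    while _buffer and now - _buffer[0][0] >= delay:
        _, sample = _buffer.popleft()
        apply_to_virtual_hand(sample)

def closure_to_color(closure):
    """Map hand closure (0 = fully open, 1 = fully closed) to an RGB color
    blending from red (open) to green (closed), for the rectangle condition."""
    closure = max(0.0, min(1.0, closure))
    return (1.0 - closure, closure, 0.0)  # (R, G, B), each in the 0-1 range
```

With delay = 0.0 the buffer releases every sample immediately, yielding the synchronous mapping; with delay = 3.0 the virtual effector lags the real hand by 3 s. The same time-stamped buffering can be used to postpone the vibrotactile pulses in the asynchronous visuo-tactile conditions.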
  3. Experimental Conditions
    1. Run the eight experimental conditions (resulting from crossing the three experimental factors synchrony, appearance of the virtual effector, and active/passive) in an order that is either balanced across participants or randomized.
    2. For each condition, include three phases of about 2 to 3 min each to induce the virtual-hand illusion and a threat phase to measure skin conductance responses (SCR). The concrete protocol differs somewhat for the eight conditions and is described below.
    3. Virtual hand/active/synchrony
      1. Configure the system such that the delay between the following events is close to zero and not noticeable: (a) the movements and orientation changes of the real hand and the corresponding movements and orientation changes of the virtual hand in the visuo-motor correlation phase; (b) the time points of contact between the virtual hand and the additional virtual object on the screen and the corresponding time points of vibration-induced stimulation of the real hand in the visuo-tactile phase; and (c) the movements and orientation changes of the real hand and the corresponding movements and orientation changes of the virtual hand; and the time points of contact between the virtual hand and the additional virtual object on the screen and the corresponding time points of vibration-induced stimulation of the real hand in the visuo-motor-tactile phase.
      2. For the visuo-motor correlation phase, have participants freely move or rotate their real right hand, including opening, closing, and rotating their real hand, and moving each finger individually. Have participants watch the corresponding movements of the virtual hand on the computer screen.
      3. For the visuo-tactile stimulation phase, have participants keep their real hand still while watching the screen. Present another virtual object on the screen, such as a virtual ball or stick (which was created in 1.2.3) that moves to and from the virtual hand, producing the impression of touching and not touching the virtual hand.
        1. Accompany each contact between this additional virtual object and the virtual hand by vibrator activity on the dataglove. Have the vibrator stimulate that part of the real hand that corresponds to the part of the virtual hand that is being touched by the additional virtual object (e.g., if the virtual object seems to touch the palm of the virtual hand, the palm of the participant's real hand should be stimulated by the vibrator16).
      4. For the visuo-motor-tactile correlation phase, have the participants move the virtual hand by moving their real hand in order to touch a virtual vibrating stick or similar object (see 1.2.3). Ensure that each contact between virtual hand and virtual stick/object is accompanied by vibration-induced stimulation of the participant's real hand as described in 1.3.3.3.
      5. For the threat phase, have participants keep their real right hand still while watching a virtual knife or needle appear on the computer screen. Make the virtual knife or needle go to and from the virtual hand. Ensure that each contact results in a visible apparent "cutting" or "puncturing" of the virtual hand.
        1. Stimulate that part of the real hand that corresponds to the cut or punctured part of the virtual hand by using the vibrators of the dataglove as described in 1.3.3.3.
    4. Virtual hand/active/asynchrony
      1. Run the procedure described under 1.3.3 after configuring the system such that the delay between the critical events is three seconds instead of close to zero.
    5. Virtual rectangle/active/synchrony
      1. Run the procedure described under 1.3.3 but with the virtual rectangle instead of the virtual hand.
    6. Virtual rectangle/active/asynchrony
      1. Run the procedure described under 1.3.4 but with the virtual rectangle instead of the virtual hand.
    7. Virtual hand/passive/synchrony
      1. Run the procedure described under 1.3.3 but ask the participant to keep his or her real hand still throughout all phases.
    8. Virtual hand/passive/asynchrony
      1. Run the procedure described under 1.3.4 but ask the participant to keep his or her real hand still throughout all phases.
    9. Virtual rectangle/passive/synchrony
      1. Run the procedure described under 1.3.5 but ask the participant to keep his or her real hand still throughout all phases.
    10. Virtual rectangle/passive/asynchrony
      1. Run the procedure described under 1.3.6 but ask the participant to keep his or her real hand still throughout all phases.
  4. Data Collection
    1. Collect SCR data using the measurement equipment (see Materials List) and its software, recording one sample every 0.1 ms.
    2. Ask the participant to fill out the questionnaire measuring sense of ownership, agency, location and appearance for the respective condition. Use either a paper version, in which each question (as described in 1.4.2.1 and 1.4.2.2) is printed, together with a Likert scale (as described in 1.4.2.3), and which can be filled in with a pen; or use a computerized version, in which each question is shown on the screen, together with the Likert scale, and in which the chosen scale value can be typed in.
      1. Include a questionnaire that minimally includes one or more ownership questions2; use the following four:
        (O1) "I felt as if the hand on the screen were my right hand or part of my body";
        (O2) "It seemed as if what I were feeling on my right hand was caused by the touch of the stick on the hand on the screen that I was seeing";
        (O3) "I had the sensation that the vibration I felt on my right hand was on the same location where the hand on the screen was touched by the stick";
        (O4) "It seemed my right hand was in the location where the hand on the screen was".
      2. Consider including further questions regarding agency; use the following:
        (A1) "I felt I can control this virtual hand" (for the active condition);
        (A1) "It seemed like I could have moved the hand on the screen if I had wanted to, as if it were obeying my will" (for the passive condition); .
        Note that the items listed in 1.4.2.1 and 1.4.2.2 refer to the hand condition. For the rectangle condition, replace all references to the virtual hand by references to the virtual rectangle.
      3. Use a Likert scale2 for each question (e.g., 1-7), so that participants can score the degree to which they agree with the question; e.g., use 1 for "strongly disagree" and 7 for "strongly agree". Make sure each question appears on screen and can be responded to with the numbers 1 to 7 corresponding to the 7 response options of the Likert scale; appearance and response options are programmed in the experiment script (a minimal sketch of such a computerized questionnaire loop follows below).
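The computerized questionnaire described in 1.4.2 boils down to presenting each item together with the 7-point scale and accepting a keypress between 1 and 7. The sketch below illustrates that loop only; show_text and wait_keypress are hypothetical stand-ins for the display and keyboard functions of whichever environment is used, and the items are the ownership questions listed in 1.4.2.1.

```python
OWNERSHIP_ITEMS = [
    "I felt as if the hand on the screen were my right hand or part of my body",
    "It seemed as if what I were feeling on my right hand was caused by the touch "
    "of the stick on the hand on the screen that I was seeing",
    "I had the sensation that the vibration I felt on my right hand was on the same "
    "location where the hand on the screen was touched by the stick",
    "It seemed my right hand was in the location where the hand on the screen was",
]
SCALE_LABEL = "1 = strongly disagree ... 7 = strongly agree"
VALID_KEYS = {"1", "2", "3", "4", "5", "6", "7"}

def run_questionnaire(show_text, wait_keypress, items=OWNERSHIP_ITEMS):
    """Present each item with the 7-point Likert scale and collect a 1-7 rating."""
    ratings = []
    for item in items:
        show_text(item + "\n\n" + SCALE_LABEL)
        key = wait_keypress()          # assumed to return the pressed character
        while key not in VALID_KEYS:   # accept only valid scale values
            key = wait_keypress()
        ratings.append(int(key))
    return ratings
```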

2. Virtual-face Illusion

  1. Experimental Setup
    1. Welcome the participant and collect additional information, like age, gender, etc.
    2. Establish an experimental setup that includes a virtual reality programming environment; a head position tracking system, including corresponding hardware and software17; and a 3-DOF orientation tracker attached to the top of a hat or baseball cap.
      NOTE: Using this experimental setup, participants can freely move or rotate their own head to control the position and orientation of the virtual face, but they cannot control the facial expressions of the virtual face.
    3. Ask the participant to sit on a chair 2 m in front of the computer screen. See Figure 1C and 1D for illustrations of the experimental setup.
    4. Ask the participant to put on the cap with the attached orientation tracker.
    5. Connect the position tracking system and orientation tracker to the computer and run the pre-written command script in the command window by clicking the "run" button in the virtual reality environment interface, so that the virtual reality environment starts. Monitor that the participant follows the instructions shown on the computer screen in front of them. Wait until the pre-written command script quits automatically.
  2. Virtual Face Design
    NOTE: For the complete Python script and necessary files see the attached "Virtual Face Illusion.zip" file (NB: the zip-file is a supplemental material of the manuscript and not part of the software package; it does not include the required plugins used for position and orientation tracking or any other Python modules used throughout the script). To execute the experiment, first unpack the contents of this file to any folder (e.g., the desktop) and then double click the "virtual-face illusion_54784_R2_052716_KM.py" file to start the experiment. Note that the script is designed to work with the virtual reality programming environment presented here and will not work using other programs.
    1. Use a virtual face building program to design virtual faces with the appropriate age, race, and gender (corresponding to the participants being tested) by selecting the best-fitting values on the corresponding scales of the program.
    2. Create two versions of each face, one with a neutral facial expression and one with a smile, by selecting the corresponding values on the corresponding scales of the program (which varies expressions by changing eye size, the curvature of the mouth, and some other facial muscles).
    3. For testing university students, create four 20-year-old virtual faces with the virtual face building program: one male face with a neutral facial expression, one male face that is smiling, one female face with a neutral facial expression, and one female face that is smiling.
    4. In the virtual face building program export the faces to 3D VRML-formatted files.
    5. Using the appropriate commands of the virtual reality programming environment, import the created VRML files, i.e., the virtual faces, into the virtual reality environment for use during the experiment. Vary their size or scale by setting their parameters accordingly using the appropriate commands.
    6. Find the pre-written tracking module for the head position tracking system in the installation folder of the virtual environment and import it; this allows tracking the participant's head position. In the script, determine the time point at which tracked head positions are translated into virtual-face positions (use a 0 ms delay for synchrony conditions and a 3 s delay for asynchrony).
    7. Find a pre-made orientation tracker plugin in the installation folder of the virtual environment and import it into the command scripts. Note that, again, the script allows introducing a temporal delay between orientation changes of the participant's head and the corresponding orientation changes of the virtual head (use a 0 ms delay for synchrony conditions and a 3 s delay for asynchrony).
    8. Design additional virtual objects (such as a virtual stick) and their motion trajectories, so they move to and from the virtual face. Set the size of the virtual object to be similar to the size of a virtual finger.
    9. Connect the hardware, run the saved command scripts, and then start the experiment.
  3. Experimental Conditions
    1. Run the command scripts and track the participant's head position by means of the head position tracking system and the participant's head orientation by means of a 3-DOF orientation tracker attached to a cap.
    2. Expose the participant to the virtual face for 30 s and instruct him or her not to move. Once the face has disappeared, have the participant respond to the IOS scale (described under Data Collection) to assess how he or she perceives the relationship between him- or herself and the virtual face.
    3. Run the four experimental conditions (described below) in an order that is either balanced across participants or randomized. Each condition includes three phases of about 2 to 3 min each to induce the virtual-face illusion.
    4. Neutral/synchrony
      1. Configure the system such that the delay between the following events is close to zero and not noticeable: (a) the movements of the real head and the corresponding movements of the virtual head in the visuo-motor correlation phase and (b) the time points of contact between the participant's real hand and the participant's real cheek and between the virtual object and the virtual head in the visuo-tactile stimulation phase.
      2. For the visuo-motor correlation phase, have participants put on the cap with the attached orientation tracker. Ask them to keep moving or rotating their own head to control the position and orientation of the virtual face.
      3. For the visuo-tactile stimulation phase, have participants stretch their right arm to the right and back repeatedly, to touch their right cheek, while watching the screen. The touch is only momentary: participants touch the cheek, let go, stretch their right arm to the right again, and repeat for the duration of this visuo-tactile stimulation phase.
      4. On the screen, present the virtual face being repeatedly touched at the cheek by a virtual object, such as a virtual ball. The touch (or rather the hand movement in general) is synchronized with the virtual object through the motion-tracking system, which tracks the location of the participant's limb (e.g., the hand) in 3D space; this allows mapping the participant's hand movements directly onto the trajectory of the virtual object, so that the real hand and the virtual object move along synchronized trajectories. Thus, when the virtual object touches the virtual face, this corresponds to the participant touching his or her own cheek (see the sketch at the end of this subsection).
    5. Neutral/asynchrony
      1. Run the procedure described under 2.3.4 after having configured the system such that the delay between the critical events is 3 s instead of close to zero.
    6. Smiling/synchrony
      1. Run the procedure described under 2.3.4 after having configured the system to present the smiling face instead of the face with a neutral expression.
    7. Smiling/asynchrony
      1. Run the procedure described under 2.3.6 after having configured the system such that the delay between the critical events is 3 s instead of close to zero.
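The touch synchronization described in 2.3.4.4 amounts to mapping the tracked position of the participant's real hand onto the trajectory of the virtual object, so that the object reaches the virtual cheek exactly when the real hand reaches the real cheek. The sketch below shows one simple way to do this, assuming two calibration points per trajectory; get_tracked_hand_position and set_object_position are hypothetical placeholders for the calls provided by the motion-tracking system and the virtual reality environment, and the coordinates are arbitrary example values.

```python
# Calibration points: real-hand position with the arm stretched out vs.
# touching the cheek, and the corresponding virtual-object positions
# (away from the virtual face vs. at the virtual cheek). Example values.
REAL_OUT,  REAL_CHEEK = (0.60, 1.20, 0.30), (0.10, 1.40, 0.10)
OBJ_OUT,   OBJ_CHEEK  = (0.40, 1.60, 1.00), (0.05, 1.65, 0.90)

def _progress(hand_pos):
    """Project the hand position onto the out-to-cheek axis:
    0 = arm stretched out, 1 = hand touching the cheek."""
    axis = [c - o for c, o in zip(REAL_CHEEK, REAL_OUT)]
    rel  = [p - o for p, o in zip(hand_pos, REAL_OUT)]
    t = sum(a * r for a, r in zip(axis, rel)) / sum(a * a for a in axis)
    return max(0.0, min(1.0, t))

def update_virtual_object(get_tracked_hand_position, set_object_position):
    """Call once per frame: place the virtual object along its trajectory at
    the same relative position as the real hand on its way to the cheek."""
    t = _progress(get_tracked_hand_position())
    pos = tuple(o + t * (c - o) for o, c in zip(OBJ_OUT, OBJ_CHEEK))
    set_object_position(pos)
```

For the asynchronous conditions, the mapped positions can be released with a 3 s lag using the same time-stamped buffering shown in the virtual-hand sketch above.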
  4. Data Collection
    1. Ask the participant to fill out the questionnaire measuring sense of ownership and agency for the respective condition.
      1. Include a questionnaire that minimally includes one or more ownership questions; use the following four:
        (O1) "I felt like the face on the screen was my own face";
        (O2) "It seemed like I was looking at my own reflection in a mirror";
        (O3) "It seemed like I was sensing the movements and the touch on my face in the location where the face on the screen was";
        (O4) "It seemed like the touch I felt on my face was caused by the ball touching the face on the screen".
      2. Consider including agency questions; use the following two:
        (A1) "It seemed as though the movements I saw on the face on the screen was caused by my own movements";
        (A2) "The face on the screen moved just like I wanted it to, as if it was obeying my will".
    2. Include the "Inclusion of Other in the Self" (IOS) scale18, which is created by using a 7-point (1-7) Likert scale2 on which each score is indicated to correspond to a different degree of self-other overlap. Indicate the degree of overlap graphically through the overlap of two circles with one representing the "Self" and the other circle the "Other". Characterize the lowest score of the scale by zero-overlap of the two circles and the highest score by perfect overlap. Higher ratings thus represent a higher degree of self-other overlap.
    3. Optionally, include the Affect Grid19 to assess mood.
      1. Create a 2-dimensional (valence by arousal) Likert-type grid, in which one dimension corresponds to valence (ranging from -4 for feeling unpleasant to +4 for feeling pleasant) and the other to arousal (ranging from -4 for feeling sleepy to +4 for feeling highly aroused).
      2. Have participants choose one point (e.g., with a pen) that corresponds to how pleasant and how aroused they currently feel.
        NOTE: The questionnaires, IOS scale, and Affect Grid appear on screen after each of the experimental phases is finished. Participants use the keyboard to respond (as in the virtual-hand illusion experiment).
    4. Optionally, include the Alternative Uses Task (AUT)20.
      1. Ask participants to list as many possible uses for a common household item, such as a newspaper, as they can. The task is performed with pen and paper. Have participants write down as many uses for the object as they can in 5 min.
      2. Repeat for another object (e.g., a brick). Score the outcomes later according to fluency (number of uses), flexibility (number of categories of uses), elaboration (how much detail or explanation is provided for the use), and originality (how unique the use is); see the scoring sketch at the end of this section. Ensure that higher scores indicate higher divergent-thinking performance for all items. Use two different scorers and ensure that the inter-scorer correlation is high. Focus on the flexibility score for further analyses, as this is the most consistent and theoretically most transparent score of the task.
      3. Use the AUT as an implicit (and demand-characteristic-free) measure indicating mood, as performance in this task increases with better mood21.
        NOTE: If the AUT is to be implemented, change the script such that the virtual face remains on screen, visible to and under the control of the participant, while they perform the AUT.
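The AUT scoring described in 2.4.4.2 can be made explicit with a small helper. The sketch below assumes (as a simplifying assumption, not part of the original protocol) that a scorer has already assigned each listed use to a category and given it elaboration and originality ratings; it then computes fluency, flexibility, and mean elaboration/originality for one participant.

```python
def score_aut(responses):
    """Score one participant's AUT list.

    'responses' is a list of dicts, one per listed use, holding the scorer's
    judgments, e.g. {"use": "hat", "category": "clothing",
                     "elaboration": 1, "originality": 3}.
    """
    fluency = len(responses)                               # number of uses
    flexibility = len({r["category"] for r in responses})  # number of distinct categories
    elaboration = sum(r["elaboration"] for r in responses) / max(fluency, 1)
    originality = sum(r["originality"] for r in responses) / max(fluency, 1)
    return {"fluency": fluency, "flexibility": flexibility,
            "elaboration": elaboration, "originality": originality}

# Hypothetical example data from one scorer:
example = [
    {"use": "hat",         "category": "clothing", "elaboration": 1, "originality": 3},
    {"use": "fly swatter", "category": "tool",     "elaboration": 2, "originality": 2},
    {"use": "kindling",    "category": "fuel",     "elaboration": 1, "originality": 1},
]
print(score_aut(example))  # fluency 3, flexibility 3, mean elaboration/originality
```

The inter-scorer check can then be computed by correlating the two scorers' flexibility scores across participants.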

Representative Results

Virtual-hand Illusion

We ran several experiments using the virtual-hand illusion paradigm to investigate how people represent their bodies, in this case their hands. The number of tested participants depended on the number of conditions, usually around 20 participants per condition. Here we provide relevant results for one of the most elaborate studies conducted in our lab. We restrict our discussion to the subjective data: the average of the Likert-scale responses to the four ownership questions (O1-O4) and the Likert-scale response to the agency question (A1).

In this study8, we systematically investigated the effects of synchrony (synchronous vs. asynchronous), appearance of the virtual effector (virtual hand vs. rectangle), and activity (passive vs. active) on the participants' sense of ownership and sense of agency (all conditions were tested within participants). The results showed the same pattern for ownership and agency. As indicated in Figure 2, perceived ownership and agency were stronger if the real and virtual hand moved in synchrony [F(1,43) = 48.35; p < 0.001; and F(1,43) = 54.64; p < 0.001; for ownership and agency, respectively], if the virtual effector was a hand rather than a rectangle [F(1,43) = 14.85; p < 0.001; and F(1,43) = 6.94; p < 0.02], and if the participant was active rather than passive [F(1,43) = 9.32; p < 0.005; and F(1,43) = 79.60; p < 0.001]. The synchrony effect replicates the standard virtual-hand illusion.

Figure 2: Ownership and agency ratings as a function of synchrony, appearance of the virtual effector, and activity of the participant.

Figure 3: Ownership and agency ratings as a function of synchrony and activity of the participant. Note that the synchrony effect is more pronounced for active participants.

Even more interestingly, both ownership and agency showed a significant interaction between activity and synchrony [F(1,43) = 13.68; p = 0.001; and F(1,43) = 23.36; p < 0.001; see Figure 3], but not between appearance and synchrony. This pattern suggests that activity plays a more dominant role for the ownership illusion than appearance does; it even indicates that illusory ownership is stronger in the virtual paradigm than in the traditional rubber-hand illusion paradigm. According to Hommel22, objective agency (i.e., the degree to which an external event can objectively be controlled) contributes to both subjective ownership and subjective agency, which explains why, in this experiment, active, synchronous control over the virtual effector increased both subjective ownership and subjective agency.

While appearance failed to interact with synchrony, suggesting that the ownership illusion does not rely on appearance, it did produce a main effect. This indicates that appearance does have an impact on perceived ownership. It makes sense to assume that people have general expectations about what external objects might or might not be a plausible part of their body, which supports ownership perception in general but does not moderate the effect of synchrony. We thus conclude that multiple sources of information contribute to the sense of subjective ownership: general top-down expectations and bottom-up synchrony information. The relationship between these two informational sources does not seem to be interactive but compensatory, so that general expectations may dominate in the absence of synchrony, and vice versa.

Virtual-face Illusion

In another study, we investigated how people represent their face. We were able to replicate the traditional enfacement illusion in a virtual environment, which we refer to as the virtual-face illusion12. We further investigated whether people adopt the mood expressed by a virtual face they identify with. There was one within-participant factor, synchrony (synchronous vs. asynchronous), and one between-participant factor, facial expression (happy vs. neutral). The IOS ratings obtained before the induction phase were subtracted from those obtained after the induction phase, and the same was done for the Affect Grid ratings; these change scores served as the IOS and Affect Grid results.

The analysis of the ownership scores (O1-4), the agency scores (A1-2), and the IOS scale18 changes all showed main effects of synchrony [F(1,58) = 38.24; p < 0.001; F(1,58) = 77.33; p < 0.001; and F(1,58) = 43.63; p < 0.001; respectively], showing that synchrony between one's own head movements and the movements of the virtual face increased perceived ownership and agency, and facilitated the integration of the other's face into one's own self (see Figure 4). Synchrony also improved mood, as indicated by a synchrony effect on the affect grid19 changes [F(1,58) = 7.99; p < 0.01].

Figure 4: Ownership and agency ratings, as well as IOS changes, as a function of synchrony. Note that positive IOS changes imply an increase of integration of the other into one's self.

Figure 5: Affect grid changes (positive values imply positive-going affect) and flexibility scores in the AUT, as a function of synchrony and the expression of the virtual face. Note that the interactions between synchrony and expression are driven by more positive-going mood and particularly good flexibility performance for the combination of synchrony and happy virtual face.

There were significant main effects of facial expression on IOS changes, affect grid changes, and flexibility in the AUT20,21,23, but more important was the fact that the affect grid changes and the flexibility scores interacted with synchrony [F(1,58) = 4.40; p < 0.05; and F(1,58) = 4.98; p < 0.05; respectively]. As shown in Figure 5, participants reported improved mood and showed more creative behavior after enfacing (i.e., synchronously moving with) a happy face as compared to the conditions where they moved asynchronously with a happy face or synchronously with a neutral face.

Factors: EFF, ACT, SYN, EFF*ACT, EFF*SYN, ACT*SYN, EFF*ACT*SYN
O1     F:   11.66   10.11   45.38   10.08
       p:   0.001   0.003   <0.001  0.003
       PES: 0.21    0.19    0.51    0.19
O2     F:   5.37    47.65
       p:   0.025   <0.001
       PES: 0.11    0.53
O3     F:   10.75   41.30   9.81
       p:   0.002   <0.001  0.003
       PES: 0.20    0.49    0.19
O4     F:   12.86   17.17   15.12   10.60
       p:   0.001   <0.001  <0.001  0.002
       PES: 0.23    0.29    0.26    0.20
O1-4   F:   14.85   9.32    48.35   13.68
       p:   <0.001  0.004   <0.001  0.001
       PES: 0.26    0.18    0.53    0.24
A1     F:   6.94    79.60   54.64   23.36
       p:   0.012   <0.001  <0.001  <0.001
       PES: 0.14    0.65    0.56    0.37

Table 1: F, P and Partial Eta squared (PES) values for the effects of the questionnaire item ratings, with df = 43. Factors are EFF: virtual effector (virtual hand vs. rectangle); ACT: activity (active exploration vs. passive stimulation); and SYN: synchrony (synchronous vs. asynchronous). Only results for significant effects are shown.

          H-P-SY  H-P-AS  H-A-SY  H-A-AS  R-P-SY  R-P-AS  R-A-SY  R-A-AS
O1-4  M:   4.37    3.44    5.09    3.50    3.79    3.14    4.68    3.05
      SE:  0.20    0.23    0.19    0.25    0.23    0.23    0.20    0.21
A1    M:   3.59    3.11    6.36    4.36    3.07    2.57    6.09    3.80
      SE:  0.30    0.32    0.15    0.33    0.28    0.27    0.24    0.33

Table 2: Means (M) and standard errors (SE) for the ownership and agency ratings in all eight conditions. H: hand; R: rectangle; A: active; P: passive; SY: synchronous; AS: asynchronous.

Factors: Facial expression, Synchrony, Facial expression * Synchrony
Ownership (O1-4)              F:   38.24
                              p:   <0.001
                              PES: 0.40
Agency (A1-2)                 F:   77.33
                              p:   <0.001
                              PES: 0.57
IOS Changes                   F:   4.03    43.63
                              p:   0.049   0.001
                              PES: 0.07    0.43
Affect Grid Valence Changes   F:   6.06    7.99    4.40
                              p:   0.017   0.007   0.041
                              PES: 0.10    0.13    0.07
AUT-Flexibility               F:   5.42    4.98
                              p:   0.024   0.03
                              PES: 0.09    0.08
AUT-Fluency                   F:   7.89
                              p:   0.007
                              PES: 0.12

Table 3: F, P and Partial Eta squared (PES) values for relevant dependent measures, with df = 58 for the questionnaire and IOS results, and df = 56 for the valence dimension of the Affect Grid and the AUT results. Only results for significant effects are shown.

                              Neutral-SY  Neutral-AS  Happy-SY  Happy-AS
Ownership (O1-4)        M:     2.88        2.03        3.38      2.36
                        SE:    0.27        0.16        0.23      0.22
Agency (A1-2)           M:     5.90        4.25        6.16      4.08
                        SE:    0.20        0.25        0.13      0.32
IOS Changes             M:     0.37       -0.80        1.00     -0.40
                        SE:    0.21        0.25        0.20      0.24
Affect Grid Valence     M:    -1.07       -1.33        0.60     -1.20
Changes                 SE:    0.42        0.33        0.39      0.31
AUT-Flexibility         M:     5.87        6.07        7.43      6.10
                        SE:    0.31        0.37        0.29      0.39
AUT-Fluency             M:     7.27        8.27        9.73      7.37
                        SE:    0.51        0.68        0.68      0.49

Table 4: Means (M) and standard errors (SE) for relevant dependent measures in the four conditions. Neutral: neutral facial expression; Happy: happy facial expression; SY: synchronous; AS: asynchronous.

Discussion

In this article, we described two detailed protocols, one for the virtual-hand and one for the virtual-face illusion paradigm, together with representative results from both. Our virtual-face study was the first to replicate the traditional stroking-induced face-ownership illusion in virtual reality.

The significant synchrony effects indicate that we were successful in inducing illusory ownership for the virtual hand and the virtual face, similar to more traditional illusion paradigms. Being able to reproduce these effects by means of virtual reality techniques has considerable advantages11,24. Virtual-reality techniques free the experimenter from the rather artificial and interruptive stroking procedure and open up new possibilities for experimental manipulation. For example, morphing virtual effectors allowed us to systematically manipulate the impact of the appearance of the virtual hand and of the similarity between the virtual hand and the participant's real hand, or of the facial expression of the virtual face. The impact of agency can also be systematically explored by varying the degree (e.g., immediacy) to which participants can control the movements of the artificial effector.

Another promising avenue for future virtual-reality research is first-person-perspective (1PP) virtual reality experiences. 1PP experiences can create an immense sense of immersion and feeling of presence, on a completely different scale than a third-person-perspective virtual reality experience25,26,27,28. In 1PP experiences one can truly feel that one is the avatar, that one is literally embodying the avatar. This opens up possibilities for all kinds of manipulations, such as detaching parts of a person's body28, elongating29 or rescaling30 body parts, or changing a person's skin color31,32.

As the present and many other findings demonstrate, controlling virtual events in a synchronous fashion strongly increases the perception of these events as belonging to one's own body. For example, our findings from the hand study suggest that immediate control is an important cue for distinguishing between self-produced and other-produced events (i.e., personal agency) and between self-related and other-related events (i.e., body ownership). The findings presented here and elsewhere suggest that bottom-up information plays a decisive role in the emergence of phenomenal self-representation, even for artificial effectors that are not as identity-related as one's own body parts4.

The most critical part of the described protocols is the induction process, which introduces correlations between visual, tactile, and motor (i.e., proprioceptive) information; these correlations allow the cognitive system to derive ownership and agency. As these correlations rely on the relative timing of the respective events, such as the delay between the participant's own movements and the movements of the artificial effector, it is crucial to keep processing delays (especially with regard to the translation of data from the dataglove to the motion of the virtual effector on the screen) to a minimum. With our experimental setup, the maximum time delay is around 40 ms, which is hardly noticeable and does not hamper the perception of causality and agency. Shimada, Fukuda, and Hiraki33 have suggested that the critical time window for the occurrence of the multisensory integration processes constituting the self-body representation is 300 ms, which means that longer delays are likely to reduce the perception of control over virtual events.

Another important aspect of the protocol is tight experimental control over the participant's hand or face movements, depending on the paradigm. During induction, active movements of the respective effector are essential, as the required intersensory correlations rely on active explorative movements on the side of the participant. It is thus important to encourage participants to move frequently and to engage in active exploration. In other phases of the experiment, however, movements can impair the measurements. For instance, in the virtual-hand illusion paradigm, moving the left hand (from which SCR is recorded) is likely to render measurements of the SCR level noisy and unreliable.

A limitation of the virtual-hand illusion technique is that, for practical reasons, participants commonly wear the dataglove and orientation tracker during the entire experiment (so as to minimize distraction). This may not be comfortable, which in turn may affect the mood or motivation of the participant. One possible solution to that problem would be the use of lighter equipment or custom-made wearables. Another limitation of our current virtual-face illusion technique is that the equipment only registers head movements but not changes in facial expression. Allowing participants to control the facial expressions of a virtual face is likely to contribute to ownership illusions, but this would require hardware and software that provide reliable detection and categorization of facial expressions in humans, which we do not yet have available in our lab. The use of, for example, real-time (facial) motion-capture utilities would be of great benefit in overcoming these limitations and would allow increasing the sense of agency and ownership of avatars to significantly higher levels.

As suggested by the findings from our study8, people consider various sources of information and update their body representation continuously. They seem to use bottom-up information and top-down information in a compensatory fashion, in the sense that one source of information plays a stronger role in the absence of the other-similar to what has been assumed for the sense of agency34. This provides interesting avenues for future research, as it for instance suggests that ownership can be perceived even for artificial effectors in awkward postures, provided a sufficient degree of surface similarity, or vice versa (i.e., if the artificial effector perfectly aligns with the real effector but differs from it in terms of surface features). The available findings also suggest that the boundaries between self and others are rather plastic, so that features of another person or agent can be perceived as a feature of oneself, provided some degree of synchrony between one's own behavior and that of the other35,36.

Disclosures

The authors have nothing to disclose.

Acknowledgements

This work was funded by the Chinese Scholarship Council (CSC) to K. M., and an infrastructure grant of the Netherlands Research Organization (NWO) to B. H.

Materials

Vizard (Software controlling the virtual reality environment) Worldviz Vizard allows importing hand models and integrating the hand, dataglove and orientation tracker modules through self-written command scripts. These scripts can be run to control the presentation of the virtual hand in the virtual environment, the appearance of the hand and the way it moves; they also control vibrator activities.
Cybertouch (Dataglove) CyberGlove Systems Cybertouch Participants wear this dataglove to control the movements of the virtual hand in the virtual environment. Measurement frequency = 100 Hz; Vibrator vibrational frequency = 0-125 Hz.
Intersense (Orientation tracker) Thales InertiaCube3 Participants wear the Intersense tracker to permit monitoring the orientation of their real hand (data that the used dataglove does not provide). Update rate = 180 Hz.
Biopac system (Physiological measurement device) Biopac MP100 The hardware to record skin conductance response.
Acquisition unit (Physiological measurement device) Biopac BN-PPGED The hardware to record skin conductance response.
Remote transmitter (Physiological measurement device) Biopac BN-PPGED-T Participants wear the remote transmitter on their left wrist; it sends signals to the Biopac acquisition unit.
Electrode (Physiological measurement device) Biopac EL507 Participants wear the electrodes on their fingers; they pick up skin conductance signals.
AcqKnowledge (Software controlling acquisition of physiological data) Biopac ACK100W, ACK100M The software to record skin conductance responses.
Box Custom-made Participants put their right hand into the box
Computer Any standard PC + Screen (could be replaced by VR glasses/device) Necessary to present the virtual reality environment, including the virtual hand.
Cape Custom-made Participants wear this cape on their right shoulder so they cannot see their right hand and arm.
Kinect (Head position tracker) Microsoft Kinect tracks the X-Y position of the participant's head. Recording frame rate = 30 Hz.
FAAST (Head position tracker software) MXR FAAST 1.0 Software controls Kinect and is used to track the position of the participant's head.
Intersense (Head orientation tracker) Thales InertiaCube3 Intersense tracks rotational orientation changes of the participant's head. Update rate = 180 Hz.
Facegen (Face-model generator software) Singular Inversions FaceGen Modeller  Facegen allows creating various virtual faces by varying various parameters, such as male/female-ness or skin color.
Cap Any cap, e.g., baseball cap The cap carries the Intersense orientation tracker.
Computer Any standard PC + Screen Necessary to present the virtual reality environment, including the virtual head.

References

  1. Tsakiris, M., Schütz-Bosbach, S., Gallagher, S. On agency and body-ownership: Phenomenological and neurocognitive reflections. Conscious. Cogn. 16 (3), 645-660 (2007).
  2. Botvinick, M., Cohen, J. Rubber hands ‘feel’ touch that eyes see. Nature. 391 (6669), 756-756 (1998).
  3. Armel, K. C., Ramachandran, V. S. Projecting sensations to external objects: Evidence from skin conductance response. Proc. R. Soc. B. 270 (1523), 1499-1506 (2003).
  4. Tsakiris, M. My body in the brain: A neurocognitive model of body-ownership. Neuropsychologia. 48 (3), 703-712 (2010).
  5. Kalckert, A., Ehrsson, H. H. Moving a rubber hand that feels like your own: Dissociation of ownership and agency. Front. Hum. Neurosci. 6 (40), (2012).
  6. Kalckert, A., Ehrsson, H. H. The spatial distance rule in the moving and classical rubber hand illusions. Conscious. Cogn. 30C, 118-132 (2014).
  7. Ma, K., Hommel, B. Body-ownership for actively operated non-corporeal objects. Conscious. Cogn. 36, 75-86 (2015).
  8. Ma, K., Hommel, B. The role of agency for perceived ownership in the virtual hand illusion. Conscious. Cogn. 36, 277-288 (2015).
  9. Slater, M., Perez-Marcos, D., Ehrsson, H. H., Sanchez-Vives, M. V. Towards a digital body: The virtual arm illusion. Front. Hum. Neurosci. 2 (6), (2008).
  10. Sanchez-Vives, M. V., Spanlang, B., Frisoli, A., Bergamasco, M., Slater, M. Virtual hand illusion induced by visuomotor correlations. PLOS ONE. 5 (4), e10381 (2010).
  11. Spanlang, B., et al. How to build an embodiment lab: Achieving body representation illusions in virtual reality. Front. Robot. AI. 1, 1-22 (2014).
  12. Tsakiris, M. Looking for myself: Current multisensory input alters self-face recognition. PLOS ONE. 3 (12), e4040 (2008).
  13. Sforza, A., Bufalari, I., Haggard, P., Aglioti, S. M. My face in yours: Visuo-tactile facial stimulation influences sense of identity. Soc. Neurosci. 5, 148-162 (2010).
  14. Bufalari, I., Porciello, G., Sperduti, M., Minio-Paluello, I. Self-identification with another person’s face: the time relevant role of multimodal brain areas in the enfacement illusion. J. Neurophysiol. 113 (7), 1959-1962 (2015).
  15. Ma, K., Sellaro, R., Lippelt, D. P., Hommel, B. Mood migration: How enfacing a smile makes you happier. Cognition. 151, 52-62 (2016).
  16. Ma, K., Hommel, B. The virtual-hand illusion: Effects of impact and threat on perceived ownership and affective resonance. Front. Psychol. 4 (604), (2013).
  17. Suma, E. A., et al. Adapting user interfaces for gestural interaction with the flexible action and articulated skeleton toolkit. Comput. Graph. 37 (3), 193-201 (2013).
  18. Aron, A., Aron, E. N., Smollan, D. Inclusion of Other in the Self Scale and the structure of interpersonal closeness. J. Pers. Soc. Psychol. 63 (4), 596 (1992).
  19. Russell, J. A., Weiss, A., Mendelsohn, G. A. Affect grid: a single-item scale of pleasure and arousal. J. Pers. Soc. Psychol. 57 (3), 493-502 (1989).
  20. Guilford, J. P. The nature of human intelligence. (1967).
  21. Ashby, F. G., Isen, A. M., Turken, A. U. A neuropsychological theory of positive affect and its influence on cognition. Psychol. Rev. 106 (3), 529-550 (1999).
  22. Hommel, B., Haggard, P., Eitam, B. "Action control and the sense of agency." The sense of agency. 307-326 (2015).
  23. Akbari Chermahini, S., Hommel, B. The (b)link between creativity and dopamine: Spontaneous eye blink rates predict and dissociate divergent and convergent thinking. Cognition. 115 (3), 458-465 (2010).
  24. Sanchez-Vives, M. V., Slater, M. From presence to consciousness through virtual reality. Nat. Rev. Neurosci. 6 (4), 332-339 (2005).
  25. Slater, M., Spanlang, B., Sanchez-Vives, M. V., Blanke, O. First person experience of body transfer in virtual reality. PLOS ONE. 5 (5), (2010).
  26. Maselli, A., Slater, M. The building blocks of the full body ownership illusion. Front. Hum. Neurosci. 7 (83), (2013).
  27. Pavone, E. F., et al. Embodying others in immersive virtual reality: Electro cortical signatures of monitoring the errors in the actions of an avatar seen from a first-person perspective. J. Neurosci. 26 (2), 268-276 (2016).
  28. Tieri, G., Tidoni, E., Pavone, E. F., Aglioti, S. M. Body visual discontinuity affects feeling of ownership and skin conductance responses. Sci. Rep. 5 (17139), (2015).
  29. Kilteni, K., Normand, J., Sanchez-Vives, M. V., Slater, M. Extending body space in immersive virtual reality: A long arm illusion. PLOS ONE. 7 (7), (2012).
  30. Banakou, D., Groten, R., Slater, M. Illusory ownership of a virtual child body causes overestimation of object sizes and implicit attitude changes. Proc. Natl. Acad. Sci. U.S.A. 110 (31), 12846-12851 (2013).
  31. Martini, M., Perez-Marcos, D., Sanchez-Vives, M. V. What color is my arm? Changes in skin color of an embodied virtual arm modulates pain threshold. Front. Hum. Neurosci. 7 (438), (2013).
  32. Peck, T. C., Seinfeld, S., Aglioti, S. M., Slater, M. Putting yourself in the skin of a black avatar reduces implicit racial bias. Consc. Cogn. 22 (3), 779-787 (2013).
  33. Shimada, S., Fukuda, K., Hiraki, K. Rubber hand illusion under delayed visual feedback. PLOS ONE. 4 (7), (2009).
  34. Synofzik, M., Vosgerau, G., Newen, A. Beyond the comparator model: A multifactorial two-step account of agency. Conscious. Cogn. 17 (1), 219-239 (2008).
  35. Hommel, B., Müsseler, J., Aschersleben, G., Prinz, W. The theory of event coding (TEC): A framework for perception and action planning. Behav. Brain. Sci. 24 (5), 849-878 (2001).
  36. Hommel, B. Action control according to TEC (theory of event coding). Psychol. Res. 73 (4), 512-526 (2009).


Cite This Article
Ma, K., Lippelt, D. P., Hommel, B. Creating Virtual-hand and Virtual-face Illusions to Investigate Self-representation. J. Vis. Exp. (121), e54784, doi:10.3791/54784 (2017).
