Overview

Testing Sensory and Multisensory Function in Children with Autism Spectrum Disorder

Published: April 22, 2015
doi:

Summary

We describe how to implement a battery of behavioral tasks to examine the processing and integration of sensory stimuli in children with ASD. The goal is to characterize individual differences in temporal processing of simple auditory and visual stimuli and relate these to higher order perceptual skills like speech perception.

Abstract

In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom in autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the “integration” of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to be strongly dependent upon the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to result in behavioral and perceptual benefits – gains believed to be reflective of the perceptual system's judgment of the likelihood that these two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes, and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as being used to examine changes in these processes across the lifespan.

Introduction

Traditional neuroscience research has often approached understanding sensory perception by focusing on the individual sensory modalities. However, the environment consists of a wide array of sensory inputs that are integrated into a unified perceptual view of the world in a seemingly effortless manner. The fact that we exist in such a rich multisensory environment requires that we better understand the way in which the brain combines information across the different sensory systems. The need for this understanding is further amplified by the fact that the presence of multiple pieces of sensory information often results in substantial improvements in behavior and perception1-3. For example, there is a large improvement (up to 15 dB in the signal-to-noise ratio) in the ability to understand speech in a noisy environment if the observer can also see the speaker’s lip movements4-7.

One of the major factors that affects how the different sensory inputs are combined and integrated is their relative temporal proximity. If two sensory cues occur close together in time, a temporal structure that suggests common origin, they are highly likely to be integrated as evidenced by changes in behavior and perception8-12. One of the most powerful experimental tools for examining the impact of multisensory temporal structure on behavioral and perceptual responses is the simultaneity judgment (SJ) task13-16. In such a task, multisensory (e.g., visual and auditory) stimuli are paired at various stimulus onset asynchronies (SOAs) ranging from objectively simultaneous (i.e., a temporal offset of 0 msec) to highly asynchronous (e.g., 400 msec). Participants are asked to judge the stimuli as simultaneous or not via a simple button press. Even when the visual and auditory stimuli are presented at SOAs of 100 msec or more, subjects report that the pair was simultaneous on a large proportion of trials. The window of time in which two inputs can occur and have a high probability of being perceived as occurring simultaneously is known as the temporal binding window (TBW)17-19.

The TBW is a highly ecological construct, in that it represents the statistical regularities of the world around us19. The “window” provides flexibility for the specification of events of common origin; one that allows for stimuli occurring at different distances with different propagation times (both physical and neural) to still be “bound” to one another. However, although the TBW is a probabilistic construct, changes that expand (or contract) the size of this window are likely to have cascading and potentially detrimental effects on perception20,21.

Autism spectrum disorder (ASD) is a neurodevelopmental disorder that has been classically diagnosed on the basis of deficits in social communication and the presence of restricted interests and repetitive behaviors22. In addition, and as recently codified in the DSM-5, children with ASD frequently exhibit alterations in their responses to sensory stimuli. Rather than being restricted to a single sense, these deficits often encompass multiple senses including hearing, touch, balance, taste and vision. Along with such a “multisensory” presentation, individuals with ASD often exhibit deficits in the temporal realm. Collectively, these observations suggest that multisensory temporal function may be preferentially altered in autism17,23-25. Although concordant with the view of altered sensory function in ASD, changes in multisensory temporal function may also be an important contributor to the deficits in social communication in ASD, given the importance of rapid and accurate binding of multisensory stimuli for social and communication functions. Take as an example the speech exchange described above in which important information is contained in both the auditory and visual modalities. Indeed, these tasks have been used to demonstrate significant differences in the width of the multisensory TBW in high functioning children with autism26-28.

Due to its importance for normal perceptual function, its potential implications for higher order processes such as social communication (and other cognitive abilities), and its clinical relevance, a battery of tasks designed to assess multisensory temporal function in children with ASD is described.

Protocol

Ethics statement: All subjects must provide informed consent prior to the experiment. The research described here has been approved by the Vanderbilt University Medical Center’s Institutional Review Board.

1. Experiment Set Up

  1. Ask the participants to complete the tasks in a dimly lit, sound controlled room.
    NOTE: Consider implementing a visual schedule29,30 as part of the study design. Although each task in this battery is relatively short, performing several tasks in a row can cause fatigue in some children, both with typical development (TD) and with ASD. A visual schedule should include all the planned activities (both tasks and hearing/vision screening), as well as short breaks between tasks. This structure will help contribute to an overall positive research experience for the participant, and has even been shown to elicit more accurate responses in some tasks31.
  2. Affix a chin rest to the table where the participant will sit while completing the task, with the computer monitor placed 60 cm away from the participant. This keeps viewing distance, and therefore the effective size and intensity of the stimuli, constant across participants. Use noise-cancelling headphones or speakers for auditory stimulus delivery.
  3. Due to differences in individual experimental platforms (sound card, graphics card, operating system, etc.), verify the stimulus duration and stimulus onset asynchronies (SOAs) with an oscilloscope, photovoltaic cell, and microphone on each computer for the experiment.
    NOTE: Depending on the individual platform (for example, a slow sound card), make adjustments to the experiment code so that the timing of stimulus presentation is accurate.

2. Stimuli

  1. Generate 2 .wav or .mp3 files with a duration of 16 msec (including a 2 – 3 msec up ramp and down ramp) at 500 Hz and 1,000 Hz. Do this by specifying a sine wave of the desired frequency with a gradual ramp up to full amplitude, followed by a down ramp at the end of the tone. Save the sine wave as an audio file. Test the volume of each tone with a sound pressure level meter to verify that it is played at 60 dB. If speakers are used to present auditory stimuli, test the sound level 60 cm away from the screen (where the participant will sit). If headphones will be used, measure the volume directly next to each headphone.
    NOTE: It is easier to keep the computer at a standard volume and adjust the volume of the tone itself, either in the code used to generate the stimulus or with an audio editing program.
  2. Create visual stimuli by either specifying the size and location of the flash in the experiment code, or by generating a JPEG or bitmap image with a black background and a white ring centered around a fixation crosshairs, and displaying at the appropriate time. Set the duration of the visual flash to 16 ms in the experiment code.
  3. Record speech stimuli from a native speaker in a quiet room against a plain white background, framed from the shoulders up with the speaker in the center of the frame. Record the video stimuli with the highest resolution video camera available. Alternatively, publicly available stimulus videos may be utilized if desired.
    NOTE: Video and audio of the speaker saying the syllables “ba” and “ga” are required for this experiment.
    1. Using any video editing program, export the auditory component of each track and save it as a separate .wav file. Do this by opening the Export Settings window and selecting “wav audio file” from the “Format” drop-down menu. Check “Export Audio” and then click “Export” at the bottom of the Export Settings window.
    2. Next, export the visual component (i.e., silent video) of each track and save it as a separate .avi file. Do this by opening the Export Settings window and selecting “Uncompressed AVI” from the “Format” drop-down menu. Check “Export Video” and then click “Export” at the bottom of the Export Settings window.
    3. Finally, remove the auditory component of the “ga” track and replace it with the auditory component of the “ba” track to make the McGurk stimulus. To do this, load the silent video as the video source by selecting “Source”, in this case “ga_VOnly.avi”, and similarly load the audio file “ba_Aonly.wav”. In the Program sequence menu, ensure that the video (V1) “Video 1” source is “ga_VOnly.avi” and the audio (A1) “Audio 1” source is “ba_Aonly.wav”. Verify that the onset of the visual stimulus “ga” is temporally aligned with the auditory stimulus “ba”.
      NOTE: It is important that the auditory stimuli are the exact same recording in both the audiovisual and auditory only stimulus (not just the same syllable) so that the only difference between an audiovisual “ba” and the McGurk stimulus is the video component. This will ensure that one can make a proper comparison to examine the influence of the visual stimulus on the perceived auditory syllable.
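The tone generation described in step 2.1 can be sketched in Python using only the standard library. The sample rate, bit depth, and linear ramp shape below are assumptions; the protocol specifies only the 16 msec duration, the 2 – 3 msec ramps, and the two frequencies.

```python
import math
import struct
import wave

def make_tone(path, freq_hz, dur_ms=16, ramp_ms=3, rate=44100, amp=0.8):
    """Write a sine tone with linear onset/offset ramps to a 16-bit mono .wav file."""
    n = int(rate * dur_ms / 1000)
    n_ramp = int(rate * ramp_ms / 1000)
    samples = []
    for i in range(n):
        s = amp * math.sin(2 * math.pi * freq_hz * i / rate)
        # Linear up ramp at onset and down ramp at offset to avoid audible clicks.
        if i < n_ramp:
            s *= i / n_ramp
        elif i >= n - n_ramp:
            s *= (n - 1 - i) / n_ramp
        samples.append(int(s * 32767))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)  # 16-bit samples
        w.setframerate(rate)
        w.writeframes(struct.pack("<%dh" % n, *samples))
    return n  # number of frames written

make_tone("tone_500.wav", 500)
make_tone("tone_1000.wav", 1000)
```

The final playback level should still be verified with a sound pressure level meter as described above, since the digital amplitude does not fix the acoustic output level.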

3. Task Battery

NOTE: This task requires that all participants are able to understand and comply with verbal instructions from the experimenter.

  1. Ensure that all participants have normal vision by conducting a simple screening prior to testing. Use a Snellen eye chart at 20 feet and ask the participant to read each line with both eyes open (participants will be viewing stimuli with both eyes open). Record the lowest line that the participant reads accurately. Participants should have 20/40 vision or better.
  2. Ensure that all participants have normal hearing by testing hearing thresholds at 500, 1,000, 2,000, and 4,000 Hz in each ear. Auditory testing should be completed in a sound controlled room with an audiometer.
    1. To find a participant’s threshold, instruct the participant to raise their hand each time they detect a tone. Play a pulsed 500 Hz tone routed to the right ear starting at 35 dB and decrease the volume in 5 dB steps. Once a participant no longer detects a tone, increase the volume in 5 dB steps to verify the lowest perceptible volume. Repeat this procedure with each frequency, and then repeat all tone frequencies in the left ear. Participants should have thresholds of 20 dB or lower.
  3. Ensure that participants are able to understand and comply with verbal instructions by measuring both IQ and receptive language skills with standardized neuropsychological measures prior to testing. Participants should have a measured IQ of 70 or more. If desired, additional neuropsychological testing can be completed at this time.
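The descending/ascending threshold search in step 3.2.1 can be sketched as a simple staircase. Here `detects` is a hypothetical stand-in for the participant's hand-raise response at a given level; the floor and ceiling guards are assumptions added so the sketch always terminates.

```python
def find_threshold(detects, start_db=35, step_db=5, floor_db=-10):
    """Descend in 5 dB steps until the tone is missed, then ascend to confirm."""
    level = start_db
    # Descend while the participant still raises a hand for the tone.
    while level > floor_db and detects(level):
        level -= step_db
    # Ascend to verify the lowest perceptible level (the threshold estimate).
    while level < start_db and not detects(level):
        level += step_db
    return level

# Example: a simulated listener with a true threshold of 15 dB HL.
print(find_threshold(lambda db: db >= 15))  # -> 15
```

Per the protocol, this procedure is repeated at 500, 1,000, 2,000, and 4,000 Hz in each ear, and participants should show thresholds of 20 dB or lower throughout.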

4. Simultaneity Judgment (SJ)

NOTE: The SJ task is a two alternative forced-choice task (2-AFC) and consists of a visual ring and 1,000 Hz auditory tone presented at various SOAs (negative = auditory preceding visual, positive = visual preceding auditory) presented in random order.

  1. Be sure to include fairly large SOAs (at least -400 to +400 msec) to get an accurate measurement of the full width of the TBW (typical stimulus set: -400, -300, -200, -150, -100, -50, 0, 50, 100, 150, 200, 300, 400 msec SOA). Use the same set of SOAs for each participant, which allows for easier comparison of task performance across participants. Present a minimum of 20 trials per SOA for a precise estimate. The task takes approximately 15 – 20 min to complete. Provide a short break every 100 trials to reduce participant fatigue.
  2. Instruct the participant to observe a flash and a beep and explain that their task is to decide if the flash and beep occurred at the same time or at different times. Instruct the participant to press “1” on the number pad, if the stimuli occurred at the same time, or “2”, if the stimuli occurred at a different time.
    NOTE: If a response box is available, this may also be used for collecting responses. Include these same instructions on the response screen after each trial.
  3. As an alternative, substitute the flash and beep with visual and auditory speech tokens (a mouthed “ba” and a voiced “ba”) and present them at the same SOAs with the same task instruction (“Same time or different time?”). In this manner, compare the TBW for stimuli varying in complexity and social content within individual subjects27.
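The trial-list construction implied by step 4.1 can be sketched as follows; the SOA set and the 20 repetitions per SOA are taken directly from the text, while the break-every-100-trials flag is one way to implement the recommended rest breaks.

```python
import random

# SOA set from step 4.1 (msec; negative = auditory first, positive = visual first).
SOAS = [-400, -300, -200, -150, -100, -50, 0, 50, 100, 150, 200, 300, 400]

def build_sj_trials(soas=SOAS, reps=20, break_every=100, seed=None):
    """Return a shuffled list of (trial_index, soa, break_after) tuples."""
    rng = random.Random(seed)
    trials = [soa for soa in soas for _ in range(reps)]
    rng.shuffle(trials)
    # Flag the trials after which a short rest break should be offered.
    return [(i, soa, (i + 1) % break_every == 0) for i, soa in enumerate(trials)]

trials = build_sj_trials(seed=0)
print(len(trials))  # -> 260 trials (13 SOAs x 20 repetitions)
```

Fixing the seed is optional; using the same SOA set (but a fresh random order) for every participant keeps performance directly comparable across subjects, as noted above.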

5. Temporal Order Judgment (TOJ)

NOTE: The auditory TOJ task is a 2-AFC task used to examine the temporal acuity of auditory processing. The visual TOJ task is a 2-AFC task used to examine the temporal acuity of visual processing. The multisensory TOJ task is a 2-AFC task used to examine temporal acuity across audition and vision. Each task takes approximately 10 – 15 min to complete.

  1. In the auditory TOJ task, instruct the participant to listen to two beeps presented (500 Hz and 1,000 Hz) at various delays and ask the participant to press “1” if the higher tone is played first or press “2” if the lower tone is played first. Present 20 trials for each SOA in random order.
    NOTE: Compared to the SJ task, there is a much smaller dynamic range of SOAs over which perception of unisensory auditory and visual temporal order changes, therefore use a stimulus set where smaller SOAs are more heavily represented (typical stimulus set: -250, -200, -150, -75, -50, -35, -20, -10, 10, 20, 35, 50, 75, 150, 200, 250 msec SOA, where negative = higher tone preceding lower tone, positive = lower tone preceding higher tone) for the unisensory TOJ tasks.
  2. In the visual TOJ task, instruct the participant to observe two circles (above and below a central fixation crosshair) at various delays and ask the participant to press “1” if the top circle appears first or press “2” if the bottom circle appears first. Present 20 trials for each SOA in random order.
    NOTE: In this task, negative SOAs indicate that the top circle was presented first and positive SOAs indicate that the bottom circle was presented first.
  3. In the audiovisual TOJ task, instruct the participant to observe a small central flash and listen to a single tone (1,000 Hz) at various delays and ask the participant to press “1”, if the beep was presented first or press “2” if the flash was presented first. Present 20 trials for each SOA in random order.
    NOTE: Accuracy in the audiovisual TOJ is typically much worse compared to the unisensory auditory TOJ and visual TOJ tasks. This requires a wider range of SOAs compared to the unisensory TOJ tasks (typical stimulus set: -300, -250, -200, -150, -100, -80, -50, -20, 0, 20, 50, 80, 100, 150, 200, 250, 300). In this task, negative SOAs indicate that the auditory stimulus was presented first and positive SOAs indicate that the visual stimulus was presented first.
    NOTE: As with the SJ task, the TOJ task can be adapted to examine temporal processing across multiple kinds of stimuli. Here the TOJ task was completed with simple stimuli (auditory beeps and visual flashes), but this can be expanded to look at other stimulus pairs like speech and biological motion24.

6. McGurk Task

NOTE: The McGurk illusion consists of a video of the visual syllable “ga” paired with an auditory recording of the syllable “ba”. Many subjects will actually fuse the visual and auditory syllables and perceive this pair as the syllable “da” or “tha”32.

  1. Instruct the participant to observe different syllables and ask the participant to report the syllable that they perceived. In one block, present 20 trials each of the unisensory syllables (auditory only: A-“ba”, A-“ga”; visual only: V-“ba”, V-“ga”) in random order. In a second block, present 20 trials each of the audiovisual syllables (AV-“ba”, AV-“ga”, and the A-“ba”/V-“ga” McGurk stimulus) in random order. Ask the participant to press the letter on the keyboard corresponding to the perceived syllable (“press b for ba, press g for ga, press d for da, press t for tha”). This task takes approximately 5 – 10 min in total to complete.
  2. Alternatively, for a more conservative estimate, use an open response format33 in which the participant reports the perceived syllable out loud and the experimenter records the response.

Representative Results

This task battery has proven very successful in measuring individual differences in temporal processing in individuals with and without ASD17,18,23,27. For the SJ task, plot the resulting data from each individual subject by first calculating the proportion of trials at each SOA on which the subject responded “synchronous” and then fitting the resulting response curve with a Gaussian curve. As illustrated in Figure 1A, there is a window of time in which visual-auditory stimulus pairs can be presented with a delay and will be perceived as synchronous on a high proportion of trials. The width of the “left” (covering auditory-first asynchronies) and “right” (covering visual-first asynchronies) side of the TBW is measured by calculating the width of the window from 0 ms to the SOA on each side that corresponds to 50% synchronous responses (dashed lines, Figure 1A). A robust finding across both TD participants and clinical populations is that the right TBW (visual first) is typically wider than the left (auditory first) TBW. Participants with ASD also show a wider TBW than their TD counterparts (Figure 1B).
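The SJ curve fit described above can be sketched with a coarse grid search over the Gaussian parameters; in practice a proper optimizer (e.g., one from SciPy or MATLAB) would be used instead, and the synthetic data below are illustrative, not real subject data.

```python
import math

def fit_gaussian(soas, p_sync):
    """Least-squares fit of p(soa) = a * exp(-(soa - mu)^2 / (2 sigma^2)).

    The amplitude a is pinned to the observed peak; mu and sigma are found
    by brute-force grid search (a stand-in for a real curve-fitting routine).
    """
    a = max(p_sync)
    best = None
    for mu in range(-100, 101, 5):          # msec
        for sigma in range(30, 401, 5):     # msec
            sse = sum((p - a * math.exp(-(s - mu) ** 2 / (2 * sigma ** 2))) ** 2
                      for s, p in zip(soas, p_sync))
            if best is None or sse < best[0]:
                best = (sse, a, mu, sigma)
    return best[1:]  # (a, mu, sigma)

# Synthetic subject: peak 0.95, centered at 0 msec, sigma = 150 msec.
soas = [-400, -300, -200, -150, -100, -50, 0, 50, 100, 150, 200, 300, 400]
p = [0.95 * math.exp(-s ** 2 / (2 * 150 ** 2)) for s in soas]
a, mu, sigma = fit_gaussian(soas, p)
print(mu, sigma)  # -> 0 150
```

The fitted curve should then be inspected against the raw proportions for each subject, as recommended in the Discussion, before any window widths are extracted from it.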

For the TOJ tasks, the data from each individual subject are first plotted by calculating the proportion of responses at each SOA on which the “positive” stimulus was perceived as being presented first (higher tone, bottom circle, visual flash), and the resulting response curves from each task are fit with a cumulative Gaussian curve. Example TOJ curves from a single TD subject are shown in Figure 2. While performance on the unisensory TOJ tasks is highly accurate for all but the smallest SOAs (2A and 2B), determining temporal order across modalities is much more difficult, as indexed by a much shallower curve (2C) and decreased accuracy (2D) for the multisensory TOJ task. The point of subjective simultaneity (PSS) for each subject is measured by calculating the SOA at which subjects perform at chance (see dashed line, Figure 2A-C). Perform a t-test on these measures to determine whether there are differences between groups. To compare performance across tasks or across subjects, calculate accuracy at each SOA and plot as a function of the delay between the stimulus pair (collapsing across the positive and negative SOA at each delay; see Figure 2D). Some studies examining sensory processing in ASD have found differences in TOJ tasks between ASD and TD groups23,34, while others have not observed significant differences between groups27. The reason for these discrepancies is unclear, although high heterogeneity across individuals with ASD35 and slight differences in task structure across these studies may play a role.
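The cumulative Gaussian fit and PSS extraction for the TOJ data can be sketched the same way; again, the grid search stands in for a proper optimizer and the data are synthetic.

```python
import math

def cum_gauss(soa, pss, sigma):
    """Cumulative Gaussian: probability of a 'positive-first' response at this SOA."""
    return 0.5 * (1 + math.erf((soa - pss) / (sigma * math.sqrt(2))))

def fit_toj(soas, p_pos):
    """Grid-search fit; the PSS is where the fitted curve crosses 50% (chance)."""
    best = None
    for pss in range(-50, 51, 5):           # msec
        for sigma in range(10, 201, 5):     # msec
            sse = sum((p - cum_gauss(s, pss, sigma)) ** 2
                      for s, p in zip(soas, p_pos))
            if best is None or sse < best[0]:
                best = (sse, pss, sigma)
    return best[1:]  # (pss, sigma)

# Synthetic auditory TOJ subject with PSS = 10 msec and sigma = 40 msec,
# using the unisensory SOA set from step 5.1.
soas = [-250, -200, -150, -75, -50, -35, -20, -10,
        10, 20, 35, 50, 75, 150, 200, 250]
p = [cum_gauss(s, 10, 40) for s in soas]
pss, sigma = fit_toj(soas, p)
print(pss, sigma)  # -> 10 40
```

The fitted sigma indexes the slope of the curve, so the shallower multisensory TOJ curve in Figure 2C would show up here as a larger sigma than in the unisensory fits.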

McGurk perception is analyzed by calculating the proportion of trials on which the participant perceived the fused percept “da” relative to the total number of trials presented. Example results from the McGurk task are shown for an ASD and TD subject group in Figure 3A. Even within the same individual, responses to the stimulus can often vary from trial-to-trial, therefore it is useful to consider the distribution of these responses. There is currently some debate in the literature about differences in multisensory integration as indexed by McGurk perception. Some groups have found that TD subjects have increased McGurk perception compared with ASD subjects27,36, while others have found that ASD subjects had higher McGurk perception37. Some of these discrepancies may be explained by differences in the McGurk stimulus used in each study. Some McGurk stimuli are “stronger” than others (i.e., they are more likely to elicit the illusory McGurk percept on a high proportion of trials for a subject), which can be quantified with a recent model of variability in McGurk perception38. As an example of the utility of this battery, individual differences in temporal processing (such as the width of the TBW) can be correlated with performance differences on a perceptual task like the McGurk illusion (Figure 3B). Several studies have observed a link between temporal acuity in the SJ task and perceptual differences in speech perception in the McGurk task and other measures of multisensory integration18,27.
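The McGurk scoring step is simple counting, sketched below; whether “tha” responses are counted as fused percepts alongside “da” is a judgment call (the task description above accepts both as fusion reports), and the response list is hypothetical.

```python
from collections import Counter

def fusion_rate(responses, fused=("da", "tha")):
    """Proportion of McGurk trials on which a fused percept was reported."""
    counts = Counter(responses)
    return sum(counts[syll] for syll in fused) / len(responses)

# 20 hypothetical McGurk trials from one subject.
responses = ["da"] * 12 + ["ba"] * 6 + ["tha"] * 2
print(fusion_rate(responses))  # -> 0.7
```

Keeping the full `Counter` of responses, rather than only the fusion rate, preserves the response distribution that Figure 3A plots (e.g., the elevated proportion of auditory “ba” reports in the ASD group).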

Figure 1
Figure 1. Simultaneity Judgment (SJ) results. Representative data from the Simultaneity Judgment (SJ) task for a single ASD subject (age = 8) and a single TD subject (age = 9). (A) Raw data from the SJ task for a single ASD subject is shown in black. The fitted Gaussian curve is shown in blue. The blue dashed lines show the width of the left and right TBW (227 ms and 333 ms, respectively) for this individual subject. (B) Fitted TBW curves for the same ASD subject in blue and a single TD subject in red. The TD subject has a smaller TBW (left TBW = 166 msec, right TBW = 196 msec) than the ASD subject. Please click here to view a larger version of this figure.

Figure 2
Figure 2. Temporal Order Judgment (TOJ) results. Representative data from the Temporal Order Judgment (TOJ) tasks from a single TD subject (age = 15). (A) Raw data and fitted curve for auditory TOJ task. Data are plotted as a function of lower pitch first responses across the different SOAs (negative SOAs indicate higher pitch came first, positive SOAs indicate lower pitch came first). (B) Raw data and fitted curve for visual TOJ task. Data are plotted as a function of bottom circle first responses across the SOAs (negative SOAs indicate top circle came first, positive SOAs indicate bottom circle came first). (C) Raw data and fitted curve for multisensory TOJ. Data are plotted as a function of visual flash first responses across the SOAs (negative indicates auditory beep came first, positive SOAs indicate visual flash came first). (D) Same data from A-C plotted as the average accuracy (correct identification of temporal order) at each delay (collapsed across the negative and positive SOA). Please click here to view a larger version of this figure.

Figure 3
Figure 3. McGurk Task results and comparison of McGurk performance with Simultaneity Judgment performance. Representative data from the McGurk task with ASD and TD subject groups, adapted with permission from27. (A) Responses to the McGurk stimulus for TD (shown in black) and ASD (shown in red) subjects. Because of the variability of responses for the same stimulus both within individual subjects and across subjects in a group, responses are shown as the percent of trials that were perceived as each phoneme. ASD subjects heard the auditory syllable “ba” on a larger percentage of trials than TD subjects, while the TD subjects heard the fused audiovisual syllable “da” on a larger percentage of trials than ASD subjects. (B) Correlation between the width of the temporal binding window (TBW) from the SJ task and the proportion of trials in which the fused audiovisual syllable “da” was perceived from the McGurk stimulus in the same group of ASD subjects. There was a significant negative correlation, with lower McGurk perception associated with a larger TBW (r = 0.46, p < 0.05). Please click here to view a larger version of this figure.

Discussion

The manuscript describes elements of a psychophysical task battery that are used to assess temporal processing and acuity in sensory and multisensory systems research. The battery has wide applicability for a number of populations and has been used by our laboratory in order to characterize audiovisual temporal performance in typical adults18, children10,39, and in children and adults with autism17,23. In addition, it has been used to examine how various facets of the battery relate to one another in correlational analyses27, and is currently being used to relate sensory and multisensory performance measures to cognitive domains including language and communication, attention and executive function. It is important to note that the main limitation of this task battery with regards to testing individuals with ASD is that the format of the tasks requires that participants have the receptive language skills to understand verbal instructions and indicate this understanding. As such, the task battery is currently only suitable for testing high-functioning individuals with ASD.

The emphasis of the battery on temporal factors is grounded in the importance of these factors for the construction of veridical sensory and perceptual representations. In the multisensory realm, this is best captured in the construct of a multisensory “temporal binding window (TBW),” the epoch of time in which auditory and visual cues can strongly influence one another. As previously suggested, this window is a highly ecological construct, in that sensory events and their associated energies happen at different distances. Thus, accounting for the differences in propagation times of the auditory and visual signals, the brain assesses audiovisual temporal structure in relation to this window, and thus makes a probabilistic judgment as to whether the stimuli belong together or not. These data strongly argue for the TBW as a measure of temporal acuity and strength of multisensory integration, and indeed it has been shown that the width of this window appears to be correlated with the magnitude of the binding process, with those with smaller windows having larger indices of integration18,27.

In addition to being a probabilistic construct across individuals, the TBW is also very much dependent on stimulus and task. Indeed, as highlighted in the battery presented here, multisensory temporal function can be assessed using stimuli ranging from the very simple and non-ecological (e.g., flashes and beeps) to the most ethologically relevant of audiovisual signals (i.e., speech). In addition, the TBW can be derived from measures including simultaneity judgments, temporal order judgments, perception of illusory stimuli, etc. Hence, the collective use of tasks that differ in both their stimulus and task contingencies provides the most comprehensive window into audiovisual temporal function.

An individual’s TBW is measured by extracting parameters from a curve fit to the participant’s raw performance from the SJ task. Therefore, care should be taken to examine individual subjects’ curve fits to ensure that the fitted curve accurately describes the raw data. Although an array of definitions for measuring the width of the TBW exists in the literature, it is suggested that the following criteria be used to easily compare across subjects while still capturing individual differences in performance. First, the “left” and “right” TBW should be measured from 0 msec (objectively auditory leading asynchrony vs. visual leading asynchrony) as opposed to the individual PSS (the mean of the fitted curve). Secondly, the width should be measured at 50% report of synchronous trials (not 50% of the maximum response for that subject), capturing the range of asynchronies in which a subject reported “same time” for a majority of trials. Because some subjects never report “same time” for more than 75% of the trials on any SOA, this will allow the greatest number of subjects to be included in the analysis.
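The two criteria above (measure from 0 msec, not the PSS; use an absolute 50% criterion, not 50% of the subject's maximum) have a closed-form solution once the Gaussian parameters are fitted: solve a·exp(-(soa - mu)²/(2σ²)) = 0.5 for the two crossings. A sketch, with illustrative parameter values:

```python
import math

def tbw_widths(a, mu, sigma):
    """Left/right TBW widths measured from 0 msec at the absolute 50% criterion.

    Returns None when the fitted peak never reaches 50% synchronous reports,
    in which case the subject cannot be included under this criterion.
    """
    if a <= 0.5:
        return None
    half = sigma * math.sqrt(2 * math.log(a / 0.5))
    left_cross, right_cross = mu - half, mu + half  # SOAs at the 50% crossings
    return abs(left_cross), right_cross             # (left TBW, right TBW)

# Example fitted parameters: peak 0.95, mean 25 msec, sigma 150 msec.
left, right = tbw_widths(0.95, 25, 150)
print(round(left), round(right))  # -> 145 195
```

Note that the right window comes out wider than the left whenever the fitted mean is positive (visual-leading), consistent with the asymmetry reported in the Representative Results.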

Along with its utility in characterizing multisensory temporal function in “neurotypical” populations across the lifespan, elements of the described task battery have been used to assess sensory and multisensory processes in individuals with ASD26-28,37. Although sensory disturbances have been classically associated with autism, it is only recently that these disturbances have entered the diagnostic vernacular, and that a stronger appreciation of how altered multisensory function may contribute to the autism phenotype has been gained. Indeed, the core impacted domains in autism (i.e., social communication) are representations that are built on the basis of multisensory processes, strongly suggesting that alterations in these processes could have detrimental effects on social communication. Using elements of the temporal battery described here, it has been established that multisensory temporal acuity is poorer in autism, and that this poorer performance is related to speech comprehension measures28. Ongoing work is seeking to relate various aspects of audiovisual temporal performance to a host of cognitive measures.

Disclosures

The authors have nothing to disclose.

Acknowledgements

This research was supported by NIH R21CA183492, the Simons Foundation, the Wallace Research Foundation, and by CTSA award UL1TR000445 from the National Center for Advancing Translational Sciences.

Materials

Oscilloscope
Photovoltaic cell
Microphone
Noise-cancelling headphones
Chin rest
Audiometer

References

  1. Calvert, G. A., Spence, C., Stein, B. E. Handbook of Multisensory Processes. (2004).
  2. Stein, B. E., Meredith, M. A. The Merging of the Senses. 224 (1993).
  3. King, A. J., Calvert, G. A. Multisensory integration: perceptual grouping by eye and ear. Curr Biol. 11 (8), R322-R325 (2001).
  4. Stevenson, R. A., James, T. W. Audiovisual integration in human superior temporal sulcus: Inverse effectiveness and the neural processing of speech and object recognition. NeuroImage. 44 (3), 1210-1223 (2009).
  5. MacLeod, A., Summerfield, A. Q. A procedure for measuring auditory and audio-visual speech-reception thresholds for sentences in noise: rationale, evaluation, and recommendations for use. Br J Audiol. 24 (1), 29-43 (1990).
  6. Sumby, W. H., Pollack, I. Visual Contribution to Speech Intelligibility in Noise. J. Acoust. Soc. Am. 26, 212-215 (1954).
  7. Bishop, C. W., Miller, L. M. A multisensory cortical network for understanding speech in noise. J Cogn Neurosci. 21 (9), 1790-1805 (2009).
  8. Stevenson, R. A., Wallace, M. T. Multisensory temporal integration: task and stimulus dependencies. Exp Brain Res. 227 (2), 249-261 (2013).
  9. Colonius, H., Diederich, A., Steenken, R. Time-window-of-integration (TWIN) model for saccadic reaction time: effect of auditory masker level on visual-auditory spatial interaction in elevation. Brain Topogr. 21 (3-4), 177-184 (2009).
  10. Hillock, A. R., Powers, A. R., Wallace, M. T. Binding of sights and sounds: age-related changes in multisensory temporal processing. Neuropsychologia. 49, 461-467 (2011).
  11. Wallace, M. T. Unifying multisensory signals across time and space. Exp Brain Res. 158 (2), 252-258 (2004).
  12. Alais, D., Newell, F. N., Mamassian, P. Multisensory processing in review: from physiology to behaviour. Seeing Perceiving. 23 (1), 3-38 (2010).
  13. Conrey, B., Pisoni, D. B. Auditory-visual speech perception and synchrony detection for speech and nonspeech signals. J Acoust Soc Am. 119 (6), 4065-4073 (2006).
  14. Stevenson, R. A., Fister, J. K., Barnett, Z. P., Nidiffer, A. R., Wallace, M. T. Interactions between the spatial and temporal stimulus factors that influence multisensory integration in human performance. Exp Brain Res. 219 (1), 121-137 (2012).
  15. van Wassenhove, V., Grant, K. W., Poeppel, D. Temporal window of integration in auditory-visual speech perception. Neuropsychologia. 45 (3), 598-607 (2007).
  16. van Eijk, R. L. J., Kohlrausch, A., Juola, J. F., van de Par, S. Audiovisual synchrony and temporal order judgments: Effects of experimental method and stimulus type. Percept Psychophys. 70 (6), 955-968 (2008).
  17. Foss-Feig, J. H. An extended multisensory temporal binding window in autism spectrum disorders. Exp Brain Res. 203 (2), 381-389 (2010).
  18. Stevenson, R. A., Zemtsov, R. K., Wallace, M. T. Individual differences in the multisensory temporal binding window predict susceptibility to audiovisual illusions. J Exp Psychol Hum Percept Perform. 38 (6), 1517-1529 (2012).
  19. Wallace, M. T., Stevenson, R. A. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia. 64C, 105-123 (2014).
  20. Hairston, W. D., Burdette, J. H., Flowers, D. L., Wood, F. B., Wallace, M. T. Altered temporal profile of visual-auditory multisensory interactions in dyslexia. Exp Brain Res. 166 (3-4), 474-480 (2005).
  21. Carroll, C. A., Boggs, J., O’Donnell, B. F., Shekhar, A., Hetrick, W. P. Temporal processing dysfunction in schizophrenia. Brain Cogn. 67 (2), 150-161 (2008).
  22. Kanner, L. Autistic Disturbances of Affective Contact. Nervous Child. 2, 217-250 (1943).
  23. Kwakye, L. D., Foss-Feig, J. H., Cascio, C. J., Stone, W. L., Wallace, M. T. Altered auditory and multisensory temporal processing in autism spectrum disorders. Front Integr Neurosci. 4, 129 (2011).
  24. Boer-Schellekens, L., Eussen, M., Vroomen, J. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder. Front Integr Neurosci. 7, 8 (2013).
  25. Bebko, J. M., Weiss, J. A., Demark, J. L., Gomez, P. Discrimination of temporal synchrony in intermodal events by children with autism and children with developmental disabilities without autism. J Child Psychol Psychiatry. 47 (1), 88-98 (2006).
  26. Stevenson, R. A. Brief Report: Arrested Development of Audiovisual Speech Perception in Autism Spectrum Disorders. J Autism Dev Disord. 44 (6), 1470-1477 (2013).
  27. Stevenson, R. A. Multisensory temporal integration in autism spectrum disorders. J Neurosci. 34 (3), 691-697 (2014).
  28. Stevenson, R. A. Evidence for Diminished Multisensory Integration in Autism Spectrum Disorders. J Autism Dev Disord. 44 (12), 3161-3167 (2014).
  29. Hodgdon, L. Q., Quill, Q. A. Teaching children with autism: Strategies to enhance communication and socialization. 265-286 (1995).
  30. Bryan, L. C., Gast, D. L. Teaching on-task and on-schedule behaviors to high-functioning children with autism via picture activity schedules. J Autism Dev Disord. 30 (6), 553-567 (2000).
  31. Liu, T., Breslin, C. M. The effect of a picture activity schedule on performance of the MABC-2 for children with autism spectrum disorder. Res Q Exerc Sport. 84 (2), 206-212 (2013).
  32. McGurk, H., MacDonald, J. Hearing lips and seeing voices. Nature. 264, 746-748 (1976).
  33. Colin, C., Radeau, M., Deltenre, P. Top-down and bottom-up modulation of audiovisual integration in speech. European Journal of Cognitive Psychology. 17 (4), 541-560 (2005).
  34. Boer-Schellekens, L., Eussen, M., Vroomen, J. Diminished sensitivity of audiovisual temporal order in autism spectrum disorder. Front Integr Neurosci. 7, 8 (2013).
  35. Lenroot, R. K., Yeung, P. K. Heterogeneity within Autism Spectrum Disorders: What have We Learned from Neuroimaging Studies. Front Hum Neurosci. 7, 733 (2013).
  36. Irwin, J. R., Tornatore, L. A., Brancazio, L., Whalen, D. H. Can children with autism spectrum disorders ‘hear’ a speaking face. Child Dev. 82 (5), 1397-1403 (2011).
  37. Woynaroski, T. G. Multisensory Speech Perception in Children with Autism Spectrum Disorders. J Autism Dev Disord. 43 (12), 2891-2902 (2013).
  38. Magnotti, J. F., Beauchamp, M. S. The Noisy Encoding of Disparity Model of the McGurk Effect. Psychonomic Bulletin & Review. (2014).
  39. Hillock-Dunn, A., Wallace, M. T. Developmental changes in the multisensory temporal binding window persist into adolescence. Dev Sci. 15 (5), 688-696 (2012).

Cite this Article
Baum, S. H., Stevenson, R. A., Wallace, M. T. Testing Sensory and Multisensory Function in Children with Autism Spectrum Disorder. J. Vis. Exp. (98), e52677, doi:10.3791/52677 (2015).
