
Experience is Instrumental in Tuning a Link Between Language and Cognition: Evidence from 6- to 7-Month-Old Infants’ Object Categorization

Published: April 19, 2017
doi: 10.3791/55435

Summary

At 3-4 months, listening to human and nonhuman primate vocalizations boosts infants' cognition; by 6 months, only human vocalizations confer this cognitive advantage. We describe an exposure manipulation that reveals the powerful shaping role of experience as infants specify which sounds to link to cognition and which to tune out.

Abstract

At birth, infants not only prefer listening to human vocalizations, but also have begun to link these vocalizations to cognition: For infants as young as three months of age, listening to human language supports object categorization, a core cognitive capacity. This precocious link is initially broad: At 3 and 4 months, vocalizations of both humans and nonhuman primates support categorization. But by 6 months, infants have narrowed the link: Only human vocalizations support object categorization. Here we ask what guides infants as they tune their initially broad link to a more precise one, engaged only by the vocalizations of our species. Across three studies, we use a novel exposure paradigm to examine the effects of experience. We document that merely exposing infants to nonhuman primate vocalizations enables infants to preserve the early-established link between this signal and categorization. In contrast, exposing infants to backward speech — a signal that fails to support categorization at any age — offers no such advantage. Our findings reveal the power of early experience as infants specify which signals, from an initially broad set, they will continue to link to cognition.

Introduction

Human infants are born ready to acquire language, a cultural and cognitive tool that defines us as a species and fundamentally shapes our development. Central to our capacity for learning language are two unique features of human development: Our altriciality and our powerful learning strategies. Together, these features unlock a remarkable degree of early plasticity that enables infants to be highly attuned to input from their environments. For example, although infants come into the world with a set of perceptual preferences and discriminatory capacities sufficiently broad to include the faces and voices of human and nonhuman primates, within months they narrow these preferences to exclusively human signals1,2,3. This process of perceptual narrowing is adaptive on two counts: It increases the signal-to-noise ratio of relevant communicative signals — those that will guide infants to efficiently and accurately navigate their complex social world — and it paves the way for integrating information across perceptual modalities (e.g., integrating faces and voices)4 — an essential element for the multimodal nature of human language. Moreover, because of infants' early plasticity, perceptual narrowing can be prevented or reversed with exposure to nonnative signals (e.g., nonnative language, nonhuman faces, foreign musical rhythms)5,6,8. This illustrates that perceptual narrowing is experience-driven.

Yet to master their native language, infants must do more than tune in to the faces and voices of its speakers. The power of human language is inextricably linked to cognition9. Incredibly, even in their first year of life, infants have begun to link language and cognition: Simply listening to human language promotes infants' ability to form object categories, a building block for their cognition10,11,12.

This is not to say that infants can only form object categories if they are listening to language. On the contrary, decades of research reveal that infants in their first months of life can form at least some object categories in the absence of any sounds13. But not all object categories are equally easy for infants to form. The slightly more difficult categories provide a powerful opportunity to uncover what effect, if any, language may have on infant categorization. To do so, researchers identify object categories that are more difficult for infants to discern in the absence of other supporting information (like language) and then ask whether language (and other sounds) offer infants any advantage in categorization.

Using this logic, an early link between language and object categorization was documented in a novelty preference paradigm designed to accommodate very young infants (Figure 1b)11. This paradigm has two phases. During the familiarization phase, all infants view images of a series of distinct objects (e.g., dinosaurs) presented sequentially in conjunction with a sound. During the test phase, all infants view two new images in silence: One is a new member of the now-familiar category (e.g., another dinosaur) and the other is a member of a novel category (e.g., a fish). Infants' looking time at test serves as an index of categorization. If infants form the category during familiarization, then at test they should distinguish the novel from the familiar image. If infants fail to do so, then at test they should perform at chance14,15,16.

The results were striking. At 3, 4, and 6 months, infants listening to language during familiarization — but not tone sequences — successfully formed object categories11,17. Surprisingly, language is not the only sound that confers this cognitive advantage: At 3 and 4 months, vocalizations of nonhuman primates (Madagascar blue-eyed lemur, Eulemur macaco flavifrons) support categorization just as human vocalizations do17. By 6 months of age, however, infants have tuned this initially broad link specifically to human vocalizations; lemur vocalizations no longer confer an advantage on infant categorization17.

But what mechanism underlies infants' increasing precision in linking language and cognition? Here, we consider the contribution of infants' experience. Does infants' rich experience with human language (and their dearth of experience listening to lemur vocalizations) play a role as they narrow the link between initially privileged signals (e.g., human and nonhuman primate vocalizations) and the core cognitive process of object categorization? Certainly, we cannot test this by manipulating infants' exposure to language. But we can manipulate their exposure to lemur vocalizations. We focus on infants at 6 and 7 months of age — infants who, in the absence of exposure, no longer link lemur vocalizations and object categorization17. In Experiment 1, we ask whether brief exposure to lemur vocalizations permits them to preserve the link between this signal and object categorization. In Experiment 2, we ask whether brief exposure to backward speech — a signal that consistently fails to support object categorization at any age17 — also promotes object categorization. In Experiment 3, we put the question of exposure to a more stringent test, examining whether more prolonged exposure to lemur vocalizations permits infants to preserve their initially broad link.

Protocol

The following procedures were approved by the Northwestern University Institutional Review Board and informed consent was obtained from the caregivers of all infants. We recruited infants being raised in environments with more than 50% exposure to English11,17. Infants in all experiments first listened to a soundtrack in which we manipulated their exposure to either lemur vocalizations or backward speech; they then participated in an object categorization task. What varied across infants was (a) the kind of signal they heard in the exposure manipulation and the categorization task (either lemur vocalizations or backward speech) and (b) the extent of the exposure phase (either brief exposure in the lab or prolonged exposure at home). For all infants, the categorization task took place in the lab.

1. Materials

  1. How to create the exposure manipulations
    1. Select 8 distinct samples of lemur vocalizations and 8 distinct samples of backward speech, each 2 to 4 s in duration. Together, the 8 samples of each signal should total approximately 30 s. Ensure that the lemur and backward speech segments are matched as closely as possible for duration (e.g., ~3 s) and for mean frequency (e.g., ~300 Hz); see the verification sketch after step 1.1.1.2.
      1. To obtain lemur vocalizations, search a database of animal vocalizations or contact a researcher who collects animal vocalizations.
      2. To create backward speech segments, use a laptop and a software program (e.g., Audacity) to (1) record a research assistant speaking ~3 s sentences in infant-directed speech and (2) temporally reverse (in Audacity: Effect > Reverse) each speech segment.
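      NOTE: The duration and mean-frequency matching in step 1.1.1 can be checked with a short MATLAB script. The following is a minimal sketch; the file name is a placeholder, and "mean frequency" is operationalized here as the amplitude-weighted spectral centroid (the protocol does not prescribe a particular measure).

        % Sketch: check a segment's duration and mean frequency against the
        % targets in step 1.1.1 (~3 s; ~300 Hz). File name is a placeholder.
        [y, fs] = audioread('lemur_call_01.wav');   % y: samples, fs: sampling rate (Hz)
        y = mean(y, 2);                             % collapse to mono if stereo
        durationSec = numel(y) / fs;

        Y = abs(fft(y));
        Y = Y(1:floor(numel(Y)/2));                 % keep positive frequencies only
        f = (0:numel(Y)-1)' * fs / numel(y);        % frequency axis (Hz)
        meanFreqHz = sum(f .* Y) / sum(Y);          % amplitude-weighted centroid
        fprintf('Duration: %.2f s, mean frequency: %.0f Hz\n', durationSec, meanFreqHz);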
    2. Choose a piece of classical instrumental music that is longer than 4 min. Here, we used the first 4 min of Beethoven's String Quartet in C Minor, Op. 18, No. 4: III. Menuetto.
    3. Use an audio editing software program (e.g., Audacity) to import the music (in Audacity: File > Import) and select a portion of the piece that is approximately 4 min in duration (highlight and delete the remainder of the piece).
      1. Create 2 distinct soundtracks by inserting (i.e., copying and pasting) the 8 selected segments of each type (lemur vocalizations or backward speech) into the musical soundtrack. (A scripted alternative to the manual steps below is sketched after step 1.1.3.3.)
      2. To create the "Lemur" soundtrack, open the 8 lemur vocalization sound files in the same software program (e.g., Audacity) and insert them into the file with the music.
        1. To do so, highlight an entire lemur vocalization, navigate to Edit > Copy, place the cursor in the music file, and navigate to Edit > Paste. Insert the vocalizations at irregular intervals throughout the soundtrack. Do not place them at musical phrase boundaries. Do this twice for each lemur vocalization, inserting them in a pseudorandomized order to ensure that no 2 identical vocalizations occur in a row. Together, this yields a 5 min loop (4 min of music, interspersed with 1 min of lemur vocalizations).
        2. Repeat this 5 min soundtrack so that the entire soundtrack lasts approximately 10 min. To do so, highlight the entire 5 min section, navigate to Edit > Copy, place the cursor at the end of the 5-min section, and navigate to Edit > Paste.
      3. To create the 10 min "Backward Speech" soundtrack, replace each lemur vocalization in the Lemur soundtrack with a selected segment of backward speech, using the same pseudorandomized order.
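      NOTE: The manual Audacity edits in steps 1.1.3.1-1.1.3.3 can equivalently be scripted. The following MATLAB sketch builds the Lemur soundtrack under stated assumptions: all files are .wav at a common sampling rate, file names are placeholders, the music file is the ~4 min excerpt prepared in step 1.1.3, and the insertion points are drawn at random here, whereas in practice they should be chosen by ear to avoid musical phrase boundaries (step 1.1.3.2.1).

        % Sketch: splice 16 call tokens (8 calls x 2) into 4 min of music,
        % then repeat the 5 min result to yield a ~10 min soundtrack.
        [music, fs] = audioread('menuetto_4min.wav');   % placeholder file name
        music = mean(music, 2);                         % mono
        calls = cell(1, 8);
        for k = 1:8
            [c, fsC] = audioread(sprintf('lemur_call_%02d.wav', k));
            assert(fsC == fs, 'Resample the calls to the music sampling rate first');
            calls{k} = mean(c, 2);
        end

        order = [randperm(8), randperm(8)];             % each call twice, pseudorandom
        while any(diff(order) == 0)                     % no identical calls in a row
            order = [randperm(8), randperm(8)];
        end

        insertSec = sort(randperm(220, 16)) + 10;       % irregular intervals (s);
                                                        % choose by ear in practice
        insertSamp = round(insertSec * fs);
        track = [];
        prev = 1;
        for k = 1:16
            track = [track; music(prev:insertSamp(k)); calls{order(k)}];
            prev = insertSamp(k) + 1;
        end
        track = [track; music(prev:end)];
        track = [track; track];                         % 5 min loop played twice
        audiowrite('lemur_soundtrack.wav', track, fs);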
  2. How to create the categorization task
    1. Visual materials: Create 2 sets of line drawings by hand. In each set, include 9 members of the same object category (here, use 9 distinct drawings of dinosaurs and 9 distinct drawings of fish). Use a different bright color for each drawing within a given category. Scan the drawings onto a computer and save them as image files. Note that the drawings should depict objects that are unfamiliar to infants at the ages tested.
    2. Acoustic materials: Select one lemur vocalization and one backward speech segment that were not included in either of the exposure manipulations (1.1.3). Match the lemur and backward speech segments as closely as possible for duration (e.g., ~3 s in duration) and for mean frequency (e.g., ~300 Hz).
    3. Program a task in MATLAB that includes a familiarization phase and a test phase; this task will be projected onto a 5 ft by 5 ft screen in front of the infant.
      1. Program the familiarization phase such that 8 of the visual exemplars from a single category (e.g., 8 dinosaurs, each a different color) appear sequentially. Each exemplar should appear for 20 s, on either the right or left side of the screen, alternating sides across trials. Create 2 distinct versions, pairing each familiarization exemplar with one of the acoustic stimuli below ("Lemur" and "Backward Speech" conditions, respectively). In both conditions, the acoustic stimulus should occur twice on each familiarization trial: once when the visual exemplar appears, and once again 10 s later. (A minimal sketch of this trial logic appears after step 1.2.4.)
        1. In the Lemur condition, program each familiarization exemplar to appear simultaneously with a lemur vocalization.
        2. In the Backward Speech condition, program each familiarization exemplar to appear simultaneously with a backward speech segment.
      2. During the test phase, present the 9th exemplar from the familiar category (e.g., another dinosaur) and an exemplar from the other category (e.g., a fish). These two images should be identical in color and presented simultaneously (one presented on the right and the other on the left side of the screen) in silence for 20 s.
    4. In programming the categorization task, counterbalance (1) which category will be presented during the familiarization phase, (2) the side on which the first familiarization image will appear, and (3) the side on which the novel and familiar test images will appear.
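    NOTE: The protocol does not prescribe particular MATLAB code; the following minimal base-MATLAB sketch shows one way to express the trial logic of steps 1.2.3-1.2.4 (Lemur condition, one counterbalancing assignment). Image and sound file names are placeholders, and a laboratory implementation would typically use a dedicated presentation toolbox (e.g., Psychtoolbox) for precise timing.

      % Sketch: familiarization (8 exemplars, 20 s each, alternating sides,
      % sound at onset and again 10 s later), then a 20 s silent test trial.
      [call, fs] = audioread('test_lemur_call.wav');     % held-out call (step 1.2.2)
      fig = figure('Color', 'k', 'MenuBar', 'none');
      sides = {[0.05 0.3 0.4 0.4], [0.55 0.3 0.4 0.4]};  % left/right axes positions

      for trial = 1:8
          clf(fig);
          ax = axes('Parent', fig, 'Position', sides{mod(trial, 2) + 1});
          image(ax, imread(sprintf('dino_%d.png', trial)));
          axis(ax, 'off');
          sound(call, fs); pause(10);                    % sound at image onset
          sound(call, fs); pause(10);                    % repeat 10 s into the trial
      end

      clf(fig);                                          % test: both images, in silence
      testImgs = {'dino_9.png', 'fish_1.png'};           % counterbalance sides (step 1.2.4)
      for s = 1:2
          ax = axes('Parent', fig, 'Position', sides{s});
          image(ax, imread(testImgs{s}));
          axis(ax, 'off');
      end
      pause(20);
      close(fig);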

2. General Procedures

  1. Exposure phase
    1. Place the infant close to the caregiver or on their lap in a quiet area. Place a laptop or tablet within the infant's viewing range (from 1 to 4 ft from the infant).
    2. Use the laptop or tablet to present either the Lemur or the Backward Speech soundtrack.
  2. Categorization task
    1. In a semi-darkened test room in the lab, seat the caregiver on a chair 4 ft from a 5 ft by 5 ft screen. Connect the laptop with the categorization task programmed in MATLAB to the projector, which will project the task onto the screen.
    2. Seat the infant on the caregiver's lap, facing forward.
    3. Instruct the caregiver to avoid influencing the infant's behavior in any way. More specifically, instruct them to remain quiet and still throughout the task and to keep their infant centered at midline. Provide caregivers with a pair of blacked-out glasses to ensure that they cannot see the visual materials.
    4. Turn on the projector.
    5. Turn on recording equipment to capture the infant's behavior throughout the task.
    6. Begin the categorization task by pressing "Run" in MATLAB: Present the Lemur condition if the infant was exposed to the Lemur soundtrack and the Backward Speech condition if the infant was exposed to the Backward Speech soundtrack.
  3. Coding (categorization task only)
    1. Code the onset and offset of each infant's left and right looks during both the familiarization and test phases.
    2. Exclude infants who (1) look at fewer than 6 of the familiarization exemplars or (2) look during less than 40% of the familiarization phase. Also exclude infants in cases of parental interference, experimental error, or technical failure, or if their performance at test is more than 2 SD from the mean.
  4. Analyses
    1. For the familiarization phase, calculate the total amount of time each infant looked at the familiarization objects. (An analysis sketch follows step 2.4.2.)
    2. For the test phase, calculate a preference score for each infant: (total looking time to novel test image)/(total looking time to both test images).
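    NOTE: The following is a minimal MATLAB sketch of the test-phase analysis (step 2.4.2) together with the comparison against chance reported in the Representative Results. The coded-data file and its column names are illustrative assumptions; the 2 SD exclusion from step 2.3.2 is included; the effect size shown is one common one-sample formula (the published analyses may have used a different one); and ttest requires the Statistics and Machine Learning Toolbox.

      % Sketch: preference scores, the 2 SD exclusion from step 2.3.2, and a
      % one-sample t-test against chance (0.50).
      data = readtable('coded_looks.csv');    % assumed columns: lookNovel, lookFamiliar (s)
      pref = data.lookNovel ./ (data.lookNovel + data.lookFamiliar);

      keep = abs(pref - mean(pref)) <= 2 * std(pref);   % drop outlying test performance
      pref = pref(keep);

      [~, p, ~, stats] = ttest(pref, 0.5);              % vs. chance performance (0.50)
      d = (mean(pref) - 0.5) / std(pref);               % one common one-sample Cohen's d
      fprintf('M = %.2f, SD = %.2f, t(%d) = %.2f, p = %.3f, d = %.2f\n', ...
              mean(pref), std(pref), stats.df, stats.tstat, p, d);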

3. Experiment 1: Brief Exposure to Lemur Vocalizations

  1. Participants
    1. Recruit 14 infants at 6 to 7 months of age.
  2. Procedure
    1. Invite the caregiver and infant into a quiet, comfortable room. In our lab, this room is adjacent to the testing room.
    2. Exposure manipulation: Play the Lemur soundtrack once, with the music visualizer (e.g., iTunes) present on the screen.
    3. Categorization task: When the soundtrack finishes, guide the caregiver and infant into the testing room. Begin the categorization task (Lemur condition).

4. Experiment 2: Brief Exposure to Backward Speech

  1. Participants
    1. Identical to Experiment 1.
  2. Procedure
    1. Invite the caregiver and infant into a quiet, comfortable room.
    2. Exposure manipulation: Play the Backward Speech soundtrack once, with the music visualizer present on the screen.
    3. Categorization task: When the soundtrack finishes, guide the caregiver and infant into the testing room. Begin the categorization task (Backward Speech condition).

5. Experiment 3: Prolonged Exposure to Lemur Vocalizations

  1. Participants
    1. Recruit 14 infants at 4.5 months of age.
  2. Procedure
    1. By phone, invite the caregiver to participate in a 6 week study that begins when the infant is 4.5 months of age and ends when the infant is 6 months of age. Explain that the study requires them to play their infant, at home, a 10 min soundtrack containing 8 min of music and 2 min of lemur vocalizations (i.e., the Lemur soundtrack).
    2. Provide caregivers with the following tapering schedule7,8,18 (a sketch that expands it into concrete dates follows this list):
      1. Week 1: Play the soundtrack to the infant once a day, every day.
      2. Week 2: Play the soundtrack to the infant once every other day.
      3. Week 3: Play the soundtrack to the infant 3 times during the week.
      4. Weeks 4-6: Play the soundtrack to the infant 2 times per week.
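      NOTE: To support the weekly reminders in step 5.2.4, the tapering schedule can be expanded into concrete play dates. The following is a minimal MATLAB sketch; the start date is illustrative, and the within-week days for weeks 3-6 are arbitrary choices consistent with the schedule above (the final dates respect the timing constraint in step 5.2.5).

        % Sketch: expand the 6 week tapering schedule into concrete play dates.
        startDate = datetime(2017, 1, 9);                  % illustrative: infant at 4.5 months
        playDates = [startDate + caldays(0:6), ...         % week 1: every day
                     startDate + caldays(7:2:13), ...      % week 2: every other day
                     startDate + caldays([14 16 19]), ...  % week 3: 3 times
                     startDate + caldays([21 24]), ...     % week 4: 2 times
                     startDate + caldays([28 31]), ...     % week 5: 2 times
                     startDate + caldays([35 38])];        % week 6: 2 times (see step 5.2.5)
        disp(playDates')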
    3. To ensure the fidelity of the exposure manipulation, share with caregivers an online interactive document that (a) outlines their role at every step of the 6 week period and (b) asks them to record each date and time at which they played the soundtrack to their infant.
    4. Send weekly reminders by phone or email to keep caregivers engaged and on track.
    5. During Week 5, schedule a lab visit for the following week (Week 6). Instruct caregivers that infants should listen to the soundtrack 2 to 4 days before their scheduled lab visit; infants should not listen to the soundtrack either on the day of or the day preceding their visit.
      NOTE: At the lab visit, there is no exposure manipulation: Infants participate only in the categorization task from Experiment 1 (Lemur condition).

Representative Results

Using the procedures outlined above (Figure 1), we ran three experiments to test the effects of exposure as infants refine the links between certain sounds and cognition.

Results of Experiment 1 revealed that brief exposure to lemur vocalizations had a striking effect. Seven-month-olds exposed to lemur vocalizations reliably preferred the novel test image, M = 0.57, SD = 0.09; t(13) = 3.12, p = 0.008, d = 1.73 (Figure 2). Their success in forming object categories contrasts with the failure of unexposed infants at this age, who have already tuned out lemur vocalizations17. In contrast, results of Experiment 2 revealed that brief exposure to backward speech had no effect. Seven-month-olds exposed to backward speech performed at chance, M = 0.47, SD = 0.11; t(13) = -0.98, p = 0.34, d = 0.54 (Figure 2). Their chance performance mirrors that of infants who receive no exposure17. Importantly, infants' successful categorization in Experiment 1 and failure in Experiment 2 cannot be a consequence of differences in their visual engagement with the two signals: Between the two experiments, there were no differences in infants' mean accumulated looking times during familiarization, t(26) = 1.5, p = 0.14.

Together, results from Experiments 1 and 2 demonstrate that merely exposing infants to signals from their initially broad template (but not to signals outside that template) enables them to reinstate a link between these signals and cognition. Next, we tested the robustness of the effects of exposure to an initially privileged signal.

Results of Experiment 3 revealed that prolonged exposure to lemur vocalizations had the same effect as the brief exposure in Experiment 1. Six-month-olds, who had not heard a lemur vocalization for days (M = 2 days, SD = 2 days), nonetheless successfully formed object categories, M = 0.59, SD = 0.14, t(13) = 2.52, p = 0.026, d = 1.4 (Figure 2). Their successful categorization paralleled that of infants who had heard lemur vocalizations moments before the categorization task (Experiment 1), t(22.1) = 0.44, p = 0.66.

Figure 1: Experimental design. In the Exposure Phase (A), infants listened to either 2 min of lemur vocalizations (Experiments 1 and 3) or 2 min of backward speech segments (Experiment 2), embedded within a 10 min soundtrack of classical music. After exposure, infants participated in the Categorization Task (B) from Ferry et al., 2013. During familiarization, each infant viewed 8 distinct visual images (20 s each) from the same category, presented sequentially, in conjunction with either a lemur vocalization (Experiments 1 and 3) or a backward speech segment (Experiment 2). Signals presented during exposure differed from those presented in the Categorization Task; in the latter, the same signal (either a lemur call or a backward speech segment) was presented twice during each familiarization trial. At test (20 s), infants viewed 2 images — a new member of the now-familiar category and a member of a novel category — presented simultaneously in silence.

Figure 2: Infants' Preference Scores at test across experiments. Infants exposed to lemur vocalizations (Experiments 1 and 3) reliably preferred the novel test image, indicating that they had formed the object category. In contrast, infants exposed to backward speech (Experiment 2) performed at chance. Error bars represent ±1 SEM. Significant differences between Preference Score and chance performance (0.50) and between test conditions are marked by a single asterisk (p < 0.05) or double asterisk (p < 0.01).

Discussion

Here we outline a procedure for examining the role of experience in linking sounds and core cognitive processes early in infancy. Our combined experiments provide the first evidence that experience plays a central role in guiding infants to specify which signals, from a broad initial set of possibilities, they will harness to core conceptual processes that ultimately provide the foundations of meaning. They also reveal an intricate interface between nature and nurture: Although experience plays little role, if any, in determining the broad set of signals that infants initially link to cognition17, experience is vital in guiding infants to tune out irrelevant signals from the initially privileged set and, instead, tune in to those they will continue to link to meaning.

Our exposure procedure has demonstrated that at 6 and 7 months, when the link between lemur vocalizations and object categorization would otherwise have been severed17, merely exposing infants to this signal has a dramatic effect. As has similarly been observed in the perceptual narrowing literature5,6,7,8, this exposure procedure demonstrated infants' plasticity, permitting them either to reinstate or maintain a developmentally prior link (Experiments 1, 3). It has also identified principled limits on the kinds of signals that exposure alone may act on. For signals not included in infants' initial endowment (e.g., backward speech), experience alone appears to be insufficient for creating, de novo, a link to cognition (Experiment 2).

Three aspects of our protocol are essential for interpreting the effects of exposure on infants' acquisition of an increasingly precise link between acoustic signals and cognition. First, during the exposure manipulation, the signals must be embedded within a non-social context (we chose classical music) to engage infants' attention without introducing social or communicative cues. Second, the exposure manipulation must include several different samples of the acoustic signal (we used 8); such variation helps keep infants engaged in the listening task. Third, the sound presented during the categorization task must not be one that infants heard earlier in the exposure manipulation.

Importantly, our exposure manipulation also opens several new avenues for future work. Are there critical periods for tuning the link between privileged signals and cognition? Which other cognitive capacities, if any, are supported by these signals? And what does it mean to be a "privileged signal"? The current results represent an important first step in identifying how exposure to certain ambient signals guides infants to determine which of those signals will ultimately carry meaning; further studies will help paint a more nuanced picture of the delicately balanced, cascading interactions between infants' inborn capacities and their environments, which collectively scaffold the foundation for their acquisition of language.

Disclosures

The authors have nothing to disclose.

Acknowledgements

This research was supported by an NSF Graduate Research Fellowship to Danielle R. Perszyk and an NIH grant to Sandra R. Waxman (R01HD083310).

Materials

Laptop 1: Used to present the exposure soundtrack (preferably with the iTunes Visualizer).
Laptop 2: Used to program and present the categorization task.
Laptop 3 (optional): Used to code infant looking behavior (one of the above laptops can also be used).
Coding software (SuperCoder): Used to code infant looking behavior.
Video recorder: Used to record the infant's face (looking behavior during the categorization task).
Mixer: Used to integrate the feed from the video recorder (infant looking behavior) with the visual stimuli (categorization task).
DVD player: Used to record the input from the video recorder (infant looking behavior).
Television: Used to view the mixer output (screen-in-screen: the categorization task in the corner of the screen showing infant looking behavior).
Projector: Used to project the visual stimuli of the categorization task onto the screen.
Speakers: Used to present the auditory stimuli during the categorization task.
Blacked-out sunglasses: Used to block the caregiver's view during the categorization task.
Statistical analysis software (R): Used to analyze infant looking behavior.

References

  1. Di Giorgio, E., Leo, I., Pascalis, O., Simion, F. Is the face-perception system human-specific at birth? Dev Psychol. 48 (4), 1083-1090 (2012).
  2. Vouloumanos, A., Hauser, M. D., Werker, J. F., Martin, A. The tuning of human neonates’ preference for speech. Child Dev. 81 (2), 517-527 (2010).
  3. Vouloumanos, A., Werker, J. F. Tuned to the signal: the privileged status of speech for young infants. Dev Sci. 7 (3), 270-276 (2004).
  4. Lewkowicz, D. J., Ghazanfar, A. A. The emergence of multisensory systems through perceptual narrowing. Trends Cogn Sci. 13 (11), 470-478 (2009).
  5. Fair, J., Flom, R., Jones, J., Martin, J. Perceptual learning: 12-month-olds’ discrimination of monkey faces. Child Dev. 83 (6), 1996-2006 (2012).
  6. Friendly, R. H., Rendall, D., Trainor, L. J. Plasticity after perceptual narrowing for voice perception: reinstating the ability to discriminate monkeys by their voices at 12 months of age. Front Psychol. 4, 718 (2013).
  7. Pascalis, O., Scott, L. S., et al. Plasticity of face processing in infancy. Proc Natl Acad Sci USA. 102 (14), 5297-5300 (2005).
  8. Heron-Delaney, M., Anzures, G., et al. Perceptual training prevents the emergence of the other-race effect during infancy. PLoS ONE. 6 (5), e19858 (2011).
  9. Brown, R. Words and things: An introduction to language. (1958).
  10. Fulkerson, A. L., Waxman, S. R. Words (but not tones) facilitate object categorization: Evidence from 6- and 12-month-olds. Cognition. 105 (1), 218-228 (2007).
  11. Ferry, A. L., Hespos, S. J., Waxman, S. R. Categorization in 3- and 4-month-old infants: An advantage of words over tones. Child Dev. 81 (2), 472-479 (2010).
  12. Balaban, M. T., Waxman, S. R. Do words facilitate object categorization in 9-month-old infants? J Exp Child Psychol. 64, 3-26 (1997).
  13. Mareschal, D., Quinn, P. C. Categorization in infancy. Trends Cogn Sci. 5 (10), 443-450 (2001).
  14. Aslin, R. N. What’s in a look? Dev Sci. 10 (1), 48-53 (2007).
  15. Colombo, J. Infant attention grows up: The emergence of a developmental cognitive neuroscience perspective. Curr Dir Psychol Sci. 11 (6), 196-200 (2002).
  16. Golinkoff, R. M., Hirsh-Pasek, K., Cauley, K. M., Gordon, L. The eyes have it: lexical and syntactic comprehension in a new paradigm. J Child Lang. 14 (1), 23-45 (1987).
  17. Ferry, A. L., Hespos, S. J., Waxman, S. R. Nonhuman primate vocalizations support categorization in very young human infants. Proc Natl Acad Sci USA. 110 (38), 15231-15235 (2013).
  18. Scott, L. S., Monesson, A. The origin of biases in face perception. Psychol Sci. 20 (6), 676-680 (2009).


Cite This Article
Perszyk, D. R., Waxman, S. R. Experience is Instrumental in Tuning a Link Between Language and Cognition: Evidence from 6- to 7-Month-Old Infants’ Object Categorization. J. Vis. Exp. (122), e55435, doi:10.3791/55435 (2017).
