Tuesday, December 07, 2004

Music and Language: A Developmental Comparison

Erin McMullen & Jenny R. Saffran


The basic idea of this article is to compare music and language and the links between them. It reviews experiments with infants and adults to explore how each ability develops.
The first section of the article discusses the sound structures of language and music and what we learn about them. Both are built from a limited inventory of sounds (notes or speech sounds) drawn from a much larger set of possible sounds, and in both domains those sounds are perceived categorically. Musical materials can be grasped categorically even by nonmusicians: adults who were taught labels for musical intervals, like “Here Comes the Bride” for a perfect fourth, went on to recognize those intervals categorically. The article then describes how infants raised in different language environments come to hear things differently; for example, Japanese-learning babies eventually treat “r” and “l” as the same sound. Infants also prefer certain sounds over others, such as consonant intervals over dissonant ones, and in one experiment monkeys likewise preferred consonance to dissonance. The article then goes into detail about statistical learning: infants track the statistics of the sounds in their environment (a toy illustration follows this paragraph) and become attuned to their native language before they can even talk. In short, this portion of the article focuses on how babies hear speech sounds and musical sounds, and at what age they begin to remember a melody they have heard.
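To make the “statistics” idea concrete, here is a small sketch of the sort of computation proposed in statistical-learning research (the syllable stream and made-up words below are my own illustration, not stimuli from the article): syllable pairs inside a word follow each other more reliably than pairs that straddle a word boundary, so tracking transitional probabilities can reveal where words begin and end.

    from collections import Counter

    # A made-up syllable stream built from three invented "words"
    # (pa-bi-ku, ti-bu-do, go-la-tu), strung together with no pauses.
    stream = ("pa bi ku ti bu do pa bi ku go la tu "
              "pa bi ku ti bu do pa bi ku go la tu").split()

    # Count single syllables and adjacent syllable pairs.
    syllable_counts = Counter(stream)
    pair_counts = Counter(zip(stream, stream[1:]))

    def transitional_probability(a, b):
        """P(b | a): how often syllable a is followed by syllable b."""
        return pair_counts[(a, b)] / syllable_counts[a]

    # Within-word pairs score high; pairs spanning a word boundary score lower.
    print(transitional_probability("pa", "bi"))  # within "pa-bi-ku" -> 1.0
    print(transitional_probability("ku", "ti"))  # across a boundary -> 0.5

A learner who tracked these numbers could, in principle, segment the continuous stream into its recurring “words” without any pauses or other cues.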
The next section of the article gets into the particular structures of language and music. Patterns of rhythm, stress, intonation, phrasing, and contour most likely drive early learning in both domains. Newborn infants prefer and recognize their mother’s voice because they heard it in the womb, and fetal exposure to the rhythm of the mother’s native language allows newborns to detect differences between languages. It is likely that infants similarly learn about musical rhythm if the mother sings while pregnant, and after birth they continue to build on this prenatal learning of both musical and spoken rhythm. When moms and dads play lullabies or sing nursery rhymes, it helps infants learn, and infants prefer the childlike, infant-directed quality of such songs. The article then describes further experiments on how infants react to songs played at different pitches, or to musical passages that pause at the ends of phrases rather than in the middle of phrases. It remains an open question whether infants use the same mechanism to detect these parallel cues across domains, or whether they instead learn about these properties independently.
The third section of the article explains the grammatical structures of language and music. The major theoretical position on human speech has been that infants come prewired with a “universal grammar”: a dedicated language system combining innate knowledge with switch-like parameters that are set by the native language. Marcus and colleagues showed that 7-month-old infants exposed to a collection of short sentences following a simple pattern, such as AAB, can distinguish new sentences that fit the pattern from ones that violate it, such as ABA (see the sketch after this paragraph), indicating that infants this young are capable of abstract pattern recognition. The article describes further experiments showing that infants prefer grammatical sentences over ungrammatical sentences. On the musical side, Western listeners preferentially end pieces on the tonic, less frequently on other notes of the tonic chord, still less frequently on other notes within the scale, and rarely on notes outside the diatonic context. After several more experiments on brain responses, the article turns to the emotions music evokes.
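To illustrate the kind of abstract template involved in the Marcus study (the syllables here are invented for illustration; the original stimuli differed), an AAB sentence such as “ga ga ti” repeats its first syllable, while an ABA sentence such as “ga ti ga” does not, and a few lines of code can classify a three-syllable string:

    def pattern_of(sentence):
        """Classify a three-syllable sentence as AAB, ABA, ABB, or other."""
        a, b, c = sentence.split()
        if a == b and b != c:
            return "AAB"
        if a == c and a != b:
            return "ABA"
        if b == c and a != b:
            return "ABB"
        return "other"

    # The first two items follow AAB; the last one breaks the pattern.
    for s in ["ga ga ti", "li li na", "ga ti ga"]:
        print(s, "->", pattern_of(s))   # AAB, AAB, ABA

The point of the experiment is that infants generalize the template itself, not the particular syllables: a brand-new AAB sentence still “fits,” while an ABA sentence does not.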
Meaning in language and music is the next section of this long article. Music can and often does evoke strong, predictable emotional responses, though these may vary by culture. In the case of music, the “meaning” adult listeners assign to phrases is most strongly tied to the emotional responses those phrases generate. One of the basic building blocks for this is present from early infancy: several studies have found that infants as young as 2 months old, like adults, prefer consonance to dissonance. Research has also demonstrated that infants prefer higher-pitched music, which is typically associated with positive affect. Adult responses to specific pieces of music are more complex, however, and are most likely influenced by a variety of other factors as well. For instance, many adults report strong reactions to certain musical pieces and to particular sections within them, including tears, heart acceleration, and “chills” or “shivers down the spine.” In the case of the “chills” reaction, the emotion is linked to increased blood flow in brain regions associated with emotion, motivation, and arousal. Adult Western listeners often associate the major mode with happiness and the minor mode with sadness, and in one experiment Nawrot found that infants looked longer at happy faces while “happy” music played, but did not look longer at sad faces while “sad” music played. In a nutshell, both infants and adults prefer “happy” music to “sad” music because it makes them feel better.
The next section talks about memory for language and music. For successful learning to occur, the article argues, young learners must be able to represent musical experiences in memory. One experiment described in this section exposed infants at home to CD recordings of Mozart piano sonata movements, played daily for two weeks. After a two-week retention interval during which they did not hear these musical selections, the infants were tested, and the results demonstrated that they were not merely remembering snippets of the music but had represented aspects of the overall structure of the pieces. In another experiment, 6-month-old infants remembered the specific tempo and timbre of music they had been familiarized with, failing to recognize pieces played at new tempos or with new timbres, although recognition was maintained when pieces were transposed to a new key.
As you can see, learning begins at a very young age; we are learning even when we do not know it, and this article describes many experiments and examples showing that this is true. It also notes that metaphors play a powerful role in directing our thinking and suggesting new insights. Whether or not music and language share a common ancestry, thinking about them as related functions may still be quite helpful in generating hypotheses that help us better understand them as separate domains.

Erin McMullen & Jenny R. Saffran, “Music and Language: A Developmental Comparison,” Music Perception, Spring 2004, Vol. 21, No. 3, 289-311.