Wednesday, December 08, 2004

Tritone Paradox

Hello, my name is Joshua Perez, and today I will be presenting a report from a scientific music journal.

The article is titled Speech Patterns Heard Early in Life Influence Later Perception of the Tritone Paradox, and it was written by Mark Dolson, Diana Deutsch, and Trevor Henthorn. The authors conducted experiments to test the theory that perception of the tritone paradox is influenced by speech heard early in life.

The article describes experiments conducted to test this theory. The tritone paradox occurs when two tones separated by a tritone (an augmented 4th or diminished 5th) are heard in succession. The tones are built so that their pitch classes are heard clearly but their octave placement is ambiguous. The paradox is an auditory illusion because one listener may perceive the pair of notes as ascending while another perceives it as descending.
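To make the kind of stimulus behind this illusion a little more concrete, here is a rough Python sketch (my own illustration, not anything from the article) of how an octave-ambiguous, Shepard-style tone can be built: several octave-spaced sine components are summed under a fixed bell-shaped spectral envelope, so the pitch class comes through clearly while the octave stays vague. The sample rate, envelope center, and width below are just assumed values.

```python
import numpy as np

def shepard_tone(pitch_class, sr=44100, dur=0.5, center_hz=260.0, sigma_oct=1.0):
    """Sum octave-spaced partials of one pitch class (0-11) under a Gaussian
    envelope over log-frequency, so the octave height is ambiguous.
    All parameter values here are illustrative assumptions."""
    t = np.arange(int(sr * dur)) / sr
    tone = np.zeros_like(t)
    base = 16.35 * 2 ** (pitch_class / 12)      # pitch class referenced to C0 (~16.35 Hz)
    for octave in range(9):                      # spread partials over ~9 octaves
        f = base * 2 ** octave
        if f > sr / 2:
            break
        w = np.exp(-0.5 * (np.log2(f / center_hz) / sigma_oct) ** 2)  # bell-shaped weight
        tone += w * np.sin(2 * np.pi * f * t)
    return tone / np.max(np.abs(tone))

# A tritone pair heard in succession, e.g. pitch class C (0) followed by F# (6).
pair = np.concatenate([shepard_tone(0), shepard_tone(6)])
```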

There were many experiments that tested this tritone paradox. Dolson personally worked on the question of pitch ranges in speech. Through his studies he found that each language, dialect, and sub-dialect has its own pitch range, meaning that each has a specific range of notes within which vocal inflections fall. He also found that the physical characteristics of the speaker did not determine the pitch range of his or her speech. This work suggested that the pitch template, the range of pitches used in speech, is acquired through early interaction with one's parents.

To further study this phenomenon, Deutsch compared perception in two groups, one from California and one from England. The study found that while the Californians perceived the tritones as ascending, the English perceived them as descending. This strengthened the hypothesis that the pitch class template is acquired from one's culture. Most of the testing for the later experiments was done on younger children and their parents. Further studies have been discussed, one of which, in particular, belongs to the authors of this article. The following experiments were designed to test whether the childhood template survives into adulthood.

In Experiment 1 three different groups were tested. The first was the Vietnamese Late Arrival group, which contained 6 men and 10 women from South or Central Vietnam who had moved to the United States as adults and spoke only Vietnamese.

Another group tested was the Vietnamese Early Arrival group, which consisted of 3 men and 13 women from South or Central Vietnam. This group had moved to the United States as infants or young children; they primarily spoke English but had exposure to Vietnamese.

The final group tested was the Californian English group. It consisted of university students, 6 men and 4 women, who were born and raised in California, spoke only English, and had little or no exposure to Vietnamese.

The purpose of the experiment was to test whether each group perceived the tritone paradox as ascending or descending. The results showed that the perception of ascending versus descending was influenced by each group's own pitch template. The authors also observed that perception of the tritone paradox varied significantly based on the first language to which the participants were exposed.

Experiment 2 focused more closely on the fluent Vietnamese speakers. The participants included 2 men and 5 women, 6 of whom had participated in the previous experiment.
This experiment tested how the range of the pitch template affected perception of the tritone paradox. The findings showed that the pitch range of the speech a person is exposed to first strongly influences perception of the tritone paradox.

In conclusion, the findings from both experiments showed a direct and strong link between speech and music perception. They also showed that perception of the tritone paradox is heavily influenced by the speech heard early in life.


Music Perception, Spring 2004, Vol. 21, No. 3, 357-372

Tuesday, December 07, 2004

Script...

Is Recognition of Emotion in Music Performance an
Aspect of Emotional Intelligence?



When musicians perform, they are expected to be able to play a piece in different ways, to show happiness, sadness, anger, or fear, and listeners should be able to identify and recognize these emotions throughout the performance. Through the studies of Juslin we can see that emotions can be communicated very effectively through music performance. Juslin's studies also found that even people with little musical training still have the ability to recognize emotions in music. Mayer and Salovey describe this ability as part of emotional intelligence, and they have recently developed a way to test it. Their test measures four different aspects of emotional intelligence: perceiving emotions, using emotions to facilitate thought, understanding emotions, and managing emotions. The actual purpose of this study was to show whether or not the recognition of emotions in music performance is related to emotional intelligence.
In this experiment there were twenty-four undergraduate students whose musical training ranged from 0 to 15 years of music lessons. They were asked to listen to three different short piano pieces composed by Bach, Bartok, and Persichetti. Each piece was recorded by a classically trained pianist five times, first with the expression that was appropriate for the music (the "normal" way), and then with four different emotional intentions: happiness, sadness, anger, and fearfulness. The happy and angry performances had a faster tempo than the sad and fearful performances, and the angry performances were louder than the happy performances. After each performance, the participants were asked to rate how happy, sad, angry, or fearful it was. None of the participants reported being familiar with any of the pieces.
The results showed that the normal performance of the Bach piece was rated as sad, which is consistent with its slow tempo and minor key. The Bach performance intended as happy was rated only slightly happier, but much less sad and more angry than the normal performance. The performance intended as sad had basically the same ratings as the normal performance, which showed that there was a limit to how sad this piece could actually sound. Finally, the performance intended to be angry was rated only a little less sad and not especially fearful. Overall, the Bach performances were not very successful in conveying the intended emotions. The Bartok performances, overall, were more successful in conveying the intended emotions. The Persichetti performances were the most successful, as each one was rated highest on the emotion the performer intended. In the end, the Bach performances successfully conveyed only two of the four emotions, the Bartok performances conveyed three of the four, and the Persichetti performances conveyed all four.
This study showed that individual differences in sensitivity to the emotion conveyed by a music performance are related to individual differences in emotional intelligence. Although recognizing emotion in music performance is less important in everyday life than recognizing emotion in speech, it probably requires many of the same processes and sensitivities. This would be entirely consistent with evidence that the emotional cues in music performance are very similar to those in speech.
There are many cues, such as mode, pitch register, range, consonance and dissonance, rhythm, tempo, and dynamics, that help people recognize the emotion in a music performance. By these cues, the Bach piece used in this study is a somewhat sad piece because of its minor key and slow tempo, whereas the major-key, moderate-tempo Bartok piece makes a quietly happy impression, and the Persichetti piece is fairly neutral, not having a strong tonality. These characteristics showed up in the participants' ratings of the "normal" performances, which also reflected the performer's own responses to the respective musical structures. In the end, a performance can convey happiness, sadness, or any other emotion only if the music is actually played that way.
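Just to illustrate the general direction of those cue associations (fast/loud leaning toward happy or angry, slow/quiet and minor leaning toward sad or fearful), here is a toy rule-of-thumb sketch. The thresholds and the two-cue logic are invented for illustration; this is not the procedure used in the study.

```python
def guess_emotion(tempo_bpm, loudness_db, mode):
    """Very rough cue-to-emotion heuristic, for illustration only.
    The cutoff values are invented, not taken from the article."""
    fast = tempo_bpm > 110
    loud = loudness_db > 70
    if fast and loud:
        return "angry" if mode == "minor" else "happy"
    if fast:
        return "happy"
    # slow performances: mode and very soft dynamics separate fearful from sad
    return "fearful" if (mode == "minor" and loudness_db < 50) else "sad"

print(guess_emotion(130, 75, "major"))  # -> happy
print(guess_emotion(60, 45, "minor"))   # -> fearful
```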


Cited Work:

Resnicow, Joel E., & Salovey, Peter. "Is Recognition of Emotion in Music Performance an Aspect of Emotional Intelligence?" Music Perception.

Music in Everyday Life

Music surrounds us wherever we go. In the past two centuries the times, places, and ways in which people listen to music have changed incredibly. Before mass media and today's technology, people heard music only in concert halls, at social gatherings, and occasionally at home, and when they heard music it was usually because they had purposely set out to hear it. Today, music forms the background of our lives, so arguably it is less prized than before. Music used to be a treat, a treasure, a profound method of communication; now it is packaged and shipped just like any other household item. Music is widely available today in many different formats and styles. People now use music much more often than before and in everyday situations. They can control it in their homes and cars while doing everyday tasks. Music is now a resource, not just a commodity.
Because of these changes, the role of music in everyday life has also changed. There have been several different approaches to the way this role has been studied. One approach looks at how music affects an individual's identity. Studies have also been done on how music is used in various contexts, including shopping malls and karaoke bars; in these studies music was regarded more as a process than as an object, helping the various activities along. Another study focused on self-proclaimed music lovers and how they developed individual personality traits around this identity. Studies have also been done on the music industry's effect on personal musical tastes. Although these studies are all interesting, they do not define the reasons for everyday music listening.
Research has been done on these reasons, however. Social psychologists have adopted the approach known as "uses and gratifications": participants are placed in a laboratory setting and asked to choose, from a preset list, the functions music serves for them. Most studies of this style have produced extremely inconsistent results. The most important thing these studies established was that music fulfills completely different functions for each individual person and each individual situation. Similar studies have shown that the function music serves is determined by social and interpersonal context.
The problem with the aforementioned studies is that they lack a focus upon the participants; the experimenters have often chosen the music, the situations, and the possible responses.
A participant-centered approach conducted in 2001 involved giving each participant an electronic pager and paging them once every two hours. When they received a page they were asked to document the last time they had heard music and the way in which they experienced it. They were also asked to describe who they were with, what emotions they felt, and the type of music. This type of experiment has potential for error in the participants' reports. Also, this study was done with only 8 participants, which limits the range of responses.
The study that we will be focusing on today asks five main questions: Who are people with when they listen to music? What do they listen to? When do they listen? Where do they listen? And why do they listen?
The study predicted that although people previously listened to music primarily on their own, technological changes now make them more likely to listen with others. It also predicted that when listening alone people are more actively involved in listening, whereas with others music serves as a background, and that music listened to alone is probably better liked, since the listener has stronger control over the musical choice.
With regard to what music is heard, the study predicted that because of technology the responses would be more widespread. It also predicted that the type of music chosen would depend on the listener's motives for listening.
The "when" question was predicted to follow fairly regular patterns: during the day music serves as background for other work, while at night and on weekends it fulfills other functions. In other words, music fulfills different functions at different times of the day and on different days of the week.
The prediction in regard to where people hear music was that responses would be widespread, including the home, cars, and commercial settings.
All of the above questions affect why people listen to music.
The study had 346 volunteer participants who were recruited from universities and businesses throughout Britain. The participants were between 13 and 78 years old and came from various cultural backgrounds. Every day for 14 days the participants were sent text messages on their mobile phones. When they received the message they were asked to fill out a short questionnaire about the music they could hear at that moment. The questionnaire consisted of five sections. The first section asked for demographic information, the time they received their message, and whether or not they could hear any music at the time; if they couldn't hear any music, they were asked to fill out the questionnaire based on the last time they had heard music. The second section asked who they were with when they heard the music. The third section asked the type of music they heard (chosen from a list of styles), whether they had any choice in listening to it, the volume of the music, and how much they liked it. The fourth section asked where the music was heard. The fifth section consisted of two separate parts: the first, for participants who had chosen to hear the music, asked about the function of the music; the second, for people who had not chosen to hear it, asked what effect the music had on them.
The results of the study produced many interesting findings. The data collected regarding the "who" show that most of the listening episodes occurred when the participant was with other people. These findings fit the notion that technology has made it easier to access music, and thus easier to hear music with others. The data also show that people liked the music they could hear most when they were on their own, while the lowest liking was reported when they were with strangers. The function of music changed with whom the person was listening. Contrary to the predictions, music did not simply serve as background while with others and move to the foreground while alone; the greatest amount of attention was paid to the music when participants were with a boyfriend or girlfriend. Music was liked more when it was heard alone, but it was not necessarily more important at that moment.
Data collected regarding the "what" shows that pop music was the type of music most heard and that classical was the least. When people chose to hear music they used different types for different reasons. When people did not choose to hear music it still had some effect upon them, but they were less likely to enjoy the music.
Music was most often heard in the evening, which is consistent with the hypothesis. The lack of daytime music listening can be accounted for by work and other non-leisure commitments. However, there was no link between being able to choose to listen to music and the time of day: increased leisure time only increased the chances that a person would hear music, not their ability to choose to listen to it. The same applies to the link between the ability to choose and the day of the week. The data collected about the "when" show that listening during leisure time is for pleasure, while listening during the workday serves some other function.
Just as predicted, music was heard in many different places, the most popular being the home, with half of the reported episodes; others included restaurants, shops, gyms, nightclubs, and places of religious worship. Music was usually not the central focus, and participants listened to music in different places for different reasons.
The reasons people listened to music have been mentioned in most of the aforementioned questions. Music generally served as a background both when people chose to listen to it and when they didn't. The number one answer as to why people listened to music was that it was enjoyable and helped pass the time. Other responses were that it was habit, it helped create the right atmosphere, it helped concentration, or it helped create an emotion. When people did not choose to listen to music, they had a generally unengaged attitude toward it.
In conclusion, the findings of this study show that people use music as a resource in everyday life. This has happened recently due to the increase in access to music through technology. People tend to view music passively, and it seems that because it is so accessible, they take it for granted. People use music in different places, for different reasons, and experience it in different ways. But people use music. Music is in people's lives now more than ever, and much more research remains to be done on the ways in which it affects us all.

North, Adrian C., David J. Hargreaves, and Jon J. Hargreaves. "Uses of Music in Everyday Life." Music Perception, Volume 22, Number 1, Fall 2004.

Vowel Modification

"Vowel Modification Revisited"- John Nix

The idea of vowel modification is actually a relatively new one. Singers used to train without “cover” or modification, but this changed with the career of a famous tenor named Tito Schipa. Tito Schipa modified his vowels where no other singer had before. The results were enough to revolutionize the vocal world. Today, teachers all have different philosophies about vowel modification and when and where it is appropriate. However, it is universally accepted that cover is an integral part of singing classically.
There are many misconceptions about cover floating around among singers. Sometimes cover or vowel modification is associated with singing too darkly. Other times it is used to describe voices that are naturally darker, or those that aren't placed correctly. The idea of cover comes from the Italian concept of chiaroscuro, which literally means light and shadow. Classical singers must find a careful balance between these two elements to sing freely; the scuro, or shadow, is achieved by vowel modification or cover. For those who have listened to brighter singers, it is easy to imagine the problems a voice that is all light would have singing the classical repertoire: high notes would simply sound pinched, and coloratura would be next to impossible to negotiate. Cover is an integral part of healthy and beautiful singing and should not be thought of as necessarily a bad thing.
The idea behind vowel modification is to not only unify the voice, but also to maximize the formants, which are what give the voice carrying power. Acoustically, singers need these formants to carry over an orchestra. The formants occur in sound ranges where few other instruments vibrate. The result is the “big” operatic voice that can fill an opera house. Vowel modification also helps unify the voice because it makes it easier to negotiate the various breaks and shifts in register. In classical singing a unified voice is a necessity. Perhaps most importantly, modification allows singers greater flexibility and dynamic contrast.
The article "Vowel Modification Revisited" discusses six important concepts key to understanding vowel modification and when and why it is used by singers. First, the formants are different in each singer because of anatomical differences. Second, different voices require different amounts of modification, depending on the size of the voice and the actual song being sung; for example, a tenor singing an art song that lies mainly below an E probably would not need as much modification as a baritone would. Some singers even say that the amount of vowel modification they use depends on the time of day and how much they have warmed up. Third, vowel formants involve a band of frequencies rather than a specific pitch. Fourth, it is impossible to tune each note absolutely when singing; fast songs with lots of moving notes simply do not allow a singer enough time, and in these cases the movement among the notes becomes more important than the individual notes. Fifth, men and women tune differently: men generally try to match the formants while women usually tune to the fundamental. Sixth, there are six guidelines for vowel modification: as the vocal tract lengthens, the frequencies of the formants decrease; the same lowering is produced by rounding the lips, while the frequencies are raised by lip spreading. Singers can lower the frequencies of the first formant and raise those of the second by fronting and arching the tongue; by backing and lowering the tongue, the opposite occurs. Also, lowering the jaw raises the frequencies of the first formant and lowers those of the second.
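A simple way to see the first of those guidelines (a longer vocal tract means lower formant frequencies) is the textbook closed-open tube approximation of the vocal tract, where the resonances are F_k = (2k - 1) * c / (4L). This is my own simplified illustration, not a calculation from the article.

```python
def tube_formants(length_m, n=3, c=343.0):
    """Resonances of an idealized closed-open tube (a crude vocal-tract model):
    F_k = (2k - 1) * c / (4 * L).  A textbook simplification, not the article's data."""
    return [(2 * k - 1) * c / (4 * length_m) for k in range(1, n + 1)]

print([round(f) for f in tube_formants(0.175)])  # ~[490, 1470, 2450] Hz for a 17.5 cm tract
print([round(f) for f in tube_formants(0.19)])   # lengthening the tract lowers every formant
```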
In the discussion of vowel modification it is also important to touch on the subglottal formants. These formants, unlike those in the vocal tract, are not changed by altering the position of the tongue, lips, jaw, and so on, nor do they change from vowel to vowel; only the laryngeal position has an effect on them. Scientists studying these formants noticed that their intensity decreased in certain pitch areas. Part of the process of unifying the voice is learning to compensate for these areas, since the change in intensity accounts for the changes in pitch intensity in different areas of the voice.
Another important use of vowel modification is to negotiate the passaggio. By adding cover, tenors, for example, are able to shift from tuning mainly to the first formant to tuning to the second, which produces the brighter, freer color heard in singers such as Alfredo Kraus and Luciano Pavarotti. Other singers choose instead to continue tuning to the first formant; an example of this style of singing is the tenor Placido Domingo.
Tuning directly to a formant, however, can be detrimental to a singer. It is better to tune slightly below the formant. This allows the vibrato to be freer and allows its cycles to follow the formant itself, which prevents the classic out-of-sync vibrato that sounds too fast or too wide. To do this a singer must sing a slightly more open vowel. An example occurs when a soprano is required to sing an [i] vowel on a high note, such as a Bb5. The frequency of the vowel and the note do not match, so the singer picks a more open vowel with a higher formant frequency to match the higher frequency of the note. In this case, the vowel sung will not be a pure [i] but a more generic sound.
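To put rough numbers on that soprano example: the fundamental of Bb5 can be worked out from equal temperament (A4 = 440 Hz), and it sits far above a typical first-formant value for [i]. The ~300 Hz formant figure below is a commonly cited ballpark I am assuming, not a value from the article.

```python
def note_freq(midi_note, a4=440.0):
    """Equal-tempered frequency; A4 is MIDI note 69."""
    return a4 * 2 ** ((midi_note - 69) / 12)

bb5 = note_freq(82)         # Bb5 is about 932 Hz
f1_of_i = 300.0             # rough ballpark F1 for [i]; an assumption, not from the article
print(round(bb5), f1_of_i)  # the fundamental lies well above F1, so the singer opens
                            # the vowel (raising F1) to keep resonance support
```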
The idea of covering the vowels actually aids in the ease of production and helps the vocal tract to work as little as possible in producing notes. In classical singing it is always more important for a note to sound beautiful than for the words attached to it to be understood. In other styles such as the Musical Theater belt, singers tend to focus more on the words than the beauty of the vowels; however, by focusing simply on the vowels and the acoustics of singing the diction of a singer will improve.
After all is said and done, the amount of cover that a singer should use depends on both their sense of aesthetics and the tessitura of their instrument. Some singers prefer a more covered sound, and they tend to focus on the heavier repertoire; it would not be pleasing to hear the Rossini repertoire, for example, with the same amount of cover as Verdi or Puccini operas. Tessitura has to do with where a singer's breaks lie and where the voice sits comfortably. A tenor who easily negotiates Fs and Gs, for example, would not need as much cover as a baritone who struggles to reach a G. The best advice to a singer is to consult your teacher about this issue.
In conclusion I would just like to talk about what teachers can do to better teach the idea of vocal cover. Some singers respond better to images; for example, giving a singer a color or an analogy to brighten or darken the sound can help a teacher find the proper amount of modification. Other singers respond better to physiological instructions such as "raise your soft palate" or "elongate that vowel." Whatever tactic the teacher chooses, it is important for the student to get to know their own voice and understand the idea of cover.
Singing is about producing the most natural and beautiful sound that the body is capable of. Singers will find that an understanding of cover actually helps the natural voice come out more, and that the pitfalls of breaks and difficult vowels can be neatly avoided.

Source: Journal of Singing, November/December 2004

"Is Recognition of Emotion in Music Performance an Aspect of Emotional Intelligence?"

This experiment measured the relationship between the ability to recognize emotion in music and an individual’s emotional intelligence, or their ability to understand, “read”, and manage emotions. I want to start out with some psychology background and definitions that aren’t in the article so you can understand the article better.

There are many theories about emotions, the two main ones being that there are fundamental emotions (the dominant approach) and that there are no basic emotions, only dimensions (a continuum). Focusing on the theory of fundamental emotions, experts have suggested a variety of basic emotions ranging from happiness to contempt. These theories overlap on six basic emotions: happiness (joy), fear, surprise, sadness, anger, and disgust. This study focused on four of these: happiness, sadness, anger, and fear.

Two other terms used throughout the article are the "significance" and the "correlation" of results. When results are significant, it simply means that they are unlikely to have been caused by chance alone. A correlation describes how two variables are related and ranges from -1 to +1. A positive correlation (closer to +1) means that as one variable goes up, the other goes up. A negative correlation (closer to -1) means that as one variable goes up, the other goes down; positive and negative correlations can be equally strong. A correlation near zero means that the two variables aren't related at all. In this study, correlations roughly above +.50 or below -.50 are treated as significant. This will make more sense as I get further into the experiment. Keep in mind, however, that if two variables are correlated, this does not mean that one caused the other.
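As a concrete illustration of what a correlation coefficient measures, here is a minimal sketch using made-up numbers (these are not the study's data):

```python
import numpy as np

# Invented scores for six hypothetical participants -- not the study's data.
msceit = np.array([92, 101, 110, 118, 126, 135])
music  = np.array([55,  60,  58,  70,  74,  80])

r = np.corrcoef(msceit, music)[0, 1]
print(round(r, 2))  # close to +1: higher MSCEIT scores go with higher music-test scores
```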

As stated before, this study focused on the relationship between emotional intelligence and the ability to recognize emotion in music performance. Twenty-four undergraduate students participated in this study. They took an emotional intelligence test called the MSCEIT (Mayer-Salovey-Caruso Emotional Intelligence Test). This test measures four things: "perceiving emotions, using emotions to facilitate thought, understanding emotions, and managing emotions." These factors are measured using pictures, generating an emotion and then matching sensations to it, identifying combinations of emotions, and so on. The researchers compared these results to the results of the musical test.

The musical test was made up of three piano pieces that were each played with the four different emotions. The pieces were Prelude No. 6 in D minor by Bach, Bartók's "Children's Song" in C major, and Persichetti's "Dialogue" No. 3, Andante. These pieces were selected because of their length, their contrasting styles, and because the natural emotion contained within each piece is relatively neutral. One of the authors, an amateur pianist (B.R.), performed the pieces.

The participants took the MSCEIT test at least twenty-four hours before they came to the lab to hear the recordings. The performances stayed in the same order: Bach first, Persichetti second, and Bartók third. The "normal" performance was always first and the four emotional versions were played in random order afterwards. The participants rated the emotional content of each performance on a scale of 1-10. This table shows the difference between the emotional performances and the normal performance in the duration of intervals and in loudness (show figure 1).

The MSCEIT test scores ranged from 78-142. A man held the highest score, but there was a tendency for women to score higher; the correlation between gender and score was only .14, which is not significant. The results of the music test were similar: women tended to score higher on it as well, with a correlation of .21, also not significant. Each piece had different results. This figure shows the differences between the pieces (show figure 2).

The Bach piece had the least success in conveying the emotions. The normal performance was rated as sounding sad, which makes sense because it is in a minor key and at a somewhat slower tempo. The Bach was successful in conveying only two of the four emotions: happiness and anger.

When played normally, the Bartók piece was rated as being relatively happy which also makes sense because it is in a major key and has a quicker tempo. This performance was slightly more effective than the Bach, conveying three of the intended emotions- everything but happiness. This could be because the normal performance was rated as happy.

The Persichetti was the most successful, conveying all four emotions. The normal performance was rated as a little sad, but this didn't affect the other emotion ratings as it did in the other two pieces.
The level of musical training also varied greatly among participants, ranging anywhere from zero to fifteen years, but the correlation between years of musical training and scores on the musical test was a mere .08. The correlation between the total scores of the two tests (MSCEIT and music test), however, was significant and positive (.54). As I said before, this means that as one score rose, the other did as well. The MSCEIT test was split into a few different categories, two of which were the experiential score and the strategic score; the experiential score was significantly and positively related to the score on the music test.

The results tell researchers that an individual's ability to read emotion in real life and in music performance are related. There is a difference between recognizing emotion in a music performance and recognizing the emotion built into the music's structure itself, which includes "mode, pitch register, range, and contour, dissonance, harmonic progression, and rhythm…"
As any performer knows, playing a piece involves a lot of emotional involvement. It wouldn't seem right to play an inherently happy or even neutral piece sadly. On one hand, this experiment tests an individual's emotional recognition; on the other hand, it is "somewhat like changing one's tone of voice or facial expression in order to disguise one's true feelings." What the performer was playing and what is natural for the music were in conflict in this experiment. In other words, in this experiment the performer was deliberately trying to play each piece a certain way; in a real performance, it would take even more emotional intelligence to detect the emotion the performer is conveying.

Much more research needs to be done on this topic, for a couple of reasons. The sample of participants was small, so the experiment needs to be repeated with more people. Also, the music was played by one of the authors; there might be a difference in the results if the performer were a professional.

This experiment provides a lot of insight into how musical performance and emotional intelligence are positively and significantly correlated. The results encourage us to pay more attention to auditory events, since we can interpret the emotions involved in them.

Music and Language: A Developmental Comparison

Music and Language: A Developmental Comparison
Erin McMullen & Jenny R. Saffran


The basic idea of this article is to compare music and language and the links between them. It draws on experiments with infants and adults to discover how music and language each develop.
The first section of this article talks about the structures of language and music and what we learn. The authors compare the two by pointing out that each is built from a limited set of sounds (notes or speech sounds) drawn out of a larger possible set of sounds. Sounds in both music and language are interpreted in categories. Musical materials can be grasped categorically, even by nonmusicians; adults who were taught labels for musical intervals, like "Here Comes the Bride" for a perfect fourth, recognize those intervals categorically. The article then goes on to describe how infants from other countries hear things differently; an example of this would be Japanese babies treating "r" and "l" as the same sound. The authors also say that infants prefer certain sounds over others, such as consonant intervals over dissonant intervals; in an experiment with monkeys, the monkeys too preferred consonance over dissonance. The article then goes into great detail, with statistics, about the age at which infants begin to learn the language in their environment: they learn the sounds and get used to them before they can even talk. Basically this portion of the article focused on how babies hear speech sounds and at what age they start to remember a melody that was played to them.
The next section of the article gets into the particular structure of language and music. Patterns of rhythm, stress, intonation, phrasing, and contour most likely drive the early learning in both language and music. It talks about how newborn infants prefer and recognize their mother's voice because of what they heard in the womb. Fetal learning happens because the fetus hears the mother's native language and its rhythm, allowing infants to detect the differences between languages. It is likely that infants likewise learn about musical rhythm if the mother sings while pregnant. Even after birth, infants continue to build on this learning about rhythm, both musical and spoken. When moms and dads play lullabies for their babies or sing them nursery rhymes, it helps them learn, and infants prefer the childlike nature of these songs. The article then goes on to describe more experiments dealing with how infants react to songs played in different pitches, or to musical passages that pause at the ends of phrases rather than in the middle of phrases. It remains an open question whether infants use the same mechanism to detect these parallel cues across domains, or whether they instead learn about these properties independently.
The third section of the article explains the grammatical structure of language and music. The major theoretical position on human speech has been that infants come prewired with a "universal grammar," a dedicated speech system containing a combination of standard knowledge and toggle switches for certain aspects of native languages. Marcus and colleagues showed that infants exposed to a collection of short sentences following a simple pattern, such as AAB, will distinguish it from a sentence that fails to conform to that pattern, such as ABA, indicating that by age 7 months humans are capable of this kind of pattern recognition. The article describes more experiments with infants showing that infants prefer grammatical sentences over ungrammatical sentences. The authors also mention that Western listeners preferentially end pieces on the tonic, less frequently on other notes in the tonic chord, still less frequently on other notes within the scale, and rarely on notes outside of the diatonic context. Through many more experiments dealing with brain responses, they lead into the emotions that music produces in people.
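The AAB/ABA distinction from the Marcus study is easy to state concretely; the little sketch below just reports which abstract repetition pattern a three-syllable string follows (the syllables are invented examples, not stimuli from the article):

```python
def pattern_of(syllables):
    """Return the abstract repetition pattern of a 3-item sequence, e.g. 'AAB' or 'ABA'."""
    a, b, c = syllables
    if a == b == c:
        return "AAA"
    if a == b:
        return "AAB"
    if a == c:
        return "ABA"
    if b == c:
        return "ABB"
    return "ABC"

print(pattern_of(["ga", "ga", "ti"]))  # AAB
print(pattern_of(["ga", "ti", "ga"]))  # ABA
```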
Meaning in language and music is the next section of this long article. It talks about how music can and often does bring about strong, predictable emotional responses in people, responses that may vary by culture. In the case of music, the "meaning" that adult listeners give to phrases is most strongly related to the emotional responses those phrases generate. One of the basic building blocks for this is present from early infancy; several studies have found that infants as young as 2 months old, like adults, prefer consonance to dissonance. In addition, research has demonstrated that infants prefer higher-pitched music, which often brings about positive feelings. However, adult responses to specific pieces of music are complex and most likely influenced by a variety of other factors as well. For instance, many adults report having strong reactions to certain musical pieces and to particular sections within them, including tears, heart acceleration, and "chills" or "shivers down the spine." In the case of the "chills" reaction, the emotion is linked to increased blood flow in brain regions associated with emotion, motivation, and arousal. Adult Western listeners often associate the major mode with being happy and the minor mode with being sad. In one experiment, Nawrot found that infants looked longer at happy faces while listening to "happy" music, but did not look longer at sad faces while "sad" music was played. In a nutshell, infants and adults prefer "happy" music to "sad" music because it makes them feel better.
The next section talks about memory for language and music. For successful learning to occur, the article says, young learners must be able to represent musical experiences in memory. One experiment described in this section exposed infants at home to CD recordings of Mozart piano sonata movements, played daily for 2 weeks. Following a 2-week retention interval during which the infants did not hear these musical selections, testing demonstrated that the infants were not merely remembering snippets of the music, but instead had represented aspects of the overall structure of the pieces. In yet another experiment, six-month-old infants remembered the specific tempo and timbre of music with which they had been familiarized, failing to recognize pieces when they were played at new tempos or with new timbres, although recognition was maintained when pieces were transposed to a new key.
As you can see, learning begins at a very young age; we are learning even when we may not know it. This article described many experiments and examples showing that this is true. It also mentioned how metaphors play a powerful role in directing our thinking and suggesting new insights. Whether or not music and language share a common ancestry, thinking about them as related functions may still be quite helpful in generating hypotheses that can help us better understand them as separate domains.

Music and Language: A Developmental Comparison, Erin McMullen & Jenny R. Saffran. Music Perception, Spring 2004, Vol. 21, No. 3, 289-311

Perceiving Acoustic Source Orientation in Three-Dimensional Space

Experiment conducted by John G. Neuhoff from the College of Wooster – Department of Psychology

Many studies have shown that listeners can identify where a sound is coming from, yet relatively few studies have asked whether we as listeners can also decipher which direction the sound is projecting from a given source. This experiment tries to determine just that – whether or not the human ear can perceive which direction the sound is projecting without visual cues. Assuming that the loudspeaker (refer to drawing) will not move other than in a 360 degree rotation pattern, listeners are asked which direction the speaker is pointing in relation to themselves. Obviously, Neuhoff, the conductor of the experiment, wanted to remove the listener's ability to watch the speaker as it rotated, so he decided to blindfold all of the listeners. Essentially what they were measuring was the ability of the auditory system to spatially take over for the visual system. So many studies have been done on identifying the location of a sound source because it is the auditory system that orients the visual system when you hear something; this is the localization part. By the time the projection direction comes into play, the visual system has already taken over: you see what is making the sound and then which direction the sound is projecting. This experiment attempts to eliminate the visual system to see if the auditory and spatial systems can take over for it. The auditory system is unaccustomed to identifying the orientation of a sound source without the help the visual system normally provides, and this experiment was designed to show how well it can adapt.

The subjects for all of the parts of the experiment were 18 to 25-year-old undergraduate students. They all said that they had normal hearing.

The experiment they designed tested the listeners on their accuracy for determining the facing angle of the loudspeaker. In this experiment, facing angle (point at the HELPFUL HINTS sheet) can be defined as the direction that the loudspeaker is facing in relation to the listener. In this experiment, they hoped to measure two main variables. The first was how much the distance from the loudspeaker affected the listener’s ability to gauge the facing angle of the loudspeaker. The second thing that they were trying to measure was the ability of the listener to identify the facing angle of the loudspeaker with either a constant sound as the loudspeaker rotated or only having the loudspeaker sound at the start and finish of the rotation.

The first variable had to do with this part of the experiment (point to the first row on the EXPERIMENT sheet). The listeners were placed at two different distances from the speaker (point at the EXPERIMENTAL SETTING sheet). The first group of listeners was seated .91 meters away from the loudspeaker while the second group was seated twice as far away, at 1.82 meters. Their findings were not surprising: they found that listeners were much better at identifying the facing angle when they were closer to the loudspeaker. The only hard part from there was explaining why this occurred. Neuhoff proposed that this was because of interaural level differences, also called ILDs, which are usually stronger as the sound source gets closer to the listener. This combination of the ILDs and how directly or indirectly the sound reaches us is how Neuhoff explained the fact that the people who were closer were more accurate in their estimates of the facing angle. The second part – the direct versus indirect measure of sound – is believed to be caused by the changing ratio of direct sound to reflected sound. For example, if a speaker were pointed directly at you, very little of the sound you heard would be bouncing off the wall behind the speaker; however, if the speaker were faced 180˚ away from you, the majority of the sound you heard would first be reflected off the wall before reaching you. Neuhoff believes that it is this synthesis of the ILDs and the ratio of direct-to-indirect sound that enables us to tell what angle the sound is coming from.
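One way to see why being closer helps is the standard direct-to-reverberant energy ratio from room acoustics: the direct sound falls off with the square of the distance while the reflected field stays roughly constant around the room. The sketch below uses that textbook approximation with an assumed room constant and directivity; none of the numbers come from Neuhoff's paper, it is only meant to show the trend.

```python
import math

def direct_to_reverberant_db(distance_m, room_constant=20.0, q=1.0):
    """Textbook room-acoustics approximation:
    direct energy ~ Q / (4*pi*r^2), reverberant energy ~ 4 / R.
    The room constant R (m^2) and directivity Q are assumed values."""
    direct = q / (4 * math.pi * distance_m ** 2)
    reverberant = 4 / room_constant
    return 10 * math.log10(direct / reverberant)

for r in (0.91, 1.82):  # the two listener distances used in the experiment
    print(r, round(direct_to_reverberant_db(r), 1), "dB")
# Doubling the distance costs about 6 dB of direct-to-reverberant ratio, which would
# plausibly make facing-angle cues harder to pick out at 1.82 m than at .91 m.
```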

Then they split this previously described experiment, addressing the distance at which the listener sits, into two more separate experiments (point at EXPERIMENT sheet). These measured the ability of the listener to identify the facing angle when the sound source was constant or not. The first part of this subdivision measured the listener's ability to judge the facing angle when given dynamic rotation cues, while the second section used only static directional cues. Dynamic rotation cues, which were part of the first experiment, mean that the loudspeaker was sounding the entire time it was rotating. Static directional cues, as used in the second section, mean that the loudspeaker sounded only at the beginning and at the end of the rotation. As you could probably guess, the listeners were better able to identify the angle of the loudspeaker with the constant sounding of the dynamic cues, especially when the speaker passed directly in front of the listener at some point.

The experiment also produced another interesting finding, one they did not expect to measure when they first designed the experiment. Usually the errors were no more than 60 degrees; however, they found that the number of reversals spiked around 180 degrees. For this experiment, a reversal (point at the HELPFUL HINTS sheet) means that the listener made an error of over 165 degrees. The interesting part of this finding is that reversals were most frequent when the speaker was facing 180 degrees away from the listener: the most common mistake was to report that the speaker was facing directly at them. This is surprising, because the 180-degree position – as indirect a sound as you can get from the speaker – was often mistaken for the speaker pointing straight at the listener. Neuhoff hypothesized that this may be due to the lack of a direct sound coming from a specific direction, either left or right: having the loudspeaker facing directly at you is essentially 100 percent direct sound, whereas having the loudspeaker faced 180 degrees away from the listener would be essentially 100 percent indirect sound.

Revision

ILDs (interaural level differences) are the differences between the intensities of the sound reaching each of the two ears. In theory, this degree of inequality between your two ears helps your mind figure out where the sound is coming from.

I don’t know where my last paragraph disappeared to (perhaps it took an early holiday break…however, it is more probable I accidentally erased it ☺ ), but here is a new one for all of you to enjoy.

Throughout the experiment, Neuhoff manipulated two different variables. The first was the distance between the listener and the loudspeaker. He found that listeners were much more accurate in their judgments of which direction the loudspeaker was facing when they were closer to it. He attributed this to the ILDs and the ratio of direct to indirect sound that the listener hears. Second, he set the loudspeaker either to sound constantly or to sound only in the stopped positions. His results were not surprising: he found that allowing the listener to hear the speaker as it rotated really aided them in identifying its facing angle, especially when the loudspeaker passed directly in front of the listener. This experiment was successful at measuring the auditory system's ability to identify the projection angle of a sound in the absence of visual cues.