Group 2 First Project Writeup

Agreement in Musical Experts’ Identification of Beat Levels and Their Salience

Schroeder, J., Simmons, G.

Yale University, Cognition of Musical Rhythm, Virtual Lab

1. BACKGROUND AND AIMS

1.1  Introduction

This experiment examined the salience of beat (or pulse) levels, or subdivisions, in a set of songs. Salience is a measure of how perceivable each beat level is, and is made up of a number of different variables, including volume, timbre, and pitch. The purpose of studying the number of salient pulse levels was to explore whether their variance might affect the perception of a song’s groove. A pulse level is a steady beat in the music, a stream of musical events that occur at equal, predictable intervals; it has also been defined more anecdotally as a beat you might feel compelled to tap or move along to. In many pieces of music, however, there are several possible pulse levels one could focus on. After reading Janata, Tomic, and Haberman (2011), we theorized that having more pulse levels accessible in the music might be connected with a higher groove rating. Many different factors contribute to the perception of groove, of course; in this study we wanted to isolate this one factor as well as possible to see what the relationship is.
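To make the idea of multiple pulse levels concrete, here is a toy sketch of our own (not drawn from any of the cited studies): for a song at 100 bpm, each pulse level is a stream of events at equal, predictable intervals, and a listener can lock on to any one of them.

```python
# Toy illustration of nested pulse levels (hypothetical values, for
# exposition only): each level is a stream of onsets at equal intervals.
beat_s = 60 / 100  # quarter-note period at 100 bpm, in seconds

levels = {
    "half-note":    [round(i * 2 * beat_s, 2) for i in range(5)],
    "quarter-note": [round(i * beat_s, 2) for i in range(9)],
    "eighth-note":  [round(i * beat_s / 2, 2) for i in range(17)],
}
for name, onsets in levels.items():
    # Slower levels coincide with every 2nd or 4th onset of faster ones,
    # so several levels are simultaneously available to tap along to.
    print(f"{name:>12}: {onsets}")
```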

1.2  Previous Research

Our initial inspiration was drawn from the study “Sensorimotor Coupling in Music and the Psychology of the Groove” by Janata, Tomic, and Haberman (2011). Many other studies have investigated the meaning of ‘groove’ and the rhythmic properties related to it, whether by comparing microtiming deviations (Gouyon, Hornstrom, Madison, & Ullen, 2011) or by categorizing the prominent factors “regular-irregular, groove, having swing, and flowing” (Madison, 2006).

Methods in this literature include assessing correlations between listeners’ ratings and a number of quantitative descriptors of rhythmic properties across one hundred music examples from five distinct traditional music genres (Gouyon, Hornstrom, Madison, & Ullen, 2011), and comparing differences in ratings across sixty-four music examples taken from commercially available recordings (Madison, 2006).

Janata et al. explored the urge to move in response to music using phenomenological, behavioral, and computational techniques. Arguing that groove is a psychological construct, they showed that the “degree of experienced groove is inversely related to experienced difficulty of bimanual sensorimotor coupling under tapping regimes with varying levels of expressive constraint and that high-groove stimuli elicit spontaneous rhythmic movements” (Janata, Tomic, & Haberman, 2011).

1.3  Present Research

Does the salience of beat-level pulses affect perceived groove ratings?

Our initial proposal was to have a panel of musical experts rate the beat levels in songs, both to confirm our own judgments and to let us choose songs with a variety of beat levels before asking subjects to rate their grooviness. Because of difficulties in collecting data and inconsistencies between the experts’ opinions, however, we have decided to focus on the first part of our initially proposed project.

2. METHOD

2.1  Participants

There were five participants, all students from the Yale School of Music, as well as one professor. The participants were contacted by email and were not offered any compensation.

2.2  Stimuli

The experiment consisted of a Qualtrics survey containing fourteen 30-second excerpts of songs of various styles and genres, supplied by Petr Janata and previously used in Janata et al. (2008). Each of the fourteen excerpts constituted a trial, and the variables of interest were the number of beat levels present in the song, the salience of each of those beat levels, and the primary instrument contributing to the creation of each beat level. Salience was rated on a scale from 0 to 10, and the labelling of instrumentation was left up to the subjects. The tempos were found using the toolbox described in Tomic & Janata (2008), and a few were halved because the detected tempo was clearly associated with a faster metric level (e.g., a detected 211 bpm was halved to a felt pulse of 105.5 bpm). One song (Step it Up Joe) was excluded due to a lack of information.

| Song | Artist | Genre | Tempo (bpm) | Groove Rating |
| --- | --- | --- | --- | --- |
| Superstition | Stevie Wonder | Soul | 99 | 108.7 |
| Yeah! | Usher feat. Lil Jon & Ludacris | Soul | 211 (really 105.5) | 89.7 |
| Freedom of the Road | Martin Sexton | Folk | 25 | 59.7 |
| What a Wonderful World | Louis Armstrong | Jazz | 36 | 66.4 |
| Beauty of the Sea | The Gabe Dixon Band | Rock | 63 | 32.1 |
| Thugamar Fein an Samhradh Linn | Barry Phillips | Folk | 33 | 29.3 |
| The Child is Gone | Fiona Apple | Rock | 195 (really 92.5) | 62.3 |
| Mama Cita (Instrumental) | Funk Squad | Soul | 95 | 101.6 |
| Citi Na GCumman | William Coulter & Friends | Folk | 20 | 35.2 |
| Summertime | Ella Fitzgerald & Louis Armstrong | Jazz | 99 | 67.9 |
| Goodies | Ciara feat. Petey Pablo | Soul | 50 | 92.3 |
| Step it Up Joe | Mustard’s Retreat | Folk | n/a (excluded) | n/a |
| In the Mood | Glenn Miller & His Orchestra | Jazz | 162 (really 81) | 96.9 |
| Squeeze | Robert Randolph & The Family Band | Rock | 58 | 63.4 |


2.3  Task & Procedure

Participants were asked to complete a survey which presented the 14 song excerpts in random order, each 30 seconds long. For each excerpt they were asked to identify the salience of up to five beat levels, with the first being the slowest and the last the fastest. They were instructed to record only those levels they believed were clearly present in the music, not levels they could find only by virtue of their musical training. They were also asked to name the instrument that most contributed to the creation of each beat level.

[Figure omitted: screenshot of a single survey trial]

This figure showed the basic setup of each trial. An additional space was provided in each trial for miscellaneous or explanatory comments.

2.4  Data Collection & Analysis

The data were collected through the Qualtrics website and exported to an Excel sheet. The analysis examined the experts’ agreement on the number of beat levels in each song, as well as on which of those levels was most salient. These measures were then compared with the groove ratings and tempos reported in Janata et al. (2008).
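Below is a minimal sketch of this analysis, assuming the export is reshaped into one row per expert-by-song response; the column names and the expert responses are hypothetical placeholders, while the groove ratings are the Janata et al. (2008) values from the table above.

```python
# Minimal sketch of the agreement analysis (hypothetical column names and
# expert responses; groove ratings are the values listed in the table above).
import pandas as pd
from scipy.stats import spearmanr

responses = pd.DataFrame({
    "song":   ["Superstition"] * 3 + ["Summertime"] * 3 + ["Beauty of the Sea"] * 3,
    "expert": ["E1", "E2", "E3"] * 3,
    "n_levels":     [4, 4, 3, 3, 3, 2, 2, 2, 2],  # beat levels each expert reported
    "most_salient": [2, 2, 2, 1, 1, 2, 1, 1, 1],  # index of the most salient level
})

def modal_agreement(series: pd.Series) -> float:
    """Fraction of experts who gave the modal (most common) answer."""
    return series.value_counts(normalize=True).max()

per_song = responses.groupby("song").agg(
    agree_n_levels=("n_levels", modal_agreement),     # agreement on # of levels
    agree_salient=("most_salient", modal_agreement),  # agreement on most salient
    median_levels=("n_levels", "median"),
)

groove = pd.Series({"Superstition": 108.7, "Summertime": 67.9,
                    "Beauty of the Sea": 32.1})
rho, _ = spearmanr(per_song["median_levels"], groove[per_song.index])
print(per_song)
print(f"Spearman rho, median # of beat levels vs. groove rating: {rho:.2f}")
```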

More Group Project Citations

How Hooker Found his Boogie – A Rhythmic Analysis of a Classic Groove

Analyzes the rhythmic components in John Lee Hooker’s boogie. Hooker recasts a signature riff from a ternary to a binary beat subdivision, paving the way for the triple-to-duple shift that characterized mid-century American popular music. Further, the boogie’s hypnotic feel is attributed to two psychoacoustic phenomena: stream segregation and temporal order misjudgment. Stream segregation occurs when the musical surface is divided by the listener into two or more auditory entities (streams), usually as a result of timbral and registral contrasts. In Hooker’s case, these contrasts occur between the guitar groove’s downbeats and upbeats, whose extreme proximity also blurs their temporal order. These expressive effects are complemented by global and gradual accelerandos that envelop Hooker’s early performances.


The secret ingredient: State of affairs and future directions in groove studies

In African-American music studies (jazz, soul, funk, rock), ‘groove’ is a concept with strong, positive connotations. Its principal meaning describes the music’s effect on musicians and listeners: music with a good groove incites people to engage emotionally with the music and to participate with their bodies. Groove makes people dance, bob their heads, and tap their toes. There have been two major scholarly approaches to the groove phenomenon: one focusing on groove as a process, another explaining it from a structural perspective. This double meaning has a basis in the parlance of musicians themselves: jazz musicians use the verb ‘to groove’/‘grooving’ to denote the process or activity of playing successfully together in such a way that musicians and listeners participate both emotionally and bodily in the music. Musicians also use the noun ‘a groove’ to talk about a particular pattern of composition or arrangement. ‘Grooving’ (in the verbal sense) can happen on the structural basis of ‘a groove’ (in the substantive sense). The inverse is also true in a beat-oriented musical context: when musicians are ‘grooving’, they do it on the structural basis of ‘a groove’, which can be described in an analytical way.


Perception and analysing methods of groove in popular music

The rhythm (groove) of Western popular music cannot be described in just one string of notes or rhythmic symbols; all the interacting rhythms played by different instruments have to be included in the analysis. The instruments have different perceived beat-weights, that is, different strengths to establish meter. On a metrical level chosen by the listener there exists an off-beat, an event in the temporal middle between beats. If an instrument with a higher beat-weight is played off the beats, an off-beat feeling is produced. There is a qualitative difference in perceiving grooves with higher and lower ‘degrees of off-beat’. Finally, there seems to exist a relation between the perceived meters and the motor action a listener uses to perceive the rhythm. It is possible for a listener to move in different ways simultaneously and, according to his or her motor actions, perceive several meters at the same time.

A Review of Musical Narrative


From the beginning of our species, humans have been telling stories; we’re obsessed with them. From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them. We can see narrative all around us: in the stories we enjoy in books, movies, and theater productions, but also in the histories we teach and pass down, in the way we communicate daily through the telling of stories, and in the self-narrative of memories that we are constantly making and remaking for ourselves. With them we find meaning, we imagine, and we emote; storytelling is uniquely human, and evokes the very behaviors that are generally thought to define what makes us human.

This phenomenon hasn’t gone unnoticed, though; story and narrative have been topics of scholarly thought and research for millennia. Aristotle’s Poetics is an in-depth analysis of the elements of stories and their effects, and countless English and humanities professors have devoted themselves to studying how narratives are created, how they work, and why they’re important. Gratier and Trevarthen (2008), following Bruner (1990), write that narrative is “a fundamental mode of human collective thinking — and acting — and that its basic function is the production of meaning or ‘world making’.”[1] Stephen Malloch defines narrative as fundamentally temporal and intersubjective: “Narratives are the very essence of human companionship and communication. Narratives allow two persons to share a sense of passing time, and to create and share the emotional envelopes that evolve through this shared time.”[2] These are the concepts driving my experiment, though I will be equating the terms “narrative” and “story”, and referring to both as a perceived chain of causally connected events (defining ‘event’ as loosely as possible) by or from which we generate meaning.

A particularly fascinating area of narrative research concerns music. Michael Imberty says, “All interactive musical communication has a regular implicit rhythm that has been called the pulse of the interaction. It presents also a sequential organization whose units are most often shapes or melodic contours. Finally, it transmits something like a content that can be described as narrative.”[3] For the purposes of this literature review, I will focus on two aspects of narrative in music, meaning and emotion, as these two subjects are the most researched and the most pertinent to my experiment. Questions about how music produces meaning, why music arouses our emotions, and how music can tell stories without words are large questions that researchers continue to pursue today. However, there is debate over whether one can even truly speak of narrative in music; as Aniruddh Patel notes in his book Music, Language, and the Brain, there is a wide spectrum of ways people approach the question of meaning in music. There are those who argue that music can be neither meaningful nor meaningless, as “meaning” is defined by these researchers as the symbolic connection of an idea or concept represented by a sound or collection of sounds. Music is unable to do this; an F major chord doesn’t readily translate to any one idea or concept other than a chord characterized by the notes F, A, and C. Others argue for a more inclusive definition, in which meaning arises whenever an object or event brings something to mind other than the object or event itself. This would imply that meaning is “inherently a dynamic, relational process”: the meaning changes with context and interpretation, and it isn’t something contained within the object or event, but rather something generated by the observer.[4] These are only a couple of the many theories of musical narrative and meaning, but they serve to demonstrate the wide variety of approaches that have been used to try to understand this phenomenon. Over the course of this literature review, I will review a few more of these theories, as well as discuss the methods and implications of several experiments that have investigated the intersection of narrative, music, and emotion.

Many investigations into this subject take the form of intellectual conjectures or theories, followed by analyses of different pieces of music. For example, Candace Brower posits a theory of how musical meaning, or metaphor, is created, relying on two ideas from cognitive science: that pattern recognition and matching play a part in thought, and that we map our bodily experiences onto the patterns of other domains. Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as to patterns conventionally found in music) and cross-domain mapping (matching patterns of music onto bodily experiences, e.g., the idea of strong and weak beats, higher and lower pitches, expansion and contraction), we create musical meaning. Brower then shows these concepts at work in an analysis of Schubert’s “Du bist die Ruh”. For example, she points out the suspensions found throughout the piece as indicators of “resisting gravity”, a very physical concept. This is an example of cross-domain mapping, which helps listeners construct a narrative; she also claims that the narrative is told through the varied repetition of the melody, through details like the change from an A♭ to an A♮ symbolizing the blocking of an attempted move to a more stable area. It’s through these two processes that we unconsciously analyze a piece of music as we listen, creating narrative even from music without lyrics (though this piece does have them).[5]

Another, more neurological approach is to use brain scans, usually recorded as the music is played, to see which brain regions are activated at certain points in the music. This relies on a property that stories and music share: both unfold in time. The theory behind many of these studies is that narrative and emotion in music are created through both the fulfillment and the denial of expectations set up by the music; this reaches all the way back to Leonard Meyer’s discussion of musical meaning and emotion.[6] In particular, experiments have looked at the phenomenon of “chills”, when a phrase in a piece of music produces a physiological response of shivers and possibly even tearing up. Ten subjects identified musical passages that consistently gave them chills, and these were played while their brain activity was monitored (mostly with PET scans); music that didn’t give them chills served as a control. These chills produced activity in the nucleus accumbens, a kind of “reward center” in the brain, and brain responses similar to joy and euphoria. Such scanning shows that there are distinct brain states for music, and that they match up well spatially with states associated with emotional responses to other stimuli. In addition, the brain areas associated with listening to music are believed to be evolutionarily ancient structures, so it is thought that music somehow appropriates a more ancient system in order to give us pleasure.[7]

A third theory, put forward by Laura-Lee Balkwill and William Forde Thompson, states that emotion and meaning are communicated in music through a combination of universal and cultural cues. In other words, some cues within music are recognized worldwide as signaling certain emotions, while others are specific to particular cultures, such as Western musical culture. To test this theory, Balkwill and Thompson set up an empirical study in which Western listeners were exposed to 12 Hindustani “ragas”, each intended to convey one of four emotions: joy, sadness, anger, or peace. The subjects then rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range. The results showed that subjects were sensitive to three of the four emotions portrayed by the ragas (joy, sadness, and anger), and these judgments of emotion were related to the perception of the musical attributes. This suggests that listeners are able to extract emotion from unfamiliar music using universal cues, even in the absence of their usual cultural cues.[8]

For all the theories out there, of which the above are only a few, we seem to possess a very healthy folk knowledge of meaning, emotion, and narrative in music. Many studies done with children give evidence of this. By the ages of 11–13, children can reliably match stories with different structures (directed action with solution and closure, action without direction or closure, and no action or direction) to three different styles of Western music (La fille aux cheveux de lin by Debussy; the prelude in G, op. 28 by Chopin; and the prelude op. 74, no. 4 by Scriabin). Results showed a very high degree of agreement among the children: the Chopin piece was matched with the well-structured story, with directed action and closure; the Debussy piece was matched with the story with no action or direction; and the Scriabin piece was matched with the story with action but no direction or closure.[9] Even at ages 5–6, children are able to use the perceived emotion in music to judge other stimuli. Children heard a neutral story read aloud while either sad music (minor, slower tempo), happy music (major, faster tempo), or no music was played. They were then asked questions about the story and told to pick a sad face, a happy face, or a neutral face to describe certain moments in it. Children who heard the happy music were more likely to interpret the story and its character as happier, while children who heard the sad music interpreted them as sadder.[10] In a study by Mayumi Adachi and Sandra Trehub, children of different ages and of no particular musical ability were asked to perform either Twinkle, Twinkle Little Star or the ABC song, once to make people feel happy and once to make people feel sad. These renditions were recorded both aurally and visually, and each was presented to other children as well as adults, who judged which version sounded happier. Children were able not only to produce versions of the songs that effectively communicated happiness or sadness, but also to judge this from both audio and visual input. Older children and adults tended to do better than younger children, providing evidence for the idea that our perception of emotion, meaning, and narrative in music is a skill that develops over time.[11]

As is clear from this review, a fair amount of research has been done on this subject, spanning a wide breadth of methods and topics. With my experiment, I hope to further this body of work and provide a more concrete look at the interaction of narrative and music.


Bibliography

[1] Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies 15.10-11 (2008): 122-58. PsycARTICLES. Web.

[2] Malloch, Stephen N. “Mothers and Infants and Communicative Musicality.” Musicæ Scientiæ, Special Issue: Rhythm, Musical Narrative, and the Origins of Human Communication (1999-2000): 29-57.

[3] Imberty, Michael, and Maya Gratier. “Narrative in Music and Interaction Editorial.” Musicae Scientiae 12.1 Suppl (2008): 3-13. PsycARTICLES. Web.

[4] Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.

[5] Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.

[6] Meyer, Leonard B. Emotion and Meaning in Music. Chicago: U of Chicago, 1956. Web.

[7] Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.

[8] Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.

[9] Ziv, Naomi. “Narrative and Musical Time: Children’s Perception of Structural Norms.” Proceedings of the Sixth International Conference on Music Perception and Cognition (2000): n. pag. Web.

[10] Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.03 (2006): 303. PsycARTICLES. Web.

[11] Adachi, Mayumi, and Sandra E. Trehub. “Decoding the Expressive Intentions in Children’s Songs.” Music Perception: An Interdisciplinary Journal 18.2 (2000): 213-24. Web.

Project Outline

Does the number of levels of beats or pulses in a song enhance people’s perception of the groove? How does tempo affect this perception?

As for our experimental design, we plan to have 6 conditions, created by crossing 2 variables:

# of Pulse Levels: We plan to enlist a panel of musical experts to determine the number of salient pulse levels present in the songs we use in the experiment, and then divide the songs into categories based on this measurement: Low (1–2 pulse levels) and High (3–5 pulse levels).

Tempo: Slow (≤ 74 bpm), Medium (75–99 bpm), Fast (≥ 100 bpm). The resulting 2 × 3 design is shown below, followed by a sketch of how songs would be assigned to conditions.

|              | Low # of Pulse Levels | High # of Pulse Levels |
| ---          | ---                   | ---                    |
| Slow Tempo   | Low # + Slow Tempo    | High # + Slow Tempo    |
| Medium Tempo | Low # + Medium Tempo  | High # + Medium Tempo  |
| Fast Tempo   | Low # + Fast Tempo    | High # + Fast Tempo    |
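The binning itself is simple; here is a minimal sketch under the thresholds above (the function name and the example calls are hypothetical):

```python
# Sketch of assigning a song to one of the six conditions. Cutoffs mirror
# the outline (1-2 vs. 3-5 pulse levels; <=74 / 75-99 / >=100 bpm); the
# function name and example values are hypothetical.

def assign_condition(n_pulse_levels: int, bpm: float) -> str:
    levels = "Low" if n_pulse_levels <= 2 else "High"
    if bpm <= 74:
        tempo = "Slow"
    elif bpm <= 99:
        tempo = "Medium"
    else:
        tempo = "Fast"
    return f"{levels} # + {tempo} Tempo"

print(assign_condition(4, 99))   # -> High # + Medium Tempo
print(assign_condition(2, 105))  # -> Low # + Fast Tempo
```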

We plan to randomly select songs (not yet chosen) from the list compiled by Janata et al. and use these in the experiment; these are the songs that will be reviewed by the panel of experts.

Subjects, after listening to the pieces, will be asked to rate the grooviness of the music using the same scale as Janata et al.: a 7-point scale with 1 = least groove and 7 = most groove.

The experiment will likely be administered as an online questionnaire with the music embedded in it. Participants will respond to each song directly after it plays and will be told to feel free to move along with the music. There will also be questions addressing the musical and cultural background of each subject and whether they are familiar with the songs played.

We hypothesize that songs reported as having more salient beat levels will be rated as groovier, and that songs with slow tempos will be rated as groovier than those with faster tempos.

Group Project Questions – Jordan

A couple of questions I think would be interesting to explore:

I think it would be interesting to see if we could replicate findings from previous studies suggesting that musicians are better able to access different levels of pulse in music, and then take it further: do more or less salient and more or less apparent pulse levels contribute to people’s perception of groove, or to their desire to move to the beat?

The other question I’ve been thinking about involves microtiming: we’ve seen how slight variations in timing can give music expressiveness and a more human quality, even evoking emotion in a way that more exact, robotic renditions don’t. This effect has a limit, however; in studies presenting varying degrees of microtiming, subjects chose an averaged version as the best sounding. My question is, why? Why do mistakes, or inexactness, give the appearance of being more human, more intentional, more emotional? Similar findings appear in other areas of cognitive science as well, so I’d be interested to see whether we can come up with a way to test why this is so, or at least develop some theoretical hypotheses.

How does the interaction of music and story affect perceptions of emotion, meaning, and structure? Additionally, how does this affect memory and comprehension?

Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.

This article proposes a theory that emotion is communicated in music through a combination of universal and cultural cues, and that it is these cues we use to perceive and understand the emotions the music conveys. Western music listeners were exposed to 12 Hindustani “ragas”, each intended to convey one of four emotions: joy, sadness, anger, or peace. The subjects then rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range. Subjects were sensitive to joy, sadness, and anger, and these judgments were related to their judgments of the musical attributes, suggesting that listeners can extract emotion from unfamiliar music and that musical cues help them do this.


Boltz, Marilyn. “Temporal Accent Structure and the Remembering of Filmed Narratives.” Journal of Experimental Psychology: Human Perception and Performance 18.1 (1992): 90-105. PsycARTICLES. Web.

A study was conducted in which filmed narratives were interrupted by commercials either between major episode boundaries (so-called “breakpoints”) or within those episodes (“non-breakpoints”). Those who experienced the narratives with the more logical commercial placements showed better recall of story details, better recognition, and better memory for temporal information. This suggests that people use episode boundaries for attention and remembering, and also that narratives have a natural rise and fall: within the larger arc of the story are smaller arcs that form a regular structure, one also found in other forms of media.


Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.

This article puts forward a theory of how musical meaning, or metaphor, is created, and relies on two theories from Cognitive Science: that pattern recognition and matching plays a part in thought, and that we map our bodily experiences onto the patterns of other domains.  Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as matching patterns from the piece to patterns conventionally found in music) and cross-domain mapping (matching patterns of music onto bodily experiences, i.e., the idea of strong and weak beats, higher and lower pitches, expansion and contraction, etc.) we create musical meaning.  The author explains the concepts, and then applies them in an analysis of Schubert’s “Du bist die Ruh”.


Cohen, A. J. (2001). Music as a source of emotion in film. In Juslin P. & Sloboda, J. (Eds.). Music and emotion. (pp.249-272). Oxford: Oxford University Press. Google Scholar. Web.

This chapter discusses the role of music in film: what it adds to the narrative and how it evokes emotion. Film music is a bit of an oddity, as it is directed not at the characters of the film but at the audience. Cohen outlines six different ways music in film evokes our emotions.


Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies15.10-11 (2008): 122-58. PsycARTICLES. Web.

The researchers looked at the non-verbal communication between mothers and infants, theorizing that the narratives conveyed through gestures and other non-verbal channels help the child become a being that participates in culture. They also look at how these exchanges are organized in time, and at the end of the article they provide empirical evidence for their claims.


Imberty, Michael, and Maya Gratier. “Narrative in Music and Interaction Editorial.” Musicae Scientiae 12.1 Suppl (2008): 3-13. PsycARTICLES. Web.

This article speculates on the definition of narrative, including wordless narrative such as music. The authors also focus on the musicality of communication, including gesture and speech, and on the temporality of both narrative and music.


Klein, Michael Leslie, and Nicholas W. Reyland. Music and Narrative Since 1900. Bloomington: Indiana UP, 2013. Print.

This book focuses directly on the connection between music and narrative, especially in recent years, and seeks to challenge the claim that some modern music has lost its narrative.  The book looks at the phenomenon of narrative and music over time, tracking how it has changed, and the effect of other types of narrative on contemporary music and musical narrative.  There are also many different analyses of various pieces presented in this book, which display musical narrative at work.


Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.

This article serves as a literature review of studies using many different kinds of brain scans, looking for commonalities between the processing of music and the processing of language. These studies suggest strong connections and similarities between the two (similar areas of processing, similar patterns), but also slight differences, such as the fact that music is likely to be processed more bilaterally, which suggests that the capacity to be affected by music is likely innate. This also provides evidence for the idea that the faculties we use in the cognition of music may help facilitate language acquisition. The studies also provide evidence for the localization of certain components of music cognition; for example, music activates the areas that generally deal with emotion, as well as many other specific areas.


Miell, Dorothy, Raymond A. R. MacDonald, and David J. Hargreaves. Musical Communication. Oxford: Oxford UP, 2005. Print.

This book seeks to bring together ideas and concepts from many different fields to look at the topic of musical communication. Researchers cover themes such as “Music and meaning, ambiguity, and evolution”, “Singing as communication”, and “The role of music communication in cinema” in an attempt to understand how humans share emotions, intentions, meanings, and stories with each other through music.


Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.

Patel’s book explores the connection between language and music, including such topics as rhythm, melody, syntax, and meaning. Patel reviews the relevant studies and summarizes the current state of the joint scientific study of music and language.


Porter-Reamer, Sheila Veronica. Song Picture Books and Narrative Comprehension. N.p.: n.p., 2006. Web.

This study sought to compare the effects of reading a story vs. reading a story with a song, to measure whether narrative comprehension was better with song. While the results showed no such effect, they did show that memory improved with the song picture books.


Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.03 (2006): 303. PsycARTICLES. Web.

This article details an experiment run where children heard a neutral story read aloud while either sad music (minor, slower tempo), happy music (major, faster tempo), or no music was played.  The kids were then asked questions about the story, and told to pick either a sad face, a happy face, or a neutral face to describe certain moments in the story.  Kids who heard the happy music were more likely to interpret the story/character as happier, while kids who heard the sad music interpreted the story/character as more sad.  This shows that music affects the perception of other stimuli and stories.


Stories and Music

From the beginning of our species, humans have been telling stories; we’re obsessed with them. From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them. With them we find meaning, we imagine, and we emote; storytelling is uniquely human, and evokes the very behaviors that are generally thought to define what makes us human.

Often found coupled with storytelling is music. We often use music to tell stories that evoke emotions, or make us think and imagine. The relationship between narrative and music is difficult to parse out, however. You can have stories without music; can you have music without a story? It seems obvious to say yes: music may not have the characters, actions, and plot that we recognize so easily in stories, but it has themes. It has recurring tones and sounds, interactions between those themes, and a syntax as complex as that of the languages that form stories without music. Perhaps most important, both music and story are essentially experiences: they unfold in time and must be experienced. So how do we understand this relationship between music and stories? Is music a specialized type of story, simply part of a much larger concept of stories? Or are they two separate things that interact?

Some questions Patel raises in his book, Music, Language, and the Brain, are how we define and understand the meaning created by music, and whether emotion is inherent in the music, entirely separate from it, or able to exist both within and apart from it. He enumerates a few different theories about how these concepts may be related, but ultimately leaves the question unanswered. With the aim of understanding these connections better, along with music’s connection to story, I have several questions I wish to explore. For example:

Can a story change the perception of the emotions of a musical piece? Or, vice versa, can a musical piece played during or after a story change emotional perception?

What contributes to perceiving a narrative in a piece of music? Do different rhythms give rise to different narrations? Perhaps asking subjects to create narratives for many different rhythms will reveal some consistencies or similarities.

How do structures of music relate to structures of stories? Do people recognize and connect the two? Perhaps by finding or creating a story with a similar arc to a piece of music, I could ask people to identify the larger structure of each and see how well the two match, whether there are any similarities, and so on.

Are people consistent in creating narratives for music? For instance, do people generally create similar-sounding narratives for the same piece of music?

Expectation theory holds that we create unconscious expectations when listening to music, and this has been demonstrated with short groups of tones. Do stories create the same types of expectations? Perhaps using either long or short sentences to create a seeming “rhythm”, and then switching to the other type, would create a similar violation of expectation and make the content more difficult to remember.

Does music in stories help us remember things better? Perhaps setting a story to music would help subjects remember its content better than those who received the story without music.

There hasn’t been a lot of work done in this area. Patel’s book is a good general overview of research that was current at the time, but rather than giving a conclusive answer to each question, Patel offers several possible theories. The next step will be to seek out further research that focuses specifically on what I’m interested in, to see whether there is any precedent for the types of experiments I want to run.

*These are many questions, and while it is unlikely that I will be able to create an experiment that explores all of them, many of them are related, so I believe it will be possible to create one large-scale experiment, or a slew of smaller experiments, that will give me data to address a fair number of the questions I have.