A Review of Musical Narrative

From the beginning of our species, humans have been telling stories; we’re obsessed with them.  From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them.  We see narrative all around us: in the stories we enjoy in books, movies, and theater productions, but also in the histories we teach and pass down, in the way we communicate daily through the telling of stories, and in the self-narrative of memories that we are constantly making and remaking for ourselves.  With them we find meaning, we imagine, and we emote; storytelling is uniquely human, and evokes the very behaviors that are generally thought to define what makes us human.

This phenomenon hasn’t gone unnoticed; story and narrative have been topics of scholarly thought and research for millennia.  Aristotle’s Poetics is an in-depth analysis of the elements of stories and their effects, and countless English and humanities professors have devoted themselves to studying how narratives are created, how they work, and why they’re important.  Gratier and Trevarthen (2008), following Bruner (1990), define narrative as “a fundamental mode of human collective thinking — and acting — and that its basic function is the production of meaning or ‘world making’.”[1] Stephen Malloch defines narrative as fundamentally temporal and intersubjective: “Narratives are the very essence of human companionship and communication. Narratives allow two persons to share a sense of passing time, and to create and share the emotional envelopes that evolve through this shared time.”[2]  These are the concepts driving my experiment, though I will be using the terms “narrative” and “story” interchangeably, referring to a perceived chain of causally connected events (with “event” construed as broadly as possible) by or from which we generate meaning.

A particularly fascinating area of research concerned with narrative is in relation to music.  Michael Imberty says, “All interactive musical communication has a regular implicit rhythm that has been called the pulse of the interaction. It presents also a sequential organization whose units are most often shapes or melodic contours. Finally, it transmits something like a content that can be described as narrative.”[3] For the purposes of this literature review, I will be focusing on two aspects of narrative in music, meaning and emotion, as these two subjects are the most researched and most pertinent to my experiment.  How music produces meaning, why it arouses our emotions, and how it can tell stories without words are questions that researchers continue to pursue today. However, there is debate over whether one can even truly speak of narrative in music; as Aniruddh Patel notes in his book Music, Language, and the Brain, there is a wide spectrum of ways people approach the question of meaning in music.  There are those who argue that music cannot be meaningful, nor even meaningless, as “meaning” is defined by these researchers as the symbolic connection of an idea or concept represented by a sound or collection of sounds.  Music is unable to do this; an F major chord doesn’t readily translate to any one idea or concept other than a chord characterized by the notes F, A, and C.  Others argue for a more inclusive definition of meaning, stating that meaning occurs whenever an object or event brings something to mind other than the object or event itself.
This would imply that meaning is “inherently a dynamic, relational process,” that meaning changes with context and interpretation; it also implies that meaning isn’t something contained within the object or event, but rather something generated by the observer.[4]  These are only a couple of the many theories of musical narrative and meaning that exist, but they serve to demonstrate the wide variety of approaches that have been used to try to understand this phenomenon.  Over the course of this literature review, I will review a few more of these theories, as well as discuss the methods and implications of several experiments that have investigated the intersection of narrative, music, and emotion.

Many investigations into this subject take the form of intellectual conjectures or theories, followed by analyses of different pieces of music.  For example, Candace Brower posits a theory of how musical meaning, or metaphor, is created, relying on two ideas from cognitive science: that pattern recognition and matching play a part in thought, and that we map our bodily experiences onto the patterns of other domains.  Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as matching patterns from the piece to patterns conventionally found in music) and cross-domain mapping (matching patterns of music onto bodily experiences, i.e., the idea of strong and weak beats, higher and lower pitches, expansion and contraction, etc.) we create musical meaning.  Brower then goes on to show these concepts at work in an analysis of Schubert’s “Du bist die Ruh”.  For example, she points out the suspensions found throughout the piece as indicators of “resisting gravity”, a very physical concept.  This is an example of cross-domain mapping, which helps listeners construct a narrative, but she also claims that the narrative is told through the varied repetition of the melody, through things like the change from an A♭ to an A♮ symbolizing the blocking of an attempted move to a more stable area.  It’s through these two processes that we unconsciously analyze a piece of music as we listen, creating narrative from music even without lyrics (though in this case, the piece did have them).[5]

Another method used by researchers, and a more neurological approach, is to use brain imaging, usually as the music is being played, in order to see which brain regions are activated at certain points in the music. This relies on an aspect that stories and music both possess: they both unfold in time.  The theory behind many of these studies is that narrative and emotion in music are created through both the fulfillment and denial of expectations set up by the music, an idea that reaches all the way back to Leonard Meyer’s discussion of musical meaning and emotion.[6]  In particular, experiments have looked at the phenomenon of “chills”, when a phrase in a piece of music produces a physiological response of shivers and possibly even tearing up.  In one such study, ten subjects identified musical passages that consistently gave them chills, which were played while their brain activity was monitored (mostly via PET scans); music that didn’t give them chills was used as a control.  The chills produced activity in the nucleus accumbens, a kind of “reward center” in the brain, and evoked responses similar to those associated with joy and/or euphoria.  With this scanning we can see that there are distinct brain states for music, and that they match up well spatially with states associated with emotional responses to other stimuli.  In addition, the brain areas found to be associated with listening to music are believed to be evolutionarily ancient structures, and so it is believed that music somehow appropriates a more ancient system in order to give us pleasure.[7]

A third theory was put forward by Laura-Lee Balkwill and William Forde Thompson, which states that emotion and meaning are communicated in music through a combination of universal and cultural cues.  In other words, there are cues within music that are recognized worldwide as cues for certain emotions, but there are also cues that are specific to particular cultures, such as Western musical culture.  In order to test this theory, Balkwill and Thompson set up an empirical study in which Western music listeners were exposed to 12 Hindustani ragas, each of which was intended to convey one of four emotions: joy, sadness, anger, or peace. The subjects then rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range.  The results showed that subjects were sensitive to three of the four intended emotions (joy, sadness, and anger), and these judgments of emotion were related to their perception of the musical attributes.  This suggests that listeners are able to extract emotion from unfamiliar music using universal cues, even in the absence of the listeners’ usual cultural cues.[8]

For all the theories out there, of which the above are only a few, we seem to possess a very healthy folk knowledge of meaning, emotion, and narrative in music.  Many studies done with children give evidence of this.  By the age of 11–13, children are reliably able to match stories with different structures (directed action with solution and closure, action without direction or closure, and no action or direction) to three different pieces of Western music (La fille aux cheveux de lin by Debussy; the Prelude in G, op. 28, by Chopin; and the Prelude op. 74, no. 4, by Scriabin).  Results showed a very high degree of agreement among the children: the Chopin piece was matched with the well-structured story, with directed action and closure; the Debussy piece was matched with the story with no action or direction; and the Scriabin piece was matched with the story with action but no direction or closure.[9]  Even by the age of 5–6, children are able to use the perceived emotion in music to judge other stimuli.  In one study, children heard a neutral story read aloud while either sad music (minor mode, slower tempo), happy music (major mode, faster tempo), or no music was played.  The kids were then asked questions about the story, and told to pick a sad face, a happy face, or a neutral face to describe certain moments in it.  Kids who heard the happy music were more likely to interpret the story and its character as happier, while kids who heard the sad music interpreted them as sadder.[10]  In a study done by Mayumi Adachi and Sandra Trehub, children of different ages and of no particular musical ability were asked to perform either “Twinkle, Twinkle, Little Star” or the ABC song, once to make people feel happy and once to make people feel sad. These renditions were recorded both aurally and visually, and each was presented to other children as well as adults, who then judged which version sounded happier.
The children were able to produce versions of the songs that effectively communicated happiness or sadness, and the judges were able to identify the intended emotion from both audio and visual input.  Older children and adults tended to do better than younger children, providing evidence for the idea that our perception of emotion, meaning, and narrative in music is a skill that develops over time.[11]

As is clear from this review, there’s been a fair amount of research done into this subject, spanning a wide breadth and depth of methods and topics.  With my experiment, I hope to further this body of study and literature, and provide a more concrete look at the interaction of narrative and music.

 

Bibliography

[1] Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies 15.10-11 (2008): 122-58. PsycARTICLES. Web.

[2] Malloch, Stephen N. “Mothers and Infants and Communicative Musicality.” Musicæ Scientiæ, Special Issue: Rhythm, Musical Narrative, and the Origins of Human Communication (1999-2000): 29-57. Print.

[3] Imberty, Michael, and Maya Gratier. “Narrative in Music and Interaction Editorial.” Musicae Scientiae 12.1 Suppl (2008): 3-13. PsycARTICLES. Web.

[4] Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.

[5] Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.

[6] Meyer, Leonard B. Emotion and Meaning in Music. Chicago: U of Chicago, 1956. Web.

[7] Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.

[8] Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.

[9] Ziv, Naomi. “Narrative and Musical Time: Children’s Perception of Structural Norms.” Proceedings of the Sixth International Conference on Music Perception and Cognition (2000): n. pag. Web.

[10] Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.03 (2006): 303. PsycARTICLES. Web.

[11] Adachi, Mayumi, and Sandra E. Trehub. “Decoding the Expressive Intentions in Children’s Songs.” Music Perception: An Interdisciplinary Journal 18.2 (2000): 213-24. Web.

How does the interaction of music and story affect perceptions of emotion, meaning, and structure? Additionally, how does this affect memory and comprehension?

Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.

This article proposes a theory of emotion in music that states that emotion is communicated through a combination of universal and cultural cues, and that it is these cues we use to perceive and understand the emotions the music is conveying.  Western music listeners were exposed to 12 Hindustani ragas, each of which was intended to convey one of four emotions: joy, sadness, anger, or peace. The subjects then rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range.  Subjects were sensitive to joy, sadness, and anger, and these judgments were related to their judgments of the musical attributes, suggesting that listeners are able to extract emotion from unfamiliar music, and that musical cues help them do this.

 

Boltz, Marilyn. “Temporal Accent Structure and the Remembering of Filmed Narratives.” Journal of Experimental Psychology: Human Perception and Performance 18.1 (1992): 90-105. PsycARTICLES. Web.

A study was conducted in which filmed narratives were broken up by commercials either between major episode boundaries (so-called “breakpoints”) or within these episodes (“non-breakpoints”).  Those who experienced the narratives with the more logical commercial placements showed better recall of story details, better recognition, and better memory for temporal information. This suggests that people use episode boundaries for attention and remembering, and also that narratives have a natural rise and fall: within the larger arc of the story are smaller arcs that form a regular structure also found in other forms of media.

 

Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.

This article puts forward a theory of how musical meaning, or metaphor, is created, relying on two ideas from cognitive science: that pattern recognition and matching play a part in thought, and that we map our bodily experiences onto the patterns of other domains.  Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as matching patterns from the piece to patterns conventionally found in music) and cross-domain mapping (matching patterns of music onto bodily experiences, i.e., the idea of strong and weak beats, higher and lower pitches, expansion and contraction, etc.) we create musical meaning.  The author explains the concepts, and then applies them in an analysis of Schubert’s “Du bist die Ruh”.

 

Cohen, Annabel J. “Music as a Source of Emotion in Film.” Music and Emotion. Ed. Patrik N. Juslin and John A. Sloboda. Oxford: Oxford UP, 2001. 249-72. Google Scholar. Web.

This chapter discusses the role of music in film: what it adds to the narrative, and how it evokes emotion.  Film music is a bit of an oddity, as it is directed not at the characters of the film but at the audience. Cohen outlines six different ways that music in films evokes our emotions.

 

Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies15.10-11 (2008): 122-58. PsycARTICLES. Web.

The researchers look at the non-verbal communication between mothers and infants, theorizing that the narratives conveyed through gestures and other non-verbal communication help the child become a being that participates in culture.  They also examine the organization of these interactions in time, and at the end of the article they provide empirical evidence for their claims.

 

Imberty, Michael, and Maya Gratier. “Narrative in Music and Interaction Editorial.” Musicae Scientiae 12.1 Suppl (2008): 3-13. PsycARTICLES. Web.

This article explores the definition of narrative, including wordless narrative such as music.  The authors also focus on the musicality of communication, including gesture and speech, and on the temporality of both narrative and music.

 

Klein, Michael Leslie, and Nicholas W. Reyland. Music and Narrative Since 1900. Bloomington: Indiana UP, 2013. Print.

This book focuses directly on the connection between music and narrative, especially in recent years, and seeks to challenge the claim that some modern music has lost its narrative.  The book looks at the phenomenon of narrative and music over time, tracking how it has changed, and the effect of other types of narrative on contemporary music and musical narrative.  There are also many different analyses of various pieces presented in this book, which display musical narrative at work.

 

Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.

This article serves as a literature review of studies done with many different kinds of brain scans, seeking commonalities between the processing of music and the processing of language. These studies suggest strong connections and similarities between the two (similar areas of processing, similar patterns), but also slight differences, such as the fact that music is likely to be processed more bilaterally, which suggests that the capacity to be affected by music is likely innate.  This also provides evidence for the idea that the faculties we use in the cognition of music may help facilitate language acquisition. The studies also provide evidence for the localization of certain parts of music cognition; for example, music activates the areas that generally deal with emotion, as well as many other specific areas.

 

Miell, Dorothy, Raymond A. R. MacDonald, and David J. Hargreaves. Musical Communication. Oxford: Oxford UP, 2005. Print.

This book seeks to bring together ideas and concepts from many different fields to look at the topic of musical communication. Researchers cover themes such as “Music and meaning, ambiguity, and evolution”, “Singing as communication”, and “The role of music communication in cinema” in an attempt to understand how humans share emotions, intentions, meanings, and stories with each other through music.

 

Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.

Patel’s book explores the connection between language and music, including such topics as rhythm, melody, syntax, and meaning. Patel reviews the relevant studies, and summarizes the joint scientific field of music and language to date.

 

Porter-Reamer, Sheila Veronica. Song Picture Books and Narrative Comprehension. N.p.: n.p., 2006. Web.

This study sought to compare the effects of reading a story vs. reading a story with a song, to measure whether narrative comprehension was better with song. While the results showed no such effect, they did show that memory improved with the song picture books.

 

Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.03 (2006): 303. PsycARTICLES. Web.

This article details an experiment run where children heard a neutral story read aloud while either sad music (minor, slower tempo), happy music (major, faster tempo), or no music was played.  The kids were then asked questions about the story, and told to pick either a sad face, a happy face, or a neutral face to describe certain moments in the story.  Kids who heard the happy music were more likely to interpret the story/character as happier, while kids who heard the sad music interpreted the story/character as more sad.  This shows that music affects the perception of other stimuli and stories.

 

Stories and Music

From the beginning of our species, humans have been telling stories; we’re obsessed with them. From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them. With them we find meaning, we imagine, and we emote; storytelling is uniquely human, and evokes the very behaviors that are generally thought to define what makes us human.

Often found coupled with storytelling is music. We often use music to tell stories that evoke emotions, or make us think and imagine. The relationship between narrative and music is one that is difficult to parse out, however. You can have stories without music; can you have music without a story? It seems obvious to say yes; music may not have the characters, actions, and plot that we recognize so easily in stories, but it has themes. It has recurring tones and sounds, interactions between those themes, and a syntax as complex as that of the languages that form stories without music. Seemingly most important, both music and story are essentially experiences: they unfold in time, and must be experienced. So how do we understand this relationship between music and stories? Is music a specialized type of story, simply part of a much larger concept of stories? Or are they two separate things that interact?

Some questions Patel raises in his book, Music, Language, and the Brain, are how we define and understand the meaning created by music, and whether emotion is inherent in the music, entirely separate from it, or perhaps both within and separate from the music. He enumerates a few different theories about how these concepts may be related, but ultimately leaves the question unanswered. With the aim of understanding these connections better, along with music’s connection to story, I have several questions I wish to explore. For example:

Can a story change the perception of the emotions of a musical piece? Or perhaps vice versa, can a musical piece played after/during a story change emotional perception?

What contributes to perceiving a narrative in a piece of music? Do different rhythms give rise to different narrations? Perhaps asking subjects to create narratives for many different rhythms will reveal some consistencies or similarities.

How do structures of music relate to structures of stories? Do people recognize and connect the two? Perhaps by finding or creating a story with a similar arc to a piece of music, I can ask people to identify the larger structure in each, and see which they identify more readily, whether they notice any similarities, and so on.

Are people consistent with the creation of narratives in music? For instance, do people generally create similar sounding narratives for the same piece of music?

Expectation theory states that we create unconscious expectations when listening to music; this has been demonstrated with short groups of tones. Do stories create the same types of expectations? Perhaps using either long or short sentences to create a seeming “rhythm”, and then switching to the other type, would create a similar violation of expectation, and make the content more difficult to remember.

Does music in stories help us remember things better? Perhaps setting a story to music would help subjects remember the content of the story better than those who got the story without the music.

There hasn’t been a lot of work done in this area. Patel’s book is a good general overview of research that was current at the time, but rather than giving conclusive answers, Patel offers several possible theories for each question. The next step will be to continue seeking out research that focuses specifically on what I’m interested in, to see if there is any precedent for the types of experiments I want to run.

These are many questions, and while it is unlikely that I will be able to create an experiment that explores all of them, many of them are related, so I believe it will be possible to design one large-scale experiment, or a slew of smaller experiments, that will give me data to answer a fair number of the questions I have.