Review of Musical Narrative
From the beginning of our species, humans have been telling stories; we’re obsessed with them. From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them. Narrative is all around us: in the stories we enjoy in books, movies, and theater productions, but also in the histories we teach and pass down, in the way we communicate daily through the telling of stories, and in the self-narrative of memories that we are constantly making and remaking for ourselves. Through stories we find meaning, we imagine, and we emote; storytelling is uniquely human, and it evokes the very behaviors generally thought to define what makes us human.
This phenomenon hasn’t gone unnoticed; story and narrative have been topics of scholarly thought and research for millennia. Aristotle’s Poetics is an in-depth analysis of the elements of stories and their effects, and countless English and humanities professors have devoted themselves to studying how narratives are created, how they work, and why they’re important. Gratier et al. (2008), following Bruner (1990), describe narrative as “a fundamental mode of human collective thinking — and acting,” whose “basic function is the production of meaning or ‘world making’.” Stephen Malloch defines narrative as fundamentally temporal and intersubjective: “Narratives are the very essence of human companionship and communication. Narratives allow two persons to share a sense of passing time, and to create and share the emotional envelopes that evolve through this shared time.” These are the concepts driving my experiment, though I will use the terms “narrative” and “story” interchangeably, referring to a perceived chain of causally connected events (with “event” construed as broadly as possible) by or from which we generate meaning.
A particularly fascinating area of narrative research concerns its relation to music. Michel Imberty writes, “All interactive musical communication has a regular implicit rhythm that has been called the pulse of the interaction. It presents also a sequential organization whose units are most often shapes or melodic contours. Finally, it transmits something like a content that can be described as narrative.” For the purposes of this literature review, I will focus on two aspects of narrative in music, meaning and emotion, as these two subjects are the most researched and the most pertinent to my experiment. Questions about how music produces meaning, why music arouses our emotions, and how music can tell stories without words are large questions that researchers continue to pursue today. However, there is debate over whether one can truly speak of narrative in music at all; as Aniruddh Patel notes in his book Music, Language, and the Brain, there is a wide spectrum of ways people approach the question of meaning in music. Some argue that music can be neither meaningful nor meaningless, because they define “meaning” as the symbolic connection between a sound (or collection of sounds) and the idea or concept it represents. Music cannot do this; an F major chord doesn’t readily translate to any one idea or concept other than a chord built from the notes F, A, and C. Others argue for a more inclusive definition, on which meaning arises whenever an object or event brings to mind something other than the object or event itself. This implies that meaning is “inherently a dynamic, relational process,” that meaning changes with context and interpretation, and that meaning isn’t something contained within the object or event but rather something generated by the observer.
These are only a couple of the many theories of musical narrative and meaning, but they serve to demonstrate the wide variety of approaches that have been used to try to understand this phenomenon. Over the course of this literature review, I will examine a few more of these theories, as well as discuss the methods and implications of several experiments that have investigated the intersection of narrative, music, and emotion.
Many investigations into this subject take the form of intellectual conjectures or theories, followed by analyses of particular pieces of music. For example, Candace Brower posits a theory of how musical meaning, or metaphor, is created, relying on two ideas from cognitive science: that pattern recognition and matching play a part in thought, and that we map our bodily experiences onto the patterns of other domains. Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as matching patterns from the piece to patterns conventionally found in music) and cross-domain mapping (mapping patterns of music onto bodily experiences, e.g., strong and weak beats, higher and lower pitches, expansion and contraction), we create musical meaning. Brower then shows these concepts at work in an analysis of Schubert’s “Du bist die Ruh”. For example, she points to the suspensions found throughout the piece as indicators of “resisting gravity”, a very physical concept. This is an example of cross-domain mapping, which helps listeners construct a narrative; but she also claims that the narrative is told through the varied repetition of the melody, for instance when the change from an A♭ to an A♮ symbolizes the blocking of an attempted move to a more stable area. It is through these two processes that we unconsciously analyze a piece of music as we listen, creating narrative even from music without lyrics (though in this case the piece does have them).
Another method used by researchers, and a more neurological approach, is to use brain imaging, usually while the music is being played, to see which brain regions are activated at certain points in the music. This relies on a property that stories and music share: both unfold in time. The theory behind many of these studies is that narrative and emotion in music are created through both the fulfillment and the denial of expectations set up by the music, an idea that reaches back to Leonard Meyer’s discussion of musical meaning and emotion. In particular, experiments have looked at the phenomenon of “chills”: moments when a passage of music produces a physiological response of shivers and possibly even tearing up. In one such study, ten subjects identified musical passages that consistently gave them chills; these were played while the subjects’ brain activity was monitored (mostly via PET scans), with music that didn’t give them chills used as a control. The chills were found to produce activity in the nucleus accumbens, a kind of “reward center” in the brain, and to produce responses similar to joy and/or euphoria. With this scanning we can see that there are distinct brain states for music, and that they match up well spatially with states associated with emotional responses to other stimuli. In addition, the brain areas found to be associated with listening to music are believed to be evolutionarily ancient structures, so it is thought that music somehow appropriates a more ancient system in order to give us pleasure.
A third theory, put forward by Laura-Lee Balkwill and William Forde Thompson, states that emotion and meaning are communicated in music through a combination of universal and cultural cues. In other words, some cues within music are recognized worldwide as signals of certain emotions, while others are specific to particular cultures, such as Western musical culture. To test this theory, Balkwill and Thompson set up an empirical study in which Western listeners were exposed to 12 Hindustani ragas, each intended to convey one of four emotions: joy, sadness, anger, and peace. The subjects rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range. The results showed that subjects were sensitive to three of the four intended emotions (joy, sadness, and anger), and that these judgments of emotion were related to their perception of the musical attributes. This suggests that listeners are able to extract emotion from unfamiliar music using universal cues, even in the absence of their usual cultural cues.
For all the theories out there, of which the above are only a few, we seem to possess a very healthy folk knowledge of meaning, emotion, and narrative in music. Many studies done with children give evidence of this. By age 11–13, children can reliably match stories with different structures (directed action with solution and closure, action without direction or closure, and no action or direction) to three different styles of Western music (Debussy’s La fille aux cheveux de lin; Chopin’s Prelude in G, op. 28; and Scriabin’s Prelude op. 74, no. 4). Results showed a very high degree of agreement among the children: the Chopin piece was matched with the well-structured story (directed action and closure), the Debussy piece with the story with no action or direction, and the Scriabin piece with the story with action but no direction or closure. Even at age 5–6, children are able to use the perceived emotion in music to judge other stimuli. In one study, children heard a neutral story read aloud while sad music (minor mode, slower tempo), happy music (major mode, faster tempo), or no music played. The children were then asked questions about the story and told to pick a sad, happy, or neutral face to describe certain moments in it. Children who heard the happy music were more likely to interpret the story and its character as happier, while children who heard the sad music interpreted them as sadder. In a study by Mayumi Adachi and Sandra Trehub, children of different ages and of no particular musical ability were asked to perform either “Twinkle, Twinkle, Little Star” or the ABC song, once to make people feel happy and once to make people feel sad. These renditions were recorded both aurally and visually, and each was presented to other children as well as to adults, who judged which version sounded happier.
The children were able not only to produce versions of the songs that effectively communicated happiness or sadness, but also to judge emotion accurately from both audio and visual input. Older children and adults tended to perform better than younger children, providing evidence that our perception of emotion, meaning, and narrative in music is a skill that develops over time.
As is clear from this review, there has been a fair amount of research into this subject, spanning a wide breadth and depth of methods and topics. With my experiment, I hope to further this body of study and literature, and to provide a more concrete look at the interaction of narrative and music.
Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies 15.10-11 (2008): 122-58. PsycARTICLES. Web.
Malloch, Stephen N. “Mothers and Infants and Communicative Musicality.” Musicae Scientiae, special issue: Rhythm, Musical Narrative, and the Origins of Human Communication (1999-2000): 29-57.
Imberty, Michel, and Maya Gratier. “Narrative in Music and Interaction: Editorial.” Musicae Scientiae 12.1 suppl. (2008): 3-13. PsycARTICLES. Web.
Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.
Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.
Meyer, Leonard B. Emotion and Meaning in Music. Chicago: U of Chicago P, 1956. Web.
Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.
Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.
Ziv, Naomi. “Narrative and Musical Time: Children’s Perception of Structural Norms.” Proceedings of the Sixth International Conference on Music Perception and Cognition (2000): n. pag. Web.
Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.3 (2006): 303. PsycARTICLES. Web.
Adachi, Mayumi, and Sandra E. Trehub. “Decoding the Expressive Intentions in Children’s Songs.” Music Perception: An Interdisciplinary Journal 18.2 (2000): 213-24. Web.