Musical Rhythms, Memory, and Human Expression

Ryan Davis, Angie Fuentes, and Kyle Yoder

Yale University, Cognition of Musical Rhythm, Virtual Lab

 

1. BACKGROUND AND AIMS

1.1  Introduction

The emotional properties of music, long recognized by music theorists, composers, and casual listeners alike, have yet to be fully explored by cognitive scientists. We do know that minuscule variations in timing between notes, called microtiming, are used by musicians to make their music sound more expressive; indeed, people listening to music played without microtiming often report that it sounds mechanical. Memory researchers have also demonstrated that emotional valence and social context strongly affect individuals’ ability to recall events. Our research explores the intersection of these two lines of work. [Kyle]

1.2  Previous Research

In 2008, Swedish researchers Juslin and Västfjäll conducted a broad review of the research into the connections between music and emotion. Despite the widely accepted belief that the two are inextricably linked, they found the evidence insufficient to describe the mechanism by which music could elicit the same emotions in different listeners. They proposed a multipart mechanism that they believed could account for these emotional responses; one aspect of this mechanism was musical expectancy and rhythm.

Research has revealed that one major component of listeners’ ability to ascribe emotional valence to music is subtle variation in the timing between notes. These variations, called microtiming, are employed by musicians (consciously and unconsciously) in order to give their performances an expressive quality (Ashley, 2002; Repp, 1999). Indeed, the “humanize” functions in most sequencing software, meant to make computer-generated music sound “more human,” operate by inserting microtiming variations into the piece in order to make it less perfect and, hopefully, more expressive.

Much research into memory has also focused on the effect of emotion. Research has found that not only are memories with some sort of emotional content more likely to be retained and more easily recalled in the future, but also that memories with a social context show this effect even more robustly (Coppola et al., 2014; Jhean-Larose et al., 2014; Watts et al., 2014). In fact, researchers have found that direct administration of oxytocin, a neuropeptide often associated with feelings of attachment and prosociality, can provide participants with enhanced memory for otherwise non-emotional information (Weigand et al., 2013). Furthermore, memories of neutral events are often overshadowed by those of closely occurring emotional events (Watts et al., 2014).

Some research has been done into the intersection of musical rhythm and memory. Balch and Lewis (1996) found that hearing a familiar rhythm could facilitate participants’ memories of events that were happening when they last heard the same rhythm. Drake et al. (2000) compared how well musicians and nonmusicians could synchronize with human-generated pieces containing microtiming versus computer-generated pieces played precisely as written. Although participants were better at synchronizing with the computer-generated pieces, they synchronized with the human-generated (that is, expressive) pieces at slower levels, at a narrower range of levels, and in closer correspondence to the theoretically correct metrical hierarchy. The authors concluded that microtiming might transmit a particular metrical interpretation to the listener and enable the perceptual organization of events over a longer time span (Drake et al., 2000).

The present study seeks to build on this research by exploring whether microtiming variations and the expressive quality of a performance are sufficient to elicit these differences in cognitive processing, or whether participants’ beliefs about the social context of the music mediate these effects. [Kyle]

1.3  Present Research

In this study, we examine whether the ease with which participants can recall a musical rhythm is affected by their beliefs about whether that rhythm was produced by a human or a computer. By testing participants in three separate belief groups – told the rhythms were created by a human, told they were created by a computer, or told nothing about the origin of the rhythms – we hope to detect differences in the accuracy of rhythmic memory across belief groups. We predict that those who believe the rhythms were created by a human will perform better at the rhythmic memory task. [Angie]

2. METHOD

2.1  Participants

In total, 42 participants (25 female and 17 male) completed the study. They ranged in age from 19 to 59 years, with a mean age of 28.8 years (standard deviation = 11.8 years). All but three participants reported English as their first language (two participants’ first language was Spanish and one’s was French). Thirty-two participants had at least one year of musical training, thirteen of whom had at least ten years of training, and most participants play at least one instrument. Four participants reported some sort of hearing deficiency, either ringing in the ears or mild to moderate hearing loss. [Angie]

2.2  Stimuli

Our stimuli were brief, three-bar rhythmic samples in 4/4 time, divided into two difficulty groups, which we named Simple and Complex. Because each participant would undergo eight trials, we constructed four Simple rhythms and four Complex rhythms. Each rhythm also had its own subtly altered alternate version, yielding 16 rhythms in total. Each rhythmic sample was assigned a random tempo (using an online random number generator) between 70 bpm and 90 bpm, with each alternate version carrying the exact tempo of its original. This tempo range was chosen because it is commonly regarded as a middle ground between slow and fast.
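The tempo assignment can be sketched as follows. This is only an illustration: the rhythm labels and the use of Python’s random module are ours, standing in for the online random number generator actually used.

```python
import random

random.seed(7)  # seeded only to make this illustration reproducible

# Four Simple and four Complex original rhythms (labels are illustrative).
originals = [f"simple_{i}" for i in range(1, 5)] + [f"complex_{i}" for i in range(1, 5)]

# Draw one tempo per original from the 70-90 bpm range; each subtly
# altered alternate version inherits the exact tempo of its original.
tempos = {}
for name in originals:
    bpm = random.randint(70, 90)
    tempos[name] = bpm
    tempos[name + "_alt"] = bpm

print(len(tempos))  # 16 rhythms in total
```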

The Simple rhythms were constructed using only dotted half notes, half notes, quarter notes, and eighth notes, with no syncopation. The Complex rhythms added sixteenth notes, dotted eighth notes, dotted quarter notes, and ties, thus creating syncopations. The rhythms were designed to vary in content, and the location of each alternate version’s subtle change was spread evenly across the rhythmic samples to avoid predictability. The subtle changes were made either by changing a rhythmic value (e.g., a quarter note becoming two eighth notes) or by flipping a rhythmic cell (e.g., a quarter note and two eighth notes becoming two eighth notes and a quarter note).

The rhythmic stimuli were recorded by Michael Laurello, a composition student at the Yale School of Music, using Apple Logic Pro 9.1.8 and a “roto tom” sample from the Vienna Symphonic Library. Michael recorded each rhythm using 0%, 50%, and 100% quantization; we judged 50% to strike a true balance between rhythmic strictness and performance flexibility, so 50% quantization was used for every rhythm throughout the experiment. [Ryan]

2.3  Task & Procedure

Participants were randomly presented with eight of the rhythms (Simple or Complex) via a single playing of each recording and were asked to try to memorize what they heard. Each participant was told either (1) nothing, (2) that the recording was performed by a human percussionist, or (3) that the recording was generated by a computer. Following a distractor task (word puzzles), the participant was played either the identical rhythm heard before the distractor task or its alternate version, and was then asked whether what they heard the second time was the same as or different from the first rhythm. [Ryan]

2.4  Data Collection & Analysis

Data were collected through the Qualtrics survey platform and exported into Microsoft Excel for analysis. We analyzed the data for potential effects of each participant’s belief condition on their ability to correctly identify whether the rhythms within each trial were the same or different. We also conducted limited analysis of any effects that demographic factors may have had on correct identification. [Kyle]
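To make the analysis concrete, a minimal sketch of the accuracy computation, using hypothetical trial records in place of the actual Qualtrics export, might look like this:

```python
# Hypothetical trial records in the shape exported from Qualtrics:
# (participant, belief condition, whether the second rhythm was the same, response)
trials = [
    ("p01", "human",    True,  "same"),
    ("p01", "human",    False, "same"),
    ("p02", "computer", True,  "same"),
    ("p02", "computer", False, "different"),
    ("p03", "none",     True,  "different"),
    ("p03", "none",     False, "different"),
]

def accuracy_by_condition(trials):
    """Proportion of correct same/different judgments per belief condition."""
    correct, total = {}, {}
    for _, cond, was_same, response in trials:
        is_correct = (response == "same") == was_same
        correct[cond] = correct.get(cond, 0) + is_correct
        total[cond] = total.get(cond, 0) + 1
    return {cond: correct[cond] / total[cond] for cond in total}

print(accuracy_by_condition(trials))
```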

3. RESULTS

3.1 Population Sample

Forty-two participants (25 female and 17 male) were recruited via email and Facebook posts advertising the study. Participants were between 19 and 59 years of age (mean age = 28.79, standard deviation = 11.95, median age = 23.00), and all had completed at least a high school education. Ten participants reported being unable to play a musical instrument, while the remaining thirty-two reported at least one year of experience playing: ten (23.80% of the total sample) reported playing primarily the piano, seventeen (40.47%) reported playing a string instrument (i.e., cello, violin, viola, or guitar), and four (9.52%) reported playing a woodwind or brass instrument. Only one participant reported playing percussion. The number of years of training varied widely among these participants (mean = 7.38, standard deviation = 6.35, median = 7.00). On a five-point scale (1 = no training, 5 = professional training), participants generally reported average familiarity with Western music training in instrumental performance, vocal performance, or music theory (mean = 2.38, standard deviation = 1.41), while five participants reported a professional level of overall training. Of the forty-two participants, four reported some kind of mild hearing deficiency (two reported ringing, two reported mild hearing loss); however, all four reported being able to hear the stimuli used in this study clearly. [Kyle]

3.2  Analysis & Figure 1

Across all belief groups, participants performed better when the rhythm presented after the distraction was the same than when it was different. In other words, participants more often reported that the rhythm following the distraction was the same rather than different. This held for every belief group, as shown in Figure 1. Combining all belief groups, 65.25% of participants answered correctly when the rhythm was the same (standard deviation = .0654), while 54.7% answered correctly when the rhythm was different (standard deviation = .0314). This may be evidence that people tend to assume rhythms are the same and are not particularly good at detecting minor differences between them. It may also be evidence that the word-puzzle distraction was too time-consuming or difficult and demanded too much thought. [Angie]
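One standard way to quantify such a response bias, which we did not apply here but which future analyses might, is signal detection theory. Treating “different” trials as signal trials, the reported group accuracies give a hit rate of .547 and a false-alarm rate of 1 − .6525 = .3475:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit (inverse normal CDF)

# Treat "different" trials as signal trials:
hit_rate = 0.547               # correct "different" responses on different trials
false_alarm_rate = 1 - 0.6525  # "different" responses on same trials

d_prime = z(hit_rate) - z(false_alarm_rate)           # sensitivity
criterion = -(z(hit_rate) + z(false_alarm_rate)) / 2  # response bias

print(round(d_prime, 2), round(criterion, 2))  # → 0.51 0.14
```

The positive criterion indicates a conservative bias against responding “different,” consistent with participants defaulting to “same.”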


Figure 1.

3.3  Analysis & Figure 2

Figure 2 shows the Simple and Complex Rhythms used in the experiment. The top rhythm of each grouping is the original form, and the bottom is its subtly altered version. Within a single trial, participants heard either the top rhythm of a grouping twice (with the playings separated by word-puzzle distractions), in which case the correct answer was that the rhythms were identical, or the top rhythm first and the bottom rhythm second, in which case the correct answer was that the rhythms were not identical.

[Notation: Simple Rhythms]

[Notation: Complex Rhythms]

Figure 2.

Visually, it is immediately clear that the Complex Rhythms are more difficult than the Simple Rhythms, owing to their greater number of audible attack points. The Simple Rhythms ranged from 12 to 15 audible attacks, with an average of 13.125; the Complex Rhythms ranged from 17 to 21 audible attacks, with an average of 18.875. More attack points mean more information to remember, especially given that our participants heard each rhythm only once. In general, however, participants did not score especially well at identifying whether the second rhythm played (Simple or Complex) was the same as or different from the first. There are many possible reasons for this outcome, and with our sample size it is impossible to draw firm conclusions. The most obvious possibility is that the rhythmic information was simply too long to retain after a single playing, a problem only reinforced by the intervening word-puzzle distractions. In addition, the alternate versions of each rhythm were intentionally designed to differ only subtly; the rhythmic differences were by no means large, and according to our analysis, even those who identified themselves as musical experts were not remarkably better in their trials. [Ryan]

3.4  Analysis & Figure 3

As mentioned in section 3.1, five participants (4 male and 1 female) identified themselves as having a professional level of overall music training. These “expert” participants ranged in age from 22 to 36 (mean = 26.4, standard deviation = 5.68), each reported a different instrument as their primary (respectively: cello, clarinet, piano, viola, and violin), and all reported a minimum of ten years experience playing their instrument. We decided to examine whether these “experts” were significantly better at the task of identifying the rhythms than the general pool of participants.

Significance across conditions cannot be assessed in this analysis, as three of the expert participants were randomly assigned to the computer-belief condition, while only one each was assigned to the human-belief and no-belief conditions. Taken as a whole, it appears that experts may be better than the general group of participants at correctly identifying the rhythms; however, due to the relatively small size of this group, these results are not significant (p > 0.05). This can be seen in Figure 3 below, which shows the average rate of correct responses on the rhythm identification task in the expert and general samples. [Kyle]


Figure 3.

4. CONCLUSIONS

Our results do not reveal any impact of belief group on participants’ ability to recall a rhythm. We predicted that participants would better recall a rhythm if they believed it was performed by a human. Although there were minor differences in recall accuracy between the three groups, no significant effect was demonstrated. Participants performed slightly better in the “no belief” group than in the other two groups, while the “computer-generated” belief group performed slightly worse than the other groups.

Similarly, no significant effect of music training on participants’ ability to complete the rhythm recognition task was found. Nevertheless, the data trend in that direction, providing a basis for the hypothesis that, were more participants included in the study, this effect could reach significance. This distinction is important because it bears on whether the rhythms used in this study were too complex for the average person to remember after listening only once. Perhaps further research will reveal a “complexity threshold” for musical memory.

An unexpected finding from this study was that people tended to perform better at determining that a rhythm was the same than at determining that it was different. However, further experimentation is necessary to determine whether this finding reflects an actual facet of human cognition. In this pilot study, the changes in rhythms may simply have been too subtle for participants to detect. Another possibility is that participants defaulted to saying that rhythms were the same, producing a “false positive” for this effect.

Although the findings of this pilot study did not provide major evidence toward answering our question about the interplay of emotion, belief, and memory, they did provide guidance for future experimentation on the same topic. One limitation of using Qualtrics to collect data was that, instead of being asked to replicate the rhythm, our participants were given a task using a “same-different” paradigm. In other words, participants had a 50% chance of guessing the correct answer, potentially allowing correct guesses to skew our results. If subjects were required to recreate the rhythm, perhaps by tapping it, one could more accurately determine whether they had remembered the rhythm correctly.
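The size of this guessing problem is easy to quantify. A short sketch, assuming eight independent same-different trials per participant as in our design:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of scoring k or more correct out of n trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# A participant who guesses on all eight trials still gets
# six or more correct about 14% of the time.
print(round(p_at_least(6, 8), 3))  # → 0.145
```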

A similar study might yield more revealing data if the selected rhythms were shorter. It would also be of interest to determine whether participants’ success in judging rhythms as same or different is influenced by the actual percussive sound(s) used: for example, would the rhythms be easier to distinguish if the chosen stimulus sound had a discernible pitch, or even multiple pitches? In addition, combinations of different time signatures could provide further insight.

Another limitation of this study was that whether the second rhythm presented in a trial was the same or different was predetermined rather than randomized. We tried to minimize bias by randomizing the order in which participants encountered the trials; however, we were unable to randomly assign the rhythm after the word puzzle to be the same or different. This further randomization would have eliminated any possible bias from certain rhythms being more distinctive and their differences easier to detect.
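The missing per-trial randomization would be straightforward to implement in a future version. A sketch, with hypothetical rhythm labels:

```python
import random

def build_trials(rhythm_pairs, rng=random):
    """For each rhythm pair, randomly decide whether the second playing is the
    original again ("same") or its alternate version ("different"), then
    randomize the presentation order as well."""
    trials = [(pair, rng.choice(["same", "different"])) for pair in rhythm_pairs]
    rng.shuffle(trials)
    return trials

rhythm_pairs = [f"rhythm_{i}" for i in range(1, 9)]  # illustrative labels
for pair, version in build_trials(rhythm_pairs):
    print(pair, version)
```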

The relationship between belief and memory is an interesting topic that still requires much experimentation to be fully understood. With this study, we hoped to provide a foundation and springboard for future endeavors in this area. Moving forward, it will be necessary to run more experiments testing this relationship and to devise new methods for examining how belief affects memory. Suggestions for future studies include requiring replication, rather than recognition, of a rhythm, and varying the difficulty and length of the distraction between rhythm presentations. [Angie, Kyle, Ryan]

 

REFERENCES
[Kyle]

Ashley, R. (2002).  Do[n’t] Change a Hair for Me: The Art of Jazz Rubato. Music Perception, 19:3, 311–332.

Balch, W.R., & Lewis, B.S. (1996). Music-Dependent Memory: The Roles of Tempo Change and Mood Mediation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22:6, 1354-1363.

Coppola, G., Ponzetti, S., & Vaughn, B.E. (2014). Reminiscing Style During Conversations About Emotion-laden Events and Effects of Attachment Security Among Italian Mother–Child Dyads. Social Development, 23:4, 702–718. DOI: 10.1111/sode.12066.

Drake, C., Penel, A., & Bigand, E. (2000). Tapping in Time with Mechanically and Expressively Performed Music. Music Perception, 18:1, 1-23.

Jhean-Larose, S., Leveau, N., & Denhière, G. (2014). Influence of emotional valence and arousal on the spread of activation in memory. Cognitive Processing, 15, 515–522. DOI: 10.1007/s10339-014-0613-5.

Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31, 559–621. DOI: 10.1017/S0140525X08005293.

Repp, B. (1999). Individual differences in the expressive shaping of a musical phrase: The opening of Chopin’s Etude in E major. In Suk Won Yi (Ed.), Music, Mind, and Science, 239-270.

Watts, S., Buratto, L.G., Brotherhood, E.V., Barnacle, G.E., & Schaefer, A. (2014). The neural fate of neutral information in emotion-enhanced memory. Psychophysiology, 51, 673–684. DOI: 10.1111/psyp.12211.

Weigand, A., Feeser, M., Gärtner, M., Brandt, E., Fan, Y., Fuge, P., Böker, H., Bajbouj, M., & Grimm, S. (2013). Effects of intranasal oxytocin prior to encoding and retrieval on recognition memory. Psychopharmacology, 227, 321–329. DOI: 10.1007/s00213-012-2962-z.

Additional Acting, Music, and Empathy Research

Here are the links to the papers I talked about in class today!

1) Keefe, B. D., Villing, M., Racey, C., Strong, S. L., Wincenciak, J., & Barraclough, N.E. (2014). A database of whole-body action videos for the study of action, emotion, and untrustworthiness. Behavior Research Methods, 46:1042–1051. doi: 10.3758/s13428-013-0439-6.

This paper announces this team’s database of acting videos and lays out some potential studies that they think other researchers can use this database to pursue.  Like I said, I think this is problematic because it rests on a fundamental, untested assumption about realistic acting, but it’s still kinda cool if you’d like to take a look.

Keefe et al. (2014)

2) Parsons, C.E., Young, K.S., Jegindo, E., Vuust, P., Stein, A., Kringelbach, M.L. (In Press). Musical training and empathy positively impact adults’ sensitivity to infant distress. Frontiers in Psychology, 5:1440. doi:10.3389/fpsyg.2014.01440.

Basically, parents with musical training were better at understanding their infants’ cries than nonmusical parents, and more empathetic people were better at understanding babies’ distress than less empathetic people.  It’s a far cry from establishing any causal link between musical training and empathy, but it’s an interesting parallel that seems to point to a connection on some level.

Parsons et al. (In Press)

Reformulated Individual Research Question

Can music be used to augment the naturally empathetic qualities of joint activities?  Does dancing with someone make us more sensitive to their emotional needs?

1) Goldstein, T.R. & Yasskin, R. (in press). Another pathway to understanding human nature: Theatre and dance. In Tinio, P. & Smith, J. (Eds.), Cambridge Handbook of the Psychology of Aesthetics and the Arts. Cambridge, U.K.: Cambridge University Press.

Authored by my thesis adviser, this article looks at the performing arts and proposes that researchers examine them from the perspective of cognitive science.  Specifically, her research has a slant towards emotion regulation and empathy, and she discusses the anecdotal and correlational evidence for a positive effect of dance on empathy, while proposing experimental paradigms to examine a potential causal relationship between the two.

2) Witek, Maria A. G. (2009). Groove Experience: Emotional and Physiological Responses to Groove-Based Music. European Society for the Cognitive Sciences of Music, 573-582.

If rhythms can augment empathy between individuals, a likely mechanism is the music’s groove.  Witek analyzed the ability of groove to elicit the same emotional responses across participants but found something interesting: while each participant was able to identify and report a groove in the music, their evaluations of the music’s affective quality varied greatly.  Perhaps, then, it is not the urge to move (as produced by a groove) that affects our ability to interpret another’s emotional response.  Perhaps a rhythm that is more consistently “on the beat” is needed to let us “tune in” to those around us.

Empathy and Musical Rhythm: A Literature Review

The emotional and unifying powers of music have long been recognized. Militaries use steady beats to instill a sense of camaraderie in their soldiers; sports players use high-energy pulses to “get angry” before a game; filmmakers use swelling musical works to shape their audiences’ responses to a scene. In all of these instances of emotional influence, a key is the manipulation of the music’s rhythm to elicit the desired feeling in listeners. These compositions are designed to create the same emotional experience across many individuals, relying on shared principles of human cognition in order to do so. Because of these shared principles, it seems likely that music can be used to augment empathic responses in its listeners. This paper reviews the current literature on this topic, examining the intersection of empathy and musical rhythm to evaluate a possible direction for research in this area. Specifically, I seek to examine whether variations in musical rhythm can influence listeners’ interpretations of others’ emotions and, by extension, their empathic responses to other individuals.

Since ancient times, musicians, audiences, and philosophers have recognized the powerful emotional component of music (Perlovsky, 2010). People regularly describe songs using emotional vocabulary, defining their favorite tunes as “happy,” “upbeat,” “angry,” “sad,” or with a host of other affective terms. In fact, evolutionary psychologists have theorized that music evolved from the same systems as language, diverging from the more concretely semantic process of human language to become a more emotional and semantically abstract artifact of human cognition. Both music and speech rely upon similar ideas of rhythm and pitch to convey messages, though musical sounds can draw upon much wider interpretations of these notions in order to do so. This idea, called superexpressive voice theory, supposes that music holds such power over its listeners because it acts upon the linguistic parts of the brain in a way that is more expressive—that is, more emotional—than normal human language (Perlovsky, 2010). Several psychological mechanisms have been proposed to account for this feature of music, most notably six by researchers Juslin and Västfjäll (2008): brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory, and musical expectancy. While much research remains to be done to confirm that these mechanisms are in fact at play in music cognition, they provide a psychological framework with which to understand the other literature relevant to this topic.

This notion that music processing is due to universal features of human cognition is further supported by research conducted by Balkwill and Thompson (1999). These researchers asked American men and women of various age groups (all of whom were unfamiliar with Hindustani music) to listen to several Hindustani melodies and to evaluate the dominant emotions and relative rhythmic complexity of each piece. As a control, four experts in Hindustani music were also asked to evaluate each piece on the same bases; each expert also asserted that each recording used was a competent rendition of the piece. The thirty-four participants were found to be in agreement not only about the rhythmic complexity of each piece, but also about the dominant emotion expressed in each melody, regardless of whether they were experts, had only a passing familiarity with the genre, or had never heard that type of music before (Balkwill & Thompson, 1999). Furthermore, the emotions that the participants identified in each piece agreed with the emotions each piece was intended to convey, despite the vast differences in cultural background between the composers of each melody and the participants listening to them (Balkwill & Thompson, 1999). This suggests that emotional responses to music transcend cultural differences and instead draw upon universal psychological features of listeners.

From the perspective of musical rhythm, this makes sense, especially when one considers the phenomenon of musical entrainment. Clayton et al. (2005) broadly define entrainment as “a phenomenon in which two or more independent rhythmic processes synchronize with each other.” When listening to music, for example, a walking person will unconsciously fall into step with the beat of the song, entraining the rhythm into their own physicality. Reviewing literature from the field of ethnomusicology, these researchers also found that this propensity for entrainment occurs across cultures, again suggesting that a universal psychological process is at play (Clayton et al., 2005). Some research has suggested that this process actually helps music listeners focus their attention across domains, providing evidence for a possible means by which entrainment could lead to increased empathy (Escoffier et al., 2010). Participants in this study were presented with pictures of faces and houses, then asked to indicate whether each picture was oriented upright or had been inverted. In one condition, participants completed the task in silence; in another, a rhythm was played in the background and the images appeared on-beat; and in the third, the images appeared off-beat with the rhythm. Participants responded significantly more quickly to pictures presented in the on-beat condition than to those presented off-beat or in silence. That is, the presence of a synchronous musical rhythm was found to facilitate the focusing of attention on visual stimuli (Escoffier et al., 2010).

On an interpersonal level, entrainment is a key component of joint action theory, a psychological theory which attempts to explain how individuals are able to perform complex tasks in conjunction with other people even with incomplete or no communication between them (Knoblich et al., 2011). Knoblich et al. (2011) reviewed the literature in this field, finding that people tend to fall into synchronous patterns with one another even when they try not to, regardless of whether the task in question is dancing to music, walking together, or even just rocking in a chair side-by-side. They propose that this inclination towards interpersonal synchrony is also at play in empathetic responses between individuals.

Indeed, other research has supported the existence of a connection between musical entrainment and prosocial behavior. De Bruyn and colleagues (2008) worked with a group of elementary school children to test the effect of music on their social interaction and the effect of the level of their social interaction on their response to music. First, they empirically quantified the impact of social interaction on the children’s dancing as they listened to music, investigating the children’s intensity of movement and the amount of synchronization with the beat. This study had two conditions: individual, where the children were separated by screens; and social, where the children danced in a group of their peers. The researchers found that the social environment caused a quantifiable increase in both the intensity of the children’s movement and their level of beat-synchronization. Furthermore, they found an effect of the type of music played on the way the children embodied it; that is, the genre of music affected how the children danced in both the individual and social conditions (De Bruyn et al., 2008). More recent research has taken this a step further, suggesting that physical embodiment of music—a phenomenon called “groove”—can actually increase the empathetic responses of those grooving to the music (Sevdalis & Raab, 2013). However, because these experiments were not specifically testing for this effect, more research is warranted before we can draw a firm connection between musical rhythm and empathy.

In testing this connection, research from other areas of cognitive science sheds some light on the feasibility of various methods. While the field of emotion cognition has gone back and forth in recent years on whether or not bodily arousal responses are differentiated enough to allow for direct measurement of emotion, recent research has given credence to supporters of this technique. One possible experiment, therefore, would be to measure participants’ emotional responses to rhythmic stimuli; thus, we could test whether musical rhythms are actually able to elicit similar affective responses across individuals, or whether individuals simply learn through social cues to report certain kinds of emotional states based on the type of rhythm played (Harrison et al., 2010). By seeing whether participants actually experience similar emotional responses or simply report doing so, we can gain further insight into the intersection of music and empathy.

Another possibility, though, is to test how musical rhythm impacts individuals’ ability to accurately identify the emotions that others are experiencing. To this end, researchers could utilize facial emotion recognition tasks, usually used in abnormal psychology to test patients’ abilities to empathize with and understand others. Such tasks present participants with several images of faces coded as one of several emotions and ask that they evaluate the emotion presented. Participants are then scored as to how closely their answers resemble those of the average person (Mueser et al., 1996). Such a test could be useful to measure the effect of mediating factors, such as the presence of a musical rhythm, on the ability of listeners to identify the emotions of others.
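To make the scoring concrete, the logic of such a task can be sketched in a few lines of code. This is a minimal, hypothetical sketch, not the actual procedure of Mueser et al. (1996): the stimulus IDs, emotion labels, and the rule of scoring each participant against the modal (most common) label are all illustrative assumptions.

```python
from collections import Counter

# Hypothetical ratings: stimulus ID -> emotion label chosen by each rater.
# The "average person" answer for each face is taken as the modal label.
all_ratings = {
    "face_01": ["happy", "happy", "happy", "surprised"],
    "face_02": ["angry", "angry", "sad", "angry"],
    "face_03": ["sad", "sad", "sad", "sad"],
}

def consensus_labels(ratings):
    """Modal (most common) label per stimulus across all raters."""
    return {face: Counter(labels).most_common(1)[0][0]
            for face, labels in ratings.items()}

def recognition_score(participant, consensus):
    """Proportion of stimuli on which the participant matches consensus."""
    matches = sum(participant[face] == label
                  for face, label in consensus.items())
    return matches / len(consensus)

consensus = consensus_labels(all_ratings)
participant = {"face_01": "happy", "face_02": "sad", "face_03": "sad"}
print(recognition_score(participant, consensus))  # 2 of 3 match -> ~0.67
```

Mean scores across participants in each condition (e.g., with a congruent rhythm, an incongruent rhythm, or silence) could then be compared statistically.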

With this in mind, the question I seek to test is whether a rhythm can affect individuals’ interpretation of emotions expressed by other people. Specifically, if participants were shown images of faces coded as various emotions as they were played pieces of likewise-coded music, would they be able to more accurately (and reliably) interpret the expressions depicted in the faces?  Or, would their ability to do so be negatively impacted if they were played a piece of music that did not align emotionally with the facial expression shown? By exploring this question, we gain further insight into the psychological links between music, emotion, and empathy.

 

Works Cited

Balkwill, L., and Thompson, W.F. (1999). A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues. Music Perception: An Interdisciplinary Journal, 17(1), pp. 43-64.

Clayton, M., Sager, R., and Will, U. (2005). In time with the music: the concept of entrainment and its significance for ethnomusicology. European Meetings in Ethnomusicology, 11, pp. 3–142.

De Bruyn, L., Leman, M., and Moelants, D. (2008). Quantifying Children’s Embodiment of Musical Rhythm in Individual and Group Settings. In Miyazaki, K., Hiraga, Y., Adachi, M., Nakajima, Y., and Tsuzaki, M. (Eds.), Proceedings of ICMPC10: The 10th International Conference on Music Perception and Cognition. Sapporo, Japan.

Escoffier, N., Sheng, D. Y. J., and Schirmer, A. (2010). Unattended musical beats enhance visual processing. Acta Psychologica, 135, pp. 12–16.

Harrison, N. A., Gray, M.A., Gianaros, P.J., and Critchley, H.D. (2010).  The Embodiment of Emotional Feelings in the Brain.  The Journal of Neuroscience, 30(38), pp. 12878-12884.

Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), pp. 559-621.

Knoblich, G., Butterfill, S., and Sebanz, N. (2011). Psychological Research on Joint Action: Theory and Data. In B. Ross (Ed.), The Psychology of Learning and Motivation (Vol. 54, pp. 59-101).  Burlington, MA: Academic Press.

Mueser, K. T., Doonan, R., Penn, D.L., Blanchard, J.J., Bellack, A.S., Nishith, P., and DeLeon, J. (1996). Emotion Recognition and Social Competence in Chronic Schizophrenia. Journal of Abnormal Psychology, 105(2), pp. 271-275.

Perlovsky, L. (2010). Musical emotions: Functions, origins, evolution. Physics of Life Reviews, 7(1), pp. 2-27.

Sevdalis, V., & Raab, M. (2013). Empathy in sports, exercise, and the performing arts. Psychology of Sport and Exercise. doi: 10.1016/j.psychsport.2013.10.013.

Balch, W.R., & Lewis, B.S. (1996). Music-dependent memory: The roles of tempo change and mood mediation. Journal of Experimental Psychology: Learning, Memory, and Cognition, 22(6), pp. 1354-1363.

Abstract:

Music-dependent memory was obtained in previous literature by changing from 1 musical piece to another. Here, the phenomenon was induced by changing only the tempo of the same musical selection. After being presented with a list of words, along with a piece of background music, listeners recalled more words when the selection was played at the same tempo than when it was played at a different tempo. However, no significant reduction in memory was produced by recall contexts with a changed timbre, a different musical selection, or no music (Experiments 1 and 2). Tempo was found to influence the arousal dimension of mood (Experiment 3), and recall was higher in a mood context consistent (as compared with inconsistent) with a given tempo (Experiment 4). The results support the mood-mediation hypothesis of music-dependent memory.


Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), pp. 559-621.

Abstract:

Research indicates that people value music primarily because of the emotions it evokes. Yet, the notion of musical emotions remains controversial, and researchers have so far been unable to offer a satisfactory account of such emotions. We argue that the study of musical emotions has suffered from a neglect of underlying mechanisms. Specifically, researchers have studied musical emotions without regard to how they were evoked, or have assumed that the emotions must be based on the “default” mechanism for emotion induction, a cognitive appraisal. Here, we present a novel theoretical framework featuring six additional mechanisms through which music listening may induce emotions: (1) brain stem reflexes, (2) evaluative conditioning, (3) emotional contagion, (4) visual imagery, (5) episodic memory, and (6) musical expectancy. We propose that these mechanisms differ regarding such characteristics as their information focus, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure. By synthesizing theory and findings from different domains, we are able to provide the first set of hypotheses that can help researchers to distinguish among the mechanisms. We show that failure to control for the underlying mechanism may lead to inconsistent or non-interpretable findings. Thus, we argue that the new framework may guide future research and help to resolve previous disagreements in the field. We conclude that music evokes emotions through mechanisms that are not unique to music, and that the study of musical emotions could benefit the emotion field as a whole by providing novel paradigms for emotion induction.


Refined Idea

Question:

Is the ease with which participants can recall and replicate a musical rhythm impacted by their beliefs as to whether that rhythm was produced by a human or a computer?

Possible Theory, Conjecture, and Hypothesis:

Human beings find it easier to remember events containing emotional information than events lacking it because we prefer social conditions, environments, and interactions to asocial ones. Under this theory, we hypothesize that participants will be better at recalling and replicating rhythms they believe to be produced by humans than those they believe to be produced by computers, as they will ascribe more emotional context to the human-produced piece.

Alternatively, a computer can produce a piece that carries no variations in microtiming, expressive or otherwise. Under this alternative theory, we hypothesize that participants will find it easier to recall and replicate rhythms they believe were produced by a computer, as they will interpret such rhythms as exact and absolute.

Operationalization of First Hypothesis:
“… we hypothesize that participants will be better at recalling and replicating rhythms they believe to be produced by humans than those produced by computers…”

Participants = Students from Yale College and the Yale School of Music aged 18-25

Better Recall and Replication = We will ask participants to reproduce the rhythm exactly (to the best of their ability) after hearing it. We anticipate significantly higher accuracy on this task in the “human belief” condition(s). Further refinement of our method will follow a more thorough review of prior work.

Believe and Produced = Two recordings will be created: one by having a human play a rhythm, and one by having a computer generate the piece using the exact written timing of that rhythm. Participants will be told either nothing about the origin of the piece (control), that a computer produced the rhythm, or that a human produced the rhythm.
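One way this accuracy measure could be operationalized is as the mean absolute deviation between the onset times of the target rhythm and those of the participant’s reproduction. The sketch below is a minimal illustration under the assumption that tapped onsets are recorded as timestamps in seconds; the function name and example values are hypothetical, not part of our finalized method.

```python
def onset_deviation(target, reproduced):
    """Mean absolute timing error (in seconds) between a target rhythm's
    onsets and a participant's reproduction. Assumes both sequences start
    at t=0 and contain the same number of onsets."""
    if len(target) != len(reproduced):
        raise ValueError("reproduction must have the same number of onsets")
    errors = [abs(t - r) for t, r in zip(target, reproduced)]
    return sum(errors) / len(errors)

# Target rhythm: quarter, quarter, eighth, eighth, quarter at 120 BPM.
target = [0.0, 0.5, 1.0, 1.25, 1.5]
# A participant's reproduction, slightly off on the eighth notes.
reproduced = [0.0, 0.52, 0.97, 1.30, 1.49]
print(onset_deviation(target, reproduced))  # ~0.022 s mean error
```

Lower values indicate more faithful reproductions; the measure could then be compared across the control, “computer belief,” and “human belief” conditions.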

Possible Group Research Questions

1) Because of the entanglement of music and emotions, some research has suggested that background music can enhance memory.  What effect does this music’s “groove” have on the listener’s ability to retain information in working and/or long-term memory?

2) What effect does the “grooviness” of a song have on its “earworm” quality?

How do variations in musical rhythms affect individuals’ interpretations of others’ emotions?

1) Escoffier, N., Sheng, D. Y. J., and Schirmer, A. (2010). Unattended musical beats enhance visual processing. Acta Psychologica, 135, pp. 12–16.

In perhaps the most closely related study on this list, Escoffier et al. (2010) investigated whether and how a musical rhythm entrains a listener’s visual attention. Participants were presented with pictures of faces and houses and asked to indicate whether each picture’s orientation was upright or inverted while either silence or a musical rhythm played in the background. In the rhythm condition, pictures could occur off-beat or on a rhythmically implied, silent beat. Pictures presented in silence or off-beat were responded to more slowly than pictures presented on-beat, indicating that musical rhythm both synchronizes and facilitates concurrent processing of visual stimuli.

2) Knoblich, G., Butterfill, S., and Sebanz, N. (2011). Psychological Research on Joint Action: Theory and Data. In B. Ross (Ed.), The Psychology of Learning and Motivation (Vol. 54, pp. 59-101).  Burlington, MA: Academic Press.

This chapter in Ross’s (2011) volume reviews current theoretical concepts and empirical findings surrounding coordination and joint-action theory in order to provide a structured overview of the state of the field. It distinguishes between planned and emergent coordination. In planned coordination, agents’ behavior is driven by representations that specify the desired outcomes of joint action and the agent’s own part in achieving those outcomes. In emergent coordination, coordinated behavior arises from perception–action couplings that make multiple individuals act in similar ways, independently of joint plans. Either model could be used to analyze the role of musical entrainment, whether as facilitating emergent coordination or as serving as a nonverbal representation in planned coordination (although an emergent-coordination model seems more likely).

3) De Bruyn, L., Leman, M., and Moelants, D. (2008). Quantifying Children’s Embodiment of Musical Rhythm in Individual and Group Settings. In Miyazaki, K., Hiraga, Y., Adachi, M., Nakajima, Y., and Tsuzaki, M. (Eds.), Proceedings of ICMPC10: The 10th International Conference on Music Perception and Cognition. Sapporo, Japan.

These researchers empirically quantified the impact of social interaction on movements made by children while listening and responding to music, measuring the children’s intensity of movement and degree of synchronization with the beat in two conditions: individual, with children separated by screens, and social, with children moving together in groups of four to encourage social interaction. Data analysis showed that there is a social-embodiment factor that can be measured and quantified. Furthermore, the type of music affected the gestural response in both the individual and social contexts of the experiment. I find this interesting in that it shows that social interaction can affect music processing; now that the two are linked, I want to explore effects in the opposite direction.

4) Clayton, M., Sager, R., and Will, U. (2005). In time with the music: the concept of entrainment and its significance for ethnomusicology. European Meetings in Ethnomusicology, 11, pp. 3–142.

“Entrainment, broadly defined, is a phenomenon in which two or more independent rhythmic processes synchronize with each other.”  This article explores the importance of this process across subfields within ethnomusicology while drawing upon research from various other disciplines, including physics, linguistics, and psychology.

5) Harrison, N. A., Gray, M.A., Gianaros, P.J., and Critchley, H.D. (2010).  The Embodiment of Emotional Feelings in the Brain.  The Journal of Neuroscience, 30(38), pp. 12878-12884.

This study addresses Walter Cannon’s challenge to peripheral theories of emotion, namely that bodily arousal responses are too undifferentiated to account for the wealth of emotional feelings. Combining sophisticated measurement technologies, the authors find instead that the experiences of core disgust and body-boundary-violation disgust are physiologically distinguishable. This in turn provides evidence that emotional experience is biologically measurable and reveals a potential mechanism for consistent emotional embodiment across individuals listening to the same music.

6) Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), pp. 559-621.

This large literature review presents a theoretical framework through which music listening may be understood to induce emotions on several levels, from brain stem reflexes to music expectancy.  I first came across this article a year ago and its framework has helped to guide my understanding and exploration into my research question.

7) Balkwill, L., and Thompson, W.F. (1999). A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues. Music Perception: An Interdisciplinary Journal, 17(1), pp. 43-64.

In this cross-cultural study, judgments of emotion were significantly related to judgments of psychophysical dimensions (tempo, rhythmic complexity, melodic complexity, and pitch range) and, in some cases, to instrument timbre. The findings suggest that listeners are sensitive to musically expressed emotion in an unfamiliar tonal system and that this sensitivity is facilitated by psychophysical cues.

8) Perlovsky, L. (2010). Musical emotions: Functions, origins, evolution. Physics of Life Reviews, 7(1), pp. 2-27.

This article reviews current theories of music origins and the role of musical emotions in the mind, proposing a theory of musical origin based on a fundamental role of music in cognition and evolution of consciousness and culture.  It provides an evolutionary and psychological framework through which emotional embodiment and music may be understood.

9) Sevdalis, V., & Raab, M. (2013). Empathy in sports, exercise, and the performing arts. Psychology of Sport and Exercise. doi: 10.1016/j.psychsport.2013.10.013.

Another review article, this one provides a summary of the main findings from empirical studies that used empathy measurements in the domains of sports, exercise, and the performing arts (i.e., music, dance, and theatrical acting). Music, especially when participants have the ability to groove along to it, is shown to augment participants’ empathetic responses, although more research is needed to develop the model.

10) Mueser, K. T., Doonan, R., Penn, D.L., Blanchard, J.J., Bellack, A.S., Nishith, P., and DeLeon, J. (1996). Emotion Recognition and Social Competence in Chronic Schizophrenia. Journal of Abnormal Psychology, 105(2), pp. 271-275.

Going back to Article 1 on this list, this study provides a famous demonstration of the use of two kinds of emotion recognition tests. In this case, the tests were administered to participants with and without schizophrenia to quantify the disorder’s effect on emotion recognition and social competence. However, a similar method could be used to quantify musical rhythm’s effect on these two dimensions.