Cross-domain cognition and the effect of musical expertise

Front Psychol 2014 Jul 28;5:789
Musicians are more consistent: gestural cross-modal mappings of pitch, loudness and tempo in real-time

Küssner MB, Tidhar D, Prior HM, Leech-Wilkinson D
Department of Music, King’s College London, London, UK

Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures, accounting for the intrinsic link between movement and sound, are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli we asked 64 musically trained and untrained participants to represent pure tones, continually sounding and concurrently varied in pitch, loudness and tempo, with gestures while the sound stimuli were played. We hypothesized musical training to lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training which influenced consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

My Individual Research Question

Joint action theory is a widely accepted social psychology theory which attempts to explain how individuals coordinate their behavior to complete tasks in tandem. One fairly robust piece of evidence supporting the ideas put forth in joint action theory is interpersonal entrainment; that is, people have repeatedly been shown to have a natural proclivity to fall into synchrony with one another, in everything from walking speeds to speech patterns. On a cognitive and emotional level, this interpersonal entrainment manifests itself as, among other things, empathy and our ability to interpret facial expressions as emotive.

Similarly, as we’ve explored in class, researchers in the field of music cognition are working on models of musical entrainment, whereby the human body and mind adapt their rhythmic patterns to match those in a piece of music. This underpins expectation theory (by which we are able to detect and predict rhythms in music) and emotional embodiment (by which rhythmic meters produce physiological responses that are correlated with human emotions).

My question, then, is this: if a steady rhythm can produce a similar physiological response across individuals, can that rhythm affect those individuals’ interpretation of emotions expressed by other people? Specifically, if participants were shown images of faces coded as various emotions while they were played pieces of likewise-coded music, would they be able to interpret the expressions depicted in the faces more accurately (and reliably)? Conversely, would their ability to do so be negatively impacted if they were played a piece of music that did not align affectively with the facial expression shown? Put simply, how do varying tempos in musical rhythms affect individuals’ interpretations of others’ emotions?