Assignment for Tuesday & Thursday, November 4 & 6

Remember that there will be no regular class meeting on Thursday, November 6, due to the annual conference of the American Musicological Society (AMS) and the Society for Music Theory (SMT), which is taking place in Milwaukee, WI. HOWEVER, you are strongly encouraged to meet with your group to continue your work on experimental design; you may use the class space or meet in any place that is convenient for the work that needs to be done.

During Tuesday’s class meeting, we will review some methodology materials and take some time to discuss your work in progress, as needed. Please prepare the following:

1. If you have not already done so, complete your reading on Sixty Methodology Potholes and complete the corresponding task.

2. Read the materials on Sampling and complete the two tasks (types of sampling and sampling issues).
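As a concrete companion to the sampling handouts, here is a minimal sketch of two common sampling types (simple random and stratified). The participant pool, the `musician` stratifying variable, and the function names are invented for illustration; this is not part of the assigned materials.

```python
import random

def simple_random_sample(pool, n, seed=None):
    """Simple random sampling: every member of the pool has an
    equal chance of being selected."""
    return random.Random(seed).sample(pool, n)

def stratified_sample(pool, key, n_per_stratum, seed=None):
    """Stratified sampling: sample separately within each stratum
    (e.g., musicians vs. nonmusicians) so each group is represented."""
    rng = random.Random(seed)
    strata = {}
    for person in pool:
        strata.setdefault(key(person), []).append(person)
    return {k: rng.sample(members, n_per_stratum)
            for k, members in strata.items()}
```

For example, stratifying a pool of 20 hypothetical participants on a musician/nonmusician flag guarantees both groups appear in the sample, which a simple random draw does not.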

You should also browse through the handouts from today, and make note of any information pertinent to your group project. In particular, pay close attention to the materials on designing questionnaires.

Finally, as you continue your work on completing STEP 4 of the group projects, review the instructions carefully. Post the latest version of your protocol on your group’s blog and update as needed, including sound clips and questionnaire questions. This will make my review and feedback more efficient, especially as I will be away for most of next week. You should also read the new posting on STEP 5, which includes contact information as well as supplementary materials (e.g., Honing & Ladinig on strategies to run successful online experiments).

Group Project Citations

STEP 5: Implementation (Drafts Due Tuesday, November 11)

The draft of your experiment is due next week, Tuesday, November 11. Make this draft as complete as possible, embedding all testing instructions and questions in Qualtrics, as well as the participants’ questionnaire and debriefing. If some of your stimuli are not ready, insert placeholder questions. We will review your work in class on Tuesday and do some troubleshooting. The goal is to give your experiment a test run by the end of the week, so that data collection can start on Monday, November 17 at the latest.

Both experiments should include the following components:

– Statement about the pedagogical nature of the project and the anonymity of the data collection process.

– Statement that participants should feel free to stop at any time, without adverse effect to them. You may also share that incomplete data sets will not be used in the analysis.

– Participants’ questionnaire (at any point you feel is most appropriate; an option we did not discuss is to have it in the middle).

– Debriefing & free response option: Explain what the experiment was seeking to explore more specifically at the end and provide participants with the option to send comments.

– Contact information in case participants want to receive a copy of the report (with a time period when it will be available).

As you near the end of the experimental design phase, you might want to take some time to review the methodology handouts that were distributed in class. Are there any concepts that relate to your study that are not clear enough to you? Is there any methodological issue arising from your planned procedure that resembles an issue described in the handouts?

Here are the instructions prepared by Pam Patterson on how to post your stimuli on classes*v2 and integrate them in your survey:

Instructions On Uploading Audio Files and Capturing the URL

Don’t hesitate to seek expert help from the support staff at Yale! Here are the contacts of people we have consulted with:

Mike Laurello, School of Music (michael.laurello@yale.edu OR michaellaurello@gmail.com): Will help you with most aspects of stimuli preparation and music technology for pre- and post-processing of data.

Scott Petersen, MusTLab (scott.petersen@yale.edu): Scott is the supervisor of the Department of Music’s technology lab (4th floor). You can contact him for any issue related to using the lab.

Sherlock Campbell, CSSSI (sherlock.campbell@yale.edu): Can help you with all things statistics as well as Qualtrics. Don’t forget that there are also consultants in the center who can answer your questions and guide you through the steps of data analysis, if needed.

Pam Patterson, ITS (itg@yale.edu): Pam is an administrator for the course blog and can help you with any issues related to the course blog as well as using classes*v2 for your study (see above).

Rémi Castonguay, Gilmore Library (remi.castonguay@yale.edu): Can help you with database research if you need additional background sources and when it is time to relate your findings back to previous findings.

Here are a few additional instructional resources:

David Huron on Types of Behaviors

NOTE that part 5, on “self-report”, is especially relevant for our work. You can find many more useful videos on empirical music research with David Huron here.

Yale’s link to lynda.com

Also, two sources on doing web-based research; the first one is especially helpful as it provides some strategies to improve success and reliability of data collection:

Honing & Ladinig (2008), “The potential of the internet for music perception research: A comment on lab-based versus web-based studies”

Germine, Nakayama, Duchaine, Chabris, Chatterjee, & Wilmer (2012), “Is the Web as good as the lab? Comparable performance from Web and lab in cognitive-perceptual experiments”

 

The internal clock and subjective tempo: Effects of arousal and aging

To read the poster, click here.

First author: Kelly Jakubowski

Goldsmiths, University of London, London, UK

Co-authors: Andrea Halpern, Lauren Stewart

Session: B1 – LANGUAGE, LEARNING AND MEMORY

Summary: Human time judgments are affected by various psychological factors. Our study tested whether factors known to influence time perception would also affect the tempo at which a familiar tune ‘sounds right’ (hereafter referred to as ‘subjective tempo’). Two experiments tested the effects of 1) physiological arousal and 2) age on subjective tempo for common tunes such as Happy Birthday. It was hypothesized that 1) arousal induced via exercise would increase subjective tempo relative to a control task (anagrams) and that 2) subjective tempo would decrease with age. All participants completed a perception task, in which the tempi of tunes heard aloud were adjusted in real time, and an imagery task, in which the speed of a click track was adjusted to match the tempi of imagined tunes. Subjective tempo was positively associated with increased arousal, but was not related to age. Results are discussed in relation to pacemaker-accumulator models of timing and theories of cognitive slowing.

Personality influences career choice: Sensation seeking in professional musicians

Peter Vuust, Line Gebauer, Niels Chr. Hansen, Stine Ramsgaard Jørgensen, Arne Møller, and Jakob Linnet. 2010. Personality influences career choice: Sensation seeking in professional musicians. Music Education Research, 12, 2, 219-230.

ABSTRACT: Despite the obvious importance of deciding which career to pursue, little is
known about the influence of personality on career choice. Here we investigated
the relation between sensation seeking, a supposedly innate personality trait, and
career choice in classical and ‘rhythmic’ students at the academies of music in
Denmark. We compared data from groups of 59 classical and 36 ‘rhythmic’
students, who completed a psychological test battery comprising the Zuckerman
Sensation Seeking Scale, the Spielberger State-Trait Anxiety Inventory, as well as
information about demographics and musical background. ‘Rhythmic’ students
had significantly higher sensation seeking scores than classical students,
predominantly driven by higher boredom susceptibility. Classical students
showed significantly higher levels of state anxiety, when imagining themselves
just before entering the stage for an important concert. The higher level of anxiety
related to stage performance in classical musicians was not attributed to group
differences in trait anxiety, but is presumably a consequence of differences in
musical rehearsing and performance practices of the two styles of music. The
higher sensation seeking scores observed in ‘rhythmic’ students, however, suggest
that personality is associated with musical career choice.

A Review of Rhythmic Memory

The interaction between music and memory has been much researched and discussed. More specifically, studies have examined how the brain remembers a rhythm and what factors affect how well a rhythm is remembered. The pathways of the brain engaged when listening to or reproducing a rhythm have been traced by numerous experiments. These studies of brain mechanisms have been advanced by examining individuals with certain brain disorders thought to affect rhythmic perception. Beyond observing the systems of the brain, experiments have been conducted to determine which factors, such as presentation and complexity, affect rhythm memory, and to what extent. It has been established that rhythm is to a degree a component of remembering a piece of music and that this skill varies among individuals of different age groups, musical abilities, and learning levels. A connection made in recent studies is that between musical discrimination abilities and language-related skills. People with certain language deficits show corresponding shortcomings in rhythmic synchronization and recognition. Disorders not directly related to language, such as autism, have also been shown to parallel rhythmic ability. This knowledge of the association between music and levels of learning or social ability has also given rise to the theory that musical intervention may provide benefits and assistance to affected individuals. This review first examines the mechanisms of the brain involved in rhythm perception and how we interpret rhythms of different kinds. It then discusses what is known about the factors that influence how well a rhythm can be recalled. Finally, it discusses developmental disorders that may be associated with rhythm cognition and how music is being used to address these conditions.

Research on rhythm has demonstrated how memory plays a part in the division and subdivision of music. Spontaneous groupings of rhythms arise within a piece of music, which reveals limitations in our memory (Krumhansl 2000). For us to make sense of what we are hearing and form expectations about what we are about to hear, the mind must group the beats and rhythms of music in a coherent manner. Simpler beat ratios such as 1:2 are also easier to imitate than more complex ratios such as 1:3 (Krumhansl 2000). Performance differences in rhythmic ratio imitation experiments emerge among individuals with different musical backgrounds, suggesting a disparity in the ability to recall a rhythm among groups with varying musical experience. This difference is further supported by an experiment conducted by Habibi, Wirantana, & Starr (2014). In this study, the researchers monitored behavioral and brain activity in both musicians and nonmusicians during rhythmic variations in pairs of unfamiliar melodies. Musicians greatly outperformed nonmusicians in detecting these deviations and showed greater activity in the frontal-central areas of the brain. These results suggest that musical training may affect the brain activity involved in processing temporal irregularities, even in unfamiliar melodies (Habibi, Wirantana, & Starr 2014). Attempts have also been made to divide rhythms into a hierarchy placed in different kinds of memory (Brower 1993). The ways in which rhythm has been defined and divided provide insight into how we perceive rhythms and why certain rhythms are easier to remember than others.

Many experimental studies of participants’ ability to reproduce rhythms refer to rhythms that are “similar.” The term “similar” may seem subjective at first, which is a potential problem for these studies. Cao, Lotstein, & Johnson-Laird (2014) set out to define similar rhythms objectively and to identify the specific characteristics that make rhythms related. Their experiments showed that rhythms of the same “family” share the same pattern of interonset intervals, the time between the starts of two adjacent tones (Cao, Lotstein, & Johnson-Laird 2014). Their experiments also revealed that errors in reproducing rhythms by tapping often yielded rhythms of the same family. This shows that temporal patterns play a major role in how we perceive rhythms as similar, whether consciously or unconsciously.
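The interonset-interval idea can be sketched in code. This is a toy illustration of the concept only, not Cao, Lotstein, & Johnson-Laird’s actual procedure; the function names and the tempo-normalized comparison are assumptions made for the example.

```python
def interonset_intervals(onsets):
    """Interonset intervals: the time between the starts of
    adjacent tones, given a list of onset times in seconds."""
    return [round(b - a, 6) for a, b in zip(onsets, onsets[1:])]

def same_ioi_pattern(onsets_a, onsets_b):
    """Two rhythms share an IOI pattern if their interval sequences
    match after normalizing for overall tempo."""
    ia = interonset_intervals(onsets_a)
    ib = interonset_intervals(onsets_b)
    if len(ia) != len(ib):
        return False
    scale = sum(ib) / sum(ia)  # tempo ratio between the two rhythms
    return all(abs(x * scale - y) < 1e-6 for x, y in zip(ia, ib))
```

Under this sketch, a rhythm played at double tempo still matches its original pattern, which mirrors the intuition that family membership is about relative, not absolute, timing.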

Manipulating aspects of a rhythm has a variety of effects on how well participants can remember and reproduce it. The best cue for identifying a piece of music is the combination of rhythm and pitch (Hébert & Peretz 1997). In their experiment, Hébert and Peretz (1997) demonstrated that rhythm alone tends to be an insignificant indicator of a musical excerpt and less effective than pitch alone. On the other hand, other studies demonstrate the strength of rhythm over pitch in melody recall. Silverman (2010) found that participants performed best when presented with only the rhythm of a melody. In this study, participants listened to six treatment conditions of a melodic excerpt and demonstrated their memory of the different conditions through a digit recall task. Participants showed the greatest error in the pitch-only and combined rhythm-and-pitch conditions (Silverman 2010). Familiarity had no effect in this experiment. Music majors also outperformed non-music majors, another indication that musical experience plays a role in rhythmic recall. One specific manipulation shown to affect recall is how the rhythm is presented. Shehan (1987) showed that in second- and sixth-grade students, rhythm reproduction performance was much higher for a combination of aural and visual presentation than for either type of presentation alone. The sixth-grade participants also learned the rhythms twice as quickly as the second-grade participants (Shehan 1987). This reveals that maturation and age have a large effect on the ability to remember and recall a rhythm. Such findings could be used to improve music education by presenting rhythms to children in ways that are more efficient for learning.

Rhythmic patterns and memory capabilities have been examined in individuals with various developmental or learning disabilities. One such group is people with amusia, a loss or impairment of musical capabilities usually caused by brain disease or injury. Experiments testing individuals with amusia have suggested that the pitch and rhythm processing centers in the brain are independent of each other. Murayama, Kashiwagi, Kashiwagi, & Mimura (2004) found preserved rhythmic memory in a patient with amusia even though pitch memory was impaired, supporting the theory that pitch and rhythm operate in separate neural subsystems (Murayama, Kashiwagi, Kashiwagi, & Mimura 2004). Rhythmic processing appears to be spared in pitch deafness as well (Phillips-Silver, Toiviainen, Gosselin, & Peretz 2013). However, other experiments have observed extreme difficulty among amusic individuals in synchronizing to musical rhythms. No such difficulty was seen in synchronizing to noise bursts, which suggests that timing impairments among amusic people are limited to music (Bella & Peretz 2003). These sometimes conflicting results point to the need for further experimentation, perhaps with stronger manipulations.

Many studies have explored the relationship between music and learning. These studies have focused on children, since childhood is a period of significant learning. I will focus on the studies examining the effects of dyslexia, a developmental reading disorder, on music perception. It has been shown that in children with dyslexia, musical discrimination predicts phonological skills (Forgeard, Schlaug, Norton, Rosam, & Iyengar 2008). Accurate perception of musical structures is related to literacy development in children (Huss, Verney, Fosker, Mead, & Goswami 2011). Also, children without dyslexia generally outperform those with dyslexia in rhythm recall tasks. The correlation of linguistic and musical abilities indicates that linguistic and non-linguistic auditory input are connected and involved in tasks that relate directly to developmental problems, such as reading (Anvari, Trainor, Woodside, & Levy 2002). Results such as these have prompted research to test whether musical intervention in children with disorders such as dyslexia may help improve reading or linguistic skills. One such experiment introduced a short-term music curriculum to second-grade students with and without a specific learning disability (Register, Darrow, Swedberg, & Standley 2007). Significant improvements in word knowledge and reading skills were observed in both groups, showing that improved musical skills may also translate into improved linguistic skills.

Much progress has been made in the study of memory and rhythm. In particular, the connections of rhythmic perception and memory with skill areas outside of music, such as language, are now better understood. These results can be used to improve education and reading skills in youth, applications that will hopefully prove beneficial in the near future.

 

References

Shehan, P. (1987). Effects of rote versus note presentation of rhythm learning and retention. Journal of Research in Music Education, 35(2), 117-126.

Silverman, M. (2010). The effect of pitch, rhythm, and familiarity on working memory and anxiety as measured by digit recall performance. Journal of Music Therapy, 47(1), 70-83.

Cao, E., Lotstein, M., & Johnson-Laird, P. (2014). Similarity and families of musical rhythms. Music Perception, 31(5), 444-469.

Krumhansl, C. (2000). Rhythm and pitch in music cognition. Psychological Bulletin, 126(1), 159-179.

Huss, M., Verney, J., Fosker, T., Mead, N., & Goswami, U. (2011). Music, rhythm, rise time perception and developmental dyslexia: Perception of musical meter predicts reading and phonology. Cortex, 47(6), 674-689.

Hébert, S., & Peretz, I. (1997). Recognition of music in long-term memory: Are melodic and temporal patterns equal partners? Memory and Cognition, 25(4), 518-533.

Brower, C. (1993). Memory and the Perception of Rhythm. Music Theory Spectrum, 15(1), 19-35.

Habibi, A., Wirantana, V., & Starr, A. (2014). Cortical Activity During Perception of Musical Rhythm: Comparing Musicians and Nonmusicians. Psychomusicology: Music, Mind & Brain, 24(2), 125-135.

Phillips-Silver, J., Toiviainen, P., Gosselin, N., & Peretz, I. (2013). Amusic does not mean unmusical: Beat perception and synchronization ability despite pitch deafness. Cognitive Neuropsychology, 30(5), 311-331.

Bhide, A., Power, A., & Goswami, U. (2013). A rhythmic musical intervention for poor readers: A comparison of efficacy with a letter-based intervention. Mind, Brain, and Education, 7(2), 113-123.

Anvari, S., Trainor, L., Woodside, J., & Levy, B. (2002). Relations among musical skills, phonological processing, and early reading ability in preschool children. Journal of Experimental Child Psychology, 83(2), 111-130.

Bella, S., & Peretz, I. (2003). Congenital Amusia Interferes with the Ability to Synchronize with Music. Annals of the New York Academy of Sciences, 999, 166-169.

Register, D., Darrow, A., Swedberg, O., & Standley, J. (2007). The Use of Music to Enhance Reading Skills of Second Grade Students and Students with Reading Disabilities. Journal of Music Therapy, 44(1), 23-37.

Forgeard, M., Schlaug, G., Norton, A., Rosam, C., Iyengar, U., & Winner, E. (2008). The Relation Between Music and Phonological Processing in Normal-Reading Children and Children with Dyslexia. Music Perception, 25(4), 383-390.

Murayama, J., Kashiwagi, T., Kashiwagi, A., & Mimura, M. (2004). Impaired pitch production and preserved rhythm production in a right brain-damaged patient with amusia. Brain and Cognition, 56(1), 36-42.

 

 

Embodied Cognition and Kinesthetic Motion Literature Review

Slow, fast, fluid – these adjectives can be applied to the rhythm, tempo, and articulation of either movement or music. Indeed, there is little dispute that the auditory and vestibular systems are linked. Human movement studies have been involved in everything from pedagogical approaches to memory and entrainment. This literature review addresses how physical body movements can be linked to music, touching upon embodied cognition, physical movement and motion capture technology, how movement to music affects beat perception, developmental studies of rhythmic performance, and the neural substrates behind rhythm affecting motor behavior. Brain areas traditionally assumed to be involved only in performing kinesthetic actions are now being linked to auditory beat perception, and these neuroscience studies are being used alongside behavioral studies showing how body movements can help parse the metric structure of music (Toiviainen 2010).

Leman (2008) focuses on the presence of goal-directed action in music perception, with embodied cognition assuming interaction between an organism and its environment. Leman also mentions Hanslick’s theory of moving sonic forms: just as dance is an undefined structure of form relationships, so is music. An organism’s reaction to the moving sonic form of music is corporeal, supporting the view that embodied cognition is shaped by aspects of the body. Under the premise that movement can enhance listening, a study attempting to measure vestibular influence on auditory metrical interpretation (Phillips-Silver & Trainor, 2008) found that movement of the head, but not the legs, affects meter perception. Drawing upon previous work showing that body movement can help disambiguate metrically ambiguous rhythmic sound patterns, Phillips-Silver & Trainor (2005, 2007) were able both to isolate the vestibular system and to test without any vestibular input, demonstrating that vestibular and auditory information are indeed integrated in perception.

Ranging from spontaneous to deliberate body movements, dance is a form of corporeal interpretation of music that can be captured by various technological methods. Eerola et al. (2006) investigated the corporeal movement of toddlers to music using a high-resolution motion capture system. Toiviainen et al. (2010) applied kinetic analysis, body modeling, dimensionality reduction, and signal processing to data acquired by attaching reflectors to 28 joint markers on participants’ bodies. Eigenmovements, according to Alexandrov et al. (2001), are “movements along eigenvectors of the motion equation.”

A high-resolution motion capture system was used in the 2010 Toiviainen study to identify the most typical movement patterns, or eigenmovements, synchronized to different metrical (beat) levels. Principal components (PCs) are a reduced set of uncorrelated variables derived from a larger set of correlated variables; the first five here pertained to rotation of the upper torso, lateral swaying of the body, mediolateral arm movement, and vertical arm movement (the fourth component did not vary significantly). The beat-level data can be summarized as follows: the one-beat level corresponded with mediolateral and vertical arm movements, the two-beat level with mediolateral arm movements and rotation of the upper torso, and the four-beat level with lateral swaying of the body and rotation of the upper torso. This observation was in line with their hypothesis that “faster metric levels are embodied in the extremities, and slower ones in the central parts of the body.” The torso’s significant mass, and thus kinetic energy, can be considered in terms of the previously mentioned study’s focus on vestibular motion (in connection with the torso). Even a relatively early motion capture study, a virtual dance and music environment at UC Irvine (Bevilacqua et al., 2001), used data streams from acceleration sensors placed on strategic body parts to transform motion into sound.
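The dimensionality-reduction step in this kind of analysis can be sketched as a toy PCA over motion-capture data. This is an illustrative reconstruction under stated assumptions, not the Toiviainen et al. pipeline: the function name, synthetic swaying data, and three-coordinate marker layout are invented for the example.

```python
import numpy as np

def eigenmovements(positions, n_components=2):
    """Toy PCA over motion-capture data: rows are time frames,
    columns are joint-marker coordinates. Returns the top principal
    components ("eigenmovements") and the fraction of variance
    each explains."""
    X = positions - positions.mean(axis=0)   # center each coordinate
    cov = X.T @ X / (len(X) - 1)             # covariance across coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]        # sort by explained variance
    explained = eigvals[order] / eigvals.sum()
    return eigvecs[:, order[:n_components]], explained[:n_components]
```

On synthetic data where two coordinates sway together (one with twice the amplitude) plus a little noise, the first component recovers that shared swaying direction and explains nearly all the variance, which is the sense in which a dominant eigenmovement summarizes a dancer’s motion.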

Mitchell et al. (2001) postulate that similar emotions generated by music and dance can serve as a means of matching them, suggesting that their simultaneous presentation might increase the chances of a match even with few similarities. The cross-modality mainly taken into account is emotion, presented as a representation of visual, auditory, or kinesthetic imagery that could potentially serve as a connector in memory between “temporally dissociated visual observations of a dance and auditory experience of the music that inspired it.” There may also be a correlation between movement and ‘groove,’ keeping in mind that some rhythms may be inhibited by the additional stimulus of movement (Petr et al., 2011).

In a study by Zatorre et al. (2006), the auditory and dorsal premotor cortices were activated for longer tap times (louder tones). The initial hypothesis was that more salient meters would most affect movement entrainment, also modulating the brain regions driven by these auditory–motor interactions. Five parametric levels of metric saliency were created to test this hypothesis by increasing the contrast in sound intensity between accented and unaccented notes. Ultimately, the posterior STG and dPMC showed the most functional connectivity in auditory–motor interactions. These findings can also be related to neural components such as “mirror neurons,” given the muscle memory enhanced by repetition of movement, for example by drummers reproducing the exact same sound at the same tempo.

In conclusion, a clear distinction needs to be made about what kind of movement is being integrated with auditory stimuli. Movements follow a hierarchical organization depending on the proximal/distal characteristics of the limb used (Peckel et al., 2014), and can even depend on the loudness of the tone. Music “has a pervasive tendency to rhythmically engage our body” (Dalla Bella et al., 2013), but we are still not able to fully pin down the neural substrates involved, in part because cross-modal areas like the premotor cortex are involved in so many bodily functions. Current studies are focusing on modeling the hierarchically organized temporal patterns induced by external rhythms. Questions to take away include: if new temporal patterns are presented, are they grounded in past, known patterns, and can movement be brought into this same line of inquiry?


A Review of Musical Narrative


From the beginning of our species, humans have been telling stories; we’re obsessed with them.  From ancient origin myths to movies and television, Greek tragedies to Broadway, and papyrus scrolls to paperback novels, we tell stories in all sorts of ways, and we can’t get enough of them.  We can see narrative all around us: in the stories we enjoy in books, movies, and theater productions, but also in the histories we teach and pass down, in the way we communicate daily through the telling of stories, and in the self-narrative of memories that we are constantly making and remaking for ourselves.  With stories we find meaning, we imagine, and we emote; storytelling is uniquely human, and it evokes the very behaviors generally thought to define what makes us human.

This phenomenon hasn’t gone unnoticed, though; story and narrative have been topics of scholarly thought and research for millennia.  Aristotle’s Poetics is an in-depth analysis of the elements of stories and their effects, and countless English and Humanities professors have devoted themselves to studying how narratives are created, how they work, and why they’re important.  Gratier et al. (2008), following Bruner (1990), define narrative as “a fundamental mode of human collective thinking — and acting — and that its basic function is the production of meaning or ‘world making’.”[1] Stephen Malloch defines narrative as fundamentally temporal and intersubjective: “Narratives are the very essence of human companionship and communication. Narratives allow two persons to share a sense of passing time, and to create and share the emotional envelopes that evolve through this shared time.”[2]  These are the concepts driving my experiment, though I will be equating the terms “narrative” and “story”, and referring to them in the sense of a perceived chain of causally connected events (with “event” kept as broad as possible) by or from which we generate meaning.

A particularly fascinating area of research concerned with narrative is in relation to music.  Michael Imberty says, “All interactive musical communication has a regular implicit rhythm that has been called the pulse of the interaction. It presents also a sequential organization whose units are most often shapes or melodic contours. Finally, it transmits something like a content that can be described as narrative.”[3] For the purposes of this literature review, I will be focusing on two aspects of narrative in music, meaning and emotion, as these two subjects are the most researched and most pertinent to my experiment on narrative in music.  Questions about how music produces meaning, why music arouses our emotions, and how music can tell stories without words are huge questions that researchers continue to pursue today. However, there is debate over whether one can even truly speak of narrative in music; as Aniruddh Patel notes in his book Music, Language, and the Brain, there is a wide spectrum of ways people approach the question of meaning in music.  There are those who argue that music cannot be meaningful, nor even meaningless, as “meaning” is defined by these researchers as the symbolic connection of an idea or concept represented by a sound or collection of sounds.  Music is unable to do this; an F major chord doesn’t readily translate to any one idea or concept other than a chord characterized by the notes F, A, and C.  Others argue for a more inclusive definition of meaning, stating that meaning occurs whenever an object or event brings something to mind other than the object/event itself.
This would imply that meaning is “inherently a dynamic, relational process,” that meaning changes with context and interpretation, and that meaning isn’t something contained within the object or event, but rather something generated by the observer.[4]  These are only a couple of the many theories of musical narrative and meaning, but they serve to demonstrate the wide variety of approaches that have been used to try to understand this phenomenon.  Over the course of this literature review, I will review a few more of these theories, as well as discuss the methods and implications of several experiments that have investigated the intersection of narrative, music, and emotion.

Many investigations into this subject take the form of intellectual conjectures or theories, followed by analyses of different pieces of music.  For example, Candace Brower posits a theory of how musical meaning, or metaphor, is created, relying on two ideas from cognitive science: that pattern recognition and matching play a part in thought, and that we map our bodily experiences onto the patterns of other domains.  Thus, through a mix of intra-domain mapping (matching patterns to patterns heard previously in the piece, as well as matching patterns from the piece to patterns conventionally found in music) and cross-domain mapping (matching patterns of music onto bodily experiences, i.e., the idea of strong and weak beats, higher and lower pitches, expansion and contraction, etc.), we create musical meaning.  Brower then goes on to show these concepts at work in an analysis of Schubert’s “Du bist die Ruh”.  For example, she points out the suspensions found throughout the piece as indicators of “resisting gravity”, a very physical concept.  This is an example of cross-domain mapping, which helps listeners construct a narrative, but she also claims that the narrative is told through the varied repetition of the melody, for instance the change from an A♭ to an A♮ symbolizing the blocking of an attempted move to a more stable area.  It is through these two processes that we unconsciously analyze a piece of music as we listen, creating narrative from music even without lyrics (though in this case, the piece did have lyrics).[5]

Another, more neurological approach is to use brain scans, usually taken as the music is played, to see which brain regions are activated at certain points in the music.  This relies on a property that stories and music share: both unfold in time.  The theory behind many of these studies is that narrative and emotion in music are created through both the fulfillment and denial of expectations set up by the music, an idea that reaches back to Leonard Meyer’s discussion of musical meaning and emotion.[6]  In particular, experiments have looked at the phenomenon of “chills,” when a passage in a piece of music produces a physiological response of shivers and possibly even tearing up.  In one such study, ten subjects identified musical passages that consistently gave them chills; these were played while their brain activity was monitored (mostly with PET scans), with music that did not give them chills serving as a control.  These chills produced activity in the nucleus accumbens, a kind of “reward center” in the brain, and elicited brain responses similar to those of joy and euphoria.  Such scans show that there are distinct brain states for music, and that they match up well spatially with states associated with emotional responses to other stimuli.  In addition, the brain areas associated with listening to music are believed to be evolutionarily ancient structures, suggesting that music somehow appropriates an older system in order to give us pleasure.[7]

A third theory, put forward by Laura-Lee Balkwill and William Forde Thompson, states that emotion and meaning are communicated in music through a combination of universal and cultural cues.  In other words, some cues within music are recognized worldwide as signaling certain emotions, while others are specific to particular cultures, such as Western musical culture.  To test this theory, Balkwill and Thompson set up an empirical study in which Western listeners heard 12 Hindustani ragas, each intended to convey one of four emotions: joy, sadness, anger, or peace.  The subjects rated each piece for these four emotions, as well as for four attributes of the music: tempo, rhythmic complexity, melodic complexity, and pitch range.  The results showed that subjects were sensitive to three of the four intended emotions (joy, sadness, and anger), and that these judgments of emotion were related to their perception of the musical attributes.  This suggests that listeners are able to extract emotion from unfamiliar music using universal cues, even in the absence of their usual cultural cues.[8]

For all the theories out there, of which the above are only a few, we seem to possess a very healthy folk knowledge of meaning, emotion, and narrative in music.  Many studies with children provide evidence of this.  By the age of 11–13, children can reliably match stories with different structures (directed action with solution and closure; action without direction or closure; and no action or direction) to three different pieces of Western music (Debussy’s La fille aux cheveux de lin; Chopin’s Prelude in G, op. 28; and Scriabin’s Prelude, op. 74, no. 4).  Results showed a very high degree of agreement among the children: the Chopin piece was matched with the well-structured story (directed action and closure), the Debussy piece with the story with no action or direction, and the Scriabin piece with the story with action but no direction or closure.[9]  Even by the age of 5–6, children are able to use the perceived emotion in music to judge other stimuli.  In one study, children heard a neutral story read aloud while sad music (minor mode, slower tempo), happy music (major mode, faster tempo), or no music played.  They were then asked questions about the story and told to pick a sad, happy, or neutral face to describe certain moments in it.  Children who heard the happy music were more likely to interpret the story and its character as happier, while those who heard the sad music interpreted them as sadder.[10]  In a study by Mayumi Adachi and Sandra Trehub, children of different ages and of no particular musical ability were asked to perform either “Twinkle, Twinkle, Little Star” or the alphabet song, once to make people feel happy and once to make them feel sad.  These renditions were recorded both aurally and visually, and each was presented to other children as well as to adults, who judged which version sounded happier.  The children were not only able to produce versions of the songs that effectively communicated happiness or sadness, but listeners were also able to judge this from both audio and visual input.  Older children and adults performed better than younger ones, providing evidence that our perception of emotion, meaning, and narrative in music is a skill that develops over time.[11]

As is clear from this review, a fair amount of research has been done on this subject, spanning a wide breadth of methods and topics.  With my experiment, I hope to further this body of literature and provide a more concrete look at the interaction of narrative and music.

 

Bibliography

[1] Gratier, Maya, and Colwyn Trevarthen. “Musical Narratives and Motives for Culture in Mother-Infant Vocal Interaction.” Journal of Consciousness Studies 15.10-11 (2008): 122-58. PsycARTICLES. Web.

[2] Malloch, Stephen N. “Mothers and Infants and Communicative Musicality.” Musicae Scientiae, Special Issue: Rhythm, Musical Narrative, and the Origins of Human Communication (1999-2000): 29-57. Print.

[3] Imberty, Michel, and Maya Gratier. “Narrative in Music and Interaction: Editorial.” Musicae Scientiae 12.1 Suppl. (2008): 3-13. PsycARTICLES. Web.

[4] Patel, Aniruddh D. Music, Language, and the Brain. Oxford: Oxford UP, 2008. Print.

[5] Brower, Candace. “A Cognitive Theory of Musical Meaning.” Journal of Music Theory 44.2 (2000): 323. Répertoire International de Littérature Musicale. Web.

[6] Meyer, Leonard B. Emotion and Meaning in Music. Chicago: U of Chicago, 1956. Web.

[7] Malloch, Stephen, and Colwyn Trevarthen. “Brain, Music, and Musicality: Inferences from Neuroimaging.” Communicative Musicality: Exploring the Basis of Human Companionship. Oxford: Oxford UP, 2009. N. pag. Répertoire International de Littérature Musicale. Web.

[8] Balkwill, Laura-Lee, and William Forde Thompson. “A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.” Music Perception: An Interdisciplinary Journal 17.1 (1999): 43-64. JSTOR. Web.

[9] Ziv, Naomi. “Narrative and Musical Time: Children’s Perception of Structural Norms.” Proceedings of the Sixth International Conference on Music Perception and Cognition (2000): n. pag. Web.

[10] Ziv, Naomi, and Maya Goshen. “The Effect of ‘Sad’ and ‘Happy’ Background Music on the Interpretation of a Story in 5 to 6-year-old Children.” British Journal of Music Education 23.03 (2006): 303. PsycARTICLES. Web.

[11] Adachi, Mayumi, and Sandra E. Trehub. “Decoding the Expressive Intentions in Children’s Songs.” Music Perception: An Interdisciplinary Journal 18.2 (2000): 213-24. Web.

Empathy and Musical Rhythm: A Literature Review

The emotional and unifying powers of music have long been recognized. Militaries use steady beats to instill a sense of camaraderie in their soldiers; athletes use high-energy pulses to “get angry” before a game; filmmakers use swelling scores to shape their audiences’ responses to a scene. In all of these instances of emotional influence, a key element is the manipulation of the music’s rhythm to elicit the desired feeling in listeners. These compositions are designed to create the same emotional experience across many individuals, relying on shared principles of human cognition to do so. Because of these shared principles, it seems likely that music can be used to augment empathic responses in its listeners. This paper reviews the current literature on this topic, examining the intersection of empathy and musical rhythm to evaluate a possible direction for research in this area. Specifically, I seek to examine whether variations in musical rhythm can influence listeners’ interpretations of others’ emotions and, by extension, their empathic responses to other individuals.

Since ancient times, musicians, audiences, and philosophers have recognized the powerful emotional component of music (Perlovsky, 2010). People regularly describe songs in emotional vocabulary, calling their favorite tunes “happy,” “upbeat,” “angry,” “sad,” or a host of other affective terms. In fact, evolutionary psychologists have theorized that music evolved from the same systems as language, diverging from the more concretely semantic process of human language to become a more emotional and semantically abstract artifact of human cognition. Both music and speech rely upon similar notions of rhythm and pitch to convey messages, though musical sounds can draw upon much wider interpretations of these notions. One such idea, superexpressive voice theory, supposes that music holds such power over its listeners because it acts upon the linguistic parts of the brain in a way that is more expressive—that is, more emotional—than normal human language (Perlovsky, 2010). Several psychological mechanisms have been proposed to account for this feature of music, most notably six by Juslin and Västfjäll (2008): brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory, and musical expectancy. While much research remains to be done to confirm that these mechanisms are in fact at play in music cognition, they provide a psychological framework with which to understand the other literature relevant to this topic.

This notion that music processing rests on universal features of human cognition is further supported by research conducted by Balkwill and Thompson (1999). These researchers asked American men and women of various age groups (all of whom were unfamiliar with Hindustani music) to listen to several Hindustani melodies and to evaluate the dominant emotions and relative rhythmic complexity of each piece. As a control, four experts in Hindustani music were asked to evaluate each piece on the same bases; each expert also confirmed that each recording was a competent rendition of the piece. The thirty-four participants were found to be in agreement not only about the rhythmic complexity of each piece, but also about the dominant emotion expressed in each melody, regardless of whether they were experts, had only a passing familiarity with the genre, or had never heard that type of music before (Balkwill & Thompson, 1999). Furthermore, the emotions that the participants identified in each piece agreed with the emotions each piece was intended to convey, despite the vast differences in cultural background between the composers of the melodies and the participants listening to them (Balkwill & Thompson, 1999). This suggests that emotional responses to music transcend cultural differences and instead draw upon universal psychological features of their listeners.

From the perspective of musical rhythm, this makes sense, especially when one considers the phenomenon of musical entrainment. Clayton et al. (2005) broadly define entrainment as “a phenomenon in which two or more independent rhythmic processes synchronize with each other.” When listening to music, for example, a walking person will unconsciously fall into step with the beat of the song, entraining the rhythm into their own physicality. Reviewing literature from the field of ethnomusicology, these researchers also found that this propensity for entrainment occurs across cultures, again suggesting that a universal psychological process is at play (Clayton et al., 2005). Some research suggests that this process actually helps music listeners focus their attention across domains, providing evidence for a possible means by which entrainment could lead to increased empathy (Escoffier et al., 2010). Participants in this study were presented with pictures of faces and houses and asked to indicate whether each picture was upright or inverted. In one condition, participants completed the task in silence; in another, a rhythm was played in the background and the images appeared on-beat; in the third, the images appeared off-beat with the rhythm. Participants responded significantly more quickly to pictures presented on-beat than to those presented off-beat or in silence. That is, the presence of a synchronous musical rhythm facilitated the focusing of attention on visual stimuli (Escoffier et al., 2010).

On an interpersonal level, entrainment is a key component of joint action theory, a psychological theory which attempts to explain how individuals are able to perform complex tasks in conjunction with other people even with incomplete or no communication between them (Knoblich et al., 2011). Knoblich et al. (2011) reviewed the literature in this field, finding that people tend to fall into synchronous patterns with one another even when they try not to, regardless of whether the task in question is dancing to music, walking together, or even just rocking in chairs side-by-side. They propose that this inclination toward interpersonal synchrony is also at play in empathetic responses between individuals.

Indeed, other research supports a connection between musical entrainment and prosocial behavior. De Bruyn and colleagues (2008) worked with a group of elementary school children to test the effect of social interaction on their response to music. They empirically quantified the impact of social interaction on the children’s dancing as they listened to music, measuring the intensity of their movement and the degree of their synchronization with the beat. The study had two conditions: individual, where the children were separated by screens; and social, where the children danced in a group of their peers. The researchers found that the social environment produced a quantifiable increase in both the intensity of the children’s movement and their level of beat-synchronization. Furthermore, they found an effect of the type of music played on the way the children embodied it; that is, the genre of music affected how the children danced in both the individual and social conditions (De Bruyn et al., 2008). More recent research has taken this a step farther, suggesting that physical embodiment of music—a phenomenon called “groove”—can actually increase the empathetic responses of those grooving to the music (Sevdalis & Raab, 2013). However, because these experiments were not specifically testing for this effect, more research is warranted before we can draw a firm connection between musical rhythm and empathy.

In testing this connection, research from other areas of cognitive science sheds some light on the feasibility of various methods. While the field of emotion cognition has gone back and forth in recent years on whether or not bodily arousal responses are differentiated enough to allow for direct measurement of emotion, recent research has given credence to supporters of this technique. One possible experiment, therefore, would be to measure participants’ emotional responses to rhythmic stimuli; thus, we could test whether musical rhythms are actually able to elicit similar affective responses across individuals, or whether individuals simply learn through social cues to report certain kinds of emotional states based on the type of rhythm played (Harrison et al., 2010). By seeing whether participants actually experience similar emotional responses or simply report doing so, we can gain further insight into the intersection of music and empathy.

Another possibility, though, is to test how musical rhythm impacts individuals’ ability to accurately identify the emotions that others are experiencing. To this end, researchers could utilize facial emotion recognition tasks, usually used in abnormal psychology to test patients’ abilities to empathize with and understand others. Such tasks present participants with several images of faces coded as one of several emotions and ask that they evaluate the emotion presented. Participants are then scored as to how closely their answers resemble those of the average person (Mueser et al., 1996). Such a test could be useful to measure the effect of mediating factors, such as the presence of a musical rhythm, on the ability of listeners to identify the emotions of others.
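To make this scoring idea concrete, here is a minimal sketch in Python. All data, face IDs, and function names are hypothetical illustrations of the general approach; the cited studies do not specify an implementation. Each participant's answers are compared against the modal (most common) response of the group, which stands in for "the average person":

```python
from collections import Counter

def modal_answers(all_responses):
    """Find the most common emotion label chosen for each face.

    all_responses: list of dicts mapping face_id -> emotion label,
    one dict per participant (hypothetical data format).
    """
    modal = {}
    for face_id in all_responses[0]:
        counts = Counter(r[face_id] for r in all_responses)
        modal[face_id] = counts.most_common(1)[0][0]
    return modal

def score(participant, modal):
    """Proportion of faces where this participant matches the modal label."""
    hits = sum(participant[f] == modal[f] for f in modal)
    return hits / len(modal)

# Toy example with three participants and two faces:
responses = [
    {"f1": "happy", "f2": "sad"},
    {"f1": "happy", "f2": "angry"},
    {"f1": "happy", "f2": "sad"},
]
modal = modal_answers(responses)   # {"f1": "happy", "f2": "sad"}
print(score(responses[1], modal))  # participant 2 matches on 1 of 2 faces
```

A real task would of course use validated, emotion-coded face stimuli rather than toy labels, but the scoring logic (agreement with the group norm) is the same.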

With this in mind, the question I seek to test is whether a rhythm can affect individuals’ interpretation of emotions expressed by other people. Specifically, if participants were shown images of faces coded as various emotions as they were played pieces of likewise-coded music, would they be able to more accurately (and reliably) interpret the expressions depicted in the faces?  Or, would their ability to do so be negatively impacted if they were played a piece of music that did not align emotionally with the facial expression shown? By exploring this question, we gain further insight into the psychological links between music, emotion, and empathy.
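As one concrete illustration of how such a design might be organized, the sketch below crosses each face emotion with each music emotion, labeling pairings as congruent or incongruent and shuffling them into a trial order. The emotion labels and the fully-crossed structure are my own assumptions for illustration, not a protocol taken from the literature:

```python
import itertools
import random

# Hypothetical emotion set for both face stimuli and music clips.
EMOTIONS = ["happy", "sad", "angry", "peaceful"]

def build_trials(seed=0):
    """Cross every face emotion with every music emotion, label each
    pairing congruent/incongruent, and shuffle into a trial order."""
    rng = random.Random(seed)  # fixed seed so orders are reproducible
    trials = []
    for face, music in itertools.product(EMOTIONS, EMOTIONS):
        trials.append({
            "face": face,
            "music": music,
            "condition": "congruent" if face == music else "incongruent",
        })
    rng.shuffle(trials)
    return trials

trials = build_trials()
# 16 trials total: 4 congruent pairings and 12 incongruent ones
```

Comparing recognition accuracy on the congruent versus incongruent trials (perhaps with silent trials as a baseline) would then directly address the question posed above.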

 

Works Cited

Balkwill, L., and Thompson, W.F. (1999).  A Cross-Cultural Investigation of the Perception of Emotion in Music: Psychophysical and Cultural Cues.  Music Perception: An Interdisciplinary Journal, 17(1), pp. 43-64.

Clayton, M., Sager, R., and Will, U. (2005). In time with the music: the concept of entrainment and its significance for ethnomusicology. European Meetings in Ethnomusicology, 11, pp. 3–142.

De Bruyn, L., Leman, M., Moelants, D. (2008).  Quantifying Children’s Embodiment of Musical Rhythm in Individual and Group Settings.  Miyazaki, K., Hiraga, Y., Adachi, M., Nakajima, Y., and Tsuzaki, M. (Eds.). Proceedings from ICMPC10: The 10th International Conference on Music Perception and Cognition. Sapporo, Japan.

Escoffier, N., Sheng, D. Y. J., and Schirmer, A. (2010).  Unattended musical beats enhance visual processing.  Acta Psychologica, 135, pp. 12-16.

Harrison, N. A., Gray, M.A., Gianaros, P.J., and Critchley, H.D. (2010).  The Embodiment of Emotional Feelings in the Brain.  The Journal of Neuroscience, 30(38), pp. 12878-12884.

Juslin, P.N., & Västfjäll, D. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioral and Brain Sciences, 31(5), pp. 559-621.

Knoblich, G., Butterfill, S., and Sebanz, N. (2011). Psychological Research on Joint Action: Theory and Data. In B. Ross (Ed.), The Psychology of Learning and Motivation (Vol. 54, pp. 59-101).  Burlington, MA: Academic Press.

Mueser, K. T., Doonan, R., Penn, D.L., Blanchard, J.J., Bellack, A.S., Nishith, P., and DeLeon, J. (1996).  Emotion Recognition and Social Competence in Chronic Schizophrenia.  Journal of Abnormal Psychology, 105(2), pp. 271-275.

Perlovsky, L. (2010). Musical emotions: Functions, origins, evolution. Physics of Life Reviews, 7(1), pp. 2-27.

Sevdalis, V., & Raab, M. (2013). Empathy in sports, exercise, and the performing arts. Psychology of Sports and Exercise. doi: 10.1016/j.psychsport.2013.10.013.

All Things Groove: A Brief Literature Review

Ryan Davis

 

It is practically indisputable that listening to music is a multidimensional experience. Engaging with music as a listener is a particularly complex phenomenon, and it is especially curious that we, as human beings, often feel a compulsion to move along to the music we hear. We seem to receive some sort of satisfaction from this aural-to-physical connection, and this engagement perhaps heightens our overall musical and personal experience. The concept of groove, which in recent years has become a widely researched topic, offers a means to learn more about the musical properties that invite a listener into a more profound relationship with the music. The word groove likely sparks imagery of a comfort zone, a consistent momentum of sorts, or even a positive state of mind. In a musical context, groove is a rather broad term, and it raises many questions when one searches for its precise meaning. What in fact makes music groovy? Can groove exist in any type of music, or is it restricted to a select few genres? Is groove simply a matter of musical taste, highly personal and subjective, or can it be universally identified, regardless of who is listening? Many researchers have set out to answer these questions and more, using a diverse range of approaches. This literature review first addresses the definitions and functions that have led researchers to a greater understanding of groove, and then turns to the implications of groove in different musical genres, in hopes of discovering new insights. One might question the importance or merit of delving into the meaning of groove; after all, could our understanding of groove best be left explained as a simple path to enjoyment for the listener? Perhaps an even greater enjoyment and appreciation of music begins as we investigate the deeper components of groove and its impact.

Charles Keil, in “Defining Groove” (2010), explains how the word has yet to acquire a singular concrete definition, and is still not interchangeable with words akin to “swing, flow, focus, grace, in the pocket…etc.” Keil’s approach to defining groove is an engaging one, in that he seems to seek his definition through somewhat casual observations, drawn from a remarkable wealth of personal musical experience, as opposed to rigorous scientific testing. He speaks of a groove being created via “participatory discrepancies,” which he defines as “measurable differences or discrepancies in attack points and release points along a time continuum.” Is groove, then, perhaps achieved by musicians consciously or unconsciously creating subtle rhythmic imprecision, or by taking small rhythmic liberties? Using a rhythm section as an example, Keil explains further: “the drummer and bassist are consistently in synchrony with each other, but they are also consistently discrepant, different, slightly out of phase or in and out of phase with each other.” While it is not possible to know whether all performing musicians are aware of this rhythmic push-and-pull as it occurs in real time, it is helpful to acknowledge that some degree of rhythmic flexibility is present. Another notable definition comes from Olivier Senn and Lorenz Kilchenmann in “The Secret Ingredient” (2012): “its principal meaning describes the music’s effect on musicians and listeners: music with a good groove incites people to engage emotionally with the music, and to participate with their bodies.” This reinforces the notion that listening to music is indeed a complex activity, with many linked human systems reacting simultaneously. Maria A. G. Witek provides another possibility: “due to the repetitiveness of the groove, it was hypothesized that microtiming in groove might facilitate a type of arousal that is not peak-based, but rather reflects the groove state of listening, which has been conceptualized as a steady mental state in synchronization with the music” (Groove Experience, 2009). Focusing primarily upon the repetitive nature of music, Richard Middleton, in “In the Groove or Blowing Your Mind?” (1986), concludes that “the production of musical syntaxes involves active choice, conflict, redefinition; at the same time, their understanding and enjoyment take place in the theatre of self-definition, as part of the general struggle among listeners for control of meaning and pleasure.” It becomes increasingly clear that a universal definition of groove is difficult to formulate; however, one can safely say that it would encompass rhythm, pleasure, microtiming, and physical movement.

In “Groove as Familiarity with Time” (2013), Rowan Oliver explores in depth many fascinating aspects of groove that emphasize its presence across musical genres. Since each genre of music is defined by its own set of sonic characteristics, it is significant that a musician can manipulate sound in ways that make groove malleable between styles. Oliver explains that a musician can use timing as an expressive force, as long as he or she is aware of the “contextual senses of time”, which he divides into three useful categories: “1) In the first category, the contextual sense of time is contingent upon shared prior knowledge on the part of all musicking participants, as in the reggae ‘one-drop’ rhythm, for example. 2) In the second category, a musician ‘sets up’ the contextual sense of time in some way prior to the start of a performance proper for the benefit of the other musicking participants, as in styles based around a stated timeline pattern. 3) The third category relies on a shared sense of metronomic time, although in practice this tends to be more of a general feeling of an underlying isochronous pulse rather than a precisely ‘metronomic’ understanding.” Oliver’s distinctions can perhaps be applied to detect groove in a variety of types of music. Stereotypically, groovy music is linked to swing, jazz, and funk, although these principles can certainly extend to other genres. The way a conductor gives a preparatory beat gesture to an orchestra no doubt influences the sonic outcome that follows, and it is possible that such gestures affect the groove of a performance. Mark Jonathan Butler’s “Unlocking the Groove” (2006) investigates the realm of electronic dance music (EDM), which in the last decade has exploded in popularity and is highly relevant to discussions of groove. Many EDM tracks have a driving, quasi-hypnotic repetitive structure, and undoubtedly cause listeners to move along to the music.

Petr Janata, Stefan T. Tomic, and Jason M. Haberman present another refreshing approach to studying groove. In “Sensorimotor Coupling in Music and the Psychology of the Groove” (2012), the authors aimed to define groove as a psychological construct, using surveys of university students to see whether participants identified similar constructs that could be attributed to groove. “In closing, we consider the construct of the groove in relation to the evolution of entrainment and social behavior. Synchronizing with the beat is the simplest form of entrainment, not only with a musical stimulus, but also with other individuals.” The survey’s musical examples were drawn from a diverse selection, reinforcing that groove is a flexible and far-reaching concept.

As shown, research into the somewhat elusive concept of groove is growing, and we are beginning to identify salient characteristics that help describe the mystifying experience of listening to music. The dialogue developing our understanding of groove is ongoing, and perhaps Charles Keil said it best: “that commitment to keeping up your musical life and keeping your participatory mode going is what keeps us on the same wavelength, keeps us in the same groove” (Music Grooves, Charles Keil & Steven Feld, 1994).

 

Works Consulted/Cited

Butler, Mark Jonathan. (2006). Unlocking the Groove: Rhythm, Meter and Musical Design in Electronic Dance Music. Bloomington, USA: Indiana University Press.

Janata, Petr., Tomic, Stefan T., & Haberman, Jason M. (2012). Sensorimotor Coupling in Music and the Psychology of the Groove. Journal of Experimental Psychology: General, 141(1), 54-75.

Keil, Charles. (2010). Defining Groove. PopScriptum 11: The Groove Issue. Humboldt University of Berlin.

Keil, Charles., Feld, Steven. (1994). Music Grooves: Essays and Dialogues. Chicago, USA: University of Chicago Press.

Madison, Guy. (2006). Experiencing Groove Induced by Music: Consistency and Phenomenology. Music Perception, 24(2), 201-208.

Middleton, Richard. (1986). In the Groove or Blowing Your Mind? The Pleasures of Music Repetition. Popular Culture and Social Relations. 159-176.

Oliver, Rowan. (2013). Groove as Familiarity with Time. Music and Familiarity: Listening, Musicology and Performance. 239-252.

Senn, Olivier., Kilchenmann, Lorenz. (2012). The Secret Ingredient: State of Affairs and Future Directions in Groove Studies. Musik-Raum-Akkord-Bild: Festschrift zum 65. 799-810.

Witek, Maria A. G. (2009). Groove Experience: Emotional and Physiological Responses to Groove-Based Music. European Society for the Cognitive Sciences of Music, 573-582.

Witek, Maria A.G., Clarke, Eric F., Wallentin, Mikkel., Kringelbach, Morten L., Vuust, Peter. (2014). Syncopation, Body-Movement and Pleasure in Groove Music. PLoS ONE, 9(4), 1-12.

Zagorski-Thomas, Simon. (2007). The Study of Groove. Ethnomusicology Forum, 16(2), 327-335.

Zbikowski, Lawrence M. (2004). Modelling the Groove: Conceptual Structure and Popular Music. Journal of the Royal Musical Association, 129(2), 272-297.