Toward a model for joint synchronization to musical rhythm

Proc Natl Acad Sci USA 2014 Aug 11. pii: 201324142
Synchronization in human musical rhythms and mutually interacting complex systems

Hennig H
Department of Physics, Harvard University, Cambridge, MA 02138, USA holgerh@nld.ds.mpg.de

Though the music produced by an ensemble is influenced by multiple factors, including musical genre, musician skill, and individual interpretation, rhythmic synchronization is at the foundation of musical interaction. Here, we study the statistical nature of the mutual interaction between two humans synchronizing rhythms. We find that the interbeat intervals of both laypeople and professional musicians exhibit scale-free (power law) cross-correlations. Surprisingly, the next beat to be played by one person is dependent on the entire history of the other person’s interbeat intervals on timescales up to several minutes. To understand this finding, we propose a general stochastic model for mutually interacting complex systems, which suggests a physiologically motivated explanation for the occurrence of scale-free cross-correlations. We show that the observed long-term memory phenomenon in rhythmic synchronization can be imitated by fractal coupling of separately recorded or synthesized audio tracks and thus applied in electronic music. Though this study provides an understanding of fundamental characteristics of timing and synchronization at the interbrain level, the mutually interacting complex systems model may also be applied to study the dynamics of other complex systems where scale-free cross-correlations have been observed, including econophysics, physiological time series, and collective behavior of animal flocks.
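
A striking claim here is that one player's next beat depends on the other's entire interval history. To make that concrete, here is a minimal toy simulation (Python/NumPy) — a sketch under stated assumptions, not the authors' published model: two interbeat-interval series with intrinsic 1/f (scale-free) fluctuations are mutually coupled, and their cross-correlation is examined at increasing lags. All parameter choices (series length, coupling strength, noise exponent) are illustrative.

```python
# Toy sketch, NOT the paper's model: two mutually coupled interbeat-interval
# (IBI) series, each driven by its own 1/f noise.
import numpy as np

rng = np.random.default_rng(0)

def one_over_f_noise(n, beta=1.0):
    """Gaussian 1/f^beta noise via spectral synthesis with random phases."""
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]  # avoid division by zero at the DC bin
    spectrum = freqs ** (-beta / 2.0) * np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()

n, alpha = 4096, 0.25  # number of beats, coupling strength (illustrative)
eta_a, eta_b = one_over_f_noise(n), one_over_f_noise(n)
ibi_a, ibi_b = np.zeros(n), np.zeros(n)  # IBI deviations from the mean tempo
for t in range(1, n):
    # Each player's next interval: previous interval, corrected toward the
    # partner's previous interval, plus intrinsic 1/f fluctuations.
    ibi_a[t] = ibi_a[t - 1] - alpha * (ibi_a[t - 1] - ibi_b[t - 1]) + eta_a[t]
    ibi_b[t] = ibi_b[t - 1] - alpha * (ibi_b[t - 1] - ibi_a[t - 1]) + eta_b[t]

# Lagged cross-correlation between the standardized series. Values that stay
# far from zero even at long lags loosely mimic the long-range
# interdependence the paper reports.
a = (ibi_a - ibi_a.mean()) / ibi_a.std()
b = (ibi_b - ibi_b.mean()) / ibi_b.std()
for lag in (1, 4, 16, 64, 256):
    print(f"lag {lag:4d}: {np.mean(a[:-lag] * b[lag:]):+.3f}")
```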

Cross-domain cognition and the effect of musical expertise

Front Psychol 2014 Jul 28;5:789
Musicians are more consistent: gestural cross-modal mappings of pitch, loudness and tempo in real-time

Küssner MB, Tidhar D, Prior HM, Leech-Wilkinson D
Department of Music, King’s College London, London, UK

Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded classification and forced-choice matching tasks, investigations of representations of concurrently varied sound features (e.g., pitch, loudness and tempo) with overt gestures, accounting for the intrinsic link between movement and sound, are scant. To explore the role of bodily gestures in cross-modal mappings of auditory stimuli, we asked 64 musically trained and untrained participants to represent pure tones (continually sounding and concurrently varied in pitch, loudness and tempo) with gestures while the sound stimuli were played. We hypothesized that musical training would lead to more consistent mappings between pitch and height, loudness and distance/height, and tempo and speed of hand movement and muscular energy. Our results corroborate previously reported pitch vs. height (higher pitch leading to higher elevation in space) and tempo vs. speed (increasing tempo leading to increasing speed of hand movement) associations, but also reveal novel findings pertaining to musical training, which influenced the consistency of pitch mappings, annulling a commonly observed bias for convex (i.e., rising-falling) pitch contours. Moreover, we reveal effects of interactions between musical parameters on cross-modal mappings (e.g., of pitch and loudness on speed of hand movement), highlighting the importance of studying auditory stimuli concurrently varied in different musical parameters. Results are discussed in light of cross-modal cognition, with particular emphasis on studies within (embodied) music cognition. Implications for theoretical refinements and potential clinical applications are provided.

Creative Classroom: The ticks ‘come marching in’ in singing professor’s microbiology class (YALE NEWS)

An example of joint action here at Yale!

“I like to sing, whether anybody sings along or not,” says the Yale microbiologist. “But I especially like having the students join in. There’s something about singing together that is ancient and wonderful and magical. It builds community.”

Read the story here.

International Conference on the Multimodal Experience of Music

Call for papers: International Conference on the Multimodal Experience of Music
ICMEM, 23-25 March 2015
In live and virtual situations, music listening and performing are multimodal experiences: Sounds may be experienced through touch, music evokes visual images or is accompanied by visual presentations, and both generate vivid cross-modal associations in terms of force, size, physical location, fluency and regularity, among others.
ICMEM aims to bring together researchers from various disciplines who investigate the multimodality of musical experiences from different perspectives. Disciplines may include among others audiology, cognition, computer science, ethnomusicology, music performance and theory, neuroscience, philosophy, and psychology.
Proposals are invited for papers, symposia, demonstrations and posters.
Investigations may include, but are not necessarily confined to, the following areas:
* Multimodal experiences of music in everyday life
* Cross-modal correspondences with musical parameters
* Influences of visual context on music perception
* Emotion and cross-modality
* Tactile, visual, and kinesthetic feedback in music performance
* Multi-modal interaction in multimedia, including film and games
* Uses of cross-modality by hearing-impaired or visually impaired music listeners
* Strong and weak synaesthesia
* Motion and movement perception in music
* Relations between motion and emotion in music listening
* Brain-structures related to cross-modal associations with sounds
* Technological and commercial applications of cross-modal associations
* Creative and pedagogical uses of cross-modality in music
Invited speakers: Profs. Amir Amedi, Eric Clarke, Nicholas Cook, Charles Spence, and Peter Walker
Dates: 23-25 March 2015
Location: Humanities Research Institute, University of Sheffield, UK
Host: Music, Mind, Machine in Sheffield, Department of Music, University of Sheffield, UK
Submission deadline: 6 October 2014 by e-mail to ICMEM@sheffield.ac.uk
This conference is supported by ESCOM and SEMPRE, who offer bursaries to student attendees, and by the British Academy.

The musicality of non-musicians: an index for assessing musical sophistication in the general population

PLoS One 2014 Feb 26;9(2):e89642
The musicality of non-musicians: an index for assessing musical sophistication in the general population

Müllensiefen D1, Gingras B2, Musil J1, Stewart L1
1 Department of Psychology, Goldsmiths, University of London, London, UK; 2 Department of Cognitive Biology, University of Vienna, Vienna, Austria

Musical skills and expertise vary greatly in Western societies. Individuals can differ in their repertoire of musical behaviours as well as in the level of skill they display for any single musical behaviour. The types of musical behaviours we refer to here are broad, ranging from performance on an instrument and listening expertise, to the ability to employ music in functional settings or to communicate about music. In this paper, we first describe the concept of ‘musical sophistication’ which can be used to describe the multi-faceted nature of musical expertise. Next, we develop a novel measurement instrument, the Goldsmiths Musical Sophistication Index (Gold-MSI) to assess self-reported musical skills and behaviours on multiple dimensions in the general population using a large Internet sample (n = 147,636). Thirdly, we report results from several lab studies, demonstrating that the Gold-MSI possesses good psychometric properties, and that self-reported musical sophistication is associated with performance on two listening tasks. Finally, we identify occupation, occupational status, age, gender, and wealth as the main socio-demographic factors associated with musical sophistication.
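
As an aside for readers new to test construction, "good psychometric properties" includes, among other things, internal consistency. The sketch below (Python; simulated data and hypothetical item parameters, not the authors' analysis or the Gold-MSI items) computes Cronbach's alpha, one standard consistency statistic, for a toy Likert-scale questionnaire.

```python
# Hypothetical sketch with simulated data: Cronbach's alpha for a toy
# questionnaire in which all items tap one underlying trait plus noise.
import numpy as np

rng = np.random.default_rng(1)

n_respondents, n_items = 500, 7
trait = rng.normal(0.0, 1.0, (n_respondents, 1))
noise = rng.normal(0.0, 1.0, (n_respondents, n_items))
responses = np.clip(np.round(4 + 1.2 * trait + noise), 1, 7)  # 1-7 Likert scale

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Values of roughly 0.7 or above are conventionally taken as acceptable.
print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")
```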

Mood Segmenter

Music is one of the best ways to convey and share emotions. The scientific community is developing systems for automatically classifying songs with emotion-related tags or descriptors. However, several emotions can be perceived within the same song. For example, a song may start with a slow, sad intro, continue with a melancholic verse, and build through an excited crescendo to an aggressive refrain.

To participate in a study on identifying emotionally uniform segments in a song, that is, on dividing songs into segments with the same perceived emotion, visit:

http://home.deib.polimi.it/buccoli/segm/index.php
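
For intuition only: once each short window of a song carries an emotion label (however those labels are obtained), finding emotionally uniform segments reduces to merging maximal runs of identical consecutive labels. A minimal sketch (Python; not the study's method, with a hypothetical 5-second window and made-up labels):

```python
# Collapse per-window emotion labels into emotionally uniform segments by
# grouping consecutive runs of the same label.
from itertools import groupby

WINDOW_SEC = 5  # hypothetical analysis-window length
labels = ["sad", "sad", "melancholic", "melancholic", "melancholic",
          "excited", "excited", "aggressive", "aggressive", "aggressive"]

start = 0
for label, run in groupby(labels):
    length = sum(1 for _ in run)
    print(f"{start * WINDOW_SEC:3d}s-{(start + length) * WINDOW_SEC:3d}s: {label}")
    start += length
```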

Music listening in the subway

“A man sat at a metro station in Washington DC and started to play the violin; it was a cold January morning. He played six Bach pieces for about 45 minutes. During that time, since it was rush hour, it was calculated that 1,100 people went through the station, most of them on their way to work.

[Image: Bell's subway experiment]
Three minutes went by, and a middle-aged man noticed there was a musician playing. He slowed his pace and stopped for a few seconds, then hurried on to meet his schedule.

A minute later, the violinist received his first dollar tip: a woman threw the money in the till and, without stopping, continued to walk.

A few minutes later, someone leaned against the wall to listen to him, but the man looked at his watch and started to walk again. Clearly he was late for work.

The one who paid the most attention was a 3-year-old boy. His mother tugged him along, hurried, but the kid stopped to look at the violinist. Finally, the mother pushed hard, and the child continued to walk, turning his head the whole time. This action was repeated by several other children. All the parents, without exception, forced them to move on.

In the 45 minutes the musician played, only 6 people stopped and stayed for a while. About 20 gave him money but continued to walk at their normal pace. He collected $32. When he finished playing and silence took over, no one noticed. No one applauded, nor was there any recognition.

No one knew this, but the violinist was Joshua Bell, one of the most talented musicians in the world. He had just played one of the most intricate pieces ever written, on a violin worth $3.5 million.

Two days before his playing in the subway, Joshua Bell sold out at a theater in Boston where the seats averaged $100.

This is a real story. Joshua Bell playing incognito in the metro station was organized by the Washington Post as part of a social experiment about people's perception, taste, and priorities. The questions were: in a commonplace environment, at an inappropriate hour, do we perceive beauty? Do we stop to appreciate it? Do we recognize talent in an unexpected context?

One of the possible conclusions from this experiment could be:

If we do not have a moment to stop and listen to one of the best musicians in the world playing the best music ever written, how many other things are we missing?”

Another way to look at music and language…

Front Psychol 2013 Dec 6;4:855
High school music classes enhance the neural processing of speech

Tierney A1,2, Krizman J1,2,3, Skoe E1,2, Johnston K4, Kraus N1,2,4,5,6,7
1Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, USA; 2 Dept. of Communication Sciences, Northwestern University, Evanston, IL, USA; 3 Bilingualism and Psycholinguistics Research Group, Northwestern University, Evanston, IL, USA; 4 Walter Payton College Preparatory High School, Chicago, IL, USA; 5 Institute for Neuroscience, Northwestern University, Evanston, IL, USA; 6 Dept. of Neurobiology and Physiology, Northwestern University, Evanston, IL, USA; 7 Dept. of Otolaryngology, Northwestern University, Evanston, IL, USA

Should music be a priority in public education? One argument for teaching music in school is that private music instruction relates to enhanced language abilities and neural function. However, the directionality of this relationship is unclear and it is unknown whether school-based music training can produce these enhancements. Here we show that 2 years of group music classes in high school enhance the neural encoding of speech. To tease apart the relationships between music and neural function, we tested high school students participating in either music or fitness-based training. These groups were matched at the onset of training on neural timing, reading ability, and IQ. Auditory brainstem responses were collected to a synthesized speech sound presented in background noise. After 2 years of training, the neural responses of the music training group were earlier than at pre-training, while the neural timing of students in the fitness training group was unchanged. These results represent the strongest evidence to date that in-school music education can cause enhanced speech encoding. The neural benefits of musical training are, therefore, not limited to expensive private instruction early in childhood but can be elicited by cost-effective group instruction during adolescence.

Triple Special Issue of Empirical Musicology Review: Music & Shape

The Triple Special Issue on ‘Music and Shape’ in Empirical Musicology Review consists of 9 target articles and 17 commentaries, which have been developed in response to a conference held in London in July 2012 on ‘Music and Shape’.

You are invited to explore the following three broad themes:

Pedagogy and Performance (http://libeas01.it.ohio-state.edu/ojs/index.php/EMR/issue/view/109): articles on the relationship between the shape of gestures and sonic events in vocal lessons of South Indian Karnatak music; the use of musical shaping gestures in rehearsal talk by performers with different levels of hearing impairment; and what it means for professional DJs to shape a set on their turntables.

Motion Shapes (http://libeas01.it.ohio-state.edu/ojs/index.php/EMR/issue/view/110): articles discussing how motiongrams can be used to sonify the shape of human body motion; how pianists’ shapes of motion patterns embody musical structure; and how mathematical techniques can be used to quantify shapes of real-time visualizations of sound and music.

Perception and Theory (http://libeas01.it.ohio-state.edu/ojs/index.php/EMR/issue/view/111): articles on cross-cultural representations of musical shapes from the UK, Japan and Papua New Guinea; the evolutionary origins of tonality as a system for the dynamic shaping of affect; and how shaping and co-shaping of ‘forms of vitality’ in music gives rise to aesthetic experience.

New Research in Music Therapy: Disorders of Consciousness

You might be interested in this freely accessible article on some of the very latest research into Music Therapy with people with disorders of consciousness: Neurophysiological and behavioural responses to music therapy in vegetative and minimally conscious states, by Julian O’Kelly, L. James, R. Palaniappan, J. Fachner, J. Taborin, W.L. Magee, published in Frontiers in Human Neuroscience.

To view and download the online publication, please click here:

http://www.frontiersin.org/Journal/Abstract.aspx?s=537&name=human_neuroscience&ART_DOI=10.3389/fnhum.2013.00884

The paper has relevance to other conditions where complex disability or illness impacts upon our ability to discern behavioural responses to music. It is one of the first to be published in a series of papers sharing the research topic “Music, Brain, and Rehabilitation: Emerging Therapeutic Applications and Potential Neural Mechanisms”.

This article is an open access publication, so it’s freely accessible to any reader anywhere in the world.