We investigate how the presence of performance microstructure (small variations in timing, intensity, and articulation) influences listeners’ perception of musical excerpts, by measuring the way in which listeners synchronize with the excerpts. Musicians and nonmusicians tapped on a drum in synchrony with six musical excerpts, each presented in three versions: mechanical (synthesized from the score, without microstructure), accented (mechanical, with intensity accents), and expressive (performed by a concert pianist, with all types of microstructure). Participants’ synchronizations with these excerpts were characterized in terms of three processes described in Mari Riess Jones’s Dynamic Attending Theory: attunement (ease of synchronization), use of a referent level (spontaneous synchronization rate), and focal attending (range of synchronization levels). As predicted by beat induction models, synchronization was better with the temporally regular mechanical and accented versions than with the expressive versions. However, synchronization with expressive versions occurred at higher (slower) levels, within a narrower range of synchronization levels, and corresponded more frequently to the theoretically correct metrical hierarchy. We conclude that performance microstructure transmits a particular metrical interpretation to the listener and enables the perceptual organization of events over longer time spans. Compared with nonmusicians, musicians synchronized more accurately (heightened attunement), tapped more slowly (slower referent level), and used a wider range of hierarchical levels when instructed (enhanced focal attending), more often corresponding to the theoretically correct metrical hierarchy. We conclude that musicians perceptually organize events over longer time spans and have a more complete hierarchical representation of the music than do nonmusicians.
This source compares how well people can synchronize with expressive versus mechanical excerpts, giving us prior work that has compared human-like performances against computer-like performances. Its results show that people were better at synchronizing with the mechanical excerpts, which is the opposite of our hypothesis. However, the study also showed that people synchronized with the expressive excerpts at higher (slower) levels, within a narrower range of levels, and in closer correspondence to the correct metrical hierarchy, which suggests that expressive, human-like performance may enhance certain aspects of synchrony that mechanical performances do not.