This project proposes to examine the differences between learning rhythms aurally and from notation. Does learning a rhythmic pattern aurally allow people to pick it up more quickly than if it is presented solely through notation? What types of rhythm lend themselves to each mode of learning (i.e., syncopated, polymetric, etc.)? I define learning a rhythm aurally to entail the call-and-response method, in which a teacher (human or machine) plays a rhythm and the subject/student plays it back, whereas learning through notation entails using traditional western notation, a method more often used independently by musicians, which designates note values that divide a measure into fractional time periods to represent a pattern. I am also particularly interested in the immediate recognition of a rhythmic pattern one gets from an aural representation, versus the longer, abstracted process of translating notation into rhythm and then internalizing what the rhythm sounds like. This interest reflects a bit of my personal bias, though, as I am not a particularly fluent sight reader and so do not “see” the patterns immediately as expert musicians might.
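To make the machine side of call-and-response concrete, a rhythm can be represented as a list of onset times in beats, and a playback judged against the target within a timing tolerance. The following is a minimal illustrative sketch, not part of the proposal; all names and the tolerance value are assumptions:

```python
# Sketch of machine call-and-response rhythm checking (hypothetical names).
# A rhythm is a list of onset times in beats within one measure; the same
# onsets could equally be derived from notated note values as fractions
# of the measure.

TOLERANCE = 0.08  # beats; assumed leeway for an onset to still count

def matches_target(target, response, tolerance=TOLERANCE):
    """Return True if each response onset falls within `tolerance`
    beats of the corresponding target onset."""
    if len(target) != len(response):
        return False
    return all(abs(t - r) <= tolerance for t, r in zip(target, response))

# A syncopated pattern in 4/4: onsets on beat 1, the "and" of 2, and beat 4.
syncopated = [0.0, 1.5, 3.0]
played = [0.02, 1.46, 3.05]  # a slightly imprecise playback
print(matches_target(syncopated, played))  # True: all onsets within tolerance
```

A real system would also need onset detection from audio and a policy for tempo drift; this sketch only shows the comparison step.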
These questions of pedagogy are particularly important because rhythm is an often neglected side of sight reading, and one with which many musicians struggle. In one cognition study, nearly two thirds of all errors musicians made on the Watkins-Farnum Performance Scale (WFPS, a test commonly used to measure sight reading ability) were mistakes in rhythm (McPherson 1994). This strengthens the case for further research into the pedagogy of sight reading rhythms and possible crossover effects on pitch reading. An improvement in rhythm learning ability, whether through sight reading or learning aurally, would dramatically aid musicianship and the efficiency of learning new music.
There have been several studies on factors that impact sight reading ability and musical aptitude, though they often focus on pitch more than rhythm. They tend to fall into two categories: those which examine correlations between learning style/strategy and achievement, and those which look at other cognitive abilities, like spatial visualization, reading level, sequential processing, etc. The former have found interesting links which shed light on how different people may learn music efficiently through different means. Green (2010) distinguishes between learning styles and strategies, defining style as something innate to each musician and strategies as ways musicians tackle problems. She uses this as a basis for a study of learning by ear, and describes four observed learning styles: the impulsive style, the shot-in-the-dark style, the practical style, and the theoretical style. Though more qualitative, her study shows the relationship between style and success in the difficult task of picking out a piece of music by ear. The students most comfortable with the task were those with a practical style, who began by playing their instrument (rather than just listening) and quickly began breaking down the task. A similar, albeit older, study found related interactions between learning style and melodic and rhythmic performance in third through sixth graders (Zikmund and Nierman 1992). These findings can be extended to hypothesize which characteristics of each learning style are more likely to correspond to a preference for notation versus aural learning. On the flip side, Levy’s dissertation (2001) tested sight readers’ miscues (what caused their mistakes) and defined learning styles based on her analysis. She organized them by the cues on which they relied most heavily: visual versus aural patterns and figural versus metric patterns. Proficient sight readers most often used aural and metric patterns, though the best relied on balancing all four cue types.
In a similar vein, several studies have tested the effects of other cognitive abilities on music performance or sight reading proficiency. Wright & Ashman (1991) produced a preliminary study in this field, in which they tried to isolate aspects of pitch, rhythm (meter), and dynamics as they related to simultaneous versus sequential processing skills, finding that both are needed for meter and rhythmic processing. The researchers also examined the relationship between notation use and information processing ability, finding a correlation between aural meter recognition and rhythmic notation skills. More recently, Hayward & Gromko (2009) found that aural discrimination, spatial-temporal reasoning, and technical proficiency all correlated positively with high sight-reading scores on the WFPS. These are further factors which vary from person to person and could bias a subject towards one of the two modes of learning, complicating the measurement of each mode’s benefits and drawbacks.
Additionally, some have studied the effect of movement on learning rhythms. In particular, Auerbach, Aarden, and Bostock’s chapter in Pop-Culture Pedagogy in the Music Classroom discusses the use of the popular video game Dance Dance Revolution as an instructional tool for improving rhythm ability for sight reading (2010). Though the authors conclude that the game has only marginal benefits, it was greeted with enthusiasm by the class and shows promise in helping students feel more comfortable moving past mistakes they make while sight reading.
Rhythm and movement are also linked in non-western music and dance, where signals other than a regular meter and conductor keep large ensembles together. Malm (1972) gives an overview of ways musicians communicate rhythm and meter without common notation, including synchronized breathing, vocal calls, and mnemonic syllables. He points out that western notation is frustratingly clumsy for representing non-western music, which often features complicated polyrhythms and phrases held over bar lines. Traditionally, such musics are taught aurally, but another option, using western notation only as a reference and record, holds promise. Matsumura, Yamamoto, & Fujinami (2011) studied samba learning, examining students’ improvements through motion capture. These studies hint at connections between aural learning and movement and provide support for future uses of movement in learning rhythms.
Finally, there have been a few studies focusing on the effects of notation-based versus aural learning on the processing and retention of music. They focus mostly on melody, but their research paradigms apply to rhythmic studies as well. Watson (2010) studied the difference between learning jazz improvisation skills through aural versus notated instruction. He tested subjects pre- and post-instruction with a panel of judges (to test achievement) and a self-efficacy scale test (to examine how much more confident subjects felt). Both achievement and self-efficacy improved with training of either type, but no significant difference was found between the conditions of aural versus notated instruction. Buonviri (2010) studied whether showing a melody’s notation while it was heard helped subjects detect errors in that target melody after an intervening distraction melody was played. He found no positive effect, and many of his subjects found it disheartening or distracting to try to follow the notation before it disappeared and the distraction melody started. Shehan (1987) examined rhythm learning specifically, both with and without notation, though an aural stimulus was played for each condition. She played or spoke (in mnemonic syllables) a short rhythmic pattern and collected data on the number of repetitions it took children to correctly repeat the rhythm, either with or without an added visual notation reference. She found that students learned the rhythms faster with the added notation. She postulates, perhaps without proper support, that this visual reference helps because Americans (the subjects of her study) have a visual culture, to which kids are exposed from an early age. The mnemonics also helped children retain rhythms, to a lesser extent. The results of these few studies on notation and aural learning have been null or contradictory, but more research is needed, particularly into rhythm and meter.
Shehan’s study is the closest to the question I pose. However, her experimental design had no condition where the subjects only saw notation, without aural stimulus (her subjects were second through sixth graders without any musical training, though, so it would have been impractical to assume sight reading ability). Her paradigm of counting the number of repetitions before a student exhibits mastery of the phrase is a good metric, as it keeps the atmosphere positive and gives a clean data point for each trial. I propose a similar paradigm, but in keeping with Watson’s design, which entirely separated groups learning jazz improvisation aurally from those learning with notation, I will attempt to avoid any overlap between aural- and notation-learning conditions. My study’s purpose is to concretize the differences in efficacy of learning rhythms through notation or aurally; it will draw on the paradigms presented above to shape its experimental design and on other studies’ conclusions to inform potential hypotheses.
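The repetitions-to-mastery metric yields one number per subject per trial, which makes the bookkeeping simple. A minimal sketch of how such data might be tabulated and summarized per condition (the structure and names are my own illustration, not a finalized protocol):

```python
# Sketch of per-trial bookkeeping for a repetitions-to-mastery design
# (hypothetical structure; not a finalized experimental protocol).
from statistics import mean

def record_trial(results, subject, condition, repetitions):
    """Append one trial: how many repetitions a subject needed before
    correctly reproducing the rhythm under the given condition."""
    results.setdefault(condition, []).append((subject, repetitions))

def mean_repetitions(results, condition):
    """Average repetitions-to-mastery for one learning condition."""
    return mean(reps for _, reps in results[condition])

results = {}
record_trial(results, "S01", "aural", 3)
record_trial(results, "S02", "aural", 5)
record_trial(results, "S01", "notation", 6)
print(mean_repetitions(results, "aural"))  # 4
```

A real analysis would compare the two conditions with an appropriate statistical test rather than raw means; this only illustrates the shape of the data the paradigm produces.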
Auerbach, B., Aarden, B., & Bostock, M. (2010). DDR at the crossroads: A report on a pilot study to integrate music video-game technology into the aural-skills classroom. In N. Biamonte (Ed.), Pop-culture pedagogy in the music classroom: Teaching tools from American Idol to YouTube (pp. 147-172). Lanham, MD: Scarecrow Press.
Buonviri, N. O. (2010). Effects of visual presentation on aural memory for melodies (Doctoral dissertation). Available from ProQuest Dissertations & Theses database. (UMI No. 340869)
Green, L. (2010). Musical “learning styles” and “learning strategies” in the instrumental lesson: some emergent findings from a pilot study. Psychology of Music, 40, 42-65. doi: 10.1177/0305-73126.96.36.199510
Hayward, C. M., & Gromko, J. E. (2009). Relationships among music sight-reading and technical proficiency, spatial visualization, and aural discrimination. Journal of Research in Music Education, 57(1), 26-36. doi: 10.1177/0022429409332677
Levy, K. L. M. (2001). Music readers and notation: Investigation of an interactive model of rhythm reading (Doctoral dissertation). Available from ProQuest Dissertations & Theses database. (UMI No. 3034122)
Malm, W. P. (1972). Teaching rhythmic concepts in ethnic music. Music Educators Journal, 59(2), 95-99. Retrieved from http://www.jstor.org/stable/3394148
Matsumura, K., Yamamoto, T., & Fujinami, T. (2011). The role of body movement in learning to play the shaker to a samba rhythm: An exploratory study. Research Studies in Music Education, 33(1), 31-45. doi: 10.1177/1321103X11400513
McPherson, G. E. (1994). Factors and abilities influencing sightreading skill in music. Journal of Research in Music Education, 42(3), 217-231. doi: 10.2307/3345701
Shehan, P. K. (1987). Effects of rote versus note presentations on rhythm learning and retention. Journal of Research in Music Education, 35(2), 117-126. doi: 10.2307/3344987
Watson, K. E. (2010). The effects of aural versus notated instructional materials on achievement and self-efficacy in jazz improvisation. Journal of Research in Music Education, 58(3), 240-259. doi: 10.1177/0022-42188.8.131.52115
Wright, S. K., & Ashman, A. F. (1991). The relationship between meter recognition, rhythmic notation, and information processing competence. Australian Journal of Psychology, 43(3), 139-146. doi: 10.1080/00049539108260138
Zikmund, A. B., & Nierman, G. E. (1992). The effect of perceptual mode preferences and other selected variables on upper elementary school students’ responses to conservation-type rhythmic and melodic tasks. Psychology of Music, 20(1), 57-69. doi: 10.1177/0305735692201005