embodiment and (e)motion – assignment and leading questions

Hey folks,

For our discussion on “embodiment and (e)motion”, we’ll do one close reading and one more general discussion. I’d like you to prepare the following:

1. (CLOSE READING) Read Phillips-Silver & Trainor (2007). Some guidance: closely read their definition of meter; decide whether they are trying to interpret ‘autonomous’ systems; see if you can pin down what ‘auditory encoding’ is; debate whether their results (e.g., PDF p. 8) suggest correlation or causality; ask whether you agree with their pithy last line; and finally, consider whether the study design(s) might involve some analytical, learned, or conceptual mediation that would confound the putative direct connection between ‘movement’ and ‘listening’ – that is, might there not be some other process in between?

2. (GENERAL) Get a sense of the general tenor across the five articles (i.e., review the abstracts). Then read through Iyer (2002). What’s the point of this article? And please reflect on how important you think kinesthetic/physiological aspects are to your own theoretical work and thinking. We’ll try to talk through some perspectives.

In sum:

1. Read Phillips-Silver & Trainor (2007) closely.

2. Read Iyer (2002).

3. Read the five abstracts.

4. Post a response following the guidance in item 1.

– S P G

11 thoughts on “embodiment and (e)motion – assignment and leading questions”

  1. Since I gave discussion leaders the “power” to assign only one (full) reading, I would call for reading no. 2 to be optional.

  2. I have three issues with their definition of meter:

    1) How exactly is this “organization of beats” defined?
    2) Meter is presented as though it is not cognitively mediated; it is something to be discovered in the sound. For the authors, meter is “the organization of beats that allows…” not the inferred steady succession of rhythmic pulses.
    3) Their definition of meter contains no notion of hierarchy.

    I have one issue with the procedure:

    • When we move, we make sound, even though it might not be very loud. I’m wondering whether there was an unwanted auditory stimulus during the bending of the knee—an auditory signal influencing the perception of meter.

    I have one issue with the results, which relates to Stephen’s question about the “pithy last line”:

    • On page 543, the authors write “mere observation is not sufficient to strongly bias auditory encoding of the rhythm pattern.” (Aside: So is meter now rhythm pattern?) I’m not sure that the experimental results are consistent with this claim, though I admit that this depends on how one interprets the adverb “strongly”.

    If we accept the methods and results, what the authors have shown is that, in a *very limited* time frame, visual stimulus is insufficient to encode meter. Would this hold under longer periods of exposure to visual stimulus?

    • Andrew,

      Thanks for your post. You have given us several items to discuss further.

      Ahead of that, I’d like to offer what clarification I may (or can), in response to your penultimate paragraph. First, ‘auditory encoding’ is problematic terminology, especially as its inter-subjective definition appears to be assumed. Reading kindly, I assume they mean something like ‘metric induction’. But you’ll note that now I have just transposed the problem of definition, and additionally have not said whether such ‘encoding’ or ‘induction’ is conceptual-analytical or autonomous (a hobby-horse of mine this semester). Yet the authors give no clue in this regard either, other than the faint trace of active and deliberate processing implied by the verbal noun ‘encoding’. Second, ‘rhythm pattern’ seems to me less fraught, even if implicated (guilty by association) in the otherwise sloppy array of terminology: especially ‘meter’, ‘auditory encoding’, and ‘influence’. I believe they are just referring to their happy ‘tune’ of long-short-short-long (660-330-330-660 in ms), which, by the way, is perhaps less ‘ambiguous’ than the authors claim (something to think about). Provisionally, we could say that the word ‘pattern’ in ‘rhythm pattern’ is (emphatically?) redundant, and that a ‘rhythm’ is some pattern of attacks/elements of various durations, ‘various’ being one essential distinction from an undifferentiated pulse.

      – S P G

  3. To answer Stephen’s questions:
    (1) The authors’ definition of meter is hidden within their text but, upon excavation, fairly clear. To them, meter, at a given level (presumably the perceived ‘beat’ and grouping of it), is either duple or triple, defined by strong beats (to which one taps) and weak beats (to which one does not tap). We use our bodies to internalize this pattern (meter) and then express it physically. Thus, the authors certainly do not conceive of the systems as ‘autonomous,’ since some cross-modal interactions clearly affect subjects (movement and meter) while others do not (visual stimulation and meter). It seems that discussion of embodiment does not necessarily claim that systems intertwine, but this work would be seriously undermined if the authors are only contending that a listener can embody a stimulus (such as a musician playing along with a non-sounding, blinking metronome) and nothing more.

    (2) As a term, auditory encoding is a bit vague. As it is used in the article, auditory encoding is a listener’s imposition of a duple or triple scheme onto an otherwise ambiguous rhythm. The authors test auditory encoding by suggesting a duple or triple scheme through some non-musical stimuli (a movement or a visual cue timed to strong beats) and then seeing how a listener recreates the scheme over the rhythm sans the non-musical stimuli.

    (3) As false-negatively inclined as I am when dealing with causality, I believe the authors demonstrate their most general result: a physical stimulus does significantly affect auditory encoding, at least in the short term. It seems clear (I would love to hear why this might not be the case) that auditory encoding (at some sub- or semi-conscious level) can be affected through physical movement. While I suspect that we could easily problematize these results, by having participants attempt to encode a string of different ambiguous rhythms with stimuli implying different schemes, I do not know that this would destabilize the authors’ findings. Such a study would reveal the limits of participants’ memory and encoding rather than show that movement does not affect encoding.

    (4) Pithiness aside, I think that the “we hear what the body feels” claim is very hard to test without a rhythmic stimulus through which to understand what the body feels (thus isolating the “what we hear” from the “what we feel”). Actually investigating the encoding of an embodied, yet non-sounded, experience would require a drastically different experimental design. I can imagine a project in which a “listener” is primed by bouncing along with a visual cue (since the authors suggest that visual cues alone do not necessarily encode a metric scheme), and in which we then explore how different bouncing patterns affect the auditory encoding of an ambiguous rhythmic stimulus played after the bouncing has ceased.

    (5) Finally, I would expect, at the very least, that between movement and listening there exists some sort of reflexive nervous interaction that allows embodiment, if not a host of conscious intermediary processes. However, I do not see this as a flaw in this experiment’s design.

  4. Just a quick thought and a question:

    1. “Ambiguous” rhythm in the current context is best understood as a rhythmic pattern that has two possible metric interpretations (it does not mean “vague”); see London’s useful distinction of these terms in his monograph “Hearing in Time” (2012: 106-107).

    2. What do you mean by “autonomous”? And, what is/are the question(s) at the core of the dichotomy you propose of autonomous vs. analytical/conceptual? If this is to be your hobby horse for the semester, might as well dive into it a bit more systematically…

    • Great! I am enjoying the responses so far, and hope for a great deal more pushback.

      Regarding your two items: I will want to discuss both of them in class. For now, I’ll say that I understand the concept of an ‘ambiguous’ rhythm and that I don’t think the authors did a good job of picking one. Put simply, a more effective ambiguous rhythm is one that would make just as much sense (according to the satisfaction of certain conservative preference rules, I suspect) in at least two different meters. The ‘long-short-short-long’ rhythm, by my reckoning at the moment, would result in a syncopation when played in a triple meter; the same is not true for duple. Moreover, such a rhythm theoretically already ‘projects’ a duple meter between the slowest pulse of 6 units and the fastest of 1 unit (see the rough sketch at the end of this reply). Anyhow, ambiguous-rhythm selection aside, I will try to consider in class how this may not compromise the results at all, but rather reinforce them somehow.

      Then, I think the second item will be better served in a live setting. This point will connect to my question of correlation versus causation.
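
      To make the first point a bit more concrete, here is a rough sketch (my own illustration in Python, not anything from the paper), checking how the onsets of the long-short-short-long pattern (660-330-330-660 ms, i.e., durations of 2-1-1-2 in 330 ms units over a six-unit cycle) line up with duple and triple accent grids:

      # Rough illustration only: positions are counted in 330 ms units over a 6-unit cycle.
      durations = [2, 1, 1, 2]   # long-short-short-long (660-330-330-660 ms)
      onsets = [0, 2, 3, 4]      # cumulative onset positions within the cycle

      def strong_pulses(group, cycle=6):
          # pulses counted as 'strong' when the cycle is grouped into beats of 'group' units
          return [p for p in range(cycle) if p % group == 0]

      for label, group in [("duple", 2), ("triple", 3)]:
          strong = strong_pulses(group)
          print(label,
                "| strong pulses:", strong,
                "| onsets on strong pulses:", [o for o in onsets if o in strong],
                "| long notes starting off the beat:",
                [o for o, d in zip(onsets, durations) if d == 2 and o not in strong])

      Under the duple grid, every strong pulse is articulated and both long notes begin on strong pulses; under the triple grid, the final long note begins off the beat – that is the syncopation I have in mind.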

  5. 1) I found their definition of meter to be quite clear: “The organization of beats that allows the music listener to infer a steady succession of rhythmic pulses or strong beats is called meter.” This is similar to Tan et al., who said that “meter is also conceptualized as a pattern of strong and weak time points”. So beat precedes and creates meter. Or meter creates beat. This leads to a sort of chicken-and-egg question. I get confused just thinking about it!

    2) Their results are absolutely supportive of their auditory-encoding hypothesis. On PDF p. 3 they write, “In other words, how we move will influence what we hear.” They are speaking very directly in support of a causal relationship between movement and auditory encoding.

    3) I agree, in the context of this experiment, that we “hear what the body feels”. This is by no means objective truth, however.

    4) The study design did present some initial concerns for me. First, I believe that their mixture of regular and occasional dancers did not enhance my confidence in their findings. What exactly were they trying to prove by mixing such a diverse group of people together? I would be more inclined to be scrupulously selective in choosing test participants. This could, in my opinion, strengthen their argument for auditory encoding.

  6. Oh boy…this is going to be a fun discussion on Thursday.

    I agree with Stephen that the definition of meter is iffy – an organization (does this imply “hierarchy”?) of beats that allows inference of rhythmic pulses. So by this definition, we could think of an isochronous set of pulses as metric, since we can infer a set of rhythmic pulses from it (of course, that depends on what “organization” means).

    As far as correlation versus causality – the statistics and experimental design back up some sort of causality. However, I agree with Stephen’s questioning regarding some sort of mediation effect. The authors allude to vestibular stimulation prior to experiment 4 but don’t expound further. I wonder to what degree the vestibular system (or other ear mechanisms) can lead to correlational effects between movement and sound: does movement have an effect in our ears (e.g., basilar-membrane changes) that leads to auditory effects? If so, would we consider this to be a multisensory effect, or would it only count if the effect is based in our processing system?

    I also am interested in the connection (or lack thereof) between mirror neurons and motion perception – especially when related to the McGurk effect and other motor speech perception models. (For the McGurk effect, check this out: http://www.youtube.com/watch?v=aFPtc8BVdJk). Also, if you’re interested, please check out Corrado Sinigaglia, a philosopher of science who is doing work on mirror neurons and speech/music perception: http://dipartimento.filosofia.unimi.it/index.php/corrado-sinigaglia

  7. Their definition of meter as “the organization of beats that allows the music listener to infer a steady succession of rhythmic pulses or strong beats,” while perhaps not all-encompassing, seems sufficient for their current exploration of meter. And while they never really define auditory encoding, they seem to equate it with auditory representation, so I suppose it could be defined as the way that an auditory stimulus is represented organizationally in the mind, or as it seems in this paper, one’s understanding of the meter of an auditory stimulus. I’m skeptical of this “definition” of auditory encoding, but again I don’t know if this study necessarily calls for a more all-encompassing definition.

    In terms of the procedure, one qualm arose when they mentioned that the correlation between performance and dance experience “approached significance.” As we’ve talked about before in class, I don’t really know what the point of saying that something “approached significance” is; but if they took the time to mention it, then I think its effect needs further study before one can rule out its significance, and therefore I am hesitant to fully accept the results of this particular study.
    Additionally, I am curious about the effect of eye movement on the results. I do not know whether there would be any, but participants’ eye movement was not tracked during the fourth experiment, and I wonder whether distinct eye movements could have some effect on auditory encoding. Either way, does eye movement count as movement?

    Putting that aside, I am not convinced that these experiments demonstrate causation rather than mere correlation. It could be that one aspect of movement causes this auditory component, but that if this aspect were induced without movement itself, the same effect would be found. For example (and this is only a very briefly thought-through alternative), could it be some sort of experience factor that causes this auditory encoding, and not movement? That is, the effect might be caused by something internal rather than external. Under this hypothesis, would one find the same auditory encoding if a shock were experienced on each intended accent instead of a movement? This idea connects with the last line of the paper. Perhaps it is true that we “hear what the body feels,” but that does not necessarily mean that we “hear what the body moves” (excuse the poor English).

    Lastly, I am also interested in the question of mirror neurons, and I found myself very frustrated by the authors’ assessment of this portion of their experiment. In the general discussion, they state that in the established cases of mirror-neuron activation, the movement was goal-directed. With this information, it seems evident that the movement in this experiment was not goal-directed, and therefore I am not convinced that their results have anything to do with the mirror neurons that have shown effects in goal-directed situations. Thus it is still possible that mirror neurons could cause the same effect as movement. However, in the next sentence, the authors present this fact along with an unfounded alternative that would mean that they are still correct (they present no evidence of non-goal-directed movement inducing an effect in mirror neurons). This feels like a very manipulative attempt to assuage doubt. But putting my irrelevant frustration aside, I do think it is enough to call into question the conclusions they draw from their fourth experiment.

  8. There are a few relevant articles on cross-modal effects involving visual and auditory systems in the Bibliography on this site. Looking quickly through, Miller et al. (2013) seemed particularly pertinent.

  9. Since I agree with my classmates that the definition of meter in this paper is fairly straightforward, I’ll only briefly mention that in this particular case, it seems to be a rather simple “version” of meter. Because the participants only had to distinguish between duple and triple, it doesn’t seem quite enough to make generalizations about many different kinds of meter (for example, going back to our non-isochronous vs. isochronous discussions).

    I was curious whether participants would have been able to do the experiment if the rate of beats had been different and fell outside the comfortable bandwidth. In “Hearing in Time,” London presents the idea that an inter-onset interval of about 600 ms is where we are most comfortable listening and best at judging inter-onset intervals. 600 ms, or 100 beats per minute, is roughly the rate of the human heartbeat, and it has been shown that this is the rate at which mothers naturally rock their babies to be most soothing, and the rate at which most people naturally walk. So we best interpret tempos that fall into about the 100 bpm range, and the experiment’s duple grouping of the 330 ms unit (a 660 ms beat, about 91 bpm) comes close to that. But a triple grouping already slows the felt beat to roughly 61 bpm, and a quadruple grouping (about 45 bpm) would fall well outside that comfortable band (see the rough conversions at the end of this comment). I’m curious whether the subjects would still have been able to perform the experiment adequately at such rates, and if so, what that would mean for the rest of the experiment.

    The mirror-neuron principle was also really interesting, if only casually mentioned. Typically, the idea behind mirror neurons is that when you watch someone’s actions, neurons fire in your brain as though you were doing that action yourself. It might be a little unfair to call upon mirror neurons as an explanation if the people were actually doing the movement themselves, however. Mirror neurons fire as though the action were being performed, but because the subjects were actually performing the task, those neurons would have already been firing.

    As for the last line, I think that, given the four experiments, it’s fairly accurate. I’m just not sure it is inclusive enough after looking at only two different meters.
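
    Returning to the tempo arithmetic above: as a rough check (my own back-of-the-envelope conversion, not a figure from the paper), tempo in bpm is just 60,000 divided by the inter-onset interval in milliseconds:

    # Rough conversions only; the 330 ms unit comes from the pattern's durations (660-330-330-660 ms).
    def bpm(ioi_ms):
        # beats per minute for a given inter-onset interval in milliseconds
        return 60000 / ioi_ms

    print(bpm(600))      # 100.0  London's "comfortable" rate
    print(bpm(2 * 330))  # ~90.9  beat of two 330 ms units (duple grouping)
    print(bpm(3 * 330))  # ~60.6  beat of three units (triple grouping)
    print(bpm(4 * 330))  # ~45.5  beat of four units (a hypothetical quadruple grouping)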
