
Music Cognition Symposium


The Eastman/UR/Cornell/Buffalo Music Cognition Symposium is an informal gathering of people interested in music cognition. The symposium meets four times a year (twice in the fall and twice in the spring) on Saturday afternoons, usually at Eastman. The symposium receives funding from the University Committee on Interdisciplinary Studies (UCIS) at the University of Rochester.

Often, the symposium features invited guests—leading researchers in music cognition from around the United States and beyond. Symposia may also feature presentations of ongoing work by members of the community, and discussions of readings and topics in music cognition. Recent topics have included performance expression, probabilistic modeling, melodic expectation, and music-language connections.

Symposia are open to the public, and all are welcome. To be added to the symposium’s e-mail mailing list, contact David Temperley (dtemperley@esm.rochester.edu).

Music Cognition Symposia, 2017-18

Saturday, September 30, 2017
Guest speaker: Sean Hutchins, Royal Conservatory of Music, Toronto
Ciminelli Lounge (Eastman Student Living Center, 100 Gibbs St., Rochester), 2:00-5:00 p.m.

Saturday, November 18, 2017
Lexical and Musical Tone
Guest speakers: Laura McPherson (Dartmouth College) and Gavin Bidelman (University of Memphis)
Howard Hanson Hall, Eastman School of Music (4th floor of main building), 2:00-5:00 p.m.

Saturday, March 3, 2018
Computational Music Research
Guest speakers: Juan Bello (New York University) and Xavier Serra (Universitat Pompeu Fabra, Barcelona)
Ciminelli Lounge (Eastman Student Living Center, 100 Gibbs St., Rochester), 2:00-5:00 p.m.

Saturday, April 21, 2018
Local Research
ESM 120, Eastman School of Music

 

Music Cognition Symposium Steering Committee

  • University of Rochester: Elizabeth West Marvin and David Temperley (Eastman), Joyce McDonough (Linguistics), Anne Luebke (Biomedical Engineering), Zhiyao Duan (Electrical and Computer Engineering)
  • Cornell University: Carol Krumhansl
  • University at Buffalo: Peter Pfordresher

Visiting speakers at the music cognition symposium in past years

Roger Chaffin
Elaine Chew
Sarah Creel
Roger Dannenberg
Steven Demorest
Mary Farbood
Sid Fels
Jessica Grahn
Peter Gregersen
Andrea Halpern
Erin Hannon
David Huron
Petr Janata
Ed Large
Steve Larson
Fred Lerdahl
Dan Levitin
Charles Limb
Justin London
Psyche Loui
Elizabeth Margulis
Steve McAdams
Devin McAuley
Josh McDermott
Ken’ichi Miyazaki
Rosemary Mountain
Eugene Narmour
Jean-Jacques Nattiez
Caroline Palmer
Bryan Pardo
Ani Patel
Isabelle Peretz
Dirk-Jan Povel
Bruno Repp
Jean-Claude Risset
Frank Russo
Gottfried Schlaug
Mark Schmuckler
John Sloboda
Michael Thaut
Barbara Tillmann
Laurel Trainor
Sandra Trehub
Victoria Williamson
Robert Zatorre

 

Sean Hutchins (Sept. 30)

Dr. Sean Hutchins is the Director of Research for The Royal Conservatory of Music in Toronto. He founded and currently leads The Royal Conservatory’s Research Centre, focusing on experimental studies of music neuroscience and performance. He received his PhD from McGill University in 2008, and is trained in experimental psychology and neuroscience, with a specialization in the field of music cognition. Dr. Hutchins has held positions at l’Université de Montréal and the Rotman Research Institute at Baycrest Hospital in Toronto. Dr. Hutchins is an expert in the science of vocal perception and production; his research has studied the factors that affect basic singing ability and the relationship between speech and singing. His current work examines the effects of musical training and experience on cognitive and linguistic abilities.

Dr. Hutchins will give two talks:

Music and Language Production

Music and language share many similarities in form, in goals, and in usage. Given this overlap, it has long been supposed that musicians’ training transfers to improved linguistic ability. In this talk, I’ll discuss some of the behavioural and neurological evidence for music-to-language transfer, then home in on an important piece of the puzzle that has been largely ignored: the role of production. I will describe some of my recent experiments investigating the transfer of production abilities in people across the full range of musical ability and training, and discuss how this fits with current models of music-language transfer and with the wider impacts of musical training.

Music Educators and Psychology

The psychology of music is a field that has grown by leaps and bounds over the past decades; we now know much more about the musician’s mind and the factors that can affect (and be affected by) musical skill. However, one problem with any interdisciplinary field can be a lack of communication. In this session, I will discuss my role as a scientist within a music conservatory, the challenges of effective communication across disciplines, and the ways that we attempt to integrate cognitive psychology into curriculum design. The session will include a general discussion on effective two-way communication with music educators, the most important areas in the field for an educator to know, and practical examples of successful integration of music education and psychology.

Lexical and Musical Tone (Nov. 18)

Howard Hanson Hall, Eastman School of Music (4th floor of main building), 2:00-5:00 p.m.

Visiting Speakers:
Laura McPherson, Department of Linguistics, Dartmouth College
Gavin Bidelman, School of Communication Sciences & Disorders, University of Memphis

2:00-2:15 General introduction
2:15-3:15 Laura McPherson, “The talking balafon of the Sambla”
3:15-3:30 Discussion

3:30-3:45 Break with refreshments

3:45-4:45 Gavin Bidelman, “The effects of music and tone-language experience on neuroplasticity, perceptual abilities, and cognitive transfer”
4:45-5:00 Discussion

Abstracts and Bios

“The Talking Balafon of the Sambla”
Laura McPherson, Dartmouth College

BIO. Dr. Laura McPherson is Assistant Professor in the Linguistics and Cognitive Science Program at Dartmouth College. Her theoretical research focuses on phonology (sound systems) and morphology (word formation), with a special focus on tone. She is also interested in how phonology can be adapted to or invoked in music. Most of her data come from primary fieldwork in West Africa (Mali and Burkina Faso), where she has been undertaking in-depth descriptive projects. Her first reference grammar, A Grammar of Tommo So, was published by De Gruyter Mouton in 2013. She is currently working on a grammar of Seenku (exonym Sembla/Sambla), a Mande language spoken in Burkina Faso. Current interests include the relationship between phrases and phonology, tonal features in Seenku, tone-tune association in Tommo So folk music, and the Seenku surrogate language in xylophone music.

ABSTRACT. A growing body of literature points to immense overlaps in structural and cognitive aspects of language and music. Blurring the boundary between these two modes of expression are musical surrogate languages, in which a fundamentally linguistic message is encoded and performed musically. This talk focuses on the case study of the Sambla balafon (resonator xylophone). The Sambla are a Mande ethnic group in Burkina Faso, whose language, Seenku, is spoken by about 17,000 people. Any important village event will be accompanied by traditional balafon music, in which musicians “sing” lyrics and communicate with spectators solely through their instruments. This balafon surrogate language is an “abridging system” (Stern 1957), encoding certain phonological aspects (tone, vowel length, and word structure) to the exclusion of others (segmental information). Even amongst the encoded aspects, we find a division between lexical/morphological and postlexical processes, with the latter only variably encoded in the surrogate language, suggesting that a separation between grammatical components is accessible to musicians in transposing speech to musical form. In this talk, I formalize the relationship between the phonology of the spoken language and the surrogate language, while also exploring the value of this tradition both in Sambla society and as a tool for language documentation.

“The effects of music and tone-language experience on neuroplasticity, perceptual abilities, and cognitive transfer”
Gavin Bidelman, University of Memphis

BIO. Dr. Gavin Bidelman directs the Auditory Cognitive Neuroscience Laboratory in the School of Communication Sciences and Disorders at the University of Memphis. The major goals of his research are to better understand the neural basis of complex auditory perception and cognition (e.g., speech and music) and how they change with listening experience, hearing impairment, and age. The lab uses a multifaceted approach to understanding human audition, drawing on a coordinated blend of techniques: neuroimaging (EEG/ERPs), psychoacoustics, and computational modeling. Current projects focus on understanding the neurocomputations involved in generating basic psychoacoustic phenomena and in complex music/speech listening. Complementary work examines how different listening experiences and/or training (e.g., music lessons, bilingualism) influence an individual’s auditory skills and how these benefits might transfer to improve seemingly unrelated cognitive abilities.

ABSTRACT. Behavioral and neuroimaging evidence suggests that music and language are intimately coupled; experience/training in one domain influences cognitive processing in the other. While music-to-language transfer effects are well documented, clear evidence of transfer in the complementary direction (i.e., language-to-music) has yet to be established. In this talk, I will provide evidence from my lab for a “bi-directionality” between music and tonal languages and highlight the perceptual and cognitive benefits of these two human experiences. Using a blend of perceptual, cognitive, and neuroimaging measures, we are investigating the similarities and differences between the effects of music and tone-language expertise on brain function and how these two experiences positively transfer to impact one another. Our studies reveal that both musical training and language experience enhance auditory neural processing, perception, and certain cognitive abilities (e.g., working memory). We have found that while both experiences mutually benefit the neural extraction and subsequent perception of acoustic information, specific features of sound are highlighted in a listener’s brain activity depending on their perceptual salience and function within the listener’s domain of expertise.

Computational Music Research

Saturday, March 3, 2:00-5:00
Ciminelli Lounge, Eastman Student Living Center, 100 Gibbs St., Rochester
Guest speakers: Xavier Serra (Universitat Pompeu Fabra) and Juan Bello (New York University)

2:00-2:10 Introductions
2:10-3:10 Xavier Serra, “Computational Studies of Several Non-Western Musical Repertoires”
3:10-3:25 Discussion
3:25-3:45 Break with Refreshments
3:45-4:45 Juan Bello, New York University, “Some Thoughts on the How, What and Why of Music Informatics Research”
4:45-5:00 Discussion

BIOS AND ABSTRACTS

Xavier Serra, Universitat Pompeu Fabra, Barcelona

BIO: Xavier Serra is a Professor in the Department of Information and Communication Technologies and Director of the Music Technology Group at the Universitat Pompeu Fabra in Barcelona. After a multidisciplinary academic education, he obtained a PhD in Computer Music from Stanford University in 1989 with a dissertation on the spectral processing of musical sounds that is considered a key reference in the field. His research interests cover the computational analysis, description, and synthesis of sound and music signals, with a balance between basic and applied research and approaches from both scientific/technological and humanistic/artistic disciplines. Dr. Serra is very active in the fields of Audio Signal Processing, Sound and Music Computing, Music Information Retrieval, and Computational Musicology at the local and international levels, serving on the editorial boards of a number of journals and conferences and lecturing on current and future challenges in these fields. He was awarded an Advanced Grant from the European Research Council to carry out the CompMusic project, aimed at promoting multicultural approaches in music information research.

TITLE: Computational Studies of Several Non-Western Musical Repertoires

ABSTRACT: The use of computational methods in the study of music through the processing of digital artifacts goes back several decades, but it is only in the past few years that this type of research has started to obtain some musicologically relevant results, mainly from the approaches taken in the field of Music Information Retrieval (MIR). With these approaches we have been able to validate some existing musicological knowledge, but we are still at the very beginning. In this talk I will go over several of the studies done at the MTG in the context of CompMusic, a project in which we have focused on five non-Western music repertoires: Hindustani (North India), Carnatic (South India), Turkish makam (Turkey), Arab-Andalusian (Maghreb, or North Africa), and jingju (Beijing Opera, China). We have created corpora for each repertoire, have developed computational methodologies with which to study their melodic and rhythmic characteristics, and have obtained some musically relevant results to help us better understand these music traditions.

Juan Bello, New York University

Juan Pablo Bello is Associate Professor of Music Technology at New York University, with courtesy appointments at the Department of Electrical and Computer Engineering, and NYU’s Center for Data Science. In 1998 he received a BEng in Electronics from the Universidad Simón Bolívar in Caracas, Venezuela, and in 2003 he earned a doctorate in Electronic Engineering at Queen Mary, University of London. Juan’s expertise is in digital signal processing, machine listening and music information retrieval, topics that he teaches and in which he has published more than 70 papers and articles in journals and conference proceedings. In 2008, he co-founded the Music and Audio Research Lab (MARL), where he leads research on music informatics. His work has been supported by public and private institutions in Venezuela, the UK, and the US, including a CAREER award from the National Science Foundation and a 2013 Fulbright scholar grant for multidisciplinary studies in France.

Title: Some Thoughts on the How, What and Why of Music Informatics Research

Abstract: The framework of music informatics research (MIR) can be thought of as a closed loop of data collection, algorithmic development, and benchmarking. Much of what we do is heavily focused on the algorithmic aspects, or how to optimally combine various techniques from, e.g., signal processing, data mining, and machine learning to solve a variety of problems (from auto-tagging to automatic transcription) that captivate the interest of our community. We are very good at this, and in this talk I will describe some of the know-how that we have collectively accumulated over the years. On the other hand, I would argue that we are less proficient at clearly defining the “what” and “why” behind our work, that data collection and benchmarking have received far less attention and are often treated as afterthoughts, and that we sometimes tend to rely on widespread and limiting assumptions about music that affect the validity and usability of our research. On this, there is much that we can learn from music cognition research, particularly with regard to the adoption of methods and practices that fully embrace the complexity and variability of human responses to music, while still clearly delineating the scope of the solutions or analyses being proposed.

Local Research

2:00 Introduction
2:15-2:45 Lissa Reed (Eastman School of Music), “Do Elements of a Musician’s Speech Prosody Influence Their Musical Text-Setting?”
2:45-3:15 Ronald Friedman (University at Albany), “Re-exploring the Effects of Relative Pitch Cues on the Perceived Emotional Expression of an Unconventionally Tuned Musical Scale”
3:15-3:45 Break with refreshments
3:45-4:15 Olivia Wen (Cornell), “Perception of the Standard Pattern and the Diatonic Pattern”
4:15-4:45 Emma Greenspon (University at Buffalo), “Domain Specificity of Auditory Imagery in Vocal Reproduction”

BIOS AND ABSTRACTS

Lissa Reed is a first-year PhD student in music theory at the Eastman School of Music. She previously studied at the Ohio State University and Florida State University, and has broad musical research interests, including intersections of music perception and analysis, music and politics, and music theory pedagogy.
Title: Do Elements of a Musician’s Speech Prosody Influence Their Musical Text-Setting?
Abstract: Musicians, linguists, and cognitive psychologists have often investigated parallels between linguistic prosody and instrumental musical composition, but potential connections between speech prosody and texted music are often taken for granted rather than tested. This study aims to draw discrete connections between characteristics of prosodic speech and sung lyrics within subjects. Ten musicians were recorded speaking fourteen sentences and subsequently singing them on improvised melodies. Using spectrograms, similarities in rhythm (syllable timing) and in contour (intonation) are measured between a subject’s spoken and sung versions of the same sentence. To measure similarity in rhythm, each syllable is coded as longer (1), shorter (-1), or the same (0) as its preceding syllable, forming an ordered vector; the spoken vector is then compared syllable-for-syllable with the sung vector. The resulting similarity measures for all sentences are compared with chance using a one-sample t-test. Two different measures are taken for contour: a syllabic contour vector analogous to the timing vector, and a reduced contour. Using contour similarity functions and contour embedding functions, a proportion of similarity is determined for each spoken/sung reduced-contour pair, and these proportions are compared with chance. Results are expected to lend support to the commonly held belief that speech prosody influences vocal melodic music-making.
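
To make the coding and comparison steps above concrete, here is a minimal sketch in Python, assuming syllable durations have already been extracted from the spectrograms. The function names, the toy duration values, and the use of 1/3 as the chance level (since each comparison has three possible codes) are illustrative assumptions, not the study’s actual analysis code.

```python
# Minimal sketch (assumptions noted above): code each syllable relative to
# its predecessor, compare spoken vs. sung vectors position by position,
# and test the pooled similarity scores against an assumed chance level.
import numpy as np
from scipy import stats

def duration_contour(durations):
    """Code each syllable as longer (1), shorter (-1), or the same (0)
    relative to the preceding syllable."""
    return np.sign(np.diff(np.asarray(durations, dtype=float)))

def rhythm_similarity(spoken_durations, sung_durations):
    """Proportion of syllable-for-syllable matches between the spoken and
    sung duration-contour vectors of the same sentence."""
    spoken = duration_contour(spoken_durations)
    sung = duration_contour(sung_durations)
    assert len(spoken) == len(sung), "same sentence, so same syllable count"
    return float(np.mean(spoken == sung))

# Toy syllable durations (seconds) for two sentences: (spoken, sung).
sentences = [
    ([0.21, 0.35, 0.18, 0.40], [0.25, 0.38, 0.22, 0.51]),
    ([0.30, 0.22, 0.27, 0.19], [0.28, 0.31, 0.26, 0.24]),
]
scores = [rhythm_similarity(spoken, sung) for spoken, sung in sentences]

# One-sample t-test of the similarity scores against chance (assumed 1/3).
t_stat, p_value = stats.ttest_1samp(scores, popmean=1 / 3)
print(f"mean similarity = {np.mean(scores):.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```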

Ronald S. Friedman is an associate professor of psychology and head of the Social/Personality area at the University at Albany. He is interested in the links between emotion, motivation, and cognition and is currently exploring how situational cues and personality impact emotional responses to music.
Title: Re-exploring the Effects of Relative Pitch Cues on the Perceived Emotional Expression of an Unconventionally Tuned Musical Scale
Abstract: In this study, we reassessed the hypothesis that musical scales take on a sadder expressive character when they include one or more scale degrees that are lower in pitch than “normal”. Two conceptual replications of a prior study by Yim (2014; Huron, Yim, & Chordia, 2010) were conducted, incorporating modifications meant to bolster statistical power, enhance internal and external validity, and refine measurement of perceived emotional expression. In both experiments, participants were exposed to a set of melodies based on a single, highly unconventional scale, the Bohlen-Pierce (BP) scale. In the high versus low exposure conditions, participants were exposed to melodies based on a BP scale variant in which selected scale degrees had been raised versus lowered relative to a comparison scale. Following the exposure phase, all participants rated the perceived sadness/happiness of the exact same test melodies, in this case based on the “intermediate” comparison scale. Results confirmed that lowering selected degrees of an exposure scale causes melodies based on the comparison scale to be perceived as sadder/less happy (Experiment 1). However, altering these scale degrees did not independently affect perceptions of sadness/happiness after controlling for the average pitch height of the scale variants (Experiment 2). As such, the findings provide qualified support for the contention that “lower than normal” scales are perceived as expressively sadder.
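
As background for the scale manipulation described above, a purely illustrative sketch follows: the Bohlen-Pierce scale divides the 3:1 “tritave” into 13 equal steps, so degree k above a base frequency f0 has frequency f0 * 3^(k/13). The particular degrees, base frequency, and half-step lowering below are invented for illustration (the study’s actual stimuli are not specified here); the sketch simply shows how lowering selected degrees also lowers the scale’s average pitch height, the confound controlled in Experiment 2.

```python
# Illustrative only: construct a Bohlen-Pierce (BP) scale variant with
# selected degrees lowered, and compare average pitch height.
from statistics import mean

BASE_HZ = 220.0  # arbitrary reference frequency

def bp_frequencies(degrees, shifts=None):
    """Frequencies of BP scale degrees (13 equal steps per 3:1 tritave),
    with optional per-degree shifts in (possibly fractional) BP steps."""
    shifts = shifts or {}
    return [BASE_HZ * 3 ** ((d + shifts.get(d, 0.0)) / 13) for d in degrees]

degrees = [0, 2, 4, 6, 9, 11]                  # arbitrary subset of the 13 BP degrees
comparison_scale = bp_frequencies(degrees)
lowered_variant = bp_frequencies(degrees, shifts={4: -0.5, 9: -0.5})  # hypothetical lowering

print(f"comparison-scale mean frequency: {mean(comparison_scale):.1f} Hz")
print(f"lowered-variant mean frequency:  {mean(lowered_variant):.1f} Hz")
```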

Olivia Wen is a fourth-year Ph.D. student in the Department of Psychology at Cornell University. Her research focuses on the perception of musical structure, body movement, and social affiliation.
Title: Perception of the Standard Pattern and the Diatonic Pattern
Abstract: Pressing (1983) pointed out the theoretical parallel between two cyclic patterns that share the surface structure of 2 2 1 2 2 2 1. One is the standard rhythmic pattern and the other is the diatonic scale. The standard rhythmic pattern is the most pervasive rhythmic pattern in Central and West African music (Agawu, 2006; Kubik, 1999; Pressing, 1983; Rahn, 1987; Temperley, 2000; Toussaint, 2013), and the diatonic scale pattern is the most prominent scale pattern in Western music (Agmon, 1989; Browne, 1981; Powers, 2001). Despite the theoretical interest in these two patterns, no experimental studies have investigated possible parallels in how they are perceived. Experiment 1 used the standard rhythmic pattern beginning at each of the 7 possible starting positions. A probe accent technique was developed in which one tone of the rhythm is dynamically accented and listeners rate how well the accent fits the rhythm. Listeners perceived the standard pattern as a subdivision of 2 tones + 2 tones + 3 tones, and the metrical hierarchy of a syncopation-shifted 3/2 meter (Temperley, 2000). In addition, rhythms having similar probe accent profiles were those that were close according to the theoretical swap distance measure (Toussaint, 2013). Using the probe tone technique and the diatonic pattern beginning at all 7 positions (diatonic modes), Experiment 2 found that listeners gave high ratings to tones early in the context, close to the tonic, and relatively frequent in the scale contexts used in the experiment, which also corresponded with various measures of consonance. In addition, modes with similar probe tone profiles were close according to the swap distance measure applied to the modes, which matches the arrangement of the modes by number of accidentals. Similarities and differences in perception of the two patterns will be discussed.
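
The swap distance invoked in both experiments can be illustrated with a short sketch. Under the simplifying assumptions that each mode is represented as onset positions within a 12-pulse cycle and that modes are compared aligned at their first onset, the swap distance is the total displacement of corresponding onsets, i.e., the minimum number of adjacent-pulse swaps needed to turn one pattern into the other (Toussaint, 2013). The function names and representation are illustrative, not taken from the talk.

```python
# Sketch: the seven rotations ("modes") of the 2 2 1 2 2 2 1 pattern and the
# pairwise swap distances between them, under the assumptions stated above.
from itertools import combinations

STANDARD = [2, 2, 1, 2, 2, 2, 1]  # inter-onset intervals; 12 pulses in total

def onsets(intervals):
    """Onset positions implied by an interval pattern, starting at pulse 0."""
    positions, t = [], 0
    for step in intervals:
        positions.append(t)
        t += step
    return positions

def rotate(intervals, k):
    """Start the cyclic interval pattern at position k."""
    return intervals[k:] + intervals[:k]

def swap_distance(a, b):
    """Sum of displacements of corresponding onsets: the minimum number of
    adjacent-pulse swaps turning one pattern into the other."""
    return sum(abs(x - y) for x, y in zip(onsets(a), onsets(b)))

modes = [rotate(STANDARD, k) for k in range(len(STANDARD))]
for i, j in combinations(range(len(modes)), 2):
    print(f"mode {i} vs mode {j}: swap distance = {swap_distance(modes[i], modes[j])}")
```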

Emma Greenspon is a Ph.D. Candidate at the University at Buffalo, SUNY. She works with Dr. Peter Pfordresher, and her current research focus is on the degree to which speech and music rely on shared or separate cognitive resources.
Title: Domain Specificity of Auditory Imagery in Vocal Reproduction
Abstract: The ability to reproduce a sound with one’s voice requires three components: an accurate representation of the target sound, an accurate motor plan that will result in the reproduction of the target sound, and an accurate association between these perceptual and motor representations. Auditory imagery has recently been suggested as a mechanism that underlies the third component: sensorimotor association. This talk will discuss two studies that examine whether auditory imagery relies on shared or separate resources for speech and music in the context of vocal reproduction. This question is addressed through both correlational and experimental methods.