The Berkeley Phonetics, Phonology and Psycholinguistics Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonetics, phonology, and psycholinguistics. We meet on Fridays from 4(:10)-5pm (unless specified otherwise below), in Dwinelle 1229 (Zoom link shared upon request). Phorum is organized by Kai Schenck and Lindsay Hatch. Our emails are respectively "kai_schenck" and "lindsaykhatch" @berkeley.edu.
Schedules from previous semesters can be found here.
Fall 2024 Schedule
September 6
Introductions & round robin!
We'll share about our summers, then share any interesting puzzles or pieces of data we've been working with. You are welcome to attend without presenting in the round robin.
September 12 (irregular time)
Martin Krämer (UiT, The Arctic University of Norway): Sonority, markedness and the OCP
In this talk, data from a wide range of languages as well as language acquisition are presented that cast serious doubt on the role of sonority and sonority sequencing in syllable phonotactics. These data show that cross-linguistic apparent sonority effects must be coincidental. The theoretical challenge is thus not how to incorporate a universal multi-level scale into a theory of phonology with otherwise binary categorical distinctions (features are generally assumed to be binary or privative, not scalar), but to explain the fuller empirical picture, including alleged sonority effects, without any phonetically motivated hierarchy. I argue here that some of the apparent sonority effects emerge from a more abstract principle of syntagmatic contrast maximization, which is at least a close relative of the Obligatory Contour Principle.
September 13
No regular meeting -- please check out the Phonological Domains workshop, hosted by the Linguistics department in Dwinelle 370.
September 20
Niko Schwarz-Acosta (UC Berkeley): “Al Cʉɐntu da Penuchu”: Perceptual Learning of a Vowel Shift in Mexican Spanish
In the perceptual adaptation literature, vowel chain shifts are employed to test perceptual learning (e.g. Maye et al., 2008; Weatherholtz, 2015). These experiments found that listeners shift their perceptual boundaries when exposed to a novel accent for sufficient time. Both experiments had an exposure phase, in which participants heard a story containing the phonetic shift under investigation, followed by a lexical decision task assessing lexical adaptation to the new speech patterns. After exposure, listeners endorsed shifted items as words at a rate of around 60%, compared to endorsement rates of 90-100% for nonshifted items. Importantly, these studies were done in English. Recent research has highlighted potential language-specific processes in psycholinguistics (Clapp & Sumner, 2024). The question still remains as to whether a listener’s first language affects their perceptual adaptation to novel speech. This study investigates the perceptual learning of Mexican listeners in a familiarization task. To my knowledge, Spanish has no attested vowel chain shift in any dialect. Moreover, Spanish has a 5-vowel system, whereas English has around 10 vowels, varying by dialect.
A familiarization task was employed to assess the perceptual adaptation of Mexican Spanish listeners to novel speech patterns. Listeners were asked to listen to a story containing a vowel chain shift and complete a lexical decision task containing both shifted and nonshifted words. Vowels were shifted in Praat by separating the speaker's source and filter, then manipulating the filter properties. Listeners were exposed to one of 6 possible conditions: 2 vowel conditions (unshifted vs. shifted) x 3 exposure times (10, 5, and 2 minutes). 120 participants were recruited through Prolific, a pool that was then filtered down to 109 based on performance on the control words and nonwords.
Results suggest that familiarization may not play a large role in the endorsement rates of critical items. Additionally, an analysis of vowel-specific endorsement rates reveals that certain vowel shifts are significantly less likely to be endorsed than others. These results point to language-specific processes, as well as to vowel-specific differences in endorsement. A Bayesian model was then fit to the data to evaluate these findings.
September 27
Alexia Hernandez (Stanford PhD graduate): The role of experience on the cognitive underpinnings of linguistic bias: An interdisciplinary investigation of Miami-based Cuban American speech [Virtual talk]
In this talk, I’ll investigate the cognitive processes and architectures that underlie speech-based linguistic bias. Ultimately, I argue that linguistic and social experiences mediate category structure, and that differently structured categories modulate speech production, perception, and bias patterns.
Speech-based bias is associated with linguistic variation in production. Thus, I first inquire about the cognitive systems behind speech variation by analyzing the acoustic patterns of TRAM, TRAP, /l/, (DH), and rhythm realizations within the Cuban American community in Miami, FL. I show that social factors can reflect differences in experience, which shape individual speakers’ cognitive representations and make speech variation in production possible.
Building on these production patterns, I study how listeners use variation in Miami-based Cuban American speech for person construal. I find that listeners’ social and linguistic experiences structure their racial/ethnic perception of speakers. Both Miami-based Cuban American and General American listeners display a range of ethnic/racial perception, though they attend to different social and linguistic cues. Moreover, listeners’ perceptions were tied to linguistic patterns, not individual speakers, such that the same speakers were perceived variably across phrases.
Finally, I ask how two listener groups make stereotyped associations based on perceived speaker identity in a speeded association task. While both Miami-based Cuban Americans and Midwestern listeners exhibited a whiteness bias, quickly associating perceived non-Hispanic white speech with white stereotypes, Midwestern listeners exhibited more biased responses. This study again underscores that experience impacts the implicit biases listeners hold about speakers.
Across all three studies, the role of experience emerges as an important force in shaping language production, perception, and bias. The results support a cognitive architecture that integrates social information pre-comprehension via a socioacoustic memory. This architecture suggests that experience with diverse populations and their speech has the potential to decrease linguistic bias and discrimination.
October 4
Marko Drobnjak (Arnes, University of Ljubljana): VOICE: Verifying How Speech Perception Shapes Credibility in Legal Contexts – A Statistical and Experimental Approach with Future Machine Learning Potential
Witness testimony often serves as critical evidence in legal cases, making credibility a key concern. Various factors, including speech patterns, influence how trustworthy a witness is perceived to be. Previous studies have shown that rhetorical skill enhances credibility, while the use of dialect or vernacular speech can lead to perceptions of unreliability. These judgments are shaped by social hierarchies, where speech serves as a marker of status.
My research, based on an experimental study conducted during my Fulbright at UC Berkeley, examines how dialect and gender influence the perceived credibility of witnesses. Slovenia, with its rich dialectal diversity and low income inequality, provides an ideal context to isolate the effects of speech on credibility without the strong socioeconomic associations seen in other regions. The findings indicate that participants found dialect speakers more trustworthy than those using standard speech, likely associating the latter with institutional authority, which tends to have low public trust. Furthermore, male speakers were consistently rated as more credible.
These findings suggest that linguistic biases may contribute to disparities in legal outcomes, highlighting the need for greater awareness of speech perception in legal proceedings.
Future research could explore the use of Generative Adversarial Networks (GANs) to analyze how vocal characteristics like tone and timbre further shape perceptions of credibility, opening new avenues for understanding bias in legal contexts.
October 11
Kai Schenck (UC Berkeley): Modeling stochasticity, gradience, and domain effects in Yurok rhotic vowel harmony with Gestural OT
I argue that the framework of Gestural OT (Smith 2018), an OT implementation of Articulatory Phonology’s speech production model (Browman & Goldstein 1986, 1989), is able to account for the phonetic and phonological behavior of Yurok harmony, if the *Inhibit constraints that penalize the presence of an inhibition gesture are indexed to prominent morphological domains.
This framework captures not only categorical stochasticity, but also a gradient mechanism that regulates the degree to which a gesture is expressed in the phonetic output, modeled as gestural strength in Gestural OT (Smith 2018).
October 18
Richard Wang (UC Santa Cruz): Morphosyntax-Prosody Mismatch in Beijing Mandarin: Evidence from Retroflex Lenition
Stress in (Beijing) Mandarin, or the lack thereof, is a topic under much debate in the literature. Retroflex lenition, an optional phenomenon occurring in fast speech, is sensitive to segment duration, which provides insight into prosodically weak positions in the language. In capturing the lenition sites, I propose that prosodic structures in Beijing Mandarin are conditioned by but not perfectly mapped to morphosyntactic boundaries, resulting in a (morpho)syntax-prosody mismatch. Retroflex lenition can only occur on the weak syllable of a trochee, and foot recursion is necessary to derive the prosodic structures. Additionally, the distribution of (neutral) tones can affect prosodic parsing. I provide an analysis in Harmonic Grammar to account for a gang effect problematic for parallel Optimality Theory. The lenition domain is also compared with another phrasal phonological process, Tone 3 sandhi. Through the comparison, I demonstrate that Tone 3 sandhi does not operate on structures belonging to the Prosodic Hierarchy, thus resulting in a domain mismatch with retroflex lenition. For theoretical implications, lenition sits at the phonetics-phonology interface and the (morpho)syntax-prosody interface, and demonstrates tone-prosody interactions.
October 25
Yin Lin Tan (Stanford): Towards an indexical account of English in Singapore: Sociophonetic variation and Singlish
November 1
Santiago Barreda (UC Davis): Re-Introducing the Probabilistic Sliding Template Model of Vowel Perception
November 8
No meeting due to speaker cancellation.
November 15
Maya Wax Cavallaro (UCSC): Domain final sonorant consonant devoicing: Phonetics interacting with phonology
The devoicing of sonorant consonants at the right edge of phonological domains (utterance, word, syllable, etc.) is a phenomenon that, while relatively typologically rare, is also under-described and not well-understood. It is often dismissed as a surface-level phonetic tendency without phonological consequences. My work pushes back on this assumption, investigating how final sonorant devoicing can help us better understand the relationship between phonetics and phonology. In this talk, I will discuss:
- Phonetic factors that may lead to final sonorant devoicing
- How final sonorant devoicing becomes phonologized, and
- Whether/how one might tell the difference between phonetic and phonological sonorant devoicing
I propose that word- and syllable-final sonorant devoicing results from the phonologization of utterance-final phonetic tendencies, and the generalization of a pattern from the utterance level to smaller prosodic domains. While past work has shown that generalization from the utterance domain to the word level is possible, I present evidence from an artificial language learning experiment showing that learners can also generalize a phonological pattern to the syllable level.
I provide examples of two different sonorant devoicing patterns with data from recent fieldwork on Tz’utujil (Mayan) and Santiago Laxopa Zapotec (Otomanguean), and present preliminary results from an ongoing phonological and acoustic study of the patterns in these languages.
November 22
Suyuan Liu (University of British Columbia): In Search of the Standard Mandarin Speaker
Despite growing recognition of the need for precise language when describing speaker populations (e.g., Clopper et al., 2005; Cheng et al., 2021), Mandarin is still often treated as monolithic. Even studies on dialectal variation frequently categorize comparison groups as "Speakers of Standard Mandarin". Standard Mandarin, or Putonghua (普通话), is the national standard of China, officially defined as a language based on pronunciation from Beijing and Northern Chinese dialects, with grammar rooted in vernacular literature (Weng, 2018). But do speakers share this definition? How is "standardness" actually perceived? And, how does perceived standardness affect speech processing?
To explore these questions, I present the Mandarin-English Bilingual Interview Corpus, containing high-quality interviews with 51 bilinguals conducted by a single interviewer in both languages. These interviews provide spontaneous speech samples and rich insights into language backgrounds, attitudes toward variation, and definitions of standardness. Designed for qualitative and quantitative analysis, this corpus is a versatile resource for investigating how perceptions of standardness shape our understanding of Mandarin and its speakers.
November 29
No meeting -- Academic and Administrative Holiday
December 6
No meeting due to speaker cancellation.
December 13
Meg Cychosz (UCLA): Harnessing children’s messy, naturalistic environments to understand speech and language development
Children learn the patterns of their native language(s) from years spent interacting and observing in their everyday environments. How can we model these daily experiences at a large scale? It is no longer a question of if sufficiently comprehensive datasets can be constructed, but rather how to harness these messy, naturalistic observations of how children and their caregivers communicate.
In this talk, I will present recent work that used child-centered audio recorders to illustrate how children’s everyday language learning environments shape their speech and language development. I will present work that I conducted with colleagues on the language learning environments of bilingual Quechua-Spanish children in Bolivia, infants and toddlers who are d/Deaf and have received cochlear implants, and work with a new, massive dataset consisting of infant speech samples from a large number of linguistically diverse settings. Although these populations seem disparate, I will show how studying the everyday language environments of infants and children from a variety of backgrounds (large cross-linguistic samples, children with hearing loss) helps us better understand how all children develop speech and language.