Phorum 2024

Spring 2024

January 19

No meeting.

Please check out the panel at 3:30pm (at Social Science Matrix, 8th floor Social Sciences Building) on Prof. Andrew Garrett's new book 'The Unnaming of Kroeber Hall'.

January 26

Gašper Beguš (UC Berkeley): Vowels and Diphthongs in Sperm Whales

Sperm whale vocalizations are among the most intriguing communication systems in the animal kingdom. Traditionally, sperm whale codas, or groups of clicks, have been primarily analyzed in terms of the number of clicks and their inter-click timing. This presentation brings a new dimension to the study of sperm whale communication — spectral properties — and argues that spectral properties are likely actively controlled by whales and potentially meaningful in this communication system. We uncover previously unobserved recurrent spectral patterns that are orthogonal to the traditionally analyzed properties. We present a visualization technique that allows us to describe several previously unobserved patterns. We introduce the source-filter analysis of sperm whale codas and argue that they are on many levels analogous to human vowels and diphthongs: vowel duration and pitch correspond to the number of clicks and their timing (traditional coda types), while spectral properties of clicks correspond to formants in human vowels. We identify two recurrent and discrete spectral patterns that appear across individual sperm whales: the a-coda vowel and i-coda vowel. Both coda vowels are possible on different traditional coda types. Our discovery thus suggests that spectral (filter) properties are independent of the source properties (number of pulses and timing). We also show that sperm whales have diphthongal patterns on individual codas: rising, falling, rising-falling and falling-rising formant patterns are observed. Finally, we control for whale movement and present several pieces of evidence suggesting that the observed patterns are not artifacts, but are actively controlled by sperm whales. We also show that the two coda vowels (the a-vowel and i-vowel) are actively exchanged by sperm whales in dialogues. These uncovered patterns suggest that spectral properties have the potential to add to the communicative complexity of codas independent of the traditionally analyzed properties.

February 2

Spectrogram club! Come keep your phonetics skills sharp by deciphering some spectrograms.

February 9

Andrew Garrett (UC Berkeley): Alfred Kroeber’s documentation of Inuktun (Polar Inuit): Philology, phonetics, phonology

Few episodes in American anthropology and linguistics are more disturbing than the 1897 removal of six Polar Inuit people from northern Greenland to New York by the polar explorer Robert Peary at the request of Franz Boas. Four died of tuberculosis in 1898; a notorious deceit was perpetrated on a child. Boas's 21-year-old student Alfred Kroeber was charged with documenting their language, Inuktun. In this paper, I draw attention to the contents and significance of Kroeber's field materials, consisting of five notebooks that are held at the Bancroft Library and remain unknown in Inuit studies. They include transcriptions of about 50 short texts (as well as elicited vocabulary and sentences), and notably feature 15 pages of transcriptions by Boas. Produced two generations before any other text or even substantial language documentation of Inuktun, these materials open a window onto an early stage of the language. For example, some of its characteristic phonological innovations had not yet occurred, affording greater precision in placing Inuktun within the Inuit dialect continuum.

February 16

Ana Lívia Agostinho (Federal University of Santa Catarina): Creole word-prosodic typology: the role of African-origin words

Even though the “consensus among creolists specialized in phonology is that there is nothing typologically special or distinctive about creole languages that set them apart from non-creoles” (Bakker and Daval-Markussen 2017, 79), it has been shown that creoles can provide unique insights into what is possible when different word-prosodic systems come into contact, thereby contributing to a broader understanding of word-prosodic typology (Agostinho and Hyman 2021; Good 2008, 2009). This presentation is concerned with the word-prosodic systems of Afro-European creole languages that show a correlation between lexical origin (African vs. European) and prosodic pattern (Agostinho 2023). The discussion is based on evidence from four languages: Saramaccan (Good 2004a, 2004b, 2009), Nigerian Pidgin English (Faraclas 1984, 2003), Pichi (Yakpo 2009, 2019), and Lung’Ie (Agostinho and Hyman 2021). I examine how the study of the word-prosodic systems of creoles can contribute to phonological typology and to the debate over whether creoles are “different” from non-creoles. I hypothesize that such systems can only be found in creole languages and that their existence further confirms that sociohistorical processes – such as historical contact – can shape phonological systems. Finally, I conclude that the analysis of African-origin words is crucial to our understanding of creole phonology.

February 23

Kai Schenck (UC Berkeley): A constraint-based analysis of Yurok [spread glottis]

In Yurok, the [spread glottis] feature straddles the line between predictable and contrastive. Although there is predictable epenthesis of h word-initially and word-finally, as well as predictable final aspiration and sonorant devoicing, there are also multiple instances of morpheme-specific irregularity, as well as a robust contrast involving final h following non-high vowels and preceding obstruents. First, I present a basic, constraint-based analysis of the distribution of [spread glottis] in Yurok. Then, I analyze a case of opacity involving an inalienably possessed root, drawing on previous work that investigates differences in the phonological behavior and syntactic structure of inalienable and alienable nouns (Dobler 2008; Newell et al. 2018; Popp 2022). Ultimately, I present a preliminary investigation of the applicability of two theories to this problem, Stratal OT (Bermúdez-Otero 1999, 2011; Kiparsky 2000, 2008) and Cophonologies by Phase (Sande & Jenks 2018; Sande et al. 2020), and assess whether they can lend further explanatory power to the behavior of this feature as a whole.

March 1 

Elizabeth Wood (UT Austin): Acoustic and (morpho)phonological evidence for word-initial glottal stops in K’iche’ (Mayan)

Since the beginning of the modern linguistic study of Mayan languages, it has commonly been stated that all words in these languages must begin with a consonant. Words that appear to begin with vowels are argued to have an initial glottal stop. However, the glottal stop is not contrastive in this position, and generalizations about its distribution are typically made on the basis of auditory perception rather than acoustics or phonological patterning. Using data from a documentary corpus of spontaneous narratives, I show both acoustic and (morpho)phonological evidence that there are in fact vowel-initial words in K’iche’. Word-initial glottal stops, whether realized as stops or as glottalization cues on the following vowel, are found only on words beginning with a stressed vowel and in certain phrasal contexts (following a pause or another vowel). Furthermore, when present, these segments must be epenthetic rather than phonemic. These results show a very different pattern from what is commonly reported in the literature, suggesting the need to reassess the typology of the glottal stop in Mayan languages and highlighting the pitfalls of impressionistic descriptions of non-contrastive sounds.

March 8

Nikolai Schwarz-Acosta (UC Berkeley): Investigating the Predictability of an Upcoming Code-switch in Cantonese-English Bilinguals

Systematic phonetic cues are observable in bilingual speech. For example, code-switched utterances exhibit a number of phonetic variations in the phones approaching a code-switch (Balukas & Koops, 2015; Fricke et al., 2016; Gutierrez Topete, 2023; Olson, 2013; Piccinini & Garellek, 2014; Torres Cacoullos, 2020), and bilingual listeners use these cues to aid the processing of an upcoming code-switch (Shen et al., 2020). Although prosodic and segmental features have been shown to vary when approaching a switch, the consistency and relative magnitude of effect for each of these features have not yet been compared. A major goal of this project is to evaluate the predictability of an upcoming code-switch based on the phonetic cues preceding the site of the code-switch. In this investigation, I empirically and computationally explore the phonetic variability in words immediately before a code-switch from English to Cantonese to assess which phonetic features contribute more to the acoustic predictability of the upcoming code-switch. The data are taken from the SpiCE corpus of sociolinguistic interviews with Cantonese-English bilinguals (n=34) (Johnson et al., 2020). I use a statistical model and neural networks to evaluate the predictability of phonetic features. Speech rate and vowel qualities before a code-switch in both English and Cantonese were extracted from the corpus. Preliminary results show that, for English-Cantonese code-switch data, vowel qualities do not change significantly, while speech rate does (p<0.05). Two separate PyTorch neural networks (Paszke et al., 2019) were trained (11 epochs) to assess the usefulness of vowel quality and speech rate in predicting an upcoming code-switch. The test accuracy for vowel quality was 42.31%, while speech rate yielded 60% accuracy.
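
A minimal sketch of the kind of classifier described above, assuming random placeholder features, an invented network shape, and the 11-epoch training run mentioned in the abstract; this is an illustration, not the talk's actual model or data pipeline:

```python
import torch
import torch.nn as nn

# Toy stand-ins for per-word acoustic features measured before a potential
# code-switch (e.g., speech rate and vowel formant values); real features would
# come from the SpiCE corpus, but here they are random placeholders.
torch.manual_seed(0)
X = torch.randn(500, 3)                     # 500 tokens, 3 features each
y = torch.randint(0, 2, (500, 1)).float()   # 1 = a code-switch follows this word

# A small feed-forward binary classifier (the architecture is illustrative).
model = nn.Sequential(
    nn.Linear(3, 16),
    nn.ReLU(),
    nn.Linear(16, 1),                       # outputs a logit for "switch follows"
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for epoch in range(11):                     # the abstract reports 11 training epochs
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0) == y.bool()).float().mean().item()
print(f"training accuracy: {accuracy:.2%}")
```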

March 15

William Clapp (Stanford): The Socially Guided Memory Encoding of Spoken Language

Listeners are more likely to recognize a word upon second presentation if it is repeated in the same voice rather than a different voice (e.g., Palmeri et al., 1993; Goldinger, 1996). These now-classic talker-specificity effects have served as the basis of exemplar theories of speech perception and have been replicated widely. However, nearly all talkers and listeners in these studies have been white, college-aged Midwesterners. As more recent work has highlighted the asymmetrical encoding of spoken words based on social information (Sumner et al., 2014), we might reconsider these classic studies through a new lens that centers demographic diversity. In this talk, I discuss findings from three recent experiments in which inter-talker variation proves central to the patterning of memory for spoken words. All three used the continuous recognition memory paradigm, following the methods of Palmeri et al. (1993). Exp. 1 used a demographically diverse talker set, Exp. 2 used a homogeneous but contextually non-standard talker set, and Exp. 3 compared responses to two demographically matched talkers directly. Considerable variation in memory patterns, measured through accuracy and RT, was found in all three experiments, both across and within social categories. These results indicate that listeners are finely attuned to talker and social information in the speech signal and allocate memory resources on an ad hoc basis. I will argue that this process is guided in part by social ideologies, experience, and cultural dynamics. The findings provide new ways forward in the study of episodic memory and illustrate both how cognitive processes are entangled with social realities and how we might reliably investigate them in our research.

March 22

Larry M. Hyman & Daniel Ibrahim Kamara (UC Berkeley): Latent High Tones in Limba (Tonko dialect, Sierra Leone)

Our goal in this talk is to present a progress report on our work on the almost totally undocumented Tonko [t̪ɔŋkɔ] dialect of Limba, a Niger-Congo language of Sierra Leone (and a bit of Guinée). Our focus is on the opaque tone system, in which most words are all low tone in their citation form but show a wide range of tonal contrasts, with different high tones popping up when words occur in context. We will illustrate, step by step, how the observed facts justify the proposed contrastive underlying forms and reveal tonal alternations which, to our knowledge, have not been described for any other tone system. While the same tonal issues pervade every aspect of the grammar, we focus on the interaction of nouns and their adnominal modifiers which, despite the opacity of the widespread tonal neutralization within the system, justify our analysis.

April 5

Adam Albright (MIT): Laryngeal markedness and laryngeal neutralization in Lakhota

Laryngeal contrasts in Lakhota (Siouan) exhibit several typologically remarkable properties. Stops and fricatives each show three-way contrasts, stops for aspiration and ejection (/k/, /kʰ/, /k'/), and fricatives for voicing and ejection (/x/, /ɣ/, /x'/). However, in many positions, these contrasts are neutralized, with just one value remaining. These neutralizations pose several analytical challenges. One set of challenges concerns the contexts of neutralization; in particular, neutralization affects some contexts that typically favor contrast cross-linguistically. For example, pre-vocalic obstruents, which normally license full laryngeal contrasts (Steriade 1997), are generally neutralized when they are the second member of a cluster: /ta/ vs. /tʰa/, but /pta/ vs. */ptʰa/. The second set of challenges concerns the outcome of neutralization. For stops, neutralization yields plain voiceless stops in some contexts and voiced stops in other contexts, including word-final position, in apparent violation of the usual preference for final devoicing (Blevins, Egurtzegi, and Ullrich 2020; Dąbkowski and Beguš 2024), and stop+sonorant clusters, even though sonorants usually do not trigger voicing assimilation. For fricatives, on the other hand, neutralization causes devoicing in exactly the same contexts that trigger voicing in stops. I argue that this complex pattern results from two properties of Lakhota laryngeal contrasts. First, I claim that the contrasts are defined not only by laryngeal properties of release (VOT, bursts), but also by consonant duration (plain stops and voiced fricatives are short, the others long). The contexts that demand neutralization are lenition contexts in which consonant duration is reduced. Second, I claim that voiceless stops in Lakhota require a certain minimum duration in order to implement voicelessness through glottal abduction; in other contexts, they become allophonically voiced. A consequence of this requirement is that stops are partially or fully voiced in contexts with short closure duration: in clusters and in final position. This analysis agrees with a basic claim by Dąbkowski and Beguš (2024) that voicing is a form of lenition in Lakhota, while incorporating the requirement into a synchronic analysis of laryngeal markedness and neutralization.

April 12

No meeting.

Please check out WCCFL 42, hosted by UC Berkeley's Linguistics department.

April 19

Allegra Robertson (UC Berkeley): Pre- and post-aspirated geminates: phonetic and phonological patterns across four Southern Italian varieties

Aspiration, a rarely reported trait across Italo-Romance languages, occurs in related yet opposing ways in varieties spoken in Bari, Apulia and Marano, Calabria. In Barese and Bari Italian, voiceless geminate stops are optionally but frequently pre-aspirated, with the frequency of pre-aspiration conditioned primarily by stress position and secondarily by stop place of articulation and preceding vowel height. By contrast, in Maranese and Marano Italian, voiceless geminate stops are consistently post-aspirated, and the duration of post-aspiration is conditioned by similar variables. Based on original careful-speech data, this paper provides a phonetic description of pre- and post-aspiration in the four varieties, comparing aspirated geminates with unaspirated singletons, and proposes phonological representations of aspirated geminates within a Q-theoretic framework. In the case of Barese and Bari Italian pre-aspiration, optionality and probability are accounted for with a set of weighted constraints in a Maximum Entropy framework. The surfacing patterns of aspiration shed light on length and weight requirements in these quantity-sensitive varieties, which only permit aspiration on segments that are long enough to handle a partial change in identity. The potential function and trajectory of aspiration in the Bari and Marano varieties are discussed in relation to previous work and broader trends in Romance languages.
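
For readers unfamiliar with the framework, a Maximum Entropy grammar converts weighted constraint violations into candidate probabilities via a softmax over harmony scores. The toy constraints, weights, and candidates below are invented for illustration and are not the paper's analysis of Barese pre-aspiration:

```python
import math

# Toy MaxEnt grammar: P(candidate) is proportional to exp(harmony), where
# harmony = -(weighted sum of violations). The constraint names and weights
# are placeholders chosen only to show how optional pre-aspiration can
# surface with a probability rather than categorically.
weights = {"*PreAspiration": 1.2, "*PlainGeminate": 2.0}
candidates = {
    "[VhC:V]": {"*PreAspiration": 1, "*PlainGeminate": 0},  # pre-aspirated geminate
    "[VC:V]":  {"*PreAspiration": 0, "*PlainGeminate": 1},  # unaspirated geminate
}

def harmony(violations):
    return -sum(weights[c] * n for c, n in violations.items())

exp_h = {cand: math.exp(harmony(v)) for cand, v in candidates.items()}
total = sum(exp_h.values())
for cand, e in exp_h.items():
    print(f"{cand}: {e / total:.2f}")  # ~0.69 pre-aspirated vs. ~0.31 plain
```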

April 26

Drew McLaughlin (Basque Center on Cognition, Brain and Language): Rapid accommodation of talker and accent variation

Spanish spoken in the Bay Area is not the same as Spanish spoken in Buenos Aires or Madrid, just as Spanish spoken by early Spanish learners is not the same as that spoken by late Spanish learners. Although accent variation of these types can prove challenging to listeners, it also reflects a rich cultural heritage that ought to be celebrated. In this talk, I will present research that examines how listeners accommodate talker and accent variation, classically referred to as the “lack of invariance problem.” I will begin by discussing a line of pupillometry research that has shown that switching between talkers can be cognitively costly – especially when talkers have different accents. Next, I will present evidence that (1) listeners recalibrate their phonemic category boundaries to adapt to second language (L2, or “foreign”) accent, and (2) multilingual listeners, in particular, leverage their pre-existing knowledge when processing L2 accent. Across these interdisciplinary lines of research, the common aim is to improve our theoretical understanding of the psycholinguistic processes that support L2 accent perception.

May 3

Lelia Glass (Georgia Tech): Exploring the vowel systems of 50 Asian American young adults in Georgia

This talk reports an ongoing analysis of the vowel systems of Asian Americans (N=50) in an audio corpus of college students at an elite public university in Atlanta, Georgia, with reference to Black (N=20) and White (N=128) speakers. I consider seven vowels (FACE, DRESS, TRAP/HAND, GOOSE, GOAT, and PRIZE) implicated in the Southern Vowel Shift (SVS), which is fading in the urban South, as well as the competing Low-Back Merger Shift (LBMS), which is gaining ground across regions and ethnicities.

Overall, Asian and White speakers pattern together, but distinctly from Black speakers, in their realization of DRESS (higher/fronter for Black speakers), GOOSE (backer for Black speakers), and PRIZE (more monophthongal for Black speakers). Asian speakers are distinct from both White and Black speakers in showing the lowest/backest TRAP and the highest/frontest FACE, thus leading the Low-Back Merger Shift for these vowels (echoing a pattern shown by women of all ethnicities). On the other hand, subverting the typical LBMS pattern, Asian speakers pattern with Black speakers (distinct from White speakers) in showing a backer GOAT and a lower/backer pre-nasal HAND. No statistically significant differences are found between speakers who report heritage from South Asia (N=33) versus East Asia (N=11).

Earlier work on specific national backgrounds of Asian Americans has found evidence of the LBMS (Hall-Lew 2009 for Chinese Americans; Kim and Wong 2020 and Cheng et al. 2023 for Korean Americans), as well as a backer GOAT (Jeon 2017) and a lower/backer HAND (Kim and Wong 2020). My work generalizes those findings to a broader group of Asian Americans. As for the social motivation of these patterns, echoing Hall-Lew (2009) and King (2021), metalinguistic commentary suggests that these upwardly mobile speakers may orient away from their geographic region and towards the national economy.

Fall 2024

September 6

Introductions & round robin!

We'll share about our summers, then share any interesting puzzles or pieces of data we've been working with. You are welcome to attend without presenting in the round robin.

September 12 (irregular time)

Martin Krämer (UiT, The Arctic University of Norway): Sonority, markedness and the OCP

In this talk, data from a wide range of languages, as well as from language acquisition, are presented that cast serious doubt on the role of sonority and sonority sequencing in syllable phonotactics. These data show that apparent cross-linguistic sonority effects must be coincidental. The theoretical challenge is thus not how to incorporate a universal multi-level scale into a theory of phonology with otherwise binary categorical distinctions (features are generally assumed to be binary or privative, not scalar), but to explain the fuller empirical picture, including alleged sonority effects, without any phonetically motivated hierarchy. I argue here that some of the apparent sonority effects emerge from a more abstract principle of syntagmatic contrast maximization, which is at least a close relative of the Obligatory Contour Principle.

September 13

No regular meeting -- please check out the Phonological Domains workshop, hosted by the Linguistics department in Dwinelle 370.

September 20

Niko Schwarz-Acosta (UC Berkeley): “Al Cʉɐntu da Penuchu”: Perceptual Learning of a Vowel Shift in Mexican Spanish

In the perceptual adaptation literature, vowel chain shifts are employed to test perceptual learning (e.g., Maye et al., 2008; Weatherholtz, 2015). These experiments found that listeners shift their perceptual boundaries when exposed to a novel accent for a sufficient amount of time. Both experiments had an exposure phase in which participants heard a story containing the phonetic shift under investigation, followed by a lexical decision task assessing lexical adaptation to the new speech patterns. After exposure, listeners endorsed shifted items as words at a rate of around 60%, compared to endorsement rates of 90-100% for nonshifted items. Importantly, these studies were conducted in English. Recent research has highlighted potential language-specific processes in psycholinguistics (Clapp & Sumner, 2024). The question remains whether a listener's first language affects their perceptual adaptation to novel speech. This study investigates the perceptual learning of Mexican listeners in a familiarization task. To my knowledge, Spanish has no attested vowel chain shift in any dialect. Moreover, Spanish has a 5-vowel system, whereas English can have around a 10-vowel system (varying by dialect).

A familiarization task was employed to assess the perceptual adaptation of Mexican Spanish listeners to novel speech patterns. Listeners heard a story containing a vowel chain shift and then completed a lexical decision task containing both shifted and nonshifted words. Vowels were shifted in Praat by separating the source and filter of the speaker and then manipulating the filter properties. Listeners were assigned to one of six conditions: 2 vowel conditions (unshifted vs. shifted) × 3 exposure times (10, 5, and 2 minutes). A total of 120 participants were recruited through Prolific; this pool was filtered down to 109 based on performance on the control words and nonwords.

Results suggest that familiarization may not play a large role in the endorsement rates of critical items. Additionally, I analyze vowel-specific endorsement rates, which reveal that certain vowel shifts are significantly less likely to be endorsed than others. These results point to language-specific processes, as well as to differences in how strictly individual vowels are endorsed. A Bayesian model was then fit to the data to evaluate these findings.
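
The abstract does not specify the model, but one plausible way to set up such a Bayesian analysis is a logistic regression of endorsement on condition, e.g. in PyMC; the variables, priors, and simulated data below are assumptions for illustration only:

```python
import arviz as az
import numpy as np
import pymc as pm

# Simulated stand-ins for the experimental data: each row is one lexical
# decision trial (placeholders, not the study's actual results).
rng = np.random.default_rng(0)
n = 400
shifted = rng.integers(0, 2, n)              # 0 = unshifted condition, 1 = shifted
exposure = rng.choice([2.0, 5.0, 10.0], n)   # minutes of exposure to the story
endorsed = rng.integers(0, 2, n)             # 1 = item endorsed as a word

with pm.Model() as endorsement_model:
    intercept = pm.Normal("intercept", 0.0, 1.5)
    b_shift = pm.Normal("b_shift", 0.0, 1.0)
    b_expo = pm.Normal("b_expo", 0.0, 1.0)
    logit_p = intercept + b_shift * shifted + b_expo * (exposure - exposure.mean())
    pm.Bernoulli("endorsed", logit_p=logit_p, observed=endorsed)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=0)

print(az.summary(idata, var_names=["intercept", "b_shift", "b_expo"]))
```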

September 27

Alexia Hernandez (Stanford PhD graduate): The role of experience on the cognitive underpinnings of linguistic bias: An interdisciplinary investigation of Miami-based Cuban American speech [Virtual talk]

In this talk, I’ll investigate the cognitive processes and architectures that underlie speech-based linguistic bias. Ultimately, I argue that linguistic and social experiences mediate category structure, and that differently structured categories modulate speech production, perception, and bias patterns.

Speech-based bias is associated with linguistic variation in production. Thus, I first inquire about the cognitive systems behind speech variation by analyzing the acoustic patterns of TRAM, TRAP, /l/, (DH), and rhythm realizations within the Cuban American community in Miami, FL. I show that social factors can reflect differences in experience, which shape individual speakers’ cognitive representations and make speech variation in production possible.

Building on these production patterns, I study how listeners use variation in Miami-based Cuban American speech for person construal. I find that listeners’ social and linguistic experiences structure their racial/ethnic perception of speakers. Both Miami-based Cuban American and General American listeners display a range of ethnic/racial perception, though they attend to different social and linguistic cues. Moreover, listeners’ perceptions were tied to linguistic patterns, not individual speakers, such that the same speakers were perceived variably across phrases.

Finally, I ask how two listener groups make stereotyped associations based on perceived speaker identity in a speeded association task. While both Miami-based Cuban Americans and Midwestern listeners exhibited a whiteness bias, quickly associating perceived non-Hispanic white speech with white stereotypes, Midwestern listeners exhibited more biased responses. This study again underscores that experience impacts the implicit biases listeners hold about speakers. 

Across all three studies, the role of experience emerges as an important force in shaping language production, perception, and bias. The results support a cognitive architecture that integrates social information pre-comprehension via a socioacoustic memory. This architecture suggests that experience with diverse populations and their speech has the potential to decrease linguistic bias and discrimination. 

October 4

Marko Drobnjak (Arnes, University of Ljubljana): VOICE: Verifying How Speech Perception Shapes Credibility in Legal Contexts – A Statistical and Experimental Approach with Future Machine Learning Potential

Witness testimony often serves as critical evidence in legal cases, making credibility a key concern. Various factors, including speech patterns, influence how trustworthy a witness is perceived to be. Previous studies have shown that rhetorical skill enhances credibility, while the use of dialect or vernacular speech can lead to perceptions of unreliability. These judgments are shaped by social hierarchies, where speech serves as a marker of status.
 
My research, based on an experimental study conducted during my Fulbright at UC Berkeley, examines how dialect and gender influence the perceived credibility of witnesses. Slovenia, with its rich dialectal diversity and low income inequality, provides an ideal context in which to isolate the effects of speech on credibility without the strong socioeconomic associations seen in other regions. The findings indicate that participants found dialect speakers more trustworthy than those using standard speech, likely because they associate the latter with institutional authority, which tends to have low public trust. Furthermore, male speakers were consistently rated as more credible.
 
These findings suggest that linguistic biases may contribute to disparities in legal outcomes, highlighting the need for greater awareness of speech perception in legal proceedings.
 
Future research could explore the use of Generative Adversarial Networks (GANs) to analyze how vocal characteristics like tone and timbre further shape perceptions of credibility, opening new avenues for understanding bias in legal contexts.

October 11

Kai Schenck (UC Berkeley): Modeling stochasticity, gradience, and domain effects in Yurok rhotic vowel harmony with Gestural OT

Yurok (Algic, Northern California) is a language that not only distinguishes multiple phonemic rhotic vowels (Garrett 2014, 2018), but also has a relatively rare system of rhotic harmony (Robins 1958, Smith 2024). Modeling the Yurok rhotic harmony system requires incorporating multiple intersecting factors, including morpho-phonological domain, vowel quality, and prosodic structure. In addition to the factors determining whether harmony occurs, there is also gradience in the degree to which rhoticity affects /u/ (as measured by F3-lowering), which distinguishes underlying rhotic vowels, rhotic vowels that result from harmony, and opaque vowels that exhibit little to no F3-lowering and do not propagate harmony.

I argue that the framework of Gestural OT (Smith 2018), an OT implementation of Articulatory Phonology’s speech production model (Browman & Goldstein 1986, 1989), is able to account for the phonetic and phonological behavior of Yurok harmony, if the *Inhibit constraints that penalize the presence of an inhibition gesture are indexed to prominent morphological domains.

While the majority of the harmony system can be accounted for, the fact that this analysis was built primarily from a relatively small archival data set without speaker judgements means it cannot be discerned whether variation between speakers represents one phonological system or multiple. Even so, it seems that modeling the Yurok rhotic harmony system requires both a mechanism that regulates when harmony applies based on morphological domain, mediating only categorical stochasticity, and a gradient mechanism that regulates the degree to which a gesture is expressed in the phonetic output, modeled as gestural strength in Gestural OT (Smith 2018).

October 18

Richard Wang (UC Santa Cruz): Morphosyntax-Prosody Mismatch in Beijing Mandarin: Evidence from Retroflex Lenition

Stress in (Beijing) Mandarin, or the lack thereof, is a topic under much debate in the literature. Retroflex lenition, an optional phenomenon occurring in fast speech, is sensitive to segment duration, which provides insight into prosodically weak positions in the language. In capturing the lenition sites, I propose that prosodic structures in Beijing Mandarin are conditioned by, but not perfectly mapped to, morphosyntactic boundaries, resulting in a (morpho)syntax-prosody mismatch. Retroflex lenition can only occur on the weak syllable of a trochee, and foot recursion is necessary to derive the prosodic structures. Additionally, the distribution of (neutral) tones can affect prosodic parsing. I provide an analysis in Harmonic Grammar to account for a gang effect that is problematic for parallel Optimality Theory. The lenition domain is also compared with another phrasal phonological process, Tone 3 sandhi. Through this comparison, I demonstrate that Tone 3 sandhi does not operate on structures belonging to the Prosodic Hierarchy, thus resulting in a domain mismatch with retroflex lenition. In terms of theoretical implications, lenition sits at the phonetics-phonology interface and the (morpho)syntax-prosody interface, and demonstrates tone-prosody interactions.
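
As a schematic illustration of the kind of gang effect that Harmonic Grammar can capture but strict ranking in parallel OT cannot, the toy constraints, weights, and candidates below (not those of the talk's analysis) show two lower-weighted constraints jointly outweighing a higher-weighted one:

```python
# Schematic Harmonic Grammar evaluation. Harmony = -(weighted sum of violations);
# the candidate with the highest harmony wins. Constraint names, weights, and
# violation profiles are invented purely to illustrate cumulative constraint
# interaction ("ganging").
weights = {"A": 3.0, "B": 2.0, "C": 2.0}

def harmony(violations):
    return -sum(weights[c] * n for c, n in violations.items())

def winner(tableau):
    return max(tableau, key=lambda cand: harmony(tableau[cand]))

# Against either lower-weighted constraint alone, the A-violator loses ...
print(winner({"cand1": {"A": 1}, "cand2": {"B": 1}}))          # cand2 (-2 beats -3)
print(winner({"cand1": {"A": 1}, "cand2": {"C": 1}}))          # cand2 (-2 beats -3)
# ... but B and C gang up: their combined cost (4) exceeds A's weight (3), so cand1
# now wins, a pattern no single strict ranking of A, B, C derives across all three.
print(winner({"cand1": {"A": 1}, "cand2": {"B": 1, "C": 1}}))  # cand1 (-3 beats -4)
```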

October 25

Yin Lin Tan (Stanford): Towards an indexical account of English in Singapore: Sociophonetic variation and Singlish

Multiple models have been proposed to account for variation in how English is used in Singapore. Some posit a binary distinction between Singlish, a colloquial variety of English used in Singapore, and standard Singapore English (e.g., Gupta 1994), while others locate these varieties along a continuum (e.g., Alsagoff 2007). However, there is no clear consensus on what counts as ‘Singlish’ to the average Singaporean, and there remain blurry distinctions between Singlish and standard Singapore English. Calls have also been made for an indexical account (Leimgruber 2012), which focuses on linguistic features and the social meanings they index. In this talk, I contribute towards an indexical account of English in Singapore by using a perceptually-guided approach for prosodic features that centers the ideological construct of Singlish and focuses on social meanings associated with Singlish.

Through two studies (a perception experiment and an attribute rating task), I show that Singlish is categorized in a gradient manner and is associated with prosodic patterns of global pitch variability, local pitch variability, and articulation rate, as well as with locally grounded social meanings. In the first study, we utilized a speeded forced-choice task to probe listeners’ percepts of Singlish and to identify the prosodic features used to categorize variable speech as Singlish. In the second study, we used an attribute rating task to examine the social meanings associated with Singlish. Taken together, these studies provide support for analyses of pitch variability and articulation rate in future variationist work on English in Singapore and underscore the importance of centering listeners’ language ideologies when investigating variation.

November 1

Santiago Barreda (UC Davis): Re-Introducing the Probabilistic Sliding Template Model of Vowel Perception

Traditional models of vowel normalization typically require multiple tokens of a speaker's speech (and usually a balanced vowel system) in order to normalize speech and simulate perception. This makes them inadequate for understanding perception in low-information conditions, such as when speakers are presented at random. It also raises the question: how do listeners 'guess' the speaker parameters necessary for normalization before having information about the speaker's vowel system? The Probabilistic Sliding Template Model (PSTM) is a Bayesian model of vowel perception that 'guesses' vowel category and apparent speaker characteristics from individual vowel tokens with a high degree of accuracy. The PSTM is an intrinsic-extrinsic normalization model: it uses intrinsic speech information to guess the extrinsic speaker reference frame required for perception. The PSTM works by integrating linguistic and social knowledge with speech information in order to arrive at the most probable combination of linguistic and social information contained in the utterance. In this talk, I will outline the properties of the model and compare different implementations of the framework, in addition to presenting an R package (STM) that can be used to simulate vowel/syllable perception data. Applications of this model for understanding vowel perception and the perception of speaker indexical characteristics (e.g., size, age, gender) will also be discussed.
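
To give a rough sense of the sliding-template idea (a toy sketch, not the STM R package's interface or the PSTM's actual implementation), a single token's log formants can be modeled as a shared vowel template plus one speaker-specific shift, and the most probable category and shift can be found jointly:

```python
import numpy as np

# Toy sliding-template classifier. Log formant frequencies of one token are
# modeled as a shared vowel template plus a single speaker shift (psi). The
# template values, prior, and noise level are invented placeholders.
templates = {                                # mean log(F1), log(F2) per category
    "i": np.log([300.0, 2300.0]),
    "a": np.log([750.0, 1300.0]),
    "u": np.log([320.0, 800.0]),
}
psi_grid = np.linspace(-0.3, 0.3, 61)        # candidate speaker shifts (log-Hz)
psi_prior = np.exp(-0.5 * (psi_grid / 0.1) ** 2)
psi_prior /= psi_prior.sum()                 # prefer modest apparent-speaker shifts
noise_sd = 0.08                              # assumed token-level variability

def classify(f1_hz, f2_hz):
    """Return the (vowel, psi) pair with the highest posterior for one token."""
    token = np.log([f1_hz, f2_hz])
    best = None
    for vowel, template in templates.items():          # flat prior over vowels
        for psi, prior in zip(psi_grid, psi_prior):
            resid = token - (template + psi)
            log_post = -0.5 * np.sum((resid / noise_sd) ** 2) + np.log(prior)
            if best is None or log_post > best[0]:
                best = (log_post, vowel, psi)
    return best[1], best[2]

vowel, psi = classify(280.0, 2100.0)
print(vowel, round(float(psi), 2))   # e.g. "i -0.06": an [i]-like token from a
                                     # speaker whose formants sit below the template
```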

November 8

No meeting due to speaker cancellation.

November 15

Maya Wax Cavallaro (UCSC): Domain final sonorant consonant devoicing: Phonetics interacting with phonology

The devoicing of sonorant consonants at the right edge of phonological domains (utterance, word, syllable, etc.) is a phenomenon that, while relatively typologically rare, is also under-described and not well-understood. It is often dismissed as a surface-level phonetic tendency without phonological consequences. My work pushes back on this assumption, investigating how final sonorant devoicing can help us better understand the relationship between phonetics and phonology. In this talk, I will discuss:

  • Phonetic factors that may lead to final sonorant devoicing
  • How final sonorant devoicing becomes phonologized, and
  • Whether/how one might tell the difference between phonetic and phonological sonorant devoicing

I propose that word- and syllable-final sonorant devoicing result from the phonologization of utterance-final phonetic tendencies and the generalization of the pattern from the utterance level to smaller prosodic domains. While past work has shown that generalization from the utterance domain to the word level is possible, I present evidence from an artificial language learning experiment showing that learners can also generalize a phonological pattern to the syllable level.

I provide examples of two different sonorant devoicing patterns with data from recent fieldwork on Tz’utujil (Mayan) and Santiago Laxopa Zapotec (Otomanguean), and present preliminary results from an ongoing phonological and acoustic study of the patterns in these languages.

November 22

Suyuan Liu (University of British Columbia): In Search of the Standard Mandarin Speaker

Despite growing recognition of the need for precise language when describing speaker populations (e.g., Clopper et al., 2005; Cheng et al., 2021), Mandarin is still often treated as monolithic. Even studies on dialectal variation frequently categorize comparison groups as "Speakers of Standard Mandarin". Standard Mandarin, or Putonghua (普通话), is the national standard of China, officially defined as a language based on pronunciation from Beijing and Northern Chinese dialects, with grammar rooted in vernacular literature (Weng, 2018). But do speakers share this definition? How is "standardness" actually perceived? And, how does perceived standardness affect speech processing?

To explore these questions, I present the Mandarin-English Bilingual Interview Corpus, containing high-quality interviews with 51 bilinguals conducted by a single interviewer in both languages. These interviews provide spontaneous speech samples and rich insights into language backgrounds, attitudes toward variation, and definitions of standardness. Designed for qualitative and quantitative analysis, this corpus is a versatile resource for investigating how perceptions of standardness shape our understanding of Mandarin and its speakers.

November 29

No meeting -- Academic and Administrative Holiday

December 6

No meeting due to speaker cancellation.

December 13

Meg Cychosz (UCLA): Harnessing children’s messy, naturalistic environments to understand speech and language development

Children learn the patterns of their native language(s) from years spent interacting and observing in their everyday environments. How can we model these daily experiences at a large scale? It is no longer a question of whether sufficiently comprehensive datasets can be constructed, but rather of how to harness these messy, naturalistic observations of how children and their caregivers communicate.

In this talk, I will present recent work that used child-centered audio recorders to illustrate how children’s everyday language learning environments shape their speech and language development. I will present work conducted with colleagues on the language learning environments of bilingual Quechua-Spanish children in Bolivia, of infants and toddlers who are d/Deaf and have received cochlear implants, and on a new, massive dataset consisting of infant speech samples from a large number of linguistically diverse settings. Although these populations seem disparate, I will show how studying the everyday language environments of infants and children from a variety of backgrounds (large cross-linguistic samples, children with hearing loss) helps us better understand how all children develop speech and language.