Spring 2023
January 20
Sarang Jeong (Stanford): The relation between perception and production in an ongoing sound change: A pilot experiment on the younger group's perception of Korean three-way stop contrast
The purpose of this study is to investigate whether individual language users’ perception is parallel to their production when their language community is going through a sound change. When a sound change is in progress, do speakers with the new and old variants have perceptual maps that reflect their own respective variants? Or do they share similar perceptual maps despite having different variants in production? I will address this question by examining the Korean stop contrast, whose primary cue has been shifting from VOT to F0. I hypothesize that there will be a discrepancy between listeners’ perceptual maps and their production patterns. Through an auditory identification experiment, this preliminary study explores one of the predictions made by the hypothesis, which relates to the perception of the younger age group.
January 27
Mykel Brinkerhoff (UC Santa Cruz): Testing the Laryngeal Complexity Hypothesis: Evidence from Santiago Laxopa Zapotec
Most descriptions of the interaction between tone and phonation are based on Southeast and East Asian languages (e.g., Masica 1976, Thurgood 2002, Michaud 2012, Brunelle & Kirby 2016, Kuang 2017). These descriptions have led to widespread claims about what is possible in languages with tone and phonation, specifically that tone and phonation are co-dependent. This means that certain tones will bear specific phonations or that certain phonation types only appear with certain tones. For example, Mandarin’s tone 3 is associated with creaky voice (Hockett 1947) and Vietnamese’s low falling tone is associated with breathy voice (Thurgood 2002). This, however, is not true for Oto-Manguean languages, where tone and phonation are independent from one another (Silverman 1997).
Despite being independent from one another, tone and phonation need to interact in a way that is perceptually salient. This need for perceptual saliency led to the Laryngeal Complexity Hypothesis (LCH; Silverman 1997, Blankenship 1997, 2002). The LCH’s basic premise is that for tone and phonation to be best perceived, there needs to be an ordering or phasing between the gestures for tone and phonation. If there is no strict ordering of these gestures, the cues for tone and/or phonation will be interrupted. DiCanio’s (2012) study exploring the LCH in Itunyoso Trique found exactly this interference: when there is a large overlap between the gestures for tone and phonation, the f0 signal is perturbed by the glottal gestures for phonation. This paper investigates the LCH’s role in Santiago Laxopa Zapotec (SLZ), an understudied Oto-Manguean language spoken by approximately 1000 people in the municipality of Santiago Laxopa in the Sierra Norte of Oaxaca, Mexico.
Preliminary results of a generalized additive mixed model show that the LCH is correct in arguing for the phasing between tone and phonation cues.
February 10
Reiko Kataoka (UC Davis): Effects of Lexical Frequency on Phonetic Recalibration
Lexically guided perceptual learning (LGPL) is a special case of perceptual recalibration for the categorization of speech sounds, whereby, upon repeated exposure to an ambiguous speech sound in a disambiguating lexical context, listeners come to categorize that ambiguous sound in a way that is consistent with the lexical information (Norris, McQueen, & Cutler, 2003). Various factors that may affect this recalibration have been examined in the literature (see Samuel & Kraljic, 2009). Building on the current body of evidence, this talk will present a study that examines the effect of lexical frequency on the extent of recalibration. Listeners were exposed, in the form of a lexical decision task, to an ambiguous sound [?], midway between [f] and [s], which was embedded in [s]-biasing high-frequency word frames (e.g., tenni[?]) for a high-frequency (HF) group or low-frequency word frames (e.g., hubri[?]) for a low-frequency (LF) group. After that exposure, all listeners engaged in a phonetic categorization task, classifying tokens from an [ɛf]-[ɛs] continuum either as [ɛf] (f-response) or [ɛs] (s-response). The HF group recognized the exposure words faster and more often than the LF group during the lexical decision task, and both groups exhibited evidence of LGPL, but there was no significant group difference in the extent of recalibration. The results will be discussed as they relate to the effect of lexical frequency on perceptual learning in general and on LGPL more specifically.
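As an aside, categorization data from an [ɛf]-[ɛs] continuum of this kind are often summarized by locating the 50% crossover point of the response curve; recalibration then shows up as a shift of that boundary, so that more of the continuum is categorized as the trained sound. The sketch below (invented response proportions, not the study's data) illustrates one simple way to estimate that boundary by linear interpolation:

```python
import numpy as np

def category_boundary(steps, p_s_response):
    """Estimate the 50% crossover point of a psychometric curve by
    linear interpolation between the two steps that straddle p = 0.5."""
    steps = np.asarray(steps, dtype=float)
    p = np.asarray(p_s_response, dtype=float)
    i = int(np.argmax(p >= 0.5))  # first step at or above 0.5
    if i == 0:
        return float(steps[0])
    frac = (0.5 - p[i - 1]) / (p[i] - p[i - 1])
    return float(steps[i - 1] + frac * (steps[i] - steps[i - 1]))

# Hypothetical proportions of s-responses along a 7-step [ef]-[es]
# continuum, before and after exposure to [?] in s-biasing words.
steps = [1, 2, 3, 4, 5, 6, 7]
pre  = [0.02, 0.05, 0.20, 0.50, 0.80, 0.95, 0.99]
post = [0.05, 0.15, 0.45, 0.70, 0.90, 0.97, 0.99]

shift = category_boundary(steps, pre) - category_boundary(steps, post)
# A positive shift means the boundary moved toward the [f] end, i.e.
# more of the continuum is now categorized as [es].
```

With these made-up numbers, the pre-exposure boundary sits at step 4.0 and the post-exposure boundary at 3.2, a recalibration of 0.8 continuum steps toward [s].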
February 24
Nay San (Stanford): Improving access to language documentation corpora using self-supervised models for speech
Many hours of speech from language documentation projects often remain inaccessible to a variety of interested parties — from speakers within the language communities, to language teachers, to linguists. This inaccessibility largely stems from the fact that recording speech is far simpler and considerably less time consuming than annotating the speech with relevant information. While machine-assisted workflows have the potential to help speed up this annotation process, conventional speech processing systems trained in a supervised setting require many hours of annotated speech to achieve sufficient performance to yield any time savings over manual annotation — leaving many projects in a Catch-22. In this talk, we examine how recently developed self-supervised models for speech such as wav2vec 2.0 have dramatically decreased the amount of annotated data required to implement machine-assisted workflows in language documentation projects and report on the benefits and challenges of using them based on several case studies.
March 3
Julia Swan (San José State): Monophthongal /ow/ among Nordic Americans in Puget Sound: A Case of Functional Reallocation
When describing regional features of English and ethnolinguistic repertoires of English, sociolinguists often posit substrate influence from non-English languages. A feature may be introduced by speakers of a non-English community language and subsequently used by English dominant speakers with identity or stance motivations (Fought 2010). How exactly this happens has been difficult to trace and is rarely as simple as direct language transfer. Ethnolinguistic features may derive from the heritage language of the community but be reallocated to other sociolinguistic meanings by the second generation (Gnevsheva 2020). Older and younger individuals with the same generational status deploy ethnolinguistic variation differently depending on the timing of their birth relative to salient cultural events in the community (Sharma & Sankaran 2011). Features associated with ethnolinguistic repertoires may evolve to become markers of regional identity or local stances (Labov 1963). An analogous situation is observed in dialect contact where diverse varieties of the “same” language come into contact and the distinct features resulting from this contact are functionally reallocated to different linguistic environments over subsequent generations (Britain 1997, Trudgill 1985).
Monophthongal /ow/ is a notable feature of English in Washington State, as well as in parts of the Upper Midwest. The current acoustic analysis explores this feature in the interview speech of 30 first-, second-, and third-generation Swedish and Norwegian immigrants to the Puget Sound area born in the 1920s to 1940s. The interviews were collected as part of the Nordic American Voices Oral History Project (NAV) at the Nordic Heritage Museum in Seattle, WA. Analyzing 4,623 /ow/ tokens using a trajectory length measure and visualizations of formant trajectories, this article describes the focusing and reallocation of monophthongal /ow/ among children and grandchildren of Nordic American immigrants. For diachronic perspective, these data are also compared to conversational data from Seattle young adults (not necessarily of Nordic heritage) collected in 2014-2015.
First generation immigrants who are native speakers of Norwegian and Swedish exhibit more monophthongal /ow/ as indicated by significantly shorter trajectory length measures and trajectory shape differences. Subsequent generations of Norwegian and Swedish Americans display allophonic re-functionalization determined by following voicing environment: monophthongal /ow/ is relegated to the pre-voiceless environment. A similar process of focusing and reallocation of dialect variants in contact led to a phonologized raising pattern in Fenland English (Britain and Trudgill 2003: 251). The pattern of monophthongal /ow/ has become a marker of English in Washington State (among other places). Comparison data from contemporary young adults suggests that this pattern remains robust in Seattle-area young adults born before about 1990, but may be declining among speakers born in the 1990s and later. This project documents the complexity of contact-induced language change and offers insight into the influence of non-English languages on the development of ethnolinguistic and regional varieties of English. The study provides a reminder that despite large-scale language shift, the children and grandchildren of immigrants may impact the salient linguistic features of English in the local environment and likewise deploy such features as markers of their local orientation and identities.
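As an illustrative aside (a sketch with invented formant values, not the study's actual analysis), a trajectory length measure of the kind used here is typically computed by summing the Euclidean distances between successive (F1, F2) measurement points across the vowel, so that a monophthong yields a small value and a diphthongal token a large one:

```python
import numpy as np

def trajectory_length(f1, f2):
    """Vowel-inherent spectral change summarized as the summed Euclidean
    distance (in Hz) between successive (F1, F2) measurement points."""
    f1 = np.asarray(f1, dtype=float)
    f2 = np.asarray(f2, dtype=float)
    return float(np.sum(np.hypot(np.diff(f1), np.diff(f2))))

# Hypothetical five-point formant tracks for two /ow/ tokens:
diphthongal_ow = trajectory_length([560, 540, 510, 480, 450],
                                   [1050, 1000, 950, 900, 850])
monophthongal_ow = trajectory_length([520, 518, 516, 515, 514],
                                     [950, 945, 942, 940, 939])
# The monophthongal token yields a much shorter trajectory length,
# which is the basis for comparing speakers and generations.
```

A significantly shorter mean trajectory length for a speaker group, as reported above for the first-generation immigrants, is thus a direct quantitative index of more monophthongal productions.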
March 10
Amber Galvano (UC Berkeley): A Q-theoretic approach to cross-dialectal Spanish <st> production
The variably pre- and post-aspirated realization of Spanish <s> + voiceless stop (e.g. ‘pasta’ → [pa(h)t(h)a]) has become synonymous with Andalusian Spanish, spoken in southern Spain (e.g. Ruch & Peters, 2014; Torreira, 2006). Previous accounts (e.g. Parrell, 2012) within Articulatory Phonology (AP) (Browman & Goldstein, 1986) explain the Andalusian production patterns via gestural phasing and extensive coarticulatory overlap. In this talk, I will present spontaneous speech data from speakers of Western Andalusian, Buenos Aires, and North-Central Peninsular Spanish, each of which demonstrates a unique envelope of fine-grained phonetic variation in the intervocalic <st> context. Then, bearing this data in mind, I will introduce a novel representational approach based in Q-theory (Inkelas & Shih, 2013) which explains the cross-dialectal variation in production as variation in the linking of segments, subsegments, and marginal subsegments, in accordance with phonetic, social, and other potential constraints. I propose that this type of account offers a more fruitful synchronic model than AP alone, especially for prevalent “transitional” productions with both pre- and post-aspiration. This approach also makes useful diachronic predictions about how these sequences have evolved and may continue to do so.
March 17
Simon Todd (UC Santa Barbara): Building implicit linguistic knowledge through passive exposure
In this talk, I outline recent work that investigates how humans can build implicit lexical and phonotactic knowledge of a language they don't speak simply through being exposed to it regularly, and how this process may be affected by structural and social aspects. I look first at the implicit learning of Māori, the Indigenous language of Aotearoa New Zealand, by New Zealanders who are surrounded by it in everyday life but can't speak it. By applying computational modeling to experimental results, I show that non-Māori-speaking New Zealanders have a surprising amount of lexical and phonotactic knowledge of Māori, which is best explained by the assumption that they have an implicit memory store of approximately 1,500 distinct morphemes. Through consideration of machine learning of morphological segmentation, I describe how this learning is facilitated by the heavy use of compounding in Māori. I then turn to examine the implicit learning of Spanish, which has a much lower relative degree of compounding, in California and Texas. I show that Californians and Texans who don't speak Spanish have implicit lexical and phonotactic knowledge of it, much like non-Māori-speaking New Zealanders do of Māori, but that the form of this knowledge appears to be affected by the morphological differences between the languages. Furthermore, I show that the strength of implicit knowledge evidenced by non-Spanish-speaking Californians and Texans is affected by the attitudes they hold toward Spanish and its speakers, supporting previous work that demonstrates how listeners' attitudes affect not only the conscious actions they take in response to a speaker, but also the way they unconsciously process and represent speech.
April 7
Marie Tano (Stanford): Stancetaking and the construction of Black American identities in the United States
This study accounts for variation found in the use of an ethnically marked language variety spoken by a Nigerian-born Black American. Methodologically, it undertakes a meticulous stance-based analysis of the use of specific phonetic features. Drawing on previous work that uses the intra-speaker paradigm to investigate the construction of identity in interaction (Podesva, 2007; Sharma, 2011), this paper examines the ways in which a US-based immigrant indexes the various identities with which they align. A qualitative analysis of self-recorded data revealed behavior distinct from previous literature on addressee-influenced style shifting, but consistent with literature on diasporic linguistic repertoires that include a variety of standardized and ethnic linguistic varieties. Overall, the analysis demonstrates how a participant's social stances drive their linguistic behavior and allow them to build and negotiate their identity categories (Rickford & McNair-Knox, 1994; Bell, 2006).
April 14
Laura Kalin (Princeton): On the (non-)transparency of infixes that surface at a morpheme juncture
Infixation is characterized by the intrusion of one morphological element inside of another. Canonical examples of infixation involve an intramorphemic position for the infix, in particular, with the infix appearing inside of a root, e.g., k<ni>akri ‘act of crying’, from root kakri ‘cry’ + nominalizing infix -ni- (Leti; Blevins 1999). But, since infixes are generally positioned relative to a phonological “pivot” (see, e.g., Yu 2007), and since infixes should in principle be able to combine with complex (multimorphemic) stems, it stands to reason that an infix could sometimes, incidentally, appear inside of an affix or even at a morpheme juncture, intermorphemically; and indeed, both possibilities are attested.
In this talk, I ask: When an infix (incidentally) appears between two morphemes in its stem, does the infix disrupt relations at/across that morpheme juncture that we otherwise would expect to be strictly local? I investigate the (non-)transparency of 7 infixes (from 6 languages; 5 language families) that can appear at a morpheme juncture. I find that these infixes never interrupt semantic, syntactic, or morphological interactions/relationships. With respect to phonological interactions/relationships, the findings support a division between "early" and "late" phonology, with the "early" phonology surviving (being counterbled by) the intrusion of an infix, and "late" phonology being fed/bled by the intrusion of an infix. In concert with other recent typological findings about infixes (Kalin 2022), the behavior of infixes at morpheme junctures provides strong novel support for a model of the phonology-morphosyntax interface where realization (including exponence, infixation, and "early" phonology) proceeds from the bottom up, terminal by terminal.
April 21
Georgia Zellou (UC Davis) (project in collaboration with Mohamed Lahrouchi (CNRS & Université Paris 8) and Karim Bensoukas (Mohammed V University))
I will present work exploring the perceptual mechanisms involved in the processing of words without vowels, a common lexical form in Tashlhiyt (an Amazigh language of Southern Morocco) but highly dispreferred cross-linguistically. In Experiment 1, native and naive (English-speaking) listeners completed a paired discrimination task where the middle segment of the different-pair contained either a different vowel (e.g., fan vs. fin), consonant (e.g., ʁbr vs. ʁdr), or vowelless vs. voweled contrast (e.g., tlf vs. tuf). Experiment 2 was a wordlikeness ratings task of Tashlhiyt-like tri-segmental nonwords constructed to vary in the sonority of the middle segment. We find that vowelless words containing different types of sonority profiles are generally discriminable by both native and naive listeners. This can be explained by the phonetic and acoustic properties of vowelless words: Since Tashlhiyt exhibits low consonant-to-consonant coarticulation, the presence of robust consonantal cues in the speech signal means that the internal phonological structure of vowelless words is recoverable by listeners from both language backgrounds. Moreover, speech style variation provides further evidence that the phonetic implementation of vowelless words makes them perceptually stable in Tashlhiyt. At the same time, wordlikeness ratings of nonwords indicate that listeners rely on their native-language experience to process the wellformedness of new words: Tashlhiyt listeners accept sonorant- and obstruent-centered vowelless words equally (and as well as voweled words); meanwhile, English listeners’ preferences increase with higher sonority values of the word center. Thus, our findings provide an overview of the low-level acoustic-phonetic and higher-level phonological processing mechanisms involved in the perception of vowelless words. 
Our results can inform understandings of the relationship between language-specific phonetic variation and phonotactic patterns, as well as how auditory processing mechanisms shape phonological typology.
April 28
Christopher Cox (Aarhus University): Exploring the Active Role of the Infant & Caregiver-Infant Feedback Loops in Language Development
Language development is a bidirectional process of interaction between caregivers and infants, where infants take an active role in shaping their own linguistic environment. In this talk, I will present findings from different sources to delve into this complex interactional process. First, I will present results from our recent meta-analysis of the acoustic features of infant-directed speech (IDS) and show how caregivers adapt their speech to infants’ changing developmental needs. Second, to build on these insights and to provide a hypothesis-driven testing ground, we will take a deep dive into the peculiar world of Danish IDS and show how the phonological structure of the language can influence the nature of IDS adaptation. Finally, by using our turn-taking work as a springboard to discuss reciprocal adaptation and individual differences, I will then present some preliminary acoustic analyses and simulations of infant vocalisations. Altogether these findings highlight the need for a nuanced understanding of the role played by both infants and caregivers in shaping the linguistic environment.
Fall 2023
August 25
Round robin -- introductions, summer updates & data sharing
September 1
Christine Beier (UC Berkeley): Documenting and describing tone in Iquito: a progress report
In this informal, and hopefully interactive, presentation, I will discuss ongoing work to document and describe the tone system of Iquito, a highly endangered Zaparoan language of northern Peruvian Amazonia, sketching out some of the key typological, areal, methodological, analytical, and theoretical issues that have emerged over the years of working with this language and its highly intricate tone system. Despite being the only member of its family known to exhibit tone, and despite occurring in a part of the world reported to have only a modest density of tone languages, which in turn are said to exhibit, by and large, simple tone systems, Iquito turns out to be a tone powerhouse, exhibiting a H, L, Ø (underlying and surface) tone inventory; (usually HLL) lexical melodies and (always H) boundary tones; both rightward and leftward tone spreading; both linked and floating tones; both fixed and mobile melodies; and various morphological and constructional (grammatical) melodies, all integrated via dominance relations among the many domains involved in word and utterance formation. The discussion will be organized around the central question of how to balance descriptive clarity (especially with heritage learners in mind), analytical precision, and theoretical/disciplinary relevance in fieldwork-based research on tone in a language that is on the cusp of falling silent.
September 8
Maksymilian Dąbkowski (UC Berkeley): A'ingae classifying subordination: Bracket erasure violations and a phasal strength solution
This paper presents and analyzes data from A’ingae (or Cofán, an Amazonian isolate, iso 639-3: con), where the patterns of stress and glottalization on verbs in subordinate clauses are sensitive to (i) the presence or absence of preglottalization on the subordinator, (ii) the lexical category of the subordinator, and (iii) the morphological structure of the inflected verb. The last of the three factors violates bracket erasure, an empirical generalization which states that phonological grammar cannot access morphological information from previous cycles (Kiparsky, 1982). To account for the A’ingae patterns, I introduce a family of phase-indexed faithfulness constraints. Like McPherson and Heath’s (2016) phase faithfulness, phase-indexed faithfulness allows for modeling cases where a previous cycle of phonological evaluation results in greater faithfulness to the evaluated material. The additional indexation allows for sensitivity to the previous phase’s syntactic category. At the theoretical core of this investigation lies the question of how much morphology is accessible to phonology. A’ingae data lend support to a model where phonological constraints can be indexed to syntactic labels, but fine-grained information about the morphosyntactic features present in a previous phase is unavailable.
September 15
Katie Russell (UC Berkeley): Morpheme-specific nasalization in Atchan
In this talk, I present the case of nasalization in Atchan [ebr, Kwa, Côte d'Ivoire] as a lens through which to examine the division of labor between phonology and morphology. Nasalization applies across morpheme boundaries in Atchan; however, the specific surface outcome depends on the identity of the triggering morpheme. Per the regular phonology of Atchan, the domain of nasalization typically includes only one segment to the right of the trigger. However, following a singular nasal subject pronoun, the domain of nasalization includes a much larger amount of material. I present two morpheme-specific patterns of nasalization in Atchan here: irregular nasalization of auxiliaries and non-local nasalization within serial verb constructions. While both patterns appear on the surface to be phonologically derivable, involving nasal harmony, I argue that, with a closer look at the domains they affect and the contexts in which they occur, their exceptionality is best accounted for morphologically.
September 22
Jonathan Paramore (UC Santa Cruz): Codas are universally moraic
September 29
Alex Elias (UC Berkeley): Just because you can doesn’t mean you should: Two analyses of Jao’s phonemic inventory
Jao is an Oceanic language with a very large and typologically unusual consonant inventory. Of the roughly 45 consonant phonemes, 24 of them are nasalized, an extraordinarily high proportion. Meanwhile, phonemically nasal vowels are marginal. It is possible to dramatically reduce the size of the phoneme inventory and rescue nasal vowels from their marginal role by shifting the locus of nasality onto the vowels. Certain pairs of consonants like [p, pᵐ] and [ᵐb, m] would then be viewed as allophones conditioned by the nasality of the following vowel. My talk will examine these competing analyses and raise a deeper methodological question: just because something is possible, does that mean you should do it?
October 6
Noah Macey (UC Berkeley): A dynamic neural model of the interaction between social and lexical influences on speech production: the case of retroflex sibilants in Taiwan Mandarin
October 13
AMP 2023 practice talks
-
Maksymilian Dąbkowski (UC Berkeley): Phasal strength in A'ingae classifying subordination
-
Katie Russell (UC Berkeley): Reduplication in Atchan as prosodically conditioned morphological doubling
October 27
Anna Björklund (UC Berkeley): Automated vs. Traditional Approaches to Patwin Intonation
This talk presents PaToBI, a formal model for transcribing the intonational patterns of Patwin (ISO: pwi), a Wintuan language of northern California without living native speakers, whose intonation has only ever been briefly described (Lawyer 2021). The talk then applies PaToBI to describe six major intonational contours in Patwin. These intonational contours are compared to the results of Contour Clustering (Kaland 2021), a semi-automated clustering toolkit. Semi-automated clustering gives the researcher quantitative insight into possible intonational clusters while also reducing data processing time during the initial stages of investigation. This paper finds that Contour Clustering supports the major intonational patterns discovered with a 'traditional' PaToBI analysis, suggesting it is a viable tool to be integrated into the initial process of investigating intonational contours in an archival language.
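The general idea behind semi-automated contour clustering can be sketched as follows (this is an illustrative toy with invented f0 values, not Kaland's actual pipeline, which works on real f0 tracks with its own normalization and clustering choices): each f0 track is time-normalized and z-scored so that only contour shape remains, and the shapes are then grouped by a clustering algorithm.

```python
import numpy as np

def normalize(contour, n_points=10):
    """Time-normalize an f0 track to n_points samples and z-score it,
    so that clustering reflects contour shape rather than pitch register,
    span, or duration."""
    contour = np.asarray(contour, dtype=float)
    t_old = np.linspace(0.0, 1.0, len(contour))
    t_new = np.linspace(0.0, 1.0, n_points)
    resampled = np.interp(t_new, t_old, contour)
    return (resampled - resampled.mean()) / resampled.std()

def cluster_contours(contours, n_iter=20):
    """Bare-bones 2-means over normalized contours, deterministically
    initialized with the first contour and the contour farthest from it."""
    X = np.stack(contours)
    d0 = ((X - X[0]) ** 2).sum(axis=1)
    centroids = np.stack([X[0], X[int(np.argmax(d0))]])
    for _ in range(n_iter):
        dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical f0 tracks (Hz): two rising and two falling contours,
# produced at different registers and spans.
rises = [[120, 130, 145, 170], [180, 195, 220, 260]]
falls = [[200, 180, 150, 130], [150, 140, 120, 100]]
labels = cluster_contours([normalize(c) for c in rises + falls])
# After normalization, the rises group together and the falls group
# together, despite their different absolute pitch ranges.
```

The researcher's job then shifts from hand-sorting every token to inspecting and labeling a small number of machine-proposed clusters, which is where the reported time savings come from.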
November 3
Emily Grabowski (UC Berkeley): Acoustic Measurement in Phonetics: Current Practices and Future Directions
Acoustic measurement is a core part of phonetics, but there has been relatively little methodological research on this topic, in spite of a rapidly changing technological landscape. This talk will present the results of two studies: one designed to understand how much of a role acoustic context plays in phonetic discriminability, and one designed to compare the performance of different types of acoustic measures. Finally, I will discuss the next steps for acoustic representations, including how we might leverage neural networks, a prevailing trend in other areas of speech research, to generate powerful and useful phonetic representations.
November 17
Tomasz Łuszczek (UC Berkeley, University of Warsaw): Glides: A Polish challenge for constraint-based theories of opacity
The goal of this talk is to discuss a fragment of Polish glide phonology from the perspective of Optimality Theory (OT). In particular, I look at processes of Gliding, i u → j w, and processes of Glide Insertion, Ø → w, as exhibited by two varieties of Polish: Standard Polish and an underinvestigated local variety called Podhale Goralian (PG). These processes pose a challenge for standard OT because the theory (i) cannot model Gliding on a par with Glide Insertion and (ii) cannot account for the overapplication of Glide Insertion in certain prefixed stems. Consequently, the data are recast in three auxiliary theories of OT designed to tackle opacity: Output-Output theory, Local Constraint Conjunction theory, and Derivational OT. The conclusion is that, while all these theories improve on the standard OT model, only Derivational OT can account for the full range of the Polish data.
December 1
Justin Bai (University of Colorado, Boulder): The effects of minimal pairhood on secondary feature enhancement
It is well established that lexical competition conditions variation in phonetic realization. For example, /p, t, k/-initial words with voiced-stop minimal pair neighbors (e.g., pat and bat) are produced with increased VOT (Baese-Berk & Goldrick 2009). Whereas VOT is a direct cue for voicing in English, there are secondary features that also play a role in distinguishing words. For example, listeners can use the longer vowel durations in pre-voiced contexts (e.g., in bad) to help them distinguish words (e.g., bad from bat) (Raphael 1972). The present study examines contrastive hyperarticulation of this secondary feature of vowel duration in English in a wordlist task. Participants read coda-voicing minimal pair words, sometimes with the voiced-coda word first (e.g., dog and then dock) and other times with the voiceless-coda word first (e.g., dock and then dog). (Participants also produced words with no coda-voicing minimal pair competitors, like tub and shut.) The results indicate that participants on average produce longer vowel durations for minimal pair words in the second position if the voiceless-coda minimal pair competitor was said first. On the other hand, participants on average produce shorter vowel durations for minimal pair words in the second position if the voiced-coda minimal pair competitor was said first. This finding is used to argue for a listener-directed (rather than a speaker-internal) mechanism for this particular variation in phonetic realization.