Phorum

The Berkeley Phonetics, Phonology and Psycholinguistics Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology, phonetics, and psycholinguistics. We meet on Fridays from 1 to 2 pm. We plan to offer a virtual option for those who would prefer to join remotely: please email one of the organizers for the Zoom link, or to ask to be added to the mailing list (which will include relevant links). Phorum is organized by Katie Russell and Maks Dąbkowski; our email addresses are, respectively, "katherine.russell" and "dabkowski" @berkeley.edu.


Spring 2022 Schedule

January 21

Kie Zuraw (UCLA) and Paolo Roca: OPM Rhythms

We use music and lyrics of OPM, Original Pilipino Music, to investigate a controversy about whether it is stress or vowel length that is active in Filipino, and whether words divide mainly into penult-stressed vs. ultima-stressed, or into penult-stressed vs. unstressed. We found in a small corpus of songs that "stressed" syllables, as compared to "unstressed" syllables, are assigned to notes with longer duration, and also tend to contain stronger beats, and that this is true for both penults and ultimas. As a control, we found that vowel height, despite its phonetic correlation with duration and loudness in speech, does not correlate with musical duration or beat strength. We tentatively conclude that final-"stressed" syllables do bear real phonological prominence, and welcome your input on additional hypotheses to test as we add more songs to the corpus.

January 28

Anna Björklund (UC Berkeley): Nomlaki Vowel Quality and Duration: An Archival Examination

This talk discusses vowel quality and duration in Nomlaki (ISO: nol), a Wintuan language of Northern California that survives via a limited number of archival recordings. Phonemic vowel length is reconstructed in Proto-Wintun (Shepherd 2005) and is inherited in Nomlaki’s sister languages, Wintu (Pitkin 1984) and Patwin (Lawyer 2015), producing a vowel system of five long/short pairs. However, because current Nomlaki pedagogical texts use orthography which analogizes Nomlaki vowel pairs to English tense/lax pairs, it is ambiguous whether duration or quality is intended as the pairs’ primary cue (Ainsworth 1972). There are almost no published studies examining the phonetics of any Wintuan language (see Lawyer (2015) for the only detailed study of Patwin). This research therefore represents a novel attempt to quantitatively analyze the phonetics of Nomlaki. To analyze the data, K-means clustering and Linear Discriminant Analysis (LDA) modelling were used. Altogether, the results suggest that 1) the Nomlaki vowel space as a whole can be primarily categorized via F2 and F1, without respect to duration, and 2) within each vowel pair, significant duration and quality differences exist. This finding represents a shift from the assumed historical pattern of duration as primary or even sole cue (Shepherd 2005), to one of mixed duration and quality. These results not only increase our knowledge of Wintuan historical development and California language typology, but are crucial for replicating and teaching Nomlaki faithfully in ongoing language revitalization efforts, using methods that may be adopted by others seeking to revitalize languages that have only archival materials.
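For readers unfamiliar with the analysis methods named above, the following is a minimal sketch (not the study's actual code) of how K-means clustering and LDA might be applied to vowel formant and duration measurements; the file name, column names, and vowel labels are hypothetical.

```python
# Minimal sketch of the clustering/classification approach described above,
# assuming vowel tokens with F1, F2 (Hz) and duration (ms) measurements.
# File name, column names, and vowel labels are hypothetical, not from the study.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

vowels = pd.read_csv("nomlaki_vowels.csv")  # columns: vowel, F1, F2, duration

# 1) Can the vowel space as a whole be recovered from F1/F2 alone (no duration)?
X_quality = StandardScaler().fit_transform(vowels[["F1", "F2"]])
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_quality)
vowels["cluster"] = kmeans.labels_  # compare clusters against transcribed vowel labels

# 2) Within each long/short pair, do quality and duration jointly separate the members?
for pair in [("i", "i:"), ("e", "e:"), ("a", "a:"), ("o", "o:"), ("u", "u:")]:
    sub = vowels[vowels["vowel"].isin(pair)]
    X = StandardScaler().fit_transform(sub[["F1", "F2", "duration"]])
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, sub["vowel"], cv=5).mean()
    print(pair, f"LDA cross-validated accuracy: {acc:.2f}")
```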

February 4

Richard Bibbs (UC Santa Cruz): Perceptually-grounded contrast licensing by laryngeals in Chamorro

In Chamorro, a typically neutralized contrast between mid and high vowels is preserved before intervocalic laryngeals. This preservation of contrast is shown to be an instance of patterned exceptionality. Rather than being a result of syllable structure or the typical lowering pattern of high to mid vowels in Chamorro, this exceptionality is instead conditioned by perceptual factors. The phonetic context provided by intervocalic laryngeals permits more acoustic information pertaining to the quality of the preceding vowel, enabling the preservation of underlying contrast. I demonstrate that laryngeals allow more robust acoustic information for perceiving vowel height (F1) when compared to oral consonants. This lack of perturbation of characteristic vocalic formants allows an underlying mid-high contrast to surface faithfully in this “exceptional” environment. This supports theories that integrate phonetic information into the synchronic phonology. The perceptual basis for this contrast licensing is also probed experimentally, revealing suggestive evidence for a difference in vowel contrast perception before laryngeal consonants versus a supralaryngeal consonant.

February 11

Josefina Bittar Prieto (UC Santa Cruz): Borrowing of Mental Event Verbs from Spanish to Guaraní

Cross-linguistically, mental events (in particular, emotion and cognition) are expressed through non-prototypical argument coding, which comprises diverse strategies depending on the language (Croft, forthcoming). However, not much is known about the borrowing process of these constructions: What happens when a language that encodes mental events with one strategy borrows a mental event construction from a language that uses a different strategy? To answer this question, the present study explores the Guaraní-Spanish contact scenario.

While Guaraní construes mental events with its so-called Inactive Construction (e.g. che-pochy, 1INACT-‘get angry’), Spanish uses a Middle Voice (Reflexive-like) Construction (e.g. me enojo, 1MID ‘get angry’). Thus, when Guaraní borrows a mental event from Spanish, is the borrowed verb incorporated in an Inactive Construction, thereby joining its semantically-similar native counterparts?

To investigate this question, instances of mental event verbs were extracted from a corpus of spoken Guaraní: COGA (Corpus del Guaraní Actual). Preliminary exploration of these data shows that mental event verbs are virtually never borrowed into Guaraní Inactive Constructions. That is, although a Spanish verb like resabiar-se (get.mad-MID) is semantically similar to the Guaraní verb pochy (‘get mad’), the loan resavia does not join the cluster of Guaraní Inactive Construction verbs. Instead, the borrowed verb co-occurs with the Reflexive/Middle prefix je- and the active person prefixes.

These findings suggest that speakers of Guaraní establish equivalencies between the elements of the borrowed Spanish construction and those in Guaraní, replicating the entire Spanish pattern (in the sense of Matras & Sakel, 2007): as Spanish has no elements equivalent to the Guaraní Inactive Constructions, borrowed mental event verbs co-occur with a replication of the Spanish Middle Voice strategy instead. Finally, this case study shows that multiple elements in a construction can be borrowed or replicated simultaneously, which supports the hypothesis that linguistic knowledge is constructional (Goldberg, 2006).

February 18

Maksymilian Dąbkowski (UC Berkeley): A Q-Theoretic solution to A'ingae postlabial raising

I document and analyze the typologically unusual process of postlabial raising in A'ingae: After labial consonants, the diphthongs /ai/ and /ae/ surface respectively as [ɨi] and [oe], revealing that C[+labial]a sequences are marked. However, the monophthongal /a/ in the same environment surfaces faithfully as [a]. To capture these facts, I propose an analysis couched in Q-Theory, where one vocalic target of a diphthong corresponds to fewer subsegments than a monophthong. This predicts that diphthongs might show an emergence-of-the-unmarked (TETU) effect, while monophthongs surface faithfully. The prediction is borne out by A'ingae postlabial raising, contributing a novel argument for Q-Theoretic representations.

February 25

John Starr (Cornell): A first look at mind rhymes (joint work with Helena Aparicio and Marten van Schijndel)

Numerous linguistic phenomena involve conveying an overt message simultaneously with a covert message (e.g., humor, sarcasm, etc.), though identifying an exact trigger for these structures is challenging. We study a novel instance of such phenomena: mind rhymes (MRs), where the final intended target (IT) in a rhyming structure is replaced with a phonologically-unrelated overt target (OT) (examples 1-3 below):
1. He’s limber slouched
against a post  
and tells a friend
what matters least. (IT: most)
2. The poems I write
are a real delight,  
so please be polite,
when the rhyme is not perfect (IT: right)
3. I have a sad story to tell you.
It may hurt your feelings a bit.
Last night I walked into my bathroom
and stepped in a big pile of shaving cream. (IT: shit)                    

Compared to related phenomena, MRs elucidate a point of dual-message resolution that can be altered to unveil what factors aid in implicit and explicit language processing. This study takes a two-fold approach to mind rhymes. First, we experimentally probe the conditions under which an IT can be recovered, using data from a novel corpus. We find that people prefer OTs over all other targets except the IT, suggesting that the OT contains a retrieval cue absent in all other possible targets. Second, we computationally explore the contexts that license MRs through measurements of surprisal and cosine similarity across the experimental targets. Our computational results reinforce our experimental conclusions and support the hypothesis that the necessary cue between the OT and IT is semantic in nature. Taken together, the results of both components suggest that future studies must consider how multiple kinds of retrieval cues affect processing, across both implicit and explicit linguistic signals.
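The two computational measures mentioned above can be sketched as follows; this is an illustration only, and the probabilities and embedding vectors are placeholders standing in for whatever language model and word-vector source the study actually used.

```python
# Sketch of the two computational measures mentioned above: surprisal of a
# target word in context, and cosine similarity between target embeddings.
# The probabilities and vectors below are made up for illustration.
import numpy as np

def surprisal(probability: float) -> float:
    """Surprisal in bits: -log2 P(word | context)."""
    return -np.log2(probability)

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine of the angle between two word-embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# The intended target (IT) is far more predictable in its rhyming context than
# the overt target (OT), so its surprisal is much lower (hypothetical values).
p_it, p_ot = 0.30, 0.001
print(surprisal(p_it), surprisal(p_ot))  # ~1.74 bits vs ~9.97 bits

# Semantic relatedness of IT and OT via their (hypothetical) embeddings.
emb_it = np.array([0.2, 0.7, 0.1])
emb_ot = np.array([0.25, 0.65, 0.05])
print(cosine_similarity(emb_it, emb_ot))
```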

March 4

Andrew Cheng (Simon Fraser University): Measuring creak in novel words in Infant- and Adult-Directed Speech

Creaky voice, a voice quality associated with lower pitch, increased glottal constriction, and irregular fundamental frequency, has been the subject of phonetic and sociolinguistic study for quite some time. It has been shown that younger speakers of North American English use more creaky voice than older speakers, although there is less evidence that it is particularly prevalent in young women, and no evidence whatsoever that it is harmful to the vocal tract to use creaky voice. The majority of research on creaky voice in North American English has examined its use in spontaneous speech of adults, most often in conversation with other adults, but none so far has investigated whether creaky voice is prevalent in a different register, namely, Infant-Directed Speech (IDS). In this study, English-speaking parents were recorded speaking naturalistically to both an adult and their infant child. We tested whether creak, measured using mean f0 and three measures of glottal constriction (H1-H2, H1-A1, and H1-A2), was more or less prevalent in each register, hypothesizing that IDS, with its patterns of higher pitch and shorter utterance duration, would have less creaky voice. Results indicated that IDS did have higher pitch, as well as greater creakiness according to some, but not all, of the measures of glottal constriction. The amount of creak in IDS was less than what could be accounted for simply by factoring in the effects of higher fundamental frequency and shorter utterance duration; thus, we discuss whether parents might employ the opposite of creak (e.g., breathy or falsetto voice) as part of a "caretaker persona" that dissociates from the traits that creak typically indexes in the context of North American English.

March 11

Ryan Bennett (UC Santa Cruz): Prosodic smothering is idiosyncratic and lexical

Prosodic units like the phonological word ω or phonological phrase φ typically correspond to morpho-syntactic units, such as syntactic terminals or XPs. Even when this correspondence is imperfect, it is still lawful and systematic: deviations from syntax-prosody isomorphism can usually be attributed to grammatical principles (e.g. size constraints on prosodic units) which apply broadly and predictably to all relevant prosodic structures in the language. In Japanese, for example, compound words like keizi-sosyoohoo 'code of criminal procedure' are realized as phonological phrases φ rather than phonological words ω when the second member of the compound is too large (Ito & Mester 2021). 

Recently, it has been observed that certain morphemes or morpheme-classes may idiosyncratically disrupt normal patterns of prosodification, an effect dubbed 'prosodic smothering' (Bennett et al. 2018, Rolle & Hyman 2019). As one example, negation in Macedonian expands the stress domain of the verb to include preverbal clitics: {clitic ['verb]}, but {['neg clitic verb]}. Most analyses of these patterns assume that prosodically exceptional elements (i) have regular, unremarkable syntax; (ii) are lexically specified to trigger their unique prosodic effects. 

An alternative approach to smothering inverts this analysis: it assumes (i) that the syntax of smothering triggers is special; and (ii) that their prosodic behavior is expected from their special syntax, and therefore does not need to be lexically specified (Branan to appear). In this talk I argue against the particular command-based theory of smothering proposed by Branan, but also in favor of the larger claim that smothering effects cannot be reduced to syntactic differences, and instead require idiosyncratic lexical specification. I will also speculate that conditions on language acquisition may help resolve an important puzzle raised by Branan, namely the fact that smothering triggers often form coherent morpho-syntactic classes.

March 18

no meeting (spring break)

March 25

no meeting (spring break)

April 1

Julianne Kapner (UC Berkeley): The University Next Door: Sound change in an urban Rochester neighborhood

Recent studies have documented the retreat of the Northern Cities Shift (NCS) in the Inland North, the advance of split pre-nasal and pre-oral /æ/ (BAT/BAN Split), and the merger of /ɔ/ and /ɑ/ (COT and CAUGHT; Low-Back Merger) in dialects throughout North America. We investigate the status of these three changes-in-progress among fifteen speakers in an urban neighborhood of Rochester, New York. Consistent with other recent studies, we find evidence that the NCS, long a characteristic feature of the Inland North region, is rising above the level of consciousness, acquiring stigma, and retreating in apparent time. We also find apparent-time evidence that Rochester speakers are adopting the supralocal BAT/BAN Split and Low-Back Merger, with the former rising above the level of consciousness. In addition, we examine an underexplored social factor in language change: interaction with higher education. We find that speakers who are more socially oriented toward higher education exhibit fewer features of the NCS and more adoption of the supralocal speech patterns. Rather than analyzing education as a proxy for social class, we argue that it is the quality and intensity of interaction with higher education that promotes these changes, perhaps by increasing speakers’ awareness of local variants and by providing additional motivation to adapt.

April 8

Marko Drobnjak (Fulbright Visiting Student Researcher UC Berkeley; U of Ljubljana, Slovenia): Evaluation of testimony: speech perception vs. witness credibility

Speech carries a substantial amount of information about the speaker’s identity. It is possible to infer, among other things, the speaker’s age, gender, or ethnicity from speech alone. What happens when biases based on speech perception affect the assessment of the credibility of testimony in court? Previous empirical research has shown that witness credibility is influenced by several factors, such as ethnicity, socio-economic background, or perceived masculinity during speech interactions in court. Based on these findings, a novel experiment has been designed for the present study, focusing primarily on how dialectal features affect the assessment of credibility of testimony. Slovene, a dialectologically highly diverse language, was chosen as the language of the experiment because Slovenia has one of the lowest levels of income inequality in the EU (according to the OECD). This means the experiment at least partially controls for socioeconomic factors when testing the effects of dialects on witness credibility in court. In addition to dialectological characteristics, the central part of the experiment also tests the impact of the gender of speakers and listeners on witness credibility, while the introductory part of the experiment focuses on the perception of age. Preliminary results (N = 300) will be discussed, as the experiment is still ongoing.

April 15

Xue 'Lily' Gong (UC Berkeley): Phonemic segmentation of narrative speech in human cerebral cortex

Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on phonemic units, syllables, words, etc. This processing requires the segmentation of acoustical events in progressively longer time windows. Here we investigated the locus of cortical phonemic processing within the speech cortical network and the granularity of the phonemic segmentation within these phonemic areas. For this purpose, we collected functional MRI data while subjects listened to narrative stories. We then compared the predictive power of linearized models that used spectral features of sound, single phonemes, diphones, triphones, and full word-meaning embeddings. We identified areas in the superior temporal (STC), lateral temporal (LTC) and inferior frontal cortex (IFC) that selectively responded to phonemic features. In all the identified phonemic areas, single-phoneme features yielded low predictive power while diphone features yielded the highest, suggesting that cortical segmentation of phonemic units occurs first and principally at the level of diphones. Our analyses also allowed us to accurately determine the phonemic cortical regions and thus precisely localize the phoneme-to-lexical/semantic transition. Two principal transition regions were identified: a medial-lateral gradient in the LTC and an inferior-superior gradient in the IFC.
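As a rough illustration of the kind of linearized encoding-model comparison described above (not the authors' actual pipeline), one can regress each candidate feature space against voxelwise fMRI responses and compare held-out prediction performance per voxel; the feature matrices, data, and dimensions below are placeholders.

```python
# Rough sketch of a voxelwise linearized encoding-model comparison of the kind
# described above (not the authors' actual pipeline). Each candidate feature
# space (spectral, phoneme, diphone, triphone, word embedding) is regressed
# against BOLD responses; held-out prediction correlation is compared per voxel.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_train, n_test, n_voxels = 1000, 200, 500   # hypothetical sizes (time points x voxels)
feature_spaces = {                            # placeholder feature matrices
    "spectral": rng.normal(size=(n_train + n_test, 64)),
    "diphone":  rng.normal(size=(n_train + n_test, 300)),
    "word":     rng.normal(size=(n_train + n_test, 300)),
}
bold = rng.normal(size=(n_train + n_test, n_voxels))  # placeholder fMRI data

def voxelwise_correlation(pred, actual):
    """Pearson correlation between predicted and actual responses, per voxel."""
    pred_z = (pred - pred.mean(0)) / pred.std(0)
    act_z = (actual - actual.mean(0)) / actual.std(0)
    return (pred_z * act_z).mean(0)

scores = {}
for name, X in feature_spaces.items():
    model = Ridge(alpha=100.0).fit(X[:n_train], bold[:n_train])
    scores[name] = voxelwise_correlation(model.predict(X[n_train:]), bold[n_train:])

# A voxel would be labeled "phonemic" if, e.g., the diphone model predicts it
# best; with random placeholder data these scores are of course uninformative.
for name, r in scores.items():
    print(name, float(r.mean()))
```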

April 22

Rachel Walker (UC Santa Cruz): Gestural Organization and Quantity in English Rhotic-final Rhymes

In phonological structure, the segment root node is classically the locus of temporal organization for subsegmental units, such as features, governing their sequencing and overlap (e.g. Clements 1985, Sagey 1986). Root nodes also classically figure in the calculation of weight-by-position, by which coda consonants are assigned a mora (Hayes 1989). In this talk, I discuss evidence that motivates encoding temporal relations directly among subsegmental elements, represented phonologically as gestures (Browman & Goldstein 1986, 1989). A case study of phonotactics in syllable rhymes of American English, supported by a real-time MRI study of speech articulation, provides evidence for a controlled sequence of articulations in coda liquids. This study finds support for phonological representations that include 1) sequencing of subsegments within a segment (within a liquid consonant), and 2) cross-segment partial overlap (between a liquid and preceding vowel). Further, the assignment of weight in the rhyme is sensitive to these configurations. To accommodate such scenarios, it is proposed that segments are represented as sets of gestures without a root node (Walker 2017, Smith 2018) with a requisite component of temporal coordination at the subsegmental level. A revised version of weight-by-position is proposed that operates over subsegmental temporal structure. By contrast, the scenarios motivated by the phonotactics of rhymes with coda liquids are problematic for a theory in which sequencing is controlled at the level of root nodes.

April 29

Anne Hermes: Patterns of variability in secondary articulation in Tashlhiyt

This talk is about our current research on secondary articulation in Tashlhiyt, a language which has both pharyngealization (e.g., /izi/ ‘fly’ vs /izʕi/ ‘gallbladder’) and labialization (e.g., /ikla/ ‘he was coloured’ vs /ikwla/ ‘he spent the day’). I will present acoustic and articulatory attributes of plain consonants and their secondarily articulated counterparts, as well as the patterns of variability that characterize these segments in different contexts (e.g., VCV, CCC, VCC, CCV). I will discuss how secondary articulation in Tashlhiyt could be accounted for within the Articulatory Phonology framework.


Fall 2021 Schedule

September 3

Round robin

September 10

David Gaddy (UC Berkeley): Decoding Silent Speech with Electromyography

In this talk I will discuss my research on decoding silent speech.  This work uses machine learning to recognize silently mouthed words and turn them into audible speech, based on electrical signals from muscles captured on the surface of the face and neck.  The talk will give an overview of silent speech and its applications, describe the machine learning models we use to decode the signals, and discuss our analysis looking into what speech features the system captures.

September 17

Jon Rawski (San José State University): Abductive Learning of Phonotactic Constraints

Inductive learning of phonotactic knowledge from data often relies on statistical heuristics to select plausible phonotactic constraints, such as in the popular Maximum Entropy learners (Hayes & Wilson 2008). Wilson & Gallagher (2018) claim that such statistical heuristics are necessary, given that feature-based constraints allow for exponentially large hypothesis spaces. I show that such statistical heuristics are unnecessary, by providing a series of non-statistical algorithms which use abduction to select the most general feature-based constraint grammars for both local and long-distance phonotactics. I compare these algorithms to MaxEnt grammars to showcase their similar behavior on synthetic and natural phonotactic data. Like any algorithms, these help us clarify general properties of phonotactic learning: 1) the space of possible constraints possesses significant structure (a partial order) that learners can easily exploit, 2) even given this structure, there are multiple pairwise incomparable grammars which are surface-true, and 3) particular constraint selection is due to the particular abductive (not inductive) principles learners possess which guide the search, regardless of whether such principles are statistically formulated or not.
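The following sketch is not one of the algorithms from the talk, but an illustration of the partial-order idea mentioned above: feature-based constraints are naturally ordered by generality (fewer features = more general), so a learner can keep only the most general constraints that remain surface-true on the data. Constraint and feature names are invented for the example.

```python
# Illustrative sketch (not the talk's algorithms) of the partial order over
# feature-based phonotactic constraints: a *[...] constraint that mentions
# fewer features is more general, so generality reduces to feature-bundle
# subsumption, and a learner keeps only the most general surface-true constraints.

def subsumes(general: frozenset, specific: frozenset) -> bool:
    """A constraint is at least as general as another if its feature bundle
    is a subset of the other's."""
    return general <= specific

def surface_true(constraint: frozenset, observed_segments: list) -> bool:
    """A *[...] constraint is surface-true if no observed segment matches it."""
    return not any(constraint <= seg for seg in observed_segments)

# Hypothetical observed inventory: only voiceless obstruents and nasals occur.
observed = [
    frozenset({"+consonantal", "-voice", "-nasal"}),
    frozenset({"+consonantal", "+voice", "+nasal"}),
]

candidates = [
    frozenset({"+voice"}),                            # not surface-true: nasals are voiced
    frozenset({"+voice", "-nasal"}),                  # surface-true and general
    frozenset({"+voice", "-nasal", "+consonantal"}),  # surface-true but less general
]

true_constraints = [c for c in candidates if surface_true(c, observed)]
# Discard any constraint subsumed by a more general surface-true constraint.
most_general = [
    c for c in true_constraints
    if not any(other != c and subsumes(other, c) for other in true_constraints)
]
print(most_general)  # the single most general surface-true constraint, *[+voice, -nasal]
```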

September 24

AMP practice talks

October 1

Yevgeniy Melguy (UC Berkeley): Mechanisms of listener adaptation in perceptual learning for speech

Listeners often show processing difficulty when faced with a novel accent. However, research has shown that they can rapidly adapt, resulting in attenuation or disappearance of this "accent cost". The goal of this study was to gain a better understanding of the underlying mechanisms in the perceptual adaptation process. To do this, a well-established phonetic learning paradigm was utilized where listeners are exposed to an artificial accent involving an ambiguous pronunciation of a target sound (e.g., /θ/ =  [θ / s]) and subsequently tested on categorizing a phonetic continuum between these two sounds. If learning is successful, listeners show a shift in their categorization boundary, such that they accept a greater proportion of sounds as instances of the trained category. Following up on earlier research (Zheng & Samuel 2020), this experiment investigated whether such phonetic learning is the result of  1) a shift of the trained phoneme category in phonetic space or 2) relaxation of categorization criteria (the expansion of the trained category in phonetic space). Results suggest that perceptual learning for speech is better explained as category shift -- listeners demonstrated specific adjustments to the accent but did not show evidence of generalizing learning to neighboring phonetic space. 

October 8

Zion Mengesha (Stanford University): A Social Meaning Perspective on Vowel Trajectories: The Realization of FEEL and FILL Among African Americans in California

In this talk, I explore the participation of African American and White Californians in a merger found in AAVE and Southern American English – the FEEL-FILL merger. I investigate three aspects of the FEEL-FILL merger for these speakers: whether FEEL and FILL are distinct, the F1 height of the FEEL and FILL nuclei, and the vowel contours. Ultimately, it appears that the FEEL-FILL merger has only been taken up by African American men in the Central Valley. However, social meaning does not accrue to the FEEL-FILL merger, but to FEEL and FILL themselves, and hence each phonological aspect has the potential to index different social meanings from one another (Eckert & Labov, 2017). Accordingly, African American women produce a more monophthongal FILL contour, as compared to African American men. I argue that while vowel height indexes regional identity, the vowel contour is associated with finer grained social distinctions. When examined together, these aspects of the FEEL-FILL merger – whether FEEL and FILL are distinct, vowel height and vowel contour – will reveal the dynamic nature of the intersections of race, gender, and place identity.

October 15

Evelin Balog (Friedrich-Alexander Universität Erlangen-Nürnberg; Fulbright at UC Berkeley): Entrenchment revisited: old and new concepts and their empirical validation

The project attempts to explore entrenchment from a psycholinguistic perspective and analyses automaticity, routinisation, and chunking as manifestations of entrenchment. Within the framework of this project, entrenchment is defined as a threefold concept triggered by usage and exposure frequency that leads to the strengthening and formation of mental representations and their reorganisation into chunks, resulting in effortless fluent processing and production of the entrenched units. The main focus of this project lies on automaticity, which is measured in the form of speech fluency and spectral and phonological reduction. Four empirical studies were conducted to determine which linguistic, social and cognitive factors lead to entrenchment. Special attention is paid to the interaction of these factors, as it is believed that no single factor alone is powerful enough to define such a complex phenomenon. Speech fluency was elicited using rapid word naming, sentence reading and sentence recall tasks. The recordings from these experiments were used to determine temporal and spectral reduction in high-frequency and high-transitional-probability adjective-noun combinations. In addition to speech fluency, the participants’ cognitive skills, social background and language experience were measured. Around 120 participants were tested. The initial results suggest that participants with well-developed non-verbal processing and implicit learning skills are more likely to achieve automaticity. Of the linguistic factors, higher familiarity scores and transitional probabilities appear to facilitate the entrenchment of the items.

October 22

Natalie Weber (Yale University): Correspondence of prosody and syntax by phase in a polysynthetic language

Research on prosodic phonology over the past 40 years has shown that prosodic structure is closely related to syntactic structure, but may mismatch in ways that are phonologically optimizing (Nespor & Vogel 2007, and many others). An open question is how syntax-prosody correspondence differs in polysynthetic languages with large “clausal words” (see Arnhold, Elfner, and Compton 2020; Compton & Pittman 2010; Dyck 2009; Miller 2018; Piggott & Travis 2013; Wojdak 2008). One hypothesis is that the Prosodic Word (PWd) constituent corresponds to different syntactic units in different languages, some of which are quite large. A second hypothesis is that the PWd constituent corresponds to the same syntactic unit across languages, and that some other mechanism creates large “clausal words”. In this talk, I argue in favor of the second hypothesis. I investigate the correspondences between syntactic, prosodic, and metrical constituents in Blackfoot (Algonquian), a polysynthetic language. I show that a particular vP phrase matches to a Prosodic Word (PWd) constituent, while DPs and CPs match to Phonological Phrase (PPh) constituents. I propose that syntactic vP, DP, and CP phases (Chomsky 2001; Uriagereka 1999) correspond by default to the PWd, the PPh, and the IPh, respectively. I model these relationships using a modified version of Match Theory (Selkirk 2011). The large “clausal words” in Blackfoot arise because of a phonological pressure for sisters to a PPh to also be a PPh. The findings in this talk show that theories of phrasal correspondence like Match Theory extend “below the word” level.

October 29

Chloe Willis (UC Santa Barbara): The Theoretical and Methodological Implications of Bisexuality in Language and Sexuality Research

Voices communicate not just what we say, but also who we are. The past three decades have seen an abundance of research on how sexuality is indexed through the voice. This work mostly focuses on stereotypes about sounding gay, especially as they relate to the pronunciation of /s/ and the “gay lisp” (e.g., Munson et al. 2006a,b; Campbell-Kibler 2011; Zimman 2017), whereas lesbian-sounding voices are less represented (e.g., Van Borsel et al. 2013; Barron-Lutzross 2015). Bisexuality is conspicuously absent in this literature. In this talk, I first overview my previous work on bisexuality and /s/ production, namely that bisexual women and men produce /s/ in a way that is distinct from their lesbian, gay, and straight counterparts (Willis 2021). Next, I present preliminary findings that directly expand upon this work. These findings suggest that ethnoracial identity—which is typically not considered or even reported in previous research—is a significant predictor for variation in /s/ production. Finally, I discuss the theoretical and methodological implications of these two analyses for the broader study of language and sexuality and identify how my current research addresses these issues.

November 5

Katie Russell (UC Berkeley): Nasalization in Paraguayan Guaraní

In this talk, I discuss the multitude of ways in which nasalization manifests in Paraguayan Guaraní [gug, Tupi-Guaraní, Paraguay]. While an understanding of regressive nasal harmony in Guaraní has been crucial in helping to form the foundations of theoretical phonology (e.g. Goldsmith 1976 and Beckman 1998), progressive nasalization patterns have remained understudied and dismissed as idiosyncratic (Gregores & Suárez 1967, Kaiser 2008, Estigarribia 2020). I offer the first theoretical account of these phenomena, based both on data collected firsthand with native speakers and on corpus data compiled from written sources. I account for the nasalization facts based on a set of typologically and phonetically grounded constraints which capture the nature of nasal harmony in Paraguayan Guaraní as two distinct simultaneous processes: agreement for the feature [nasal] across adjacent syllable nuclei, and coarticulation within a single syllable (Thomas 2014). I show evidence that variation in progressive nasalization patterns is conditioned both by lexical stratum and individual morpheme identity: variation across lexical strata is accounted for using weighted constraints (Pater 2009), modeled using a Maximum Entropy grammar (Goldwater & Johnson 2003). This work has implications for our understanding of nasal harmony and nasalization processes, as nasal spreading in Guaraní presents a unique case of a phonological phenomenon which is categorical in one direction but gradient in another.
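The Maximum Entropy grammar mentioned above (Goldwater & Johnson 2003) assigns each output candidate a probability proportional to the exponential of its negative weighted constraint-violation sum. Here is a minimal sketch of that computation, with made-up constraints, weights, and candidates rather than the actual analysis from the talk.

```python
# Minimal sketch of a Maximum Entropy (MaxEnt) grammar of the kind mentioned
# above: P(candidate) is proportional to exp(-sum_i w_i * violations_i).
# The constraints, weights, and candidates below are illustrative only, not
# the actual analysis of Guaraní nasal harmony from the talk.
import numpy as np

constraint_weights = {"AGREE-nasal": 3.0, "IDENT-nasal": 1.0}

# Violation profiles for two hypothetical output candidates of one input.
candidates = {
    "nasalized output":   {"AGREE-nasal": 0, "IDENT-nasal": 1},
    "unnasalized output": {"AGREE-nasal": 1, "IDENT-nasal": 0},
}

def maxent_probabilities(candidates, weights):
    """Return a probability distribution over candidates under a MaxEnt grammar."""
    harmonies = {
        cand: sum(weights[c] * v for c, v in viols.items())
        for cand, viols in candidates.items()
    }
    exps = {cand: np.exp(-h) for cand, h in harmonies.items()}
    z = sum(exps.values())
    return {cand: e / z for cand, e in exps.items()}

print(maxent_probabilities(candidates, constraint_weights))
# The higher-weighted AGREE-nasal constraint makes the harmonizing candidate
# the more probable output (~0.88 vs ~0.12): a gradient, not categorical, preference.
```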

November 12

no meeting (Veterans Day)

November 19

Caitlin Smith (UC Davis): Learning Derivationally Opaque Patterns in the Gestural Harmony Model (joint work with Charlie O'Hara (UMichigan))

In this talk, we examine the learnability of two apparently derivationally opaque vowel harmony patterns: attested chain-shifting height harmony and unattested saltatory height harmony. We analyze these patterns within the Gestural Harmony Model (Smith 2018) and introduce a learning algorithm for setting the gestural parameters that generate these harmony patterns. Results of the learning model indicate a learning bias in favor of the attested chain-shifting pattern and against the unattested saltation pattern, providing a potential explanation for the differences in attestation between these two derivationally opaque patterns. Furthermore, we show that feature-based learning models of these patterns show no such learning bias and provide no account of the typological asymmetry between chain-shifting and saltatory height harmony.

November 26

no meeting (Thanksgiving)

December 3

Zachary O'Hagan (UC Berkeley): Verbal Reduplication in Caquinte

Reduplication in the strongly head-marking Nijagantsi Arawak languages of western Amazonia -- namely in Asheninka -- has figured prominently in the theoretical literature (e.g., Spring 1990, McCarthy and Prince 1993:25-108, Downing 2005), based on original data and analysis only from Payne (1981:143-152; see Martel Paredes 2012 and Mihas 2015:86-90 for some data from the Perené dialect). While multiple patterns of reduplication are attested in Asheninka, comparable data from related languages is generally lacking (cf. Beier 2010 and Michael 2008 for Nanti, and Snell 2011:832 for Matsigenka). In this presentation I profile and augment Swift's (1988:126-131) data on reduplication in Caquinte, introducing patterns not attested in Asheninka and proposing some ways in which possible analyses contrast with those argued for in the case of Asheninka. In the basic case, the Caquinte reduplicant is a trimoraic suffix following the stem (i.e., the verb root and possibly an epenthetic segment and/or the reversative suffix -rej). In the simplest case, the reduplicant is composed of the first non-initial CV.CV.CV (e.g., imatsagamatsagatakaro, with the root underlined), so long as the stem -- most often the root -- is at least three moras, such that suffixes are not included in the reduplicant (n.b., with the exception of -rej). If the stem is fewer than three moras, a suffix -i is added to the reduplicant (e.g., nokenakenaibaepoji); and if the root begins with a vowel, the vowel is not included in the reduplicant (e.g., nanijinijiitanake). If the addition of -i is insufficient for reaching three moras, then a subject prefix and (when relevant) the initial vowel of a root are included in the reduplicant (e.g., notenoteitanaka). Other patterns include ones where -i is absent, where it applies unnecessarily given the above formulation, where -a seems to have the same function as -i, where there seems to be a lexicalized reduplicative stem (especially with roots ending in velars), and where the reduplicant appears to be prefixal. I conclude by discussing whether the root augmentation present in Asheninka (e.g., McCarthy and Prince 1995:46-59) is present in Caquinte.