Spring 2022
January 21
Kie Zuraw and Paolo Roca (UCLA): Rhythms in OPM (Original Pinoy Music)
We use music and lyrics of OPM, Original Pilipino Music, to investigate a controversy about whether it is stress or vowel length that is active in Filipino, and whether words divide mainly into penult-stressed vs. ultima-stressed, or into penult-stressed vs. unstressed. We found in a small corpus of songs that "stressed" syllables, as compared to "unstressed" syllables, are assigned to notes with longer duration, and also tend to contain stronger beats, and that this is true for both penults and ultimas. As a control, we found that vowel height, despite its phonetic correlation with duration and loudness in speech, does not correlate with musical duration or beat strength. We tentatively conclude that final-"stressed" syllables do bear real phonological prominence, and welcome your input on additional hypotheses to test as we add more songs to the corpus.
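As a rough illustration of the kind of comparison described in the abstract, here is a minimal Python sketch over a hypothetical table of syllable-to-note alignments; the file name and column names are illustrative assumptions, not the authors' actual data or code.

```python
# Minimal sketch (illustrative only): do "stressed" syllables get longer notes
# and stronger beats, and does vowel height fail to pattern the same way?
import pandas as pd

# Hypothetical table: one row per syllable-note pairing, with columns
# stress ("stressed"/"unstressed"), position ("penult"/"ultima"),
# vowel_height, note_duration, beat_strength.
notes = pd.read_csv("opm_syllables.csv")  # hypothetical file name

# Compare mean note duration and beat strength by stress, in both positions.
print(notes.groupby(["position", "stress"])[["note_duration", "beat_strength"]].mean())

# Control: vowel height should NOT correlate with musical duration or beat strength.
print(notes.groupby("vowel_height")[["note_duration", "beat_strength"]].mean())
```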
January 28
Anna Björklund (UC Berkeley): Nomlaki Vowel Quality and Duration: An Archival Examination
This talk discusses vowel quality and duration in Nomlaki (ISO: nol), a Wintuan language of Northern California that survives via a limited number of archival recordings. Phonemic vowel length is reconstructed in Proto-Wintun (Shepherd 2005) and is inherited in Nomlaki’s sister languages, Wintu (Pitkin 1984) and Patwin (Lawyer 2015), producing a vowel system of five long/short pairs. However, because current Nomlaki pedagogical texts use an orthography which analogizes Nomlaki vowel pairs to English tense/lax pairs, it is ambiguous whether duration or quality is intended as the pairs’ primary cue (Ainsworth 1972). There are almost no published studies examining the phonetics of any Wintuan language (see Lawyer (2015) for the only detailed study of Patwin). This research therefore represents a novel attempt to quantitatively analyze the phonetics of Nomlaki. To analyze the data, K-means clustering and Linear Discriminant Analysis (LDA) modelling were used. Altogether, the results suggest that 1) the Nomlaki vowel space as a whole can be primarily categorized via F2 and F1, without respect to duration, and 2) within each vowel pair, significant duration and quality differences exist. This finding represents a shift from the assumed historical pattern of duration as primary or even sole cue (Shepherd 2005), to one of mixed duration and quality. These results not only increase our knowledge of Wintuan historical development and California language typology, but are crucial for replicating and teaching Nomlaki faithfully in ongoing language revitalization efforts, using methods that may be adopted by others seeking to revitalize languages that have only archival materials.
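For readers unfamiliar with the methods named above, here is a minimal sketch of a K-means/LDA analysis of vowel tokens; the file name, column names, and vowel labels are illustrative assumptions, not the author's actual data or code.

```python
# Minimal sketch (illustrative only): cluster vowel tokens by quality, then ask
# whether quality (F1/F2) or duration better separates each long/short pair.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

tokens = pd.read_csv("nomlaki_vowels.csv")  # hypothetical file of measured tokens

# 1) K-means on F1/F2 only: do five clusters recover the five vowel qualities,
#    regardless of length?
X_quality = StandardScaler().fit_transform(tokens[["F1", "F2"]])
tokens["cluster"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_quality)
print(pd.crosstab(tokens["vowel"], tokens["cluster"]))

# 2) LDA within each long/short pair: classification accuracy from quality alone,
#    duration alone, and both together.
for pair in [("i", "i:"), ("e", "e:"), ("a", "a:"), ("o", "o:"), ("u", "u:")]:
    sub = tokens[tokens["vowel"].isin(pair)]
    for features in (["F1", "F2"], ["duration"], ["F1", "F2", "duration"]):
        lda = LinearDiscriminantAnalysis()
        acc = lda.fit(sub[features], sub["vowel"]).score(sub[features], sub["vowel"])
        print(pair, features, round(acc, 2))
```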
February 4
Richard Bibbs (UC Santa Cruz): Perceptually-grounded contrast licensing by laryngeals in Chamorro
In Chamorro, a typically neutralized contrast between mid and high vowels is preserved before intervocalic laryngeals. This preservation of contrast is shown to be an instance of patterned exceptionality. Rather than being a result of syllable structure or the typical lowering pattern of high to mid vowels in Chamorro, this exceptionality is instead conditioned by perceptual factors. The phonetic context provided by intervocalic laryngeals permits more acoustic information pertaining to the quality of the preceding vowel, enabling the preservation of underlying contrast. I demonstrate that laryngeals allow more robust acoustic information for perceiving vowel height (F1) when compared to oral consonants. This lack of perturbation of characteristic vocalic formants allows an underlying mid-high contrast to surface faithfully in this “exceptional” environment. This supports theories that integrate phonetic information into the synchronic phonology. The perceptual basis for this contrast licensing is also probed experimentally, revealing suggestive evidence for a difference in vowel contrast perception before laryngeal consonants versus a supralaryngeal consonant.
February 11
Josefina Bittar Prieto (UC Santa Cruz): Borrowing of Mental Event Verbs from Spanish to Guaraní
Cross-linguistically, mental events (in particular, emotion and cognition) are expressed through non-prototypical argument coding, which comprises diverse strategies depending on the language (Croft, forthcoming). However, not much is known about the borrowing process of these constructions: What happens when a language that encodes mental events with one strategy borrows a mental event construction from a language that uses a different strategy? To answer this question, the present study explores the Guaraní-Spanish contact scenario.
While Guaraní construes mental events with its so-called Inactive Construction (e.g. che-pochy, 1INACT-‘get angry’), Spanish uses a Middle Voice (Reflexive-like) Construction (e.g. me enojo, 1MID ‘get angry’). Thus, when Guaraní borrows a mental event from Spanish, is the borrowed verb incorporated in an Inactive Construction, thereby joining its semantically-similar native counterparts?
To investigate this question, instances of mental event verbs were extracted from a corpus of spoken Guaraní: COGA (Corpus del Guaraní Actual). Preliminary exploration of these data shows that mental event verbs are virtually never borrowed into Guaraní Inactive Constructions. That is, although a Spanish verb like resabiar-se (get.mad-MID) is semantically similar to the Guaraní verb pochy (‘get mad’), the loan resavia does not join the cluster of Guaraní Inactive Construction verbs. Instead, the borrowed verb co-occurs with the Reflexive/Middle prefix je- and the active person prefixes.
These findings suggest that speakers of Guaraní establish equivalencies between the elements of the borrowed Spanish construction and those in Guaraní, replicating the entire Spanish pattern (in the sense of Matras & Sakel, 2007): As Spanish does not have equivalent elements to the Guaraní Inactive Constructions, borrowed mental event verbs co-occur with the replication of the Spanish Middle Voice strategy instead. Finally, this case study shows that multiple elements in a construction can be borrowed or replicated simultaneously, which supports the hypothesis that linguistic knowledge is constructional (Goldberg, 2006).
February 18
Maksymilian Dąbkowski (UC Berkeley): A Q-Theoretic solution to A'ingae postlabial raising
I document and analyze the typologically unusual process of postlabial raising in A'ingae: After labial consonants, the diphthongs /ai/ and /ae/ surface respectively as [ɨi] and [oe], revealing that C[+labial]a sequences are marked. However, the monophthongal /a/ in the same environment surfaces faithfully as [a]. To capture these facts, I propose an analysis couched in Q-Theory, where one vocalic target of a diphthong corresponds to fewer subsegments than a monophthong. This predicts that diphthongs might show an emergence-of-the-unmarked (TETU) effect, while monophthongs surface faithfully. The prediction is borne out by A'ingae postlabial raising, contributing a novel argument for Q-Theoretic representations.
February 25
John Starr (Cornell): A first look at mind rhymes
Numerous linguistic phenomena involve conveying an overt message simultaneously with a covert message (e.g., humor, sarcasm, etc.), though identifying an exact trigger for these structures is challenging. We study a novel instance of such phenomena: mind rhymes (MRs), where the final intended target (IT) in a rhyming structure is replaced with a phonologically-unrelated overt target (OT) (examples 1-3 below):
1. He’s limber slouched
against a post
and tells a friend
what matters least. (IT: most)
2. The poems I write
are a real delight,
so please be polite,
when the rhyme is not perfect (IT: right)
3. I have a sad story to tell you.
It may hurt your feelings a bit.
Last night I walked into my bathroom
and stepped in a big pile of shaving cream. (IT: shit)
Compared to related phenomena, MRs elucidate a point of dual-message resolution that can be altered to unveil what factors aid in implicit and explicit language processing. This study takes a two-fold approach to mind rhymes. First, we experimentally probe the conditions under which an IT can be recovered, using data from a novel corpus. We find that people prefer OTs over all other targets excluding the IT, suggesting that the OT contains a retrieval cue absent in all other possible targets. Second, we computationally explore the contexts that license MRs through measurements of surprisal and cosine similarity across the experimental targets. Our computational results reinforce our experimental conclusions and support the hypothesis that the necessary cue between the OT and IT is semantic in nature. Taken together, the results of both components suggest that future studies must consider how multiple kinds of retrieval cues affect processing, across both implicit and explicit linguistic signals.
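A minimal sketch of the two computational measures mentioned above, cosine similarity between word embeddings and surprisal given a language-model probability; the vectors and probability below are placeholders, not the authors' materials or pipeline.

```python
# Minimal sketch (illustrative only) of the two measures named in the abstract.
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def surprisal(p):
    """Surprisal in bits of a word with contextual probability p."""
    return -np.log2(p)

# Hypothetical embeddings for an intended target (IT) and overt target (OT):
it_vec = np.array([0.12, -0.40, 0.33])  # placeholder embedding for the IT
ot_vec = np.array([0.10, -0.35, 0.30])  # placeholder embedding for the OT
print(cosine_similarity(it_vec, ot_vec))

# A language model would supply p(word | context); here p is simply assumed.
print(surprisal(0.002))  # low-probability continuation -> high surprisal
```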
March 4
Andrew Cheng (Simon Fraser University): Measuring creak in novel words in Infant- and Adult-Directed Speech
Creaky voice, a voice quality associated with lower pitch, increased glottal constriction, and irregular fundamental frequency, has been the subject of phonetic and sociolinguistic study for quite some time. It has been shown that younger speakers of North American English use more creaky voice than older speakers, although there is less evidence that it is particularly prevalent in young women, and no evidence whatsoever that it is harmful to the vocal tract to use creaky voice. The majority of research on creaky voice in North American English has examined its use in the spontaneous speech of adults, most often in conversation with other adults, but none so far has investigated whether creaky voice is prevalent in a different register, namely, Infant-Directed Speech (IDS). In this study, English-speaking parents were recorded speaking naturalistically to both an adult and their infant child. We tested whether creak, measured using mean f0 and three measures of glottal constriction (H1-H2, H1-A1, and H1-A2), was more or less prevalent in each register, hypothesizing that IDS, with its patterns of higher pitch and shorter utterance duration, would have less creaky voice. Results indicated that IDS did have higher pitch, as well as greater creakiness according to some, but not all, of the measures of glottal constriction. The amount of creak in IDS was less than what could be accounted for simply by factoring in the effects of higher fundamental frequency and shorter utterance duration; thus, we discuss whether parents might employ the opposite of creak (e.g., breathy or falsetto voice) as part of a "caretaker persona" that dissociates from the traits that creak typically indexes in the context of North American English.
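As a rough illustration of one of the glottal-constriction measures named above, here is a minimal sketch of computing H1-H2 from a single voiced frame. It is a simplification under stated assumptions (f0 is assumed known, no formant correction), not the authors' pipeline; H1-A1 and H1-A2 would additionally require formant tracking.

```python
# Minimal sketch (illustrative only): H1-H2 is the amplitude of the first
# harmonic minus that of the second, in dB; lower values are associated with creak.
import numpy as np

def harmonic_amplitude_db(frame, sr, freq, half_width_hz=20):
    """Peak spectral amplitude (dB) within a small window around `freq`."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    band = (freqs >= freq - half_width_hz) & (freqs <= freq + half_width_hz)
    return 20 * np.log10(spectrum[band].max() + 1e-12)

def h1_minus_h2(frame, sr, f0):
    """H1-H2: amplitude difference between the first two harmonics."""
    return harmonic_amplitude_db(frame, sr, f0) - harmonic_amplitude_db(frame, sr, 2 * f0)

# Toy example: a synthetic voiced frame at f0 = 200 Hz with a weaker 2nd harmonic.
sr, f0 = 16000, 200.0
t = np.arange(0, 0.05, 1.0 / sr)
frame = np.sin(2 * np.pi * f0 * t) + 0.3 * np.sin(2 * np.pi * 2 * f0 * t)
print(round(h1_minus_h2(frame, sr, f0), 1))  # positive, roughly 10 dB for this toy frame
```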
March 11
Ryan Bennett (UC Santa Cruz): Prosodic smothering is idiosyncratic and lexical
Prosodic units like the phonological word ω or phonological phrase φ typically correspond to morpho-syntactic units, such as syntactic terminals or XPs. Even when this correspondence is imperfect, it is still lawful and systematic: deviations from syntax-prosody isomorphism can usually be attributed to grammatical principles (e.g. size constraints on prosodic units) which apply broadly and predictably to all relevant prosodic structures in the language. In Japanese, for example, compound words like keizi-sosyoohoo 'code of criminal procedure' are realized as phonological phrases φ rather than phonological words ω when the second member of the compound is too large (Ito & Mester 2021).
Recently, it has been observed that certain morphemes or morpheme-classes may idiosyncratically disrupt normal patterns of prosodification, an effect dubbed 'prosodic smothering' (Bennett et al. 2018, Rolle & Hyman 2019). As one example, negation in Macedonian expands the stress domain of the verb to include preverbal clitics: {clitic ['verb]}, but {['neg clitic verb]}. Most analyses of these patterns assume that prosodically exceptional elements (i) have regular, unremarkable syntax; (ii) are lexically specified to trigger their unique prosodic effects.
An alternative approach to smothering inverts this analysis: it assumes (i) that the syntax of smothering triggers is special; and (ii) that their prosodic behavior follows from their special syntax, and therefore does not need to be lexically specified (Branan to appear). In this talk I argue against the particular command-based theory of smothering proposed by Branan, and, more broadly, in favor of the claim that smothering effects cannot be reduced to syntactic differences and instead require idiosyncratic lexical specification. I will also speculate that conditions on language acquisition may help resolve an important puzzle raised by Branan, namely the fact that smothering triggers often form coherent morpho-syntactic classes.
April 1
Julianne Kapner (UC Berkeley): The University Next Door: Sound change in an urban Rochester neighborhood
Recent studies have documented the retreat of the Northern Cities Shift (NCS) in the Inland North, the advance of split pre-nasal and pre-oral /æ/ (BAT/BAN Split), and the merger of /ɔ/ and /ɑ/ (COT and CAUGHT; Low-Back Merger) in dialects throughout North America. We investigate the status of these three changes-in-progress among fifteen speakers in an urban neighborhood of Rochester, New York. Consistent with other recent studies, we find evidence that the NCS, long a characteristic feature of the Inland North region, is rising above the level of consciousness, acquiring stigma, and retreating in apparent time. We also find apparent-time evidence that Rochester speakers are adopting the supralocal BAT/BAN Split and Low-Back Merger, with the former rising above the level of consciousness. In addition, we examine an underexplored social factor in language change: interaction with higher education. We find that speakers who are more socially oriented toward higher education exhibit fewer features of the NCS and more adoption of the supralocal speech patterns. Rather than analyzing education as a proxy for social class, we argue that it is the quality and intensity of interaction with higher education that promotes these changes, perhaps by increasing speakers’ awareness of local variants and by providing additional motivation to adapt.
April 8
Marko Drobnjak (UC Berkeley): Evaluation of testimony: speech perception vs. witness credibility
Speech carries a substantial amount of information about the speaker’s identity. It is possible to infer, among other things, the speaker’s age, gender, or ethnicity from speech alone. What happens when biases based on speech perception affect the assessment of the credibility of testimony in court? Previous empirical research has shown that witness credibility is influenced by several factors, such as ethnicity, socio-economic background, or perceived masculinity during speech interactions in court. Based on these findings, a novel experiment has been designed for the present study, focusing primarily on how dialectal features affect the assessment of the credibility of testimony. Slovene, a dialectologically highly diverse language, was chosen as the language of the experiment because Slovenia has one of the lowest levels of income inequality in the EU (according to the OECD). This means the experiment at least partially controls for socioeconomic factors when testing the effects of dialects on witness credibility in court. In addition to dialectological characteristics, the central part of the experiment also tests the impact of the gender of speakers and listeners on witness credibility, while the introductory part of the experiment focuses on the perception of age. Preliminary results (N = 300) will be discussed, as the experiment is still ongoing.
April 15
Xue “Lily” Gong (UC Berkeley): Phonemic segmentation of narrative speech in human cerebral cortex
Speech processing requires extracting meaning from acoustic patterns using a set of intermediate representations based on phonemic units, syllables, words, etc. This processing requires the segmentation of acoustical events in progressively longer time windows. Here we investigated the locus of cortical phonemic processing within the speech cortical network and the granularity of phonemic segmentation within these phonemic areas. For this purpose, we collected functional MRI data while subjects listened to narrative stories. We then compared the predictive power of linearized models that used spectral features of sound, single phonemes, diphones, triphones, and full word-meaning embeddings. We identified areas in the superior temporal cortex (STC), lateral temporal cortex (LTC), and inferior frontal cortex (IFC) that selectively responded to phonemic features. In all the identified phonemic areas, single-phoneme features yielded low predictive power, while diphone features yielded the highest, suggesting that cortical segmentation of phonemic units occurs first and principally at the level of diphones. Our analyses also allowed us to accurately delimit the phonemic cortical regions and thus precisely localize the phoneme-to-lexical/semantic transition. Two principal transition regions were identified: a medial-lateral gradient in the LTC and an inferior-superior gradient in the IFC.
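A minimal sketch of the linearized encoding-model comparison described above, using ridge regression to compare the predictive power of different feature spaces for a single voxel; the feature matrices and the voxel response below are random placeholders, not the study's data or code.

```python
# Minimal sketch (illustrative only): fit one ridge model per feature space and
# compare held-out prediction performance for a single voxel's BOLD response.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

n_timepoints = 1000
rng = np.random.default_rng(0)

# Hypothetical feature matrices (same timepoints, different feature spaces):
feature_spaces = {
    "spectral": rng.normal(size=(n_timepoints, 40)),
    "phonemes": rng.normal(size=(n_timepoints, 39)),
    "diphones": rng.normal(size=(n_timepoints, 300)),
    "semantic": rng.normal(size=(n_timepoints, 300)),
}
bold = rng.normal(size=n_timepoints)  # one voxel's response (placeholder data)

for name, X in feature_spaces.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, bold, test_size=0.2, random_state=0)
    model = RidgeCV(alphas=np.logspace(0, 4, 10)).fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Prediction correlation, as in voxelwise modeling (near zero here, since
    # the placeholder data are random).
    r = np.corrcoef(pred, y_te)[0, 1]
    print(name, round(r, 3))
```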
April 22
Maksymilian Dąbkowski (UC Berkeley): Two grammars of A'ingae glottalization: A case for Cophonologies by Phase
A’ingae (or Cofán, ISO 639-3: con) is an Amazonian isolate spoken in northeast Ecuador and southern Colombia. This paper describes and analyzes phonological processes pertinent to the glottal stop in A’ingae morphologically complex verbs. A’ingae verbal suffixes are organized in two morphophonological domains, or strata. Within the inner domain, glottal stops assign stress to the syllable which contains the second mora to the left of the glottal stop. In the outer domain, glottal stops do not have any effect on stress. In addition, some verbal suffixes delete stress (they are dominant). Dominance is unpredictable and independent of the suffix’s morphophonological domain, but dominance and phonological stratification interact in a non-trivial way: Stress-deleting suffixes of the inner domain also delete glottalization, but stress-deleting suffixes of the outer domain leave glottalization intact.
The main theoretical import of the study resides in the architecture of the A’ingae grammar, which requires a phonological formalism capable of (i) modeling phonological stratification while (ii) allowing for morpheme-specific phonological idiosyncrasies which (iii) interact with the phonological grammar of their stratum. The formalism I adopt is Cophonologies by Phase (henceforth CbP; e.g. Sande, Jenks, and Inkelas, 2020). CbP fulfills the above desiderata by associating cophonologies, or morpheme-specific phonological grammars, with (i) phase heads as well as (ii) individual morphosyntactic features, which (iii) compile together to yield the phonological ranking applied at spell-out.
May 6
Wesley dos Santos and Hannah Sande (UC Berkeley): Apparent partial reduplication in Kawahíva is total reduplication of a particular spell-out domain
We present a case of apparent partial reduplication from Kawahíva, an endangered Tupí-Guaraní language of the Brazilian Amazon. We show that what appears to be partial reduplication is in fact best analyzed as total reduplication at a particular point in the derivation. The base of the reduplicant is shown to be syntactically rather than prosodically defined; the base is a syntactic constituent that corresponds with a syntactic phase domain. The reduplicant copies all of the material in the base at the time that reduplication applies. Additionally, copied morphemes show predictable allomorphy given their morphosyntactic context, and do not seem to be subject to phonological identity with the base. We show that these facts are best analyzed in a model that assumes cyclic spell-out of syntactic material and allows for a morphological doubling approach to reduplication.
Fall 2022
September 2
CJ Brickhouse (Stanford): Revisiting California’s apparent low-back merger: a lot of thoughts about LOT and THOUGHT
Since Moonwomon (1991), linguists have observed an apparent merger in the low-back vowels of California English speakers based on overlap in F1-F2 space (e.g., D’Onofrio, et al. 2016; Holland 2014; Kennedy and Grama 2012), but Wade (2017) demonstrates that apparent mergers may be distinguished along dimensions other than F1 and F2 frequency. Presenting two apparent-time analyses, I evaluate whether Californians’ low-back vowels are truly merged in production using wordlist data from ~400 speakers across 5 regions of California. I replicate previous findings of vowel convergence in formant space but demonstrate a simultaneous divergence in vowel duration over that same period. These findings suggest that speakers might have maintained a contrast, and that the low back vowels might not have merged in California English. These findings inform the design of future perceptual experiments, demonstrating a need to manipulate length in addition to formant frequency to account for this additional potential dimension of contrast in California English. I conclude with an argument against the exclusive use of the 2-dimensional F1-F2 plane, and I suggest ways of incorporating more holistic analyses of vowel quality into the workflow of future studies.
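A minimal sketch of the two production measures at issue above (spectral overlap in F1-F2 space vs. vowel duration), computed per speaker over a hypothetical token table; the file name, columns, and summary statistics are illustrative assumptions, not the author's analysis.

```python
# Minimal sketch (illustrative only): per speaker, how far apart are LOT and
# THOUGHT in formant space, and how different are they in duration?
import pandas as pd

# Hypothetical table: columns speaker, birth_year, vowel_class ("LOT"/"THOUGHT"),
# F1, F2, duration.
tokens = pd.read_csv("california_lowback.csv")  # hypothetical file name

def speaker_summary(group):
    lot = group[group["vowel_class"] == "LOT"]
    thought = group[group["vowel_class"] == "THOUGHT"]
    # Spectral separation: Euclidean distance between class means in F1-F2 space.
    spectral_dist = (((lot[["F1", "F2"]].mean() - thought[["F1", "F2"]].mean()) ** 2)
                     .sum() ** 0.5)
    # Durational separation: difference in mean vowel duration.
    duration_diff = thought["duration"].mean() - lot["duration"].mean()
    return pd.Series({"birth_year": group["birth_year"].iloc[0],
                      "spectral_dist": spectral_dist,
                      "duration_diff": duration_diff})

by_speaker = tokens.groupby("speaker").apply(speaker_summary)
# Apparent time: convergence in formants but divergence in duration would show up
# as opposite-signed correlations with birth year.
print(by_speaker[["birth_year", "spectral_dist", "duration_diff"]].corr())
```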
September 9
Emily Grabowski (UC Berkeley): Exploring phonetic time series analysis and representations
Until recently, phonetic analyses have hinged on researcher-determined features that are derived from spectral information in the data set and measured at a fixed point, such as formants, fundamental frequency, VOT, etc. While these measures have been unmistakably useful for phonetic analysis, more recent studies in acoustic phonetics have expanded toward including the dynamic behavior of such measures in analysis. In this talk, I will present some preliminary examinations of tools and techniques used in time series analysis and representation in comparison to more traditional methods, and discuss how machine learning and statistical advances might further influence acoustic analysis.
September 16
Rachel Weissler (University of Oregon): What is incorporated in emotional prosody perception? Evidence from race perception studies and analysis of acoustic cues
This research is centered upon how American English-speaking listeners cognitively interact with Black and White voices. We investigated how individuals make judgements about the race and emotion of speakers. Participants listened to isolated words from an African American English (AAE) speaker and a Standardized American English (SdAE) speaker in happy, neutral, and angry prosodies, and were asked to indicate the perceived race and emotion of the speaker. Speech stimuli were analyzed for variation in pitch, creaky voice, and intensity, three acoustic factors used to distinguish emotion. Results of the perception study showed that SdAE was rated whitest in the happy condition, whereas AAE was rated blackest in the neutral and angry conditions. Interestingly, the acoustic measurements of the two speakers showed that they use pitch, creak duration, and intensity in similar ways (according to mean and range). The results of the perception study indicate that listeners must be relying on cues beyond emotional acoustic ones to make their decisions about the race and emotion of the speaker. We argue that the pervasiveness of the Angry Black Woman trope in the U.S. is a stereotype that may have influenced participants' choices. As this is a first foray into raciolinguistic ideologies and emotion perception, we suggest that incorporating stereotypes into the interpretation of emotion perception is crucial, as stereotypes may be a stronger driver of determining emotion from speech than acoustic cues.
September 23
Scott Borgeson (Michigan State University): Long-distance compensatory lengthening
Compensatory lengthening (CL) is the phenomenon wherein one sound in a word is deleted or shortened, and another grows longer to make up for it. In mora theory (Hayes 1989), it amounts to the transfer of a mora from one segment to another. Traditionally, the two segments involved have always been adjacent to one another, or at the very least in adjacent syllables, but in this talk, I show (with evidence from Slovak and Estonian) that they can in fact be separated by multiple syllable boundaries.
Currently, no theoretical machinery exists that can distinguish between long-distance CL (LDCL) of this sort and the more widely-attested local CL, and as a result any language that displays local CL is also predicted to tolerate LDCL, contrary to fact. To fill this gap, I propose an expanded definition of the constraint LIN that applies across all tiers in the prosodic hierarchy. This will prohibit the inversion of precedence relations between moras and segments, effectively punishing moras the further they move from their input positions and thus requiring CL to be as local as possible. The addition of this constraint accomplishes two things. First, it renders LDCL more marked than local CL, and thus guarantees that LDCL should be rarer cross-linguistically, and disfavored even in the languages that do tolerate it. Second, it nevertheless allows for the existence of LDCL in some cases—specifically, if LIN is dominated by some markedness constraint, and if local CL violates that constraint but LDCL does not, then LDCL will be selected instead. For example, CL in Estonian may not create new long vowels or geminates because of the constraints *VV and *GEM. If local CL can take place without doing so, it is always selected, but if local CL violates this prohibition and LDCL does not, then LDCL is chosen instead.
September 30
Noah Hermalin (UC Berkeley): An Introduction to Phonographic Writing Systems
This talk is intended to be a general introduction to phonographic writing systems, which are writing systems for which graphic units primarily map to phonological or phonetic information. The first portion of the talk will go over some basic writing system terminology, then discuss the typological categories which are commonly used to describe phonographic writing systems, including syllabaries, alphabets, abjads, and abugidas/alphasyllabaries. From there, we'll go into more detail on the range of extant (and possible) phonographic writing systems, with an eye for questions such as: what information is more or less likely to be explicitly encoded in different (types of) phonographic writing systems; what strategies do different writing systems use to convey similar information; what challenges do extant writing systems pose for common typological categories of writing systems; and what relevance do phonographic writing systems have for phonetics and phonology research. Time permitting, the talk will close with a brief discussion of some ongoing work regarding how one can quantify how phonographic a writing system is.
October 7
Allegra Robertson (UC Berkeley): Rough around the edges: Representing root-edge laryngeal features in Yánesha’
In Yánesha’ (Arawakan), the phonetic, phonotactic, and prosodic traits of laryngeals indicate that they are suprasegmental features associated with vowel segments, resulting in laryngealized vowels /Vʰ/ and /Vˀ/ (Duff-Tripp, 1997; Robertson, 2021). The non-segmental status of laryngeals is at odds with most Arawakan languages (Michael et al., 2015), but their unusual characteristics do not end there. Although laryngeals are contrastive and lexically consistent, they emerge and disappear at morpheme boundaries in seemingly unexpected ways. Furthermore, noun possession data imply that, in addition to lexical and (occasional) phonological factors, morphosyntactic factors affect laryngeals. Starting from an agnostic and purely intuitive space, this talk seeks to clarify and formalize the complex behavior of laryngeals in Yánesha’, using original data from 2022 fieldwork. In this preliminary study, I explore the relative advantages of three frameworks to capture Yánesha’ laryngeal behavior: Autosegmentalism (e.g. Goldsmith, 1976), Q-theory (e.g. Inkelas & Shih, 2014), and Cophonology Theory (e.g. Inkelas, Orgun & Zoll 1997). I provisionally conclude that two of the three frameworks can account for this phenomenon, but with differing implications for the constraints at play.
October 14
AMP practice talks
October 21
Canceled (AMP)
October 28
Michael Obiri-Yeboah (Georgetown): Grammatical Tone Interactions in Complex Verbs in TAM Constructions in Gua
Research on tonal properties has revealed the interesting roles that tone plays in using pitch to mark both lexical and grammatical properties of language. Rolle (2018) provides crosslinguistic patterns and analytical tools for accounting for grammatical tone: the marking of grammatical structures by means of different tonal patterns. In this talk, I discuss grammatical tone (GT) in Gua and show that GT in Gua can be analyzed as tone melodies: L, HL, and LH. I show further that, aside from the tonal patterns on verb roots (e.g. bòlí ‘break!’, bòlì ‘breaks’, bólì ‘broke’), there are verbal prefixes that increase the complexity of the tense, aspect, and mood (TAM) structures in the language. I provide a formal phonological analysis that accounts for the interactions between GT and complex verbs in TAM structures in Gua. The analysis also enhances our understanding of morphophonological interactions in linguistic theory.
November 4
Rachel Walker (UC Santa Cruz): Gestural Organization and Quantity in English Rhotic-final Rhymes
In phonological structure, the segment root node is classically the locus of temporal organization for subsegmental units, such as features, governing their sequencing and overlap (e.g. Clements 1985, Sagey 1986). Root nodes also classically figure in the calculation of weight-by-position, by which coda consonants are assigned a mora (Hayes 1989). In this talk, I discuss evidence that motivates encoding temporal relations directly among subsegmental elements, represented phonologically as gestures (Browman & Goldstein 1986, 1989). A case study of phonotactics in syllable rhymes of American English, supported by a real-time MRI study of speech articulation, provides evidence for a controlled sequence of articulations in coda liquids. This study finds support for phonological representations that include 1) sequencing of subsegments within a segment (within a liquid consonant), and 2) cross-segment partial overlap (between a liquid and preceding vowel). Further, the assignment of weight in the rhyme is sensitive to these configurations. To accommodate such scenarios, it is proposed that segments are represented as sets of gestures without a root node (Walker 2017, Smith 2018) with a requisite component of temporal coordination at the subsegmental level. A revised version of weight-by-position is proposed that operates over subsegmental temporal structure. By contrast, the scenarios motivated by the phonotactics of rhymes with coda liquids are problematic for a theory in which sequencing is controlled at the level of root nodes.
November 18
Meeting cancelled due to graduate student strike.
November 25
Meeting cancelled due to graduate student strike.
December 2
Meeting cancelled due to graduate student strike.
December 9
Meeting cancelled due to graduate student strike.