The Berkeley Phonetics, Phonology and Psycholinguistics Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology, phonetics and psycholinguistics. We meet on Fridays from 3 to 4:30 pm. Phorum is organized by Katie Russell and Maks Dąbkowski; our email usernames are "krrussell" and "dabkowski", respectively.

Fall 2022 Schedule

September 2

CJ Brickhouse (Stanford): Revisiting California’s apparent low-back merger: a lot of thoughts about LOT and THOUGHT

Since Moonwomon (1991), linguists have observed an apparent merger in the low-back vowels of California English speakers based on overlap in F1-F2 space (e.g., D’Onofrio, et al. 2016; Holland 2014; Kennedy and Grama 2012), but Wade (2017) demonstrates that apparent mergers may be distinguished along dimensions other than F1 and F2 frequency. Presenting two apparent-time analyses, I evaluate whether Californians’ low-back vowels are truly merged in production using wordlist data from ~400 speakers across 5 regions of California. I replicate previous findings of vowel convergence in formant space but demonstrate a simultaneous divergence in vowel duration over that same period. These findings suggest that speakers might have maintained a contrast, and that the low back vowels might not have merged in California English. These findings inform the design of future perceptual experiments, demonstrating a need to manipulate length in addition to formant frequency to account for this additional potential dimension of contrast in California English. I conclude with an argument against the exclusive use of the 2-dimensional F1-F2 plane, and I suggest ways of incorporating more holistic analyses of vowel quality into the workflow of future studies.
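The two dimensions of contrast at issue can be illustrated with a toy comparison: a pair of low-back vowels that look merged in the F1-F2 plane yet remain distinct in duration. All measurement values below are invented for illustration, not data from the talk.

```python
import math

# Hypothetical mean measurements for one speaker (values invented for
# illustration). F1/F2 in Hz, duration in ms.
lot = {"F1": 780, "F2": 1220, "dur": 115}
thought = {"F1": 770, "F2": 1210, "dur": 145}

# Spectral distance in the F1-F2 plane: a small distance here is what
# motivates calling the vowels "merged".
spectral_dist = math.hypot(lot["F1"] - thought["F1"], lot["F2"] - thought["F2"])

# Durational difference: a contrast can survive even when formants converge.
dur_diff = abs(lot["dur"] - thought["dur"])

print(f"F1-F2 distance: {spectral_dist:.1f} Hz")
print(f"duration difference: {dur_diff} ms")
```

With these invented numbers, the formant distance is small while the duration difference is large, which is the pattern that would support a maintained contrast despite apparent spectral merger.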

September 9

Emily Grabowski (UC Berkeley): Exploring phonetic time series analysis and representations

Until recently, phonetic analyses have hinged on researcher-determined features derived from spectral information in the data set and measured at a fixed point, such as formants, fundamental frequency, VOT, etc. While these measures have been unmistakably useful for phonetic analysis, more recent studies in acoustic phonetics have expanded toward including the dynamic behavior of measures in analysis. In this talk, I will present some preliminary examinations of tools and techniques used in time series analysis and representation, compare them to more traditional methods, and discuss how machine learning and statistical advances might further influence acoustic analysis.
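One way to see the difference between a fixed-point measure and a whole-contour comparison is dynamic time warping (DTW), a standard time-series distance. The sketch below uses invented f0 contours, chosen so that a midpoint-only measure misses a difference the contour-level comparison detects; it is illustrative, not a method from the talk.

```python
# A minimal dynamic-time-warping (DTW) distance, one standard tool for
# comparing whole contours rather than single-point measures.

def dtw(a, b):
    """Classic O(len(a) * len(b)) DTW with absolute-difference local cost."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Two invented f0 contours (Hz) with the same midpoint but different shapes.
steady_rise = [100, 105, 115, 130, 150]
late_rise = [100, 114, 115, 116, 150]

# A midpoint-only measure misses the difference entirely...
midpoint_diff = abs(steady_rise[2] - late_rise[2])
# ...while the whole-contour distance registers it.
contour_dist = dtw(steady_rise, late_rise)
```

Here `midpoint_diff` is 0 even though the trajectories differ, while the DTW distance is nonzero, which is the basic argument for analyzing dynamic behavior rather than fixed-point measures alone.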

September 16

Rachel Weissler (University of Oregon): What is incorporated in emotional prosody perception? Evidence from race perception studies and analysis of acoustic cues

This research centers on how American English-speaking listeners cognitively interact with Black and White voices. We investigated how individuals make judgments about the race and emotion of speakers. Participants listened to isolated words from an African American English (AAE) speaker and a Standardized American English (SdAE) speaker in happy, neutral, and angry prosodies, and were asked to indicate the perceived race and emotion of the speaker. Speech stimuli were analyzed for variation in pitch, creaky voice, and intensity, three acoustic factors used to distinguish emotion. Results of the perception study showed that SdAE was rated whitest in the happy condition, whereas AAE was rated blackest in the neutral and angry conditions. Interestingly, the acoustic measurements of the two speakers showed that they use pitch, creak duration, and intensity in similar ways (according to mean and range). The results of the perception study indicate that listeners must be relying on cues beyond emotional acoustic ones to make their decisions about the race and emotion of a speaker. We argue that the pervasiveness of the Angry Black Woman trope in the U.S. is a stereotype that may have influenced participants' choices. As this is a first foray into raciolinguistic ideologies and emotion perception, we suggest that incorporating stereotypes into the interpretation of emotion perception is crucial, as they may be a stronger driver in determining emotion from speech than acoustic cues.

September 23

Scott Borgeson (Michigan State University): Long-distance compensatory lengthening

Compensatory lengthening (CL) is the phenomenon wherein one sound in a word is deleted or shortened, and another grows longer to make up for it. In mora theory (Hayes 1989), it amounts to the transfer of a mora from one segment to another. Traditionally, the two segments involved have always been adjacent to one another, or at the very least in adjacent syllables, but in this talk, I show (with evidence from Slovak and Estonian) that they can in fact be separated by multiple syllable boundaries.

Currently, no theoretical machinery exists that can distinguish between long-distance CL (LDCL) of this sort and the more widely attested local CL; as a result, any language that displays local CL is also predicted to tolerate LDCL, contrary to fact. To fill this gap, I propose an expanded definition of the constraint LIN that applies across all tiers in the prosodic hierarchy. This prohibits the inversion of precedence relations between moras and segments, effectively penalizing moras the further they move from their input positions and thus requiring CL to be as local as possible.
The addition of this constraint accomplishes two things. First, it renders LDCL more marked than local CL, and thus guarantees that LDCL should be rarer cross-linguistically, and disfavored even in the languages that do tolerate it. Second, it nevertheless allows for the existence of LDCL in some cases—specifically, if LIN is dominated by some markedness constraint, and if local CL violates that constraint but LDCL does not, then LDCL will be selected instead. For example, CL in Estonian may not create new long vowels or geminates because of the constraints *VV and *GEM. If local CL can take place without doing so, it is always selected, but if local CL violates this prohibition and LDCL does not, then LDCL is chosen instead.
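The ranking logic of this argument can be sketched as a toy constraint evaluation: with *VV and *GEM ranked above LIN, local CL wins whenever it is markedness-compliant, and LDCL emerges only when local CL would violate the higher-ranked constraints. Candidate names and violation counts below are invented for illustration and are not the talk's actual tableaux.

```python
# A toy OT-style evaluation of the argument above: LIN (here, a count of
# how far a mora has moved) is violated more by long-distance CL, so local
# CL wins unless a higher-ranked markedness constraint (*VV or *GEM) rules
# local CL out. All violation counts are invented for illustration.

def evaluate(candidates, ranking):
    """Pick the candidate whose violation profile, read in ranking order,
    is lexicographically smallest (equivalent to OT evaluation under a
    total constraint ranking)."""
    return min(candidates, key=lambda c: tuple(c["violations"][k] for k in ranking))

ranking = ["*VV", "*GEM", "LIN"]  # markedness dominates LIN

# Case 1: local CL creates no long vowel or geminate -> local CL wins on LIN.
case1 = [
    {"name": "local CL", "violations": {"*VV": 0, "*GEM": 0, "LIN": 1}},
    {"name": "LDCL",     "violations": {"*VV": 0, "*GEM": 0, "LIN": 3}},
]

# Case 2: local CL would create a long vowel (a *VV violation) -> LDCL
# is chosen despite its extra LIN violations.
case2 = [
    {"name": "local CL", "violations": {"*VV": 1, "*GEM": 0, "LIN": 1}},
    {"name": "LDCL",     "violations": {"*VV": 0, "*GEM": 0, "LIN": 3}},
]
```

Under this ranking, `evaluate(case1, ranking)` selects local CL and `evaluate(case2, ranking)` selects LDCL, mirroring the Estonian pattern described above.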

September 30

Noah Hermalin (UC Berkeley): An Introduction to Phonographic Writing Systems

This talk is intended to be a general introduction to phonographic writing systems, which are writing systems whose graphic units primarily map to phonological or phonetic information. The first portion of the talk will go over some basic writing system terminology, then discuss the typological categories commonly used to describe phonographic writing systems, including syllabaries, alphabets, abjads, and abugidas/alphasyllabaries. From there, we'll go into more detail on the range of extant (and possible) phonographic writing systems, with an eye to questions such as: what information is more or less likely to be explicitly encoded in different (types of) phonographic writing systems; what strategies do different writing systems use to convey similar information; what challenges do extant writing systems pose for common typological categories of writing systems; and what relevance do phonographic writing systems have for phonetics and phonology research. Time permitting, the talk will close with a brief discussion of some ongoing work on how one can quantify how phonographic a writing system is.

October 7

Allegra Robertson (UC Berkeley): Rough around the edges: Representing root-edge laryngeal features in Yánesha’

In Yánesha’ (Arawakan), the phonetic, phonotactic, and prosodic traits of laryngeals indicate that they are suprasegmental features associated with vowel segments, resulting in laryngealized vowels /Vʰ/ and /Vˀ/ (Duff-Tripp, 1997; Robertson, 2021). The non-segmental status of laryngeals is at odds with most Arawakan languages (Michael et al., 2015), but their unusual characteristics do not end there. Although laryngeals are contrastive and lexically consistent, they emerge and disappear at morpheme boundaries in seemingly unexpected ways. Furthermore, noun possession data imply that, in addition to lexical and (occasional) phonological factors, morphosyntactic factors affect laryngeals. Starting from a framework-agnostic standpoint, this talk seeks to clarify and formalize the complex behavior of laryngeals in Yánesha’, using original data from 2022 fieldwork. In this preliminary study, I explore the relative advantages of three frameworks for capturing Yánesha’ laryngeal behavior: Autosegmentalism (e.g. Goldsmith, 1976), Q-theory (e.g. Inkelas & Shih, 2014), and Cophonology Theory (e.g. Inkelas, Orgun & Zoll, 1997). I provisionally conclude that two of the three frameworks can account for this phenomenon, but with differing implications for the constraints at play.

October 14

AMP practice talks

October 21

Canceled (AMP)

October 28

Michael Obiri-Yeboah (Georgetown): Grammatical Tone Interactions in Complex Verbs in TAM Constructions in Gua

Research on tone has revealed the interesting roles that tone plays in using pitch to mark both lexical and grammatical properties of language. Rolle (2018) provides crosslinguistic patterns and analytical tools for accounting for grammatical tone, the marking of grammatical structures by different tonal patterns. In this talk, I discuss grammatical tone (GT) in Gua and show that GT in Gua can be analyzed as the tone melodies L, HL, and LH. I show further that, aside from the tonal patterns in verb roots, e.g. bòlí ‘break!’, bòlì ‘breaks’, bólì ‘broke’, there are verbal prefixes that add to the complexity of the tense, aspect, and mood (TAM) structures in the language. I provide a formal phonological analysis that accounts for the interactions between GT and complex verbs in TAM structures in Gua. The analysis also enhances our understanding of morphophonological interactions in linguistic theory.

November 4

Rachel Walker (UC Santa Cruz): Gestural Organization and Quantity in English Rhotic-final Rhymes

In phonological structure, the segment root node is classically the locus of temporal organization for subsegmental units, such as features, governing their sequencing and overlap (e.g. Clements 1985, Sagey 1986). Root nodes also classically figure in the calculation of weight-by-position, by which coda consonants are assigned a mora (Hayes 1989). In this talk, I discuss evidence that motivates encoding temporal relations directly among subsegmental elements, represented phonologically as gestures (Browman & Goldstein 1986, 1989). A case study of phonotactics in syllable rhymes of American English, supported by a real-time MRI study of speech articulation, provides evidence for a controlled sequence of articulations in coda liquids. This study finds support for phonological representations that include 1) sequencing of subsegments within a segment (within a liquid consonant), and 2) cross-segment partial overlap (between a liquid and preceding vowel). Further, the assignment of weight in the rhyme is sensitive to these configurations. To accommodate such scenarios, it is proposed that segments are represented as sets of gestures without a root node (Walker 2017, Smith 2018) with a requisite component of temporal coordination at the subsegmental level. A revised version of weight-by-position is proposed that operates over subsegmental temporal structure. By contrast, the scenarios motivated by the phonotactics of rhymes with coda liquids are problematic for a theory in which sequencing is controlled at the level of root nodes.

November 11

no meeting (Veterans Day)

November 18

Nay San (Stanford)

November 25

no meeting (Thanksgiving)

December 2

Simon Todd (UC Santa Barbara)

December 9

no meeting (RRR week)