Phorum

The Berkeley Phonetics and Phonology Forum ("Phorum") is a weekly talk and discussion series featuring presentations on all aspects of phonology and phonetics. 

We meet Mondays 12-1pm in 1303 Dwinelle.

Phorum is organized by Myriam Lapierre and Eric Wilbanks. Our email addresses are "myriam.lapierre" and "wilbanks_eric", respectively, at berkeley.edu.


Spring 2019 - Upcoming Talks


April 15

Meg Cychosz (UCB) - The lexical advantage: Kids learn words, not sounds

A critical question in phonological theory is how speech representations develop throughout childhood. In a traditional view, children acquire phonology from the bottom up, beginning with the production and perception of individual sounds. Eventually, children learn to string these phones together to construct words and build a lexicon (Berent, 2013; Dinnsen & Gierut, 2008; Jakobson, 1941/1968; Jusczyk et al., 2002). Alternative accounts suggest that children construct speech representations gradually by generalizing over language chunks such as words or syllables (Edwards et al., 2004; Ferguson & Farwell, 1975; Gathercole et al., 1991; Metsala & Walley, 1998). If children do rely on words to construct phonological representations, we should anticipate an interaction of speech production with children’s vocabulary size and language input. Specifically, children with larger vocabularies, who receive more environmental stimuli in the input, should have more abstract segmental representations. We test this hypothesis in four- and five-year-old children who completed nonword and real-word repetition tasks. The children produced the real words more accurately, and with less coarticulation, than the nonwords. Performance interacted with children’s vocabulary size and the number of words heard in the input, which we take as evidence for the primacy of the lexicon in early phonological development.

April 22

Yulia Oganian (UCSF) - A temporal landmark for syllabic representations of continuous speech in human superior temporal gyrus

The most salient acoustic features in the speech signal are the peaks and valleys that define the amplitude envelope. Perceptually, the envelope is necessary for speech comprehension. Yet the neural computations that represent the envelope, and their linguistic implications, are heavily debated. A widely held theory is that the amplitude envelope underlies segmentation of speech into syllabic units (e.g. /seg/-/men/-/ta/-/tion/), as speech amplitude peaks in syllabic centers (vowels) and reaches local minima around syllabic boundaries. In contrast, animal studies suggest that neural encoding of the speech envelope selectively represents timepoints of rapid increases in the envelope, or the continuous moment-by-moment envelope itself. I will describe a series of three experiments using high-density human intracranial recordings that address this debate. In two experiments, participants listened to natural speech at regular and slowed speech rates. Neural responses at all speech rates were driven by onset edges in the speech signal, i.e. local peaks in the first derivative of the speech envelope. A follow-up experiment with amplitude-modulated non-speech tones confirmed this result: neural responses were time-locked to onsets of amplitude modulation ramps and were larger for faster amplitude rises. Finally, acoustic analysis of natural speech revealed that 1) auditory edges reliably cue the information-rich transition between the consonant onset and vowel nucleus of syllables, and 2) the sharpness of edges, encoded in the magnitude of neural responses, cues lexical stress. In summary, our findings establish that the encoding of auditory edges in human STG underlies the perception of the temporal structure of speech.
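For readers who want to try the acoustic measure described above on their own recordings, the following is a minimal Python sketch, not the speaker's actual analysis pipeline: it extracts an amplitude envelope and finds onset edges as local peaks in the envelope's first derivative. The function name, the Hilbert-transform envelope, and the 10 Hz smoothing cutoff are illustrative assumptions, not details from the talk.

    # Illustrative sketch (not the speaker's pipeline): find "onset edges",
    # i.e. local peaks in the first derivative of the amplitude envelope.
    # Assumes a mono waveform `signal` sampled at `fs` Hz.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt, find_peaks

    def onset_edges(signal, fs, envelope_cutoff_hz=10.0):
        """Return sample indices of onset edges and their sharpness.

        Peak height in the envelope derivative serves as the
        "edge sharpness" measure mentioned in the abstract.
        """
        # Amplitude envelope: magnitude of the analytic signal.
        envelope = np.abs(hilbert(signal))

        # Low-pass the envelope so the derivative reflects slow
        # amplitude modulations rather than fine structure.
        # (The 10 Hz cutoff is an assumed, illustrative value.)
        b, a = butter(2, envelope_cutoff_hz / (fs / 2), btype="low")
        envelope = filtfilt(b, a, envelope)

        # First derivative of the envelope, in amplitude per second.
        d_envelope = np.gradient(envelope) * fs

        # Onset edges: local maxima of the derivative, rises only.
        peaks, props = find_peaks(d_envelope, height=0)
        return peaks, props["peak_heights"]

Edge times could then be compared against syllable boundaries, and edge sharpness against stress annotations, along the lines of the acoustic analysis the abstract reports.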

April 29

Daniel Silverman (San Jose State University) - TBA


Archive

A list of previous Phorum talks can be found in the Phorum Archive.