Language and Cognition

Berkeley linguists published in Journal of Language Evolution

May 1, 2022

A new paper by Berkeley linguists and colleagues has just appeared in the Journal of Language Evolution:

Noga Zaslavsky*, Karee Garvin* (PhD 2021), Charles Kemp, Naftali Tishby, and Terry Regier. 2022. The evolution of color naming reflects pressure for efficiency: Evidence from the recent past. Journal of Language Evolution. (* = co-first authors, contributed equally)

A preprint PDF is available. Congrats to all!

Didn't hear that coming: Effects of withholding phonetic cues to code-switching.

Alice Shen
Susanne Gahl
Keith Johnson
2020

Code-switching has been found to incur a processing cost in auditory comprehension. However, listeners may have access to anticipatory phonetic cues to code-switches (Piccinini & Garellek, 2014; Fricke et al., 2016), thus mitigating switch cost. We investigated effects of withholding anticipatory phonetic cues on code-switched word recognition by splicing English-to-Mandarin code-switches into unilingual English sentences. In a concept monitoring experiment, Mandarin–English bilinguals took longer to recognize code-switches, suggesting a switch cost. In an eye tracking experiment, the...

Twenty-eight years of vowels

Susanne Gahl
Harald Baayen
2019

Research on age-related changes in speech has primarily focused on comparing “young” vs. “elderly” adults. Yet, listeners are able to guess talker age more accurately than a binary distinction would imply, suggesting that acoustic characteristics of speech change continually and gradually throughout adulthood. We describe acoustic properties of vowels produced by eleven talkers based on naturalistic speech samples spanning a period of 28 years, from ages 21 to 49. We find that the position of vowels in F1/F2 space shifts towards the periphery with increasing talker age. Based on...

The processing of pseudoword form and meaning in production and comprehension: A computational modeling approach using linear discriminative learning

Chuang, Y. Y., Vollmer, M. L., Shafaei-Bajestan, E., Gahl, S., Hendrix, P., & Baayen, R. H.
2020

Pseudowords have long served as key tools in psycholinguistic investigations of the lexicon. A common assumption underlying the use of pseudowords is that they are devoid of meaning: Comparing words and pseudowords may then shed light on how meaningful linguistic elements are processed differently from meaningless sound strings. However, pseudowords may in fact carry meaning. On the basis of a computational model of lexical processing, linear discriminative learning (LDL Baayen et al., Complexity, 2019, 1–39,...

Berkeley linguists published in PNAS

December 8, 2021

A new article has been published in Proceedings of the National Academy of Sciences, co-authored by four current and former Berkeley linguists (the middle four authors). Congrats, all!

Francis Mollica, Geoff Bacon (PhD 2020), Noga Zaslavsky, Yang Xu, Terry Regier, and Charles Kemp. (2021). The forms and meanings of grammatical markers support efficient communication. Proceedings of the National Academy of Sciences, 118, e2025993118. [Preprint]

Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication

Gašper Beguš
2021

This paper models unsupervised learning of an identity-based pattern (or copying) in speech called reduplication from raw continuous data with deep convolutional neural networks. We use the ciwGAN architecture (Beguš, 2021a) in which learning of meaningful representations in speech emerges from a requirement that the CNNs generate informative data. We propose a technique to wug-test CNNs trained on speech and, based on four generative tests, argue that the network learns to represent an identity-based pattern in its latent space. By manipulating only two...

Generative Adversarial Phonology: Modeling Unsupervised Phonetic and Phonological Learning With Neural Networks

Gašper Beguš
2020

Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations. This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture and proposes a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties. The Generative Adversarial architecture is uniquely appropriate for modeling phonetic and phonological learning because the network is...

Dąbkowski speaks at CUNY 2021

February 16, 2021

Congrats to Maksymilian Dąbkowski, who will be presenting at the 34th Annual CUNY Conference on Human Sentence Processing (Thursday, March 4, at 3:45pm ET). The title of his talk is "Evidence of accurate logical reasoning in online sentence comprehension" and it is a collaboration with Roman Feiman.