Computational and Experimental Methods

Beguš speaks at ICON 2021

December 15, 2021

Gašper Beguš will give an invited lecture at ICON 2021: 18th International Conference on Natural Language Processing during a special session on the "Representation of speech, articulatory dynamics, prosody and language in layers." The talk is titled "Interpreting internal representations of deep convolutional neural networks trained on raw speech." More info is available here. Gašper can provide the link to anyone who would like to attend.

Beguš gives two invited talks

November 29, 2021

Gašper Beguš recently gave two invited talks—one at SRPP at Sorbonne Nouvelle (Paris III) and the other at Kuhl Lab Forum, University of Washington—both titled "Interpretable comparison between auditory brainstem response and intermediate convolutional layers in deep neural networks."

Beguš publishes in TACL

November 9, 2021

Gašper Beguš's paper "Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication" has just been published in Transactions of the Association for Computational Linguistics (TACL). It is available as an Open Access download here.

The paper was also presented at EMNLP 2021. The talk is recorded here.

Congrats, Gašper!

Modeling unsupervised phonetic and phonological learning in Generative Adversarial Phonology

Gašper Beguš

This paper models phonetic and phonological learning as a dependency between random space and generated speech data in the Generative Adversarial Network architecture and proposes a methodology to uncover the network’s internal representations that correspond to phonetic and phonological features. A Generative Adversarial Network (Goodfellow et al. 2014; implemented as WaveGAN for acoustic data by Donahue et al. 2019) was trained on an allophonic distribution in English, where voiceless stops surface as aspirated word-initially before stressed vowels except if preceded by a sibilant...
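The probing methodology the abstract describes — manipulating individual latent variables and measuring a phonetic property of the generated output — can be illustrated with a toy numerical sketch. The linear "generator" and all names below are hypothetical stand-ins, not the paper's trained WaveGAN:

```python
# Toy sketch: probing a generator's latent space by sweeping one latent
# variable and measuring a phonetic property of the output.
# The "generator" here is a hypothetical stand-in, NOT a trained GAN.

def toy_generator(z):
    """Map a latent vector z to a simulated VOT (aspiration duration, ms).

    In the actual methodology, z would feed a trained GAN generator and
    VOT would be measured from the generated waveform.
    """
    base_vot = 60.0  # hypothetical mean VOT in ms
    # Pretend dimension 0 of z came to encode aspiration during training.
    return base_vot + 25.0 * z[0]

def probe_latent_dimension(dim, values, z_dim=5):
    """Hold all latent variables at 0, sweep one, record the measurement."""
    measurements = []
    for v in values:
        z = [0.0] * z_dim
        z[dim] = v
        measurements.append(toy_generator(z))
    return measurements

# Sweeping the "aspiration" dimension yields a monotonic VOT trend;
# sweeping an unrelated dimension leaves VOT flat.
vots_dim0 = probe_latent_dimension(0, [-1.0, 0.0, 1.0])
vots_dim1 = probe_latent_dimension(1, [-1.0, 0.0, 1.0])
print(vots_dim0)  # increasing
print(vots_dim1)  # constant
```

A systematic relationship between one latent dimension and one phonetic measurement is the kind of evidence the paper uses to argue that the network's internal representation encodes a phonetic feature.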

Identity-Based Patterns in Deep Convolutional Networks: Generative Adversarial Phonology and Reduplication

Gašper Beguš

This paper models unsupervised learning of an identity-based pattern (or copying) in speech called reduplication from raw continuous data with deep convolutional neural networks. We use the ciwGAN architecture (Beguš, 2021a) in which learning of meaningful representations in speech emerges from a requirement that the CNNs generate informative data. We propose a technique to wug-test CNNs trained on speech and, based on four generative tests, argue that the network learns to represent an identity-based pattern in its latent space. By manipulating only two...
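The identity-based pattern and the wug-test idea can be sketched with a toy string-based example (the latent "switch" and helper names below are illustrative only, not the network's actual representation):

```python
# Toy sketch of an identity-based (copying) pattern: partial
# reduplication copies the initial CV of the base. The latent-variable
# "switch" is a hypothetical illustration of manipulating a single
# latent variable, not the trained CNN's learned code.

def reduplicate(base):
    """Copy the initial consonant-vowel sequence of the base
    (partial reduplication), e.g. 'para' -> 'papara'."""
    return base[:2] + base

def toy_decode(base, z_redup, threshold=0.5):
    """Emit the reduplicated form when the 'reduplication' latent
    variable exceeds a threshold, otherwise the bare base."""
    return reduplicate(base) if z_redup > threshold else base

# The wug-test idea: push the latent variable to extreme values and
# check that the pattern applies even to unseen bases, which shows the
# network has learned the copying rule rather than memorized outputs.
print(toy_decode("para", z_redup=5.0))   # reduplicated form
print(toy_decode("wugi", z_redup=5.0))   # unseen base, pattern still applies
print(toy_decode("para", z_redup=-5.0))  # bare base
```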

Generative Adversarial Phonology: Modeling Unsupervised Phonetic and Phonological Learning With Neural Networks

Gašper Beguš

Training deep neural networks on well-understood dependencies in speech data can provide new insights into how they learn internal representations. This paper argues that acquisition of speech can be modeled as a dependency between random space and generated speech data in the Generative Adversarial Network architecture and proposes a methodology to uncover the network's internal representations that correspond to phonetic and phonological properties. The Generative Adversarial architecture is uniquely appropriate for modeling phonetic and phonological learning because the network is...

CiwGAN and fiwGAN: Encoding information in acoustic data to model lexical learning with Generative Adversarial Networks

Gašper Beguš

How can deep neural networks encode information that corresponds to words in human speech into raw acoustic data? This paper proposes two neural network architectures for modeling unsupervised lexical learning from raw acoustic inputs: ciwGAN (Categorical InfoWaveGAN) and fiwGAN (Featural InfoWaveGAN). These combine the Deep Convolutional GAN architecture for audio data (...
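The contrast between the two latent-code schemes named in the abstract can be sketched as follows (the helper names are illustrative, not from the released ciwGAN/fiwGAN code):

```python
# Sketch of the two latent-code schemes: categorical (one-hot) codes
# for ciwGAN vs. featural (binary) codes for fiwGAN.

def ciwgan_code(word_index, num_words):
    """ciwGAN: a one-hot categorical code -- one latent variable per word."""
    code = [0.0] * num_words
    code[word_index] = 1.0
    return code

def fiwgan_code(bits):
    """fiwGAN: a featural binary code -- n variables address 2**n classes,
    so lexical codes can share features."""
    return [float(b) for b in bits]

# Eight lexical classes need eight one-hot variables under ciwGAN...
onehot = ciwgan_code(word_index=3, num_words=8)
# ...but only three binary featural variables under fiwGAN (2**3 = 8).
featural = fiwgan_code([0, 1, 1])
print(onehot, featural)
```

In both architectures, learning is driven by an auxiliary network that must recover the code from the generated audio, which pressures the generator to output acoustically distinguishable, word-like signals for distinct codes.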

Beguš speaks at USC

October 5, 2021

Gašper Beguš gave a colloquium talk at USC Linguistics on October 4 entitled "Deep Learning and Phonology: Comparing Behavioral and Neural Speech Data with Outputs of Deep Generative Models."

Beguš publishes in Computer Speech & Language

September 16, 2021

Congrats to Gašper Beguš on the publication of his article "Local and non-local dependency learning and emergence of rule-like representations in speech data by deep convolutional generative adversarial networks" in Computer Speech & Language! Click here to download the article (Open Access).