Phorum 2018

Spring 2018

March 5, 2018 - Brian Smith (UC Berkeley)

"Surface optimization in English function word allomorphy"

In OT accounts of Phonologically Conditioned Allomorphy (PCA), phonological conditioning is the result of markedness constraints, which favor allomorphs that optimize surface structure (e.g. Mester 1994, Tranel 1996, Mascaró 1996, Kager 1996). For example, the use of an before vowels in English can be analyzed in OT as the result of a high-ranking constraint against onsetless syllables, which militates against a apple (Nevins 2011). The surface optimization approach has been challenged generally by Paster (2006) and Embick (2010) on the basis of languages with non-optimizing patterns of PCA, and the approach has been challenged specifically for English a/an by Pak (2016), who argues that surface optimization can’t account for the use of an before emphatic glottal stop (e.g. an [ʔ] apple, when apple is emphasized) or the use of a before pause-fillers (e.g. a… um… apple). In this talk, I argue for a surface optimization analysis of English a/an, using additional data from the variable realization of function words, such as of ([əv]~[ə]), the ([ði]~[ðə]), and to ([tu]~[tə]). Using data from the Buckeye corpus (Pitt et al. 2007), I show that all four function words conspire to avoid the same marked structure: a lax vowel followed by a vowel. In addition to conditioning function word allomorphy, the constraint *Lax-Vowel-Vowel drives glottal stop epenthesis (samba[ʔ]-ing, uh[ʔ]oh), is active in phonotactics (*[bə.o]), and conditions other cases of suffixation (such as –(a)thon and –(a)licious). Variation in the Buckeye data, both within and across speakers and within and across function words, along with the behavior of function words before pause fillers, can be captured in a weighted constraint model where *Lax-Vowel-Vowel is weighted with respect to constraints that encode the default allomorph for each function word.
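As a rough illustration of how a weighted constraint model of this kind works (the constraint names, weights, and MaxEnt-style formulation below are placeholders, not the values argued for in the talk), the following sketch evaluates a vs. an before a vowel-initial word, where each candidate's harmony is the weighted sum of its constraint violations:

```python
# Illustrative sketch only: a toy MaxEnt/Harmonic-Grammar-style evaluation of
# English a/an before a vowel-initial word. Constraint names and weights are
# hypothetical, not those presented in the talk.
import math

WEIGHTS = {
    "*LaxV-V": 3.0,    # penalize a lax vowel immediately followed by a vowel
    "Default-a": 1.0,  # penalize use of the non-default allomorph 'an'
}

def violations(allomorph, next_word_starts_with_vowel):
    """Count violations for one candidate (allomorph + following word)."""
    v = {c: 0 for c in WEIGHTS}
    if allomorph == "a" and next_word_starts_with_vowel:
        v["*LaxV-V"] = 1       # [ə] + vowel creates the marked sequence
    if allomorph == "an":
        v["Default-a"] = 1     # 'an' departs from the default allomorph
    return v

def maxent_probs(candidates, next_word_starts_with_vowel):
    """P(candidate) proportional to exp(-harmony); harmony = sum(weight * violations)."""
    harmonies = {}
    for cand in candidates:
        v = violations(cand, next_word_starts_with_vowel)
        harmonies[cand] = sum(WEIGHTS[c] * v[c] for c in WEIGHTS)
    z = sum(math.exp(-h) for h in harmonies.values())
    return {cand: math.exp(-h) / z for cand, h in harmonies.items()}

print(maxent_probs(["a", "an"], next_word_starts_with_vowel=True))
# -> 'an' strongly preferred before a vowel; lowering the *LaxV-V weight
#    relative to Default-a yields more variable outcomes, the kind of
#    gradient behavior a weighted model can encode.
print(maxent_probs(["a", "an"], next_word_starts_with_vowel=False))
# -> 'a' preferred before a consonant-initial word.
```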

March 19, 2018 - Jochen Trommer (University of Leipzig) & Eva Zimmermann (UBC Vancouver)

"The strength and weakness of tone: A new account to tonal exceptions and tone representations"

In this talk we show that Gradient Symbolic Representations (= GSR; Smolensky and Goldrick, 2016; Rosen, 2016) open up a new perspective on a variety of central aspects in the phonology of tone: exceptional triggers and undergoers in tone spreading and association, the representation of multiple tone heights, and the elusive behavior of contour tones. Based on these phenomena, we argue – dissenting from earlier applications of GSR to segmental phonology – that gradience is not restricted to inputs, but may also be retained in outputs.

In the Gradient Symbolic Representations approach, phonological representations may have different degrees of presence in an underlying form, expressed as numerical activities. Harmony evaluation is formally modeled inside Harmonic Grammar, where constraints are weighted, not ranked (Legendre et al., 1990). Gradient Symbolic Representations have so far been motivated mainly by segmental phenomena (such as French liaison). In our talk, we present three case studies applying the framework to tonal phenomena. In a first case study on Oku (Hyman, 2010), we show that complex representational assumptions about Grassfields Bantu tones – combinations of tonal subfeatures and floating features – can be replaced by the assumption that differently behaving High, Low and Mid tones simply have different gradient activation values for the atomic tone features Low and High. In a second case study, based on the phrasal tonology of the Western Nilotic language Jumjum (Andersen, 2004), we argue that an apparent underlying contrast between high and falling tones can also be fruitfully reinterpreted as a distinction between strongly and weakly activated high tones, an analysis which is much less abstract since the alleged falling tone surfaces as High in most contexts, whereas its Falling realizations are due to independently motivated additional Low tones (e.g. the utterance-final Low boundary tone). Our third case study deals with lexical exceptions in the tonology of two varieties of Mixtec (Otomanguean; Pike, 1944; Mak, 1950; Hunter and Pike, 1969; McKendry, 2013), where we find both morphemes that unexpectedly don’t host a floating tone and floating tones that unexpectedly don’t associate to TBUs. Both these patterns of exceptional non-undergoers and exceptional non-triggers follow under the assumption of gradiently active phonological elements that gradiently satisfy/violate markedness constraints: some morphemes have tones that are only weakly active (= not a fatal problem for *FLOAT) and others contain a weakly active TBU (= a dispreferred host for floating tones).

In all three analyses, output gradience, a possibility explicitly rejected in earlier work on GSR, plays a crucial role.
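A minimal worked example may help make the harmony arithmetic concrete. The sketch below, with invented constraints, weights, and activity values rather than anything from the case studies, shows how scaling a *FLOAT violation by a floating tone's activity can flip the winner between associating the tone and leaving it floating:

```python
# A deliberately simplified sketch of gradient constraint violation in Harmonic
# Grammar, in the spirit of Gradient Symbolic Representations. The constraint
# set, weights, and activity values are invented for illustration only.

WEIGHTS = {"*FLOAT": 4.0, "DEP-LINK": 3.0}  # hypothetical weights

def harmony(tone_activity, associate):
    """Harmony (negated penalty) of one candidate.

    *FLOAT is violated in proportion to the floating tone's activity if the
    tone stays floating; DEP-LINK assigns a fixed penalty for inserting an
    association line to a host TBU.
    """
    if associate:
        return -WEIGHTS["DEP-LINK"]
    return -WEIGHTS["*FLOAT"] * tone_activity

def winner(tone_activity):
    candidates = {"associate": harmony(tone_activity, True),
                  "stay floating": harmony(tone_activity, False)}
    return max(candidates, key=candidates.get)

print(winner(1.0))  # fully active tone: association wins, since floating costs 4.0
print(winner(0.3))  # weakly active tone: floating costs only 1.2, so it stays afloat
```

In this toy setup, a weakly active floating tone behaves as an exceptional non-trigger simply because its *FLOAT violation is not fatal, which is the intuition behind the gradient-activity account of the Mixtec exceptions.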

April 2, 2018 - Aditi Lahiri (Somerville College, Oxford University)

"Metrical stress: Effects of loans in English"

April 16, 2018 - Reiko Kataoka & Hahn Koo (San Jose State)

"Comparing malleability of phonetic category between [i] and [u]"

Listeners can retune phonetic categories through “lexically-guided perceptual learning” (Norris, McQueen, & Cutler, 2003), whereby listeners come to recognize phonetically ambiguous sounds as particular phonemes after being exposed to these sounds within lexical contexts. While it is an adaptive mechanism that allows listeners to cope with a wide range of new and unfamiliar pronunciation patterns, recent studies suggest that there are various constraints on perceptual learning and that there is still much to learn about its precise mechanism. Factors constraining the process reported in the literature include speaker conditions, listener characteristics, exposure conditions, and identity of target sounds that listeners learn to retune.

We propose the variability of a phonetic category as another factor constraining perceptual learning and present an experimental study in support of our proposal. Our hypothesis is that a more variable category resists retuning more than a less variable category. In our experiment, we compared two categories, [i] and [u], the latter of which is more variable and therefore predicted to resist retuning more. Two groups of American listeners were exposed to ambiguous vowels ([i/u]) within words that index a phoneme /i/ (e.g., athl[i/u]t) (i-group) or /u/ (e.g., aftern[i/u]n) (u-group). They were also asked to categorize sounds from a [bip]-[bup] continuum before and after the exposure. The i-group significantly increased /bip/ responses after exposure, but the u-group did not change their responses significantly.

We discuss implications of our results on the relationship among asymmetric variability, confusability, and malleability of phonetic categories. We also discuss how representation-based models of speech perception might account for the asymmetric perceptual learning effect, highlighting the complex relationship among distribution of sounds, their mental representation, and speech perception.

April 23, 2018 - Srikantan Nagarajan (UCSF)

"Psychophysical and imaging studies on speech motor control"

Normal speech is produced using a combination of feedback and feedforward control. In feedback control, sensory information is used to guide and correct an ongoing movement. Feedforward control involves executing movements in a predictive manner, using knowledge gained from past motor actions to anticipate the sensory outcomes of current motor commands. Feedback control can be studied in the laboratory by examining online responses to auditory feedback perturbations. Feedforward control can be studied by examining sensorimotor adaptation of speech resulting from consistent exposure to altered feedback. In this talk I will describe three studies from our research group on speech motor control in healthy adults and in various patient populations. First, I will describe novel findings that demonstrate real-time feedback control of formants. Second, I will describe a novel dependence of sensorimotor adaptation of speech on the direction of the feedback alteration. Finally, I will describe feedback and feedforward control of speech in three patient populations: cerebellar degeneration, genetic variants of autism spectrum disorders, and Alzheimer's disease (AD), including imaging studies during speech motor control in AD.

April 30, 2018 - Denis Bertet (Université Lumière–Lyon 2, DDL)

"What exactly is the phonological feature [nasality] in Ticuna (isolate, Western Amazonia)?"

Why is *[pã] (and, more generally, any Oral Voiceless Stop–Nasal Vowel combination) nowhere to be found on the surface in Ticuna, if the phonological feature of nasality seems to be a syllabic suprasegment in the language? How can we account for the absence of what one would expect to obtain as the realization of /{pa}[+nasal]/? In this talk, several hypotheses will be explored in an attempt to understand what phonologically underlies the typologically unusual distribution of nasality observed on the surface in San Martín de Amacayacu Ticuna (Amazonas, Colombia).

Fall 2018

August 27, 2018 - Round Robin

Please join us as we get up to date with what the Phorum community is working on with a round robin discussion of current research projects and summer work.

September 24, 2018 - Amanda Rysling: Regressive spectral assimilation bias in ambiguous speech sound perception

The vast majority of work on segmental perception focuses on how listeners differentiate adjacent speech sounds from each other, or "compensate for coarticulation." Much of this assumes that listeners are so successful that their failures, when adjacent speech sounds are heard as similar, are vanishingly rare (see Ohala, 1981, inter multa alia). I demonstrate that this assumption is wrong: listeners in specific clear listening conditions productively hear the first of two sounds as similar to the second. This tendency is argued to be responsible for the overwhelming typological prevalence of regressive major place assimilation in the phonologies of the world's languages.

October 1, 2018 - Florian Lionnet: Phonetically grounded gradient faithfulness: the case of conditional feature affixation in Laal

In this paper I describe and analyze an unusually complex case of multiple feature affixation in Laal (endangered isolate, Chad), where one of the plural affixes consists of an L tone and two vowel features, [+high] and [+round], all replacive, i.e. realized on all possible bearers (vowels) in the base. Most interestingly, as seen in (1a,b) vs. (1c), the realization of the floating [+round] is conditioned by the presence of a labial consonant in the base, in any position (word-initial in (1a), word-final in (1b)). This paper focuses on the conditional realization of [+round].

(1)
        singular    plural     gloss          L     [+high]   [+round]
   a.   mɛ̄lāg       mỳlùg      ‘be red’       yes   yes       yes
   b.   tārɨ̄m       tùrùm      ‘fish sp.’     yes   yes       yes
   c.   ɗāgān       ɗɨ̀gɨ̀n      ‘be light’     yes   yes       *

I reanalyze the realization of the [+round] feature as being conditioned by the presence of a labialized vowel (non-round vowel coarticulated with a labial consonant) in the base. The phonetically grounded analysis I propose, couched in Harmonic Grammar, integrates both subfeatural representations (Lionnet 2017) and scalar constraints, and models the idea that faithfulness to coarticulated segments is weaker than faithfulness to non-coarticulated segments.
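As a schematic illustration of the scalar-constraint idea (the weights, subfeatural values, and constraint names below are made up for exposition, not the analysis's actual numbers), the sketch lets the cost of rounding a base vowel shrink as that vowel's phonetic degree of rounding, due to a neighboring labial consonant, increases, so the floating [+round] docks only in bases containing a labial:

```python
# A toy Harmonic Grammar sketch of scalar faithfulness to a subfeatural degree
# of rounding, loosely modeled on the idea that faithfulness to coarticulated
# segments is weaker than faithfulness to non-coarticulated segments.
# Weights, subfeatural values, and constraint names are hypothetical.

WEIGHTS = {"REALIZE-[+round]": 3.0, "IDENT[round]": 4.0}

def harmony(base_rounding, dock):
    """base_rounding: phonetic degree of rounding on the base vowel (0..1),
    higher when the vowel is coarticulated with a labial consonant.
    Docking the floating [+round] violates IDENT[round] in proportion to how
    far the vowel must move; failing to dock violates REALIZE-[+round]."""
    if dock:
        return -WEIGHTS["IDENT[round]"] * (1.0 - base_rounding)
    return -WEIGHTS["REALIZE-[+round]"]

def floating_round_docks(base_rounding):
    return harmony(base_rounding, True) > harmony(base_rounding, False)

print(floating_round_docks(0.4))  # vowel labialized by a neighboring labial: True
print(floating_round_docks(0.0))  # no labial anywhere in the base: False
```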

October 15, 2018 - Talks by Kayla Palakurthy and Myriam Lapierre

Kayla Palakurthy: The role of similarity in sound change: Variation and change in Diné affricates

Many studies have documented an increase in variation and frequency of change in communities undergoing language shift. Phonologically similar segments appear particularly vulnerable to change, but phonetic documentation of specific changes in minority languages remains limited. This paper presents a study of incipient sound changes in the Diné (Navajo) affricates: /tl/ > /gl/ and /tɬ'/ > /kɬ'/. Variation among older, proficient speakers points to the relevance of phonetic similarity in these changes, confirmed through acoustic analysis, while a strong correlation with age suggests external pressure, as younger speakers have less exposure to the Diné language and likely substitute the English cluster /gl/ for /tl/. This study shows how multiple motivators can be identified for changes in a threatened language that otherwise appear to be straightforward substitutions from a dominant language.

Myriam Lapierre & Lev Michael: Nasal harmony in Tupí-Guaraní: A comparative synthesis

Complex nasal segments (e.g. [mb], [nd], and [ŋɡ]) and nasal harmony are among the most prominent topics in lowland South American phonology (Storto & Demolin 2012), with the Tupí-Guaraní (TG) family being one of the major genetic groupings that exhibit these phenomena. Despite the ubiquity of these phenomena in TG languages, however, works on this family have not reached consensus on how to analyze them, with similar systems of nasality-related phenomena receiving starkly different analyses. This talk presents a typological and analytical synthesis of nasal harmony phenomena in TG, based on a comprehensive review of works on this phenomenon in the languages of the family. First, we present a typological overview of nasal harmony in the family, distinguishing languages that exhibit the phenomenon from those that do not. Second, we review the major types of analyses presented in discussions of nasal harmony in the TG literature, namely 1) the presence of a [+nasal] prosodic feature that attaches to the phonological word (e.g., Gregores & Suárez 1967); 2) linear leftward spreading from a [+nasal] trigger to the left edge of a relevant morphophonological domain (e.g., Cardoso 2009); and 3) non-local leftward spreading of a nasal feature from a [+nasal] vowel (Thomas 2014). We provide a unified framework for analyzing nasal harmony in the TG family based on a unified treatment of nasal segments. Specifically, we argue that nasality-related phonological processes in TG languages must be separated into two types: (1) segmental nasality, which has strictly local coarticulatory effects on immediately adjacent segments, and (2) nasal harmony, which has long-distance assimilatory effects on a string of segments contained inside a given morphophonological domain (e.g., a phonological word).

October 22, 2018 - Eleanor Glewwe: Complexity Bias and Substantive Bias in Phonotactic Learning

Artificial grammar learning studies investigating synchronic biases in phonological learning have uncovered robust evidence for complexity bias (bias toward featurally simpler patterns) but little for substantive bias (bias toward phonetically natural patterns) (Moreton & Pater 2012). I present a phonotactic learning experiment that tested for both complexity bias and substantive bias by comparing how well subjects learned different distributions of a stop voicing contrast. The results support complexity bias but not substantive bias. A second experiment that changed the features of the filler consonants confirms the complexity bias effect and also supplies evidence for substantive bias. Together, the two experiments offer mixed evidence for substantive bias but stronger evidence for complexity bias while also demonstrating how the broader phonological structure of an artificial language affects performance.

October 29, 2018 - Sarah Bakst and Caroline A. Niziolek: Self-monitoring in L1 and L2: a magnetoencephalography study

Learning to produce new phonetic categories in a second language can be challenging. In order to reach phonetic targets, speakers must be able to detect and subsequently correct themselves when they stray from their targets. Previous evidence from magnetoencephalography (MEG) (Niziolek et al. 2013) has shown that speakers can detect when they are straying from targets in their native language, and that they can use their auditory feedback to modify their vowel formant trajectories while talking. In this MEG study we examine how L2 speakers of French are able to detect and correct for errors while producing newly-acquired phonetic categories.

November 5, 2018 - Jennifer Bellik: Vowel intrusion in Turkish onset clusters

Inserted vowels are typically assumed to be epenthetic, but can also be intrusive --- percepts created by gestural timing (Hall, 2003). In this talk, I argue that this is the case for vowels that break up complex onsets in Turkish. I present ultrasound and acoustic data showing that onset-repairing vowels are more affected by coarticulation than lexical vowels. These results support the interpretation that onset-repairing vowels are targetless intrusive vowels, rather than epenthetic vowels as previously claimed (Yavaş, 1980; Clements & Sezer, 1982; Kaun, 1999; Yildiz, 2010).


November 26, 2018 - Simon Todd: Word frequency effects in sound change: A listener-based exemplar model

Word frequency effects in regular sound change present a major unresolved puzzle. The literature has documented cases of regular sound change in which high-frequency words change faster than low-frequency words, other cases in which they change slower than low-frequency words, and yet other cases in which all words change at the same rate. How can these differences be explained? I propose that the answer lies in the implications of the change for the listener. I present an exemplar-based computational model of regular sound change in which the listener plays a large role, and I demonstrate that it generates sound changes with properties and word frequency effects seen empirically in corpus data. In particular, I show that the various word frequency effects in the literature follow from a single perceptual asymmetry, whereby a listener can more easily recognize high-frequency words than low-frequency words when acoustically ambiguous. Different word frequency effects arise in different kinds of sound changes because they change the listener's landscape of acoustic ambiguity in different ways.
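As a toy illustration of how a single perceptual asymmetry can modulate the rate of change (a deliberately simplified, deterministic sketch, not the model presented in the talk, and covering only the case where high-frequency words lead), the simulation below tracks a listener's category mean for one word while the ambient community pronunciation drifts; ambiguous tokens are stored at a rate that depends on word frequency:

```python
# Toy sketch of the perceptual asymmetry described in the abstract: when a
# token is acoustically ambiguous, listeners recognize (and so store) it more
# readily for a high-frequency word. The drift, the ambiguity threshold, the
# inertia parameter, and the frequencies are all invented for illustration.

def simulate(word_freq, n_rounds=200, ambiguity=0.3, inertia=100.0):
    """Track one listener's category mean for a single word while the ambient
    community pronunciation drifts from 0.0 to 1.0."""
    mean = 0.0
    for t in range(n_rounds):
        ambient = t / n_rounds                    # community value this round
        clear = abs(ambient - mean) <= ambiguity  # token still unambiguous?
        weight = 1.0 if clear else word_freq      # ambiguous: stored at reduced rate
        mean += (weight / (inertia + weight)) * (ambient - mean)
    return mean

print("high-frequency word:", round(simulate(word_freq=0.9), 2))
print("low-frequency word: ", round(simulate(word_freq=0.2), 2))
# Under these assumptions the high-frequency word's category ends up closer to
# the innovative pronunciation, i.e., it participates in the change faster.
```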


December 10, 2018 - LSA Practice Talks

Amalia Skilton and David Peeters (Max Planck Institute for Psycholinguistics) - Speaker and addressee in spatial deixis: new experimental evidence.

Andrew Cheng - Style-shifting, Bilingualism, and the Koreatown Accent.