Phorum 2016

Fall 2016

AUGUST 29 -

You! (Phorum Round Robin)

Come to our first Phorum of the year with a 5 minute (or less) Ph-related musing, question, or update on what you did this summer.

SEPTEMBER 5 -

No Phorum (Labor Day)

SEPTEMBER 12 -

Brian W. Smith (UC Santa Cruz)
French schwa and cumulative constraint interaction

Grammars with weighted constraints predict the existence of ganging effects: cases where two constraints combine to overcome the effect of one competing constraint. This talk presents a case study of one such ganging effect in French, using it to argue for an analysis in MaxEnt Harmonic Grammar. The basic pattern is reported in Charette (1991), who shows that epenthesis occurs if and only if two conditions are met: (1) the epenthesis site is followed by exactly one syllable; (2) the epenthesis site is after a cluster. Although Charette reports a categorical pattern, for many speakers, the pattern is subject to variation. Using experimental data, I show that epenthesis is more likely when it meets one condition, and most likely when it meets both. This can be straightforwardly modeled in a theory of variation with weighted constraints, such as MaxEnt. The MaxEnt analysis also predicts that the constraints conditioning epenthesis should play a role whenever there is phonological variation. This is borne out in French: both constraints show independent effects in schwa deletion and phonotactics.
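As a schematic illustration of the ganging logic (the weights below are invented for exposition, not taken from the talk), MaxEnt Harmonic Grammar assigns each candidate a probability from its weighted constraint violations, so two lower-weighted constraints can jointly outweigh a single higher-weighted competitor even though neither can alone:

```latex
% MaxEnt Harmonic Grammar: C_i(x) counts candidate x's violations of constraint i,
% w_i is that constraint's weight, and Cand is the candidate set.
\[
  P(x) \;=\; \frac{\exp\big(-\sum_i w_i\, C_i(x)\big)}
                  {\sum_{y \in \mathrm{Cand}} \exp\big(-\sum_i w_i\, C_i(y)\big)}
\]
% Hypothetical weights: w_1 = w_2 = 2 for two constraints favoring epenthesis,
% w_3 = 3 for a competing faithfulness constraint (e.g. DEP) penalizing it.
% If the faithful candidate violates only one of C_1, C_2, its penalty (2) is
% lower than the epenthetic candidate's (3), so faithfulness is more probable;
% if it violates both (2 + 2 = 4 > 3), the two constraints "gang up" and
% epenthesis becomes the more probable outcome.
```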

SEPTEMBER 19 -

Larry Hyman (UC Berkeley)
In search of prosodic domains in Lusoga

According to Selkirk’s (2011) “match theory”, the mapping of syntactic structure onto prosodic domains is universal. What this means is that if a language chooses to reflect syntactic or phrase structure in its phonology, certain syntax-phonology relations should be predictable (and others impossible). This potentially produces asymmetries, as in Luganda, where a verb forms a tone phrase with what follows (e.g. an object, adjunct, right-dislocation), but not with what precedes (e.g. the subject, adverbial, left-dislocation). The purpose of my talk is to raise the question whether the phrasal tonology of Lusoga, the language most closely related to Luganda, is syntactically grounded—or is free to apply without respect to syntax. I begin by briefly outlining the situation in Luganda, and then turn to Lusoga. Although extremely closely related, the two languages are quite different in their implementation (Luganda) vs. non-implementation (Lusoga) of prosodic domains. While I provide an analysis that accounts for this difference, and which respects Selkirk’s claims, I also show that there is one head-dependent syntax-specific condition in both languages that does not so easily fall into line. I conclude with discussion of the typology of phonological phrasing in Bantu.

SEPTEMBER 26 -

Ronald Sprouse (UC Berkeley)
Berkeley Phonetics Machine demo

OCTOBER 3 -

Sharon Inkelas (UC Berkeley)
Looking into segments

OCTOBER 10 -

Eric Wilbanks (UC Berkeley)
An Apparent Time Study of (str) Retraction and /tɹ/ - /dɹ/ Affrication in Raleigh, NC English

This project presents an acoustic analysis of two sound changes in progress in Raleigh, North Carolina English: retraction of /s/ in /stɹ/ clusters and affrication of /t/ and /d/ in /tɹ/ and /dɹ/ clusters. Data are drawn from 140 sociolinguistic interviews with Raleigh natives (Dodsworth & Kohn 2012). Combining acoustic analyses with techniques from natural language processing, we investigate the social and linguistic factors conditioning the gradient progression of these innovative variants. In addition to demonstrating the methodological validity of incorporating technologies from speech recognition into the sociophonetician's toolkit, investigation of these consonantal changes in relation to previous studies on vocalic restructuring in Raleigh (e.g., Dodsworth & Kohn 2012) allows for a more complete understanding of the mechanisms of variation and change occurring within the linguistic system of the region as a whole.

OCTOBER 17 -

Meg Cychosz (UC Berkeley)
Sources of variation in an emerging Parisian French vernacular

Youth language, and its ramifications for language variation and change, is of interest to sociolinguists and phoneticians alike (Eckert 1989; Foulkes & Docherty 2006; Tagliamonte & D’Arcy 2009). Many sociophonetic works have focused upon adolescent language as a source of innovation or display of in-group identity (Eckert 1989; Mendoza-Denton 2014) and have determined that adolescent speech is not merely an improper vernacular full of slang and indicative of teenage rebellion. Rather, this is an age group in a constant state of constructing a linguistic identity, rendering them more prone to early adoption of innovative variants. Nowhere is the construction of youth identity more apparent than in the often-fraught banlieue, or suburbs, of Paris, France. Rife with socioeconomic disparity, the banlieue are home to many new immigrant arrivals and have become infamous as hotbeds where the working class demonstrates against police brutality and social disempowerment (LePoutre 1997). Immigration here has resulted in an influx of new languages. Many works have examined the ensuing contact varieties (Conein & Gadet 1998; Fagyal 2010a; 2010b) and found that contact between French and the languages of immigrant communities, especially North African Arabic, has led to innovative prosodic markings, vocalic reduction, and plosive affrication. Yet immigrant populations are not uniform: those of North African origin experience more stigmatization, and less cultural assimilation, than those from Western or sub-Saharan Africa (Khosrokhavar 2016; Valfort 2015).

This study analyzes the highly proficient French speech of high school students (N=11), aged 16-18, from the banlieue. Sociolinguistic interviews were conducted in French with participants speaking in dyads. The French speech of three student groups was compared: L1 Arabic speakers, L1 Bantu language speakers (Lingala, Swahili), and monolingual French speakers (control group). Phonetic measurements were taken: F1 and F2 of front rounded and nasal vowels (7 time points, Bark difference metric normalization, N=6854) and duration of word-initial plosives (N=1325). T-tests show significant differences in mean word-initial bilabial plosive duration between L1 Bantu students and monolingual French speakers ([p]: p < .01, [b]: p < .001). Mixed-effects linear regression conducted on the vowel data indicates that the interaction of L1 and gender is a significant predictor of vowel advancement (L1 Arabic:Males: β = 1.524, p < .001; L1 Bantu:Males: β = 1.087, p = .006).

Perhaps due to their shared immigrant status, Bantu-speaking students identify with Arabic-speaking students. The affrication and increased duration of word-initial alveolar plosives (t > tʃ) are sociolinguistic markers of banlieue French in students of diverse immigrant backgrounds (Trimaille et al. 2012). Bantu students have adopted, and modified, this variant. The phonemic status of prenasalized stops in Bantu languages serves as an additional impetus for the innovation: the variation has both a social and a phonological (L1) source. F2 differences across male students from the three groups suggest that integration into the host society is a determining factor in the acquisition of variation and may exemplify a case of covert prestige. French immigration has resulted in an exponential growth of multiculturalism, yet not all incoming immigrant groups elect to adopt the standard variety. This is demonstrated by those most prone to innovation: adolescents.

OCTOBER 24 -

Andrew Wedel (University of Arizona)
Words predict how phoneme inventories change

Since the early 20th century, it has been proposed that phoneme inventories should change in ways that preserve those phoneme contrasts that do more 'work' to convey meaning (e.g., Gillieron 1918, Trubetzkoy 1939, Hockett 1967, Silverman 2010). In this talk, I’ll review recent evidence that supports this idea. When a phoneme contrast merges, words that were originally distinguished by that contrast also merge in pronunciation (like caught ~ cot in English). We have used a database of diverse languages to show that phoneme mergers are much less likely to occur when the phoneme pair distinguishes many words (Wedel et al. 2013). Instead, we show that when a phoneme pair distinguishes many words, the pronunciation of those phonemes is more likely to change in ways that do not result in merger. These alternative kinds of changes result in alterations to the phoneme inventory, but maintain existing distinctions in the lexicon. These results suggest that speakers of languages preferentially preserve phonetic distinctions when these distinctions do more work to communicate word-distinctions in their language. To support this, in the second talk of the day I will show phonetic evidence that English speakers hyperarticulate sounds when those sounds distinguish their host-word from another word. More informally, all these results suggest that when linguists have an intuition that a particular sound contrast in their language is preserved because it plays an important role in communication – they may be right. I will end by providing some thoughts about ways we can extend this kind of research to a broader range of languages, and ask for ideas.
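As a toy illustration of how the 'work' done by a contrast can be quantified (the mini-lexicon and counting scheme below are invented for exposition, not the database or measure used in Wedel et al. 2013), one common proxy is the number of minimal pairs a phoneme pair distinguishes:

```python
from itertools import combinations

def minimal_pairs(lexicon, phone_a, phone_b):
    """Count word pairs distinguished solely by the phone_a ~ phone_b contrast.

    lexicon: iterable of words, each a tuple of phoneme symbols.
    """
    count = 0
    for w1, w2 in combinations(lexicon, 2):
        if len(w1) != len(w2):
            continue
        diffs = [(a, b) for a, b in zip(w1, w2) if a != b]
        # A minimal pair differs in exactly one position, and that difference
        # must involve exactly the two phonemes of interest.
        if len(diffs) == 1 and set(diffs[0]) == {phone_a, phone_b}:
            count += 1
    return count

# Hypothetical mini-lexicon in broad transcription: caught, cot, sought, bought.
lexicon = [("k", "ɔ", "t"), ("k", "ɑ", "t"), ("s", "ɔ", "t"), ("b", "ɔ", "t")]
print(minimal_pairs(lexicon, "ɔ", "ɑ"))  # -> 1 (caught ~ cot)
```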

OCTOBER 31 -

Sam Zukoff (MIT)
Arabic Nonconcatenative Morphology and the Syntax-Phonology Interface

This paper develops a new integrated analysis of the phonological and syntactic properties of nonconcatenative morphology in (Classical/Modern Standard) Arabic. The account centers around an algorithm for sub-word linearization at the syntax-phonology interface, here termed the “Mirror Alignment Principle” (MAP). The MAP determines the ranking of Alignment constraints (McCarthy & Prince 1993) in the phonological component based on asymmetric c-command relations in the syntax. Using the MAP, we can predict the exact position of all morphemes/segments in an Arabic verbal form based on their syntactic functions and structures without any recourse to templates (cf. McCarthy 1979, 1981).

I illustrate how this syntax-phonology mapping generates idiosyncratic behavior of the Reflexive morpheme, and can explain the morphophonological distinctions between the two different types of Causatives found in the language. Additionally, I will demonstrate that this framework allows for a more complete and internally consistent phonological account of the verbal system than previous approaches.

NOVEMBER 7 -

Adam McCollum (UC San Diego)
Vowel harmony in Tutrugbu: Is it phonology or morphosyntax?

Tutrugbu, a Kwa language of Ghana, shows a pattern of cross-directional vowel harmony that is amenable to either a phonological or a morphosyntactic analysis of progressive harmony in the language. Tutrugbu roots trigger leftward, regressive assimilation of prefixes for the feature [ATR], while initial (subject) prefixes trigger rightward, progressive assimilation for the feature [RD]. Moreover, Tutrugbu appears to exhibit a variation of “sour grapes” vowel harmony (Padgett 1995; Wilson 2003), where leftward ATR harmony either assimilates all vowels within its domain of application or none. This pattern is predicted by some OT constraints (e.g. AGREE and ALIGN), and has been regarded as pathological since no languages have been shown to instantiate it. If vowel harmony is phonologically driven in Tutrugbu, then at least one instance of “sour grapes” harmony is attested, justifying the predictions associated with constraints like AGREE and ALIGN in this regard. Furthermore, the progressive labial harmony pattern strongly resembles other known labial harmony patterns found in Kaun’s (1995) typology. That being said, a number of factors suggest the realistic possibility that vowel harmony in Tutrugbu may not be phonological in nature, but rather morphosyntactic subject-verb agreement. If vowel harmony is morphosyntactic, apparent phonological blocking of agreement, as well as numerous parallels between syntax and phonology in Tutrugbu and in neighboring Tafi (Bobuafor 2013) and Logba (Dorvlo 2008), receive an explanation (see Baker & Willie 2010 for a similar analysis). In this presentation, phonological and morphosyntactic analyses of the data are discussed, focusing on the kind of relationship phonology and syntax must have in order to account for the Tutrugbu data.

NOVEMBER 14 -

Marilyn Vihman (University of York)
Phonological templates in development

This talk will be an interim report on my book in progress, covering three data-based chapters on the first words of children learning six languages (US and UK English, Estonian, Finnish, French, Italian and Welsh), their adult target forms, the use of prosodic structures at a later stage (end of the single-word period) and some examples of templates (children learning American English, French and Welsh). Discussion will address the similarity of the early forms, the challenge of consonant variegation and evidence for ‘whole-word phonology’ and for memory as a key constraint in early word learning.

NOVEMBER 21 -

Emily Clem and Lev Michael (UC Berkeley)
Principal component analysis as a tool for phonological areal typology: A South American case study

Recent years have seen a shift in the study of language contact to the coupled use of large typological datasets and computational techniques, in an effort to detect large-scale patterns and avoid subjective judgments of areality (O’Connor and Muysken, 2014; Wichmann and Good, 2014). In this talk, we demonstrate the utility of principal component analysis (PCA) for determining the major dimensions of typological diversity in the phonologies of a region, by applying this analytical technique to a database of South American phonological inventories. We identify areal and genetic patterns in the distribution of phonological segments across the continent, and show that this method is an informative, quantitatively rigorous, and non-subjective way of exploring large-scale typological patterns in a geographic region.
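As a minimal sketch of the general technique (the languages, segments, and coding below are invented, not the authors' South American database), PCA can be run over a languages-by-segments presence/absence matrix to extract the major axes of inventory variation:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical binary matrix: rows = languages, columns = presence (1) or
# absence (0) of a segment in the language's phonological inventory.
segments = ["p", "ts", "ɨ", "ʔ", "Ṽ"]  # Ṽ = contrastive nasal vowels
X = np.array([
    [1, 0, 1, 1, 1],   # language A
    [1, 1, 0, 1, 0],   # language B
    [0, 1, 0, 0, 0],   # language C
    [1, 0, 1, 0, 1],   # language D
])

pca = PCA(n_components=2)
scores = pca.fit_transform(X)            # each language's position on PC1, PC2
loadings = pca.components_               # which segments drive each component

print(pca.explained_variance_ratio_)     # share of inventory variance per PC
print(dict(zip(segments, loadings[0].round(2))))  # segment loadings on PC1
```

Languages that cluster in the resulting PC space can then be inspected for areal or genetic groupings.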

NOVEMBER 28 -

Rose-Marie Déchaine (University of British Columbia)


Spring 2016

JANUARY 25 -

Fernanda Ferreira (UC Davis)
Prediction in the processing of repair disfluencies

In recent work, we've been investigating how people interpret utterances containing repair disfluencies (e.g., 'The chef reached for some salt uh I mean some ketchup'). Our experiments involve presenting listeners with sentences at the same time that they are shown a small set of pictures. Eye movements are monitored and time-locked to different portions of the utterance to provide evidence concerning the incremental build-up of interpretations. One set of experiments has shown that listeners were more likely to fixate a critical distractor item (pepper) during the processing of repair disfluencies compared to the processing of coordination structures ('...some salt, and also some ketchup...'). In other experiments we have demonstrated that the pattern of fixations to the critical distractor for disfluency constructions is similar to fixation patterns for sentences employing contrastive focus ('...not some salt, but rather some ketchup...'). The results suggest that similar mechanisms underlie the processing of repair disfluencies and contrastive focus, with listeners generating sets of entities that stand in semantic contrast to the reparandum in the case of disfluencies or the negated entity in the case of contrastive focus.

FEBRUARY 1 -

Taja Stoll (LMU Munich)
Influence of palatalization on the tongue tip in liquids

It has been observed that palatalized trills are prone to undergo sound change, which is usually attributed to their complex articulation (Kavitskaya et al. 2009, Ladefoged and Maddieson 1996, Solé 2002, etc.). In this talk, I will present results on the influence of palatalization on the tongue-tip gesture in Russian trills and laterals (analyzing tongue-tip peak velocity and stiffness by means of EMA) and discuss the possible consequences for sound change.

FEBRUARY 8 -

Stephanie Shih (UC Merced)
Lexically-conditioned phonology as multilevel grammar

This talk takes up two interrelated issues for lexically-conditioned phonological patterns: (1) how the grammar captures the range of phonological variation that stems from lexical conditioning, and (2) whether the relevant lexical categories needed by the grammar can be learned from surface patterns. Previous approaches to category-sensitive phonology have focused largely on constraining it; however, only a limited understanding currently exists of the quantitative space of variation possible (i.e., entropy) within a coherent grammar. In this talk, I present an approach that models lexically-conditioned phonology as cophonology subgrammars of indexed constraint weight adjustments (i.e., ‘varying slopes’) in multilevel Maximum Entropy Harmonic Grammar. This approach leverages the structure of multilevel statistical models to quantify the space of lexically-conditioned variation in natural language data, and allows for the deployment of information-theoretic model comparison to assess competing hypotheses of lexical categories. Two case studies are examined: part of speech-conditioned tone patterns in Mende (joint work with Sharon Inkelas, UCB), and lexical versus grammatical word prosodification in English. Both case studies bring to bear new quantitative evidence to classic category-sensitive phenomena. The results illustrate how the multilevel approach developed here can capture the probabilistic heterogeneity and learnability of lexical conditioning in a phonological system.
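One way to render the 'varying slopes' idea schematically (this is a reading of the abstract's description, not the exact formulation from the talk) is that each lexical category contributes an adjustment to a shared constraint weight, with candidate probabilities computed MaxEnt-style from the adjusted weights:

```latex
% Shared weights w_i plus category-specific adjustments \Delta w_{i,c}
% (the "varying slopes"); C_i(x) counts candidate x's violations of constraint i.
\[
  P(x \mid \text{input}, c) \;=\;
  \frac{\exp\big(-\sum_i (w_i + \Delta w_{i,c})\, C_i(x)\big)}
       {\sum_{y \in \mathrm{Cand}} \exp\big(-\sum_i (w_i + \Delta w_{i,c})\, C_i(y)\big)}
\]
% A category whose adjustments are all near zero behaves like the general
% grammar; large adjustments carve out a lexically indexed cophonology.
```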

FEBRUARY 10 -

Florian Lionnet (UC Berkeley)
Phonological teamwork: A phonetically grounded account of cumulative effects in phonology

Note: Special day (Wednesday), time (10:00-11:00), and location (1229 Dwinelle)

Categorical phonological processes (e.g. assimilation) that seem to be driven by gradient, subphonemic effects traditionally considered to fall within the domain of phonetics (e.g. coarticulation) constitute a challenge for phonological theory. Such data raise the question of the nature of phonology and its relation to phonetic substance, which has given rise to a long debate in linguistic theory, schematically opposing two types of approaches to phonology: substance-free approaches, which hold that phonetic substance is not relevant to phonological theory, and phonetically grounded approaches, for which (at least some) phonological phenomena are rooted in natural phonetic processes, such as coarticulation.

In this talk, I argue in favor of phonetic grounding on the basis of novel data relevant to this debate: 'phonological teamwork', a cumulative effect in which two segments exerting the same subphonemic coarticulatory effect trigger a categorical phonological process (e.g. assimilation) only if they 'team up', adding their coarticulatory strengths to pass the threshold necessary for that process to occur. Drawing on original fieldwork, I analyze a particularly rich case of teamwork: the doubly triggered rounding harmony of Laal (endangered isolate, Chad). I provide instrumental evidence that the harmony is driven by subphonemic coarticulatory effects, and propose to enrich phonology with phonetically grounded representations of such effects, called subfeatures. Subfeatures do not contradict the separation between phonology and phonetics, but rather constitute a mediating interface between them. Throughout the talk, I highlight the importance of linguistic fieldwork, meticulous data collection and analysis, and detailed description of seemingly minor phenomena for contributing to important theoretical debates.

FEBRUARY 15 -

No Meeting (Presidents' Day)

FEBRUARY 22 -

Roslyn Burns (UC Berkeley)
Language contact and phonological borrowing: The case of Posen Low German

This talk explores the role of language contact in the development of consonants with secondary palatalization in the historical region of Posen (Polish Poznań). Linguistic documentation indicates that by the early 20th century, Low German spoken in Poznań and surrounding regions was in the process of losing secondary palatalization (Koerth 1913, 1914; Teuchert 1913). The loss of this articulation was due to the influence of Low German from other regions. Teuchert and Koerth are of the view that Polish played some role in the development of these consonants, but they are at a loss to explain how Polish influenced the local Low German. In this talk, I present evidence that secondary palatalization developed as the result of a Lechitic VC co-articulation rule being mapped onto the phoneme system of Low German. Differences between the output of the Lechitic rule in Polish and Low German are due to prioritization of Low German input features.

FEBRUARY 29 -

Susan Lin (UC Berkeley)
Gradience in articulatory variation

Synchronic variation in speech articulation is thought to be at the heart of most sound change, whether through the generation of phonemically ambiguous speech or the creation of phonological innovations available to language learners. The precise mechanism for the incorporation of phonetic variation into phonological systems, however, has remained elusive. Are some completed sound changes the grammaticalization of two extremes of phonetic variation? And when are additional mechanisms required to make the leap from gradient phonetic variation to discrete phonological categories? In this talk, I present a two-pronged approach, under way in our lab, to understanding the role of this variation in sound change.

The goal of the first line of work is to quantify the extent of articulatory variation along the path toward a completed sound change. As a case study, I present the landscape of variation in the articulation of velarized coda laterals in English, and discuss the links between this variation and vocalization through intra- and inter-speaker variation. Our second goal is to examine what processes exist in the space between gradient articulatory variation and categorical phonological change. Here, I describe ongoing research whose goal is to capture speakers' behavior in response to external forces designed to push their speech into unstable articulatory territory.

MARCH 7 -

Neal Fox (UCSF)
Contextualizing Phonetic Processing: Lexical and Sentential Influences on Speech Perception and Production

One of the most fundamental observations about speech communication is that there is no one-to-one mapping between segments or words and their acoustic realizations. On one hand, it is clear that signal-to-word mapping is many-to-one: perfectly understandable productions of the same word can take on countless acoustic realizations that may differ from one another along many dimensions. On the other hand, signal-to-word mapping is also sometimes one-to-many: the same acoustic signal may, on different occasions, be perceived as different sounds, words or sequences of words. In this talk, I will discuss research (conducted with Megan Reilly and Sheila Blumstein) addressing two questions raised by this state of affairs.

Firstly, does the speech production system avoid perceptual confusability by modulating how a segment is produced in specific ways and/or under specific conditions? I will present two experiments investigating the influence of lexical and sentential information on stop consonant articulation; I will argue that 'listener-oriented models' fail to account for the patterns of systematic phonetic variation in speech production we observe.

Secondly, I will discuss how listeners accommodate the one-to-many mapping from signals to words. I will briefly present experimental evidence suggesting that the speech perception system integrates higher-level (e.g., lexical and sentential) cues and phonetic cues, weighting them with respect to their relative reliabilities. If there’s time, I will wrap up by suggesting a novel prediction made by combining the results of the speech production and speech perception work.

MARCH 14 -

Jeremy Steffman (UC Berkeley)
Prosodic transfer: a phonetic study of L2 English prosody produced by L1 French speakers

This study explores the ways in which language interference and transfer are manifested in the phonetic correlates of contrastive information marking and prosodic constituency in English, and how these manifestations vary based on native language experience. Native speakers of French (L2 speakers of English) and native monolingual speakers of American English were recorded while speaking English, producing utterances that presented contrastive information. Contrastive information was elicited by the sequential visual presentation of stimuli that contrasted in color or shape, with target segments also in different prosodic positions in the utterance. Results show that L1 French speakers were generally successful in producing the correlates of English prosodic prominence, and in this domain experience played a role, with more experienced speakers producing correlates of prosodic structure that were more in line with L1 English norms. However, the contrastive status of words was generally not encoded successfully by L1 French speakers, with even the most experienced failing to produce correlates of contrastive marking in line with L1 English norms. English syllable reduction and durational cues for accentuation also posed a problem for L1 French speakers. L2 English deviations from L1 English norms are explained with reference to the L1 systems of French. The differential success of L2 learners in marking prosodic structure as opposed to contrastive information is in line with theories that posit a general difficulty in the production of semantically driven language features for L2 learners, and also with perceptual accounts of French 'stress-deafness'.

MARCH 21 -

No Meeting (Spring Break)

MARCH 28 -

Will Bennett (Rhodes University)
Xhosa labial palatalization: inter-speaker differences and the morphology/phonology split

IsiXhosa has a pattern of labial palatalization in which the passive suffix /-w/ causes stem-final labials to become palatals. Some previous work has treated this as a phonological process, though others claim the pattern is fundamentally in the lexicon (with the change being historical rather than synchronic). We probe this question experimentally, using a wug test. If palatalization is phonological, speakers should extend it to nonce items. If, however, the palatalized forms are lexically stored, speakers will not palatalize in nonce items. The findings presented in this talk suggest that labial palatalization is genuinely phonological for some speakers, but not for others, and that members of the same speech community have different grammatical representations for the same pattern.

APRIL 4 -

Adam Chong (UCLA)
On derived-environment effects: What does the lexicon look like?

Morphologically derived environment effects (MDEEs) are well-known cases in which static phonotactic patterns in the lexicon do not accord with what is allowed at morphological boundaries (phonological alternations). Analyses of MDEEs (e.g. Lubowicz 2002, Wolf 2008, a.o.) often assume that both the alternation patterns and the static phonotactic patterns are productive. Yet upon closer inspection, well-known MDEE patterns prove to be less clear-cut than most analyses suggest. In particular, as Inkelas (2009) and Anttila (2006) have shown for Turkish velar deletion and Finnish assibilation respectively, a morphologically derived environment is hardly sufficient to ensure that a supposedly derived-environment-only phonological process will apply. This talk takes up a related assumption regarding the static stem-internal phonotactic patterns in the lexicon of languages with MDEEs. It is implicitly assumed in analyses of MDEEs that stem-internal sequences that violate the generalization holding across morpheme boundaries are completely well-formed. To examine this, I present the results of corpus studies and phonotactic modeling simulations of two well-known MDEE cases: Korean palatalization and Turkish velar deletion. I show that in one case (Korean), a weaker version of the across-morpheme constraint (*ti) that motivates palatalization is active in the lexicon. This is reflected in the fact that forms violating *ti, such as mati ‘joint’, are under-represented in the Korean lexicon. This contrasts with Turkish, where the relevant alternation-motivating constraint is unavailable from pure phonotactic learning. These results further confirm the observation that MDEEs are not a unitary phenomenon. I also discuss the implications of this study for the relation between static and dynamic generalizations, as well as for phonological learning.
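To illustrate how under-representation of a stem-internal sequence like /ti/ can be quantified (the counts below are invented for exposition, not drawn from the corpus studies reported in the talk), a standard observed/expected calculation compares the attested count to the count expected under independence:

```latex
% O/E ratio for a sequence t+i over N relevant CV sequences in the lexicon;
% values well below 1 indicate under-representation.
\[
  \mathrm{O/E}(ti) \;=\; \frac{\mathrm{Obs}(ti)}{N \cdot P(t)\,P(i)}
\]
% With invented counts Obs(ti) = 40, N = 10000, P(t) = 0.10, P(i) = 0.12:
% O/E = 40 / (10000 * 0.10 * 0.12) = 40 / 120 ≈ 0.33, i.e. /ti/ occurs at
% about one third of its chance rate.
```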

APRIL 11 -

 (No Phorum)

APRIL 18 -

Alicia Beckford Wassink (University of Washington)
English in the Pacific Northwest: the perception study

The goal of the perception test is to characterize listeners’ ability to identify sounds that are at various stages of change in PNWE. The two research questions we seek to address are: (1) Can social information (beliefs about speaker dialect) override listeners' use of phonetic information in vowel identification? (2) How do listeners use phonetic information when such information conflicts with the phonetic cues of their native dialect (here, changes in progress involving raising or near-merger)? In this Phorum talk, I will describe the methods we are using to address these research questions. We are conducting an online forced-choice vowel identification test adapted from methods used by Labov & Ash (1997) in the Cross-Dialect Comprehension studies and by Niedzielski (1999). Our respondents comprise a two-region sample of 100 listeners.

APRIL 25 -

Jonathan Manker (UC Berkeley)
Context, predictability, and phonetic attention: The competition between sound and meaning

Lindblom (1995) proposed two modes of listening to speech: a “what” mode, where listeners focus on meaning, and a “how” mode, where listeners attend to details of pronunciation. This theory fits with Hickok and Poeppel’s (2004, 2007) more recent dual stream model of speech perception. What conditions then are necessary for modulating the use of one listening mode or the other? Following observations from speech recognition studies (Cole & Jakimik 1980, etc.), this presentation will discuss the results of two experiments which considered how structural and semantic context (including word predictability) interact with the listener’s attention to phonetic details. Phonetic accommodation and intentional imitation were used as experimental tools for determining what phonetic details subjects noticed after hearing target words in a variety of structural and semantic contexts. The results suggest listeners attend more closely to details of pronunciation when less structural and semantic context is present. I will then briefly consider the relevance of these findings to patterns in sound change and phonology.