Phorum 2012

Schedule of Talks for Fall 2012

PREVIOUS MEETINGS:

SEPTEMBER 10 -

WILL CHANG
UC BERKELEY

Linguistic mirages and lexical borrowing between Tongan and Samoan

Despite more than a thousand years of cultural exchange between Tonga and Samoa before European contact, very few loanwords have been identified as having diffused between them, even though there is a large vocabulary peculiar to these two Polynesian languages. The difficulty lies in the fact that when Tongan and Samoan show related forms, it is almost always possible to reconstruct a Proto-Polynesian form. One is tempted to explain this shared vocabulary as retentions in Tongan and Samoan that have been lost elsewhere in Polynesia. I show that this is impossible by statistically comparing forms peculiar to Tongan and Samoan with forms widespread in Polynesia. The reconstructed protoforms associated with the second set of forms have far more *h and *r sounds than the (false) reconstructions associated with the first set of forms. This is a straightforward consequence of the sound laws in the history of the two languages, and of the way that Tongan words are nativized in Samoan. While it remains impossible to positively identify specific forms as loans, I have used a statistical model to identify a handful as having been borrowed with high probability; and of the approximately 850 etyma shared by Tongan and Samoan, an estimated 45% are loans.
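
The kind of population-level comparison described here can be illustrated with a toy two-proportion test; the counts below are invented for illustration and are not the study's actual figures or method.

```python
from math import erf, sqrt

def two_proportion_z(k1, n1, k2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal approximation: p = 2 * (1 - CDF(|z|))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts of protoforms containing *h or *r: one set
# reconstructed from forms widespread in Polynesia, one from forms
# peculiar to Tongan and Samoan.
z, p = two_proportion_z(120, 400, 40, 400)
print(f"z = {z:.2f}, p = {p:.2g}")
```

A difference of this size would be very unlikely under the null hypothesis that both sets of reconstructions contain *h and *r at the same rate.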

SEPTEMBER 17 -

CLARE SANDY
UC BERKELEY

Tone and syllable structure in Karuk

Word-level prosody in Karuk incorporates both tone and stress, and the placement of prominence is sensitive to complex interactions of lexical, phonological, and morphological factors. In this talk I discuss the results of a quantitative analysis of prominence in Karuk roots, which shows that syllable structure is an important factor in determining the placement of tone. Nouns and verbs display different patterns of syllable structure and accentuation, and I argue that the patterns seen in nouns represent the default. A constraint against high tone on short closed syllables, *CVC, explains static distributions of tones in verb roots and is also shown to be active in derived contexts, contributing to the degree of stability of tone under affixation and to a vowel lengthening process triggered by certain affixes. *CVC thus unifies and motivates seemingly arbitrary phonological rules, and results in the Karuk system being far more predictable than has previously been thought.

SEPTEMBER 24 -

SHARON INKELAS
UC BERKELEY

Modeling Transient Child Phonology with a Recycle Constraint

This talk, based on ongoing joint work with Tara McAllister Byun (NYU) and Yvan Rose (Memorial U.), presents work on a new model of phonological acquisition which incorporates performance-based generalizations into grammar. The model is designed to shed light on several conundrums of child phonology. One is that children can exhibit systematic patterns which are unattested in adult language (e.g. consonant harmony involving major place of articulation, or neutralization of consonantal place or stricture contrasts in perceptually strong positions only). Another is that children's productions do not tend to improve steadily and monotonically, but can show "U"-shaped curves, show systematic variation, and/or get stalled at plateaus.

The model we develop features a grammatical mechanism, the constraint Recycle, which favors reproduction of stored error forms. Under the influence of this constraint, errors become systematic and sensitive to phonological conditioning factors. We further assert that Recycle interfaces with a body of tacit knowledge about the stability of mappings between motor plans and acoustic outputs, here termed the A-map. If a target form is motorically complex and therefore error-prone, the associated entry in the A-map will indicate an unstable mapping. We define Recycle in such a way that the magnitude of the penalty incurred by a target is inversely proportional to the stability of its mapping: the less consistent the mapping, the greater the pressure to reuse an old form that has a more reliable production routine. Crucially, as the child matures, the changes in motor control ability that he/she experiences will be reflected as shifts in the topography of the A-map. Once the mapping from motor plan to acoustic target becomes stable for a particular form, the pressure to use an old form is lifted. An adult-like production, or the closest approximation permitted by the child's current grammar, will then emerge.

The present proposal is related to the UseListedError model put forward by Tessier (2008, 2012). However, the two accounts are distinct in that our Recycle model draws an explicit connection between children's performance limitations and their tendency to reuse stored forms, identifying a functional motivation for an otherwise perplexing pattern.

OCTOBER 1 -

KEITH JOHNSON & RONALD SPROUSE
UC BERKELEY

Preliminary analysis of electrocorticography signals produced while reading words aloud

Electrocorticography data is very rich, and in previous studies appears to capture a great deal of information about both auditory sensation and motor control in speech. We focus on two aspects of this type of data in a study of word list reading by a single patient. First, we will report on the use of principal components analysis (PCA) to reduce the dimensionality of the data (from about 17000 variables per word to "only" a couple of hundred). A couple of sanity checks on this method will be discussed. Second, we will report, in a preliminary way, on the use of PCA rank reduction to study neural firing patterns that are associated with phonetic information (bilabial versus velar initial consonants) and to study lexical information (high versus low frequency).
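
The rank-reduction step can be sketched in a few lines of NumPy; the array sizes below are arbitrary stand-ins for the roughly 17,000 variables per word mentioned above, not the actual dimensions of the recordings.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for the data: one row per word token, one column per
# (electrode x time-sample) variable.
X = rng.standard_normal((200, 1700))

# Center the data, then project onto the top-k principal components.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 100
scores = Xc @ Vt[:k].T        # reduced representation, shape (200, 100)

# Sanity check: fraction of total variance retained by the top k components.
variance_kept = (s[:k] ** 2).sum() / (s ** 2).sum()
print(scores.shape, f"variance retained: {variance_kept:.2f}")
```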

Acknowledgement: Dr. Edward Chang (UCSF) kindly provided us with the ECoG data that will be discussed in this presentation. The patient we are studying (#GP31) read over 4000 words with a grid of electrodes implanted on the surface of her brain.

OCTOBER 8 -

ROSLYN BURNS
UC BERKELEY

The Vowel System of Mesoamerican Plautdietsch: A preliminary analysis 

This talk focuses on the vowel system of Mesoamerican Plautdietsch in both its synchronic state and diachronic development. Plautdietsch (PDT), a West Germanic language with origins in Poland, has been carried across the globe by Mennonites searching for religious freedom and economic opportunity. A consequence of this migration is that some PDT speech communities can develop innovations independent of contact with other PDT speech communities. On the basis of acoustic analysis and comparative investigation of other PDT vowel systems, I will argue that similarities between Canadian-Mexican PDT vowel systems are a parallel development to the Russian-German group. This parallel development is in part due to a phoneme whose presence has been recognized in the broader PDT literature, but whose exact phonetic quality had remained elusive due to lack of acoustic analysis.

OCTOBER 15 -

SARAH BAKST
UC BERKELEY

Rhotics and Retroflexes in Indic and Dravidian

Although the Indic languages of North India and the Dravidian languages of South India both have retroflexes in their phonetic inventories, Ladefoged and Bhaskararao (1983) demonstrated that the retroflexes in the two language families do not share the same articulation. The researchers used static palatography and X-ray evidence to show that the Northern retroflex stops are apical and pronounced at the alveolar ridge, whereas the Southern ones are sublaminal and pronounced at the palate. In my thesis I aimed to see whether these results could be replicated using static palatography with speakers of Hindi, Punjabi, and Tamil, and also to determine the acoustic correlates of the differences in articulation using acoustic-only elicitation.

I also extended this question to the long-standing problem of how to classify rhotics. The boundaries of the rhotic category have never been well defined by a single acoustic or articulatory feature. A lowered third formant has been proposed in the past as a possible universal indicator of rhoticity, but Lindau (1985) found that many rhotics have a high third formant. Retroflexes, however, have been associated with rhotics because the two classes share a lowered third formant. I hoped to define the relationship between different retroflexes, rhotics, and their overlap. Specifically, I hoped that the difference between the two regions' retroflexes would give some insight into whether there can be a "degree" of rhoticity.

I found the same clear differences in retroflex articulation that Ladefoged and Bhaskararao found, and was able to extend these differences to retroflex rhotics, but concrete acoustic differences have remained elusive. There is a trend toward greater lowering of the third formant in Tamil than in Hindi or Punjabi, but it is not entirely consistent. In my talk, I will explain the problem of retroflex cues and propose some other possible avenues to explore.

OCTOBER 22 -

MARIA JOSEP SOLÉ
UNIVERSITAT AUTONOMA DE BARCELONA, SPAIN

Creating phonological categories in an L2

Perceiving and producing the sounds of a second language, particularly sounds that are not contrastive in the L1, is an especially difficult task. The current study examines whether L2 learners of English have formed separate phonological categories for sound contrasts differing in a feature that is nondistinctive in their L1, e.g. /kæt/ vs /kʌt/ vs /kɑt/ (‘same category’ assimilation, Best 1995). A medium-term auditory repetition priming task was used to investigate whether a prime-target pair differing only by sounds that are subsumed perceptually by similar L1 sounds (/kæt/ vs /kʌt/) yields the same amount of priming as (i) a repeated prime-target pair (/kæt/ vs /kæt/) or (ii) a prime-target pair differing by features that are distinctive in the L1 (/kæt/ vs /kɪt/). It was hypothesized that if L2 learners have not formed distinct categories for English-specific contrasts (contrasts differing in features non-contrastive in the L1), e.g. /æ/ vs /ʌ/, they will process cat and cut as homophones that will show priming effects.

Forty-six Spanish/Catalan advanced learners of English and 18 native American English speakers were tested in a lexical decision task involving real English words and nonwords produced by an American English speaker. Preliminary results indicate that there are no facilitation effects for words differing in an English-specific contrast for L2 speakers (suggesting that they keep the two words separate), but there is facilitation for nonwords (e.g. /ʃæb/ primes both /ʃʌb/ and /ʃæb/). The fact that an English-specific vowel contrast is in part confusable in nonwords, whereas the different lexical items are kept separate, suggests that the sound categories may only be abstracted from lexical contrasts at a later stage. The implications for models of speech processing and L2 learning will be considered.

OCTOBER 29 -

KRISTOFER BOUCHARD
UC SAN FRANCISCO

Single-trial control of vowel formants and coarticulation

Human speech depends on the capacity to produce a large variety of precise movements in rapid sequence, making it among the most complicated sequential behaviors found in nature. Here, we used high-resolution, multi-electrode cortical recordings during the production of consonant-vowel syllables to understand how the human brain controls vowel formants, a surrogate for tongue/lip posture. Population decoding of trial-by-trial sensory-motor cortical activity allowed for accurate prediction of formants. Indeed, a significant fraction of the within-vowel variability could be accurately predicted. Interestingly, decoding performance of vowel formants extended well into the consonant phase. Additionally, we show that a portion of carry-over coarticulatory effects on vowel formants can be attributed to immediately preceding cortical activity, demonstrating that the representation of a vowel depends on the preceding consonant. Importantly, significant decoding of vowel formants remained during the consonant phase after removing the effect of carry-over coarticulation, demonstrating that the representation of consonants depends on the upcoming vowel. Together, these results demonstrate that the cortical control signals for phonemes are anticipatory and that the representation of phonemes is non-unitary.
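
One common way to realize this kind of population decoding is a regularized linear readout. The sketch below uses synthetic data in place of the cortical recordings, with sizes and the ridge penalty chosen arbitrarily; it is not the talk's actual decoding pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 300, 64

# Synthetic stand-in: per-trial neural features X and two formant
# values per trial, generated as a noisy linear readout of X.
X = rng.standard_normal((n_trials, n_channels))
W_true = rng.standard_normal((n_channels, 2))
y = X @ W_true + 0.5 * rng.standard_normal((n_trials, 2))

# Ridge-regression decoder: fit on the first 200 trials, test on the rest.
lam = 1.0
A = X[:200].T @ X[:200] + lam * np.eye(n_channels)
W = np.linalg.solve(A, X[:200].T @ y[:200])
pred = X[200:] @ W

# Correlation between predicted and actual values for each "formant".
for j in range(2):
    r = np.corrcoef(pred[:, j], y[200:, j])[0, 1]
    print(f"formant {j + 1}: r = {r:.2f}")
```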

NOVEMBER 5 -

SHARON INKELAS & KEITH JOHNSON
UC BERKELEY

Testing the Learnability of Sound-Based Writing Systems

This paper reports the results of an artificial learning experiment testing the hypothesis that the learnability of symbols used in sound-based writing systems is correlated with the acoustic stability of the type of speech chunks to which the symbols correspond. Three conditions are compared: a system in which symbols correspond to C and V segments (the 'Segment' condition); a system in which symbols correspond either to onset consonants or to VC syllable rimes (the 'Onset-Rime' condition); and a system in which symbols correspond to CV or VC demisyllables (the 'Demisyllable' condition).

The Segment condition matches alphabetic writing systems like those of Spanish, English, etc. The Onset-Rime condition relies on a syllable-internal distinction long exploited by phonologists. The Demisyllable condition draws on findings from speech recognition and synthesis (see e.g. Jurafsky & Martin 2008) that diphones and triphones are acoustically and perceptually more stable than single segments because of the key segment-to-segment transitions they contain.

Data was obtained from 57 subjects (19 per condition). Subjects, all English-speaking university students, participated in a computer-based task in which they learned sound correspondences for 20 symbols. The sound-to-symbol mapping was randomized across subjects. After mastering individual symbols, subjects were trained on the combination of symbols into CVC words (C-V-C in the Segment condition, C-VC in the Onset-Rime condition, CV-VC in the Demisyllable condition). Subjects were then tested on their ability to read aloud novel (CVC) combinations of the symbols on which they had been trained. Results are based on readings of the same 18 test items in each condition.

The experiment tested the relative strengths of two identifiable biases: the Experience Bias and the Acoustic Bias. (1) As English speakers, the subjects were conversant with segment-based writing, creating a bias of experience which should favor the Segment condition. (2) However, the Experience Bias is offset by what may be termed the Acoustic Bias, according to which subjects will prefer symbols which correspond to acoustically stable speech chunks like CV and VC over chunks like C which encode context dependent segment-to-segment transitions. The Acoustic Bias hypothesis favors the Demisyllable condition.

Results support the Acoustic Bias hypothesis over the Experience Bias. Along all dimensions of analysis, subjects in the Demisyllable condition outperformed subjects in the Segment condition (with subjects in the Onset-Rime condition falling, predictably, somewhere in between). Overall, subjects were more accurate, and responded with shorter reaction times, in the Demisyllable condition. Comparisons reached statistical significance for reaction times and for accurate vowel learning, and approached significance in several other areas as well. The results of this study resonate with the well-known finding that among independently evolved sound-based writing systems, alphabetic systems are extremely rare (perhaps a singularity), while writing systems making use of CV or VC symbols are common (e.g. Gelb 1952/1963, Coulmas 1989, Daniels & Bright 1996). The study has obvious implications for the teaching of literacy, and suggests that phonologists should take seriously the notion of 'demisyllable' as a basic unit of representation (e.g. Fujimura 1989, Itô & Mester 1993).

References
Coulmas, Florian. 1989. The Writing Systems of the World. Oxford: Blackwell.
Daniels, Peter and William Bright. 1996. The World's Writing Systems. Oxford: Oxford University Press.
Fujimura, Osamu. 1989. Demisyllables as sets of features: Comments on Clements' paper. In John Kingston and Mary E. Beckman (eds.), Papers in Laboratory Phonology I: Between the Grammar and the Physics of Speech, 334-340. Cambridge: Cambridge University Press.
Gelb, Ignace. 1952/1963. A Study of Writing. Chicago: University of Chicago Press.
Itô, Junko and Armin Mester. 1993. Japanese phonology: Constraint domains and structure preservation. In John Goldsmith (ed.), A Handbook of Phonological Theory. Blackwell Handbooks in Linguistics Series. Oxford: Blackwell.
Jurafsky, Daniel and James H. Martin. 2008. Speech and Language Processing. Upper Saddle River, NJ: Prentice Hall.

NOVEMBER 19 -

OLGA DMITRIEVA
UC BERKELEY

Phonetics vs. phonology: Fundamental frequency as a correlate of stop voicing in English and Spanish.

It is known that fundamental frequency at the onset of the vowel following a stop consonant (onset F0) tends to covary with the voicing feature of the stop itself: onset F0 is lower after voiced stops than after voiceless ones (Hombert 1975). However, there is some controversy about the origins of this covariation and its use in cuing voicing distinctions, especially given the phonetically non-uniform expression of phonological voicing across languages. There are languages which mainly contrast voiceless aspirated with voiceless unaspirated stops in prevocalic position (English), as well as languages which contrast voiceless unaspirated and prevoiced stops (Spanish). The behavior of onset F0 in such languages is expected to differ depending on the presumed cause of the covariation: the physiology of vocal fold vibration, the aerodynamics of aspiration, or VOT in general. The present study examines the distribution of onset F0 in English and Spanish voiced and voiceless prevocalic stops, taking into account both their phonetic and phonological properties. The results cannot be easily reconciled with the proposed aerodynamic and physiological explanations of the onset F0 covariation with voicing (Ladefoged 1967; Löfqvist et al. 1989) and suggest instead that phonological factors play a major role in determining the distribution of onset F0. Members of the opposing phonological categories within each language are differentiated through onset F0 independently of their phonetic implementation, while equivalent phonetic categories across languages do not follow the same trend in the distribution of onset F0 due to differences in their phonological status. These results provide support for the adaptive dispersion theory (Liljencrants and Lindblom 1972, Lindblom 1990) in the domain of secondary cues to phonological contrasts.

NOVEMBER 26 -

JUDITH KROLL
THE PENNSYLVANIA STATE UNIVERSITY

Cross-language competition begins during speech planning but extends into bilingual speech

Recent bilingual studies have shown that both languages are engaged when only a single language is required. Critically, cross-language activation occurs in tasks that are highly skilled, such as listening, reading, and speaking. However, bilinguals do not ordinarily suffer the consequences of cross-language interference, suggesting that they possess a mechanism of cognitive control that allows them to effectively select the language they intend to use. I will present data from three studies that use acoustic measures to demonstrate that not only are both languages active during the earliest stages of planning, but that cross-language activity extends into the execution of the speech plan.

DECEMBER 3 -

LSA PRACTICE TALKS - NOTE THAT WE WILL MEET 11 AM TO 1 PM THIS WEEK

11:00 - SUSANNE GAHL
UC BERKELEY

11:30 - STEPHANIE SHIH
STANFORD UNIVERSITY AND UC BERKELEY

The similarity basis for consonant-tone interaction as Agreement by Correspondence

This paper addresses the ongoing debate over the distinction between Agreement by Correspondence and the previously dominant theory of autosegmental feature-spreading, focusing on a key conceptual difference between the two theories: the role of similarity in harmony patterns. Using data from consonant-tone interaction in Dioula d'Odienné, I propose that sonority underlies the relationship between segments and tone. Agreement by Correspondence's unique ability to make direct reference to similarity in determining segmental agreement makes it better suited for handling phenomena like consonant-tone interaction.

12:00 - JOHN SYLAK-GLASSMAN
UC BERKELEY

The Phonetic Properties of Voiced Stops Descended from Nasals in Ditidaht

Five genetically diverse languages of the Pacific Northwest Coast of North America underwent an areally diffused and cross-linguistically rare sound change in which nasal stops (e.g. /m, n/) denasalized to voiced oral stops (e.g. /b, d/). This study examines the phonetic results of that change for the first time, based on new data from Ditidaht (Wakashan). The voiced stops exhibit significant prevoicing and have the same duration as the contemporary nasal consonants (which can all be traced to contact, baby talk, or sound symbolism). These characteristics may be phonetic relics of the historical nasals from which the contemporary voiced stops descended.

12:30 - MELINDA FRICKE AND KEITH JOHNSON
UC BERKELEY

Development of coarticulatory patterns in spontaneous speech

While previous studies have focused on carefully controlled laboratory speech, this study compares fricative-vowel rounding coarticulation in adults' and toddlers' spontaneous speech. We analyzed the spectra of /s/ when it occurred either before or after front vs. rounded vowels. For adults, we found clear evidence of anticipatory rounding coarticulation, as well as some transitory perseverative coarticulation. For children, there was no obvious rounding coarticulation, but rather palatalization of /s/ in front vowel contexts, especially in the perseverative direction. Compared to child speech, adult spontaneous speech thus exhibits less mechanical linkage of articulators, and more anticipatory inter-articulator coordination.


Schedule of Talks for Spring 2012

PREVIOUS MEETINGS:

JANUARY 30 -

WENDELL KIMPER
UC SANTA CRUZ

Variability, cumulativity, and trigger asymmetries in Finnish

The native phonology of Finnish exhibits a regular and well-described system of vowel harmony along the front/back dimension --- with the exception of neutral [i] and [e], front and back vowels may not co-occur within roots, and suffixes alternate to take on the backness value of the preceding root. In loanwords, however, front and back vowels are permitted to co-occur. Suffixes attached to these disharmonic roots display variable behavior --- following a [back]-[front] sequence, harmony can either be transparent (skipping the intervening front vowel) or opaque (blocked from reaching the suffix).

In this talk, I argue that the choice between transparency and opacity is best characterized as a competition between potential harmony triggers. I present the results of a nonce-word study on Finnish disharmonic loans, showing that non-high vowels are (a) less likely than their high counterparts to be transparent, and (b) more likely than their high counterparts to induce transparent harmony. This asymmetry is consistent with the cross-linguistic generalization that segments which are perceptually impoverished with respect to a feature contrast tend to be preferential triggers for harmony along that dimension. I analyze these results within the framework of (Serial) Harmonic Grammar, proposing a harmony constraint which (a) assigns rewards for spreading (rather than violations for disharmony) and (b) scales those rewards up or down as a function of the preferential status of the harmony trigger as well as the distance between trigger and target.
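
The scaling idea can be made concrete with a toy calculation; the trigger-strength values and the inverse-distance scaling below are invented for illustration, not constraint weights proposed in the talk.

```python
def spreading_reward(base, trigger_strength, distance):
    """Reward for one spreading step, scaled up for preferential
    (perceptually impoverished) triggers and down with distance."""
    return base * trigger_strength / distance

# Hypothetical trigger strengths: non-high vowels, being perceptually
# impoverished with respect to the backness contrast, outrank high
# vowels as harmony triggers.
strength = {"a": 2.0, "o": 1.5, "u": 1.0, "y": 1.0}

# At equal distance, harmony triggered by a non-high vowel earns a
# larger reward than harmony triggered by a high vowel.
print(spreading_reward(1.0, strength["a"], 2))  # 1.0
print(spreading_reward(1.0, strength["u"], 2))  # 0.5
```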

FEBRUARY 27 -

OTELEMATE HARRY AND LARRY HYMAN
UNIVERSITY OF THE WEST INDIES, MONA; UC BERKELEY

Construction Tonology: The Case of Kalabari

* Please note that this meeting is from 12 - 1 PM. *

Although it is common for tone to be assigned by (word-level) morphological constructions, it is far less common for tones to be assigned by (phrase-level) syntactic constructions. Kalabari, an Ijoid language of Nigeria, does exactly this: Within the DP, the N appears finally, followed only by a possible definite article. Whenever the N is non-initial, it loses its tones and receives different "melodies" depending on the word class of the preceding modifier or possessor, which may be a demonstrative, possessive pronoun, noun possessor, or numeral. (Adjectives allow the following noun tone to surface.) DPs which have greater structural complexity may follow the pattern of the first word or, in some cases, show sensitivity to the internal structure. In this talk we first provide a synchronic overview of the patterns and then discuss how such an unusual system might have come into being diachronically.

MARCH 5 -

ARMIN MESTER
UC SANTA CRUZ

Non-prominent positions

MARCH 12 -

LARRY HYMAN
UC BERKELEY

Issues in the Morphology-Phonology Interface in African Languages

In this paper I address how morphology and phonology potentially affect each other in a grammar. Drawing from a number of African languages, I briefly provide a typological overview of the types of morphology-phonology interfaces for which African languages are well known, including morphologically conditioned P-rules, phonologically conditioned allomorphy, and prosodic morphology (templates, reduplication). I then turn to consider the most diverse and extensive morphology-phonology interface in sub-Saharan Africa: tonal morphology. After distinguishing different types of tonal morphology, I focus on cases which are particularly unusual, specifically tonal morphology which extends beyond the lexical word. This will naturally lead to a discussion of what should be considered "morphology" vs. something else. I will show that tonal morphology can do anything that non-tonal morphology can do, but that the reverse is not true: there are morphological phenomena that appear limited to tone. While emphasis will be on the phenomena rather than on formal implementation, the implications (and potential difficulties) these facts present for formal modeling will be apparent.

MARCH 19 -

MATT FAYTAK
UC BERKELEY

Sonority islands and the canonical sonority scale

Syllabic nuclei are licensed from language to language at widely varying levels of sonority. An emergent property of syllabification as it is canonically described is that syllabic segments occupy a single, unbroken range of sonority levels, with no segment barred from syllabicity that is more sonorous than a given language's least sonorous syllabic segment (Blevins 1995). In this talk, I introduce languages which diverge from this canonical pattern: they allow fricatives as syllabic nuclei despite barring syllabification of rhotics, liquids, or nasals, which should be more sonorous than fricatives in a phonetically-grounded sonority scale. I refer to these discontinuous areas of sufficient sonority as "sonority islands." I attempt two explanations of sonority islands. The first, with analogues in the existing phonological literature, is that syllabification in these unusual languages works along a "logical scale"--following Mortensen (2006), a scale compiled without reference to phonetic substance--rather than a substantive, phonetically-derived scale. Justification for this approach can be found in the historical relationship between vowel-like fricatives and vowels proper: the former are overwhelmingly derivable from high vowels. This amounts to equating vowel-like fricatives in sonority islands with vowels. Formally identifying fricatives in sonority islands as vowels at some level of abstraction is adequate, but it is unsatisfying in that it discards phonological distinctions between the two classes. I review these distinctions and conclude by presenting a second explanation that is not necessarily mutually exclusive with the first: the characteristic turbulence of fricatives may explain their behavior as "sonority islands," both in the sense that turbulence is frequently exploited as salience in unusual attested scales (e.g. Nuxalk) and in that it feasibly allows a language to maintain the logical scalar relationship between two classes of segments in a relatively substantive manner.

APRIL 2 -

STEPHANIE FARMER, LEV MICHAEL, JOHN SYLAK
UC BERKELEY

Nasal Consonant Harmony in Máíhɨki

In this talk we present an analysis of nasal harmony in Máíhɨki, a Western Tukanoan language, and argue that nasal harmony in this language is best analyzed as nasal consonant harmony, and not as the more common phenomenon of nasal spreading, or 'nasalization harmony' (Hansson 2001). Nasalization harmony characteristically affects all segments that fall within its span (as defined by the nasalization trigger and opaque segments, or other relevant boundaries), while nasal consonant harmony does not result in the nasalization of vowels separating the trigger and target consonant(s).

The Tukanoan languages (particularly those of the Eastern branch) figure prominently in the theoretical literature on nasal harmony (e.g. Boersma 2000, Piggott and van der Hulst 1997, Walker 1998), since they exhibit extensive nasal spreading. In general, nasalization in Tukanoan languages is characterized as a morpheme-level feature that spreads from left to right within a morpheme, with generally a very small number of opaque segments, though details vary from language to language. Although some Bantu languages demonstrate nasal consonant harmony (e.g. Yaka; Hyman 1995), no Tukanoan languages have been identified as doing so.

We argue that nasality is a morpheme-level feature in Máíhɨki, as in other Tukanoan languages, and that it preferentially docks to the leftmost nasalization targets (/b/, /d/, or /j/) in the morpheme. When these segments nasalize to /m/, /n/, or /ɲ/, respectively, they serve as nasalization triggers for the nasalization targets to the right in the same morpheme. Voiceless consonants and /g/ (< *k) are not targets for nasalization, and the nasal consonant harmony process leaves vowels intervening between the trigger and target(s) unaffected. The sole instance in which vowels nasalize is when no target consonants are available in a morpheme, in which case the leftmost vowel in the morpheme is nasalized (e.g. [tãke] 'monkey sp.').

We devote particular attention to phonetic evidence regarding vowel nasalization, since we argue that previous descriptions of nasality in the language (e.g. Velie 1976) have incorrectly characterized vowels between nasalization triggers and targets as nasalized.

APRIL 9 -

KEITH JOHNSON, GREG FINLEY, SHINAE KANG, CARSON MILLER
UC BERKELEY

Factors mediating consonant perception: A report from the phonology lab

In this hour we will present some of the speech perception work that we have been conducting in the phonology lab this year. Recent experiments have tested compensation for rounding coarticulation as mediated by linguistic experience (in this case, native language) and by visual perception. We will present preliminary results of these experiments: French listeners show more consistent interpretation of the rounding on the high front vowel /y/ than English-speaking listeners; and listeners consider visual lip-rounding cues in determining degree of roundedness, although this is stronger for some vowels than others. We will also discuss several new questions, hypotheses, and difficulties that have arisen from these experiments.

APRIL 16 -

*CANCELED*
JUDITH KROLL
PENNSYLVANIA STATE UNIVERSITY

Cross-language competition begins during speech planning but extends into bilingual speech

Recent bilingual studies have shown that both languages are engaged when only a single language is required. Critically, cross-language activation occurs in tasks that are highly skilled, such as listening, reading, and speaking. However, bilinguals do not ordinarily suffer the consequences of cross-language interference, suggesting that they possess a mechanism of cognitive control that allows them to effectively select the language they intend to use. I will present data from three studies that use acoustic measures to demonstrate that not only are both languages active during the earliest stages of planning, but that cross-language activity extends into the execution of the speech plan.

APRIL 23 -

DARYA KAVITSKAYA
UC BERKELEY

Vowels and vocalic processes in Crimean Tatar

In this talk, I will present my fieldwork on Crimean Tatar, an understudied West Kipchak language of the Turkic language family (Bogoroditskii 1933; Kavitskaya 2010). Crimean Tatar is spoken mainly in the Crimean peninsula, Ukraine, and in Uzbekistan. Traditionally, Crimean Tatar is subdivided into three dialects, Southern, Central, and Northern (Berta 1998). The Central dialect is now used as the standard variety of Crimean Tatar, while the number of speakers of the other two dialects is rapidly diminishing.

I will first discuss the sociolinguistic and dialectological situation of Crimean Tatar and then concentrate on two issues in Crimean Tatar phonology: the phonological representation of high vowels and the opaque interaction of vowel harmony, palatalization, and syncope.

APRIL 30 -

ARTO ANTTILA
STANFORD UNIVERSITY

Quantity alternations in Dagaare*

Dagaare (Gur, Niger-Congo) is a tone language and there is little direct evidence for stress. In this talk, I develop the view that a number of vowel shortening and lengthening processes in Dagaare are best understood in metrical terms as consequences of a word-initial moraic trochee (Anttila and Bodomo 2009).

*This talk is based on joint work with Adams Bodomo of the University of Hong Kong