Phorum 2013

Schedule of Talks for Fall 2013

Previous Meetings

SEPTEMBER 9 -

LARRY HYMAN
UC BERKELEY

What is phonological typology?

In this talk I am concerned with the following questions:

  1. What is phonological typology?

  2. How are phonological typology and phonetic typology the same/different?

  3. How are phonological typology and general phonology the same/different?

  4. How are phonological typology and general typology the same/different?

Despite earlier work by Trubetzkoy, Jakobson, Martinet, Greenberg and others, and its inclusion in even earlier efforts towards “holistic” typology (see Plank 1998), phonological typology is often underrepresented or even excluded in typology textbooks (see Hyman 2007). At the same time, many, if not most, phonologists do not see a difference between phonological typology and cross-linguistic (formal and descriptive) phonology. As a result, they often address issues of comparison without awareness of the field of typology and its concern with distributions, e.g. Bickel’s (2007) “What, where, why?”, and with little involvement in the foundational and methodological questions or controversies peppering the pages of Linguistic Typology, e.g. whether pre-established categories exist (Haspelmath 2007 vs. Newmeyer 2007). To address the above (and other) questions, Frans Plank, Aditi Lahiri and I co-organized the following workshop in Oxford on August 11-13, 2013:

http://www.ling-phil.ox.ac.uk/files/research/Phon-Typ-Schedule.pdf

In this talk, presented first at Oxford, I provide a number of reasons why (phonological) typology should not be about assigning languages to “types”, nor can it be limited to examining the sameness vs. difference of surface inventories. Instead, like phonology in general, the attention of phonological typology must be on STRUCTURE, i.e. on the input-output relations between the different levels and domains required to account for phonological properties across languages.

SEPTEMBER 16 -

ANDRÉA DAVIS
UNIVERSITY OF ARIZONA

Learning robust word representations

A number of studies with infants and with young children suggest that hearing words produced by multiple talkers helps learners to develop more robust word representations. However, pilot work suggests two important caveats to previous data: 1) only less phonologically proficient learners benefit from phonetic variation in new word perception, but 2) all learners may benefit from variation in new word production. The proposed studies seek further evidence to bolster these two points, as well as the beginnings of an explanation of why production differs from perception.

SEPTEMBER 23 -

ERIN DONNELLY
UC BERKELEY

When /n/ isn't just a nasal: codas in Choapan Zapotec

Cross-linguistically, codas are often banned entirely, or are tolerated only when filled by sonorant phones (Zamuner 2003). Choapan Zapotec is not unusual in this regard: when a word-final coda is present at all, only /j/, /w/, or /n/ can fill it, and word-medial codas are almost unattested. However, liquids, which most phonological theories treat as more sonorant than nasals, can never be codas in Choapan. Furthermore, /n/, /j/, and /w/ exhibit special features word-finally that are never associated with consonants in other positions, including these same phonemes in onsets. Based on these positional and featural patterns, I analyze the word-final allomorph of /n/ in Choapan as a glide (cf. Ferre 1988). These data can contribute to theories regarding lenition in codas, as well as the putative position of nasal consonants in sonority hierarchies.

SEPTEMBER 30 -

SAM BOWMAN AND BENJAMIN LOKSHIN
STANFORD

Idiosyncratic transparent vowels in Kazakh

Idiosyncratically transparent vowels - those that fail to either undergo or trigger harmony in a particular morpheme but do both elsewhere - have not been documented in any language, and have been claimed to be impossible as recently as Törkenczy (2013). We describe two affixes in Kazakh that contain idiosyncratically transparent vowels, presenting evidence from wordlist elicitations and phonetic studies with two native speakers from different regions. We discuss the implications of these findings for constraint-based theories of vowel harmony and present analyses in two recent systems: both Rhodes's (2010) system for vowel harmony in Agreement by Correspondence and Kimper's (2011) Trigger Competition can be made to accommodate lexical specifications that force vowels in particular morphemes to act as transparent, and we show that implementing these lexical specifications in Trigger Competition forces us to make somewhat stronger predictions about the rarity of idiosyncratic transparency.

OCTOBER 7 -

SHINAE KANG
UC BERKELEY

Audio-visual compensation for coarticulation depends on listeners' familiarity with the sound

This study investigates how visual-phonetic information affects compensation for coarticulation in speech perception. Some studies have suggested that listeners' phonological knowledge (obtained through gestural perception of the sound) plays a key role in shifting auditory percepts during compensation (Fowler 2006, Mitterer 2006). However, these studies confound knowledge of articulation with phonological knowledge.

In this paper, we try to distinguish the two confounded factors (phonological knowledge and gestural perception) by testing round vowels that are either native or non-native in American English. Listeners are known to hear more [sh] before round vowels, compensating for anticipatory lip-rounding (Mann & Repp 1980). Therefore, if the compensation effect were indeed caused by gestural perception, listeners would compensate for both native and non-native sounds when they see the lip-rounding. We tested the compensation effect in mid round vowels: [o] and [oe] (with [e] as a baseline). A series of CV syllables was created with a fricative continuum from [s] to [sh] spliced before [e], [o], and [oe]. To balance the stimuli, vowels were extracted from both (s)V and (sh)V contexts. The syllables were aligned with videos of a face saying [s]V or [sh]V, with separate movies made for each vowel environment. Three groups of American English listeners participated in audio-only, audio-visual, and video-only experiments (A: 15 / AV: 17 / V: 7). While a compensation effect was found for both [o] and [oe] in the audio-only experiment, it was driven by a greater shift in [e] due to the vowel transition cue. In the audio-visual experiment, compensation decreased overall. Although the videos had similar effects for [o] and [oe] in the video-only condition, they had a greater effect on [o] in the AV condition. In summary, we found no evidence that knowing the gestural information enhances listeners' compensation for coarticulation. The result is discussed in light of findings from other research.

OCTOBER 14 -

JOHN OHALA AND MARC ETTLINGER
UC BERKELEY AND VA NORTHERN CALIFORNIA HEALTHCARE SYSTEM

The next big thing in phonology: the scientific method

Everyone who has had a decent high school science class knows what the scientific method is: observe (in a targeted domain), hypothesize (an explanation for the observations), test (the hypothesis), publish (details of the previous three). But the scientific method is not applied in various human endeavors (religion, fad diets, get-rich-quick schemes) and is only poorly manifested in certain scholarly disciplines, e.g., anthropology and law. In our presentation we will explore the extent to which modern mainstream phonology does and/or is able to use the scientific method in its pursuit of an understanding of the grammar attributed to native speakers.

OCTOBER 21 -

FLORIAN LIONNET
UC BERKELEY

Phonological teamwork: an Agreement by Correspondence account of multiple-trigger assimilation

Multiple-trigger assimilations pose notoriously difficult problems for standard autosegmental analyses (Flemming 1997). I present the unusual double-trigger rounding harmony of Laal (unclassified, Chad), where V1 in a disyllabic stem assimilates in rounding to a same-height round V2, iff the root contains a labial consonant in any position: before (1a) or after (1b) the target vowel.

(1) Laal doubly triggered rounding harmony:

  1. /ɓɨ̀r-ú/ > ɓùr-ú ‘hook-pl’ (Height, Lab > rounding)

  2. /tə̀b-ó/ > tòb-ó ‘fish(sp.)-pl’ (Height, Lab > rounding)

  3. /g!ń-ù/ > g!ń-ù ‘net-pl’ (Height, *Lab > no rounding)

  4. /m̀̀g-ú/ > m̀̀g-ú ‘tamarind-pl’ (*Height, Lab > no rounding)

  5. /d̀n-ú/ > d̀n-ú ‘tree(sp.)-pl’ (*Height, *Lab > no rounding)

I show that Agreement by Correspondence, initially developed for consonant agreement (Hansson 2001, Rose & Walker 2004), recently extended to vowel harmony (Rhodes 2012), consonant-tone interaction (Shih 2013), and harmony processes involving contour segments and tones (Inkelas & Shih 2013), can account for multiple-trigger assimilations such as that of Laal, provided it can access subphonemic phonetic information. 

OCTOBER 28 -

BODO WINTER
UC MERCED

Categorical speech perception is not that categorical

Humans perceive speech in a categorical fashion - but the cognitive processes that give rise to this are characterized by continuity. I will discuss results from work with Leonardo Lancia (MPI EVA, Leipzig) that provide converging evidence for the time-varying and dynamically changing nature of categorical speech perception: A connectionist computational model is built that synthesizes multiple cognitive processes (perceptual competition, adaptation, perceptual learning) in the same architecture. The model's predictions are tested via two experiments with French participants. Overall, this work highlights how categorical perception may arise from an underlying dynamic process, and how processes at multiple time scales affect categorical perception. I will also discuss some new preliminary results from a mouse tracking experiment on the same topic that did not pan out quite as planned.

NOVEMBER 4 -

LISE MENN
UNIVERSITY OF COLORADO AT BOULDER

The Linked-Attractor Model of child phonology: update

The Linked-Attractor Model for child phonology is a developmental exemplar model for phonological representation. It’s the offspring of two older models: my old ‘two-lexicon’ model, which stores both input and output representations for words in order to deal with the slow, partly lexical spread of new input-to-output mappings as the child’s phonology develops, and Vihman’s template model, which represents a given child’s output patterns as attractors in an articulatory space and recognizes the continuity between late babble and early speech. In the Linked-Attractor model, both input and output representations are attractors in a large articulatory-acoustic space, and, a new idea due to my student Brent Nicholas, the mappings from input to output are also attractors.

Update: The Linked-Attractor Model is deliberately eclectic and redundant, responding both to the descriptive value of multiple formal devices (constraints, rules, and templates), and also to the limitations of each of these descriptive devices that we see when we look at the variety and the lumpiness of the ways that children organize their early vocabulary. The talk will focus on data supporting these assertions.

The model needs elaboration (how would the representations of segments and suprasegmental units interact?) and testing against sufficiently detailed corpora; anybody who wants to work with it is welcome to join the fun. 

NOVEMBER 18 -

CLARA COHEN
UC BERKELEY

Effects of abstract and usage information on pronunciation variation

NOVEMBER 25 -

TBA

DECEMBER 2 -

MELINDA FRICKE
UC BERKELEY, PENN STATE

A retrieval-based account of word and segment durations in speech production

Articulatory gestures vary in duration for many reasons. At the segmental level, duration is used to signal linguistic contrast between phonemes, e.g., the primary difference between voiced and voiceless stops in English is the duration of the vocal fold abduction gesture. At the suprasegmental level, longer duration is associated with prosodic prominence (Aylett and Turk, 2004) and contrastive focus (Katz and Selkirk, 2011), among other discourse-related factors. An additional factor hypothesized to affect the duration of articulatory gestures is the speed or ease with which words can be retrieved from the lexicon. Retrieval-based accounts of phonetic variation have tied the accessibility of words during speech planning to their duration in connected speech (Bell et al., 2009; Gahl et al., 2012). In this talk, I present data on word and segment durations produced in a diverse set of speaking contexts, bringing new evidence to bear on the relationship between word retrieval, phonological encoding, and articulatory duration. Data from a word learning experiment with preschoolers, single word and conversational speech produced by adult speakers, and pilot data on bilingual speech planning will be discussed. I argue for an interactive, cascading model of speech production, in which the availability of phonological segments during planning affects the duration of articulatory gestures.

DECEMBER 9 -

BENJAMIN MUNSON
UNIVERSITY OF MINNESOTA

Explicit versus implicit social priming in speech perception

In this talk, I will present the results of two lines of research on gender and phonetic variation. The first of these examines relationships among gender expression and gendered speech in boys aged 5 to 13. In this line of research, my colleagues (Janet Pierrehumbert, Ken Zucker, Laura Crocker, Allison Owen-Anderson) and I have shown that boys with a clinical diagnosis of Gender Identity Disorder have speech that is less prototypically boy-like than boys without this label. In this part of the talk, we can brainstorm the mechanisms that might underlie these differences, and ways to best measure the perception of gender through speech. The second part of the talk examines how the perception of gender affects the identification of anterior sibilant fricatives. Strand and Johnson (1996) showed that listeners' phoneme identification can be biased by purely social expectations about how talkers should sound. This finding has been replicated many times with a variety of social categories, including gender, age, social class, and regional dialect. In this part of the Phorum, we will talk about how the type of priming affects this process. In particular, we will talk about the differences between studies with very explicit priming methods (such as presenting a picture of a talker with an obvious attribute, or telling the listeners to imagine that speech was produced by a particular type of talker) versus ones that are very implicit. I will present some data on s-S categorization that tried to contrast these two types of priming, and we will brainstorm better ways to implicitly prime social categories than the method that I used.


Schedule of Talks for Spring 2013

PREVIOUS MEETINGS:

JANUARY 31 -

JONAH KATZ
UC BERKELEY

Lenition and contrast revisited

*Please note that this meeting is on Thursday and starts at 3 PM*

This paper identifies a putative universal involving certain types of lenition and fortition, shows that the universal can only be captured if lenition is formally unified with fortition, and proposes a unified analysis couched in phonetically-driven optimality theory.

I begin by distinguishing between two types of lenition (Segeral & Scheer 1999). One type involves processes like degemination, debuccalization, and deletion; occurs in typically ‘weak’ positions such as codas; and sometimes results in positional neutralization of contrasts. A typical example comes from Slavey (Rice 1989), where coda consonants debuccalize to /h/. This type of lenition is much like any other case of positional neutralization (e.g. major place, voicing) and can be analyzed on a par with those phenomena. A second type, low-pass lenition, involves processes like voicing, spirantization, and flapping, and occurs in intervocalic or non-initial position. A canonical case of low-pass lenition is observed in Spanish, where (simplifying slightly) phrase-initial voiced stops are in complementary distribution with continuants elsewhere. Gurevich (2003) claims that lenition in general is rarely neutralizing; I argue that low-pass lenition in particular never results in positional neutralization, discussing apparent counterexamples including Burmese, Kannada, and American English.

The non-neutralizing character of low-pass lenition is difficult to analyze; recent research in fact misses (Kirchner 2004, Kingston 2008) or denies (Smith 2008, Kaplan 2010) this aspect entirely. The problem in all of these approaches is that one set of constraints drives lenition, while a separate set drives fortition. Any such analysis predicts the unattested positional neutralization pattern as a possible language. Typological data show that low-pass lenition is not independent: it occurs if and only if domain-initial fortition does. Separate constraint sets can’t capture this fact; only a unified analysis can.

Kingston (2008) provides a possible phonetic basis for such an analysis: fortition creates larger changes in the intensity of the acoustic signal at prosodic boundaries; lenition minimizes such disruption domain-internally. The phenomenon is essentially about the perception of contrasts between the presence and absence of a prosodic boundary. I implement this idea with a family of boundary-disruption constraints, which call for disruptions in low-frequency energy to occur at and only at prosodic boundaries. The formalism captures a number of tricky typological properties of lenition phenomena, including the problematic allophonic nature of lenition alternations. 

MARCH 4 -

MATTHEW FAYTAK
UC BERKELEY

Obstruent Vowels in Kom

Obstruentized vowels, rare and poorly understood, have been impressionistically described for a number of languages. I argue that Kom, a Ring language within the Grassfields Bantu family of Cameroon and Nigeria, has two vowel phonemes specified for and produced with partial obstruency. One vowel is produced with a labiodental constriction; the other with a coronal constriction; both are followed by a brief and highly variable vowel that has hindered proper identification and analysis of these sounds. I present phonetic evidence for these complex realizations and phonological and distributional evidence that these realizations should nonetheless be considered unit phonemes.

MARCH 11 -

NATHANIEL DUMAS
UC SANTA BARBARA

MARCH 18 -

JOCHEN TROMMER AND EVA ZIMMERMANN
UNIVERSITY OF LEIPZIG

A typology of moraic linearization

One of the major assets of Autosegmental Phonology is that it allows procedural techniques of morphological exponence to be reduced to a simple generalized concept of concatenation. In particular, the moraic approach to phonological length (Hayes, 1989) gives rise to a maximally simple account of morphologically triggered gemination, vowel lengthening, and coda epenthesis: as affixation of a μ. Although mora affixation is a standard assumption in numerous analyses (e.g. Lombardi and McCarthy (1991); Samek-Lodovici (1992); Davis and Ueda (2002); Grimes (2002); Davis and Ueda (2006); Álvarez (2005); Stonham (2007); Yoon (2008); Haugen and Kennard (2008)), some basic questions about the nature of mora affixation have never been properly addressed, one of them being the question of the linearization of prosodic affixes.

In this talk, we argue that prosodic nodes are assigned to a fixed position on their tier by the morphology and cannot be dislocated by later processes. Prosodic nodes are prefixed or suffixed to specific peripheral or prominent elements of their morphological bases on an affix-specific (i.e. phonologically arbitrary) basis. We therefore extend the assumptions about segmental affixation in Yu (2002, 2007) (cf. also Fitzpatrick (2004)) that affixation targets a specific member of a set of crosslinguistically possible anchor points by lexical subcategorization to prosodic affixation. An important empirical prediction of the subcategorization-based system is that an affix-μ cannot move to different linear positions under the pressure of phonological constraints. Based on a typological survey of quantity-manipulating morphological phenomena, we show that this prediction is true and argue that apparent counterevidence such as Keley-I gemination under the analysis of Samek-Lodovici (1992) is due to a morphological misinterpretation of the data.


APRIL 1 -

CLARA COHEN
UC BERKELEY

The (non)-effect of probability on the production of morphemes

Speech units of many sizes--segments, syllables, words or even full clauses--tend to be phonetically reduced when they are more probable. Vowels are more central, segments are more frequently deleted, and the duration of the unit is shorter. Yet existing research on Dutch interfixes (Kuperman et al. 2007) has shown that more predictable interfixes are phonetically enhanced: they are produced with longer, not shorter, duration. Why does probability have two opposite effects on pronunciation? Is it the case that morphemes simply behave differently from other speech units like words and syllables, or is the difference due to something else?

In my work I investigate two hypotheses concerning the relationship between morphemes and probability. Under the first hypothesis, more probable morphemes are phonetically enhanced, rather than phonetically reduced. Under the second hypothesis, it is necessary to distinguish two sorts of probability: contextual and global. Contextually probable units are probable only in the context of the specific utterance. Globally probable units are those which are more probable compared to related competitors in the language, regardless of utterance context. Under this hypothesis, the frequently observed phonetic reduction patterns are associated with higher contextual probability, whereas the phonetic enhancement observed by Kuperman et al. is the result of the interfixes' higher global probability.

To test these hypotheses, I am carrying out two experiments, one in English, and one in Russian. In these experiments I investigate the production of subject-verb agreement suffixes in sentences where both singular and plural agreement are possible, but with varying contextual probabilities. In English, these are sentences like "The cleaning staff in the meeting rooms look/looks grumpy." In Russian, these are sentences like "On the table were ([byl-i/o]) four large books." In particular, I measure the duration of the agreement suffixes (singular -s in English, and both singular -o and plural -i in Russian). If it is the case that morphemes simply behave differently from other speech units, then the more probable agreement suffixes should show phonetic enhancement, regardless of whether the probability is measured contextually or globally. On the other hand, if it is the case that contextual probability consistently results in phonetic reduction, then the more contextually probable suffixes should be phonetically reduced, while the more globally probable suffixes (as measured by the ratio of wordform to lemma frequency) should be phonetically enhanced.

APRIL 8 -

FLORIAN LIONNET
UC BERKELEY 

Doubly conditioned rounding in Laal: Conditional licensing and correspondence chain

This presentation proposes an account of a typologically rare, doubly conditioned phonological process attested in Laal (unclassified, southern Chad). In this language with maximally disyllabic words, the first vowel of the root is rounded in the presence of a round V2 (most of the time a suffix) only if two conditions are met: both vowels are of identical height (Height condition), AND the root contains a labial consonant in any position (Lab condition):

  1. /ɓɨ̀r-ú/ > ɓùr-ú ‘hook-pl’ (Height, Lab > rounding)

  2. /tə̀b-ó/ > tòb-ó ‘fish.sp.-pl’ (Height, Lab > rounding)

  3. /ŋ-u/ > jíŋ-ù ‘harpoon-pl’ (Height, *Lab > no rounding)

  4. /mèn-ú/ > mèn-ú ‘hoe-pl’ (*Height, Lab > no rounding)

This doubly conditioned assimilation poses problems to an analysis in terms of spreading from the suffix vowel. In particular, why should spreading to V1 of the [round] feature borne by V2 be conditioned by a parameter that is external to the two vowels involved (the presence of a labial consonant in the root)?

I propose instead to analyze this process as a case of conditional (i.e. coerced) licensing: the redundant V-place feature [round] borne by the root labial consonant(s) is sometimes forced out of its inertness. How? By being licensed by V1. When? When V2 is round AND both vowels are of identical height. I hypothesize that this process is driven by the articulatory process of word-level plateauing of V-place gestures, achieved when both vowels have identical place features (only height and rounding are subject to plateauing in Laal). When one of the two round features in the word is redundant, it may only trigger plateauing (by being realized by V1) if another gesture (in this case, that associated with vowel height) is also plateaued in the same word.

I also propose a (very tentative!) formal analysis couched in Optimality Theory, which draws from Itô et al.’s (1995) notion of feature licensing, as well as the notion of similarity-driven correspondence proposed by Walker (2000a,b, 2001), Hansson (2001) and Rose and Walker (2004). In addition, I propose the notion of a correspondence chain, and the possibility of stating constraints over correspondence chains in addition to corresponding segments. A correspondence chain is defined in terms of transitivity between several correspondence relations: if x corresponds to y and y corresponds to z, then x, y, and z are in an (indirect) correspondence relation.

APRIL 15 -

DONCA STERIADE (WITH PETER GRAFF, PAUL MARTY)
MIT

French Glides after C-Liquid: the effect of contrast distinctiveness

APRIL 22 -

JOHN OHALA
UC BERKELEY

ATR Research: Progress Report

ATR (Advanced Tongue Root) has been recognized as a feature underlying vowel harmony in numerous languages, e.g., Mongolian, but especially many West African languages. It has also been attributed to vowel pairs in German, English (and other Germanic languages), and Hindi, e.g., in English, +ATR vs. -ATR for the vowels in beet vs. bit and suit vs. soot. But it has also been linked to preceding voiced stops (e.g., Shanghainese, Sundanese), and in such cases the voiced stop has been implicated in modifying the vowel quality, tone, and/or voice quality (i.e., lax voice) of the following vowel. I will report current progress aimed at finding out how ATR may be associated with voiced stops and how ATR on voiced stops might induce lax voice on a following vowel. The evidence will be drawn from acoustic analysis, EGG, fiberoscopy, ultrasound, and MRI.

APRIL 29 -

SHARON INKELAS AND STEPHANIE SHIH
UC BERKELEY, STANFORD/UC BERKELEY

Contour segments and tones in (sub)segmental Agreement by Correspondence

Phonological theory has long been challenged by the behavior of contour segments and contour tones in harmony patterns. Sometimes these entities participate in phonology as whole units; at other times, their subsegmental parts act independently. This talk builds on insights from Autosegmental Theory, Aperture Theory, and Articulatory Phonology to propose a novel phonological representation for segments: all segments, including contours, are subdivided into a maximum of three ordered, quantized subsegments that host unitary sets of distinctive features and can participate in harmony (and other processes). By incorporating these quantized segmental representations into Agreement by Correspondence, we can offer, for the first time, a unified treatment for the behavior of both contour segments and contour tones across observed phonological patterns of harmony.

MAY 6 -

CALBERT GRAHAM
UNIVERSITY OF CAMBRIDGE, UK

Revisiting f0 range production in Japanese-English simultaneous bilinguals

A question frequently posed in cross-language research on speaking f0 range is whether and in what way bilinguals vary their f0 range according to the language they are speaking. Results in previous studies involving Japanese-(American) English bilinguals have been inconclusive, mainly because analysis is often confounded by sociophonetic influences on f0 range associated with the enactment of strict gender-defining roles in Japanese society. In this study, I report the results of an experiment in which 12 Japanese-AmEng simultaneous bilinguals (6 males, 6 females; all undergrads at UCB) were recorded performing comparable reading tasks in their two languages. The study builds on a relatively new approach to measuring f0 range - proposed by Patterson 2000 and operationalised in Mennen et al. 2012 - that computes its high and low points from actual tonal targets in the intonational phonology. Also, unlike in most previous studies where f0 range is traditionally treated as a one-dimensional measure, f0 range in both languages was measured along two quasi-independent dimensions: level and span. The results reveal that Japanese was realised at a significantly higher level and with a wider range of frequencies (span) than English. This finding provides new insights into the relation between intonational structure and f0 range in two typologically different prosodic systems.