Phorum 2007

SCHEDULE OF TALKS: FALL 2007

SEPTEMBER 10 - SAM TILSEN, UNIVERSITY OF CALIFORNIA, BERKELEY: A PERSEVERATORY COARTICULATION STUDY USING A PHONOLOGICAL PHONEMIC-RESPONSE PRIMING TASK

Vowel-to-vowel perseveratory coarticulation has been observed in a number of languages, and can occur between unreduced vowels across a medial schwa (Magen 1997). The two-syllable potential range of this phenomenon suggests that V-to-V perseveratory coarticulation is caused in part by cognitive mechanisms, rather than by purely mechanical or inertial forces (cf. Recasens 1984), but is there any other evidence for a cognitive mechanism behind perseveratory coarticulation? Whalen (1990) conducted a study that addressed this question, in which VCV utterances were produced under conditions in which planning of the target vowel was limited; however, due to design constraints, the results were inconclusive. In this talk I report the results of a phonological phonemic-response priming experiment intended to address the same question, although the design of this task is in some respects the opposite of the Whalen (1990) design: here the source vowel in a two-vowel sequence was planned but not articulated before the target was produced. Some unexpected patterns were observed in this study, and they suggest a more complex understanding of the cognitive mechanisms involved in speech planning. Intriguingly, for some subjects, response vowel formant measures exhibited significant "contra-articulatory" effects -- response vowels were acoustically less similar to a previously planned vowel. Drawing on similar results in oculomotor, reaching, and neurophysiological studies, and on models of neural inhibition and dynamical field representations of movement planning, I suggest that lexical faithfulness arises from an inhibitory motor planning mechanism, and that the balance, or lack thereof, between this mechanism and coarticulatory forces can account for the variety of V-to-V coarticulatory patterns observed across languages.

SEPTEMBER 17 - JOHN HOUDE, UNIVERSITY OF CALIFORNIA, SAN FRANCISCO: DYNAMIC CORTICAL IMAGING OF SPEECH COMPENSATION FOR AUDITORY FEEDBACK PERTURBATIONS

Understanding how auditory feedback is processed and used during speech production is a long-standing issue, one that has classically been investigated by looking at how altering auditory feedback affects speech. Recently, the advent of functional neuroimaging methods has also allowed examination of how producing speech affects the neural processes serving auditory perception. Here, whole-head magnetic source imaging (MSI) was used to monitor cortical activity as speakers responded to brief perturbations of the pitch or amplitude of their speech. Prior studies have shown that such perturbations cause compensatory responses in speech motor output. MSI recordings were acquired at the onset of feedback perturbations as the subject continuously vocalized and also when the subject passively listened to recordings of the perturbed feedback. A response to the perturbations was found around auditory cortex between 100 and 400 ms postperturbation that was enhanced while subjects vocalized, as compared to when they passively listened. The size of this response enhancement was a significant predictor of how much a subject compensated for the feedback perturbation. Time-frequency optimized adaptive spatial filtering was also used to localize cortical activations that correlated significantly with degree of compensation across subjects.

SEPTEMBER 24 - DAN SILVERMAN, SAN JOSE STATE UNIVERSITY: "NEUTRALIZING APLOSIVATION AND ANTI-HOMOPHONY IN KOREAN"

Laryngeal neutralization is quite prevalent in pre-consonantal stops, and virtually unattested in pre-vocalic ones. This position of neutralization typically involves the loss of stop release. For aerodynamic and auditory reasons, stop releases are the optimal location for laryngeally-based cues (Kingston 1985, 1990, Silverman 1995, 1996, Steriade 1995, 1997, 2000). If a stop is not released into a more open gesture such as a vowel, it may lose the phonetic cues associated with this interval of the speech stream, among them, cues to the state of the larynx. In the limiting case, the perceptual distinction among contrastive laryngeal states is extinguished completely. This is laryngeal neutralization.

But phonological systems are not merely the immediate product of phonetic processes; the mere existence of a phonetic tendency towards a particular articulatory state does not necessitate a diachronic move toward that state. Many other factors may exert an influence on the phonetic state of affairs, among them the natural tendency for contrast to be maintained. In a synchronic theory like Optimality Theory, this is expressed as the ranking relationship between "faithfulness" constraints and "markedness" constraints.

In this paper I investigate one case of laryngeal neutralization -- Korean aplosivization -- and suggest that the natural tendency toward pre-consonantal aplosivization and consequent neutralization evolved here exactly because of concomitant morphological developments that maintained lexical contrast. My proposal is that the pattern was triggered by the huge influx of Chinese nouns. Phonologically, the aplosive nature of Middle Chinese (MC) root-final stops was gradually adopted in the native Korean vocabulary by supplanting native nouns, and by influencing the phonotactics of the language at large. For example, Korean *t, *tʰ, *tʃ, *tʃʰ (and eventually *s, *s', and perhaps *h) all became unreleased [t̚] when lexically non-pre-vocalic.

Clearly, this potential for neutralization of contrast runs the risk of creating a significant amount of homophony -- a patently counter-functional consequence. However, as I numerically document, modern Korean possesses remarkably little homophony. In the case of nouns, any potential functional damage caused by this massive collapse was avoided by the concomitant development of a compounding process, which was also likely due to contact with Chinese. As compounding rendered words longer, phonetic distinctness among them was increased significantly, thus offsetting any counter-functional consequences of the merger itself. In the case of the verb vocabulary -- which was not supplanted by Chinese loans and thus retained root-final laryngeal contrasts -- obligatory suffixation typically results in root-final stop plosivization (release into a vowel), and so any root-final laryngeal value may be realized in its canonical fashion, that is, at stop release. Indeed, Chinese-influenced compounding may have served a dual role in this scenario. First, it offset any potential homophony that aplosivization might have otherwise induced, and second, due to this, it may have sped the natural tendency toward aplosivization, as there were now fewer functional pressures that would inhibit its development.

Teleology plays no role here. The present day pattern is simply the passive consequence of selectional pressures acting on the variation inherent in speech and language. Homophony was minimal at the outset, and, despite laryngeal neutralization, has remained minimal to the present day.

OCTOBER 1 - CHARLES CHANG, UNIVERSITY OF CALIFORNIA, BERKELEY: THE DIACHRONY OF PALATALS IN ARGENTINE SPANISH

Argentine Spanish is well-known among varieties of Spanish for its "hardened" palatals: the palatal glide /j/ of Iberian Spanish is typically more constricted in Argentine Spanish, resulting in a fricative or affricate in all environments. Some recent studies (e.g., Colantoni 2006) have examined the change from /j/ to /ʒ/ in great detail. However, in the conversational speech of Buenos Aires one hears [ʃ] as often as [ʒ], resulting in discrepancies among, and even within, popular language guides. For instance, Palmerlee et al. (2005) call attention to "the trait of pronouncing the letters ll and y as 'zh' (as in 'azure') rather than 'y'", but then say that 'y' is pronounced "as the 'sh' in 'ship' when used as a consonant".

In this paper, I present preliminary data on variation between voiced and voiceless realizations of the palatal phoneme and suggest that this variation is indicative of a sound change in progress. In a study of eleven middle-class speakers from Buenos Aires ranging from 18 to 79 years of age, production of the palatal phoneme was elicited through a reading task containing a variety of morphological, phonological, and orthographic environments. Three measures were taken on these data: (1) overall percent usage of [ʃ], (2) average percent voicelessness over a palatal's duration, and (3) consonant-vowel intensity ratios comparing a palatal to adjacent vowels.

While the usage of [ʃ] is not correlated with geography or gender, it is clearly correlated with age. Pearson's correlations of percent usage of [ʃ] with year of birth are highly significant (p < .0005): speakers born before 1945 almost exclusively say [ʒ], while speakers born after 1975 use mostly [ʃ], with speakers born between 1945 and 1975 showing more variability. Percent voicelessness and relative intensity of the palatals are significantly correlated with age as well. This age-graded pattern appears to reflect a sound change being led by younger speakers rather than a change across the lifespan (cf. Guy and Boyd 1990; Sankoff and Blondeau, in press). Both the fact that individual usage does not seem to change over time and the fact that usage of [ʃ], the "non-standard" form, does not increase in old age are indicative of the sort of individual linguistic stability that is typical of a sound change in progress.
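
As a purely illustrative sketch of the kind of correlation analysis reported above, the Python snippet below computes a Pearson correlation between year of birth and percent [ʃ] usage; the speaker values are invented placeholders, not the study's data.

```python
# Illustrative sketch only: the (year_of_birth, pct_sh) pairs below are
# invented placeholders, not measurements from the study.
from scipy.stats import pearsonr

speakers = [
    (1930, 2.0), (1938, 5.0), (1944, 8.0),     # older speakers: mostly [ʒ]
    (1955, 35.0), (1962, 48.0), (1970, 60.0),  # middle group: variable
    (1978, 85.0), (1983, 92.0), (1989, 97.0),  # younger speakers: mostly [ʃ]
]

years = [year for year, _ in speakers]
pct_sh = [pct for _, pct in speakers]

# Pearson correlation of percent [ʃ] usage with year of birth
r, p_value = pearsonr(years, pct_sh)
print(f"Pearson r = {r:.2f}, p = {p_value:.4f}")
```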

OCTOBER 8 - ALEX JAKER, STANFORD UNIVERSITY: GEMINATION AND TONAL FEET IN YELLOWKNIFE DOGRIB

The Yellowknife (Weledeh) dialect of Dogrib (Tlicho Yatɪì) is spoken by people of the Yellowknives Dene First Nation, in and around Yellowknife, Northwest Territories. Within the formal framework of Lexical Phonology (Kiparsky 1982), this paper argues that the over-arching generalization in the morphophonology of Yellowknife Dogrib is captured by the constraint LEVELFOOT, which regulates tonal patterns within a moraic trochee. This constraint prefers (high-high) and (low-low) feet over (high-low) and (low-high) feet. LEVELFOOT is satisfied differently in different morphophonological domains. At the stem level, LEVELFOOT is satisfied by vowel deletion. For example, an underlying sequence /bò-kà-whe-wìd-t'è/, L-L-H-L-L, is realized as bò(kàwhì)tè, 'two of us have cooked,' with an entire syllable deleted. At the word level, LEVELFOOT is satisfied by gemination and re-footing. At the postlexical level, any remaining non-level feet are accommodated by allophonic lowering of a high tone to a middle tone. Contrary to what has been previously assumed (cf. Ackroyd 1982, Marinakis 2002), this analysis assumes that Dogrib has phonological length for both consonants and vowels, which is supported by preliminary instrumental data.

These phenomena raise issues regarding diachrony and synchronic naturalness: it has been claimed, for example, that tone cannot trigger vowel deletion (Blumenfeld 2006). On the other hand, this process most likely originated as a prosodically conditioned syncope rule, at a stage when low-toned vowels occurred in heavy syllables closed by glottal stops and high-toned vowels in light, unfooted syllables (cf. Krauss 2005). Nevertheless, I conclude that, regardless of its historical origins, the LEVELFOOT constraint is still synchronically active and productive in Yellowknife Dogrib.

OCTOBER 15 - ZHAO YUAN, STANFORD UNIVERSITY: THE EFFECT OF LEXICAL FREQUENCY ON TONE SPACE DISPERSION: EVIDENCE FROM CANTONESE TONE PRODUCTION

Previous research has identified robust effects of lexical factors like word frequency, predictability, and neighborhood density on segmental production. Frequent, predictable, or easy (sparse-neighborhood) words are produced with weaker segmental articulation and shorter duration. Explanations of this weakening (whether drawing on lexical access, articulatory routinization, prosodic intermediation, and/or direct hearer modeling) predict that the production of lexical tone should also be affected by lexical factors such as usage frequency. This study tests this prediction by investigating whether usage frequency affects tone production in Cantonese. We recorded Cantonese monosyllabic words of high and low usage frequency, controlling for segmental factors. The results show that lexical factors do influence tonal production. Words of the same tone but of different usage frequency differ significantly in pitch height. Low-frequency words are hyperarticulated and produced with relatively higher pitch. The overall tone space of low-frequency words is more expanded than that of their high-frequency counterparts. Furthermore, as a result of tone space expansion, the acoustic distinctiveness of highly confusable tones increases. Our results support models of speech production like H&H, in which informativity guides hyperarticulation and hypoarticulation.

OCTOBER 22 - DIDIER DEMOLIN, UNIVERSITÉ LIBRE DE BRUXELLES: SUBGLOTTAL AND INTRA-ORAL PRESSURE IN THE PRODUCTION OF FRICATIVES

The talk will present results of experiments in which subglottal pressure (Ps), intra-oral pressure (Po) and oral airflow (Of) were measured with a group of six French speakers (three female and three male) during the production of fricative consonants. The data come from logatomes (isolated CVCV and the same CVCV in a short carrier sentence) in which each of the six fricatives [f, v, s, z, ʃ, ʒ] was combined with the vowels [i, a, u]. In addition, Ps, Po and Of were recorded in a set of sentences for the six French fricatives. Ps was measured by direct tracheal puncture, and Po with a small plastic tube (2 mm internal diameter) inserted through one nostril to the upper part of the oro-pharynx (just behind the velum). Oral airflow was measured with a small rubber mask set against the cheeks. All parameters and the sound were recorded simultaneously with a Physiologia or EVA2 workstation.

More specifically, the paper evaluates differences in Ps and Po values for voiced and voiceless labio-dental, alveolar and palatal fricatives, word-initially and word-internally between two identical vowels. Comparisons are made between male and female speakers. Oral airflow values were also measured and integrated in order to evaluate the volume velocity for each of the fricative consonants in the study.

These values are essential to know because, to generate frication in voiceless fricatives, the difference between the pressure in the vocal tract (in this case essentially Ps, since the open glottis allows Po to approach Ps) and atmospheric pressure has to be maximal in order to generate turbulent flow at a constriction in the vocal tract. For voiced fricatives, it is the difference between Ps and Po and, at the same time, the difference between Po and atmospheric pressure that have to be maximal to generate voicing and turbulence. The aerodynamic conditions necessary to produce voiced fricatives (in addition to being threatening for voicing) thus require two pressure differences that apparently contradict each other: the pressure difference at the glottis, Δglottis = Ps - Po, and the pressure difference at the constriction in the vocal tract, Δoral = Po - Patm, must both be maximal almost simultaneously. Special attention will therefore be given to the relation between pressure values for the voiced fricatives [v, z, ʒ]. The results allow us to discuss fundamental issues about the regulation and control of speech production and about phonological processes.
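
A compact restatement, in the abstract's own notation, of the two competing pressure requirements described above (the display below is a schematic summary added for clarity, not the authors' formal model):

```latex
% Voiceless fricatives: the glottis is open, so Po approaches Ps;
% frication requires a large pressure drop across the oral constriction.
\[ \Delta_{\mathrm{oral}} = P_o - P_{\mathrm{atm}} \rightarrow \max \]

% Voiced fricatives: both pressure drops must be large almost simultaneously,
% which is what makes voiced fricatives aerodynamically demanding.
\[ \Delta_{\mathrm{glottis}} = P_s - P_o \rightarrow \max \quad \text{(sustains voicing)} \]
\[ \Delta_{\mathrm{oral}} = P_o - P_{\mathrm{atm}} \rightarrow \max \quad \text{(sustains frication)} \]
```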

OCTOBER 29 - KEMS MONAKA, VISITING SCHOLAR, UNIVERSITY OF CALIFORNIA, BERKELEY: QUALITATIVE ASPECTS OF THE ARTICULATION OF PLAIN AND EJECTIVE STOPS: OBSERVATIONS FROM XHOSA AND SHEKGALAGARI

The nature of voiceless unaspirated plosives in some South-Eastern Bantu languages has been controversial for some time. Scholars have remained unclear as to whether these plosives are glottalic or pulmonic. In an attempt to contribute to a better understanding of these sounds, the unaspirated plosives were recorded for Shekgalagari, a Sotho-Tswana language, and Xhosa, an Nguni language, for experimental analysis to determine which airflow mechanism was used in their production. Recordings of 5 native speakers (1 for Xhosa and 4 for Shekgalagari) were made and analysed using the laryngograph (Gx and Lx) and 2 speech signals (Sp and spectrogram). Qualitative information on the stops was derived by inspecting the signals (Gx, Lx, Sp and spectrogram), and quantitative information was derived from VOT values. Whilst the Gx signal, which depicts the displacement of the larynx in the egressive and ingressive airflow mechanisms, deflects upwards for the Xhosa plosives produced in isolation and within a carrier sentence, it remains relatively level for the Shekgalagari stops, indicating an undisplaced larynx for airflow initiation. The Lx signal and the spectrogram depict modal phonation prior to and after the plosives, indicating a lack of glottalization in the Shekgalagari plosives. For the Xhosa stops, creaky voice and decreased open quotient and amplitude of vibration are observed prior to the plosive and at the start of the vowel, signaling a constricted glottis. The burst in the speech signal for Shekgalagari plosives is weak and hardly visible in some cases, indicating a lack of significant air pressure in the vocal tract for these stops. It is, however, clearly visible both in the speech signal and in the spectrogram spike for the Xhosa plosives in the study. VOT values are small for Shekgalagari stops and vary by place of articulation. These results show that the Shekgalagari stops may be plain voiceless unaspirated stops, and that the Xhosa informant, at least, produced them as ejectives. Further acoustic, air pressure, and airflow investigations need to be done to help shed light on the nature of production of the voiceless unaspirated plosives in these languages.

NOVEMBER 5 - ASA DAY!: PRACTICE TALKS FOR THE FALL MEETING OF THE ACOUSTICAL SOCIETY OF AMERICA

NOVEMBER 19 - MARIA-JOSEP SOLÉ, UNIVERSITAT AUTÒNOMA DE BARCELONA: FEATURE INTERACTION AND PHONETIC CONTENT: INTERACTIONS BETWEEN NASALIZATION AND VOICING

The talk will focus on dependency relations of features within a segment and across adjacent segments; specifically, on the dependency between nasalization and voicing. Such dependency relations have been accounted for by 'redundancy rules' or OT constraints (e.g., NAS/VOI or *NC̥), amongst others. The purpose of the talk is to present an account of the physical factors responsible for the dependency between nasality and voicing and to argue that if the interaction between the two features can be explained functionally, then there is no need for a formal statement of the constraints. First, we will show that, due to acoustic-auditory factors, glottal vibration favors the percept of nasality. Second, we will examine the aerodynamic factors responsible for nasality facilitating glottal vibration. We review cross-linguistic data showing that nasals tend to occur or be preserved next to voiced but not voiceless stops (partially explained by the lesser tolerance of voiceless stops to nasalization, Ohala and Ohala 1993) and, crucially, that nasals emerge in oral contexts to facilitate voicing in stops and that voicing contrasts are maintained in nasal but not oral contexts. These patterns suggest that nasal leakage is a maneuver to facilitate voicing in the stop.

We also report the results of a study that examines the effect of pressure variations, introduced with a pseudo-nasal valve, on allophonically devoiced stops in American English. The results show that reducing oropharyngeal pressure affects the transglottal pressure drop: voicing is anticipated in initial stops and prolonged in final stops and stop clusters, lending support to the view that nasal leakage favors voicing. Finally, we argue that formal phonological notations which represent the nasal valve and the larynx at different nodes fail to capture the interaction between nasality and voicing.

NOVEMBER 26 - ANNE PYCHA, UNIVERSITY OF CALIFORNIA, BERKELEY: LENGTHENED AFFRICATES AS A TEST CASE FOR THE PHONETICS-PHONOLOGY INTERFACE

Many phonetic and phonological processes resemble one another, which has led some researchers to suggest that phonetics and phonology are essentially the same. This study compares phonetic and phonological processes of consonant lengthening by analyzing duration measurements collected from Hungarian speakers (n=14). Affricates, which crucially possess a two-part structure, were placed in target positions. Results show that affricates regularly undergo phonetic lengthening at phrase boundaries, and the affected portion of the affricate is always that which lies closer to the boundary. Affricates also regularly undergo phonological lengthening when next to a geminating suffix, but the affected portion of the affricate is always the stop closure. Thus while phonetic lengthening observes a strict respect for locality, phonological lengthening does not, and we conclude that the two processes are in fact quite different from one another.

DECEMBER 3 - MEGHAN SUMNER, STANFORD UNIVERSITY: EXPERIENCE, TRANSPARENCY, AND UNPRODUCTIVE SURFACE ALTERNATIONS

The specificity of representations has been a focus of much recent work in speech perception and psycholinguistics. The amount of data supporting specific, or concrete, representations has grown considerably over the past ten years. At the same time, there seems to be a reasonable amount of research suggesting that abstract and specific representations are not mutually exclusive of one another, but can coexist. What we know relatively little about is what types of evidence promote different degrees of representational specificity.

In this talk, I examine two sets of weak verbs in Modern Hebrew via priming experiments with two subject populations: young adults aged 18-35 and older adults aged 48-66. I present evidence that the two subject populations treat the verbs differently from one another; older adults maintain a surface generalization and store abstract representations, while young adults fail to do this and store specific representations for these verbs. Once this difference is established, I consider a number of possible explanations for this difference including experience with the verbs and orthography. I suggest that in Modern Hebrew, simple transparent surface alternations are unproductive and stored episodically because younger adults do not have enough information and experience to make a surface generalization.


SCHEDULE OF TALKS: SPRING 2007

JANUARY 22 - ERIN HAYNES: "A SUBCATEGORIZATION APPROACH TO OPACITY AND NON-SUPPLETIVE ALLOMORPHY IN THE CUPEÑO HABILITATIVE MOOD"
ANNE PYCHA: "GEMINATION AS NON-LOCAL LENGTHENING"

Practice talks for the CUNY Phonology Forum Conference on Precedence Relations

JANUARY 29

Discussion of Eric Bakovic's paper, "A revised typology of opaque generalizations" (ROA-850)

FEBRUARY 5 - GABRIELA CABALLERO: "MULTIPLE EXPONENCE OF DERIVATIONAL MORPHOLOGY IN RARAMURI"
CHRISTIAN DICANIO: "WHEN FORTIS GOES 'BALLISTIC': THE CASE OF CONSONANTAL LENGTH IN TRIQUE"

Practice talks for BLS 33.

FEBRUARY 12 - MEGHAN SUMNER, STONY BROOK UNIVERSITY & UC BERKELEY: "THE EFFECT OF EXPERIENCE IN THE PERCEPTION AND REPRESENTATION OF DIALECTS"

Variation in the speech signal abounds. A single speaker can produce a number of acoustically distinct utterances for any given word. Moreover, any word can be produced uniquely by different speakers depending on unpredictable indexical characteristics (e.g., gender, age; Abercrombie, 1967), or more systematic phonetic characteristics (e.g., dialect, native language). In short, spoken words are variable. They are not bounded by spaces. Identical productions of words (even by the same speaker) are rare. The task of recognizing spoken words is notoriously difficult. Once dialectal variation is considered, the difficulty of this task increases. When living in a new dialect region, however, processing difficulties associated with dialectal variation dissipate over time. While the issue of variation has been gaining attention in the field, the majority of attention has been given to indexical variation. The projects that have examined language-specific and phonetic variation have focused either on arbitrary variation (e.g., the processing of service vs. gervice; Connine et al., 1993) or on assimilation (e.g., Gow, 2001). Little attention has been paid to the processing of words with multiple surface instantiations or to the effect of experience in the perception and representation of cross-dialectal variation.

Through a series of priming tasks (form priming, semantic priming, and long-term repetition priming), I examine the general issue of variation in spoken word recognition, while investigating the role of experience in perception and representation. The main questions addressed in this talk are: (1) How are cross-dialect variants recognized and stored? and (2) How are these variants accommodated by listeners with different levels of exposure to the dialect? Three claims are made based on the results: (1) Dialect production is not representative of dialect perception and representation, (2) Experience is linked with a listener's ability to recognize and represent spoken words, and (3) There is a general benefit to having the status of the 'ideal' variant, even if this variant is not the most common one. Results of this research have implications for autonomous models of phonology and raise interesting questions regarding the non-production side of having a dialect.

In addition to the discussion of cross-dialectal variation, I present a new research program examining issues associated with acclimating to non-native speech. This project examines how listeners learn to remap acoustic cues that are consistent with one category (e.g., voiceless) to a new category (e.g., voiced). The proposed project examines issues such as perceptual learning, generalization across words and speakers, and frequency effects in representation. As an applied component, the project also objectively examines the effectiveness of mainstream ESL methodologies by asking whether explicit phonetics training improves ESL student performance. This issue is critical as global communication continues to increase rapidly.

FEBRUARY 19 - NO MEETING (PRESIDENTS' DAY WEEKEND)

FEBRUARY 26 - CARL HABER, LAWRENCE BERKELEY NATIONAL LABORATORY: "IMAGING THE SOUNDS OF THE PAST: NEW OPTICAL METHODS TO RESTORE AUDIO RECORDINGS"

Sound was first recorded and reproduced by Thomas Edison in 1877. Until about 1950, when magnetic tape use became common, most recordings were made on mechanical media such as wax, foil, shellac, lacquer, and plastic. Some of these older recordings contain material of great historical value or interest but are damaged, decaying, or now considered too delicate to play. This talk will begin with a discussion of the history and technical basis of sound recording and the issues faced by archives and libraries as they strive to preserve, and create greater access to, these valuable materials. Recently, a series of techniques based upon optical metrology and image analysis has been applied to restoring historical sound recordings. Preservation studies on discs, cylinders, and dictation belts will be described, along with a project with the Library of Congress to develop an imaging workstation for access to disc media. These topics, and prospects for the future, will be illustrated with sounds and images.

MARCH 5 - DAN EVERETT, ILLINOIS STATE UNIVERSITY: "A REVIEW OF ARAWAN STRESS SYSTEMS"

In this talk I will review the sound systems of the Arawan linguistic family of Southwestern Amazonas, Brazil (Jarawara, Banawa, Jamamadi, Deni, Paumari, Suruwaha, and Kulina), focusing on stress and moraic constituency in Paumari. The data come from my own fieldwork on these endangered languages. In Paumari, feet are quantity-insensitive iambs, built from right to left within the prosodic word. Both of these claims are theoretically important because they violate some proposed universals of foot structure. The paper also discusses more general implications of the Paumari data for theories of foot size and shape, further developing two constraints from Everett (1990) on foot size, Foot Maximality and Foot Minimality, to replace the less fine-tuned constraint Foot Binarity.

MARCH 12 - MIKE GROSVALD, UC DAVIS: "LONG-DISTANCE VOWEL-TO-VOWEL COARTICULATION: PRODUCTION & PERCEPTION STUDY"

The phenomenon of coarticulation is relevant for issues as varied as lexical processing and language change. However, research to date has not determined with certainty how far such effects can occur, nor how perceptible they are to listeners, at either the conscious or unconscious level. This study investigated anticipatory vowel-to-vowel (V-to-V) coarticulation. First, seven native speakers of English recorded sentences containing multiple consecutive schwas followed by [a] or [i]. The resulting acoustic data showed significant anticipatory vowel-to-vowel coarticulatory effects as many as three vowels before the context vowel. The perceptibility of these effects was then tested using behavioral methodology, and some pilot ERP data were also collected. Even the longest-distance effects were perceptible to some listeners. Of particular interest to historical linguistics is whether a correlation exists between ability to perceive coarticulatory effects and tendency to coarticulate. The results here offer limited support for this hypothesis, and are suggestive enough to warrant further study.

Snacks: Charles

MARCH 19 - ALAN YU, U CHICAGO: LEXICAL AND PHONOTACTIC EFFECTS ON WORD LIKENESS JUDGMENTS IN CANTONESE (JOINT WORK WITH JAMES KIRBY)

This paper reports the results of a wordlikeness experiment designed to investigate Cantonese speakers' gradient phonotactic knowledge of systematic versus accidental phonotactic gaps. The results showed that not all Cantonese systematic gaps were judged to be worse than accidental gaps. Certain systematic gaps consistently received higher goodness ratings than others and not all systematic gaps were judged as significantly more or less word-like than accidental gaps. Regression analyses showed that neighborhood density and transitional bigram probability were significant predictors of wordlikeness ratings.
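
As an illustration only of the kind of regression analysis mentioned above, the sketch below fits a simple linear model predicting wordlikeness ratings from neighborhood density and transitional bigram probability; all item values are invented placeholders, and the study's actual model specification is not given here.

```python
# Illustrative only: predictor and rating values are invented placeholders,
# not the study's data, and the model family is an assumption (plain OLS).
import numpy as np
import statsmodels.api as sm

# Hypothetical nonword items: neighborhood density, transitional bigram
# probability, and mean wordlikeness rating (e.g., on a 1-6 scale).
density = np.array([1, 3, 5, 2, 8, 0, 6, 4])
bigram_prob = np.array([0.002, 0.010, 0.015, 0.004, 0.030, 0.001, 0.020, 0.012])
rating = np.array([2.1, 3.0, 3.8, 2.5, 4.6, 1.8, 4.1, 3.4])

# Fit rating ~ density + bigram_prob and inspect coefficient significance.
X = sm.add_constant(np.column_stack([density, bigram_prob]))
model = sm.OLS(rating, X).fit()
print(model.summary())
```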

Snacks: Reiko

MARCH 26 - NO MEETING (SPRING BREAK)

APRIL 2 - REIKO KATAOKA: "FREQUENCY EFFECTS IN CROSS-LINGUISTIC STOP PLACE PERCEPTION: A CASE OF /T/ - /K/ IN JAPANESE AND ENGLISH"

This study addresses the question of whether language-specific speech processing is tied to the lexicon. Many researchers have observed cross-linguistic differences in speech perception (e.g. Miyawaki et al., 1975; Kuhl et al., 1992). One explanation for such differences is that speech perception is mediated by the listener's lexical knowledge (Whalen et al., 1997). Thus, via an exemplar-based lexicon (Johnson & Mullennix, 1997), it is predicted that speakers perceive not only native phonemes differently from non-native phonemes, but also frequent native phonemes differently from infrequent native phonemes.

The type frequencies of voiceless stops in English and Japanese differ: /t/ is the most frequent voiceless stop in the English lexicon, while /k/ is the most frequent voiceless stop in the Japanese lexicon (Yoneyama, 2002). Given this difference, the above hypothesis was tested by studying responses to a /t/-/k/ continuum from American English speakers and Japanese speakers.

The experiment consists of three tasks: 1) a discrimination task based on a four-interval two-alternative forced-choice same/different (4IAX) procedure, 2) an identification task locating the boundary in the /k/-/t/ continuum, and 3) a task rating the goodness of the speech sounds.

The results from 30 American participants and 26 Japanese participants will be presented at the talk. The implications for the theory of speech perception and encoding of linguistic/phonetic information will be discussed.

APRIL 9 - CHARLES CHANG & YAO YAO: "TONE PRODUCTION IN WHISPERED MANDARIN"

Acoustic analyses of voiced and whispered Mandarin Chinese reveal significant differences in duration and intensity among the four lexical tones, differences that are moreover similar across the two phonation types. In contrast to previous claims, however, these differences among the tones are found to shrink in whisper rather than being exaggerated to facilitate perception. Furthermore, individual variation exists in the production of whispered tones, which are found to shorten or lengthen with respect to voiced tones depending on the speaker.

APRIL 16 - YUNI KIM: "VOWEL COPY IN HUAVE"

APRIL 23 - MARC ETTLINGER: "AN EXEMPLAR-BASED APPROACH TO OPACITY"

APRIL 30 - LEV BLUMENFELD, UC SANTA CRUZ: "A NEW LOOK AT LATIN ENCLITICS AND PROSODIC OPTIMIZATION"

The behavior of the Latin enclitics -que 'and', -ve 'or', and -ne 'not' remains one of the unsolved problems in Latin prosody. There are two basic possibilities, and any number of options sharing aspects of both: (1) stress regularly occurred on the syllable preceding the clitic; (2) the host-clitic group was stressed as a single phonological word. In this paper, building on recent work by Probert (2002), I address a previously neglected aspect of the problem: the distribution of the clitics with respect to the prosodic shape of the host word. Using a statistical analysis of the patterns of -que attachment in prose texts, I will argue for a new analysis of host-clitic prosody, and suggest that Latin meter provides independent evidence for it.

MAY 7 - STEPHANIE SHIH: "'SOMETHING'S GOTTA GIVE': RETHINKING LINGUISTIC MODELS OF RHYTHM AND TEXT-SETTING THROUGH EVIDENCE FROM JAZZ BOP SWING"

Jazz has long been considered America's own form of Western music, distinguished from the older European classical tradition by its "swing," a syncopated and uneven rhythm. Given its divergence from other Western musical traditions, the underlying rhythmic properties of swing have not been as widely explored and formalized as those of Western classical tonal music. The same is true for aspects of swing that rely heavily on rhythm, such as text-setting, the process of fitting together linguistic and musical rhythms.

In my talk, I will sort out some of the similarities and differences between the rhythmic structures of jazz and classical music. This will ultimately result in a reanalysis of the current grid theory for describing musical rhythm from Jackendoff and Lerdahl's definitive 1983 text, A Generative Theory of Tonal Music. I will then explore text-setting in jazz based on this redeveloped metrical model, introducing some necessary revisions to the previous analyses of text-setting (by esp. Hayes 2005, Hayes and MacEachern 1997, Hayes and Kaun 1996). Finally, I will attempt to unite the issues and phenomena in text-setting, such as stress matching, phrasal alignment, and stanza formation, through an Optimality-based constraint ranking.