Jan 13, 2017 – Lauren (Perrotti) Halberstadt (Penn State University)
Title: Investigating How Community Norms Affect the Processing of Codeswitched Language
Abstract: Is codeswitching costly? And if so, can the costs be mitigated? These questions guide the present studies to better understand how we produce and comprehend codeswitches. Given links between language usage and language processing, there is a glaring need to examine codeswitches in speakers who regularly engage in codeswitching. This was accomplished by recruiting participants from a single codeswitching community in Albuquerque, New Mexico, and using natural speech samples as the experimental materials. To study production, a corpus of natural spoken codeswitched samples was created; to study comprehension, two experiments were conducted. During this talk, I will discuss the results of these studies, which shed light on the way language variation within the community affects language processing in the individual.
Jan 20, 2017 – Katharina Schumann (Penn State University)
Title: Perceptual Learning Within and Across Languages
Abstract: Mounting evidence suggests that adult listeners readily fine-tune their speech perception in their native language (L1) to accommodate novel accents or atypical pronunciations. The general process of perceptual learning, whereby non-canonical speech leads to a change in the listeners’ phoneme category boundary, has been well established in laboratory studies of native listeners (Norris et al. 2003, McQueen et al. 2006, Kraljic & Samuel 2005, 2006, inter alia). In this talk, I examine how non-native (L2) and bilingual listeners of different proficiency levels handle phonetic variability in their speech input. The results of several studies involving novice and advanced L1 English–L2 German listeners (in the US) and advanced L1 German–L2 English listeners (in Germany) show that perceptual learning can generalize across languages. In particular, phonetic variability in English can affect perception of the L1 and the L2 for both native English listeners and native German listeners. Perceptual learning in the native language further generalizes to related phoneme contrasts within and across languages. Perceptual learning in a non-native language, on the other hand, appears to be specific to the trained phoneme contrast and does not generalize to other phoneme contrasts with shared phonological features. Additionally, cross-linguistic generalization effects are modulated by the listeners’ L2 language proficiency and L2 use. The aforementioned complex pattern of cross-linguistic perceptual learning in L1 English–L2 German and L1 German–L2 English listeners suggests that the grammatical representations of the tested phonemic contrasts in English and German are not independent for bilingual listeners. These findings have interesting implications for models of knowledge of sound and speech processing. Consequences for mental representations and directions for future research will be discussed.
Jan 13, 2017 – Deborah Morton (Penn State University)
Title: Language Development and Changing Language Attitudes Among the Anii People
Abstract: The Anii people of Togo and Benin (West Africa) are a minority language group, surrounded by unrelated languages whose speaker populations are much bigger than those of Anii. For over fifty years, the Anii have also been exposed to conflicting language ideologies from outside their community. These ideologies come from governmental and non-governmental organizations that have both promoted the use of French as the language of education and government, and sought to encourage language development and literacy in the ‘languages of the people’, including Anii. The earliest efforts to expand the use of Anii beyond its traditional use (as an oral means of communication within the Anii community), such as government-sponsored literacy classes starting in the 1970s, were often regarded as being directed towards the poor and uneducated. These types of initiatives were generally not embraced by wealthy and educated Anii, and resulted in many Anii people viewing their language as rural, or even backwards. In contrast, more recent language-development initiatives such as the creation of Anii programs on local and national radio, and a magazine that is published in paper and on-line formats, appear to have given rise to much more positive attitudes towards Anii. In particular, the use of modern technologies has been viewed as making Anii more similar to (and thus more equal in value to) both European languages and (perhaps more importantly) the languages of the larger ethnic groups the Anii people come in contact with. This talk presents data from interviews and language development materials to investigate how language attitudes affect and are affected by the ways in which the language is used within the community.
Feb 3, 2017 – Roberta Golinkoff (University of Delaware)
Title: Carving Events for Language
Abstract: Events are continuous. Our perception of them is not. Remembering the past and predicting the future demand that we parse events into components that will also lay the foundation for language learning. In this talk, we present a series of relatively new studies designed to examine infant attention to and interpretation of event structure. Using Mandler (2012) and Talmy (2000) as our inspiration, we find that infants are sensitive to event components like paths and manners and figures and grounds, among others. Infants also detect statistical relationships within event components that allow them to abstract predictable patterns with relatively little exposure. Finally, our work suggests that infants use both bottom-up and top-down processes to parse continuous events into the categories of experience. We explore several ways in which these new findings on event processing might interface with the acquisition of language.
Feb 10, 2017 – Abby Walker (Virginia Tech)
Title: Listening with an Accent: Long-term Multidialectal Exposure and Speech Perception
Abstract: Some people are exposed to a broader range of dialects than others, perhaps since birth, or throughout their lifetime due to social or regional mobility. Similarly, some people experience being “accented” in a way that other speakers do not. In this talk I will present three studies that highlight the ways in which these differences in experience result in differences in how people listen. In the first study, English and American expatriates and non-migrants transcribed English and American speakers in noise. There is evidence that participants with the most transatlantic experience do better with their non-native dialect than those with less experience, reflecting the well-attested fact that greater familiarity with a dialect results in more accurate transcriptions. However, there is also evidence of some asymmetry between English and American listeners that may be due to the relative prestige of the two dialects.
In the second study, again using a listening in noise transcription task, we found that L1-English listeners who self-report having an accent were more accurate with L2-English than “unaccented” listeners. Error analysis reveals that “accented” listeners were attempting more answers than unaccented listeners. Finally, in a cross-modal lexical decision task we find that listeners who have lived in multiple dialect regions show less facilitation and more inhibition than monodialectal listeners. We interpret this difference as a “keep your options open” strategy by the former group resulting from the dialect-based ambiguity in their linguistic histories. Taken together, these findings highlight the ways in which (socio)linguistic experience may shape speech perception beyond familiarity.
Feb 17, 2017 – Rhonda McClain (Penn State University)
Title: Zooming In On Interaction Between Planning and Articulation Through the Lens of Disruptions
Abstract: A challenge in research on speech production has been determining whether interaction is limited to a single level or extends across several stages of processing. Many studies have demonstrated that semantic and phonological stages of lexical access involve co-activation of multiple representations, influencing the output of selection at adjacent stages of processing. There is also some evidence that effects originating in phonological planning extend to phonetic processing, altering the phonetic properties of speech. However, evidence for interaction extending from lexical access to phonetic processing is inconsistent. I will present a study conducted in collaboration with colleagues at Northwestern that aimed to demonstrate the extent and ways in which activation of non-target forms influences phonetic processing. We exploited the sentence completion task to investigate whether disruptions that produce consequences for lexical access also affect articulation. We tested this hypothesis by varying the degree of cognitive disruption. In Experiment 1, we examined young adult monolinguals. In Experiment 2, young adults completed the paradigm under time pressure. In Experiment 3, we examined a group of older adults, for whom normal cognitive aging increases the demands of lexical access. Our results revealed extended interaction from planning to articulation that was greater in Experiment 2 relative to the baseline of Experiment 1. We observed facilitation in all experiments when picture targets matched the expected sentence completion, but there was no evidence of semantic interference. I will discuss the implications of these results for dynamic accounts of interaction during speech production.
Mar 3, 2017 – Scott Fraundorf (University of Pittsburgh)
Title: What Happened (and What Didn’t): Prosody, Gesture, and Salient Alternatives in Discourse
Abstract: Representing salient alternatives to true propositions may contribute to successful understanding and memory of discourse (e.g., Rooth, 1992). I review recent work from my laboratory investigating how memory for discourse may benefit from linguistic devices that indicate contrasting alternatives. In Experiments 1-2, participants listened to short recorded discourses that contained contrast sets with two items (e.g. “Both the British scientists and French scientists were searching for the endangered monkey”); a continuation specified one item from the set (e.g., “Eventually, the British scientists found the monkey and planted a radio tag on it”). Prosodic pitch accenting on the critical word in the continuation was manipulated between non-contrastive (H* in the ToBI system) and contrastive (L+H*). On a subsequent recognition memory test, the L+H* accent facilitated correct rejections of the contrast item (i.e., “French scientists”) but did not benefit rejections of lure items never mentioned in the original discourse (e.g., “German scientists”), suggesting that participants had encoded something about the specific contrastive alternative. Subsequent work (Experiment 3) replicated these memory benefits in written discourse as a function of font emphasis. Further, increased reading times on emphasized words suggest that encoding contrastive alternatives may be effortful and time-consuming. Consequently, it might be more difficult for comprehenders processing in their second language (L2), and I present recent work (Experiments 4-5) on how and why these cues are processed differently by L2 learners. Finally, I close by discussing ongoing work examining how these prosodic cues may be integrated with gesture in multimodal discourse processing (Experiment 6). On the whole, the results suggest that computing and remembering salient alternatives contributes to successful memory for discourse, but that doing so can be time-consuming and difficult.
Mar 17, 2017 – Laurel Brehm (Penn State University)
Title: Language Anomalies in Comprehension and Production
Abstract: Not all utterances are produced as planned, and not all individuals would consider the same utterances to be well-formed. This variability means that comprehenders are required to extract meaning from utterances that are anomalous to them. In the present work, I demonstrate the interplay between what is produced, what is comprehended, and the speaker-specific cues that listeners use to infer meaning from anomalous utterances. I outline two experiments that show how readers extract meaning from speech errors and dialect variations alike in a fashion that considers speaker-specific properties. I then integrate these data within new models of production and comprehension, showing how examination of errors and speaker-driven variable forms can illuminate the mind’s architecture.
Mar 24, 2017 – Kevin McManus (Penn State University)
Title: Investigating the Benefits of L1 Explicit Instruction in L2 Input Processing
Abstract: Persistent L1 effects throughout L2 learning are repeatedly acknowledged (Izquierdo & Collins, 2008; Tokowicz & Warren, 2010; Roberts & Liszka, 2013), but very little research has investigated the impact of explicit instruction about L1 properties on L2 learning. This paper presents an experimental intervention designed to examine how differences in the type of explicit instruction impact L2 learners’ online processing of the French Imparfait – a feature with complex L1-L2 form-meaning mapping differences and similarities. Four instructional conditions manipulated explicit information (EI) and/or practice:
i) EI about L1 and L2, with practice in L1+L2 (n=17)
ii) EI only about the L2, with practice in L1+L2 (n=19)
iii) EI only about the L2, with practice in L2 (n=17)
iv) Test-only (n=16)
In this talk I build on previous analyses of learners’ performance on offline and online outcome measures and examine learners’ accuracy and speed of input processing during the intervention. Results show more accurate and faster processing over time for learners receiving L1+L2 EI plus practice (group i) than for the other groups (ii and iii). These results are consistent with previous findings about the benefits of L1+L2 EI plus practice for online processing (McManus & Marsden, 2016, in press). Beyond the unique nature of this intervention, I discuss how differences in the nature of EI and practice influence L2 learning (with links to L2 theory about the role of the L1 and awareness, e.g. Ellis 2006, Truscott 2015), including why explicit instruction about both the L2 and the L1 appears particularly beneficial for this target feature.
Mar 31, 2017 – Janna Oetting (Louisiana State University)
Title: Variability within Varieties of English: Profiles of Typicality and Impairment
Abstract: Although advances have been made in the study of childhood language impairment in nonmainstream dialects of English, there remain significant gaps in our knowledge of these dialects and of the manifestations of impairment within them. These gaps in the literature create barriers to the representation of nonmainstream English-speaking children within applied and theoretical research, and this impedes the development of valid and efficient clinical services for children who speak these dialects. Within this talk I will present findings from studies conducted with children who speak different nonmainstream dialects of English, including African American English as spoken in MI, DC, rural LA, and the Gullah/Geechee Corridor of SC, and Southern White English as spoken in rural LA by children with and without a Cajun French heritage. Using data from these studies, I will present some of the ways in which child speakers of various nonmainstream dialects differ from each other and some of the ways in which nonmainstream English-speaking children with language impairment differ from their same dialect-speaking, typically developing peers.
Apr 7, 2017 – Cristobal Lozano (University of Granada)
Title: From Experiments to Corpora and Back: Anaphora Resolution in L2 Spanish
Abstract: In this talk, we will argue for the need to combine experimental data with naturalistic corpus production data to study the same linguistic phenomenon. A case in point is anaphora resolution (AR) in L2 Spanish, i.e., how anaphoric expressions (null pronouns, overt pronouns and NPs) refer to their antecedents in prior discourse. Much experimental work has overrelied on one type of context (the Position of Antecedent Strategy), but naturalistic production data from recent L2 corpus studies reveal that there are additional scenarios and factors that have been overlooked in experimental work (Lozano 2016). Therefore, relevant factors from corpus findings can be implemented and controlled in experiments, both offline in L1 Greek-L2 Spanish (Lozano, forthcoming 2017) and online (a reaction-time experiment in L1 English-L2 Spanish that we are currently designing at PSU).
We will discuss the implications of these findings within a current theory of L2 acquisition / bilingualism (Sorace’s 2011 Interface Hypothesis) and within a ‘cyclic model’ of data triangulation in L2 acquisition research (Mendikoetxea & Lozano submitted): The experimental findings can be the departure point for an exploratory corpus study, which in turn reveals factors that can be manipulated/controlled in an experiment, which in turn may show additional factors to explore in the corpus, and so on.
Apr 14, 2017 – Grant Berry (Penn State University)
Title: Shrinking Down Sound Change: Dual Mechanisms of Cognitive Control and Phonological Adaptation
Abstract: Language users readily adapt to linguistic variation, both in the short term—after exposure to another’s speech—and in the long term—as linguistic structures change in the surrounding community. How is an individual’s interaction with linguistic variation influenced by the general cognitive strategies they use to resolve competition inherent to their surrounding environment? Under a dual mechanisms framework (e.g., Braver et al., 2007; Braver, 2012), cognitive control—the mechanisms engaged to resolve conflict and regulate expectations—is divided into two interrelated strategies: proactive and reactive control. Less habitual engagement of these strategies is hypothesized to facilitate the integration of phonological variation (Berry, 2016; see also Darcy et al., 2016; Lev-Ari and Peperkamp, 2013, 2014; Lev-Ari and Keysar, 2014). In this talk, I discuss two studies examining the relationship between dual mechanisms of cognitive control and phonological adaptation. Study 1 investigates how proactive and reactive control modulate participants’ adaptation to distributional changes in their speech input, using a controlled laboratory setting to simulate sound change (lowering of /ɪ/ to /ɛ/ preceding voiceless coronal codas). In this paradigm, participants alternated between blocks in which they listened to a model talker produce 80 controlled mono- and bi-syllabic words in isolation and blocks in which they spoke those words aloud themselves (cf. Maye et al., 2008; Kittredge and Dell, 2016). Gradually, a sound change was embedded in the exposure blocks, such that the relative frequency of a lowered variant in a pre-specified phonetic context increased by 25% in each block. The degree to which this change was integrated was calculated by measuring participants’ log-mean normalized F1 from the production block for words in the pre-specified environment as a function of block number and the variant heard in the preceding listening block, and these were correlated with composite indices of proactive and reactive control. Results of linear mixed effects modeling (cf. Barr et al., 2013; Bates et al., 2015) indicate that individuals with weaker reactive control integrated the simulated sound change into their own production gradually as the relative proportion of the novel, lowered variant increased in the exposure stimuli. This finding suggests that cognitive strategies engaged to resolve conflict correlate with one’s tendency to resolve distributional changes in one’s linguistic input. Study 2 explores some long-term implications of these findings in community-based research examining individual participation in three socially-stratified sound changes-in-progress in Philadelphia. An example is EY-raising (when ‘wait’ sounds more like ‘wheat’), which is rapidly advancing in that community (cf. Labov et al., 2013). The hypothesis is that—modulo the influence of community-level social valuation of a given variable—findings from Study 1 will also correlate with findings from Study 2, i.e., that those who are most advanced in the target sound changes-in-progress are also those who demonstrate less habitual engagement of cognitive control. Connections between laboratory and field approaches to this question are then discussed, which together motivate the inclusion of cognitive control strategies in models of language variation and change.
Apr 21, 2017 – David Adger (Queen Mary University of London)
Title: Three Sources of Syntactic Variation
Abstract: In this talk I distinguish three sources of syntactic variation and exemplify them through some preliminary findings that have emerged from the SCOSYA project (Scots Syntactic Atlas, funded by the UK Arts and Humanities Research Council). One source of variation in syntax is to be understood as deriving from the way that syntax is spelled out as morphological form. I show this by an investigation of aspects of the morphosyntax of negation across Scottish dialects and argue that certain phenomena that have been treated as head movement are better understood, not as syntactic movement, but as a direct link between syntactic and morphological structures. The second source involves a difference, not in how syntax is spelled out, but in the inventory of syntactic features. I present an analysis of agreement differences between different Scottish dialects that shows that surface variation in this area emerges through the interaction of feature inventory variation and spellout mechanisms. The third source of syntactic variation is that varieties combine different syntactic resources to attain structures which can be uniformly mapped to the interface with semantic interpretation, achieving similar semantics. I illustrate this by looking at variation in the interaction between certain auxiliary and main verbs across Scottish dialects. The sources of variation, then, lie at the interface with morphology, the inventory of syntactic features available in a language, and in how languages combine their syntactic resources to achieve structures which uniformly map to semantic interpretation. We can see how all three sources interact to give rise to a rich pattern of variation across the dialects of Scottish English.
Apr 28, 2017 – Laura Sabourin (University of Ottawa)
Title: The Bilingual Mental Lexicon: An ERP Masked Priming Investigation
Abstract: Research on the mental representation of language and how it is processed by the bilingual brain is important not only for understanding linguistic processes but also for understanding neural organization. It is still under debate whether bilingual and monolingual language processing make use of similar neural networks. One major reason for the uncertainty is the numerous types of bilinguals used in research. For example, studies have tested early bilinguals, late bilinguals, second-language (L2) learners (considered as bilinguals), bilinguals whose languages are typologically related, and bilinguals whose languages are not related. The research presented here will contribute to rectifying this. This study investigates the role of age of L2 immersion (AoI) on the organization of the bilingual mental lexicon. Our behavioural research has demonstrated that both early and simultaneous bilinguals show evidence of an integrated lexicon while late bilinguals and second language learners (functional monolinguals) do not (Sabourin, Brien & Burkholder, 2014). However, later research investigating manner of L2 acquisition (MoA) showed that late learners with a naturalistic MoA did show evidence of an integrated bilingual lexicon (Sabourin, Leclerc, Burkholder & Brien, 2014). To further understand these results we are currently combining Event-Related brain Potentials (ERPs) with a masked priming paradigm to examine early, automatic lexical processing at the semantic level by testing both a within-language semantic priming condition as well as a cross-language translation condition. Four groups of participants were tested: 1) English native speakers with minimal exposure to French (Functional Monolinguals; N=20); 2) English-French bilinguals whose initial immersion in French was from age 7 or later (Late Bilinguals; N=9); 3) English-French bilinguals whose initial immersion in French was before age 7 (Early Bilinguals; N=23); and 4) Simultaneous English-French bilinguals (N=12). Using these ERP data we hope to further support and build upon our claims concerning bilingual lexical organization.
Sept 9, 2016 – Isabel Deibel and John Lipski (Penn State University)
Title: Licensing adpositions in Media Lengua: Quichua or Spanish?
Abstract: This study analyzes the distribution of adpositions in Media Lengua, a mixed language found in the northern Ecuadorian region of Imbabura and composed mainly of Quichua grammar and Spanish vocabulary (Muysken 1981). In terms of their linguistic profile, Quichua and Spanish differ fundamentally: while Spanish is a synthetic language employing head-initial prepositional phrases, Quichua is an agglutinating, postpositional language. Little consensus exists in the literature on the exact nature and realization of adpositions in the world’s languages; some have suggested that they straddle the boundary between lexicon and grammar. Since models of generative grammar describe syntactic structures as projections from the lexicon, the neat separation between lexicon and grammar found in Media Lengua can offer interesting insights into the licensing of adpositions in the context of language contact and into their status as a linguistic category. While an earlier study described Spanish prepositions as alternating with Quichua suffixes or occurring in double constructions (Dikker 2008), the results of the current study underscore the robustness of Quichua morphosyntax and stand in direct contrast to those earlier findings.
A group of participants, trilingual in Quichua, Spanish and Media Lengua, participated in a video description task in Media Lengua and a translation task from Spanish or Quichua into Media Lengua. The study was conducted in the villages of Casco Valenzuela, Angla and Pijal, Imbabura, Ecuador, yielding approximately 20 minutes of recorded speech per participant. Results indicate that most adpositional phrases were headed by Quichua postpositions in fulfillment of Quichua structural requirements – even under priming in Spanish-to-Media Lengua translations. Very few Spanish prepositions were found incorporated into Media Lengua in either task, and those that were mostly occurred in frozen expressions or borrowed collocations. Among the few Spanish tokens found, most were embedded with their respective Quichua counterpart postposition. In line with Muysken’s relexification hypothesis (1981), complex adpositional phrases appeared with the Spanish preposition occupying the spot of a Quichua lexical item and hence seem to carry lexical features. However, no simple Spanish prepositions were found incorporated as postpositions, indicating that they carry grammatical instead of lexical features.
Sept 16, 2016 – Clara Cohen (Penn State University)
Title: Durational clues to number morphology and word structure: What are they, and when do we hear them?
Abstract: In suffixing languages, segmental information about word structure does not arrive until the end of the word, yet from the very beginning of the word, the stems contain subsegmental, durational clues about the presence of following suffixes or additional syllables. A canny listener might therefore do well to draw on those clues during speech perception, in order to make predictions and speed their understanding of what they hear. This talk will discuss ongoing work to determine what sorts of durational cues listeners draw on during online speech perception, and the role of sentence context and language structure in affecting listener sensitivity.
Sept 23, 2016 – Meredith Tamminga (University of Pennsylvania)
Title: Architectural implications of the dynamics of variation
Abstract: In this talk I discuss the proposal from Tamminga, MacKenzie and Embick (forthcoming) of a framework that recognizes three types of factors conditioning linguistic variation: sociostylistic (s-), internal linguistic (i-), and psychophysiological (p-). I elaborate on the point that p-conditioning and i-conditioning are distinct in their mental implementation, and discuss what implications this point has for a) understanding the locality of the factors conditioning alternations, b) the universality and language-specificity of variation, and c) the general question of whether grammar and language use are distinct.
Sept 30, 2016 – Frances Blanchette (Penn State University)
Title: Linguistic variation in English negation: Structure, meaning, and sound
Abstract: Linguistic negation is a fundamental aspect of human language and thought. In English, there exists rich variation in how negative meanings are expressed. For example, a sentence like ‘I didn’t eat nothing’ can mean either that I ate nothing, or that it is not the case that I ate nothing. In this talk I discuss a series of studies that examine micro-variation in the structure, meaning, and sounds of English negative sentences. The results demonstrate how the expression and interpretation of negative sentences and words is shaped by a complex interaction and interdependence between: (i) syntactic structure; (ii) sentence-internal (semantic) meaning; (iii) pragmatic context; (iv) prosody; and (v) social or prescriptive norms. I discuss what these combined results imply for grammatical theories of negation and linguistic variation, as well as for the role of linguistic complexity in sentence processing, and I also discuss how the results inform our understanding of the relationship between grammar and usage.
Oct 14, 2016 – Scott Schwenter (Ohio State University)
Title: Would you just die already? Priming and Obsolescence in Grammar
Abstract: For 30 years, priming—the tendency to re-use a given linguistic structure after its prior, typically recent, use in discourse—has been an important topic in psycholinguistics and language processing. In the last 10+ years it has also (under the name “persistence”) come to be a key element in corpus research. The implications of priming, however, have mainly been drawn outside the linguistic system and have been related to general cognitive principles associated with memory and processing. In this talk, I show that priming/persistence also has important implications for the linguistic system and especially for variation and change. First of all, when two or more grammatical variants are in competition, priming/persistence effects are always stronger on the obsolescing variants. Second, priming/persistence actually keeps obsolescing grammatical forms alive much longer than they would survive otherwise. Third, beyond the well-known syntagmatic effects of priming/persistence in discourse, there are also important paradigmatic results, and priming/persistence can actually lead to partial resuscitation of grammatical forms that are otherwise dying. Data from both Spanish and Portuguese will be used to illustrate these points.
Oct 14, 2016 – Gretchen Sunderman (Florida State University)
Title: The Bilingual Lexicon: Connecting Words in the Mind
Abstract: Learning a second language (L2) necessarily entails building a new lexicon in that language. But how do L2 learners accomplish this feat given that they begin with a well-developed lexicon in their first language (L1)? In this talk, I first describe one of the most well-known developmental models of the lexicon, Kroll and Stewart’s (1994) Revised Hierarchical Model (RHM) and its predictions related to cross-language lexical and conceptual connections, focusing specifically on the notion of concept mediation within the RHM. I then describe two psycholinguistic production studies that have investigated conceptual mediation. The first study uses the Deese-Roediger-McDermott (DRM) false memory paradigm (Deese, 1959; Roediger & McDermott, 1995) to test L2 semantic associative links. The second study investigates the nature of learners’ errors as they named pictures in the L2 under blocked and mixed presentation. The results of the combined studies suggest 1) that conceptual mediation is related to more skilled performance, but comes at a cost to the L1 and 2) that control of spoken production may be affected by proficiency as well as individual differences in the ability to allocate cognitive resources. Finally, applications of the RHM for learning L2 vocabulary are presented.
Oct 21, 2016 – Patricia Schempp (Penn State University)
Title: L2 Learners’ Processing of Grammatical Gender Varies According to Cognate Status and Proficiency: An ERP study
Abstract: Previous studies highlight the difficulty of mastering L2 gender (e.g., Hopp 2013); however, recent ERP evidence suggests that after training, L1 English speakers can exhibit native-like ERPs to gender violations (e.g., Morgan-Short et al., 2012). This study extends Morgan-Short et al.’s (2012) work on artificial language learning to natural language, investigating whether classroom-based L2 German learners are sensitive to gender violations in L2 German, before and after training, and whether advanced L2 German learners are sensitive to these same violations. Additionally, it investigates L2 noun-gender mappings for cognates versus non-cognates, broadening research showing that L2 speakers process cognates faster than non-cognates (see Schwartz & van Hell, 2012). In an ERP task, 23 intermediate L2 German learners and 19 advanced L2 German learners read target questions and then answered each question by choosing the appropriate picture onscreen. The target questions were grammatically correct or incorrect, and contained cognate or noncognate nouns. Next, intermediate participants were trained offline to high accuracy with the nouns and their gender using picture naming. One week later they were tested on their vocabulary and gender accuracy offline, retrained, and then repeated the ERP task. ERP analyses time-locked to the critical noun phrase revealed that before training, intermediate learners exhibited no significant ERP responses to grammatical gender violations. After training, intermediate participants exhibited an N400 effect for gender violations with cognates only. At 500-900 ms there was a frontal positivity for gender violations with cognates, and at 900-1100 ms there was a frontal positivity for gender violations with cognates and non-cognates. Analysis of the picture naming test prior to retraining reveals that cognates were easier to recall, but gender accuracy did not vary with cognate status. These results suggest that for intermediate learners cognate effects extend to the processing of L2 morphosyntactic features, whereby cognates facilitate the online processing and retrieval of nouns and associated grammatical information. In contrast, advanced learners exhibited sensitivity to gender violations for noncognates only, suggesting that there is a fundamental difference in the processing of gender for cognates and noncognates among L2 learners, one that may vary as a function of proficiency level.
Oct 28, 2016 – Federica Bulgarelli (Penn State University)
Title: Double trouble: Statistical learning of multiple structures
Abstract: How do naïve learners come to identify the number of languages they are learning? This question is central to the study of language acquisition, but to date our understanding is far from complete. One means of approaching this problem is through the study of statistical learning, the process by which learners track rudimentary distributional information from their sensory input. For the past two decades, research has established that statistical learning is particularly critical for early language acquisition, allowing learners of all languages to gain a foothold into acquisition from which language-specific properties emerge. Research in our lab aims at broadening the scope of statistical learning tasks to understand how statistical learning might operate when learners are exposed to multiple underlying structures, arguably more closely approximating bilingual language acquisition. During this talk, I will discuss previous and ongoing studies that have focused on how learners across the lifespan and from different linguistic backgrounds detect shifts in patterns of speech streams as well as contend with multiple statistical regularities and rules.
Nov 4, 2016 – Matt Carlson and Alex McAllister (Penn State University)
Title: Phonological repair of initial /s/-consonant sequences in speech perception and production
Abstract: Spanish phonotactics prohibits word-initial /s/-consonant (#sC) clusters, repairing them as needed by prepending an initial /e/, e.g. in loanwords such as esnob ‘snob’. This process is easily described in any of the available phonological frameworks, but in an age where much of grammar is turning out to be gradient to some degree, it is not yet clear what contributes to such apparently stable and consistent patterns as this. One possible source of stability comes from speech perception: recent evidence suggests that [e] is so likely to precede an sC sequence that Spanish speakers tend to hear it even when it is not there (and that they do not tend to hear other vowels in this context) (Cuetos, Hallé, Domínguez, & Segui, 2011; Hallé, Dominguez, Cuetos, & Segui, 2008; cf. Dupoux, Kakehi, Hirose, Pallier, & Mehler, 1999 on Japanese). We show using nonword discrimination and lexical decision data that this is indeed the case, but that under certain circumstances, listeners can respond as if other vowels are present. Our results support an abstract process whereby [e] is inserted before sC sequences in Spanish, but they also suggest that phonetic details in the signal can shape this process, details that may be related to reduction processes in speech production (e.g. Davidson, 2006; Munson, 2001; Van Son & Pols, 2003). We therefore pursue two hypotheses in a speech production task: first, that word-initial [e] preceding sC is in fact so predictable that speakers leave it out (such that Spanish speakers may produce and hear examples of “illicit” #sC clusters in natural Spanish speech), and second, that the reduction of articulatory gestures may nonetheless preserve sufficient acoustic detail to support identification of certain vowels in this position.
Nov 7, 2016 – John McWhorter (Columbia University)
Title: The missing Spanish creoles are still missing: revisiting afrogenesis and its implications for a coherent theory of creole genesis
Abstract: Theories that plantation creoles were all born as pidgins at West African coast slave castles, including that proposed in McWhorter (2000), have not fared well among creolists, amidst a preference for supposing that creoles are born, or not, according to factors local to a given context. In this paper I review some of the responses to McWhorter (2000) and spell out why, especially in light of research since, the “Afrogenesis” paradigm is still worth serious consideration. A key fact is the following. Many creolists argue that a creole did not appear when there was extensive black-white contact and many slaves were locally-born, a scenario most often associated with the Spanish Caribbean and Reunion and now proposed for South American colonies by Sessarego (2014) and Díaz-Campos & Clements (2008). However, conditions were of just this kind in early St. Kitts and Barbados, where most scholars now locate the birth of English-based and French-based plantation creoles. The disparity in outcomes between these locations means that after fifty years, there is no coherent theory of how or why creoles come to be. I argue that only Afrogenesis shows the way out of this conundrum.
Dec 2, 2016 – Laurel Brehm (Penn State University)
Title: Distinguishing Discrete and Gradient Category Structure in Language
Abstract: Work in cognitive psychology underscores the probabilistic (gradient) nature of mental classes, but traditional linguistic analysis rests upon the discrete separation of classes. I present work that uses memory errors to examine the mental representation of verb-particle constructions (VPCs, e.g., ‘make up’ the story, ‘cut up’ the meat). VPCs are diverse in terms of their semantic and syntactic properties; an outstanding question is how this variability connects with the class structure in the mental representation of VPCs. To experimentally examine this question, I present a novel paradigm that elicits illusory conjunctions of sentence elements – memory errors that are sensitive to linguistic structure. Applying piecewise regression to these error data demonstrates that illusory conjunctions of verbs and particles follow a graded cline rather than discrete classes, supporting the presence of gradience in the mind’s representation of linguistic elements.
Dec 9, 2016 – Chaleece Sandberg (Penn State University)
Title: Development of a Theoretically-Based Culturally-Relevant Therapy for Anomia in Bilingual Aphasia
Abstract: Training abstract word retrieval improves generative naming for trained items, and promotes generalization to generative naming of concrete words in the same context category. However, this therapy has not yet been adapted for bilingual persons with aphasia (PWA). This study aimed first to develop culturally and linguistically relevant stimuli for the extension of this therapy to bilingual PWA, and second to conduct this therapy with a test case. Pertinent differences regarding cultural relevance and linguistic content of several context-categories across three languages will be discussed as will the results of the test case.
Jan 15, 2016 – Deborah C. Morton (Penn State University)
Title: An Overview of the Structure of Gisida Anii
Abstract: The Niger-Congo language family is the largest in the world in terms of number of languages. Many of those languages, however, particularly non-Bantu ones, are not well-known to linguists. The Anii language is a member of the Kwa sub-family of Niger-Congo, and is spoken by approximately 50,000 people in Togo and Benin, West Africa. This talk will provide an overview of the structure of the Gisida dialect of Anii with an emphasis on the ways that Anii is typologically different from many better-known languages (in particular, Indo-European ones).
Jan 22, 2016 – Melinda Fricke (Penn State University)
Title: Production and perception of codeswitching: Leveraging linguistic variation to study processing
Abstract: The linguistic form of codeswitched speech represents the end of a long chain of psycholinguistic planning processes that went into producing it. Consequently, the study of codeswitched speech can yield insight into the psycholinguistic factors that modulate cross-language activation during bilingual speech planning. Further, to the extent that cross-language activation gives rise to distributional regularities in the surface form of speech, laboratory experiments can exploit these regularities to shed light on the learning processes that allow (or don’t allow) listeners to develop sensitivity to informative cues during language comprehension. In this talk, I describe a set of studies that follow this logic, first asking how cross-language activation during spontaneous speech planning affects the surface (phonetic) form of codeswitched bilingual speech, then investigating the extent to which listeners with different language backgrounds can perceive and make use of the relevant linguistic variation. I will discuss the implications for models of bilingual language processing, and will also consider the ways in which the results are relevant for psycholinguistics and linguistics more generally.
February 5, 2016 – Aaron Rubin and Lily Kahn (Penn State University, University College London)
Title: Jewish Languages, Past and Present
Abstract: This presentation is devoted to the rich array of languages other than Hebrew that have been written and spoken by Jewish communities throughout history. Jewish languages are genealogically very diverse, with representatives from the Germanic, Romance, Slavic, Hellenic, Indo-Aryan, Semitic, Dravidian, Caucasian, and Berber language families. They include ancient languages such as Judeo-Aramaic and Judeo-Greek, medieval varieties such as Judeo-French and Judeo-Portuguese, and newly emerging ones such as Jewish Amharic, Jewish English, and Jewish Swedish. Some Jewish languages (such as Judeo-Arabic, Judeo-Persian, Ladino, and Yiddish) have substantial written traditions in the Hebrew script, while others (such as Judeo-Malayalam and Jewish Berber) are or were primarily spoken varieties. While the degree of difference between a Jewish language and its non-Jewish equivalent can vary considerably, they typically have a Hebrew and Aramaic lexical component, and most of them exhibit certain phonological, morphological, and syntactic differences from their non-Jewish sister languages. The presentation will provide historical and sociolinguistic introductions to these fascinating language varieties and will survey some of their most characteristic features.
February 12, 2016 – Megan Zirnstein (Penn State University)
Title: Language Experience and Executive Function: What bilinguals bring to the table when reading in the L2
Abstract: A current topic in research on bilingualism is whether and in what ways being bilingual has repercussions for cognition and brain plasticity in older adulthood. This work often focuses on bilingual language production and compares this to performance on non-linguistic measures of executive function. In contrast, we know very little about how bilinguals, young adults especially, recruit executive function to support language comprehension, and in what ways these moment-to-moment processes may potentially result in changes to cognition across the lifespan. In this talk, I will discuss a series of studies that take advantage of aspects of reading comprehension that draw upon executive function skill, namely prediction and integration, in order to tease apart how executive function ability and multilingual experience can impact language processing itself.
February 19, 2016 – Rachel Wu (UC Riverside)
Title: A new framework for lifespan cognitive development: Implications for language learning
Abstract: This talk will present a novel theoretical framework (CALLA – Cognitive Agility across the Lifespan via Learning and Attention) that merges research from cognitive development and cognitive aging (two largely distinct research areas). The purpose of this framework is to better understand the role of cognitive and environmental factors in the etiology and course of healthy cognitive aging. In particular, cognitive development sacrifices short-term efficiency in favor of long-term adaptation to novel situations. By contrast, cognitive aging allows for specialization in familiar environments, perhaps leading to premature decline in cognitive abilities in novel and eventually familiar situations. By examining cognitive and environmental factors (in addition to genetically-encoded factors) across the lifespan, we can identify potential “triggers” and “brakes” for the cognitive development and aging processes (cf. Werker & Hensch, 2015). These “triggers” and “brakes” are essential in theories on critical and sensitive periods, which have implications for learning potential across the lifespan. CALLA promotes known cognitive development factors (e.g., open-minded learning, immersion, and scaffolding) to improve future cognitive training regimes for aging adults and provides a unifying approach for understanding the mechanisms underlying cognitive training effects in older adults. The goal of this research is to determine the optimal methods for inducing long-term cognitive development to delay the onset of cognitive decline in aging adults. I will discuss the implications of this theoretical framework in relation to language learning across the lifespan.
February 26, 2016 – Holly Koegler (Penn State University)
Title: Mystery Action: What motor control can tell us about language and language disorders
Abstract: Language and action are related across the lifespan. Much of the work investigating this relationship has focused on adults and typically developing children, and suggests that there are shared processes interacting and supporting development and performance in both domains. In many language disorders, there is evidence that action is affected in some way as well. Specific Language Impairment (SLI) is one such disorder, where significant language difficulties co-occur with poor motor performance. However, the literature on SLI is only beginning to look beyond linguistic-based accounts to explore the nature of these motor impairments and the processes supporting both language and action. In this talk, I will discuss ways of studying these types of language impairments, the relationships between action and language in atypical populations, and how studying these processes together may help us better understand the nature of both language development and language disorders.
March 4, 2016 – Aaron Albin (Penn State University)
Title: A theoretical and methodological framework for the analysis of intonation produced by second language learners
Abstract: The mechanisms by which one acquires the intonation system of a second language remain very poorly understood. While the major models in Second Language Phonology (such as the Speech Learning Model or Perceptual Assimilation Model) have been widely extended to lexical contrasts involving pitch, until relatively recently, surprisingly little attention had been given to sentence-level intonation (as used, for example, to communicate information structure or discourse meanings). In particular, to date there is still no fully worked-out account of how cross-linguistic transfer manifests itself in L2 intonation. Our limited understanding in this domain is also due in no small part to the fact that it is far from trivial to unpack a pitch contour into its underlying phonological category structure, even in native speech. Thus, the problems hindering progress on this front are both theoretical and methodological in nature.
This talk sketches out a framework for tackling these two problems. On the theoretical end, based on a review of several hundred empirical studies on L2 intonation published between 1950 and 2013, twelve ways that the intonation system of the L1 can influence speech production in the L2 are identified. These are then assembled into a typology of L2 intonation transfer, expanding a previous typology by Mennen (2015). On the methodological end, a framework is presented whereby an L2 learner’s intonation contour is ‘stylized’ into a quantitative representation reflecting the shape of the contour. Such stylizations can then be ‘queried’ in phonologically-informed ways to probe a phenomenon of interest for a particular research question. As an illustration, this approach is applied to corpus data on boundary rises in yes-no questions produced by L1 Japanese learners of L2 English. Taken together, this framework not only lays out an intricate web of empirical predictions but also provides a means by which to test them, thus serving as a foundation for future research on this aspect of bilingual speech production.
March 18, 2016 – David Reitter (Penn State University)
Title: Syntactic Priming: Why it exists, and how it helps dialogue
Abstract: In this talk, I will discuss corpus-based, “big-data” methods to study a psycholinguistic process in naturalistic dialogue: syntactic priming. The data from corpora such as Penn TreeBank and Map Task motivate a cognitive model of priming in language production. This model, in ACT-R, explains syntactic choice as a declarative memory retrieval (Reitter, Keller, & Moore, 2011).
Syntactic priming (Bock, 1986) is of interest as it reveals syntactic processing, and also because it has been claimed to form the basis of interactive alignment (Pickering & Garrod, 2004). The theory posits that speakers mutually adapt their linguistic choices to one another, reaching a more efficient common language.
I will discuss some key questions surrounding interactive alignment: whether priming is a social signal rather than just a mechanistic effect, and whether divergence effects found by Healey, Purver, & Howes (2014) are truly an argument against Interactive Alignment. The ACT-R model explains these effects, and new analyses of the large-scale Reddit dataset support these viewpoints empirically.
March 25, 2016 – Avery Rizio (Penn State University)
Title: Age differences in language production: The neural correlates of semantic interference, phonological facilitation, and target picture frequency
Abstract: Research indicates that picture naming is facilitated when targets are presented with phonologically related words, but slowed by semantically related distractors. Older adults often show declines in phonological aspects of language production, particularly for low frequency words, but maintain strong semantic systems. Here we used fMRI and behavioral measures to investigate age differences as a function of distractor type and target frequency (N=20 younger, 20 older adults). Older adults recruited more activation in left occipital fusiform gyrus and inferior and middle temporal gyri during picture naming with semantically-related distractors compared to phonologically-related distractors. Activation in the occipital fusiform gyrus was significantly greater for older compared to younger adults. Older adults also recruited more activation in left superior parietal lobe during naming with semantic compared to unrelated distractors, though this activation pattern was not different from that of younger adults. Age differences emerged when comparing phonological to categorical distractors, as younger adults showed greater activation than older adults in left postcentral and right precentral gyri. With respect to the effect of target frequency, older adults showed greater negative correlations than younger adults. Specifically, older adults showed increased activation in right precentral and left supramarginal gyri during naming of low frequency items when paired with phonological distractors. These results indicate that the presence of phonological distractors facilitated picture naming in older adults for low frequency targets. The presence of a phonological distractor may increase activation in regions that support motor planning, potentially aiding articulation for words that are most difficult to produce.
April 1, 2016 – Nate George (Penn State University)
Title: Language as a window into event representations across the lifespan
Abstract: Verbs and prepositions are fundamental components of language, conveying dynamic and static relations between objects in events (e.g., “The boy kicked the ball over the fence”). Yet, these “hard words” (Gleitman, Cassidy, Papafragou, Nappa, & Trueswell, 2005) are notoriously difficult to learn for both first and second language learners (George, Göksun, Hirsh-Pasek & Golinkoff, 2014). While we might describe a child playing in a park as a series of distinct events, such as running and climbing, this flurry of nonstop activity contains no natural pauses that distinguish one event from the next. Thus, a signature challenge of verb learning in particular is fitting the discrete categories of language onto events that are inherently continuous and dynamic (Hespos, Grossman, & Saylor, 2010). My research adopts a developmental approach to explore how infants, children, and adults tune their attention towards components of events, such as manners and paths of motion, that are relevant to parsing ongoing activity for language. In this talk, I begin by focusing on the problem of hierarchies in event structure. This research highlights the role of language in helping infants, children, and adults assemble simple actions (e.g., rinsing a plate) into meaningful events on a broader scale (e.g., washing dishes). I then extend my research to consider issues of adult second language learning by looking at how languages differ in their parsing of events, and how these differences yield unique challenges for acquiring a new language. This research employs training studies to examine the malleability of biases regarding how verbs and prepositions relate to events, and how the process of detecting these patterns in a new language may differ across monolingual and bilingual speakers.
April 8, 2016 – Ariana Mikulski (Penn State University)
Title: The writing behaviors of heritage and foreign-language learners of Spanish
Abstract: For many Spanish foreign-language (SFL) courses in the United States, it is becoming the norm to find two combined populations: 1) traditional SFL learners and 2) Spanish heritage-language (SHL) learners, who have attained some level of proficiency in Spanish via home and/or community exposure (e.g., Valdés, 2001). These learners often are grouped together despite their different experiences with written Spanish. This presentation describes SHL and SFL learners’ writing behaviors in English and Spanish, including time allocation for planning, execution, and monitoring; revision; accuracy; and fluency. We compared writing behaviors across languages in each learner group (Elola and Mikulski, 2013; Elola and Mikulski, in press; Mikulski and Elola, 2011) and across learner groups (Elola and Mikulski, in press). Twelve SHL learners and six SFL learners in a third-year Spanish class responded to prompts in Spanish and English while screen-capture software recorded their behaviors. SHL learners spent significantly more time planning between sentences in their Spanish responses, but demonstrated more fluency and accuracy when writing in English. SFL learners wrote less fluently, performed more surface revisions, and demonstrated less accuracy when writing in Spanish than in English, but spent more time monitoring their writing in English. Compared to their SHL counterparts, SFL learners wrote less fluently and accurately and devoted less time to Spanish inter-sentential planning and English monitoring. The SFL learners performed more surface revisions in Spanish and fewer meaning revisions in English and Spanish than the SHL learners. Although some writing behaviors appear to transfer across languages, instructors of mixed SHL-SFL courses also should take into account each learner group’s needs.
April 15, 2016 – Courtney Johnson Fowler (Penn State University)
Title: Exploring cross-language grammatical gender interaction in German-Italian bilinguals
Abstract: Psycholinguistic research has shown that even when bilinguals are processing in only one language, both of their languages remain activated (e.g., Costa et al., 2000). This co-activation leads to cross-language interaction and in cases where both of a bilingual’s languages contain grammatical gender, the two gender systems have been shown to interact (e.g., Paolieri et al., 2010). Our understanding of this so-called ‘gender-congruency effect’ is mainly limited to the influence of the L1 on the L2 (but see Morales et al., 2011) and to late L2 speakers living in either the L2 (e.g., Bordag & Pechmann, 2007) or the L1 (e.g., Salamoura & Williams, 2007) environment. The current study seeks to expand our understanding of how and when gender systems interact in bilinguals by comparing two groups of L1 German-L2 Italian speakers from South Tyrol, one living in bilingual South Tyrol and the other living in German-speaking Austria. Both groups completed a series of picture naming tasks in both their L1 German and L2 Italian so that interaction can be measured bidirectionally. In Experiment 1 bilinguals named images in isolation, whereas in Experiment 2 images were embedded in sentences to see whether the gender-congruency effect is modulated by sentence context as is the case with the cognate effect (e.g., Schwartz & Kroll, 2006; Starreveld et al., 2013). Results show that the gender systems of these South Tyrolean bilinguals interact only in L2 naming, both when naming in isolation and in sentence context, and that this interaction is present regardless of current language environment.
April 22, 2016 – Grant Berry (Penn State University)
Title: The long and short of it: How short-term alignment and cognitive processing may influence sound change
Abstract: Human beings are adept at processing variation in speech, and a wealth of research attests to individuals’ ability to quickly adapt perception to their input. Another, immediate consequence of exposure to variation may be modifications in the listener’s subsequent production (alignment/accommodation). Over the course of a conversation, interlocutors may align in their production of fine phonetic detail, including speech rate, pitch, spectral properties of vocalic production, and voice-onset time. However, interlocutors using the same language may also differ at the level of their phonological inventories (e.g., pen and pinare homophonous for Kansas City natives like me, but not for most Northeasterners), which affects both perception and production. Accommodation in production at the phonological level remains understudied, but may be essential to understanding how subtle, short-term variation in production is related to language change on a larger scale.
In this talk, I discuss two studies investigating phonological production in discourse. The first, resulting from collaboration with Mirjam Ernestus at Radboud University, investigates phonetic alignment in English as a lingua franca among Spanish-English participants and Dutch-English confederates. We examine dynamic changes to the production of two key phonological contrasts (/i/-/ɪ/ and /ɛ/-/æ/) in English, finding that Spanish participants align to the English of their Dutch interlocutors, which involves a merged /ɛ/-/æ/ category but a distinction of /i/ and /ɪ/, rather than more native-like English (which would require a four-way distinction). These results imply that phonological category production dynamically updates in response to one’s input, even after a single conversation. The second is a pilot study addressing how individual differences in processing variation may correlate to differences in the adoption of variable phonological rules over time. I collected personal narratives from Spanish-English bilinguals who are longstanding residents of Philadelphia, focusing on their production of three context-restricted sound changes-in-progress in that community (Ey Raising, where bait and beat become near homophonous; Canadian Raising, where the vowel in price raises and becomes distinct from the vowel in prize; and AE-tensing, where the vowel in ham raises and tenses and becomes distinct from the vowel in had) with distinct social valuations (non-salient, slightly salient, and socially salient, respectively). I then correlate adoption of these changes-in-progress with individual difference measures (proactive control, reactive control, and the Autism Spectrum Quotient). Notably, the effect of individual difference measures depends on the social value of the variable analyzed. Cognitive processing measures better describe changes-in-progress with low social awareness (Ey Raising, Canadian Raising) than they do salient changes-in-progress (AE-Tensing). This suggests that while the way an individual processes variation in his/her input may have implications for his/her adoption of changes present in the environment, social valuation can suppress these effects. I conclude this talk by outlining a working hypothesis regarding the importance of cognitive control and phonetic alignment in the actuation of sound change.
April 29, 2016 – Jennifer Roche (Kent State University)
Title: Miscommunication: A useful component of successful communication
Abstract: In an ideal world, interlocutors should be explicit and only provide necessary and sufficient information to a conversation partner (Grice, 1975). However, we do not live in an ideal world, and much of communication is riddled with unsuccessful attempts. These unsuccessful attempts need not be deemed detrimental aspects of the communication system, but rather an integral part of how we communicate. In fact, these moments of communication breakdown have the potential to promote adaptation and adjustment during the dynamic exchange of information in interactive communication (Roche, Paxton, Ibarra, & Tanenhaus, under review). In what follows, I will present two studies that focus on 1) how a listener handles ambiguity that might promote communication breakdown and 2) how a speaker’s intention to feign one’s true intentions affects a listener’s ability to represent a speaker’s message. I will show that ambiguity is only sometimes problematic, and the locus of the ambiguity may prompt alignment of effort between speakers and listeners (Craycraft, Kriegel, & Roche, accepted). I will also show that interlocutors advantageously withhold extralinguistic information during communication, which has varying outcomes on listeners’ comprehension of world knowledge (Roche, Fissel, & Duchi, under review). The results from these studies are meant to show that miscommunication, as situated in context, shapes how a listener interprets a speaker’s message.
Aug 28, 2015 – Deborah Burke (Pomona College)
Title: Mechanisms of Cognitive Aging: Implications for Effects of Bilingualism
Abstract: Aging during adulthood is characterized by both preserved and declining cognitive performance, creating a challenge for explanatory models. Older adults’ language performance, for example, is relatively stable for comprehension processes, whereas language production is marked by increasing retrieval failures for well-known words, i.e., tip-of-the-tongue states. This has been explained within a connectionist model of language wherein aging and frequency of use affect connection strength, with phonological representations the most vulnerable to transmission deficits. Bilinguals report a similar pattern, with more word retrieval failures than monolinguals, consistent with phonological transmission deficits caused by reduced word production in either language. A second aging mechanism, proposed to explain negative aging effects on memory and attention, is diminished executive control processes, especially inhibitory processes. Bilingualism, however, has a beneficial effect on executive processes, and within this framework it should produce a greater bilingual advantage for older than for young adults on executive tasks, a result that has been observed. However, older adults’ general slowing, new learning deficits, and sensory declines affect performance attributed to executive processes, especially inhibitory processes. We discuss, for example, why older adults show less inattentional blindness than young adults. This research clarifies the need for theoretical development of the processes involved in executive control of older adults and bilinguals.
Sep 11, 2015 – Chaleece Sandberg (Penn State University)
Title: Imageability, generalization, and neuroplasticity in aphasia rehabilitation
Abstract: Abstract and concrete words are interesting subdivisions of the semantic system to study in both healthy and language-disordered populations. This talk will present research showing: a) that the relationship between abstract and concrete words can be manipulated in the treatment of word-finding deficits in aphasia to promote generalization to untrained items, and b) that abstract and concrete words can also be used to systematically examine changes in brain activation and functional connectivity related to direct training and generalization effects of treatment in aphasia.
Sep 18, 2015 – Phil Baldi (Penn State University)
Title: Good Words Gone Bad: Semantic Change in the History of English
Oct 02, 2015 – Caitlin Ting (Penn State University)
Title: The effects of prior experience on cognitive control during syntactic processing in music
Abstract: In both music and language, information must be presented and processed in serial order to be properly integrated. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003, 2008), resources to process the syntax of music and language are shared, and these resources have limited capacity. In this talk, I will discuss a study in which we examined whether cognitive control is one of these shared processes and whether prior experience with syntactic representations modulates how cognitive control is involved during syntactic processing in music. In particular, we examined the effects of musical training and bilingualism. In addition, we examined whether and how prior experience with auditory processing in the form of tonal language use modulates syntactic processing in music. I will also discuss how the Adele Miccio Memorial Travel Award and NSF PIRE Fellowship allowed me to conduct this study.
Oct 09, 2015 – Jessi Aaron (University of Florida)
Title: Greater than any one of us, yet nothing without us: On the role of perception and everyday life in language
Abstract: The apparent dual nature of language, ephemeral and deeply social on the one hand, and highly structured and long-lived on the other, suggests an interesting paradox. How are our everyday interactions, perceptions, prejudices, and affiliations integrated into our language—a structure in flux that is both greater than any one of us and nothing without us? Quantitative analysis of language use can provide empirical evidence regarding the role of the powerful yet elusive social and cognitive forces that shape the way we use language, as well as how our language changes over time. These often have to do with perception. First, our language may reflect our perceptions of social groups. Second, our usage patterns may demonstrate our perception of what is sociolinguistically appropriate in our local communities, including the use of regional features and code-mixing. Third, broad patterns of language change may reflect how we perceive—and create—orderliness in our language through mechanisms such as analogy. Simply put, humans, as cultural and social minds, are builders of dynamic systems, including language. After all, life itself, and everything in it, is in constant motion. As the French Modernists pointed out, we can only perceive the world through our personal experiences within it. With this hodge-podge of disparate experiences, we construct a concrete reality, much like the linguistic structure we codify and study—both transient and eternal.
Oct 16, 2015 – Jorge Valdés Kroff (University of Florida)
Title: Learning to expect the unexpected: How bilinguals integrate code-switched speech
Abstract: Bilinguals in the presence of other known bilinguals engage in code-switching, generally defined as the fluid alternation between languages within a conversation (e.g., Poplack, 1980). Socio- and theoretical linguists have proposed a series of factors for when and why (i.e. social) and where in a sentence (i.e. structural) code-switching occurs. More recently, psycholinguists have also shifted their attention to code-switching, primarily because its production and especially its comprehension present a unique cognitive paradox. Experimental evidence on externally cued language switching and non-linguistic task switching points towards obligatory switch costs that can be reduced but not eliminated (e.g. Meuter & Allport, 1999; Moreno et al., 2002; Monsell, 2003). Additionally, the hallmark of sentence processing is that we are incremental processors (e.g. Altmann & Kamide, 1999), i.e. comprehenders incrementally build interpretations as they integrate incoming speech. Logically, staying in one language alone should most efficiently benefit comprehension. Yet code-switching is ubiquitous and does not appear to impede successful comprehension. One plausible hypothesis relies upon exposure-based accounts to suggest that bilingual code-switchers learn via cues when to better anticipate an impending code-switch. Using eye-tracking and fMRI, I will provide evidence for a cue-based approach to code-switching, primarily through the asymmetric use of grammatical gender in Spanish-English code-switching. In turn, the results from these studies suggest that code-switching is a highly skilled linguistic ability that bilinguals must learn in order to successfully integrate code-switched speech.
Oct 16, 2015 – Anna María Escobar (University of Illinois at Urbana-Champaign)
Title: CLoTILdE Project: Defining Semantic Influence in Quechua-Spanish Contact
Abstract: Long-term and intense Quechua-Spanish language contact has given rise to non-lexical contact phenomena in the Andean region. In the Peruvian case, due mainly to population movements that have taken place in the region since the early 20th century, Andean contact features are also found in non-Andean regions, such as on the coast and in the capital (Lima). Innovative morpho-syntactic features in the region take the form of new patterns of use (e.g., accusative clitic doubling, Zdrojewski & Sánchez 2014; possessive su, Escobar 2014) and of innovative functions (e.g., evidential Present Perfect, Escobar 1997, Jara 2013; inalienable possessive su, Escobar 2014), although the processes that explain the trajectories of the contact influence are not clear.
The CLoTILdE Project brings together researchers from the U.S. and Peru in the pursuit of an innovative historical and sociolinguistic study that analyzes almost 50 years (from 1968 to 2015) of real-time oral Peruvian Spanish data with the goal of determining the trajectories that define ‘Andean contact influence’ (or ‘semantic influence’) in Peruvian varieties of Spanish.
In this presentation, I offer examples of how properties of inalienability and evidentiality from Quechua underlie innovative functions found in Peruvian varieties of Spanish that are consistent with cross-linguistic tendencies. The presentation calls for rethinking methodologies for the study of language contact with ethnocultural languages, such as Amerindian languages.
Oct 23, 2015 – Sarah Grey (Penn State University)
Title: Comprehension of foreign-accented speech: evidence from ERPs and neural oscillations
Abstract: Worldwide, there are more multilingual than monolingual speakers and, by extension, more accented than non-accented speakers of English and many other world languages. Language is used and processed in context-rich social situations that are often layered with pragmatic content, but we know surprisingly little about how this contextual-pragmatic content affects the neurocognition of language. Here, I focus on an important yet under-studied area of research in the neurocognition of language: the effects of foreign-accented speaker identity as a pragmatic cue that influences language comprehension, and the impact of individual variation in listener experience, perception, and attitudes on comprehension. In this talk, I will present recent findings from an experiment that tested neuropragmatic sensitivity in monolingual listeners who were recruited to have limited experience with foreign-accented speakers. Listeners heard sentences that were well-formed or had an error in grammar or semantics; sentences were spoken by either a native-accented or a foreign-accented speaker. I will discuss our behavioral results for sentence comprehension, accent perception, and attitudes as well as two brain-based measures of sentence processing: ERPs and time-frequency analysis of neural oscillations.
Oct 30, 2015 – Mike Putnam and Lara Schwarz (Penn State University)
Title: Co-activation in bilingual grammars: A Gradient Symbolic Computation account of code mixing
Abstract: In this presentation we outline a novel approach to code-mixing that combines aspects of formal linguistic theory and the wealth of psycholinguistic evidence suggesting that bilinguals simultaneously co-activate elements from both source languages during production (e.g., see Kroll & Gollan 2014, for a review). We propose to integrate these two traditions within the formalism of Gradient Symbolic Computation (Smolensky et al. 2014; Goldrick et al. forthcoming). This approach allows us to formalize the integration of grammatical principles with gradient mental representations. We apply this framework to code-mixing structures: in particular, portmanteaus, where an element is doubled, appearing in both languages within a single utterance. While one might classify such portmanteaus as production errors, these utterances are evidence that the second source grammar has not completely been inhibited once the matrix language has been selected. Through GSC we discuss the conditions under which these doubled structures can be realized.
Nov 06, 2015 – Carrie Jackson (Penn State University)
Title: Harnessing prosodic cues to improve the learning of L2 grammatical structures
Abstract: In this talk, I will present results from three recent studies that investigate how the inclusion of prosodic cues in the input learners receive results in sustained learning and more efficient online processing of two different grammatical structures that are notoriously difficult for American L2 learners of German. In so doing, I will demonstrate how foreign language instructors can harness intrinsic prosodic features of a language to improve the learning and retention of L2 grammatical features. At the same time, I will argue that the results from these three studies have important implications for our understanding of the underlying cognitive mechanisms that drive L2 processing and learning.
Dec 03, 2015 – Gigi Luk (Harvard Graduate School of Education)
Title: Bilingualism as a transdisciplinary field: What are the next questions?
Abstract: In this talk, I plan to argue that the debate on the “bilingual advantage” is oversimplified, which risks dividing researchers into taking sides on a binary question. The consequence is that no meaningful insights will result from this debate. Instead, harnessing cognitive neuroscience findings on brain differences associated with bilingual experience, we can ask relevant questions that will inform our understanding of bilingual development and learning. First, I will address two problems with fixating on the debate over the existence of a “bilingual advantage”, primarily the divergence of tasks and sample characteristics. Second, I will provide a framework of three research directions, namely measurement, relevance, and continuity, that go beyond the monolingual-bilingual comparison. In the third section, I will connect these three research directions to cognitive neuroscience, posing developmental questions from what we know about bilingualism and aging. I will end the talk with a quote from Peal & Lambert’s 1962 paper suggesting that looking at advantage/disadvantage does not advance our understanding of bilingualism and the developing mind. Considering bilingualism as a transdisciplinary field and asking questions focusing on differences will unify researchers in psychology, psycholinguistics, neuroscience, and education.
Dec 11, 2015 – Holger Hopp (Universität Mannheim)
Title: Bilingual lexical activation in L2 sentence processing
Abstract: In this talk, I will explore different aspects of how lexical representations and lexical processing affect the time course and the outcome of native and non-native sentence processing. Research on the bilingual mental lexicon shows that bilinguals have interconnected lexical representations (non-selective access) and that bilinguals encode “Weaker Links” between word forms and lemmatic and conceptual information in their L2 than in their L1. In three experiments, on the processing of gender agreement in L2 German and on the parsing of reduced relative clauses and cleft sentences in L2 English, I explore the consequences of bilingual lexical co-activation and delays in lexical retrieval for L2 sentence processing. I will argue that many differences between native and non-native sentence comprehension that have hitherto been claimed to reflect morphosyntactic deficits in adult L2 sentence processing follow from the structure of the bilingual mental lexicon.
January 16, 2015 – Marc Authier (Penn State University) – Meeting begins at 8:30 am
Title: Intransitive prepositions with transitive meaning: Lexicon or syntax?
Abstract: I will argue, contra Cervoni (1991), Olivier (2007) and others, that French does have prepositions without an overt complement that are syntactically transitive. That is, these prepositions take a syntactically projected, phonologically unrealized pronominal complement, as first hypothesized by Zribi-Hertz (1984), who coined them “orphan prepositions”. I will offer new evidence in favor of this view based on a number of tests that distinguish between various types of implicit arguments, among them (in)definiteness, the (un)availability of bound variable readings, scopal properties relative to operators such as negation and intensional verbs, and the (un)availability of sloppy identity readings. I will also show that the set of French orphan prepositions turns out to be much smaller than that envisaged by Zribi-Hertz, in that some lexical prepositions can function as orphan prepositions in some uses but not others, and some lexical prepositions have an intransitive use as well. Finally, time permitting, I will lay out a number of reasons why orphan prepositions should not be assumed to be prepositions that have become adverbs through a process of grammaticalization, nor should they be assumed to be relational definite descriptions that pragmatically link up to an antecedent via accommodation, as suggested by Olivier (2007).
January 23, 2015 – Megan Zirnstein (Penn State University) – Foster Auditorium
January 30, 2015 – Michael Frank (Stanford University) – Foster Auditorium
Title: Predicting pragmatic reasoning
Abstract: A short, ambiguous message can convey a lot of information to a listener who is willing to make inferences based on assumptions about the speaker and the context of the message. Pragmatic inferences are critical in facilitating efficient human communication, and have been characterized informally using tools like Grice’s conversational maxims. They may also be extremely useful for language learning. In this talk, I’ll propose a probabilistic framework for referential communication in context. This framework shows good fit to adults’ and children’s judgements. In addition, it makes interesting novel predictions about both language acquisition and processing, some of which we have already begun to test.
February 6, 2015 – Lindsay Kay Butler-Trump (Penn State University)
February 13, 2015 – Pablo Requena (Penn State University)
February 20, 2015 – Gerrit Jan Kootstra (Penn State University)
February 27, 2015 – Cynthia Lukyanenko (Penn State University)
March 6, 2015 – Nate George (Penn State University)
March 13, 2015 – No Meeting – Spring Break
March 20, 2015 – Lisa Davidson (NYU)
March 27, 2015 – Michele Diaz (Penn State University)
April 3, 2015 – Elizabeth Stine-Morrow (University of Illinois, Urbana-Champaign)
April 10, 2015 – Lisa Reed (Penn State University)
April 17, 2015 – Charles Yang (UPenn)
April 24, 2015 – Armin Schwegler (UC Irvine)
May 1, 2015 – Zofia Wodniecka (Uniwersytet Jagielloński)
All meetings will be held on Fridays, 9-10:30am, in Moore Building, room 127, unless otherwise noted.
August 29 – Courtney Johnson-Fowler (Penn State, German and Linguistics)
Title: Miccio Travel Award
Abstract: First presented in 2010, the Adele Miccio Memorial Travel Award was established as an opportunity for CLS graduate students to begin building a professional relationship with a senior scientist, through a lab visit or meetings at a conference. In this talk, Courtney Johnson Fowler will discuss her experiences as a Miccio Travel Award winner. In March 2014 Courtney travelled to Spain to visit Dr. Daniela Paolieri and the Language and Memory Group in the Department of Experimental Psychology at the University of Granada. As part of her talk she will outline her dissertation research, explain why she chose to visit Dr. Paolieri, and detail how the travel award benefitted her as a researcher.
September 5 – Rhonda McClain (Penn State, Psychology)
Title: Using ERPs to investigate the scope and time course of inhibition in bilingual speech.
Abstract: Bilingualism has been hypothesized to place heightened demands on speech production. Speaking a second language (L2) is often effortful and thought to reflect the competition arising from the more dominant first language (L1). To date, very little research has directly examined the consequences of bilingualism for speech planning in the L1, although recent fMRI studies suggest that the neural substrates of speech production in the L1 may differ for bilinguals and monolinguals. One hypothesis about the source of L1 differences for bilinguals and monolinguals comes from ERP studies that have examined the time course of speech planning. These studies suggest that the L1 may be suppressed when bilinguals speak the L1 after speaking the L2. Critically, the time course over which suppression is observed is long, suggesting the presence of a global mechanism of inhibitory control. If bilinguals repeatedly inhibit the L1 to enable production in the L2, there may be consequences that then account for the observed fMRI differences. In this talk, I report a set of experiments that use ERPs to investigate the differences in the earliest stages of speech planning in the L1. These experiments ask three questions about speech planning: 1. Do L2 speakers and monolinguals differ in planning L1?; 2. Do L2 learners who are less proficient in the L2 and highly dominant in the L1 reveal the same differences that have been observed for proficient bilinguals?; and 3. When L2 speakers are required to produce L2 and then revert to L1, how extended are the consequences of L2 on L1 speech?
September 12 – Federica Bulgarelli (Penn State, Psychology)
Title: Tracking multiple structures: an investigation of the primacy effect
Abstract: A fundamental challenge of statistical learning is to determine whether variance observed in the input signals a change in the underlying structure. Interestingly, when learners encounter two consecutive inputs, they only learn the first structure unless exposure to the second is tripled or a contextual cue correlates with the change (Gebhart, Aslin, & Newport, 2009). In two experiments, we explored the conditions under which both structures can be acquired. We found that learners who switch to the second input immediately after mastering the first are more likely to learn both, whereas those who continue to receive input in the first structure are more likely to remain entrenched, exhibiting the primacy effect. Further, the ability to learn both structures correlates with performance on a Flanker task, suggesting that the first input may need to be inhibited to acquire the second structure. We relate our findings to real world learning and bilingualism.
September 19 – Noriko Hoshino (Kobe City University of Foreign Studies)
Title: Time course of language selection in bilingual language production
Abstract: When bilinguals plan to speak in a given language, both of their languages are activated. However, cognitive control mechanisms effectively allow the target language to be selected during speech planning. In this talk, I address the questions of when the target language is selected and what factors influence the locus of language selection using behavioral and electrophysiological measures. A series of behavioral experiments with same- and different-script bilinguals demonstrates that script differences allow the bilingual to select the language of production at an earlier point in speech planning when they are perceptually available. In the next set of experiments, we have been examining the time course of cross-language activation and language selection with event-related potentials (ERPs). A critical result of the ERP experiments is that both languages are activated to the phonological level at least for same-script bilinguals. These findings will be discussed in terms of models of bilingual word production.
September 26 – Patricia Román (Penn State, Psychology)
Title: What does sentence comprehension tell us about bilingualism?: Evidence from fMRI and ERPs
October 3 – Rosa Guzzardo Tamargo (University of Puerto Rico) & John M. Lipski (Penn State)
October 10 – Nick Henry (Penn State, German and Linguistics)
Title: Morphosyntactic Processing, Cue Interaction, and the Effects of Instruction: An Investigation of Processing Instruction and the Acquisition of Case Markings in L2 German
October 17 – Carrie Jackson (Penn State, German and Linguistics)
October 24 – Dick Aslin (University of Rochester)
Title: Distributional language learning: Mechanisms of category formation
Abstract: In the past 15 years, a substantial body of evidence has confirmed that a powerful distributional learning mechanism is present in infants, children, adults and (at least to some degree) in non-human animals as well. I will briefly review this literature and then discuss some of the fundamental questions that must be addressed for any distributional learning mechanism to operate effectively within the linguistic domain. In particular, how does a naive learner determine the number of categories that are present in a corpus of linguistic input, and what distributional cues enable the learner to assign individual lexical items to those categories? Contrary to the hypothesis that distributional learning and category (or rule) learning are separate mechanisms, I will argue that these two seemingly different processes — acquiring specific structure from linguistic input and generalizing beyond that input to novel exemplars — actually represent a single mechanism. Evidence in support of this single-mechanism hypothesis comes from a series of artificial grammar-learning studies that not only demonstrate that adults can learn grammatical categories from distributional information alone, but that the specific patterning of distributional information among attested utterances in the learning corpus enables adults to generalize to novel utterances or to restrict generalization when unattested utterances are consistently absent from the learning corpus. Finally, I will discuss some recent findings on the neural correlates of statistical learning and the prospects that such fMRI and fNIRS data will clarify the mechanisms of language learning.
October 31 – Marianna Nadeu (Penn State, Spanish Linguistics)
November 7 – PIRE undergraduate presentations – Foster Auditorium
November 14 – No CLS – Hispanic Linguistics Symposium
November 21 – No CLS – Psychonomics
November 28 – No CLS – Thanksgiving break
December 5 – Gerry Altmann (UConn)
Title: Representing objects across time: language-mediated event representation.
Abstract: Language is often used to describe the changes that occur around us – changes in either state (“I cracked the glass…”) or location (“I moved the glass onto the table…”). To fully comprehend such events requires that we represent the ‘before’ and ‘after’ states of the object. But how do we represent these mutually exclusive states of a single object at the same time? I shall summarise a series of studies, primarily from fMRI, which show that we do represent such alternative states, and that these alternative states compete with one another in much the same way as alternative interpretations of an ambiguous word might compete. These studies also show that whereas the representations of distinct but similar objects (e.g. a glass and a cup) interfere with one another in proportion to their similarity, representations of the distinct states of the same object interfere in proportion to their dissimilarity. This interference, or competition, manifests in a part of the brain that has been implicated in resolving competition. Furthermore, activity in this area is predicted by the dissimilarity, elsewhere in the brain, between sensorimotor instantiations of the described object’s distinct states. I shall end with new data (still too hot to touch) whose interpretation is a first step towards a brain mechanism for distinguishing between object types, tokens, and token-states.
[Prior knowledge of the brain is neither presumed, required, nor advantageous].
December 12 – Antonella Sorace (University of Edinburgh)
January 17 – Richard Page & Mike Putnam (Penn State, Department of German and Slavic Languages and Literatures) : The grammaticalization of the griege-passive in Pennsylvania German
In this study, we investigate the degree of grammaticalization of the griege-passive in Pennsylvania German as spoken by Anabaptists in Ohio. Parallel to the development of recipient passives in Continental German and English, Pennsylvania German is developing a passive construction using the verb griege ‘to get, receive’ as an auxiliary (Burridge 2006: 186-187). Examples taken from Burridge (2006: 186) are given in (1) and (2).
(1) Ich hab e Buch gewwe griegt.
I have a book given got
‘I got given a book’
(2) Mir griege gesaagt.
we get told
‘We get told’
As illustrated in (1) and (2), the degree of grammaticalization varies. In (1), the sentence licenses the object Buch, which is assigned the role of patient, as would be expected with the lexical variant of griege and as illustrated by the sentence Ich hab e Buch griegt ‘I got a book’ (cf. German Ich habe ein Buch geschenkt bekommen “I got given a book” versus Ich habe ein Buch bekommen “I got a book”). In contrast, (2) shows a greater degree of grammaticalization due to the absence of an overt object. Similarly, the lexical verb griege assigns the role of recipient to the subject, but this is not always the case for subjects in griege-passive constructions, as illustrated in (3) (example from Burridge 2006: 186):
(3) Er hat sei Lewwe genumme griegt.
he has his life taken got
‘He got his life taken/He was killed.’
Once again, a parallel structure can be found in German in sentences like Sie haben den Ball weggenommen gekriegt ‘They got the ball taken away (from them)’ (Burridge 2006: 186). The forms in (2) and (3) can be taken as examples of semantic bleaching (desemanticization) and extension (i.e., the rise of new grammatical meanings via context-induced reinterpretation) that are hallmarks of grammaticalization (see discussion in Heine and Kuteva 2005). We note that in English it is possible for the subject of get-passive sentences to actually be assigned the role of patient whereas this is not grammatical in standard German, as shown in (4):
(4) English: The ball got taken away.
German: *Der Ball hat weggenommen gekriegt.
The ball has away-taken gotten
‘The ball got taken away’
The grammaticality of sentences such as (4) in Pennsylvania German has not been previously investigated, to the best of our knowledge. In this study, we will systematically examine the grammaticalization of griege-passives in Pennsylvania German along a cline illustrated by the sentences in (1)–(4). References: Burridge, Kate. 2006. Language contact and convergence in Pennsylvania German. Grammars in contact, ed. by A. Y. Aikhenvald and R. M. W. Dixon, 179-200. Oxford and New York: Oxford UP. Heine, Bernd and Tania Kuteva. 2005. Language contact and grammatical change. Cambridge: Cambridge UP.
January 24 – Giuli Dussias (Penn State, Department of Italian, Spanish and Portuguese) : Effects of the second language on syntactic processing in the first language
Past research has shown that adult native speakers of Spanish immersed in an English-speaking environment adopt processing routines of their second language (L2) when processing their first language (L1). Here we ask whether changes in processing routines can be triggered by overexposing bilingual speakers to particular structures so that bilinguals who have undergone changes when processing in their L1 ‘move back.’ We hypothesized that if the parser’s configuration is related to language exposure (e.g., MacDonald & Seidenberg, 2006), bilinguals’ parsing preferences are expected to change as a function of the frequency with which the relevant structure appears in an experimental session. We investigated this in the context of temporarily ambiguous relative clauses as in Arrestaron a la hermana del hombre que estaba enferma (Someone arrested the sister[FEM] of the man who was ill[FEM]). Here, the relative clause que estaba enferma (who was ill) can attach to the higher noun (hermana/sister) or the lower noun (hombre/man) in the complex noun phrase. Because the adjective enferma (ill) is marked with feminine grammatical gender, the correct interpretation is one where the relative clause attaches to hermana (also marked with feminine gender). Spanish-English bilingual speakers living in an English-speaking (US bilinguals) or a Spanish-speaking (Spain bilinguals) environment were recruited.
The study involved three phases. In phase 1, we carried out an eye-tracking study to determine the participants’ initial attachment preferences in their L1 (Spanish for both groups of speakers). In Phase 2, they participated in an ‘intervention’ study that exposed them to a biased sample of 120 relative clause constructions in their second language (English) over a 5-day period. Participants who showed initial attachment preferences in Spanish that favored low attachment received a biased sample of high attachment sentences; those whose initial attachment preferences favored high attachment received a low attachment treatment. In phase 3, eye-movement records were again collected to determine participants’ attachment preferences after the intervention study. The Spain bilinguals were tested two days after the intervention study had ended. The US bilinguals were, in addition, tested a week later to assess non-immediate effects of exposure. At pre-test, all participants were also administered the AX-CPT task (to measure sustained attention) as well as the Raven’s matrices and a standardized English language test (to match the US and Spain bilinguals).
Preliminary results reveal the following: (1) Changes in attachment preference occurred in both groups of speakers but were modulated by performance in the AX-CPT task; (2) “High attachers” showed evidence of low attachment preferences at post-test 1 and at post-test 2 despite the fact that the input in the intervention study was in their L2, and even when they were immersed in an entirely Spanish-speaking environment. This provides evidence for a high level of permeability between the bilinguals’ two linguistic systems; and (3) “Low attachers” showed evidence of a high attachment preference only at post-test 2, suggesting that learning to attach high requires consolidation of information. The findings will be discussed in terms of the role of statistical learning in sentence comprehension processes (e.g., Gennari & MacDonald, 2009; Wells et al., 2009) and the implications of the results for theories of cognitive control.
January 31 – Kaitlyn Litcofsky (Penn State, Psychology) : A behavioral and neurocognitive study of sentential codeswitching in Spanish-English bilinguals
A characteristic of bilingual speech is the occurrence of sentences that contain words of both languages. These codeswitches provide a window into how a bilingual’s two languages interact in conversation. The current project examines bilinguals’ online processing during comprehension of sentences that contain codeswitches. Previous psycholinguistic and neurocognitive research has shown that switching between isolated items incurs an asymmetrical processing cost where it is harder to switch into the dominant language than into the weaker language. This cost is thought to be related to the inhibition of the dominant language. In contrast, relatively little is known about the processing of codeswitches in a meaningful sentence context. In three experiments, sentential codeswitching was examined using a self-paced reading task and event-related potentials (ERPs) in two groups of Spanish-English bilinguals: those immersed in L2 English who codeswitch frequently in their daily life, and those immersed in L1 Spanish who do not codeswitch. Stimuli were 160 sentences that began in Spanish or English and either contained a codeswitch into the other language or not. Sentences that contained codeswitches were read more slowly than those without switches, but only when the sentences switched from the dominant to the weak language, not from the weak to the dominant language. Additionally, ERPs for both habitual and non-habitual codeswitchers showed a late positivity in response to codeswitched words as compared to non-codeswitched words, but again only when switching from the dominant to the weaker language. No effect of codeswitching occurred when switching into the dominant language. These results are in contrast to the language switching studies, and indicate that codeswitching in a meaningful sentential context requires different processing mechanisms. Given the appearance of the late positivity and consistency across habitual and non-habitual codeswitchers, it appears that the processing of sentential codeswitching may rely on fundamental sentence-level integration mechanisms related to activation of the weaker language.
February 7 – Dr. Krista Byers-Heinlein (Concordia University, Psychology) : Baby you’re bilingual: Acquiring two languages in infancy
Infants growing up in bilingual environments must build a language system that accommodates two languages. An important task for these infants is to discriminate and differentiate their languages. While it is easy to imagine an early bilingual environment that neatly packages the two languages in a way that facilitates this task (e.g. one-parent-one-language), most bilingual infants do not encounter their languages in this way. This talk will present evidence that bilingual infants typically hear their two languages in bilingual contexts: spoken by the same person, in the same situation, and/or within the same sentence. Experimental work is beginning to reveal how bilingual infants cope with the bilingual nature of their input, including language discrimination and the processing of code switched speech. These findings will be discussed in the context of the PRIMIR framework of infant speech perception and word learning.
February 14 – Jason Gullifer (Penn State, Psychology) : Identifying the role of language-specific syntax in bilingual word recognition: A two-pronged approach.
Bilinguals activate both of their languages in parallel despite the intention to read or speak in a single language. Yet bilinguals are apparently able to use one of their two languages without overt interference from the unintended language. A fundamental question in recent psycholinguistic research has been to determine the mechanisms that allow for successful language selection. One potential linguistic mechanism is the language cue. Languages differ in many respects (e.g., orthography, phonology, and syntax) and environments differ in which language(s) may be in use. These differences could theoretically allow bilinguals to access one language selectively. This talk will explore the role of syntactic differences between Spanish and English in negotiating parallel activation. The approach exploits evidence from two methodologies to examine the role of structure during word recognition. First, word naming during sentence reading will be used to measure the degree of parallel activation within various syntactic structures (specifically: active, passive, and dative structures). If certain structures function as a language cue, evidence for parallel activation should be reduced or absent for words read within those structures. Second, cross-language syntactic priming will be used to confirm whether the chosen syntactic structures are in fact represented in a language-independent manner.
February 21 – Matthew Carlson (Penn State, Department of Italian, Spanish and Portuguese) : Did I just hear what I thought I heard? Effects of language-specific phonotactic constraints in bilinguals’ speech perception
Listeners utilize the structural properties of their language to help them interpret speech, leading to misperception of non-native sounds and sound sequences. For example, Spanish contains no words with initial sCV sequences, a phonotactic restriction well-known to influence cross-language borrowings. Words like snob are “repaired” by prepending an initial /e/ (esnob). Interestingly, similar repairs are apparent in on-line speech perception tasks. Native Spanish speakers confronted with a spoken token of snob report hearing esnob, even though the stimulus lacks the initial vowel, whereas speakers of languages that allow sCV sequences (e.g. French) do not (Cuetos et al., 2011; Hallé et al., 2008; to appear).
How does bilingual perception relate to these monolingual extremes? Second language users can develop native-like perception of sCV sequences—even when this conflicts with their first language (Parlato-Oliveira et al., 2010). However, the perceptual consequences of possessing conflicting phonotactic constraints are unclear. Does acquiring a language that allows sCV structures eliminate misperceptions, even in the more phonotactically restrictive language? If not, how does bilinguals’ speech perception shift in response to changes in linguistic context? We investigated these possibilities in bilinguals fluent in both Spanish and English via a vowel detection task (Cuetos et al., 2011) and an AX discrimination task using a pretest-posttest design. The pretests were conducted in a monolingual Spanish experimental environment, after which half the participants performed an English picture-naming task before repeating the Spanish vowel task. The performance of this group was compared to a group that performed the picture-naming task in Spanish. A monolingual English control group was included for comparison. The bilinguals’ pretest performance reflected the influence of Spanish phonotactics, but this varied with language dominance in ways consistent with influence from English phonotactics. Intriguingly, changes in performance at posttest depended on the language of the intervening task, again modulated by language dominance. This suggests that bilinguals can deploy distinct phonotactic constraints dynamically depending on shifting linguistic context.
February 28 – Pablo Requena (Penn State, Department of Italian, Spanish and Portuguese) & Ji Sook Park (Penn State, Communication Sciences and Disorders) : Accessing meaning of L2 Words in beginning and advanced learners: An electrophysiological and behavioral investigation
March 7 – No CLS Meeting
March 14 – No CLS Meeting
March 21 – Alison Eisel Hendricks (Penn State, Department of Germanic & Slavic Languages and Literatures)
March 28 – Colleen Balukas (Penn State, Department of Italian, Spanish and Portuguese)
April 4 – Miguel Rámos (Penn State, Department of Italian, Spanish and Portuguese)
April 11 – No CLS Meeting
April 18 – Fengyang Ma (Penn State) : Accessing meaning of L2 Words in beginning and advanced learners: An electrophysiological and behavioral investigation
According to the Revised Hierarchical Model (Kroll & Stewart, 1994), second language (L2) learners initially access meaning of L2 words via the L1 whereas advanced learners access meaning directly. We tested this hypothesis with English learners of Spanish in a translation recognition task, in which participants were asked to judge whether English words were the correct translations of Spanish words. In each case, we gathered data on behavior and on the earliest time course of processing using ERPs. The critical conditions compared the ability of learners to reject distractors that were related to the translation in form or meaning when a long (750 ms) or short (300 ms) SOA separated the two words. For advanced learners, there were effects for semantic and form distractors in both measures at the long SOA, but at the short SOA, there were behavioral effects but only an N400 effect in the ERP record for semantic distractors. These results replicate Guo et al. (2012), suggesting that relatively proficient L2 speakers access the meaning of L2 words directly. For beginning learners, at the long SOA, there were semantic and form effects in both measures. At the short SOA, behavioral data were sensitive to distractor type, but the ERPs only revealed larger N400 and smaller LPC for translation distractors and no effect for semantic distractors. Overall, these data suggest that at early stages of L2 learning there is reliance on the L1 translation equivalent. Once proficient, they are able to retrieve the meanings of the words directly.
April 25 – Álvaro Villegas (Penn State, Department of Italian, Spanish and Portuguese)
May 2 – María C. Martín (Penn State, Psychology) : Different mechanisms of inhibitory control in bilingual lexical production
Past research shows that lexical access is non-selective with respect to language, allowing cross-language interactions to occur in both comprehension and production (Dijkstra, 2005; Guo, Liu, Misra, & Kroll, 2011). A key question in bilingual research has been to understand the control mechanisms that allow bilinguals to select the language they intend to use. Language comprehension and production potentially differ in the way in which bilinguals achieve control of their two languages, e.g., in the time course of inhibition.
In past research, we have shown that cross-language inhibition in comprehension seems to be relatively short-lived (Martín, Macizo, & Bajo, 2010). In contrast, studies of lexical production have shown that inhibition of the language not in use can be long lasting (e.g., Misra, Guo, Bobb, & Kroll, 2012), suggesting that there are multiple mechanisms of control.
The present study explored the nature of the control mechanisms that underlie language selection in bilingual production and specifically whether there is evidence for both automatic and controlled selection processes. Relatively proficient Chinese-English bilinguals performed a picture naming task in language blocked or language mixed conditions (Guo et al., 2011). In one condition, they were instructed to name the picture, in another they also had to perform a concurrent updating task.
Results showed that the concurrent task affected performance differentially in the blocked and mixed conditions. Under mixed conditions that included the demanding updating task, bilinguals who were strongly L1 dominant were as slow to speak the L1 as the L2. The updating task did not eliminate the inhibition of L1 under mixed conditions. In contrast, introducing the updating task in the blocked conditions appeared to eliminate the inhibitory effect of L1 when it followed L2. Findings will be discussed in the context of models of bilingual control.
August 30 – No CLS Meeting
September 6 – Melinda Fricke (UC Berkeley, Department of Linguistics) : Language in Time: Phonetic Duration and the Sequential Nature of Phonological Encoding
Many studies have shown that word duration is reliably correlated with contextual probability in conversational speech (Jurafsky et al., 2001; Bell et al., 2003, 2009; Aylett & Turk, 2004, among others). Most explanations of this correlation focus on the lexical level of representation; the predictability of a word in a given context is typically hypothesized to affect its phonetic realization, either through listener-driven or speaker-driven processes. Listener-driven accounts focus on the amount of phonetic detail needed to recognize a word in its context, while speaker-driven accounts posit that the speed of lexical access has consequences for a word’s phonetic realization.
It seems intuitive that faster lexical access would be related to shorter word duration, but one aspect missing from existing speaker-driven accounts is an explicit mechanism linking higher contextual predictability to shorter articulatory duration. In this talk, I present three studies that help to lay the groundwork for such a mechanism. Results from a word-learning study with children indicate that difficulty in phonological encoding is reliably associated with longer articulatory duration, and results from studies of adult single-word production and conversational, connected speech are consistent with a model of language production that incorporates both lexical-phonological feedback and sequential encoding of segments (e.g. Sevald & Dell, 1994). I argue that the speed of phonological encoding, combined with the fact that encoding proceeds from left to right, can account for the present results and may provide the link between contextual predictability and articulatory duration.
September 13 – Ashley Roccamo (Penn State, Department of German) : Is Earlier Really Better? Comparing the Effectiveness of Pronunciation Training for Beginner and Intermediate Learners
Pronunciation in a second language (L2) is notoriously difficult to acquire. Even advanced L2 speakers often cannot acquire accurate pronunciation on their own (Grosser, 1997; Jilka, 1999; Munro & Derwing, 2008; Trofimovich & Baker, 2006). Despite its consequences for communication, however, L2 learners and teachers alike frequently ignore pronunciation during the stages of L2 acquisition. Yet pronunciation is a skill that can improve with focused training, as has been reported in a number of studies (Derwing, Munro & Wiebe, 1998; Elliott, 1997; Flege, 1989; Hardison, 2004; Saito & Lyster, 2011). Most of these training programs are introduced in more advanced stages of L2 proficiency (e.g., Counselman, 2010; Elliott, 1995, 1997; Lord, 2008), although a number of researchers have recently been calling for pronunciation training to begin as early as possible (e.g., Counselman, 2010; Derwing & Munro, 2013; Elliott, 1995, 1997; Eskenazi, 1999; Hardison, 2004; Neufeld & Schneiderman, 1980). Thus the question remains, at which stage in L2 acquisition a focus on pronunciation should begin.
The current study examines this issue further and tests whether an eight-week pronunciation training unit designed as a supplement to German language classrooms is more effective when implemented in the first or fourth semester. Training for the two experimental groups was divided into four two-week modules, each training a specific area of German pronunciation for just 10 minutes a day. Target areas were chosen both for their difficulty for L2 learners and for their importance for communicating meaning, and included lexical stress, palatal and velar fricatives ([ç] and [x]), fricative and vocalized /r/, and the monophthongization of [e] and [o]. Identical pre- and post-tests were administered for all groups before and immediately after training. Randomly selected, matched samples from each participant’s pre- and posttests were rated by native-speaker listeners for accentedness and comprehensibility. Pre- and posttest ratings from the experimental and control groups are compared in order to assess the effectiveness of pronunciation training in early and intermediate semesters of German L2 learning. The results of these analyses will ascertain whether pronunciation training truly is more effective for beginner learners in the earliest stages of L2 acquisition.
September 20 – Merel Keijzer (University of Utrecht) : Cognitive and Language Control in Aging Dutch-English Late Bilinguals and Dutch Bilectals
Following the seminal work by Ellen Bialystok, the last decade has seen a host of studies on cognitive advantages as a result of early bilingualism in which early bilinguals are consistently reported as having enhanced cognitive control, notably inhibitory control (Fiszer, 2008). More recent work has sought to extend this effect to late bilinguals (cf. Fiszer, 2008; Tao et al., 2011; Luk et al., 2011), but with mixed results. This may be partly due to the large variability in exposure time to the L2, and, related to that, the lack of L1 and L2 proficiency measures. At the same time, the variable results are also – partially – ascribed to a difference in typological proximity of a bilingual’s languages under investigation. In that vein, questions have been raised concerning bilectals and whether the cognitive control advantage in these speakers is likely to be greater (due to increased demands on cognitive control with two closely related languages that have to be juggled) or smaller (due perhaps to the dialect not being perceived as a separate language cognitively).
The very few studies that have been done on the language and cognitive control of bilectals (cf. Kirk, 2012) have not been able to shed light on this matter. The aim of this study is to add to the growing research tradition of examining language and cognitive control in late bilinguals as well as bilectals. In order to do that, four groups of speakers were examined: 1) Late bilinguals (L1 Dutch; L2 English) who were at least 15 when they started acquiring their L2. At the time of testing they had been immersed in an Anglophone environment for 42.6 years on average (n=69). 2) Bilectals (standard Dutch – Nedersaksisch, an eastern Dutch dialect spoken close to the German border) (n=20). 3) Monolingual Dutch speakers who were recruited to serve as controls (n=60). 4) Since it has been pointed out that no true Dutch monolinguals exist, with all Netherlands-based subjects having at least a basic command of foreign languages, an additional group of monolingual English speakers in Australia was recruited (n=60).
All participants were divided into three age categories: a baseline group of middle-aged subjects (40-50), a ‘youngest old’ category (60-70) and an ‘oldest old’ group (71+). One exception was the bilectal group, which was solely made up of oldest old participants, i.e. aged 71 and older. Furthermore, participants in all groups were asked to fill in a (language) background and lifestyle questionnaire, and were subjected to various executive functioning (EF) and working memory (WM) measures in addition to standard processing speed and fluid IQ tests. Crucially, they completed various language proficiency tests in both their L1 and L2, ranging from overall proficiency measures to a grammaticality judgment task and receptive as well as productive vocabulary tests.
The results indicate a very large degree of variability in both cognitive and bilingual language control, in line with previous findings (Bedard et al., 2002; Borella et al., 2008). In addition, language and cognitive measures were correlated, but this did not apply to all tests included in the battery. Predictor variables of success in both cognitive and language control were L1 and L2 proficiency and educational level. Finally, on many occasions, the oldest old participants outperformed their younger old peers. Comparing the bilinguals and the bilectals, the two groups were found to be very distinct: the bilinguals were found to have a cognitive advantage (on some but not all the tasks of the test battery), whereas the bilectals performed very similarly to the monolingual Dutch speakers, with no cognitive advantage presenting itself. This in itself leads to interesting questions regarding the cognitive status of a dialect. The results of a recently initiated continuous analysis, which abandons the idea of groups and instead includes all data in one larger regression model, will also be presented.
September 27 – Adele Goldberg (Princeton, Psychology) : Explain me something: how we learn what not to say
Although many constraints are motivated by general semantic or syntactic facts, in certain cases, formulations are semantically sensible and syntactically well-formed, and yet noticeably dispreferred (e.g., ??She explained me something; ??the afraid boy). Results from several experiments are reviewed that suggest that competition in context (statistical preemption) plays a key role in learning what not to say in these cases. I will also suggest a domain-general mechanism that may well underlie this process, and offer a speculative proposal as to why L2 learners may have more trouble avoiding these dispreferred utterances.
October 4 – MaryEllen MacDonald (UW-Madison, Psychology) : We Reap What We Sow: The Cascading Effects of Language Production on the Nature of Language and its Comprehension
Two critical questions in language research concern why languages have the form that they have, which typically is a concern of language typologists and other linguists, and why language comprehension works the way that it does, a major concern of psycholinguists. I will argue that we can find some important answers to both of these questions by investigating the central question of language production researchers: why people say things in certain ways and not others. Language production processes involve the conversion of communicative intents into spoken, signed, or written utterances, and this conversion requires an intricate interplay of attention, retrieval from long-term memory, motor planning and temporary maintenance of an utterance plan. I’ll present evidence that the nature of these processes (many of which are not language-specific) yields biases in production planning for certain kinds of word orders over others. These biases to produce certain utterance forms over other viable alternatives have downstream consequences for language statistics, language typology, and language comprehension processes. I’ll trace these cascading effects from production biases and argue that a full account of comprehension and language typology will have to incorporate insights from language production.
October 11 – David Green (University College London, Division of Psychology and Language Sciences) : Language control
Communities differ in the way they use the languages at their disposal. In some, speakers code-switch between their languages within a conversational turn, whereas in others they do not. Instead, different languages are used in different realms of life. Might these different conversational demands shape the processes of language control within the individual speaker? I will explore and evaluate this possibility. Understanding the dynamics of language control contributes to our understanding of the mind as a control system, to our understanding of individual differences in executive control and to our understanding of the patterns of speech recovery post-stroke.
October 18 – Fred Genesee (McGill University, Psychology) : Myths and Misunderstandings about Dual Language Acquisition in Young Learners
There has been growing interest in children who learn language in diverse contexts and under diverse circumstances. In particular, dual language acquisition has become the focus of much research attention, arguably as a reflection of the growing awareness that dual language learning is common in children. A deeper understanding of dual language learning under different circumstances is important to ensure the formulation of theories of language learning that encompass all language learners and to provide critical information for clinical and other practical decisions that touch the lives of all language learners. This talk will review research findings on dual language learning in both school and non-school settings, among simultaneous and sequential bilinguals, and in typically-developing learners and those with an impaired capacity for language learning. Key findings with respect to common myths and misunderstandings that surround dual language acquisition in young learners will be reviewed and discussed and their implications for both theoretical and practical matters will be considered.
October 25 – Ben Zinszer (Penn State, Psychology) : Age of L2 Onset and Left MTG Specialization for L1 Lexical Tones
Recent neuroimaging studies have revealed distinct functional roles of left and right temporal lobe structures in the processing of lexical tones in Chinese. In an event-related potential (ERP) paradigm, Xi et al. (2010) elicited a greater mismatch negativity (MMN) to acoustically varying sets of intonated Chinese syllables when tone variants represented linguistically salient contrasts. Zhang et al. (2011) further localized processing of Chinese lexical tones to the left middle temporal gyrus (lMTG) for linguistic processing and the right superior temporal gyrus (rSTG) for acoustic processing of tonal contrasts for native Chinese speakers. In the present study, we ask whether knowledge of a second language (English) modulates this pattern of activation in the perception of tonal contrasts. Twenty-five native Chinese speakers were recruited from undergraduate and graduate students at Beijing Normal University, China. Participants watched a silent film and listened to blocks of computationally manipulated /ba/ syllables which were varied to form within- and between-category deviants at equal acoustic intervals from a standard tone. Oxygenated hemoglobin levels in participants’ temporal cortices were measured by functional near-infrared spectroscopy (fNIRS). Three block conditions were presented: standard falling tones, standard falling tones randomly ordered with deviant falling tones (Within Category), and standard falling tones randomly ordered with rising tones (Across Category). Deviant blocks were alternated 10 times each with standard blocks and rest periods of equal duration. Blocks were analyzed for peak oxygenated hemoglobin levels, and a mixed-effects model was fit to these data, including effects of Category (Standard, Within, or Across), age of earliest exposure to English (spoken), and proficiency in English. Functional changes in oxygenated hemoglobin levels indicated a significantly greater response to Within Category contrasts in right STG, consistent with previous findings. However, the effect of Category in left MTG was significantly modulated by the age of participants’ earliest English exposure: Across Category activation exceeded Within Category activation only for participants exposed to English after 13 years of age. While previous research has established the importance of left MTG in the categorical perception of lexical tones, our findings suggest that the functional specialization of this region is sensitive to second language experience, even in the processing of the native language.
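A minimal sketch of the kind of mixed-effects analysis described in this abstract is given below, in Python with statsmodels. The input file and column names (subject, peak_hbo, category, aoe_english, proficiency) are hypothetical placeholders; this illustrates the general model structure reported above, not the study's actual analysis code.

```python
# Hypothetical sketch: peak oxygenated-hemoglobin response predicted by block
# Category, age of earliest English exposure, and English proficiency,
# with a random intercept per participant.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fnirs_peaks.csv")  # assumed long format: one row per block

model = smf.mixedlm(
    "peak_hbo ~ C(category, Treatment('Standard')) * aoe_english + proficiency",
    data=df,
    groups=df["subject"],  # random intercept for each participant
)
result = model.fit()
print(result.summary())
```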
November 1 – PIRE undergraduate presentations
November 8 – Greg Guy (NYU, Linguistics) : The Role of Lexical Frequency in Linguistic Variation
Lexical frequency has been argued (by, for example, Bybee 2007, Pierrehumbert 2001) to be a significant factor in the organization of linguistic structure and the operation of phonological processes, such that words that are used a lot are expected to behave differently in certain respects from those that occur more rarely. Variable processes in language are especially implicated in such models. This paper considers empirical evidence from a number of studies of linguistic variation that have looked at lexical frequency as a potential predictor, to investigate the validity and extent of frequency effects. The results are mixed: frequency does appear to affect some variable lenition processes, like -t,d deletion in English, but is unreliable in others, e.g. –s deletion in Spanish. It interacts significantly with morphological constraints: -t,d deletion increases with frequency in monomorphemes, but not in derived forms (regular past tense forms). And frequency effects can be found beyond the phonology: in morphological variation, for example, synthetic comparatives and superlatives in English are favored with frequent adjectival roots. In syntax, Spanish pro-drop shows no general frequency effect, but frequency systematically interacts with all other constraints on the process, magnifying them in high-frequency forms and attenuating them in low frequency forms. And there are cases where frequency fails to have any significant effect, or makes the wrong predictions. The data therefore suggest that frequency is a characteristic of entries in the mental lexicon that is available to speakers for constructing generalizations about linguistic operations, but frequency does not regularly predict or entail specific linguistic outcomes. This has implications for formal linguistic theories as well as for usage-based theories that emphasize frequency as an explanatory factor.
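One common way to test the kind of interaction described above, in which a frequency effect on -t,d deletion appears in monomorphemes but not in regular past-tense forms, is a logistic regression with a frequency-by-morphological-class interaction. The sketch below is only an illustration in Python; the data file and column names (deleted, log_freq, morph_class) are hypothetical and this is not an analysis from the talk.

```python
# Minimal sketch: does lexical frequency predict -t,d deletion, and does that
# effect differ between monomorphemes and regular past-tense forms?
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.read_csv("td_deletion_tokens.csv")  # assumed: one row per token
# deleted: 1 if final /t,d/ was deleted, 0 otherwise
# log_freq: log lexical frequency of the word
# morph_class: "monomorpheme" or "past_tense"
model = smf.logit("deleted ~ log_freq * C(morph_class)", data=tokens)
result = model.fit()
print(result.summary())  # the interaction term tests whether the frequency
                         # effect is confined to monomorphemes
```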
November 15 – No CLS Meeting
November 22 – Tamara Swaab (UC Davis, Psychology) : Understanding Individual Differences in Language Comprehension
Spoken language comprehension involves managing a set of interrelated cognitive tasks, including activation of stored phonological and semantic representations of words, activation or construction of syntactic structure representations, determination of how newly activated words relate to previously introduced information, and ultimately the construction of a representation of the meaning of the message. Whereas the processing of individual words and syntactic structures in isolation can proceed relatively automatically, the construction of a coherent representation of the overall message may rely more on controlled processing, requiring maintenance of previous context and rapid integration of incoming input in Working Memory (WM). This is especially the case for spoken language comprehension since listeners have no control over the rate of input, nor can they "re-experience" parts of the speech signal. I will present evidence from healthy adults and schizophrenia patients indicating that individual differences or impairments in the controlled maintenance of context predict which kind of language information is prioritized or processed during spoken language comprehension: the meanings of individual words or the integrated representation of the language context.
November 29 – No CLS Meeting
December 6 – Megan Zirnstein (Penn State, Psychology) : Investigating Semantic Prediction in Second Language Processing
The ability to predict upcoming words in a sentence or text is an important aspect of the language comprehension system. When predictions are correct, behavioral and ERP research has shown that processing load is reduced (Federmeier, 2007; Van Berkum, 2008). However, when predictions are false and do not fit with the sentence or discourse context, there are processing costs, often in the form of longer reading times (Van Berkum et al., 2005) or higher amplitude ERP effects (i.e., the N400 and a frontally-distributed positivity; Federmeier et al., 2007). In this talk, I will present an overview of what we currently know about how readers predict in their native language, and discuss implications for processing in a second language. I will also present data from a recent ERP study investigating prediction effects when bilinguals read in their second language.
December 13 – Angela Grant (Penn State, Psychology) : Working hard really does pay off: An fMRI investigation of lexical access in L2 learners
This study uses functional magnetic resonance imaging (fMRI) to investigate how the development of lexical access in second language (L2) learners may be influenced by individual differences in working memory (WM) and inhibitory control (IC). Models of bilingual processing suggest that (a) bilinguals must consistently use IC in comprehension and production and (b) highly proficient learners access concepts directly while less proficient learners access concepts only through the L1 (Green, 1998; Kroll & Stewart, 1994). The neural implication of these models is that less proficient bilinguals, compared with highly proficient bilinguals, will require more effort to inhibit their L1 in order to successfully retrieve words in the L2.
Our hypothesis, based on the current neuroimaging literature, is that lower-proficiency learners will more strongly activate inhibitory control areas, such as the left inferior frontal gyrus (LIFG) and anterior cingulate cortex (ACC), in addition to areas associated with semantic retrieval, such as the left middle temporal gyrus (LMTG). Higher-proficiency learners, by contrast, should utilize more efficient networks in which the LMTG acts as a hub of semantic retrieval, rather than relying on the LIFG or ACC for cognitive control (Abutalebi, 2008; Yokoyama, 2009). Participants in our study completed measures of proficiency (TVIP; Dunn et al., 1986), WM (phonological letter-number sequencing; Wechsler, 1997), and IC (flanker task; Emmorey et al., 2008) before an fMRI experiment in which participants were asked to make a language-specific lexical decision on L1 words, L2 words, and homographs (e.g., pie, the Spanish translation of foot).
Behavioral results did not show any significant correlations among proficiency, WM, and IC. Although these measures did not correlate significantly with each other, when they were included as covariates in the fMRI data analysis, activation in our pre-specified regions of interest correlated positively with both working memory and proficiency. This suggests that, contrary to our predictions, L2 learners with higher WM and proficiency are calling on the LIFG, ACC, and LMTG more when accessing L2 words, rather than less. Specifically, learners are utilizing the LIFG under high-conflict conditions (when identifying homographs), and the ACC under normal-conflict conditions (when identifying unambiguous Spanish words). Participants also utilized the LMTG and left precuneus more when accessing Spanish words compared with English words, areas associated with semantic and episodic retrieval, respectively. Results are discussed in the context of current neuroimaging models of second language acquisition and bilingualism.
December 20 – Weekly CLS Meeting
December 27 – Weekly CLS Meeting
January 11 – Lisa Heimbauer (Penn State, Psychology) : Investigation of Language-Related Cognitive Abilities in Non-human Primates
To date, my research has focused on the evolution of language-related cognitive abilities, taking a comparative approach. Research projects investigating the speech perception abilities of a language-trained chimpanzee have shown performance similar to that of humans, providing evidence that speech perception abilities are rooted in general auditory mechanisms and most likely were present in a common ancestor of humans and chimpanzees. Additionally, the chimpanzee’s performance has stressed the importance of early immersion in a speech-rich environment to facilitate speech perception. Results of experiments investigating the sequence learning abilities of rhesus macaques have shown rudimentary, rule-like visual learning by the monkeys, providing evidence that these processing abilities may have been present in an early primate ancestor.
January 18 – Miccio Travel Awardees 2011-2012 (Penn State)
Colleen Balukas, Joe Bauman, and Cari Bogulski will talk about their experiences as Miccio Travel Award winners during the 2011-2012 award cycle.
Colleen Balukas visited Naomi Nagy at the University of Toronto and James Walker at York University. Joe Bauman visited Prof. Jessi Aaron at the University of Florida. Cari Bogulski visited Dr. Lee Osterhout at the University of Washington.
More about the Adele Miccio Travel Award can be found here.
January 25 – Megan Zirnstein (Penn State, Psychology) : Reading a book makes finishing it easier: Context effects of enriched composition
Complement coercion, a form of enriched composition, occurs when two syntactically compatible but semantically mismatching elements are combined in an expression (e.g., started + the book). The semantic mismatch triggers the coercion of one element from an entity into an event sense (e.g., started to read the book), resulting in processing costs when compared to control expressions. Some researchers believe that enriched composition is a primarily semantic process, and that it is the activation and selection of competing interpretations for the event (e.g., reading vs. writing the book) that cause processing delays. In addition, previous ERP work has shown that coerced nouns elicit an N400 response similar to that of semantically anomalous nouns (e.g., astonished the book; Baggio et al., 2009; Kuperberg et al., 2010). Coercion costs, then, should be sensitive to semantic manipulations inter- and intra-sententially. However, some behavioral research has shown that this may not be the case (Frisson & McElree, 2008; Traxler et al., 2005). I will present data from multiple experiments that, together, suggest that enriched composition is neither a purely semantically nor a purely syntactically driven process. In particular, these data demonstrate that event information in prior context can change the nature of the cost associated with coercion, but does not fully attenuate said cost.
February 1 – Holger Hopp (University of Mannheim) : Lexical effects on grammatical processing
In this talk, I explore the extent to which aspects of lexical representations and processing affect grammatical processing in late L2 learners. I will present data from two recent experiments on grammatical gender agreement (Experiment 1) and syntactic ambiguity resolution (Experiment 2).
In Experiment 1, twenty advanced to near-native L1 English speakers of German were tested on lexical gender assignment in production and on grammatical gender agreement in a visual-world eye-tracking experiment. Performance on grammatical gender agreement in comprehension varied according to lexical gender assignment accuracy. Building on the lexical gender learning hypothesis by Grüter et al. (2012), I present a model to account for the dependencies between lexical and syntactic performance.
In Experiment 2, syntactic attachment preferences were tested in 75 intermediate to advanced L1 German learners of English as well as 25 English native controls. Unlike the natives, who show syntactic attachment preferences, L2ers do not display clear preferences, which seems to point to the underuse of structural information in L2 parsing (e.g. Clahsen & Felser, 2006). However, once lexical processing factors – as independently assessed in a lexical decision task – are taken into account, robust and native-like structure-driven attachment preferences surface in adult L2ers, too.
I will highlight the role of lexical aspects on L2 grammatical performance and discuss the implications of these findings for L2 acquisition and L2 processing research.
February 8 – Mari Cruz Martin (Penn State, Psychology) : Cognitive control and inhibitory mechanisms in bilingual production
Past research shows that lexical access is non-selective with respect to language, allowing cross-language interactions to occur (Dijkstra, 2005; Guo, Liu, Misra, & Kroll, 2011). A core question in bilingual research has been to understand the control mechanisms that allow bilinguals to select the language they intend to use. The study I will present aims to investigate the scope and the time course of inhibition in bilingual production. Language comprehension and production potentially differ in the way in which bilinguals achieve control of their two languages, e.g., in the time course of inhibition. In past research, we have shown that cross-language inhibition in comprehension seems to be relatively short-lived (Martín, Macizo, & Bajo, 2010). In contrast, studies of lexical production have shown that inhibition of the language not in use can be long-lasting (e.g., Misra, Guo, Bobb, & Kroll, 2012), suggesting that there are multiple mechanisms of control. The present study explores the nature of the inhibitory mechanisms that underlie language selection in bilingual production and specifically whether there is evidence for both automatic and controlled selection processes. Relatively proficient Chinese-English bilinguals performed a picture naming task in language-blocked or language-mixed conditions (Guo et al., 2011) under simple naming conditions or while performing a concurrent and continuous updating task. Preliminary results show that the concurrent task affected performance differentially in the blocked and mixed conditions. I compare these results with the earlier comprehension studies and discuss the implications for models of cognitive control in production.
February 15 – Debra Titone (McGill University) : What The Eyes Tell Us About Bilingual Language Comprehension And Production
A rich history of work has led to several “truths” about bilingual language processing: 1) bilinguals activate words from all known languages even in single-language contexts (cross-language competition), 2) bilinguals divide daily exposure to each language, which leads to altered lexical entrenchment for language-unique words in a first and second language (L1 & L2), and 3) virtually everything about bilingual comprehension and production is modulated by L2 history, experience, and ability. While many theories of bilingual language processing generally accommodate these truths, they differ in the role ascribed to non-linguistic capacities such as cognitive control. Some models ascribe a secondary role to cognitive control (e.g., BIA+), arguing that they should only play a role following the earliest stages of word processing. Other models, such as Green’s inhibitory control model and the bilingual advantages view posited by Bialystok and colleagues, ascribe a more central role to cognitive control in balancing within- and cross-language activation during bilingual production and comprehension.
In this talk, I highlight ongoing work from my laboratory that investigates whether individual differences in L2 ability and cognitive control relate to language comprehension and production processes among bilinguals. This work makes extensive use of eye-tracking, which has great temporal sensitivity and ecological validity for studying the earliest stages of both language comprehension and production. With respect to eye movement studies of sentence reading, we show that cross-language activation (interlingual homograph interference) is modulated by several factors including semantic bias of a sentence, L2 history, and, importantly, individual differences in cognitive control. Similarly, in eye movement studies of spoken language comprehension, we show that both within- and cross-language activation (generated by word-onset competition) is modulated by individual differences among bilinguals in L2 ability and cognitive control. With respect to language production, we show similar links between L2 ability and cognitive control when bilinguals produce L1 vs. L2 speech to describe a visual display while eye movements are monitored, and when they produce extended spontaneous speech in a monologue or dialogue context. Taken together, these studies suggest that cognitive control has much to do with bilingual language processing at the very earliest stages of comprehension and production, though much remains to be discovered about the specific kinds of cognitive control operations that are essential for specific bilingual language functions.
February 22 – Michele Miozzo (Columbia University) : The processing of word sounds in speech production
Word production in speaking culminates with the selection and articulation of word sounds. I will present a series of results – from neuroimaging (MEG) and acquired language deficits resulting from strokes – that shed light on the production of word sounds and their brain underpinnings. MEG correlates of picture naming show a fast activation of word sound processing, which starts about 150 ms after picture presentation, apparently simultaneously with the processing of the meaning of the depicted concepts. To the extent that these results suggest the concurrent activation of word sounds and meaning, they question the current view that, in picture naming, access to word semantics precedes access to word sounds. One of the implications of these results relates to the functional organization of brain mechanisms, as they suggest pathways directly connecting brain areas associated with the processing of visual object features and word sounds. On the other hand, the neuropsychological data provide strong support for the hypothesis of two distinct levels of processing in word-sound production, each associated with (at least partially) distinct brain mechanisms. At the phonological level, word sounds are encoded in an abstract form that does not specify context-specific features (e.g., phoneme length, aspiration) that are detailed at the following phonetic level of processing. I present results showing that each of these two levels of processing can be selectively damaged in conditions of acquired language deficits. Furthermore, results indicate that adjustments of word forms under the control of phonological grammar occur at the phonological level.
March 1 – Carla Contemori (Penn State, Spanish Linguistics) : The comprehension and production of relative clauses across populations
To date, my research has focused on the acquisition of relative clauses in monolingual typically developing (TD) children and monolingual individuals with neurodevelopmental language disorders, such as Down Syndrome (DS). A relative clause is a dependent clause that modifies a noun by making it more specific or adding additional information about it. Even though relative clauses emerge very early in the speech of TD children, some types of relatives are harder for them to comprehend and to produce. In this talk, I will focus on the production of the “harder” type of relative clauses, and I will compare TD children and individuals with DS, showing which solutions the two populations adopt to solve the problem of producing these syntactic structures.
March 8 – No CLS Meeting (Spring Break)
March 15 – Susanne Gahl (University of California, Berkeley): Conversational Speech and Psycholinguistics
One and the same word will sound somewhat different each time it is spoken, even if it is spoken by one and the same speaker. In fact, the actual pronunciation of words will often differ significantly from their “citation form” or “dictionary pronunciation”. In this talk, I explore the respective roles of lexical retrieval times and of factors related to speech perception in pronunciation variation, both in conversational speech and in the lab. I argue that insights gained in psycholinguistic experiments, such as picture naming and lexical decision tasks, can shed light on lexical access and retrieval in conversational speech, and that current models of language production can usefully be extended to conversational speech.
March 22 – Cari Bogulski (Penn State, Psychology): Are bilinguals better learners? A neurocognitive investigation of the bilingual advantage
Current research demonstrates that bilinguals are often advantaged relative to their monolingual counterparts on tasks that require cognitive control. A few past studies have also identified a bilingual advantage in the realm of word learning. However, these documented benefits of bilingualism are largely correlational, with little known about the underlying mechanisms that map language use to cognition and learning. One possibility is that all of the documented benefits of bilingualism reflect the effects of the constant mental juggling a bilingual at any age must exercise, as both of a bilingual’s two languages appear to be active even when one language alone is required. Alternatively, different aspects of language use may map onto different types of cognitive consequences. The proposed research seeks to uncover the way that the use of multiple languages affects language learning as well as learning in other domains. The planned experiments will test the scope of the bilingual advantage in foreign language vocabulary learning by using electrophysiological measures that may provide a more sensitive index of the time course of early learning, compare the learning of linguistic and nonlinguistic information, and determine whether the bilingual advantage can be seen in the learning of a signed vs. spoken language. The goal of this set of experiments is to identify the cognitive mechanisms that underlie foreign vocabulary learning, and thus, to identify how learning a foreign language can be strategically enhanced.
March 29 – Tim Poepsel (Penn State, Psychology): The Effects of Bilingualism on Statistical Learning
A bilingual learner faces the task of learning and differentiating two languages. Recent research suggests that constantly switching between and inhibiting languages results in a language learner with distinct differences in comparison to a monolingual. For example, bilinguals show advantages in both linguistic and non-linguistic tasks measuring cognitive control (i.e., task switching, inhibition, attentional control). Recent work in statistical learning has begun to explore how learners detect and acquire multiple linguistic systems. Relatively few studies, however, have examined possible differences in the statistical learning mechanism or outcomes as a result of a learner’s bilingualism. In a series of statistical learning tasks (i.e., speech segmentation and cross-situational word learning), we compared the performance of English monolinguals from Penn State University and Chinese-English bilinguals from Beijing Normal University. Preliminary results show faster and more robust learning for bilinguals in comparison to monolinguals, as well as differential effects of L1 identity on learning outcomes. These results suggest an advantage for bilinguals in statistical learning tasks, as well as an influence of L1 patterns on the learning of additional languages in sequential and unbalanced bilinguals.
April 5 – Jose Ignacio Hualde (University of Illinois, Urbana-Champaign): Phonological awareness and conventionalization in sound change
In the standard neogrammarian view, a distinction is made between regular, biomechanically induced sound change and psychologically based analogy. In a sense, however, all sound change has a psychological aspect, even when its origin is in biomechanics, since at some point phonological recategorization is required for sound change to take place (e.g. /p/ > /b/). In Labovian sociolinguistic research, a distinction is also made between change from below and change from above, related to speakers’ awareness.
In this presentation I will consider the role of phonological awareness in regular sound change drawing from my recent acoustic research on intervocalic consonant lenition in a number of languages (including Spanish, Italian and Basque). I will argue that, at an initial stage, lenition applies as the neogrammarians envisioned: across morphological boundaries and without regard to lexical identity. At this initial stage the process may be below speakers’ consciousness, and yet may operate as a conventionalized reductive process in the speech community, beyond biomechanical reduction. A number of factors may cause awareness of the phenomenon and its phonologization. It is at this stage that word- and morpheme-boundaries start to matter as conditioning environments and we also find lexical effects.
This research has also revealed the existence of important individual differences in phenomena such as intervocalic consonant voicing, correlated in part with the sex of the speaker. I will discuss the possible eventual conventionalization of sociolinguistic variation from biomechanical biases in lenition processes, perhaps through the social construction of these individual differences in speech.
April 12 – Amelia Dietrich (Penn State, Spanish Linguistics)
The work I will be presenting includes data collected during my PIRE-sponsored visit to Dr. Teresa Bajo’s lab at the Universidad de Granada, Spain. Verb subcategorization bias, often just called verb bias, is a usage-based property assigned to verbs based on the frequency with which that verb occurs with any of its allowable subcategorization frames. The most frequently co-occurring subcategorization frame, often based on corpus analyses of naturalistic data, is considered to be that verb’s bias (e.g., Spanish: Dietrich & Balukas, 2012; English: Gahl, Jurafsky & Roland, 2004). Recent research with monolingual speakers (e.g. Wilson and Garnsey, 2009) as well as proficient second language (L2) speakers (e.g., Dussias & Cramer Scaltz, 2008) has demonstrated that verb bias guides the initial selection of a structural analysis during online processing. Research with bilingual populations has shown that bilingual lexical access is non-selective (e.g. Schwartz, Kroll & Diaz, 2007), and that translation equivalents of verbs in different languages do not necessarily share the same bias (Dussias, Marful, Bajo & Gerfen, 2011). In light of these discoveries, the goal of the present study is to investigate whether verb bias information from Spanish (the native language, L1) is activated and used during sentence processing in English (L2) by examining how verbs with same and different biases in a bilingual’s two languages impact initial syntactic analysis. Given that verb bias is based on usage frequency information, which presumably is developed through language experience, we furthermore investigate the role that immersion in the second language has on a bilingual’s processing strategies.
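Because verb bias is defined by the relative corpus frequency of a verb's subcategorization frames, it can be estimated by simple counting. The sketch below illustrates the idea in Python with made-up counts and a hypothetical two-frame (direct object vs. sentential complement) coding; it is not data or code from the study.

```python
# Illustrative sketch: estimate each verb's subcategorization bias from
# hypothetical corpus counts of direct-object (DO) vs. sentential-complement (SC) frames.
from collections import Counter

frame_counts = {
    "admit":   Counter({"DO": 40, "SC": 160}),   # made-up counts
    "confirm": Counter({"DO": 150, "SC": 50}),
    "believe": Counter({"DO": 90, "SC": 110}),
}

def verb_bias(counts, threshold=0.6):
    """Label a verb DO-biased, SC-biased, or equi-biased from frame proportions."""
    total = sum(counts.values())
    for frame, n in counts.items():
        if n / total >= threshold:
            return frame + "-biased"
    return "equi-biased"

for verb, counts in frame_counts.items():
    print(verb, verb_bias(counts))
```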
April 19 – No CLS Meeting
April 26 – Cheryl Frenck-Mestre (University of Provence): Models of second language processing: Psycholinguistic and neurolinguistic perspectives
Numerous models of second language (L2) acquisition have been proposed, from either a linguistic (Clahsen & Felser, 2005; Herschenson, 2000; Schwartz, 1998) or neurolinguistic perspective (McLaughlin et al., 2010; Osterhout et al., 2008; Steinhauer, White & Drury, 2009; Ullman, 2001). These models propose contrasting views as concerns the role of the native language in adult L2 acquisition, the ultimate level of attainment that can be achieved, whether distinct neurophysiological responses are indicative of distinct stages of L2 processing capacity, and, indeed, whether the cortical areas involved in L2 processing are the same as in native language processing. In the present talk, I will present an overview of these models and their predictions as concerns online syntactic processing. Data from various ERP and eye-movement experiments run in my laboratory will be presented which, overall, question the idea of any strict series of neurophysiological responses linked to levels of L2 competence (Carrasco & Frenck-Mestre, in prep; Foucart & Frenck-Mestre, 2011, 2012), highlight the importance of using complementary methods to capture processing capacity (Foucart & Frenck-Mestre, 2012) and pinpoint protracted areas of processing difficulty and how they relate to the convergence of grammatical features across the L1 and L2 (Carrasco & Frenck-Mestre, in prep; Foucart & Frenck-Mestre, 2011, 2012).
August 31 – Patricia Román (Penn State, Spanish and Linguistics) : The Nature of Non-intentional Inhibition in Memory and Language
The ability to suppress irrelevant information or stop prepotent responses is crucial for efficient processing of, and interaction with, our environment. The so-called mechanism of inhibition has been observed to play an important role in different domains such as attention, memory and, more recently, language. One focus of interest is whether inhibition is a unitary mechanism acting on different representations (e.g. motor, mnemonic, linguistic, etc.) and under different tasks (a unitary view of inhibition) or whether there are multiple inhibitory processes. Those who support the latter view hold that inhibition can be separated in terms of its dependence or independence on control processes (e.g. Nigg, 2000). This debate has promoted research from different approaches (developmental, neuroimaging, or clinical, for example) that supports both views. Here we provide behavioral and neurophysiological data that favor a unitary view of inhibition as a control mechanism over memory and language.
September 7 – Courtney Johnson-Fowler (Penn State, German and Linguistics) : Learning and using grammatical gender in L2 German: results from two experiments
Learning grammatical gender in a second language is a particularly difficult task for L2 learners. Much of this difficulty can be attributed to a lack of understanding of what role gender plays in the language. In the first of two experiments exploring grammatical gender, first-semester students were given in-class treatments based on the principles of Processing Instruction (VanPatten, 2004) in order to highlight the meaning associated with grammatical gender in German. We wanted to test whether, if given tasks that require the participant to pay attention to gender in order to correctly solve a problem, L2 learners can begin to better acquire the German gender system. Even L2 speakers who can learn and use the gender system in German and who achieve high proficiency in the language in general still face challenges in terms of gender processing. The predictive use of grammatical gender in the processing of a language has been shown to differ greatly between L1 and L2 speakers (e.g., Scherag, Demuth, Rösler, Neville, & Röder, 2004; Lew-Williams & Fernald, 2010). Even speakers who have lived in an immersion environment for a long period of time and speak at an advanced level tend to ignore gender cues during processing (Hopp, 2011). Is the L2 speaker’s inability to fully use gender in the same predictive way as a native speaker the result of learning the language too late in life, or simply a matter of needing additional processing time to achieve the same result? In the second experiment, we designed a task which first gave participants a gender prime followed by a sentence presented using rapid serial visual presentation (RSVP). This simple sentence included an adjective between the definite article and the final picture in order to increase processing time. By comparing RT data from L1 and L2 speakers, it was determined that many L2 speakers were indeed able to use gender in a more native-like manner.
September 14 – Gary Dell (University of Illinois) : What Freud Got Right About Speech Errors
Most people associate Sigmund Freud with the assertion that speech errors reveal repressed thoughts, a claim that does not have a great deal of support. I will introduce some other things that Freud said about slips, showing that these, in contrast to the repression notion, do fit well with modern theories of language production. I will illustrate using an interactive two-step theory of lexical access during production, which has been used to understand aphasic speech error patterns.
September 21 – Tamar Gollan (University of California, San Diego) : Bilingualism in Aging & Dementia: Evidence for Language-Specific Control Mechanisms
A fundamental characteristic of language is that it provides multiple ways to express the same ideas, and therefore speaking presents the challenge of choosing between competing alternatives. Bilinguals provide a unique source of evidence about how speakers gain control over these selection challenges, given that they often face direct competition between languages. Current research suggests that bilinguals manage this competition with domain-general mechanisms of cognitive control. By implication, proposals that the language system may be equipped with its own specialized processing mechanisms are rejected. I will present a series of studies that question this basic claim by demonstrating that bilinguals with impairments in executive control (due to aging and Alzheimer’s disease) exhibit relatively intact ability to do what bilinguals do best. This dissociation invites a psycholinguistic model that is equipped with at least some domain-specific control mechanisms, and that does not attribute all the consequences of bilingualism to mental juggling of two languages. These data also provide unique insights into why retrieval sometimes fails when people speak, and have practical applications for diagnosis of cognitive impairment in an increasingly multilingual society.
September 28 – No talk: The Third Workshop on Immigrant Languages in America at Penn State
October 5 – Edith Kaan (University of Florida) : Investigating first-language effects in second-language sentence processing
There is ample evidence that advanced second-language learners activate lexical information in both languages even in contexts where only one language is relevant. It is still unclear to what extent syntactic properties, such as word order, are activated in one language while processing the other. To test this, we compared native English speakers with native Dutch speakers who were advanced second-language learners of English. Tasks included a self-paced reading task, manipulating agreement and word order so as to create cross-linguistic ambiguities for the second-language learners. L2 learners showed less sensitivity to grammaticality manipulations than native English speakers. Effects of native language word order were not found during on-line reading, but could be observed in performance on an end-of-sentence statement verification task. Theoretical implications of these findings will be discussed. In addition, the effects of proficiency and cognitive control will be considered.
October 12 – Maria Polinsky (Harvard) : Learning from heritage languages
One of the main points I make in this talk is that, now that we have learned a fair amount about heritage languages, the time has come for linguists to learn from them about the overall design of natural language. Both linguistic theorizing and experimental studies of language development rest heavily on the notion of the adult, perhaps linguistically stable, native speaker. Native speaker competence and use are typically the result of normal first language acquisition in a predominantly monolingual environment, with optimal and continuous exposure to the language. In this talk, I discuss the case of heritage speakers, i.e., bilingual speakers of an ethnic or immigrant minority language whose first language does not typically reach native-like attainment in adulthood. I present an overview of heritage speakers’ linguistic system and discuss several competing factors that shape this system in adulthood. The examination of the linguistic knowledge of heritage speakers allows us to question long-held ideas about the stability of language before the so-called critical period for language development, and the nature of the linguistic system developing under reduced input conditions.
October 19 – No talk: The 31st Second Language Research Forum in Pittsburgh
October 26 – No talk: No regular CLS meeting (Mental Lexicon Conference in Montreal and Hispanic Linguistics Symposium)
November 2 – PIRE Undergraduate Student Research Presentations
Sara Carter – Processing of English Verb Bias in the Spanish L2 Immersion Environment
Sarah Fairchild – Determiner-noun codeswitching in Welsh-English bilinguals
Emma Hance – Does script influence novel word learning? A comparison of same-script and different-script bilinguals
Thomas Holt – Categorical Representation in Chinese Monolinguals
Melissa Magro – /s/ lenition in the speech of Spanish-speaking children from Granada
Jesse Martz – Overhearing a second language abroad as an adult: Learning with no intention
Elizabeth Mormer – Production of subject-verb agreement in English by Swedish-English bilinguals
Clair Pelella – The comprehension of codeswitches among speakers who don’t codeswitch
Emily Sabo – Verb-bias and plausibility: Do L2 speakers use these sentential cues in the same way that L1 speakers do?
November 9 – Ben Zinszer (PSU, Psychology) : Effects of language switching on statistical learning in speech segmentation
Previous empirical research has demonstrated the ability of infant and adult learners to track the statistical regularities in a novel language, relying on these patterns to discover underlying structures in the input. In speech segmentation specifically, learners are sensitive to transitional probabilities between phonetic units (such as syllables), identifying words within which transitional probabilities are relatively high and between which transitional probabilities are low. Recent studies suggest that this ability is limited when learners are exposed to two languages containing partially overlapping phonetic inventories unless an explicit cue to the change in input is provided. In the first experiment, we replicate the results of Gebhart et al. (2009), demonstrating that learners fail to segment a second language when they are exposed to two different input streams for 5.5 minutes each sequentially. In subsequent experiments, we attempt to reconcile the results of Experiment 1 with a previous study which indicated that explicit language cues were not always necessary to parse both languages (Weiss, Gerfen, & Mitchel, 2009). We vary language exposure (duration) and frequency of language switching, revealing that under certain combinations of these parameters, both languages may be learned in the absence of explicit language cues. Finally, we explore some preliminary data indicating that the effects of these training conditions are not uniform between monolingual and bilingual adults.
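The transitional-probability logic described in this abstract can be made concrete with a toy sketch: forward TP between adjacent syllables is estimated from bigram counts over the stream, and word boundaries are posited where TP dips. The artificial lexicon and threshold below are made up for illustration; this is not the materials or analysis from the study.

```python
# Toy sketch of statistical speech segmentation by transitional probability (TP):
# TP(B|A) = count(A then B) / count(A); word boundaries are posited at TP dips.
import random
from collections import Counter

random.seed(0)
words = ["tupiro", "golabu", "bidaku"]          # hypothetical artificial lexicon
stream = []
for _ in range(100):                            # build an unsegmented syllable stream
    w = random.choice(words)
    stream.extend(w[i:i + 2] for i in range(0, len(w), 2))

unigrams = Counter(stream)
bigrams = Counter(zip(stream, stream[1:]))

def tp(a, b):
    """Forward transitional probability of syllable b given syllable a."""
    return bigrams[(a, b)] / unigrams[a]

threshold = 0.5                                  # within-word TPs are high (~1.0),
segmented, current = [], [stream[0]]             # between-word TPs are low (~0.33)
for a, b in zip(stream, stream[1:]):
    if tp(a, b) < threshold:
        segmented.append("".join(current))
        current = []
    current.append(b)
segmented.append("".join(current))
print(segmented[:10])                            # mostly recovers the three "words"
```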
November 16 – Nick Henry (PSU, German and Linguistics)
November 23 – No talk: Thanksgiving Break
November 30 – Giulia Togato (University of Granada, Psychology)
December 7 – Mike Putnam & John Lipski (PSU, German and Spanish Linguistics)
December 14 – Roxana Botezatu (PSU, Communication Sciences and Disorders) : An electrophysiological study of bilinguals’ reading strategy transfer: The contribution of spelling-sound consistency and orthographic similarity to the activation of phonology
The study examined whether the orthographic transparency of bilinguals’ first (shallow L1 – Spanish; deep L1 – Chinese) language has an impact on the reading strategy they adopt in an orthographically deep second (L2 – English) language. Highly proficient Spanish-English and Chinese-English bilinguals and English monolingual controls made rhyme judgments of visually presented English words while behavioral and EEG measures were recorded. The spelling-sound consistency and orthographic similarity of semantically unrelated rhyming and non-rhyming prime-target pairs were varied systematically. To manipulate consistency, graphemically dissimilar primes and targets that either matched or did not match in consistency were compared in both the rhyming (consistent/consistent: WHITE-FIGHT versus inconsistent/consistent: HEIGHT-FIGHT) and non-rhyming conditions (consistent/inconsistent: SCALE-LEAK versus inconsistent/inconsistent: WORK-LEAK). Orthographic similarity was manipulated by comparing pairs that matched in consistency, but were either graphemically dissimilar (WHITE-FIGHT; WORK-LEAK) or graphemically similar (RIGHT-FIGHT; STEAK-LEAK). Results suggest that bilinguals with a shallow L1 orthography may treat words in a deep L2 orthography as equally inconsistent in the absence of converging cues from orthography and phonology, whereas bilinguals with a deep L1 orthography may treat words in a deep L2 orthography as equally consistent.
January 13 – Richard Page (Penn State, German and Linguistics) : Exceptionality and Open Syllable Lengthening in West Germanic
In the nineteenth century, the Neogrammarians hypothesized that sound change is regular and attributed exceptions to sound changes to factors such as analogy and dialect mixture. The explanation of both regularity and exceptionality in phonological change remains one of the primary challenges in historical linguistics regardless of theoretical framework (see Kiparsky, 1994; Labov, 1994, 2011; Bybee, 2003; Blevins & Wedel, 2009; Bermudez-Otero, 2010). This paper investigates the lengthening of stressed short vowels in open syllables in Middle Dutch, Middle English and Middle High German.
Of particular interest is the regularity of Open Syllable Lengthening in Middle Dutch versus the large number of exceptions in Middle English and Middle High German. Previous work has attributed the exceptions in Middle English and Middle High German to intervening sound changes, analogy or dialect mixture. I will argue that Open Syllable Lengthening (OSL) is motivated by speech perception and a reparsing of tonic short vowels as long by listeners (Kavitskaya, 2002). Unlike many regular sound changes that have an articulatory basis, OSL obeys structure preservation and is not reductive. In this regard, OSL is similar to so-called sporadic sound changes, such as metathesis and dissimilation, which have long been recognized as irregular and admitting exceptions. The regularity of OSL in Middle Dutch is accounted for by the loss of contrastive vowel length and the reanalysis of vowel length as a predictable feature of lexical stress.
January 20 – Young Language Science Scholar Event: Kara Morgan-Short (University of Illinois at Chicago) : External and internal factors and their interactions in adult second language acquisition
Learning a second language as an adult is arguably one of the most difficult learning tasks that one can undertake. Yet it is of great importance in our increasingly global society. In order to fully understand second language development in adults, we must understand not only the processes involved in acquisition but also how these processes are affected by external and internal factors. In this talk, I report the results from a series of four studies aimed at elucidating the role of both external and internal factors in the developing (neuro)cognitive underpinnings of adult second language acquisition of grammatical structures. In each of the studies, adults learned an artificial second language that was modeled after and consistent with natural language. Participants learned to speak and comprehend the second language in order to refer to the pieces and moves of a made-up chess-like computer game. In the first two studies, the effect of an external factor—the condition under which the artificial language was learned—is examined in light of learners’ performance as well as their neurocognitive processes (as revealed by event-related potentials). In a subsequent study, the role of internal factors is addressed. Specifically, this experiment explores whether individual differences in cognition can predict learners’ development in the artificial second language, as assessed both by performance measures and by measures of the neural representation of the acquired artificial language (as revealed by functional magnetic resonance imaging). The final study explores the potential interaction between external and internal factors by examining how learners’ individual differences in cognition affect their performance under different training conditions. Results from these studies suggest that adult second language learners can achieve and retain processes similar to those of native speakers, though only when they attain high proficiency. Furthermore, attainment of high levels of proficiency and native-like processing appear to depend on certain factors, including linguistic structure, the condition under which the language is learned, and individual differences in cognition. The implications of these results will be considered in the context of both theoretical and applied questions related to successful second language acquisition, and future research directions, including longitudinal research with natural languages, will be discussed.
January 27 – FOUR QUESTIONS FOR LANGUAGE SCIENTISTS:
David A. Rosenbaum
Department of Psychology
Penn State University
Though I am not a language scientist, I have rubbed shoulders with some of them over the years. Despite my ignorance of this field, and encouraged by the open-mindedness and welcoming attitude I see among you, I’d like to offer the following four questions and answers for your consideration:
1. Are you actually studying language per se? Not necessarily.
2. Why is there more than one language? To promote incomprehension.
3. Would your research benefit from the statistical technique of bootstrapping? Quite possibly.
4. What limits foreign language learning? Incomplete goals.
February 3 – Katharine Donnelly Adams (Penn State, Psychology): The importance of multi-dimensional reading interventions in addressing the summer reading gap
The overarching purpose of this study was to evaluate the effects of the research-based reading program, RAVE-O, for children (N=60; ages 6;10-9;0) in a local community within a summer school setting. The working hypothesis underlying this intervention was that the intensive 4-week summer reading intervention described in this talk would reduce the achievement gap by focusing instruction on the componential processes involved in fluent reading and reading comprehension. Three treatment conditions were used to evaluate the effectiveness of various repeated reading techniques (repeated reading alone, listening during repeated reading, and accelerated repeated reading) on subsequent tests of fluency and reading comprehension. The treatment conditions were evaluated according to several demographic variables known to moderate the achievement gap: socio-economic status, language background, and reading disability. Results confirm the efficacy of multi-componential reading interventions in reducing the summer achievement gap. Importantly, results indicate that all the varied applications of the treatment conditions differentiated typical readers from their RD peers with moderators having differential impacts depending on specific treatment condition. In other words, learner characteristics appear to influence responses to specific reading instruction practices.
February 10 – John Lipski (Penn State, Spanish and Linguistics): How many “grammars” per “language”?: mapping the psycholinguistic boundaries between Spanish and Palenquero
Bilingual speakers—even those who frequently engage in code-switching—are normally aware of what language they are speaking at any given time, and can correctly identify exemplars of each language. This is not always the case for bi-dialectal speakers, for example speakers of a national standard language and a regional dialect (e.g. standard Italian vs. regional “dialects,” or Spanish-Portuguese in northern Uruguay). There is, however, no widely accepted consensus on the degree of morphosyntactic similarity between genealogically related and partially cognate systems that marks the psycholinguistic threshold of identification as distinct languages. One possible testing ground involves heavily restructured languages such as creoles in contact with their historical lexifier languages. The present study presents data from the creole language Palenquero, spoken in San Basilio de Palenque (Colombia), which has been in contact with its principal lexifier, Spanish, since the formation of the community in the late 17th century. The Palenquero language, known locally as Lengua ri Palengue (LP), exhibits a number of key grammatical features found in no variety of Spanish. Mutual intelligibility between Spanish and LP is in general quite low; monolingual Spanish speakers may recognize individual words in LP, but cannot accurately parse LP syntactic structures, which include: absence of grammatical gender, marking of nominal plural with the preposed particle ma rather than the (multiply-agreeing) suffix /-s/, invariant verbs with preverbal tense-mood-aspect particles, negation by clause-final nu, absence of definite articles, a single set of obligatorily overt pronouns (all different from Spanish), and marking of possession by postposing the possessor. Most of the differences between Spanish and LP are categorical and binary; it is therefore not unreasonable to assume that Palenqueros psycholinguistically partition Spanish and LP according to such parameters, that they are able to identify given configurations as belonging to either Spanish or LP, and that utterances containing both quintessentially LP and uniquely Spanish structures will be acknowledged as mixed. Palenqueros do not exhibit a “post-creole continuum,” i.e. a systematic cline of intermediate variants spanning the linguistic distance between the creole language and its lexifier language. These facts notwithstanding, linguists who have studied contemporary LP have noted the frequent introduction of indisputably Spanish elements, ranging from individual items such as conjugated verbs or preverbal clitics to more complex morphosyntactic constructions. Opinions as to the nature of this apparent mixing—rarely substantiated by empirical data—include decreolization, language attrition, code-switching, interference from Spanish, performance errors, and the possibility that such configurations have been an integral part of LP since its origins. Equally difficult to extract from available studies are Palenqueros’ implicit and explicit notions of “canonical” LP as well as their awareness of putative deviations from any loci of inter-speaker acceptance. The present study is based on experiments conducted in San Basilio de Palenque, using stimuli extracted from natural speech samples. The stimuli included utterances entirely in Spanish, utterances entirely in LP (as described by native speakers, e.g. Pérez Tejedor 2004, Simarra Obeso et al. 2008, Simarra Reyes et al. 2008), and utterances containing what might be considered Spanish-LP morphosyntactic mixing.
In the first experiment respondents were asked to classify stimuli as Spanish, LP or mixed. All-Spanish and all-LP utterances were almost always identified accurately but there was considerable diversity in reactions to putatively mixed stimuli. Responses were subjected to a variationist analysis to determine the factors that influence language classification. Language-specific pronouns, presence or absence of feminine gender marking, and speaker status (young, older traditional, LP language teacher) were the most significant factors, while conjugated verbs, preverbal clitics, definite articles, and other “Spanish-like” elements were not significant predictors of “mixed” responses. In a second experiment the same participants close-shadowed all-Spanish, all-LP, and nominally mixed stimuli. The rationale of such tasks is that “when listeners hear a sentence that exceeds the capacity of their short-term memory, they will pass it through their own grammar before repeating it” (Gullberg et al. 2009: 34). Previous work (e.g. Marslen-Wilson 1973, Vinther 2002) has shown that in sentence repetition tasks, respondents’ errors frequently reflect their own grammars, i.e. what they would have said instead of what was actually said. No language switches or other grammatical alterations were made for all-Spanish and all-LP stimuli, but there were numerous spontaneous “corrections” of mixed utterances, almost always resulting in all-LP combinations. There was also a strong correlation between “mixed” judgments and spontaneous correction by the same respondents during shadowing. The overall results suggest that code-switching as commonly defined is not explicitly accepted by Palenqueros. They also demonstrate an asymmetry between perception and production: “grammars” and “languages” are not psycholinguistically coterminous for LP-Spanish bilinguals. Despite apparently clear-cut distinctions between Spanish and LP, grammatically-defined boundaries have been partially supplanted by a more amorphous duality based on a combination of key lexical items, phonotactic profiles, and acknowledgment of known speakers as “true” Palenqueros. The linguistic situation of San Basilio de Palenque demonstrates the challenges for scholars and teachers seeking to define and delimit bilingualism in the absence of community-wide literacy, accepted canonical standards, and a rapidly evolving metalinguistic awareness.
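For readers less familiar with variationist analyses, they are commonly implemented as (mixed-effects) logistic regressions over coded response tokens. The sketch below is purely illustrative of that kind of model; the column names, factor levels, and simulated data are invented and are not the study's actual materials or analysis.

```python
# Illustrative sketch of a variationist-style analysis: a logistic regression
# predicting whether a stimulus is judged "mixed" from coded linguistic and
# speaker factors. All data and effect sizes are simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400
tokens = pd.DataFrame({
    "pronoun_lang":   rng.choice(["LP", "Spanish"], n),
    "fem_gender":     rng.integers(0, 2, n),                 # feminine gender marking present?
    "speaker_status": rng.choice(["young", "older", "teacher"], n),
})
# Simulate "mixed" judgments that depend mostly on pronoun language and gender
# marking, mimicking the kind of effect reported above (made-up coefficients).
logit_p = (-0.5
           + 1.2 * (tokens.pronoun_lang == "Spanish")
           + 0.8 * tokens.fem_gender
           + 0.4 * (tokens.speaker_status == "young"))
tokens["judged_mixed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# In practice one would also add random effects for participant and item.
model = smf.logit("judged_mixed ~ pronoun_lang + fem_gender + speaker_status",
                  data=tokens).fit(disp=False)
print(model.summary())
```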
February 17 – Karsten Steinhauer (McGill University): On syntax, brain potentials and critical periods in L2: Facts and myths
In this talk I will address syntactic processing and ERPs in both L1 and L2 (in 2 parts). The first part will demonstrate that (and why) some of the most influential ERP papers on syntax in native speakers have major methodological problems, which affect in particular early ERP effects that have been taken to support ‘syntax-first’ models (such as ELANs in Friederici’s 2002 model; for discussion see Steinhauer & Drury, in press).
Interestingly, the absence of such early ERP components in L2 learners has been interpreted as strong evidence for critical periods in L2 syntax acquisition (e.g., Weber-Fox & Neville, 1996, and Hahne & Friederici, 2001). However, the few ERP studies that have argued *against* the critical period hypothesis typically relied on flawed data as well (e.g., Rossi et al., 2006).
Therefore, in part 2 of my talk I will discuss various problems in L2 research and present some ERP data in this domain that my students and I collected. I will argue that ERP evidence for critical periods in L2 morphosyntax is weak and show how L2 ERPs systematically change with increasing L2 proficiency (including ‘native-like’ ERP profiles at very high levels of L2 proficiency; e.g., Steinhauer et al., 2009). These changes are modulated by factors such as L1 background (co-activation and transfer) and the type of L1 exposure (classroom vs. immersion; e.g., Morgan-Short et al., in press).
February 24 – Alvaro Villegas (Penn State, Spanish): L2 online processing of the Spanish verb mood
The literature agrees: the acquisition of the Spanish subjunctive mood is difficult for second language (L2) learners of Spanish. Although the acquisition of the subjunctive has been studied from a variety of perspectives (Collentine, 1997; Farley, 2004; Gudmestad, 2006; Isabelli, 2007; Lubbers Quesada, 1998), no studies to date have examined whether the knowledge that advanced speakers of Spanish have acquired about the subjunctive is used during on-line processing. In this presentation, we examine whether highly proficient L2 speakers of Spanish can use the information they have learned about the Spanish subjunctive – even if that information is below the native speaker mark – to predict its occurrence in subordinate clauses. Data from L1 Spanish monolinguals living in Spain, and from Spanish-English and English-Spanish speakers living in the United States were collected. Analyses show that the monolinguals and the Spanish-English bilinguals are able to correctly predict verbal mood in subordinate clauses, replicating previous findings in the literature (e.g., Demestre & García-Albea, 2004). However, the English-Spanish group living in the U.S. did not show the same sensitivity.
March 2 – Aroline Seibert Hanson (Penn State, Spanish): Working memory effects on L2 processing of Spanish clitics
Research suggests that L2 learners show initial difficulty with learning how to interpret the pre-verbal clitic structures in Spanish (e.g. Liceras, 1985). Whether this difficulty in processing OVS structures is based on a learner’s L1 or proficiency level has just begun to be examined (e.g. Seibert Hanson & Sagarra, 2010). In addition, Havik et al. (2009), comparing English- and German-Dutch learners, found a similar difficulty with OV interpretation, highlighting working memory capacity (WMC) as a mitigating factor. Romanian, like Spanish, allows for preverbal direct object clitic sentence structures (e.g. O caută băiatul, “Her-OBJ looks for the boy-SUBJ”). If the L1 is the determining factor in early success with preverbal clitics, then L1 Romanian speakers may be more successful with such structures in Spanish than L1 English speakers, who do not possess this structure in their L1. Contrariwise, if there is some level of proficiency that must be reached universally to properly process preverbal clitic structures, there should be no differences between learners based on their L1. Additionally, variation within language may be due to WMC.
To test this, 65 L1 English learners of Spanish and 71 L1 Romanian learners of Spanish, matched for proficiency, and 35 Spanish monolinguals completed a WM task and a task in which they heard sentences with preverbal clitics in Spanish (e.g. Lo besa la niña, “Him-OBJ kisses the girl-SUBJ”) and chose, from four pictures, the one most accurately described by the sentence. One-way ANOVAs revealed significant differences among learner participants based on proficiency level, but not on L1. However, the OVS accuracy means for the Romanian-Spanish learners were greater than the means for the English-Spanish learners, indicating that other factors are involved. Results from further analyses will be presented, showing the role WMC plays in the variance among the participants. The present data suggest that working memory capacity plays an important part in determining the accuracy of interpretation of OVS structures at early stages of acquisition, and that language experience influences when L1 transfer takes place during language acquisition.
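As a purely illustrative aside, a one-way ANOVA of the kind reported above can be sketched as follows; the accuracy scores and group sizes are simulated, not the study's data.

```python
# Illustrative one-way ANOVA on simulated OVS-interpretation accuracy scores,
# grouped by proficiency level; mirrors the type of test described above only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
beginner     = rng.normal(0.55, 0.10, 40)   # proportion correct per participant
intermediate = rng.normal(0.68, 0.10, 40)
advanced     = rng.normal(0.80, 0.08, 40)

f_stat, p_val = stats.f_oneway(beginner, intermediate, advanced)
print(f"F(2, {len(beginner) * 3 - 3}) = {f_stat:.2f}, p = {p_val:.4f}")
```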
March 9 – No talk: Spring Break
March 16 – Mike Putnam (Penn State, German) and Joe Salmons (University of Wisconsin-Madison): Morphosyntactic issues in Heritage German
The acquisition and resulting grammar of heritage languages represent significant challenges to current theories of learning/acquisition and associated grammatical formalisms. In our talk, we present new results from ongoing work by our team.
To frame the project, we first revisit recent proposals by Montrul (2002, 2004, 2008, 2009) and Polinsky (1997, 2006, 2008), who draw a clear distinction between incomplete acquisition, i.e., the situation of early/sequential bilinguals, and L1 attrition, i.e., the situation of the loss of certain performance-oriented skills that do not affect the core elements of the competence grammar. We note some complexities and difficulties with the notion of incomplete acquisition, and are working to develop a straightforward way of addressing heritage language acquisition from the standpoint of feature activation, and with a focus on variation and change within the community. From there, we present findings on two morphosyntactic phenomena in heritage German grammar, drawing data from a wide range of settings from free conversation to grammaticality judgments.
First, we take a closer, detailed look at the passive voice construction inventory in Moundridge German — a moribund German-language speech enclave (Eastern Palatinate) with ca. 40-50 remaining speakers of various degrees of fluency in the dialect. Data from Moundridge German (Putnam & Salmons, under review) shows a reduction in possible passive voice constructions in the final stage of the MG-grammar. These findings make an interesting contribution to theoretical debates regarding the modeling of the loss of (syntactic) grammatical categories. Based on our findings, we show how these data suggest that a neutralization approach is preferable to null parse strategies in accounting for syntactic ineffability (Legendre et al. 2006, Legendre 2010).
Second, we explore the presence or absence of parasitic gap constructions in a set of heritage German dialects spoken near Sheboygan, Wisconsin, by descendants of mid-19th century German immigrants who still speak local varieties. Parasitic gap or ‘multiple gap’ constructions represent an interesting domain of investigation here on two grounds: (1) There is a sharp contrast between the two grammars in contact. English allows parasitic gaps in a multitude of environments, whereas it is generally argued that German lacks these constructions altogether or licenses them in only very specific contexts (e.g., Parker (1999), Kathol (2001), Chocano & Putnam (in press)). (2) Null elements (such as gaps) have been claimed to be rarely attested in heritage language grammars (cf. Polinsky & Kagan 2007). Our pilot data challenge the expected lack of null elements/gapping structures in heritage grammars: our speakers show a range of patterns, from close adherence to German-like avoidance of gaps to various degrees of gapping. Results overall correlate with proposals for the complexity of parasitic gap constructions.
In both case studies, we can identify a cline from full L1/native-like grammars to considerably reduced grammars. They also reveal complex patterns of interaction between German-like patterns and English-like patterns in the heritage grammar.
March 23 – Shana Poplack (University of Ottawa): Borrowing vs. code-switching in diachronic perspective
According to received wisdom, other-language words are introduced into recipient-language discourse by a bilingual speaker, gain in frequency, become linguistically integrated, diffuse throughout a community of bilingual and eventually monolingual speakers, and finally achieve dictionary attestation and native status. But just how do they get from there to here?
Previous historical research on lexical borrowing deals with the product, i.e. attested loanwords, and the few empirical synchronic studies that treat the process as it occurs spontaneously in the bilingual community are necessarily silent on the diachronic trajectory that borrowed forms follow. In this paper we address these issues by tracking the evolution of English-origin material in a unique data set on Quebec French collected over a real-time period of 61 years, and spanning nearly a century and a half in apparent time. From these corpora, we extracted close to 19,000 tokens of lone English-origin items and 2,000 multiword fragments of English.
Through detailed quantitative analyses of those items that persisted over the entire duration, and others that were short-lived, we address three widespread beliefs about the processes underlying code-switching and borrowing:
1) Other-language incorporations are introduced as nonce forms and gradually increase in frequency and diffusion
2) Other-language incorporations are introduced in donor-language phonological, morphological and syntactic form, but they (or some subset thereof) are gradually integrated into recipient-language structure, in tandem with increases in frequency and diffusion
3) At least in the earliest stages, and possibly throughout, code-switching cannot be distinguished from borrowing.
Results provide little evidence in favor of any of these hypotheses. Surprisingly few other-language items persist, even over the relatively brief period of time studied here, let alone increase in frequency or diffusion. Linguistic integration is abrupt, not gradual. Speakers all but categorically integrate lone other-language items at first mention, while never treating multi-word fragments of the other language in this way. This in and of itself is evidence that the two classes of other-language item—single-word vs. multi-word—are demonstrably different. But they also differ wildly by their word-class and grammatical constitution, their overall frequency of occurrence, and the relative propensity of a given speaker and a given community to use one rather than the other. We explore the implications of these results for understanding the processes by which other-language incorporations achieve the status of native items, and their consequences for theories of code-switching and borrowing.
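As an illustrative sketch only, tracking whether individual other-language items persist and change in frequency across corpus periods might look something like the following; the items, periods, and counts are invented and are not drawn from the Quebec French corpora.

```python
# Sketch of tracking other-language items across corpus periods: tabulate each
# item's frequency per period and flag items attested in every period.
# All values are invented for illustration.
import pandas as pd

tokens = pd.DataFrame({
    "item":   ["fun", "fun", "cute", "gang", "fun", "cute", "fun", "anyway"],
    "period": [1946,  1946,  1946,   1984,   1984,  2007,   2007,  2007],
})

counts = tokens.groupby(["item", "period"]).size().unstack(fill_value=0)
counts["persists_all_periods"] = (counts > 0).all(axis=1)
print(counts)
```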
March 30 – Hiram Smith (Penn State, Spanish): Habitual marking in Palenquero creole?
Habitual aspect in Palenquero, a Spanish-based creole spoken in San Basilio de Palenque, Colombia, has been described as being expressed by the preverbal marker asé (from Spanish hacer ‘do’), as in example (1) below (Schwegler 1992: 224; Schwegler & Morton 2003; cf. Davis 1997: 27-30).
(1) Ahora nu. Majaná asé salí cu sei u siete u ocho majaná.
    Today NEG. Kids HAB go out with six or seven or eight kid.
    ‘Not these days. Kids go out with six or seven or eight kids.’
This pilot study analyzes variable use of asé, using typological insights from grammaticalization theory (Bybee et al. 1994) and the variationist method (Labov 1966) to uncover distributional patterns. The data for this study were taken from sociolinguistic interviews and conversations with 10 speakers recorded* during July, 2010 and May, 2011 (thanks to funding from the Center for Language Science) in San Basilio de Palenque, including male and female participants, ranging from high-school age to older speakers.
The distributions suggest that tense-aspect marking in Palenquero is not captured by a one-to-one mapping of form and meaning. Habitual is expressed by both asé and ‘zero’, although it has been suggested that in creole languages zero marking indicates present tense on stative verbs and past tense for non-statives (Bickerton 1975, 1981, 1984). Asé, on the other hand, is more closely associated with frequentative meaning than with habitual. This is consonant with the suggestion in Bybee et al. (1994) that habitual meaning develops out of frequentative meaning. These preliminary data suggest that grammaticalization theory can neatly account for the distributional patterns seen in the synchronic creole data. Other questions are raised, though, as to whether the monogenetic view of grammaticalization should be assumed in working with creole languages, and if so, what its place in creole studies is. Based on these questions, I will also discuss directions for further research.
*Thanks to Colleen Balukas, Amelia Dietrich, and in particular, Dr. John Lipski for generously providing some of their recordings.
April 6 – Keith Johnson (University of California, Berkeley): Two studies on compensation for coarticulation
One of the fundamental processes of speech perception is a contextual normalization process in which segments are “parsed” so that the effects of coarticulation are reduced or eliminated. For example, when consonants are said in sequential order (e.g. the [ld] in “tall dot” or the [rg] in “tar got”) the tongue positions for the consonants interact with each other. This “coarticulation” is undone in speech perception by a process that is called compensation for coarticulation. The basis of this process is a source of much controversy in the speech perception literature.
I studied the compensation for coarticulation process in two ways. The first set of experiments examines the role of top-down expectations, finding that the compensation effect is produced when people think they hear the context, whether the context is present or not. This dissociation of the context effect from any acoustic stimulus parameter indicates that at least a portion of the compensation effect is driven by expectations. The second set of experiments examines the role of articulatory detail in the compensation effect, finding that the compensation effect is driven at least partly by detection of particular articulation patterns. This set of experiments looked at perception in context of the “retroflex” and “bunched” variants of English “r” and found that this low-level articulatory parameter is crucial for the compensation effect. The overall picture that emerges is one of listeners who make use of fine-grained articulatory expectations during speech perception.
April 13 – No talk.
April 20 – Tim Poepsel (Penn State, Psychology): Is Steve gayer than Dave? The role of /s/ in cueing the stereotype of gay-sounding male speech
Can certain sound patterns and lexical properties evoke the perception of gay-sounding male speech (GSMS)? Further, are the sound patterns and lexical properties associated with GSMS invariant across languages? Previous naturalistic research with English monolinguals has demonstrated that listeners are able to accurately determine sexual orientation from speech alone, although a thorough understanding of the acoustic and lexical correlates of GSMS remains elusive. Most studies of the stereotype to date have relied on explicit comparisons of recorded speech from self-identified samples of gay and straight speakers; this research has identified a number of potential acoustic correlates, among them longer sibilant duration, higher peak sibilant frequency, and greater pitch range. Here we use an experimental approach to investigate how the manipulation of a single acoustic cue, sibilant duration, and several lexical properties (e.g., frequency, word length, presence of /s/) influences the perception of a male speaker’s sexual orientation. In a series of experiments, we present evidence from English monolinguals, as well as Chinese-English bilinguals, indicative of deeply encoded acoustic and lexical biases, whose manifestation correlates positively with age of acquisition and proficiency in English.
April 27 – Lauren Perrotti (Penn State, Spanish): Grammatical gender processing in L2 speakers of Spanish: Does cognate status help?
One important finding in current literature is that native speakers of Spanish use gender marking on Spanish articles, such as la and el, to facilitate processing of upcoming Spanish nouns (e.g. Lew-Williams & Fernald, 2010). Conversely, native English speakers (for whom grammatical gender is absent in their first language) who are highly proficient speakers of Spanish and who have demonstrated mastery of the Spanish grammatical gender system, do not behave like native Spanish speakers in this respect. One question, however, is whether this result is modulated by the cognate status of the words. Cognates are words that are similar in form and meaning in the two languages (such as guitar in English and guitarra in Spanish). Here we investigate whether English learners of Spanish can more easily access gender information when words are cognates. We used an experimental eye-tracking technique known as the Visual World Paradigm. In this technique, participants hear instructions to click a picture displayed on a computer screen while their eye movements are being recorded. English-Spanish participants at Penn State were recruited as well as a group of monolingual speakers of Spanish (i.e., the control group) at the University of Granada. Preliminary results show that for the monolingual speakers, cognate status of words does not modulate grammatical gender processing. English-Spanish participants are expected to use grammatical gender only when words are cognates in English and Spanish. This research has important implications because it touches on critical aspects of language learning.
August 26 – Darren Tanner (Penn State, Linguistics): ERP and reaction time evidence for comprehension/production asymmetries in agreement processing
Real-time language processing often requires us to establish grammatical dependencies between units that are grammatically and temporally discontinuous. While we generally achieve this with remarkable speed and accuracy, errors do occur, and these errors can provide rich evidence regarding the structure of the human language processing system. For example, behavioral studies have shown that individuals have difficulty processing subject-verb agreement in both comprehension and production when the subject noun phrase (NP) contains conflicting cues about grammatical number. Specifically, plural nouns embedded in modifying phrases of singular NPs (e.g., ‘The key to the wooden cabinets…’) lead to an increase in agreement errors in production and a reduction of ungrammaticality effects in comprehension (‘agreement attraction’). Moreover, most studies have found similar profiles of attraction effects in both comprehension and production, leading researchers to propose that the same cognitive mechanisms are responsible for the establishment of agreement dependencies in both tasks (e.g., Nicol, et al, 1997; Severens, et al, 2008; Wagers, et al, 2009). However, here I present evidence from four comprehension experiments investigating agreement attraction in native English speakers which show important asymmetries between comprehension and production. These results indicate that the mechanisms responsible for interference when speaking and reading are at least partly distinct.
Experiment 1 used event-related brain potentials (ERPs) to study agreement attraction. Participants read sentences containing subject NPs with embedded prepositional phrase (PP) modifiers that were either grammatical or ungrammatical (i.e., they showed either correct or incorrect verbal agreement with the singular head noun) and which contained either a singular or plural embedded noun (e.g., ‘The key to the wooden cabinet(s) is/*are…’). Results showed that ungrammatical verbs elicited P600 effects, typical of processing grammatical anomalies, but that the P600 was significantly smaller following a plural attractor noun (i.e., an attraction effect). Importantly, there was no difference in brain responses between the two grammatical conditions (‘The key to the cabinet(s) is…’). This suggests that, unlike in production, attraction in comprehension differentially affects correct and incorrect outcomes (cf. Staub, 2009).
Experiment 2 expanded on Experiment 1 to investigate structural effects in attraction interference, as behavioral research on language production has shown that the syntactic complexity of the modifying phrase impacts attraction rates. These studies have shown that plural attractor nouns embedded in relative clause (RC) modifiers (e.g., ‘The key that opened the cabinets…’) lead to significantly fewer errors than those embedded in PPs (i.e., attractor number and modifier structure interact: Bock & Cutting, 1992; Solomon & Pearlmutter, 2004). In Experiment 2, participants read sentences that were created in a 2 (grammaticality: grammatical, ungrammatical) by 2 (attractor number: singular, plural) by 2 (modifier structure: PP, RC) design. Results replicated the ungrammaticality and attraction effects from Experiment 1. There was an additional effect of modifier structure, such that P600s were reliably larger following RCs than PPs. However, this effect did not interact with attractor number, suggesting an asymmetry with production. Experiment 3 followed up on this result using self-paced reading methodology and a similar experimental design. Results replicated previous self-paced reading studies’ findings of ungrammaticality and attraction effects in ungrammatical sentences, but reading times at the verb were significantly faster following RCs than PPs. Again, this effect did not interact with attractor number. Experiment 4 investigated the relationship of these structural effects with agreement processing by eliminating the need to process verb agreement. Sentences contained an invariant modal verb and were created in a 2 (attractor number) by 2 (modifier structure) design (e.g., ‘The key to the cabinet(s) might be…’). Results were in line with Experiment 3, such that reading times at the modal were faster following RCs than PPs, suggesting that structural effects are independent of the need to process agreement.
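As a purely illustrative aside, the 2 x 2 x 2 design just described can be sketched in code; the condition labels and simulated amplitudes below are hypothetical and do not reproduce the study's data or analysis pipeline.

```python
# Sketch of the 2 x 2 x 2 design (grammaticality x attractor number x modifier
# structure) and of summarizing mean voltage in a P600-like window per cell.
import itertools
import numpy as np
import pandas as pd

factors = {
    "grammaticality": ["grammatical", "ungrammatical"],
    "attractor":      ["singular", "plural"],
    "modifier":       ["PP", "RC"],
}
cells = list(itertools.product(*factors.values()))
print(f"{len(cells)} condition cells:", cells)

# Simulate single-trial mean amplitudes (microvolts) in a 500-800 ms window.
rng = np.random.default_rng(2)
trials = pd.DataFrame(
    [(g, a, m, rng.normal(3.0 if g == "ungrammatical" else 0.5, 1.0))
     for g, a, m in cells for _ in range(30)],
    columns=["grammaticality", "attractor", "modifier", "p600_amplitude"],
)

# Cell means: the descriptive starting point for the factorial analysis.
print(trials.groupby(["grammaticality", "attractor", "modifier"])
            ["p600_amplitude"].mean().round(2))
```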
These results show important asymmetries between agreement dependency formation in comprehension and production. Unlike in production, attraction in comprehension differentially impacts grammatical and ungrammatical sentences, and attraction interference in comprehension is not ‘clause-bound’ as it is in production. Instead, there is a facilitation effect for verb integration following more complex clausal modifiers. These results conflict with complexity-based integration metrics for comprehension (Grodner & Gibson, 2005) and are more in-line with anti-locality effects in verb integration (Vasishth & Lewis, 2006). I argue that the present results are compatible with a content-addressable working memory architecture of sentence comprehension (Lewis et al, 2006).
September 2 – Paula Fikkert (Radboud University Nijmegen): Learning sounds and words: Evidence from children’s perception and production
In this talk I will present an overview of several studies we have carried out in the Baby Research Center in Nijmegen to study the acquisition of various phonological contrasts by Dutch children, using evidence from infant speech perception, word recognition and word production in the second and third year of life.
One important asymmetry that has caused major misunderstandings in the field of phonological acquisition is the gap between children’s knowledge as displayed in perception experiments and the knowledge children bring to the task of language production. For example, while children by the end of their first year of life show knowledge of the sound system of their native language (they seem to know the speech sounds of their language, its phonotactics, stress pattern, etc.), it takes them quite some time before they show that same knowledge in their own productions. Infant speech perception researchers have therefore claimed that perception research provides a better way of tapping children’s grammatical knowledge.
The situation is even more complex: infants show improved sensitivity to native-language contrasts in their first year of life (e.g., Kuhl et al. 2006). However, they show decreased sensitivity to the same contrasts in word-learning experiments at the beginning of the second year of life (e.g., Stager & Werker 1997), although they are still able to discriminate these contrasts in other tasks. This suggests that, in addition to the discrimination of sound contrasts at the pre-lexical stage, another level of perception develops at the lexical stage of development, one which ignores many phonetic details. We assume that discrimination is based on phonetic properties while word comprehension involves matching those properties to stored phonological representations of words in the mental lexicon. The reduced sensitivity to certain contrasts in word learning might be caused by the nature of early lexical phonological representations.
On the assumption that children use the same lexical phonological representations for word comprehension and word production, we expect to find similar problems in both areas: contrasts that are difficult in comprehension (and hence affect their representation) should also cause problems in production. We show that this is indeed the case. Under such an account there are no major asymmetries between perception and production: both are tightly connected.
September 9 – Natasha Tokowicz (University of Pittsburgh): The Consequences of Translation Ambiguity
In this talk, I describe a body of work exploring translation ambiguity, which occurs when a word in one language has more than one translation into another. For example, the Spanish word “muñeca” translates to both “doll” and “wrist” in English. Our research demonstrates that such ambiguity leads to: (1) slower translation, (2) less accurate translation, and (3) less robust word learning. Furthermore, knowledge that a pair of words share a translation in a later-learned second language impacts the level of perceived relatedness between those words in your first language. For example, native English speakers who learn Spanish as a second language may consider the words “doll” and “wrist” to be more related than native English speakers who do not know Spanish. These findings will be discussed in terms of the ways that the relationship among word meanings across languages influences language learning, processing, and representation.
September 16 – Josef Fruehwald (University of Pennsylvania): Using Speech Community Data as Phonological Evidence
Classically, patterns of systematic alternations or static distributions in the description of a language have constituted the lion’s share of phonological evidence. More recently, laboratory studies have been added to the collection of evidentiary tools for phonological investigation. Both of these approaches have provided the foundations of modern phonological theory, thus their utility is unquestionable. However, they both utilize controlled conditions on data collection which decontextualize language from its natural setting. The goal of this paper is to reaffirm the utility of naturalistic observations of a speech community to phonological investigation, in line with the variationist field of research beginning with Labov (1963).
First, I will briefly outline my assumptions about how observable phonetic variation and change can be related to phonological representations and processes. I will adopt the modular feed-forward approach to phonology (Pierrehumbert, 2006; Bermudez-Otero, 2010), language-specific phonetic implementation (Liberman and Pierrehumbert, 1984; Kingston and Diehl, 1994; Boersma & Hamann, 2008), and potentially language-specific phonetic alignment constraints (Cohn, 1993; Zsiga, 2000). On this model, phonetic change takes place in the language-specific implementation of relatively stable phonological objects.
Then, I will walk through an interesting case study, drawn from the Philadelphia Neighborhood Corpus, currently in development (Labov & Rosenfelder, 2011). As of this writing, the corpus consists of 287 sociolinguistic interviews conducted between 1973 and 2010. Speakers in the corpus have dates of birth ranging from 1889 to 1991. The vowel systems of these speakers have been automatically measured (Evanini, 2009, Labov & Rosenfelder 2010), producing 712,822 observations. The Philadelphia Neighborhood Corpus is unique in its size, time depth, focus on a single speech community, emphasis on the sociolinguistic interview, and ethnographic detail.
Specifically, I will focus on a sound change in /ey/. The diachronic raising of /ey/ in non-word-final position was identified early on as a new and vigorous change in Philadelphia, bringing “snake” into close phonetic proximity with “sneak”. Utilizing the large volume of observations of /ey/ (45,322), we can investigate this diachronic pattern within many different syllabic, segmental, and morphological contexts. As a result of this detailed diachronic investigation, we can identify and specify an active, synchronic phonological process whereby /ey/ is phonologically peripheralized when followed by a consonant in the same word. This phonological process presumably entered into the dialect during a dialectal realignment of Philadelphia from the South to the North.
Importantly, this phonological process would not have been identifiable without diachronic data. When speakers are simply grouped into 10-year date-of-birth bins, no particular pattern of note is evident.
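For illustration only, the apparent-time binning described above might be sketched as follows; the column names, formant values, and effect size are simulated and are not taken from the Philadelphia Neighborhood Corpus.

```python
# Sketch of an apparent-time summary: bin speakers by decade of birth and track
# mean F1 of /ey/ separately for word-final vs. pre-consonantal tokens.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 2000
obs = pd.DataFrame({
    "dob":     rng.integers(1889, 1992, n),                  # speaker year of birth
    "context": rng.choice(["word_final", "pre_consonant"], n),
})
# Simulate raising (lower F1) of pre-consonantal /ey/ for later-born speakers.
obs["F1"] = (650
             - 0.8 * (obs["dob"] - 1889) * (obs["context"] == "pre_consonant")
             + rng.normal(0, 30, n))

obs["dob_bin"] = (obs["dob"] // 10) * 10                      # 10-year bins
summary = obs.groupby(["dob_bin", "context"])["F1"].mean().unstack()
print(summary.round(1))
```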
I will conclude with some speculation as to how more social dimensions than geography and diachrony may be leveraged for phonological investigation.
September 23 – PIRE Undergrad Presentations
October 7 – Phil Baldi (Penn State, Linguistics and CAMS): Rethinking the shift from fusional to isolating typology: structural, pragmatic and sociolinguistic dimensions*
This paper considers a series of far-reaching syntactic changes in the history of Latin, from Proto-Indo-European up to the Romance period. Among others, the following issues are discussed:
1. The shift from SOV to SVO word order
2. The erosion of distinctive nominal inflection
3. The rise of prepositional usage
4. The change in subordination type from non-finite to finite
Despite the broadly structural nature of these changes, we will demonstrate that a purely structural account is inadequate to capture the facts, and that a ranked, multi-leveled approach which depends primarily on pragmatic, but also on functional, structural and finally typological processes provides a satisfactory set of generalizations. We also discuss the issue of morphological complexity from the perspective of L2 learners, where it has been shown that adult language learners have difficulty with inflection, the very feature which divides fusional from isolating languages.
*Or, “Why the Soviet Union fell”
October 14 – Jorge Valdes Kroff (Penn State, Spanish): The Benefits of Networking: Expanding Statistical Analyses and Piloting an ERP Study
I will share my experiences as a Miccio Award recipient and a PIRE Grad Fellow during Spring ’11. As a Miccio Award recipient, I visited the lab of Dr. John Trueswell at the University of Pennsylvania, where I presented my work on the processing of grammatical gender in code-switched speech using an eye-tracking methodology known as the visual world paradigm (Cooper, 1974; Tanenhaus et al., 1995). Beyond sharing the general benefits of visiting another research lab, I will also present new graphs of the eye-tracking data based on new approaches that I learned during my visit. As a PIRE Grad Fellow, I visited the lab of Dr. Teresa Bajo at the University of Granada (Spain). My main focus was to work on an ERP study examining the interaction of verb bias and plausibility in Spanish-English bilinguals. I will explain from the ground up how I compiled the experiment with the help of Dr. Bajo & Dr. Pedro Macizo and present preliminary results from 12 participants.
Ashley Roccamo (Penn State, German): Can Motivation Influence the Perception and Production of German Front-Rounded Vowels?
Traditional models of speech perception suggest that learners of a second language (L2) cannot correctly perceive the differences between native and non-native sounds, which leads to errors in pronunciation (Best, 1994; Flege, 1995). Previous research has also found that motivational factors can affect second language acquisition (Dörnyei, 2005). This paper investigates the abilities of native English speakers in their first three semesters of L2 German to correctly perceive and produce the front-rounded vowels [ø], [œ], [y] and [ʏ]. Perception and production are also discussed in light of individual motivational factors, such as motivation to learn the language, the importance of a native-like accent, and desire to improve accent. Data from an AX perception task, a listen-and-repeat production task, and a motivational questionnaire combine to give insights into how these three factors interact. Results indicate that native speakers of American English in the first three semesters of German struggle to correctly produce front-rounded vowels, despite fairly accurate perception of contrasts. They have not yet formed accurate L2 categories in which to place non-native segments, and therefore production of the novel sounds is sporadic and imprecise. Participants’ motivation levels have a significant influence on their perception skills in German. These findings provide a better understanding for German teachers who wish to improve their students’ pronunciation of novel segments.
October 21 – Emily Coderre (University of Nottingham): Exploring the Cognitive Effects of Bilingualism: Neuroimaging Investigations of Lexical Processing, Executive Control and the Bilingual Advantage
In an increasingly multilingual world, the study of the cognitive effects of bilingualism has gained much attention in the past few decades. This talk will discuss some of the upsides and downsides of the cognitive consequences of bilingualism by focusing mostly on performance on the Stroop task. I will present behavioural and EEG data showing that bilinguals experience delayed lexical access in their L2 (the downsides), but also enhanced cognitive control (the upsides), and discuss how these processes are modulated by proficiency and language script.
October 28 – Ben Zinszer (Penn State, Psychology) and Jason Gullifer (Penn State, Psychology)
T.B.A.
November 4 – Roxana Botezatu (Penn State, CSD): Do bilinguals transfer spelling-sound consistency expectations from a shallow L1 when reading in a deep L2? An ERP investigation
Past research suggests that word reading skills transfer across writing systems and that bilinguals experience competition between their two print-sound systems. Languages that share a script, such as English and Dutch, differ in their spelling-sound consistency, or the number of pronunciations available for given letter clusters (e.g., consistent: the “-air” in hair, pair; inconsistent: the “-ost” in most, cost). These differences have been shown to affect word-reading strategies. The present research investigated transfer of word-reading strategies in bilinguals who read a second language (i.e., English) that differs in spelling-sound consistency from their first language (i.e., Dutch). Proficient Dutch-English bilinguals made rhyme judgments of visually presented word pairs, while the spelling-sound consistency of the stimuli was varied systematically. Participants were assigned to either a Dutch-English (Experiment 1) or English (Experiment 2) rhyme judgment task. Experiment 1 tested whether word-reading strategies transferred more robustly when both languages were used within a single task than when the task was performed in the L2 (Experiment 2). My talk will evaluate both behavioral and electrophysiological data to better understand the effects of spelling-sound consistency, orthographic overlap and language context on bilinguals’ granularity (reading unit size) preference.
November 11 – Eleonora Rossi (Penn State, Psychology): Merging ERPs and fMRI data: a combined approach to bilingual language processing
Although both Event Related Potentials (ERPs) and functional Magnetic Resonance Imaging (fMRI) allow us to investigate language processing as it unfolds, they differ in what they reveal: ERPs offer very high temporal resolution, while fMRI informs us about the neural location of language processes. In this talk I will present preliminary data from a series of (old and new) experiments utilizing ERPs and fMRI as convergent measures to investigate language processing in bilinguals. First, I will present ERP and fMRI results on the morpho-syntactic processing of clitic pronouns in native Spanish speakers and English-Spanish bilinguals. Second, I will present data from a set of experiments that ask what the consequences are for the L1 when a bilingual speaker is engaged in processing the L2.
Thursday, November 17: 4:00 PM, Berg Auditorium – Special CLS Lecture: Ellen Bialystok (York University): Reshaping the Mind: The Benefits of Bilingualism
A growing body of research has shown that bilingualism enhances aspects of executive control and leads to better performance on a range of cognitive tasks for children and young adults. More recently, this advantage has been shown to extend into older age, demonstrating slower cognitive decline for bilinguals with healthy aging. The present talk will focus on new research that examines changes in the brain that underlie some of these differences. I will report evidence from younger and older adults showing that the regions used by bilinguals to perform certain cognitive tasks are somewhat different than those used by monolinguals, and that the crucial areas in the front part of the brain necessary for performing these tasks are more intact in bilinguals. The presentation will also describe research investigating the memory and cognitive performance of individuals diagnosed with dementia, including Alzheimer’s disease, where these protective effects continue to exert an influence on bilinguals.
November 18 – No official CLS meeting but we will use this time in conjunction with Ellen Bialystok’s visit
December 2 – Mark Seidenberg (University of Wisconsin- Madison): LANGUAGE LEARNING, PLASTICITY, AND THE “ACHIEVEMENT GAP”
There is enormous underutilized potential to bring modern research on the behavioral and brain bases of language and cognition to bear on critical issues in education. I will present research concerning the seemingly intractable “achievement gap” in reading between African American children and whites. This gap is not wholly explained by SES or environmental factors, and it increases during the first few years of schooling. One neglected factor is differences in language background. Building on research on first and second language learning and neuroplasticity, we have begun to examine how differences between home and school dialects affect children’s classroom experiences. Other factors aside, children who speak a “nonstandard” dialect such as African American English face a more complex learning environment than children who speak the “standard” dialect: they are learning to accommodate the standard dialect while acquiring reading, math and other skills. Because all children are assessed against the same achievement standards, a “gap” results. This research also suggests ways in which the impacts of dialect differences could potentially be ameliorated.
December 9 – Roger Boada (Universitat Rovira i Virgili): Translation ambiguity between Spanish and Catalan: preliminary results
Translation ambiguity has been a focus of interest in recent psycholinguistic research. A series of experiments will be presented in which the effect of translation ambiguity was examined using Spanish-Catalan bilinguals. We used a translation recognition task and a translation priming paradigm with lexical decision to look at the effect of several variables: cognate status, dominance of the translation (whether the presented translation is the most frequent one or any other), semantic similarity between the multiple translations (i.e. source of the ambiguity), concreteness, language dominance (Spanish or Catalan) and translation direction (L1-L2 vs. L2-L1). Materials were obtained from an ambiguity database currently in progress. Results showed that the translation process was affected by ambiguity, such that it was harder to translate words with multiple translations than single-translation ones. In addition, we observed that this effect was found in both cognates and non-cognates, although it was larger in the latter. The translation process was hindered when both dominant and subordinate translations were presented, but some differences emerged between them when a translation recognition task was used. In contrast, these differences were not found in a priming experiment, indicating that the effect of translation dominance may be due to strategic factors. An exploratory regression analysis on translation recognition data suggests that there was no difference in response times between the related (i.e. polysemic) and the unrelated (i.e. homonym) translation pairs. The overall pattern of results indicated that balanced bilinguals such as Spanish-Catalan ones do not show differences in terms of language dominance or translation direction.
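As a purely illustrative aside, an exploratory regression of the kind mentioned above might be sketched as follows; the predictors and reaction times are simulated and do not reproduce the study's data or model.

```python
# Sketch of an exploratory regression on simulated translation-recognition
# reaction times, with the kinds of predictors mentioned in the abstract:
# number of translations, cognate status, and translation direction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 300
items = pd.DataFrame({
    "n_translations": rng.integers(1, 4, n),                   # 1 = unambiguous
    "cognate":        rng.choice(["cognate", "noncognate"], n),
    "direction":      rng.choice(["L1_L2", "L2_L1"], n),
})
# Made-up generative model: ambiguity slows RTs, more so for non-cognates.
items["rt"] = (800
               + 60 * (items["n_translations"] - 1)
               + 40 * (items["n_translations"] - 1) * (items["cognate"] == "noncognate")
               + rng.normal(0, 80, n))

model = smf.ols("rt ~ n_translations * cognate + direction", data=items).fit()
print(model.summary().tables[1])
```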
Jan 14 – Second Annual Young Language Science Scholar: Anna Papafragou (University of Delaware): Spatial Language and Spatial Representation
The linguistic expression of space draws from and is constrained by basic, probably universal, elements of perceptual/cognitive structure. Nevertheless, there are considerable cross-linguistic differences in how these fundamental space concepts are segmented and packaged into sentences. This cross-linguistic variation has led to the question whether the language one speaks could affect the way one thinks about space – hence whether speakers of different languages differ in the way they see the world. This talk addresses this question through a series of cross-linguistic experiments comparing the linguistic and non-linguistic representation of motion and space in both adults and children. Taken together, the experiments reveal remarkable similarities in the way space is perceived, remembered and categorized despite differences in how spatial scenes are encoded cross-linguistically.
Jan 21 – Mike Putnam (Penn State, German): The Syntax and Semantics of Excessivity: Evidence from Germanic
Jan 28 – Rhonda McClain (Penn State, Psychology): Using translation as a means to test models of bilingual production
When bilinguals prepare to speak a single word, information about words in both languages is active and appears to compete. The process of initiating speech in each language can be triggered by a range of different events. For example, a bilingual might be asked to name a picture in one language or the other, to speak a word in one language or the other that best fits a definition, or to translate a word in one language into the closest equivalent in the other language. Very little past research has considered how the events that initiate speech planning might differentially reveal this fundamentally competitive process. In the present study, Spanish-English and Chinese-English bilinguals performed a word translation task under conditions in which a word was presented alone and translated into the other language or under conditions in which a picture distractor provided context during translation. Translation performance for the Chinese-English bilinguals was assessed behaviorally and using Event Related Potentials (ERPs). Preliminary results showed that when pictures were present, Chinese-English bilinguals were slower to translate into Chinese, their L1, than into English, their L2. At the same time, ERP data showed that translation from L1 to L2, in the forward direction of translation, was more sensitive to the presence of a semantically related picture than translation from L2 to L1, in the backward direction of translation. I discuss the implications of these results for models of bilingual speech planning and for claims about the ability of proficient bilinguals to exploit semantics directly in the L2.
Feb 4 – Eleonora Rossi (Penn State, Psychology): Second language learners are not native speakers but they process some aspects of the syntax as if they were: Evidence from behavioral and ERP data
Some theories of second language (L2) processing (Clahsen & Felser, 2006) claim that late L2 learners never acquire full access to syntactic representations available to native (L1) speakers. Alternative theories account for differences between L2 and L1 processing in terms of reduced availability of cognitive resources in the L2 (McDonald, 2006).
To test these hypotheses we utilized a morpho-syntactic structure that differs between English and Spanish. Spanish clitic pronouns -SpCls- (but not English pronouns) are marked for gender and number and appear before a finite verb. In two experiments, we tested L1 Spanish speakers (Exp1: n=20; Exp2: n=20), and L1 English proficient L2 learners of Spanish (Exp1: n=20; Exp2: n=8).
Experiment 1: The on-line processing of SpCls (in the correct and incorrect position) was tested with a self-paced reading task. Results showed that L1 speakers produced longer RTs at the incorrect clitic site for singular masculine clitics. Other clitics elicited longer RTs on the following word. L2 learners showed an effect at the clitic site but no spillover effect, suggesting that L2 speakers are able to access the morpho-syntactic representation of clitics and exploit it to resolve ungrammaticality. However, cognitive constraints may limit their ability to use this information to make predictions about upcoming information.
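For illustration only, the region-by-region comparison described above might be sketched as follows; the reading times are simulated and the paired t-tests stand in for whatever analysis was actually run.

```python
# Sketch of a self-paced reading comparison: paired tests on simulated
# per-participant reading times at the clitic region and at the following
# (spillover) word, for correct vs. incorrect clitic placement.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_subj = 20
clitic_correct   = rng.normal(380, 40, n_subj)   # mean RTs (ms), invented values
clitic_incorrect = rng.normal(420, 40, n_subj)   # slowdown at the clitic itself
spill_correct    = rng.normal(370, 40, n_subj)
spill_incorrect  = rng.normal(400, 40, n_subj)   # spillover slowdown

for region, correct, incorrect in [("clitic", clitic_correct, clitic_incorrect),
                                   ("spillover", spill_correct, spill_incorrect)]:
    t, p = stats.ttest_rel(incorrect, correct)
    print(f"{region:9s}  t({n_subj - 1}) = {t:5.2f}, p = {p:.4f}")
```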
Experiment 2: We used Event-Related Potentials to examine the real-time course of SpCl processing. Participants read sentences in which clitics varied in correctness for gender, number, or both. Initial results revealed that both L1 and L2 showed a larger positivity in the 500-700 ms window for the incorrect clitics suggesting sensitivity to the online processing of morphosyntactic information.
Taken together, these results suggest that there are no hard constraints preventing late bilinguals from accessing L2 grammatical information; rather, the cognitive consequences of processing a non-native language distinguish L2 users from native speakers.
Feb 11 – Amelia Dietrich (Penn State, Spanish): “Suto hablamos asina”: A description of the patterns of code-switching in San Basilio de Palenque, Colombia
The village of San Basilio de Palenque, in the department of Bolivar, Colombia, is a community of approximately 5,000 people founded in the early 17th century by runaway slaves, cimarrones, who escaped from the Spanish in the port city of Cartagena. In this initial phase, the cimarrones developed a creole language, called lengua palenquera (LP), based on the Spanish lexicon but with a phonology, a pronoun system, and a verbal system heavily influenced by the various languages these former slaves had carried with them from Africa.
The community has been bilingual from the beginning, always maintaining Spanish alongside LP for purposes of working in and interacting with other Colombian communities, and in more recent generations LP was almost lost due to social pressures from others who viewed LP as little more than poorly spoken Spanish. In the 1980s, an effort began to revive LP by teaching it in San Basilio’s public schools. The result is a present-day population of older lengua-Spanish and younger Spanish-lengua bilinguals, all of whom engage in frequent and seamless code-switching in their everyday interactions with one another and with language researchers arriving from abroad.
The present paper investigates the pattern of code-switching observed in San Basilio de Palenque, Colombia, using the diagnostic features identified by Muysken (2000) and data collected by the researcher and colleagues during a trip to Colombia in July 2011. Using these data (and, in some cases, their absence), it is observed that alternation between LP and Spanish is not as rampant as past work has suggested. Following the argument in Schwegler (1996), those alternations which do occur are seen as code-switches realized by highly proficient speakers of both languages. Where code-switching occurs, it follows Muysken’s (2000) pattern of congruent lexicalization, in which lexical items from both languages are inserted into a shared grammatical structure. It is also proposed that there are different processes at work across different generations of Palenqueros.
Additionally, the present data shed new light on the renewed strength of lengua within the community and the success of the ethno-education program that has been developing over the past twenty years.
Feb 18 – Colleen Balukas (Penn State, Spanish): Late Negation in Palenquero Creole: Predictable from Prosody?
The present paper examines a number of issues concerning clause-final negation structure in Palenquero Creole, a Spanish-based creole language spoken in San Basilio de Palenque, Colombia. Although Palenquero derives the great majority of its lexical items from Spanish, its morpho-syntax differs in important ways from its lexifier language, perhaps most notably in terms of negation structure. That is, while in Spanish the negation marker no appears pre-verbally, as in “no voy a hablar más”, the preferred structure in Palenquero Creole places the negator nu post-verbally and more surprisingly, clause-finally, as in “í bae kondbesá má nu” (trans. for both: ‘I am not going to talk more’). Typologically, clause-final negation is quite marked, in that it occurs infrequently across different languages. From a functionalist perspective, this suggests that the structure is disfavored by pressures of language usage and processing (e.g. Bates & MacWhinney, 1989; Givón, 1991). In other words, late negation may be difficult to process. I aim to explore the extent to which the potential difficulty of late negation might be mitigated by prosodic cues, as compared to the realization of prosody in other negation structures and in non-negated utterances. As I continue this research in the future, I am also interested in determining to what extent the semantic (un)expectedness of the upcoming negation might influence prosody. Analysis for the current study was conducted using data from sociolinguistic interviews collected in Colombia in July 2010. Although the preliminary results are not conclusive, similar work on negation in Brazilian Portuguese (Schwenter, 2005; Armstrong & Schwenter, 2008) suggests that such an interaction between prosody and syntactic structure is indeed a plausible one, and that further examination of this and other data may reveal differential prosody in late negated clauses.
*Feb 21 & 22 – Teresa Bajo (Granada) and Sonja Kotz (Leipzig)
*Feb 23 – PIRE partner symposium and CLS graduate and undergraduate students poster session
Feb 25 – No regular CLS meeting
Mar 4 – Emily Coderre, Walter J.B. van Heuven & Kathy Conklin (University of Nottingham) – video conference: Automaticity and Speed of Lexical Processing in the First and Second Language
Being able to process language quickly is a vital skill we rely on for human interactions. Electroencephalography (EEG) research indicates that lexico-semantic information is accessed within 200 ms (e.g. Dell’Acqua et al., 2007). Lower proficiency in a second language (L2) relative to a first (L1) leads to delays in lexical processing speed in L2, as proposed by the temporal delay assumption of the BIA+ model (Dijkstra & van Heuven, 2002; van Heuven & Dijkstra, 2010). The reduced frequency hypothesis (Pyers et al., 2009) proposes that the L1 is also delayed due to lower frequency of use relative to monolinguals. The current study investigates these hypotheses in the context of automatic reading by directly comparing English monolinguals and Chinese-English bilinguals’ L1 and L2 on a Stroop task. Stimulus onset asynchrony (SOA) was manipulated to provide further information on the automaticity of word reading. Word stimuli elicited an N170 component, which is related to orthographic processing (Bentin et al., 1999), in all SOAs, indicating automatic lexical processing in all groups. At the N170 peak, differences emerged between words and control stimuli in monolinguals and bilinguals’ L1, signifying that early lexical processing occurred at the same latency for the native languages despite large orthographic differences. In bilinguals’ L2, however, word and control waveforms diverged significantly later, indicating delayed lexical processing. These results provide neurophysiological evidence for the temporal delay assumption but not the reduced frequency hypothesis, and confirm that early lexical processing is automatically activated but significantly delayed in the second language.
Mar 11 – No regular CLS Meeting (Spring Break)
Mar 18 – Michele Diaz (Duke University): The influence of novelty and context on hemispheric recruitment in processing metaphors
The right hemisphere’s role in processing metaphors has been debated. While clinical research suggests that damage to the right hemisphere impairs figurative language comprehension, results from imaging studies have been mixed. Additionally, the Graded Salience Hypothesis proposes that other factors such as novelty and context, rather than figurativeness per se, may influence hemispheric recruitment. In two separate fMRI studies, we examined how novelty and context influenced hemispheric recruitment in processing figurative and literal sentences. In experiment 1, all metaphors and novel stimuli elicited activation in bilateral frontal and left temporal regions. Additionally, all metaphors engaged the right temporal pole. In experiment 2, a main effect of figurativeness (metaphors > literal) was found in left hemisphere regions only. A main effect of congruence (congruent > incongruent) revealed activations in bilateral frontal and temporal regions, and left dorsal medial prefrontal cortex (DMPFC). In our first experiment, although an influence of novelty was found, the right hemisphere’s sensitivity to familiar metaphors suggests that even relatively familiar metaphors still require additional semantic integration. Our second study demonstrated that in the presence of additional context, metaphors and literal sentences did not differentially engage the right hemisphere. In contrast, processing coherent discourse compared to incoherent discourse, regardless of the figurative or literal aspect of the text, engaged right inferior frontal and temporal gyri, and left hemisphere regions including DMPFC. These results are partially consistent with the Graded Salience Hypothesis, highlight the strong influence of novelty and context on language, and suggest that in a wider discourse context, congruence has a much stronger role in right hemisphere recruitment than figurativeness.
Mar 25 – Jason Gullifer (Penn State, Psychology): The effect of syntactic constraints on parallel activation of words in the bilingual’s two languages
A finding in recent studies of bilingual word recognition is that it is impossible to restrict activation to one language alone, even when reading in sentence context. The activation of the language not in use persists under almost all conditions except when sentences are highly constrained semantically. Here we asked whether a similar effect of constraint would be observed when sentences are syntactically specific to one of the bilingual’s two languages. Proficient Spanish-English bilinguals read sentences in each language that contained a to-be-named cognate or matched control word. Half of the Spanish sentences contained syntax that was structurally specific. English sentences were translations of the Spanish sentences but were not syntactically unique to English. Results indicate that the cognate effect in word naming was not always reduced following syntactically specific sentence context in Spanish. The utilization of syntactic constraint appears to depend on language dominance. These findings have implications for claims about bilingual word recognition, language immersion, and language dominance.
Apr 1 – Tim Poepsel & Dan Weiss (Penn State, Psychology): Keeping it in context: statistical learning, mutual exclusivity and the problem of learning words
Word learning is complicated. Learners face high mapping ambiguity in cluttered visual and acoustic environments, and research with monolinguals has demonstrated a primacy effect in the learning of word-object mappings. Specifically, mutually exclusive mappings are preferred even in contexts where many-to-one mappings are available. The situation may be even more complicated for bilinguals, who must learn that two words, one from each of their languages, map to the same referent (e.g., “dog” and “chien” both refer to a four-legged furry creature). These observations present three clear questions for research in word learning: 1) how can we describe the learning mechanism necessary for overcoming the ambiguity encountered in typical learning environments; 2) how does a learner override the preference for mutual exclusivity in word learning to establish many-to-one word-object mappings; and 3) how do we relate the work on monolinguals to our understanding of word learning in bilinguals? The research presented here will address these questions through the lens of statistical learning.
Previous research has demonstrated that monolingual learners are sensitive to the distributional properties of linguistic input. Specifically, learners are able to detect structural boundaries (i.e., transitions between segments, syllables and words) and transitions between languages by tracking contextual and phonological cues embedded in acoustic input. Yu and Smith 2007 developed a cross-situational learning paradigm in which the possible referents of words are established via the statistical properties of their distribution in the input. They demonstrated that learners are able to track the co-occurrence statistics of words and possible referents across contexts, and thus successfully link words and referents. In the present research, we propose to investigate how the addition of contextual and phonological cues to the acoustic input offered in the cross-situational paradigm modulates learners’ established preference for mutually exclusive one-to-one mappings of words and objects. If the cross-situational learning mechanism is also sensitive to cues identifying the particular language set from which co-occurrence statistics are drawn, we hypothesize that such cues should allow learners to override mutual exclusivity and learn the many-to-one mappings evident in the linguistic knowledge of all bilinguals. To date, we have replicated the findings that 1) the cross-situational learning paradigm is able to support the learning of novel word-object mappings and 2) mutual exclusivity is an operant constraint on word-learning in bilingual contexts. We have begun a series of experiments investigating the strength of various contextual and phonological cues in overriding this constraint.
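As a rough illustration of the co-occurrence tracking at the heart of the cross-situational paradigm described above, here is a minimal Python sketch in the spirit of Yu and Smith (2007). The words, objects, and trials are invented placeholders, and the sketch is not the authors' experimental software; it simply shows how accumulating word-object co-occurrence counts across ambiguous scenes can resolve mappings, including a many-to-one case ("dog" and "chien" both mapping to the same referent).

```python
# Minimal sketch of cross-situational word learning via co-occurrence counting, in the
# spirit of Yu & Smith (2007). All words, objects, and trials are invented placeholders.
from collections import defaultdict
from itertools import product

# Each trial pairs heard words with visible objects; mappings are ambiguous within a
# trial but resolvable across trials. "chien" is a second label for the DOG referent.
trials = [
    ({"dog", "ball"}, {"DOG", "BALL"}),
    ({"dog", "cup"}, {"DOG", "CUP"}),
    ({"chien", "ball"}, {"DOG", "BALL"}),
    ({"chien", "cup"}, {"DOG", "CUP"}),
    ({"ball", "cup"}, {"BALL", "CUP"}),
]

counts = defaultdict(int)
for words, objects in trials:
    for w, o in product(words, objects):
        counts[(w, o)] += 1                      # accumulate word-object co-occurrences

def best_referent(word, objects=("DOG", "BALL", "CUP")):
    """Pick the object that co-occurred most often with the word."""
    return max(objects, key=lambda o: counts[(word, o)])

for w in ("dog", "chien", "ball", "cup"):
    print(w, "->", best_referent(w))             # both "dog" and "chien" map to DOG
```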
Apr 8 – Laurie Feldman (SUNY Albany): Morphological and form priming in L1 and L2: How do they differ?
Patterns of priming for regularly inflected and form-similar prime-target pairs provide a window on morphological processing in English as a first and as a second language. I will present results from L1 speakers of Dutch, Serbian and Chinese as they perform a cross-modal lexical decision task in English. These conditions permit a strict test of the claim that, because they lack the grammar (procedural knowledge) to analyze morphologically complex word forms, L2 users rely on lexical (declarative) knowledge to recognize morphologically complex words. Further, the inclusion of three L1s with different morphological and phonological structures presents the opportunity to detect different patterns of interaction between L1 and L2. Across levels of L2 proficiency, we ask whether command of inflectional morphology as revealed by magnitudes of morphological facilitation in L2 differs as a function of L1. Consistent with the claim that particular dimensions of similarity between various L1s and the L2 (the poverty of morphological complexity in Chinese compared to Dutch or Serbian) can affect processing in L2, cross-modal findings show variation in the magnitude of facilitation across L1 speakers of Chinese and Serbian when proficiency is controlled. Also novel is a finding that Dutch L1-English L2 speakers appear to show greater morphological facilitation relative to a form control when English primes are pronounced in an American rather than a Dutch accent; our generally less proficient Chinese participants did not show this pattern. Similarities and differences between patterns of morphological facilitation in L1 and L2 are discussed.
Apr 15 – Trace Poll (Penn State, CSD): Exploring the role of argumenthood in sentence processing
A number of studies have suggested that argument phrases in sentences are easier to process than non-arguments. The processing advantage of arguments is thought to arise from their lexical specification. Non-arguments, or adjunct phrases, in contrast, may depend on a syntactic linkage to the phrase they modify. This talk will explore whether preliminary data from a self-paced listening task support these hypotheses. The differentiation of argument and adjunct processing will also be considered in light of the procedural deficit hypothesis (Ullman & Pierpont, 2005), which suggests that specific language impairment stems from a weakness in the procedural memory system.
Apr 22 – Yolanda Gordillo (Penn State, Spanish): What can Adpositions tell us about ‘Media Lengua’?
This presentation focuses on Media Lengua de Imbabura (MLI), a mixed language spoken in several communities located in the northern part of the Ecuadorian highlands. I will explore and describe the social factors that may have led to the emergence of MLI, and I will also address the linguistic question of how MLI can inform our view on the classification of adpositions. If the definition of a mixed language is based on a clear division between lexical and grammatical items, then looking at how adpositions are treated in MLI can give us some insight into the character of this part of speech. I intend to present a functional analysis of adpositions that attempts to answer the following questions:
1. How are adpositions treated in MLI?
2. What can this treatment tell us about the general classification of adpositions?
In order to answer these questions I will be using a functionalist approach.
Apr 29 – Ricardo Otheguy (CUNY Graduate Center): Contact, leveling, and continuity in Spanish in New York
As in other locales in the United States, Spanish in New York has been studied for the most part in order to highlight the English-origin features found in it as a result of the forces of language contact. While there is no doubt that contact plays a real role in shaping Spanish in New York, the attention paid to this element has obscured two other important contributors to the form of Spanish in the City, namely (a) dialectal leveling and (b) structural continuity with Latin American Spanish. Almost as much as does the presence of English, it is the presence of ways of speaking Spanish other than one’s own that is giving form to Spanish in the City. Yet these two elements, language contact and dialectal leveling, are not enough. For these two considerations are about how Spanish in New York differs from the Latin American reference lects. But equally important are the ways in which Spanish in the City remains the same as the Spanish of Puerto Rico, Mexico, and the other places whose speech ways have been imported into the five boroughs of the Big Apple. In order to establish with some precision the boundaries of contact, leveling, and continuity, a micro analysis is needed of a single, variable linguistic feature, analyzed at the correct level of abstraction. The feature studied in the present project is the variable use of subject personal pronouns, that is, the alternation between presence and absence of the pronoun, as in canto ~ yo canto ‘I sing’, cantas ~ tú cantas, ‘you sing’, etc.. The project relies on the methods and theoretical machinery of variationist sociolinguistics, using variable hierarchies and constraint hierarchies to bring out the coexistence of contact, leveling and continuity as shapers of Spanish in the City. The project pays special attention to the Spanish of New York-raised, second-generation Latinos, and offers a commentary on the widely accepted notion of incomplete acquisition as a characterization of their linguistic competence.
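To make the notion of a constraint hierarchy concrete for readers outside variationist sociolinguistics, the following minimal Python sketch tabulates the rate of overt subject pronouns by factor level; ranking the levels of a factor group by rate gives a descriptive constraint hierarchy (actual variationist analyses use multivariate models such as logistic regression). The tokens and factor groups are invented placeholders, not data from the New York project.

```python
# A hypothetical sketch of the descriptive tabulation behind a constraint hierarchy:
# the rate of overt subject pronouns (yo canto vs. canto) by factor level. The tokens
# and factor groups below are invented placeholders, not data from the project.
from collections import defaultdict

# (person, tense, overt_pronoun) per finite-verb token -- illustrative only
tokens = [
    ("1sg", "preterit", True), ("1sg", "imperfect", True), ("1sg", "imperfect", False),
    ("2sg", "preterit", False), ("2sg", "imperfect", True),
    ("3sg", "preterit", False), ("3sg", "preterit", False), ("3sg", "imperfect", True),
]

def rate_by(factor_index):
    """Share of overt pronouns within each level of one factor group."""
    overt, total = defaultdict(int), defaultdict(int)
    for tok in tokens:
        level = tok[factor_index]
        total[level] += 1
        overt[level] += tok[2]                   # True counts as 1, False as 0
    return {level: overt[level] / total[level] for level in total}

print("by person:", rate_by(0))   # ranking levels by rate gives a descriptive hierarchy
print("by tense:", rate_by(1))
```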
Aug 27 – Trace Poll and Roxana Botezatu (Penn State, CSD): Report on the 2010 Adele Miccio Travel Awards
The 2010 recipients of the Miccio Travel Award will describe their experiences and give tips on how to apply and how to get the most from the opportunity.
Sep 3 – Nola Stephens (Penn State, Linguistics): Give and take: The roles of givenness and pronominality in child dative constructions
Givenness and pronominality are highly correlated in adult speech, and they tend to align in adult dative constructions such that THEME-first constructions (e.g., Give it to the man) are more likely when the THEME is given and pronominal, and RECIPIENT-first constructions (e.g., Give him the hat) are more likely when the RECIPIENT is given and pronominal (Bresnan & Nikitina 2009). While previous work has shown that givenness and pronominality are also correlated in child speech (e.g., Matthews et al. 2006), there has been little research considering how these two factors influence early syntax. The current studies reveal that both givenness and pronominality play a role in the dative construction choices of young English speakers. In two studies, I prompted four-year-olds to describe video clips in different discourse conditions: one condition with a given THEME, one with a given RECIPIENT, and a control condition where neither argument was given. Discourse condition strongly influenced word order. THEMES were more likely to be first in the THEME-given condition. This effect was categorical in Study 1 and highly robust in Study 2 (p < .0001). Similarly, RECIPIENTS were more likely to be first in the RECIPIENT-given condition, though less so (Study 1: p < .05; Study 2: ns). The differences between THEMES and RECIPIENTS and between Studies 1 and 2 are largely attributable to pronominality. Given arguments were generally pronominalized, and pronouns were generally ordered first. Importantly, children always ordered THEMES-pronouns first (Give it to the man), while they sometimes ordered RECIPIENT-pronouns last (Give the hat to him), hence the stronger effect of THEME-givenness. And given information was pronominalized more in Study 1 than Study 2, likely because Study 1 participants heard the given information mentioned and saw a picture of it, while Study 2 participants only heard the information. These studies highlight the continuity between child and adult language production and underline the importance of incorporating information about discourse status and the type of referring expressions into models of early syntactic development.
Sep 10 – Elina Mainela-Arnold (Penn State, CSD): In Pursuit of Explaining Individual Differences in Language Development
I will present a collection of our recent studies investigating the cognitive underpinnings of poor language learning in children and adolescents. The first set of studies investigated the component skills involved in completing verbal working memory tasks. Many current theories argue that limitations in working memory capacity result in incomplete language learning. However, performance on the tasks used to measure working memory appears to involve several component skills that go beyond maintaining verbal computations in working memory. Our studies identified some of these component skills and therefore cast doubt on the usefulness of the construct of working memory in explaining individual differences in language development. I will also discuss emerging new work investigating the role of implicit learning mechanisms in explaining poor language development. I will finish with my vision of where this pursuit should be heading.
Sep 17 – Judith Kroll (Penn State, Psychology): BAM! Bilingualism reveals the architecture and mechanisms for language processing (Plenary to be given at AMLaP 2010)
Until recently, research on language and its cognitive interface focused almost exclusively on monolingual speakers of a single language. In the past decade, the recognition that more of the world’s speakers are bilingual than monolingual has led to a dramatic increase in research that assumes bilingualism as the norm rather than the exception. This new research investigates the way in which bilinguals negotiate the presence of two languages in a single mind and brain. A critical insight is that bilingualism provides a tool for examining aspects of the cognitive architecture that are otherwise obscured by the skill associated with native language performance. In this talk, I illustrate the ways in which bilingualism reveals the architecture and mechanisms for language processing and their neural basis.
*Sep 21 – Leibowitz Lecture: Susan Goldin-Meadow (University of Chicago, Psychology): How our hands help us think
Tue, Sep 21: 7:30 PM – Nittany Lion Inn Boardroom
*note: not a Center for Language Science talk
*Sep 22 – Daphne Bavelier (University of Rochester): Action Video Games as Exemplary Learning Tools
Wed, Sep 22: 4:00 PM – 108 Wartik Building
*note: not a Center for Language Science talk
Although the adult brain is far from being fixed, the types of experience that promote learning and brain plasticity in adulthood are still poorly understood. Surprisingly, the very act of playing action video games appears to lead to widespread enhancements in visual skills in young adults. Action video game players have been shown to outperform their non-action-game playing peers on a variety of sensory and attentional tasks. They search for a target in a cluttered environment more efficiently, are able to track more objects at once and process rapidly fleeting images more accurately. This performance difference has also been noted in choice reaction time tasks with video game players manifesting a large decrease in reaction time as compared to their non-action-game playing peers. A common mechanism may be at the source of this wide range of skill improvement. In particular, improvement in performance following action video game play can be captured by more efficient integration of sensory information, or in other words, a more faithful Bayesian inference step, suggesting that action gamers may have learned to learn.
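The "more faithful Bayesian inference step" mentioned above refers to near-optimal combination of noisy sensory evidence. As a minimal worked illustration (with arbitrary numbers, not data from these studies), the Python sketch below combines two independent Gaussian cues by inverse-variance weighting, yielding an estimate that is more reliable than either cue alone.

```python
# A minimal worked sketch of optimal (Bayesian) combination of two noisy sensory cues
# by inverse-variance weighting. The numbers are arbitrary illustrations only.
def combine(mu1, var1, mu2, var2):
    """Combine two independent Gaussian cues into one estimate."""
    w1 = (1 / var1) / (1 / var1 + 1 / var2)      # weight of cue 1 = its relative reliability
    mu = w1 * mu1 + (1 - w1) * mu2
    var = 1 / (1 / var1 + 1 / var2)              # combined variance is lower than either cue's
    return mu, var

# e.g., a visual cue (mean 10, variance 4) and an auditory cue (mean 14, variance 1)
print(combine(10.0, 4.0, 14.0, 1.0))             # -> (13.2, 0.8)
```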
Sep 24 – Viorica Marian (Northwestern University): Consequences of Bilingualism for Spoken Language Processing and Language Learning
A bilingual’s cognitive architecture is highly interactive and dynamic, both within and across languages. In this talk, I will show that knowing two languages changes spoken language comprehension and yields co-activation of lexical items across both languages. Using eye-tracking and mouse-tracking data, I will suggest that bilinguals effectively recruit both bottom-up and top-down mechanisms to efficiently and seamlessly integrate information across modalities when resolving ambiguity during comprehension. Bilinguals’ domain-specific experience with cross-linguistic competition shows a relationship to domain-general executive function, suggesting that bilinguals may be particularly adept at inhibiting irrelevant information. One consequence of this greater inhibitory experience is a bilingual advantage in novel language learning — compared to monolinguals, bilinguals are better at learning a new language and show less competition from the native language when using a newly-learned language. These differences in language processing, language learning, and inhibitory control suggest fundamental changes to linguistic and cognitive function as a result of bilingualism.
Oct 1 – Carol Miller (Penn State, CSD): Processing-based vs. knowledge-based language measures: What’s the difference and does it matter?
I will describe a pilot project that is designed to compare the usefulness of processing-based vs. knowledge-based language measures for predicting theory of mind ability in preschoolers. I will outline some of the theoretical issues concerning relationships between language and social cognition. The main aim of the presentation is to provoke discussion about the theoretical and practical issues involved in the pilot study and a proposed “real” study.
Oct 8 – Karen Miller (Penn State, Spanish): Input Variability and Acquisition
Most research on language acquisition has assumed an idealized input which lacks any variability, other than the overall variety of sentences allowed by a particular grammar. However, it is well known that this idealization is just that, and that real language data is variable. The linguistic data that each child is exposed to is subject to all types of linguistic and extra-linguistic (dialect, gender, age, speech style) variation, which is not always categorical in nature. If we take this fact into consideration, the acquisition problem becomes much harder but also more realistic. Variability (within and across speakers), even though probabilistically constrained, can add more ambiguity to the input the child is exposed to, making some input unreliable for a particular grammatical generalization although perfectly reliable for learning some other property of the language. In this presentation I will present research that examines the effect of input variability on acquisition by comparing two highly similar dialects that differ minimally in terms of the frequency and reliability of a particular feature. By comparing the inputs and the rates of acquisition of this feature in the two varieties, we can begin to understand the complex relationship between input data and grammar acquisition.
Oct 22 – Juliana Peters (Penn State, Psychology): How language experience and immersion affect the relative dominance of a bilingual’s two languages and change language processing and its cognitive consequences
After an extended period of time in a second language (L2) environment, some bilinguals may become more proficient in the L2 than in the native language. This switch of language dominance can be observed under a variety of circumstances and at a range of different points in the lifespan, e.g., following immigration or after growing up in a household that speaks a minority language and then entering school in which instruction is delivered in the majority language. Although there has been some past research on immersion experience, there has been little attention paid to switches of language dominance, either for language processing or for their cognitive consequences. The present study examined language processing in native Spanish speakers living in the US who have become proficient in English as the L2. Immersed in a largely monolingual environment, some bilinguals have become dominant in English, thereby switching language dominance. Spanish-English bilinguals who have switched language dominance in this context were compared to Spanish-English bilinguals who maintained dominance in Spanish and also to native English speakers with Spanish as the L2, and to monolingual speakers of English. Participants were tested on a set of language processing tasks and also a set of cognitive measures. The language processing tasks included picture naming and verbal fluency, to assess performance at the lexical level, and a sentence processing task, to assess attachment preferences. The cognitive tasks included measures of working memory and inhibitory control. Preliminary results suggest a dissociation between the manifestation of language dominance at the lexical and sentential levels. Spanish-English bilinguals who have become English dominant for lexical-level tasks may retain some Spanish-specific preferences in sentence processing. Furthermore, there appear to be separable influences of switching language dominance and language immersion per se. The results of this study have theoretical implications for claims about the plasticity of the language system across an individual’s life experience. They also introduce a set of questions concerning the way in which past research has categorized bilinguals on the basis of native language status alone.
Oct 29 – John Lipski (Penn State, Spanish)
In several regions of South America, Spanish is in contact with the Native American language Quechua, and beginning with the Spanish colonization in the 16th century a variety of stable as well as transitory interlanguage varieties have developed, in highland regions of Ecuador, Peru, Bolivia, and southern Colombia. Quechua (known as Quichua in Ecuador) is characterized by a three-vowel system, traditionally represented as /ɪ/-/a/-/ʊ/, and when Quichua speakers attempt to acquire the Spanish five-vowel system (/i/-/e/-/a/-/o/-/u/), the mid-high vowel distinctions (/i/-/e/ and /u/-/o/) are seldom completely mastered. Popular opinion holds that Quichua-dominant speakers actually interchange high and mid vowels, but in reality the situation is much more complex. Previous research has examined only elicited individual words in laboratory settings. The present research project involves spontaneous speech collected among elderly Quichua-Spanish bilinguals in northern Ecuador who acquired Spanish in late adolescence or early adulthood, who have received no formal education in any language, and who continue to speak more Quichua than Spanish on a daily basis. An examination of the vowel spaces of Quichua-influenced Spanish interlanguage reveals no simple transference but rather a complex array of expanded vowel spaces that correspond neither to Quichua nor to Spanish. In the expanded and still relatively amorphous vowel spaces corresponding to the Spanish /i/-/e/ and /u/-/o/ oppositions, preliminary results suggest emergent processes that have the cumulative effect of reducing the “entropy” of the vowel dispersions, but which also contribute to the popular perception of “mix and match” vowel confusion. The Quichua-Spanish vowel data are evaluated in the light of several models of vowel production/perception and second-language acquisition of phonological contrasts.
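One simple way to quantify the "expanded vowel spaces" and their dispersion, as discussed above, is to measure how far individual tokens fall from their category centroid in F1/F2 space. The Python sketch below shows such a measure with invented formant values; it illustrates the general idea and is not the analysis used in the project.

```python
# A minimal sketch of one way to quantify how dispersed a vowel category is in F1/F2
# space: mean Euclidean distance of tokens from the category centroid. The formant
# values below are invented placeholders, not measurements from the corpus.
from math import dist
from statistics import mean

tokens = {  # (F1, F2) in Hz, illustrative only
    "i": [(300, 2300), (320, 2250), (360, 2100)],
    "e": [(450, 2000), (500, 1900), (430, 2150)],
}

def dispersion(points):
    centroid = (mean(p[0] for p in points), mean(p[1] for p in points))
    return mean(dist(p, centroid) for p in points)

for vowel, pts in tokens.items():
    print(vowel, round(dispersion(pts), 1))      # larger value = more expanded category
```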
Nov 5 – Alison Eisel (Penn State, German): Phonological Regularities of German Grammatical Gender: A Study of an Introductory Textbook
Learning grammatical gender is one of the most difficult aspects of learning German as a second language. Many speakers continue to have difficulties in gender assignment and use even when they are overall very proficient in German. This study looks at three aspects of gender acquisition at very early stages of learning: previous foreign language experience, sensitivity to phonological regularities, and patterns in the classroom input. I will present data from a grammatical gender assignment task performed by second-semester students of German, and an analysis of the patterns of gender in an introductory textbook. Results show that all participants are sensitive to some phonological regularities, regardless of previous foreign language experience.
Nov 12 – Ping Li (Penn State, Psychology): Dynamic interaction and competition in two languages: Cognitive mechanisms and neural correlates
Research in our laboratory focuses on how the mental representation of L1 and L2 develops and how multiple linguistic systems compete as a function of age of acquisition and language proficiency. This talk will present a series of ongoing behavioral and fMRI studies that investigate (1) whether bilingual lexical activation is modulated by external contextual cues such as facial features of the interlocutor, (2) whether mathematical processing in L1 versus L2 involves distinct neural substrates as a result of difficulty of task and L2 proficiency, (3) whether working memory and executive control can distinguish faster L2 learners from poor L2 learners in a set of novel word and grammatical learning tasks, and (4) whether object naming in L1 shows negative transfer after learning of L2 as a function of the congruency of the naming patterns between the two languages. Implications of these studies are discussed in light of current theories of bilingualism and second language acquisition.
Nov 19 – Mike Putnam (Penn State, German): What’s in a √root? – Scalar properties of predicates in light of the projectionist vs. constructionist debate and their morpho-syntactic consequences
One of the long-standing controversies in generative and experimental treatments of predicates (regardless of the theoretical approach) is the debate centering on the wealth of detailed information, or lack thereof, that predicates contain. To put it simply, do predicates – called √roots by Pesetsky (1995) – determine the argument structure of the syntax (i.e., a projectionist view) or, in contrast, is it the structure of the syntax that is responsible for endowing an impoverished √root with semantic information based solely on its position in the structure (i.e., a constructionist view)? In this presentation, I take a closer look at über-prefixing in German (and related languages). Following Risch (1995), McIntyre (2003), and Putnam (to appear), I demonstrate that scalar implicatures are lexicalized in √roots on a case-by-case basis, based largely on the aktionsart type of the predicate (see Rappaport Hovav 2009 for a similar proposal). Finally, I discuss the morpho-syntactic consequences of scalar properties in connection with these über-verbs in German and beyond. For example, consider the following examples from German:
(1) Sergej überißt *(sich) an Bananen.
Sergej over-eats REFL on bananas.DAT
‘Sergej overeats on bananas/Sergej eats too many bananas.’
(2) Der Hund überbellt *sich/die Katze.
the dog over-barks REFL/the cat
‘The dog outbarks the cat.’
Although both über-verbs involve a scalar event, example (1) requires the overt presence of a weak pronominal reflexive, whereas the second example (2) does not. The situation in German for example (1) contrasts with data from English – see (3) below – where the presence of a reflexive in a similar scalar context is ungrammatical:
(3) Richard overeats (*himself) (on pizza).
Contrary to most treatments of these verbs, I argue that the reflexive in these constructions does receive a distinct theta-role interpretation (i.e., Standard). Furthermore, I suggest that the presence of the weak reflexive in German and other related languages is the lexicalization of a pro-argument in a degree phrase (= DegP) situated within the verb phrase.
Nov 26 – No Meeting (Thanksgiving)
Dec 3- Guillaume Thierry (ESRC Centre for Research on Bilingualism; Bangor U., Wales): Cognitive Neurobilingualism: A window into the workings of the human mind
In this presentation, I introduce a new perspective on questions important in the field of bilingualism by exploiting the exquisite temporal resolution of Event-Related Potentials (ERPs). First, I demonstrate how late Chinese-English bilinguals unconsciously translate English words into their native Chinese equivalents, whether English words are presented auditorily or visually. Second, I provide evidence in support of the linguistic relativity hypothesis by showing an effect of language-specific colour terminology on colour discrimination in Greek-English bilinguals. Third, I show how millisecond-by-millisecond tracking of brain activity in relation to behavioural output enables us to establish the time of lexical access in picture naming by studying the cognate and lexical frequency effects in Spanish-English bilinguals and the Semantic Competitor Inhibition Effect (Howard et al. 2006) in monolinguals. In conclusion, I discuss how ERPs allow us to track crucial stages of mental processing well before a behavioural response is observed.
Dec 10 – Brenda Rapp (Johns Hopkins): Understanding the literate brain: The convergence of behavioral and neural investigations
Written language is an extraordinary human invention that has allowed for the communication and accumulation of knowledge across time and geography and, in so doing, has revolutionized human history. However, in evolutionary terms it has entered the human repertoire only very recently, without the opportunity to carve out its own territory within the human genome. This raises a number of questions, including: How has the brain accommodated written language processes and representations? How are these related to those of evolutionarily older skills such as spoken language, object recognition, and working memory? What is it that we know when we know how to read and write words? In this talk I will review findings from behavioral research involving cognitive neuropsychological studies of individuals with acquired written language deficits, as well as fMRI research with neurologically intact individuals. I will argue that both approaches make use of similar experimental logic, including such things as dissociation, association, facilitation, parametric variation, and analysis of similarity. These lines of research are beginning to converge in uncovering the functional architecture of the written language processing system and how it is related to other cognitive systems. Furthermore, both lines of research are increasingly able to reveal the richness and internal complexity of orthographic representations.
Jan 15 – CLS faculty (Penn State): Overview of Dual-Title Degree
Jan 22 – Bill Levine (University of Arkansas): Production and processing of restrictive relative clauses in pragmatically-appropriate context
Jan 29 – Jorge Valdes Kroff (Penn State): Visual World Redux: Taking a new look at auditory comprehension and Spanish-English bilinguals
Feb 5 – Florian Jaeger (University of Rochester): An Information Theoretic Perspective on Language Production
Feb 12 – Erin Tavano (USC): Processing scalar implicature
Feb 19 – (236 Chambers) Nicole Wicha (University of Texas at San Antonio): Prediction and processing of gender-marked words in monolingual and bilingual sentences
Mar 5 – (201 Chambers) Daniel Adrover-Roig (University of Montreal): The impact of language learning on cognitive reserve
Mar 12 – No CLS : Spring Break
Mar 19 – Richard Page (Penn State): How did German get a crazy gender assignment rule?
Mar 26 – (101 Chambers) Bruce Tomblin (University of Iowa): Genetics of Developmental Language Impairment: Pathways to Cognitive Systems for Language
Apr 2 – Eleonora Rossi (Penn State): Combining evidence for a cognitive-linguistic approach to language. Data from aphasia and second language processing
Apr 9 – (236 Chambers) Gillian Sankoff (University of Pennsylvania): Language transmission and language change across the life cycle
Apr 16 – Carrie Jackson (Penn State): The acoustics of syntactic disambiguation in second language German and English
Apr 23 – Cari Bogulski (Penn State): Vocabulary acquisition and inhibitory control: A paradox of bilingualism or two sides of the same coin?
May 7 – Heidi Lorimor (Bucknell University): What drives agreement
Aug 28 – Chip Gerfen (Penn State) – Evidence for inhibition in native language production during immersion in the second language
Sept 4 – No Meeting – Labor Day weekend
Sept 11 – Eleonora Rossi (Penn State) – The time course of clitic pronoun processing: Revisiting an ERP study
Sept 18 – Roxana Botezatu (Penn State): Noun Phrase Number and Gender Agreement in Spanish-English Bilingual Preschoolers
Sept 25 – Maria Cruz Martin (University of Granada): Inhibitory processes in bilingual language processing: Time course of inhibition and electrophysiological correlates
Oct 2 (Moore 254)- Keith Nelson (Penn State) – The Language Acquisition Rollercoaster: Observations From Diverse Methodologies and Learner Groups on Why Children Sometimes Slow Down and Sometimes Speed Along in Acquisition
Oct 9 – Arthur Wendorf (Penn State) – Fluency, Speech Rate and Oral Exams
Oct 16 – Rena Torres Cacoullos (Penn State) – Yo and I in New Mexico: Accounting for variation in evaluating convergence via code-switching
Oct 23 – Trace Poll (Penn State) – Precursors to Specific Language Impairment: Late and Typical Language Emergence
Oct 30 – John M. Lipski (Penn State) – “Re-mixing a mixed language: the emergence of a new pronominal system in Chabacano (Philippine Creole Spanish)”
Nov 6 – Giuli Dussias (Penn State) – Usage frequencies of complement-taking verbs in Spanish and English: Data from Spanish monolinguals and Spanish-English L2 speakers
Nov 13 – David Counselman (Penn State) – Perception or Production? Improving Students’ Spanish Pronunciation in the L2 Classroom
Nov 20 – Evelyn Duran Urrea (Penn State) – The syntax and prosody of code-switching in New Mexican Spanish-English Discourse
Nov 27 – No Meeting – Thanksgiving
Dec 4 – Janet Van Hell (Penn State & Radboud University) – Cross-language interaction and transfer in sign-speech bilinguals
Dec 11 – Jing Yang (HKU): The role of phonological working memory in Chinese reading development: Behavioral and fMRI evidence
Jan 15 – Karen Emmorey (SDSU) – The Psycholinguistic and Neural Consequences of Bimodal Bilingualism
Bimodal bilinguals, fluent in a signed and a spoken language, exhibit a unique form of bilingualism because their two languages access distinct sensory-motor systems for comprehension and production. When a bilingual’s languages are both spoken, the two languages compete for articulation (only one language can be spoken at a time), and both languages are perceived by the same perceptual system: audition. Differences between unimodal and bimodal bilinguals have implications for how the brain might be organized to control, process, and represent two languages. In this talk, I highlight recent results that illustrate what bimodal bilinguals can tell us about language processing and about the functional neural organization for language.
Jan 23 – Dan Weiss (Penn State) – Statistical Learning and the Curse of Dimensionality
Jan 30 – Susan Strauss (Penn State) – From vision to experience to cognition: A discourse-analytic study of the Korean verb pota ‘to see’ — [work in progress]
Feb 6 – Anna Engels (Penn State) – Some SLIC Stuff: Nuts and Bolts and Strong Magnetic Fields
Feb 13 – Jon-Fan Hu (Penn State) – Labels can override perceptual categories in early infancy: experimental and simulation studies
An extensive body of research claims that labels facilitate categorisation, highlight the commonalities between objects and act as invitations to form categories for young infants before their first birthday. While this may indeed be a reasonable claim, we argue that it is not justified by the experiments described in the research. We report on a series of experiments that demonstrate that labels can play a causal role in category formation during infancy. Ten-month-old infants were taught to group computer-displayed, novel cartoon drawings into two categories under tightly controlled experimental conditions. These findings demonstrate that even before infants start to produce their first words, the labels they hear can override the manner in which they categorise objects. Yet little is known regarding the nature of the mechanisms by which this effect is achieved. We further describe a neuro-computational model of infant visual categorisation, based on self-organising maps, that implements the unsupervised feature-based approach. The model successfully reproduces experiments demonstrating the impact of labelling on infant visual categorization reported in Plunkett et al. (2008). The results suggest that early in development, say before 12-months-old, labels need not act as invitations to form categories nor highlight the commonalities between objects, but may play a more mundane but nevertheless powerful role as additional features that are processed in the same fashion as other features that characterise objects and object categories.
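To illustrate the "label as just another feature" idea behind the model described above, here is a stripped-down, winner-take-all sketch in Python (a simplification of a self-organising map, with no neighbourhood function). The dimensions, learning rate, and stimuli are invented for illustration; this is not the Plunkett et al. model itself. Appending a label value to otherwise similar visual feature vectors tends to pull the two labelled sets toward different map units.

```python
# A stripped-down winner-take-all sketch (a simplified self-organising map with no
# neighbourhood function). The label is appended to the visual feature vector and the
# map is trained without supervision. All dimensions, rates, and stimuli are invented.
import numpy as np

rng = np.random.default_rng(0)
n_units, n_features = 16, 5                 # 4 "visual" features + 1 label feature
weights = rng.random((n_units, n_features))

def train(stimuli, epochs=50, lr=0.3):
    for _ in range(epochs):
        for x in stimuli:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            weights[bmu] += lr * (x - weights[bmu])               # move winner toward input

# Two visually similar categories whose labels differ (last feature: 0.0 vs 1.0)
cat_a = [np.append(rng.normal(0.4, 0.05, 4), 0.0) for _ in range(20)]
cat_b = [np.append(rng.normal(0.5, 0.05, 4), 1.0) for _ in range(20)]
train(cat_a + cat_b)

bmu = lambda x: int(np.argmin(np.linalg.norm(weights - x, axis=1)))
print("units recruited by A:", {bmu(x) for x in cat_a})
print("units recruited by B:", {bmu(x) for x in cat_b})
```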
Feb 20 – Jorge Valdes (Penn State) – Language-internal and Language-external processes in the formation of spatial prepositions in Papiamentu
Papiamentu, a Romance-based creole, has a rich, established prepositional system in contrast to other creole languages (Kouwenberg & Murray, 1994). The great majority of these prepositions appear to be transparently derived from their Romance counterparts. However, I will examine two spatial prepositions—riba (>Sp., Port. arriba) and for di (>Port. fora de, Sp. fuera de)—which have semantically expanded to take on additional meanings not exhibited by their Romance counterparts. I will argue that these prepositions exhibit two different processes, showing language-internal processes at work in the expansion of riba and reviewing substratum influence (i.e., language-external processes) in the case of for di (Maurer, 2005). Finally, I highlight the need to examine lexemes individually, even when they ostensibly follow similar paths of grammaticalization. Creole languages in general offer a clear warning against attributing synchronic outcomes to one catch-all mechanism.
Feb 27 – David Counselman (Penn State) – Improving the Efficiency of Pronunciation Training in the L2 Classroom
March 20 – Carrie Jackson (Penn State) – Does the L1 make a difference in how learners process L2 sentences?
March 27 – Swathi Kiran (Boston University)– Bilingual Aphasia: Neural substrates, Cognitive Control and Rehabilitation
Bilingual aphasia, defined as a loss of one or both languages in bilingual individuals that results from left hemisphere damage, is of increasing interest worldwide because half the world’s population is bilingual. In the United States, the elderly Hispanic population is the fastest growing ethnic minority (Bureau of the Census, 2006). However, current research on bilingual aphasia cannot inform or recommend the optimal rehabilitation for bilingual aphasic patients (Roberts & Kiran, 2007). For instance, it is not known whether or not rehabilitating one of the patient’s languages is sufficient, nor to what extent cross-language transfer occurs after rehabilitation. Several factors contribute to the paucity of research in this area: the multitude of possible language combinations in a bilingual individual, the relative age of acquisition (AoA) and proficiency of the two languages of the bilingual individual, and the effect of focal brain damage on bilingual language representation. In this talk, I will focus on three broad issues, 1) what we understand about brain representation of two languages in normal and brain damaged bilingual individuals, 2) what we understand about the cognitive control of lexical access in bilingual aphasia through analysis of cross-language errors and 3) what we know about cross-language transfer subsequent to rehabilitation in one language. Using four experimental methodologies, fMRI, computational modeling, behavioral analysis of language production and single subject treatment designs, I will provide some insight into the complexities of bilingual aphasia rehabilitation and the various factors that contribute to cross-language transfer in these patients.
April 3 – Eleonora Rossi (Penn State) – The processing of clitic pronouns in L1 Spanish and L1 English L2 learners of Spanish
April 10 – K. Allen Davis (Penn State)
April 17 – Pierluigi Cuzzolin (University of Bergamo and Penn State) – My dad’s stronger than your dad, or, how languages make comparisons
April 24 – Arturo Hernandez (U of Houston) – Age of acquisition, language proficiency and the bilingual brain
What factors affect the coding of two languages in one brain? For over 100 years, researchers have suggested that age of acquisition (when) vs. proficiency (how well) in a particular language play a role in its neural representation. Recent work in my laboratory has explored the influence of these two variables in bilingual language processing using fMRI. Studies have also extended this work by looking at these two factors in monolinguals and in motor skill processing in athletes. The similarities across these domains provide compelling evidence of the link between language and motor skill learning. They are also consistent with an emergentist view in which neural representations arise from a series of interactions at multiple levels. The implications of this conceptualization of language for clinicians and educators alike will be discussed.
Nadine Martin (Temple U) – Temporal components of language processing: Implications for models of verbal STM, aphasia and treatment of language disorders.
May 1 – Rosa Guzzardo (Penn State) – Spanish-English code-switching at the auxiliary phrase: An eye-tracking study
Aug 29 – Jill Morford (University of New Mexico) – Cross-language activation in ASL-English bilinguals
Sept 12 – Matt Goldrick (Northwestern) – Non-discrete selection: Consequences for mono- and multilingual phonetic processing
Theories of language production typically assume that at all levels of processing non-target representations associated with the target are partially activated. For example, at the lexical level, semantic associates are typically assumed to be activated (for target CAT, words like RAT, DOG, etc.). To cope with potential interference from these representations, theories typically incorporate selection mechanisms that serve to enhance target processing (e.g., boosting the activation of a node representing CAT). Over the past two decades, an extensive body of work (in both mono- and multilingual production) has shown that at the lexical level selection is not discrete. Although selection processes extensively enhance target activation, non-target representations (both within and across languages) remain partially active, influencing subsequent phonological processing (e.g., mixed semantic-phonological neighbors such as RAT facilitate phonological retrieval for target CAT).
In this talk, I’ll review recent evidence that non-discreteness extends to phonological and phonetic processing. In monolinguals, gradient variation in the activation of phonological representations influences the phonetic realization of targets. In multilingual production, interaction between the speaker’s sound systems at the phonetic level is modulated by gradient variation in the activation of phonological representations. I’ll discuss the implications of these findings for phonological and phonetic processing in both mono- and multilingual production.
Sept 19 – Janet van Hell (Penn State & Nijmegen) – Lexical and syntactic processing in bilinguals at different L2 proficiency levels: ERP and behavioral evidence
Sept 26 – Ping Li (Penn State) – Lexical organization and representation in the bilingual brain
Oct 3 – Barbara Malt (Lehigh) – Cross-Linguistic Diversity and the Development of the Bilingual Lexicon
Oct 10 – Pilar Pinar (Penn State and Gallaudet) – The phonological enemy effect in deaf learners of Spanish as an L3
Oct 17 – Brian MacWhinney (Carnegie Mellon University) – A Unified Model for First and Second Language Acquisition: An Alternative to Critical Periods
Despite a variety of logical and empirical problems, many researchers believe that language learning is limited by a critical period. The unified version of the Competition Model presents a way of accounting for age-related differences in language learning abilities that does not rely on critical periods, but instead on first language entrenchment, competition between multiple languages, and changing patterns of social integration into a new language community. The analysis has led to a variety of experiments designed to evaluate ways of improving L2 learning in adulthood.
Oct 31 – Inés Antón-Méndez (Utrecht) – Second language speakers and the art of turning thoughts into sentences.
Nov 7 – Taomei Guo (Penn State) – An electrophysiological investigation of reading words in a second language
Nov 19 – Marianne Gullberg (MPI Nijmegen) – The development of verb meaning in first and second language acquisition: Talking and gesturing about placement
Studies of both first and second language acquisition have largely focused on the acquisition of form over meaning. While comprehension studies indicate that language learners’ understanding is not always adult- or target-like, surprisingly little is known about the nature of the differences, the details of children’s and adult L2 learners’ semantic systems once forms are in use, and when and what changes take place. In this talk I will present three studies exploring what child and adult language learners’ gestures reveal about their verb meanings. The target domain is that of placement (e.g., putting a cup on a table), which is lexicalized differently crosslinguistically. The first study shows how differences in placement verb meanings in Dutch and French are reflected in two distinct patterns of adult gesture use. The second study examines Dutch four- to five-year-old children’s acquisition of placement verbs, demonstrating that their placement gestures change systematically as their placement verb meanings develop. The last study illustrates different gesture patterns in adult Dutch learners of L2 French depending on influences of the L1 and different degrees of semantic reorganization. Together the studies support the notion that speech and gesture form an integrated system as revealed (a) in robust crosslinguistic differences in gestural practices parallel to differences in speech, and (b) in similar parallel differences across modalities in development. The integrated nature of the systems further means that gestures open a new window on details of semantic representations, and that they can shed light on the process of acquisition by revealing shifts in such representations.
Nov 20 – Margaret Deuchar (Bangor) – Overcoming incommensurability in theories of code-switching
Research on code-switching has progressed to the extent that there are now several competing models attempting to account for the patterns found in conversational data from bilinguals. One of the goals of our research programme at Bangor is to critically evaluate these competing models rather than to work within only one theoretical framework. The purpose of this talk is to defend the goal of critical evaluation in the face of the argument that two theories are never comparable, or what philosophers of science have called ‘incommensurability’. I seek to show in particular that the critical evaluation of two theories which at first sight appear not to be conducive to comparison can lead to new insights, including the redefinition of concepts and the generation of new hypotheses.
An example of incommensurability in theories of code-switching may be found by considering the different views held by Poplack and Myers-Scotton regarding the proper scope of a theory of code-switching vs. borrowing (see e.g. Poplack & Meechan, 1998; Myers-Scotton, 2002). Here the problem of incommensurability arises because the notion of linguistic integration is key to the definition of borrowing for Poplack, while it is at best a hypothesis about borrowing for Myers-Scotton. We attempt a solution to this problem by critically examining the notion of linguistic integration in order to determine whether a clear line can be drawn between integrated and unintegrated donor-language items. The data we have used for this purpose are English-origin verbs in data collected from Welsh-English bilingual speakers who speak mainly Welsh. We have subjected these to three tests of linguistic integration to see whether a clear-cut distinction can be drawn between switches and borrowings. We show that the three tests yield different results, and that the notion of a continuum between switches and borrowings is more defensible. Finally, we propose a new hypothesis to be examined in relation to the data: that the linguistic integration of donor-language items will be related to their frequency.
Nov 21 – Eleanora Rossi (Penn State) – Clitic production in Italian agrammatism
Dec 5 – Marijt Witteman (Nijmegen) – Lexical and contextual factors in code-switching. A behavioral and (neuro)cognitive study
Dec 10 – Laurie Stowe (Groningen) – Long Distance Dependencies: Beyond WH-Movement
One of the interesting phenomena in language is that one word (or phrase) can introduce a syntactic commitment for the occurrence of a word or phrase with particular syntactic characteristics which can occur much later in the sentence. WH-phrases are one of the most studied of these dependencies. They are particularly interesting because the commitment is for a missing element (trace or gap). That is, the WH-phrase Which boy in Which boy did John tell Susan that he went to the movies with ___ yesterday? has to be paired with an unfilled NP position like the one following with; note that without the WH-phrase this sentence would be ungrammatical in most varieties of English. Research using ERPs has shown that WH-phrases introduce a memory load which is carried until the commitment is filled, and that there are also effects at the point at which the gap is located, which are modulated by the distance over which integration with the WH-phrase must extend. There are a number of interesting issues about the processing of long-distance dependencies. First, it has not been clear whether there are specific processing routines for WH-dependencies, or whether similar effects can be found for other types of syntactic commitments. I will discuss an experiment that involves the processing of the particle zai in Chinese, which introduces a commitment for a locative postposition. Compared to sentences with no specific commitment (copula constructions), these sentences show a sustained negativity similar to that found for WH-sentences. There are also signs of costs of integration across distance which are similar to those found in WH-constructions. This suggests that these processes are not specific to gap location and filling, but reflect more general processes of maintaining and resolving commitments. A second issue has to do with the extent to which the processing effects described above should be considered to be those of syntactic commitment and resolution or of semantic commitment and integration. This can be addressed by manipulating the degree of semantic commitment that is embodied in the word or phrase which introduces the long-distance commitment. For example, Chinese classifiers are similar to grammatical gender systems in that they introduce a commitment for a particular type of head noun, but the commitment appears to be much more semantic in nature than the syntactic commitment introduced by grammatical gender. Nevertheless, distance to the point of integration induces a positivity which is similar to that found for the zai construction, in which the semantic constraint is considerably less detailed. The primary difference is that the effect is much larger for the classifier commitments. Likewise, manipulating the degree of semantic constraint of a WH-phrase modulates the size of the maintenance effect over intervening material. These results suggest that the semantic aspect of the commitment may be as important as the syntactic aspects in the brain processes which are reflected in these two ERP effects.
Dec 12 – Maya Misra (Penn State) – Electrophysiological evidence for complex interactions between orthography and phonology during reading
Jan 25 – Chip Gerfen (Penn State, Spanish, Italian, and Portuguese) – One language, two phonologies: a first look at processing in Andalusian Spanish
Feb 8 – Jason Gullifer (University of Massachusetts, Amherst) – Processing Reverse Sluicing: A contrast with processing filler-gap dependencies
Feb 15 – David Rosenbaum (Penn State University, Psychology) – Action planning and language planning
Feb 21 – Ping Li (Penn State) – Lexicon as a Dynamical System – Neural and Computational Mechanisms
Feb 22 – Carrie Jackson (Penn State University, German and Linguistics) – The processing of wh-questions in Dutch-English bilinguals
March 7 – Anat Prior (Carnegie Mellon University) – The bilingual advantage in executive control: Beyond spatial attention.
Bilingual children, as well as older adults, exhibit advantages over their monolingual peers in tasks that rely on executive control. However, until recently, studies comparing bilingual and monolingual college students found mixed results and a less consistent bilingual advantage. Most studies examining this population have used tasks that rely on spatial visual attention, such as variations of the Simon task, the ANT task, and the anti-saccade task. In this talk, I will describe a new study that compared the performance of monolingual and bilingual college students on three executive control tasks, and investigated possible bilingual advantages beyond the domain of spatial attention. Possible implications of the results for the locus of the bilingual executive advantage will be explored.
March 17 – Kathy Midgley (Tufts and Université d’Aix-Marseille) – Masked Repetition and Translation Priming in Second Language Learners: A Window on the Time-Course of Form and Meaning Activation using ERPs
Words provide the central interface between form and meaning during language comprehension. Describing the nature of form-meaning interactions at the level of individual words is therefore one of the major goals of contemporary research on language comprehension. Part of that general endeavor involves describing exactly when semantic information becomes available during visual word recognition, and the nature of the form-level processing that is necessary for that to occur. I will offer some partial answers to these questions, and will also address how the two languages of second language learners are interrelated at the word level. I will present a study that used event-related potentials (ERPs) and a masked repetition priming paradigm to examine the time-course of visual word recognition in second language learners, along with other data from bilingual studies run in our lab that may shed light on these topics.
March 21 – Helena Ruf (University of Wisconsin-Madison) – Syntactic priming of word order among native and non-native speakers of German
March 24 – Laurence Leonard (Purdue University) – Variability in the Use of Tense and Agreement Morphology by Children with Specific Language Impairment: A Crosslinguistic Perspective.
Children with specific language impairment (SLI) often show an uneven profile within the area of morphosyntax. For example, in English, the use of tense/agreement morphemes stands out as an area of special weakness. In Swedish, both word order and the use of tense can be problematic. These weaknesses are resolved only gradually. Thus far, the theoretical frameworks that might account for the findings constitute only partial solutions. Some provide a very insightful description of the difficulty but do not explain the systematic, incremental changes seen over time; others provide a plausible account of the gradual change but lack the precision necessary to explain the differences across languages. An alternative view that incorporates the empirically supported claims of the previous approaches will be offered. The alternative assumes that many of the characteristics of the SLI profile, including crosslinguistic differences in the profile, can be traced to details in the input, and that children’s ability to interpret successively larger grammatical units in input sentences can lead to the gradual, incremental changes seen in the children’s morphosyntactic use. Observations supporting these assumptions will be provided, and their theoretical as well as clinical implications will be discussed.
March 28 – Philip Baldi (Penn State University, Classics and Ancient Mediterranean Studies) – What do historical linguists do and how is it relevant to cognitive linguistics?
April 4 – Richard Page (Penn State University, German and Linguistics) – The gender of English loanwords in Pennsylvania German
April 7 – Natasha Tokowicz (University of Pittsburgh) – Two is not better than one: The consequences of multiple translation equivalents for processing and learning
April 8 – Natasha Tokowicz (University of Pittsburgh) – Using hierarchical regression analyses in psycholinguistic investigations: A mini-tutorial
April 11 – Ann Bradlow (Northwestern University) – Bi-directional talker-listener adaptation in speech communication
Speech communication involves a chain of events that ideally aligns mental representations in the talker with those in the listener. Links in the chain can be “broken” at many points, particularly in cases where the talker and listener approach each other with non-optimally aligned linguistic sound systems (e.g. when they do not come from the same native language background) or when the listener’s access to the speech signal may be blocked by a hearing impairment or the presence of background noise. I will present a series of studies that aimed to understand how talkers and listeners repair these breakdowns in order to achieve talker-listener alignment. The first study examined talker adaptation to the listener. Specifically, we conducted a series of acoustic-phonetic comparisons of “clear speech” across languages with various phonological structures. A second study focused on the other side of the talker-listener channel by examining listener adaptation to the talker. In particular, we investigated listener adaptation to foreign-accented speech. Both of these studies examined talker-listener adaptation under laboratory conditions in which the talker and listener did not interact directly. A third study examined talker-listener interactions under the more natural conditions of spontaneous dialogue recordings. In this study we examined communicative efficiency and phonetic convergence in English conversations between pairs of native English talkers and in conversations between one native and one non-native talker of English. Together, these studies build a picture of speech communication as a bidirectional process of talker-listener alignment even in the case of communication between interlocutors who do not share a “mother tongue.”
April 18 – Susan Bobb (Penn State University, Psychology) – The Processing of Grammatical Gender in Simple German Nouns by Second Language Learners
April 25 – Giuli Dussias (Penn State University, Spanish, Italian, and Portuguese) – Grammatical gender in processing Spanish-English code-switches: A visual world study
May 2 – Taomei Guo (Penn State University, Psychology) – Processing noun plurality in sentences using ERPs
Sept 7 – Aaron Mitchel (Penn State) – Resolving competition in statistical learning
Sept 14 – Carol Hammer (Penn State) – Early Language and Literacy Development of Bilingual Preschoolers
Sept 21 – Jared Linck (Penn State) – The role of inhibition in bilingual language production: an investigation of cross-language retrieval induced forgetting
Sept 28 – Carrie Jackson (Penn State) – Proficiency level and the interaction of lexical and morphosyntactic information during L2 sentence processing
Oct 5 – Lisa Goffman (Purdue) – Motor and language influences on normal and disordered speech production in children
Oct 12 – Giuli Dussias (Penn State) – Using the visual world to study codeswitching
Nov 2 – Gerrit Jan Kootstra (Radboud) – Exploring cognitive aspects of codeswitching: an experimental approach
Nov 9 – Xu Xu – The representation of mental verbs
Nov 30 – Elina Mainela-Arnold (Penn State) – Cognitive Control in Children with SLI
Dec 4 – Janet van Hell (Penn State & Nijmegen) – The Neurocognition of Codeswitching: Evidence from Event-related Brain Potentials
Dec 6 – John Trueswell (UPenn) – The allocation of visual-spatial attention during event perception, event labeling, and verb learning
Dr. Trueswell will present a series of eye tracking experiments that explore how visual-spatial attention is allocated during the perception of simple and complex events. Eye movements were recorded during a variety of different tasks, including event description, passive viewing, and the comprehension of novel verbs (e.g., “Oh look! Mooping!”). The results show that there is a tight temporal (and sometimes causal) relationship between the allocation of visual-spatial attention and the rapid linguistic choices speakers make when describing events (linguistic choices that include Subject/Object assignment and manner vs. path description). Dr. Trueswell also shows that children as young as three years of age are sensitive to the characteristics of these speakers’ eye gaze patterns, and use them in conjunction with linguistic evidence to infer verb meaning.