Fall 2018

August 24, 2018 - Rena Torres Cacoullos (Penn State)

Title: Code-Switching and Grammars in Contact: Connected but not Conflated

Abstract: Although evidence remains elusive, it is widely held that code-switching promotes grammatical convergence. This talk puts forward quantitative diagnostics of grammatical similarity and difference by using structural variation in speech. The poster child for convergence has been variable subject pronoun expression in Spanish toward English, which is classified as a non-null subject language (e.g., Heine and Kuteva 2005:70; Otheguy and Zentella 2012). Variation patterns are uncovered in a 300,000-word corpus capturing the spontaneous bilingual speech of members of a long-standing community in northern New Mexico (Torres Cacoullos and Travis 2018). Approximately 10,000 tokens of the variable are extracted from this bilingual corpus, and from comparable monolingual corpora of both Spanish and English. The most direct test of the hypothesis of convergence via code-switching is to compare bilinguals’ own use of the two languages. Four independent analyses of the linguistic conditioning of variable subject expression show that the bilinguals’ Spanish and English differ from each other and align with their respective monolingual benchmarks. Moreover, comparisons in the presence and absence of code-switching reveal that bilinguals maintain Spanish-particular patterns even in the context of proximate use of English.

August 31, 2018 - Manuel Pulido-Azpíroz (Penn State)

Title: Desirable Difficulties in Learning L2 Collocations: The Role of Native Language Regulation

Abstract: An aspect of language that shows large variability in learning success is that of collocations, a specific type of multi-word unit. Collocations are predictable and expected for native speakers of a language (e.g., “catch a cold”, “run a business”). But for L2 speakers, learning incongruent collocations (those that have partially overlapping lexical make-up across the two languages) is highly problematic. Prior learning studies have suggested that input conditions that cause interference should be avoided. An alternative approach, inspired by the literature on desirable difficulties, is that, to learn and process incongruent L2 collocations efficiently, L2 speakers must learn to inhibit the equivalent L1 collocations. I will report on two studies: The first study tested the counter-intuitive prediction that learning would be improved when practice conditions induce L1-related interference. In the second study, I propose a new approach to quantifying within-language competition in collocations.

September 7, 2018 - Eric Pelzl (Penn State)

Title: What Is and Isn’t Hard About Learning Lexical Tones: Research with Advanced Second Language Learners of Mandarin Chinese

Abstract: Anyone with experience learning a new language likely knows something about the challenge of adapting to unfamiliar sounds. Tone languages like Mandarin Chinese present an interesting instance of this challenge. In Mandarin, pitch (F0) patterns on syllables are an integral part of words. For example, the syllable ma with a high-level tone is ‘mom’, with a rising tone is ‘hemp’, with a low tone is ‘horse’, and with a falling tone is ‘scold’. For learners from non-tonal language backgrounds, tones present several challenges: learners of course need to (1) hear the differences between tone categories (high, rising, low, falling), but they also need to treat these new categories as essential for word identity by (2) encoding them in memory, and (3) using them in real-time word recognition. Cute examples like ma might make it seem as if the integral nature of tones will be obvious to learners, but in practice syllables with a neat set of four-way tone contrasts are rare. For word recognition in natural speech, most tones on most words are neither necessary (due to context) nor sufficient (due to homophones). 

With these thoughts in mind, I will present a series of behavioral and ERP studies examining the tone abilities of advanced (i.e., successful) L2 learners of Mandarin. How well do these successful L2 learners master tones, and what difficulties, if any, do they encounter? We will see that advanced learners perform with virtually the same mastery as native speakers on challenging tone identification tasks, but that the introduction of minimal context (a second syllable) disproportionately affects learners. When it comes to (tone) word recognition, the same learners who excel at tone identification display wide variability. While some individuals excel, learners as a group are often insensitive to tone cues—even when they know the appropriate tones for the tested words. At the same time, learners often lack explicit tone knowledge for frequent words, suggesting they are not able to efficiently encode lexical tone representations in long-term memory. Nevertheless, as noted, all the learners included in this research are rightly described as successful. With this in mind, I will finish by considering whether accurate L2 tone perception matters at all.

September 14, 2018 - Navin Viswanathan (Penn State)

Title: Studying the Perception of a Variable Speech Signal: Past Studies and Current Directions

Abstract: Human listeners demonstrate remarkably robust speech perception despite being faced with a speech signal that is highly variable. In this talk, I will briefly consider the sources of variability that affect the speech signal and highlight the central issue they pose for any account of speech perception. I will then provide an overview of the kinds of questions we address in our lab by discussing past work in different domains of speech variability. Finally, I will report on a new paradigm we are developing that takes seriously the collaborative nature of spoken language processes.

September 21, 2018 - Katherine Kerschen (Penn State)

Title: The Effect of Abstract Word Training on Productive Vocabulary Knowledge in a Second Language

Abstract: Vocabulary is an essential component of L2 learning, yet many adult learners struggle to learn vocabulary and, in particular, to develop productive vocabulary knowledge, which is the ability to spontaneously produce words in the appropriate context. Prior research has demonstrated that this is in part due to difficulty in mapping new word forms to already-known concepts (Jiang, 2000). Certain word-level variables, such as concreteness, have also been shown to affect lexical acquisition and processing, with abstract words being more difficult to acquire in the L2 (de Groot & Keijzer, 2000). However, these concerns are not unique to L2 acquisition. Persons with acquired language disorders such as aphasia often have problems with lexical retrieval, i.e. with connecting forms to meanings, especially with more complex items such as abstract words (Kiran et al., 2009). With these potential parallels in mind, we have explored whether models and methodologies developed in the field of aphasia research can fruitfully be applied to aspects of L2 learning.

In this talk, I will report on two studies which investigated the effect of abstract word training on the development of productive vocabulary knowledge in the L2. In the first study, a word training paradigm initially developed to treat lexical retrieval deficits in patients with aphasia was used to train L2 learners on abstract words in specific context-categories (e.g., restaurant). The training, which was conducted in individual sessions with the investigator, led to increased productive knowledge of both the trained abstract words and untrained concrete words within that category, paralleling previous findings from aphasia research (Sandberg & Kiran, 2014). The second study implemented a modified version of the training paradigm in a low-intermediate L2 classroom and found a similar pattern of gains. These findings not only indicate that techniques from communication disorders research can successfully be applied in an L2 context, but they also open up new avenues for exploring the underlying conceptual representations of abstract and concrete words in the L2 lexicon. 

September 28, 2018 - Deborah Morton and her Language in Africa Students (Benin trip; Penn State)

Title: Travel to Benin: Language in Africa Embedded Course Report

Abstract: Among the study abroad opportunities available to Penn State students are embedded courses, in which a semester of class content can be used (in part) as preparation for a travel experience. Dr. Morton, with the support of the Linguistics Program, the African Studies program, and the Center for Language Science, developed such a course, Language in Africa, which was offered in Spring 2018. The course also included an optional embedded portion: two weeks of travel in Benin, West Africa, where Dr. Morton has been doing research regularly since 2006. During this presentation, Dr. Morton, her students, and Dr. Frances Blanchette, who accompanied them, will share their perspectives on the embedded course experience. The discussion will focus on what happened during this particular embedded course, including the goals behind the planning of the course and the travel, how the students were affected by the experience, and what they learned (including from a project they were required to complete on multilingualism in Benin). We will also present aspects of the research that Dr. Blanchette and Dr. Morton were able to plan and conduct as part of the travel experience.

October 12, 2018 - Annie Olmstead (Penn State)

Title: Phonetic Learning Through Exposure to Dialectal Variants of English

Abstract: The literature on phonetic learning for speech has shown that listeners are quite flexible in their perception of speech sounds. Specifically, research has shown that the boundaries separating similar speech sounds can be altered for a particular listener through repeated exposure to manipulated speech tokens. This particular form of learning is often cited as supporting listeners’ abilities to understand unfamiliar accents and to learn new phonetic inventories in the process of learning a second language. In this talk, I will discuss work examining whether the information supporting phonetic learning for speech is actually available in naturally produced speech and whether this information may support learning more effectively than manipulated tokens. 

October 26, 2018 - Roger Beaty (Penn State)

Title: Creative Cognition and Brain Network Dynamics

Abstract: How do people come up with new and useful ideas? In this talk I will discuss a series of behavioral and neuroimaging experiments on creative thinking. The talk will focus on research examining the contributions of fundamental cognitive processes (e.g., memory retrieval and cognitive control) to performance on various creative tasks involving idea generation, such as divergent thinking and novel metaphor production. I will also highlight research exploring the roles of large-scale functional brain networks and their dynamic interactions during creative task performance, with a focus on mapping these network dynamics to specific cognitive processes. The talk will conclude with some potential future directions to further isolate cognitive and neural systems that support creative thinking in the arts, sciences, and everyday life.

November 2, 2018 - Alex McAllister (Penn State)

Title: Does She Sound Strange or Do They Sound Strange? Exploring the Effects of Dialectal Diffuseness in Perceptual Learning

Abstract: Imagine that you’re walking down the street and a stranger asks you a question. You’re pretty sure she’s a native speaker of English, but something about the way she produces certain sounds is unfamiliar to you. If you had to make a choice, would you attribute this novel phonetic variation to something particular to the speaker (an idiolect), or a dialect of English you’ve never heard? What if you lived in a large city, and regularly encountered different dialects of English? Would that change your decision?

In this talk, I will explore how the perceiver’s linguistic environment affects the cause she assigns to novel phonetic variation. Specifically, I will ask how being a member of a speech community in which multiple dialects of Spanish are spoken affects the perceiver’s likelihood of attributing novel phonetic variation to a previously unencountered dialect of Spanish.

To investigate how dialectal diversity may affect the causes we assign to phonetic variation, I will present a preliminary analysis of a perceptual learning study testing two Spanish-speaking populations. Spanish-English bilinguals from Penn State, a dialectally diffuse community of Spanish speakers, and from UC Riverside, a relatively homogeneous dialectal group, were exposed to two Spanish speakers whose coda-/s/ productions were replaced with /ʃ/, creating a quasi-novel dialect of Spanish. To test for the cause assigned to this phonetic variation, generalization effects in perceptual learning are examined.

This line of research aims to contribute to the field of sociolinguistics by investigating how phonetic variation becomes attributable to indexically linked groups of speakers. Although it is well established that perceivers use top-down knowledge to make predictions about the cause of phonetic variation (Drager 2011; Hay et al. 2006; Hay & Drager 2010; Koops et al. 2008), how these links are formed is not well understood. This research aims to fill that gap by investigating how the perceiver’s environment influences the likelihood of these correlations forming.

November 9, 2018 - Elisabeth Karuza (Penn State)

Title: Statistical Learning at Work in a Complex and Changing Environment

Abstract: Learners are highly sensitive to pairwise statistical associations embedded in sensory input (e.g., what is the probability that one element will follow another in time?). However, how we use this information to build up complex knowledge systems (e.g., language), particularly in the face of noise or competing signals, remains an essential question. Drawing on insights from functional neuroimaging, I will discuss the interplay between high-level association areas and sensory-specific cortex in a dynamic learning context. I will show that prefrontal cortex, a slow-to-mature area associated with cognitive control, underpins sequential pattern learning in adults, raising the possibility that adults recruit a sub-optimal learning system relative to children. Through a series of behavioral experiments, I will then demonstrate that tools from network science offer a novel and largely untapped means of probing how learners scale up pairwise associations to gain knowledge of broad-scale patterns in their environment.

November 30, 2018 - Clara Martin (Basque Center on Cognition, Brain and Language; San Sebastian, Spain)

Title: Language Comprehension in Accented Speech

Abstract: I will present a series of experiments aiming to define how language comprehension is modulated when a native listener is facing a non-native speaker. In this project, we explored this question with respect to different fundamental aspects of sentence comprehension: syntactic, semantic and pragmatic processing. Regarding syntactic processing, we showed that the way the brain reacts to grammatical errors depends on the linguistic status of the speaker, as well as on the frequency of errors in real life. Regarding the semantic level, we showed that the processing of dialectal synonyms, cognates, and semantic violations differs when listening to native and non-native speakers, and we also revealed that anticipatory processes are affected by the speaker’s accent. Finally, regarding pragmatics, we showed that ironic statements are processed differently when uttered by a native or non-native speaker. As a whole, this project provides important advances in our knowledge of sentence/discourse comprehension. The results we obtained are relevant at the practical level, providing a better understanding of how native listeners manage to comprehend foreign speakers, and at the theoretical level, showing how sentence comprehension is recalibrated on-line depending on external cues such as the speaker’s accent.