
Katrina Connell (Penn State) - Suprasegmental information in spoken word recognition: The eye as a window to the mind

When: Aug 30, 2019, 09:00 AM to 10:30 AM
Where: Moore 127



When recognizing spoken words, listeners use the acoustic information available in the speech signal to identify the speaker's intended word (and meaning) in the mental lexicon. The acoustic signal carries not only segmental information but also suprasegmental information; however, how suprasegmental information is utilized in the word recognition system is poorly understood compared to segmental information. In this talk, I will introduce some of my recent work to show how visual world eye tracking can serve as a window to the mind, allowing us to investigate spoken word recognition in ways that are unique to eye tracking.

I will first discuss a study that investigated a phonological alternation which has been widely regarded as completely neutralizing in perception. Visual world eye tracking was used to test whether implicit sensitivity to differences between the surface forms influences native listeners' eye movement patterns, even when listeners cannot consciously access those differences in identification tasks. The results tentatively suggest that listeners may be sensitive to this difference in perception, despite being unable to explicitly identify the underlying forms. I will then introduce a second study, again using visual world eye tracking, this time to investigate whether English-speaking learners of Mandarin Chinese resemble native listeners in their use of tonal information when recognizing spoken words, as suggested by previous priming work. The results suggest that while learners are similar to native listeners in some respects, they differ in ways that could have a substantial impact on their overall comprehension.