
Roger Beaty (Penn State) - Using Computational Semantic Models to Assess Verbal Creativity

When: Oct 16, 2020, from 09:00 AM to 10:30 AM
Where: ZOOM Virtual Room (link will be provided)

Conducting creativity research often involves asking several human raters to judge responses to verbal creativity tasks. Although such subjective scoring methods have proved useful, they have two inherent limitations—labor cost (raters typically code thousands of responses) and subjectivity (raters vary in their perceptions of creativity)—raising classic psychometric threats to reliability and validity. In this talk, I attempt to address these limitations by capitalizing on recent developments in the automated scoring of verbal creativity via semantic distance, a computational method that uses natural language processing to quantify the semantic relatedness of texts. In five studies, we compared the top-performing semantic models (e.g., GloVe, continuous bag of words) previously shown to have the highest correspondence to human relatedness judgments. We assessed these semantic models in relation to human creativity ratings from a canonical verbal creativity task and novelty/creativity ratings from two word association tasks. We find that a latent semantic distance factor—composed of the common variance from five semantic models—reliably predicts human ratings across all creativity tasks, with semantic distance explaining over 80% of the variance in creativity and novelty ratings. We also replicate an established experimental effect in the creativity literature and show that semantic distance correlates with other creativity measures, demonstrating convergent validity. I conclude by describing an open platform that can efficiently compute semantic distance, and I discuss potential applications of semantic distance for assessing creative language use.
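The core idea behind semantic distance scoring can be illustrated with a minimal sketch. The toy three-dimensional embeddings below are invented for illustration; the work described in the talk uses pretrained models such as GloVe and continuous bag of words, and the distance is typically computed as one minus the cosine similarity of the word vectors:

```python
import math

# Toy word embeddings for illustration only; real semantic-distance scoring
# uses high-dimensional vectors from pretrained models (e.g., GloVe, CBOW).
EMBEDDINGS = {
    "dog":   [0.9, 0.1, 0.0],
    "cat":   [0.8, 0.2, 0.1],
    "piano": [0.1, 0.9, 0.4],
}

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_distance(word_a, word_b, embeddings=EMBEDDINGS):
    """Semantic distance = 1 - cosine similarity of the word vectors.

    Higher values indicate less related word pairs, which in this line of
    research serves as a proxy for more novel (creative) associations.
    """
    return 1.0 - cosine_similarity(embeddings[word_a], embeddings[word_b])

# A remote associate ("dog" / "piano") lies farther apart in the embedding
# space than a close associate ("dog" / "cat").
print(semantic_distance("dog", "cat") < semantic_distance("dog", "piano"))  # True
```

Under this scheme, a response to a word association task can be scored automatically by measuring how far it sits from the cue word in the embedding space, replacing the subjective human ratings discussed above.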