Roger Beaty (Penn State) – Using Computational Semantic Models to Assess Verbal Creativity
October 16, 2020
9:00 am
ZOOM Virtual Room (Link will be provided)

Using Computational Semantic Models to Assess Verbal Creativity

Conducting creativity research often involves asking several human raters to judge responses to verbal creativity tasks. Although such subjective scoring methods have proved useful, they have two inherent limitations—labor cost (raters typically code thousands of responses) and subjectivity (raters vary in their perceptions of creativity)—raising classic psychometric threats to reliability and validity. In this talk, I attempt to address these limitations by capitalizing on recent developments in automated scoring of verbal creativity via semantic distance, a computational method that uses natural language processing to quantify the semantic relatedness of texts. Five studies compared the top-performing semantic models (e.g., GloVe, continuous bag of words) previously shown to have the highest correspondence to human relatedness judgments. We assessed these semantic models in relation to human creativity ratings from a canonical verbal creativity task and novelty/creativity ratings from two word association tasks. We find that a latent semantic distance factor—composed of the common variance from five semantic models—reliably predicts human ratings across all creativity tasks, with semantic distance explaining over 80% of the variance in creativity and novelty ratings. We also replicate an established experimental effect in the creativity literature and show that semantic distance correlates with other creativity measures, demonstrating convergent validity. I conclude by describing an open platform that can efficiently compute semantic distance, and I discuss potential applications of semantic distance for assessing creative language use.
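
As a rough illustration of the idea (not the specific pipeline described in the talk), semantic distance between a task prompt and a response can be computed as one minus the cosine similarity of their averaged word embeddings. The sketch below assumes pre-trained GloVe vectors loaded through gensim's downloader; the prompt and example responses are hypothetical alternate-uses answers.

```python
# Minimal sketch of semantic-distance scoring, assuming GloVe vectors via gensim.
# This is an illustrative approximation, not the exact method used in the studies.
import numpy as np
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-300")  # maps word -> 300-d GloVe vector

def semantic_distance(prompt: str, response: str) -> float:
    """Return 1 - cosine similarity between averaged word vectors."""
    def avg_vec(text):
        vecs = [model[w] for w in text.lower().split() if w in model]
        return np.mean(vecs, axis=0)
    a, b = avg_vec(prompt), avg_vec(response)
    cos = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - cos

# More semantically distant (less obvious) responses receive higher scores.
print(semantic_distance("brick", "build a wall"))        # common use -> smaller distance
print(semantic_distance("brick", "grind into pigment"))  # unusual use -> larger distance
```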