Can LLMs Simulate Human Behavioral Variability? A Case Study in the Phonemic Fluency Task
Published in Proceedings of the 15th Workshop on Cognitive Modeling and Computational Linguistics, 2026
Large language models (LLMs) are increasingly explored as substitutes for human participants in cognitive tasks, but their ability to simulate human behavioral variability remains unclear. This study examines whether LLMs can approximate individual differences in the phonemic fluency task, in which participants generate words beginning with a target letter. We evaluated 34 distinct models across 45 configurations from major closed-source and open-source providers, and compared their outputs to responses from 106 human participants. While some models, especially Claude 3.7 Sonnet, approximated human averages and lexical preferences, none reproduced the full range of human variability. LLM outputs were consistently less diverse, with newer models and thinking-enabled modes often reducing rather than increasing variability. Network analysis further revealed fundamental differences in retrieval structure between humans and the most human-like model. Ensemble simulations combining outputs from diverse models also failed to recover human-level diversity, likely due to high vocabulary overlap across models. These results, together with converging evidence from other cognitive and linguistic tasks, highlight key limitations in using LLMs to simulate human behavior.
Recommended citation: Qiu, M., Brisebois, Z., & Sun, S. (2026). Can LLMs simulate human behavioral variability? A case study in the phonemic fluency task. Proceedings of the 15th Workshop on Cognitive Modeling and Computational Linguistics, 250–263.