Perils and opportunities in using large language models in psychological research

Bibliographic Details
Published in: PNAS Nexus, 2024-07, Vol. 3 (7), p. pgae245
Main Authors: Abdurahman, Suhaib, Atari, Mohammad, Karimi-Malekabadi, Farzan, Xue, Mona J, Trager, Jackson, Park, Peter S, Golazizian, Preni, Omrani, Ali, Dehghani, Morteza
Format: Article
Language: English
Description:
The emergence of large language models (LLMs) has sparked considerable interest in their potential application in psychological research, mainly as a model of the human psyche or as a general text-analysis tool. However, the trend of using LLMs without sufficient attention to their limitations and risks, which we rhetorically refer to as "GPTology", can be detrimental given the easy access to models such as ChatGPT. Beyond existing general guidelines, we investigate the current limitations, ethical implications, and potential of LLMs specifically for psychological research, and show their concrete impact in various empirical studies. Our results highlight the importance of recognizing global psychological diversity, caution against treating LLMs (especially in zero-shot settings) as universal solutions for text analysis, and point to the need for transparent, open methods that address LLMs' opaque nature so that inference from AI-generated data is reliable, reproducible, and robust. While acknowledging LLMs' utility for automating tasks such as text annotation and for expanding our understanding of human psychology, we argue for diversifying human samples and broadening psychology's methodological toolbox to promote an inclusive, generalizable science, countering homogenization and over-reliance on LLMs.
ISSN: 2752-6542
DOI: 10.1093/pnasnexus/pgae245