LLMs Special Issue

Whither developmental psycholinguistics?

Author
  • Victor Gomes (University of Pennsylvania)

Abstract

Large Language Models (LLMs; e.g., GPT-n) have attracted the attention of psycholinguists who see in them potential solutions to ancient problems. This paper argues that, thus far, LLMs have not in fact suggested any new solutions; they merely appear to do so by virtue of their sheer size and their “double” opaqueness (both as models and as products). In the realm of cross-situational word learning, LLMs run into the same issues that long-discussed “global models” do in accounting for the rapidity and low-resourced nature of language acquisition. In the realm of meaning, they run into largely the same issues as the long-established conceptual theories to which they are often compared. In neither case do they appear to represent a true resolution of known issues, and so broadly encouraging the use of LLMs in developmental psycholinguistics is a gamble. This paper then argues that LLMs come with a range of immediate costs (to privacy, labor, and the climate), and so encouraging their use is not simply a low-risk gamble. These costs should be kept in mind when deciding whether to conduct any research with LLMs, whether the aim is to prove that they have some capacity or that they lack it. One way of keeping these costs in mind is to learn about them and talk about them with each other, rather than deciding that ethical questions fall solely under the purview of some other discipline(s).

Keywords: word learning, concepts, connectionism, language acquisition

Published on
2024-11-07

Peer Reviewed