LLMs Special Issue

Learning and communication pressures in neural networks: Lessons from emergent communication

Authors
  • Lukas Paul Achatius Galke (University of Southern Denmark (SDU), Odense, Denmark)
  • Limor Raviv (LEADS group, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands)

Abstract

Finding and facilitating commonalities between the linguistic behaviors of large language models and humans could lead to major breakthroughs in our understanding of the acquisition, processing, and evolution of language. However, most findings on human–LLM similarity can be attributed to training on human data. The field of emergent machine-to-machine communication provides an ideal testbed for discovering which pressures neural agents are naturally exposed to when learning to communicate in isolation, without any human language to start with. Here, we review three cases where mismatches between the emergent linguistic behavior of neural agents and humans were resolved by introducing theoretically motivated inductive biases. By contrasting humans, large language models, and emergent communication agents, we then identify key pressures at play in language learning and emergence: communicative success, production effort, learnability, and other psycho-/sociolinguistic factors. We discuss their implications and relevance to the field of language evolution and acquisition. By mapping out the inductive biases necessary to make agents' emergent languages more human-like, we not only shed light on the underlying principles of human cognition and communication, but also inform and improve the use of these models as valuable scientific tools for studying language learning, processing, use, and representation more broadly.


Keywords: language acquisition, language evolution, emergent communication, large language models, learning pressures, learning biases


Published on 2024-11-07

Peer Reviewed