Please note that this newsitem has been archived, and may contain outdated information or links.
9 December 2024, Language Evolution & Learning, Tessa Verhoef
Human cognition constrains how we communicate. Our cognitive biases and preferences interact with the processes that drive language emergence and change in non-trivial ways. A powerful method for discerning the roles of cognitive biases and of processes like language learning and use in shaping linguistic structure is to build agent-based models. Recent advances in computational linguistics and deep learning have sparked renewed interest in such simulations, creating the opportunity to model increasingly realistic phenomena. These models simulate emergent communication: the spontaneous development of a communication system through repeated interactions between individual neural network agents. However, a crucial challenge in this line of work is that such artificial learners still often behave differently from human learners. Directly inspired by human artificial language learning studies, we proposed a novel framework for simulating language learning and change, in which agents first learn an artificial language and then use it to communicate, with the aim of studying the emergence of specific linguistic properties. I will present two studies that use this framework to simulate the emergence of a well-known language phenomenon: the word-order/case-marking trade-off. I will also share some very recent findings in which we test for the presence of a well-known human cross-modal mapping preference (the bouba-kiki effect) in vision-and-language models. Cross-modal associations play an essential role in human language understanding, learning, and evolution, but our findings reveal that current multimodal language models do not align well with such human preferences.