Please note that this newsitem has been archived, and may contain outdated information or links.
4 June 2024, Computational Linguistics Seminar, Tanise Ceron
Given the widespread use of large language models (LLMs) in everyday systems, we need to understand whether they embed a specific worldview and what these views reflect. Recent studies report that, when prompted with political questionnaires, LLMs show left-liberal leanings. However, it remains unclear whether these leanings are reliable (robust to prompt variations) and whether they are consistent across policy domains and political orientations. In this talk, I will present the results of our study, in which we propose a series of tests to assess the reliability and consistency of LLMs' stances on political statements, based on a dataset of voting-advice questionnaires collected from seven EU countries and annotated for policy domains. We then evaluate LLMs ranging in size from 7B to 70B parameters and examine the extent to which their political worldview and orientation remain consistent. Finally, I’ll discuss the importance of taking these biases into account and the design questions they raise for downstream applications.
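To give a flavour of what "reliability" means here, the sketch below shows one possible way to probe it: pose the same political statement under several paraphrased prompt templates and measure how often the model's answers agree. This is not the authors' code; the `query_model` function and the templates are hypothetical placeholders for whatever model call and phrasings a study would actually use.

```python
# Minimal sketch (assumptions, not the study's implementation): test whether an
# LLM's stance on a statement is robust to prompt variations.
from collections import Counter

def query_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real API or local model.
    Returns a fixed answer here so the sketch runs end to end."""
    return "agree"

STATEMENT = "The state should raise the minimum wage."

# Same statement, differently phrased instructions.
TEMPLATES = [
    "Do you agree or disagree with the following statement? {s} Answer with one word.",
    "Statement: {s}\nRespond only with 'agree' or 'disagree'.",
    "Consider this policy position: {s} Is your stance 'agree' or 'disagree'?",
]

def stance_reliability(statement: str) -> float:
    """Fraction of prompt variations that yield the majority answer."""
    answers = [query_model(t.format(s=statement)).strip().lower() for t in TEMPLATES]
    _, majority_count = Counter(answers).most_common(1)[0]
    return majority_count / len(answers)

if __name__ == "__main__":
    print(f"Reliability across prompt variations: {stance_reliability(STATEMENT):.2f}")
```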