Please note that this newsitem has been archived, and may contain outdated information or links.
25 April 2024, Computational Linguistics Seminar, Paul Röttger
Much recent work seeks to evaluate values and opinions in large language models (LLMs), motivated by concerns around real-world LLM applications. For example, politically biased LLMs may subtly influence society when they are used by millions of people. Such real-world concerns, however, stand in stark contrast to the artificiality of current evaluations using multiple-choice surveys and questionnaires: real users do not ask LLMs survey questions. In my talk, I will present recent work in which we challenge the prevailing constrained evaluation paradigm for values and opinions in LLMs. I will also outline the steps we are now taking to build more realistic, unconstrained evaluations of political values and opinions in LLMs.