Please note that this newsitem has been archived, and may contain outdated information or links.
16 April 2014, Computational Social Choice Seminar, Justin Kruger
Abstract
Crowdsourcing is an important tool, e.g., in computational linguistics and computer vision, for efficiently labelling large amounts of data using nonexpert annotators. The individual annotations collected need to be aggregated into a single collective annotation, and the hope is that the quality of this collective annotation will be comparable to that of a traditionally sourced expert annotation. In practice, most scientists working with crowdsourcing methods use simple majority voting to aggregate their data, although some have also used probabilistic models and treated aggregation as a problem of maximum likelihood estimation. The observation that the aggregation step in a collective annotation exercise may be regarded as a problem of social choice has only been made very recently. Following up on this observation, in this talk I will show how the axiomatic method, as practiced in social choice theory, can contribute to this important domain, and I will develop an axiomatic framework for collective annotation, focusing, amongst other things, on the notion of an annotator's bias. This theoretical study is complemented by a short discussion of a crowdsourcing experiment using data from dialogue modelling in computational linguistics.
This is joint work with Ulle Endriss, Raquel Fernández, and Ciyang Qing.
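To illustrate the majority-voting baseline mentioned in the abstract, here is a minimal sketch in Python. It assumes annotations are collected as a mapping from items to the labels assigned by individual annotators; the function name and example data are hypothetical, and this is not the axiomatic framework developed in the talk.

    from collections import Counter

    def aggregate_majority(annotations):
        """Map each item to its most frequently assigned label.

        annotations: dict mapping item -> list of labels from
        individual annotators. Ties are broken arbitrarily by
        Counter.most_common.
        """
        return {item: Counter(labels).most_common(1)[0][0]
                for item, labels in annotations.items()}

    # Example: three nonexpert annotators label two dialogue utterances.
    votes = {
        "utterance-1": ["question", "question", "statement"],
        "utterance-2": ["statement", "statement", "statement"],
    }
    print(aggregate_majority(votes))
    # {'utterance-1': 'question', 'utterance-2': 'statement'}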
For more information, see http://www.illc.uva.nl/~ulle/seminar/ or contact Ulle Endriss (ulle.endriss at uva.nl).