The crowd is made of people: Observations from large-scale crowd labelling

ACM SIGIR Conference on Human Information Interaction and Retrieval

Published by ACM

Like many other researchers, at Microsoft Bing we use external “crowd” judges to label results from a search engine—especially, although not exclusively, to obtain relevance labels for offline evaluation in the Cranfield tradition. Crowdsourced labels are relatively cheap, and hence very popular, but they are prone to disagreements, spam, and various biases that can appear as unexplained “noise” or “error”. In this paper, we provide examples of problems we have encountered running crowd labelling at large scale and around the globe, for search evaluation in particular. We demonstrate effects due to the time of day and day of week at which a label is given; fatigue; anchoring; exposure; left-side bias; task switching; and simple disagreement between judges. Rather than being simple “error”, these effects are consistent with well-known physiological and cognitive factors. “The crowd” is not some abstract machinery, but is made of people. Human factors that affect people’s judgement behaviour must be considered both when designing research evaluations and when interpreting evaluation metrics.