When Are Search Completion Suggestions Problematic?
- Alexandra Olteanu,
- Fernando Diaz,
- Gabriella Kazai
Computer-Supported Cooperative Work and Social Computing (CSCW) |
Organized by ACM
CSCW 2020 Honorable Mention
Problematic web search query completion suggestions—perceived as biased, offensive, or in some other way harmful—can reinforce existing stereotypes and misbeliefs, and even nudge users towards undesirable patterns of behavior. Locating such suggestions is difficult, not only due to the long-tailed nature of web search, but also due to differences in how people assess potential harms. Grounding our study in web search query logs, we explore when system-provided suggestions might be perceived as problematic through a series of crowd experiments where we systematically manipulate: the search query fragments provided by users, possible user search intents, and the list of query completion suggestions. To examine why query suggestions might be perceived as problematic, we contrast them with an inventory of known types of problematic suggestions. We report our observations around differences in the prevalence of a) suggestions that are problematic on their own versus b) suggestions that are problematic for the query fragment provided by a user, both for common informational needs and in the presence of web search voids—topics searched by few to no users. Our experiments surface a rich array of scenarios where suggestions are considered problematic, including due to the context in which they were surfaced. The prevalence of suggestions perceived as problematic only for certain user inputs, compounded by the elusive nature of many such scenarios, raises concerns about blind spots in data annotation practices that may lead to some types of problematic suggestions being overlooked.
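To make the contrast between conditions a) and b) concrete, the following is a minimal sketch (not the authors' code) of how such paired crowd judgments might be tallied; the `Judgment` dataclass, the `prevalence` helper, and the example data are all hypothetical illustrations:

```python
# A minimal sketch (not the authors' code) of the two annotation conditions
# contrasted in the paper: judging a completion suggestion on its own versus
# judging it together with the query fragment that elicited it. All names
# and example data here are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Judgment:
    suggestion: str                # the system-provided completion
    query_fragment: Optional[str]  # None = suggestion shown without context
    problematic: bool              # a crowd worker's assessment


def prevalence(judgments, with_context: bool) -> float:
    """Fraction of judgments marked problematic under one condition."""
    subset = [j for j in judgments
              if (j.query_fragment is not None) == with_context]
    return sum(j.problematic for j in subset) / len(subset) if subset else 0.0


# The same suggestion can look innocuous alone (condition a) but
# problematic when paired with the user's query fragment (condition b).
judgments = [
    Judgment("are a hoax", None, False),
    Judgment("are a hoax", "vaccines", True),
    Judgment("near me", None, False),
    Judgment("near me", "restaurants", False),
]

print(f"problematic on their own:      {prevalence(judgments, with_context=False):.2f}")
print(f"problematic for a given input: {prevalence(judgments, with_context=True):.2f}")
```

Under this framing, a gap between the two prevalence estimates is what signals an annotation blind spot: a pipeline that labels suggestions in isolation would miss the context-dependent cases the paper highlights.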
Failures of imagination: Discovering and measuring harms in language technologies
Auditing natural language processing (NLP) systems for computational harms remains an elusive goal. Doing so, however, is critical given the proliferation of language technologies and applications enabled by increasingly powerful natural language generation and representation models. Computational harms arise not only from the content people produce, but also from how content is embedded, represented, and generated by large-scale and sophisticated language models. This webinar will cover the challenges of locating and measuring potential harms that language technologies—and the data they ingest or generate—might surface, exacerbate, or cause. Such harms range from more overt issues, like surfacing offensive speech or reinforcing stereotypes, to more subtle ones, like nudging users toward undesirable patterns of behavior or triggering…