AAAI 2020 Tutorial – Guidelines for Human-AI Interaction

Organizers: Adam Fourney, Besmira Nushi, Dan Weld, Saleema Amershi

Location: Hilton New York Midtown, NY (room: Sutton South)

Date | Time: Saturday, February 8, 2020 | 8:30 am – 10:15 am

Tutorial Slides

Considerable research attention has focused on improving the raw performance of AI and ML systems, but much less on the best ways to facilitate effective human-AI interaction. Because of their probabilistic behavior and inherent uncertainty, AI-based systems differ fundamentally from traditional computing systems, and mismatches between AI capabilities and user experience (UX) design can cause frustrating and even harmful outcomes. The development and deployment of beneficial AI systems that afford appropriate user experiences therefore require guidelines to help AI developers make informed decisions about model selection, objective function design, and data collection. This tutorial will introduce the audience to a comprehensive set of guidelines for building systems and interfaces designed for fluid human-AI interaction. The guidelines were validated through a rigorous, three-step process described in the CHI 2019 paper, Guidelines for Human-AI Interaction. They recommend best practices for how AI systems should behave upon initial interaction, during regular interaction, when they're inevitably wrong, and over time. Most importantly, the tutorial will also reflect on the research and engineering challenges whose solutions can enable the implementation of such guidelines in real-world AI systems.
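To make this concrete, here is a minimal, hypothetical sketch (not from the tutorial or the paper) of how the guidelines' advice for handling inevitable AI mistakes can surface in code: gate automatic actions on model confidence, and fall back to suggesting or staying silent so that errors are cheap to notice, dismiss, and undo. The names (`Suggestion`, `CONFIDENCE_THRESHOLD`, the tiers themselves) are illustrative assumptions.

```python
from dataclasses import dataclass

# Assumed tuning knob; in practice such thresholds would be set via user studies.
CONFIDENCE_THRESHOLD = 0.9

@dataclass
class Suggestion:
    text: str
    confidence: float  # the model's probability estimate in [0, 1]

def present(suggestion: Suggestion) -> str:
    """Decide how to surface a prediction based on its confidence."""
    if suggestion.confidence >= CONFIDENCE_THRESHOLD:
        # High confidence: act automatically, but keep the action easy to undo.
        return f"auto-apply: {suggestion.text} (undo available)"
    elif suggestion.confidence >= 0.5:
        # Moderate confidence: offer a suggestion the user can invoke or dismiss.
        return f"suggest: {suggestion.text}?"
    else:
        # Low confidence: stay quiet rather than interrupt with a likely mistake.
        return "no-op"
```

The design choice here mirrors the guidelines' framing: the model's uncertainty is not hidden from the UX layer but drives how assertively the system behaves.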

This is the first time this tutorial is being organized, and we hope it will promote an inter-community discussion on how to build and deploy human-centered machine learning. The audience should be familiar with basic concepts in AI and ML, such as training and validation, optimization techniques, and objective functions. For feedback and questions, please contact us at [email protected].

Schedule

8:30 – 9:15: Introduction to human-AI interaction guidelines (by Saleema Amershi and Adam Fourney)

An introduction to user interaction guidelines as they relate to AI-based systems. This will include a discussion of the limitations of traditional guidelines in supporting AI and an overview of the new guidelines for human-AI interaction. The human-AI interaction guidelines will be explained through real-world examples from our user studies and follow-up evaluations.

9:15 – 9:45: Implications for ML and engineering (by Besmira Nushi)

A summary of the implications and prerequisites that human-AI collaboration in general, and the presented guidelines in particular, impose on algorithm design, engineering, and tool infrastructure, along with a mapping of current open challenges in machine learning to the presented guidelines.
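One such engineering implication, drawn from Bansal et al. (AAAI 2019, in the reading list below), is that model updates should be evaluated for backward compatibility with users' expectations, not just accuracy. A hedged sketch of that measure, with illustrative names, could look like this: among the examples the old model got right, what fraction does the updated model also get right?

```python
def backward_compatibility(old_preds, new_preds, labels):
    """Fraction of the old model's successes that the new model preserves.

    A low score means the update breaks predictions users had learned
    to trust, even if overall accuracy improved.
    """
    old_correct = [
        i for i, (p, y) in enumerate(zip(old_preds, labels)) if p == y
    ]
    if not old_correct:
        return 1.0  # vacuously compatible: the old model was never right
    preserved = sum(1 for i in old_correct if new_preds[i] == labels[i])
    return preserved / len(old_correct)
```

For example, if an update fixes some errors but regresses on half of the cases the old model handled correctly, the score is 0.5, a signal that the new model may disrupt users' mental models despite better aggregate metrics.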

9:45 – 10:15: Algorithmic challenges for human-centered AI (by Dan Weld)

Decision-theoretic fundamentals of mixed-initiative and adaptive interfaces. Principles of explicable, legible, predictable, and transparent AI. Algorithms for explaining learned classifiers and UI tradeoffs in communicating explanations.
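As a small, assumed illustration of the "explaining learned classifiers" topic (not the tutorial's own code): for a linear model, a per-prediction explanation can simply rank each feature's contribution, weight times value, by magnitude. The function and parameter names are hypothetical.

```python
def explain_linear(weights, feature_values, feature_names, top_k=3):
    """Rank features by |weight * value| for one linear-model prediction.

    Returns the top_k (name, contribution) pairs, the largest drivers
    of this particular prediction.
    """
    contributions = [
        (name, w * x)
        for name, w, x in zip(feature_names, weights, feature_values)
    ]
    # Sort by absolute contribution so large negative drivers rank too.
    contributions.sort(key=lambda pair: abs(pair[1]), reverse=True)
    return contributions[:top_k]
```

Even this trivial explainer exposes the UI tradeoff the session covers: showing more contributions is more faithful to the model, but a shorter list is easier for users to absorb.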

Relevant Literature

Eric Horvitz. "Principles of mixed-initiative user interfaces." In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 159-166. ACM, 1999.

Saleema Amershi, Dan Weld, Mihaela Vorvoreanu, Adam Fourney, Besmira Nushi, Penny Collisson, Jina Suh et al. "Guidelines for Human-AI Interaction." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, p. 3. ACM, 2019.

Rafal Kocielnik, Saleema Amershi, and Paul N. Bennett. "Will You Accept an Imperfect AI?: Exploring Designs for Adjusting End-user Expectations of AI Systems." In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, p. 411. ACM, 2019.

Gagan Bansal, Besmira Nushi, Ece Kamar, Daniel S. Weld, Walter S. Lasecki, and Eric Horvitz. "Updates in Human-AI Teams: Understanding and Addressing the Performance/Compatibility Tradeoff." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 2429-2437. 2019.

Gagan Bansal, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, and Eric Horvitz. "Beyond Accuracy: The Role of Mental Models in Human-AI Team Performance." In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 7, no. 1, pp. 2-11. 2019.

Daniel S. Weld and Gagan Bansal. "The Challenge of Crafting Intelligible Intelligence." Communications of the ACM 62, no. 6 (2019): 70-79.

Besmira Nushi, Ece Kamar, and Eric Horvitz. "Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure." In Sixth AAAI Conference on Human Computation and Crowdsourcing. 2018.

Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, and Hanna Wallach. "Manipulating and Measuring Model Interpretability." arXiv preprint arXiv:1802.07810 (2018).
