Techniques for ML Model Transparency and Debugging
Without good models and the right tools to interpret them, data scientists risk making decisions based on hidden biases, spurious correlations, and false generalizations. This has led to a rallying cry for model interpretability. Yet the concept of interpretability remains nebulous, and researchers and tool designers lack actionable guidelines for incorporating interpretability into models and the tools that accompany them. This panel brings together experts on visualization, machine learning, and human-computer interaction to present their views and discuss these complicated issues.
Speakers: Gonzalo Ramos (Microsoft Research), Daniel S. Weld (University of Washington), Matthew Kay (University of Michigan), Rich Caruana (Microsoft Research)
- Gonzalo Ramos, Principal Researcher
- Rich Caruana, Senior Principal Researcher
- Steven Drucker, Partner Research Manager
Series: Microsoft Research Faculty Summit
Quantum Computing and Workforce, Curriculum, and Application Development: Case study
Speakers: Krysta M. Svore, Martin Roetteler
Crowd, Cloud and the Future of Work: Updates from Human-AI Computation
Speakers: Besmira Nushi, Vani Mandava
Empowering People to Achieve More: How Useful a Concept is Productivity?
Speakers: Brendan Murphy
Productivity in Software Development
Speakers: Neel Sundaresan, Margaret-Anne Storey, Prem Kumar Devanbu
Accessible Virtual Reality
Speakers: Eyal Ofek
Calendar.help: A Virtual Meeting Scheduling Assistant
Speakers: Pamela Bhattacharya
Visual Studio IntelliCode
Speakers: Mark Wilson-Thomas
Microsoft Teams: Collaborate with Any Researcher Anywhere
Speakers: Jethro Seghers
Project Alava: Programming Webs of Microcontrollers
Speakers: James Devine, Teddy Seyed
AI in PowerPoint
Speakers: Kostas Seleskerov