Addressing Fairness, Accountability, and Transparency in Machine Learning


Posted by Microsoft Research

Machine learning and big data were certainly hot topics in the tech community in 2014. But what are the real-world implications for how we interpret what happens inside the data centers that churn through mountains of seemingly endless data?

For Microsoft machine learning researcher Hanna Wallach (@hannawallach), opportunity lies outside the box. As an invited speaker at the NIPS 2014 workshop on Fairness, Accountability, and Transparency in Machine Learning, Wallach spoke about how her shift in research to the emerging field of computational social science led her to new insights about how machine learning methods can be applied to analyze real-world data about society.


Her talk, "Big Data, Machine Learning, and the Social Sciences," now available online, focuses on the four keys that she says lie at the heart of the matter: data, questions, models, and findings.

"Within computer science, there’s a lot of enthusiasm about big data at the moment," Wallach says. "But when it comes to addressing bias, fairness, and inclusion, perhaps we need to focus our attention on the granular nature of big data, or the fact that there may be many interesting data sets, nested within these larger collections, for which average-case statistical patterns may not hold."

A researcher at Microsoft Research New York City, Wallach is also a core faculty member in the recently formed Computational Social Science Initiative at the University of Massachusetts Amherst.