Explainability

Authors:
Alejandro Gutierrez Munoz, Tommy Guy, Sally Kellaway

Trust and understanding of AI models’ predictions through Customer Insights

AI models are becoming a normal part of many business operations, driven by advances in AI technologies and the democratization of AI. While AI is increasingly important in decision making, it can be challenging to understand what influences the outcomes of AI models. Critical details like the information used as input, the influence of missing data, and the use of unintended or sensitive input variables can all have an impact on a model’s output. To use AI responsibly and to trust it enough to make decisions, we must have tools and processes in place to understand how a model reaches its conclusions.

Microsoft Dynamics 365 Customer Insights goes beyond just a predicted outcome and provides additional information that helps you better understand the model and its predictions. Using the latest AI technologies, Customer Insights surfaces the main factors that drive its predictions. In this blog post, we will talk about how Customer Insights’ out-of-the-box AI models enable enterprises to better understand and trust their predictions, as well as what actions can be taken based on the additional model interpretability.

Figure 1: Explainability information on the results page of the Customer Lifetime Value out-of-the-box model, designed to help you interpret model results.

What is model interpretability and why is it important?

AI models are sometimes described as black boxes that consume information and output a prediction – where the inner workings are unknown. This raises serious questions about our reliance on AI technology. Can the model’s prediction be trusted? Does the prediction make sense? AI model interpretability has emerged over the last few years as an area of research with the goal of providing insights into how AI models reach decisions.

AI models leverage information from the enterprise (data about customers, transactions, historic data, etc.) as inputs. We call these inputs features. Features are used by the model to determine the output. One way to achieve model interpretability is explainable AI, or model explainability: a set of techniques that describe which features influence a prediction. We’ll talk about two approaches: local explainability, which describes how the model arrived at a single prediction (say, a single customer’s churn score), and global explainability, which describes which features are most useful for making all predictions. Before we describe how a model produces explainability output and how you should interpret it, we need to describe how we construct features from input data.

AI Feature Design with Interpretability in mind

AI models are trained using features: transformations of raw input data that make it easier for the model to use. These transformations are a standard part of the model development process.

For instance, the input data may be a list of transactions with dollar amounts, while the features might be the number of transactions in the last thirty days and the average transaction value. (Many features summarize more than one input row.) Before features are created, raw input data needs to be prepared and “cleaned”. In a future post, we’ll take a deep dive into data preparation and the role that model explainability plays in it.
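To make the transaction example concrete, here is a minimal sketch of how raw transaction rows could be rolled up into such features. It uses pandas with hypothetical column names (customer_id, timestamp, amount) and is not the actual Customer Insights featurization pipeline.

```python
import pandas as pd

# Hypothetical raw input: one row per transaction.
transactions = pd.DataFrame({
    "customer_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2021-06-01", "2021-06-10", "2021-06-25", "2021-05-02", "2021-06-20",
    ]),
    "amount": [4.50, 3.75, 5.25, 120.00, 80.00],
})

as_of = pd.Timestamp("2021-06-30")
recent = transactions[transactions["timestamp"] >= as_of - pd.Timedelta(days=30)]

# Each feature summarizes many input rows into one value per customer.
features = recent.groupby("customer_id")["amount"].agg(
    txn_count_last_30_days="count",
    avg_txn_value_last_30_days="mean",
).reset_index()

print(features)
```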

To provide a more concrete example of what a feature is and how it might be important to the model’s prediction, take two features that might help predict customer churn: frequency of transactions and number of product types bought. In a coffee shop, frequency of transactions is likely a great predictor of continued patronage: the regulars who walk by every morning will likely continue to do so. But those regulars may always get the same thing: I always get a 12 oz black Americano and never a mochaccino or a sandwich. That means the number of product types I buy isn’t a good predictor of my churn: I buy the same product, but I buy it every morning.

Conversely, the bank down the road may observe that I rarely visit the branch to transact. However, I’ve got a mortgage, two bank accounts and a credit card with that bank. The bank’s churn predictions might rely on the number of products/services bought rather than frequency of buying a new product. Both models start with the same set of facts (frequency of transactions and number of product types) and predict the same thing (churn) but have learned to use different features to make accurate predictions. Model authors created a pair of features that might be useful, but the model ultimately decides how or whether to use those features based on the context.

Feature design also requires understandable names for the features. If a user doesn’t know what a feature means, it’s hard for them to act on the fact that the model thinks it’s important! During feature construction, AI engineers work with Product Managers and Content Writers to create human-readable names for every feature. For example, a feature representing the average number of transactions for a customer in the last quarter could look something like ‘avg_trans_last_3_months’ in the data science experimentation environment. If we were to present features like this to business users, it could be difficult for them to understand exactly what it means, so every feature is surfaced under a human-readable display name instead.
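In practice, this can be as simple as maintaining a mapping from internal feature names to display names. The names below are illustrative only, not the real Customer Insights feature catalog.

```python
# Hypothetical mapping from internal feature names to human-readable display names.
FEATURE_DISPLAY_NAMES = {
    "avg_trans_last_3_months": "Average number of transactions per month (last quarter)",
    "txn_count_last_30_days": "Number of transactions in the last 30 days",
    "num_product_types": "Number of distinct product types purchased",
}

def display_name(feature: str) -> str:
    # Fall back to the raw name so unexpected features are still shown.
    return FEATURE_DISPLAY_NAMES.get(feature, feature)
```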

Explainability via Game Theory

A main goal in model explainability is to understand the impact of including a feature in a model. For instance, one could train a model with all the features except one, then train a model with all the features. The difference in prediction accuracy between the two models is a measure of the importance of the feature that was left out: if the model with the feature is much more accurate than the model without it, then the feature was very important.

Figure 2: The basic idea to compute explainability is to understand each feature’s contribution to the model’s performance by comparing performance of the whole model to performance without the feature. In reality, we use Shapley values to identify each feature’s contribution, including interactions, in one training cycle.

There are nuances related to feature interaction (e.g., including city name and zip code may be redundant: removing one won’t impact model performance but removing both would) but the basic idea remains the same: how much does including a feature contribute to model performance?
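As a thought experiment, the leave-one-out idea looks roughly like the sketch below (using scikit-learn, with hypothetical pandas DataFrames for the train and test sets, and ROC AUC standing in for “accuracy”). As the next paragraph explains, this brute-force approach is too expensive to use directly.

```python
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def leave_one_out_importance(X_train, y_train, X_test, y_test):
    """Importance of each feature = drop in model quality when it is left out.

    X_train and X_test are assumed to be pandas DataFrames with named columns.
    """
    full_model = GradientBoostingClassifier().fit(X_train, y_train)
    full_score = roc_auc_score(y_test, full_model.predict_proba(X_test)[:, 1])

    importances = {}
    for column in X_train.columns:
        # Retrain the model without this one feature.
        reduced = GradientBoostingClassifier().fit(
            X_train.drop(columns=[column]), y_train
        )
        reduced_score = roc_auc_score(
            y_test, reduced.predict_proba(X_test.drop(columns=[column]))[:, 1]
        )
        # Bigger drop in score = more important feature.
        importances[column] = full_score - reduced_score
    return importances
```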

With hundreds of features, it’s too expensive to train a model leaving each feature out one by one. Instead, we use a concept called Shapley values to identify feature contributions from a single training cycle. Shapley values are a technique from game theory, where the goal is to understand the gains and costs of several actors working in a coalition. In machine learning, the “actors” are features, and the Shapley value algorithm can estimate each feature’s contribution even when features interact with one another.

If you are looking for (much!) more detail about Shapley analysis, a good place to start is this GitHub repository: GitHub – slundberg/shap: A game theoretic approach to explain the output of any machine learning model.
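For tree-based models, the shap package linked above can compute these per-feature contributions directly. The following is a generic, self-contained sketch on synthetic data, not the Customer Insights pipeline itself.

```python
import shap
import pandas as pd
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data standing in for prepared churn features.
X_raw, y = make_classification(n_samples=500, n_features=5, random_state=0)
X = pd.DataFrame(X_raw, columns=[f"feature_{i}" for i in range(5)])
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer estimates Shapley values efficiently for tree ensembles,
# without retraining the model once per feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one row per record, one column per feature

# Local explainability: each feature's contribution to the first record's score.
print(dict(zip(X.columns, shap_values[0])))
```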

Figure 3: SHAP contributions to the model’s prediction

Other types of models, like deep learning neural networks, require novel methods to discover feature contributions. Customer Insights’ sentiment model is a deep learning transformer model that uses thousands of features. To explain the impact of each feature, we leverage a technique known as integrated gradients. Most deep learning models are implemented as neural networks, which learn by fine-tuning the weights of the connections between the neurons in the network. Integrated gradients evaluate these connections to explain how different inputs influence the results. This lets us measure which words in a sentence contribute most to the final sentiment score.
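As a rough illustration of the idea (and not the production sentiment model), integrated gradients can be approximated by averaging gradients along a straight path from a neutral baseline input to the real input. The sketch below assumes a differentiable PyTorch model that returns class scores for a batch of numeric feature vectors.

```python
import torch

def integrated_gradients(model, x, baseline, target_class, steps=50):
    """Approximate integrated gradients for one input using a Riemann sum."""
    # Interpolate between the baseline and the real input along a straight path.
    alphas = torch.linspace(0.0, 1.0, steps).unsqueeze(1)   # (steps, 1)
    path = baseline + alphas * (x - baseline)                # (steps, n_features)
    path.requires_grad_(True)

    # Gradient of the target class score with respect to each point on the path.
    scores = model(path)[:, target_class]
    grads = torch.autograd.grad(scores.sum(), path)[0]       # (steps, n_features)

    # Average the gradients and scale by how far the input is from the baseline.
    return (x - baseline) * grads.mean(dim=0)
```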

Record level explainability information generated by the Sentiment analysis model.

Figure 4: Model level explainability information generated for the Sentiment Analysis model.

Figure 5: Record level explainability information generated by the Sentiment analysis model.

How to leverage the interpretability of a model

AI models output a prediction for each record. A record is an instance or sample from the set we want to score. For example, for a churn model in Customer Insights, each customer is a record to score. Explainability is first computed at the record level (local explainability), meaning we compute the impact of each feature on the prediction for a single record. If we are interested in a particular set of records (e.g., a specific set of customer accounts I manage), or just a few examples to validate our intuitions about which features might be important to the model, looking at local explainability makes sense. When we are interested in the main features across all scored records, we need to aggregate the impact across records, which gives us global explainability.
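Continuing the earlier SHAP sketch (reusing the hypothetical shap_values and X from that example), record-level contributions can be aggregated into global importance, for instance as the mean absolute contribution of each feature across all scored records.

```python
import numpy as np
import pandas as pd

# `shap_values` has one row per scored record and one column per feature.
local_explanations = pd.DataFrame(shap_values, columns=X.columns)

# Local explainability: the contributions behind a single record's score,
# sorted by how strongly each feature pushed the prediction.
one_record = local_explanations.iloc[0].sort_values(key=np.abs, ascending=False)

# Global explainability: average magnitude of each feature's impact across all records.
global_importance = local_explanations.abs().mean().sort_values(ascending=False)
print(global_importance.head(10))
```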

Figure 6: Global explainability example from the Churn model.

Features can impact the score in a positive or a negative way. For instance, a high number of support interactions might make a customer 13% more likely to churn, while more transactions per week might make the customer 5% less likely to churn. In these cases, a high numerical value for each feature (support calls or transactions per week) has an opposing effect on the churn outcome. Feature impact therefore needs to consider both magnitude (size of impact) and directionality (positive or negative).
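For a single record, the sign of each contribution gives the direction and the absolute value gives the strength, so a simple presentation might look like the following (the feature names and values are illustrative, echoing the 13% and 5% example above).

```python
# Illustrative local contributions to one customer's churn score (not real output).
contributions = {
    "Number of support interactions": +0.13,  # pushes churn risk up
    "Transactions per week": -0.05,           # pushes churn risk down
    "Days since last purchase": +0.02,
}

# Sort by magnitude, report direction separately.
for feature, value in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    direction = "increases" if value > 0 else "decreases"
    print(f"{feature}: {direction} churn likelihood by {abs(value):.0%}")
```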

 

Figure 7: Local explainability example for the Business-to-Business Churn model.

Acting on explainability information

Now that we have made the case for adding explainability as an important output of our AI models, the question is: what do I do with this information? For model creators, explainability is a very powerful tool during feature design and model debugging, as it can highlight data issues introduced during ingestion, clean-up, transformations, and so on. It also helps validate the behavior of the model early on: does the way the model makes predictions pass a “sniff test”, with obviously important features ranking as important in the model? For consumers of AI models, it helps validate their assumptions about what should be important to the model. It can also surface trends and patterns in your customer base that are worth paying attention to when planning next steps.
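One lightweight way to automate that “sniff test” during model development is to check whether the features a domain expert expects to matter actually rank near the top of the global importance list. The check below is hypothetical and reuses the global_importance series from the earlier sketch.

```python
# Features a domain expert expects to be influential for churn (hypothetical list).
expected_top_features = {"Transactions per week", "Days since last purchase"}

top_10 = set(global_importance.head(10).index)
missing = expected_top_features - top_10

if missing:
    print(f"Sniff test warning: expected features not in the top 10: {sorted(missing)}")
else:
    print("Sniff test passed: all expected features rank highly.")
```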

Explainability is an integral part of providing more transparency into AI models, how they work, and why they make a particular prediction. Transparency is one of the core principles of Responsible AI, which we will explore in more detail in a future blog post.