Explaining Explainability: Feeding your hunger for responsible AI

Unissant Team
September 11, 2024

By Vishal Deshpande, Chief Data Analytics Officer

Imagine a doctor who gives you a diagnosis but can't explain how they reached that conclusion. Doesn’t instill a lot of confidence, right? That's how some AI systems operate: their inner workings are mysterious, and it’s hard to pinpoint why they make a specific prediction.

In the federal government, that lack of transparency is problematic. Enter Explainable AI (XAI). XAI is all about understanding how AI systems—especially complex ones—arrive at their decisions. It is an essential concept for ensuring the responsible development and use of AI: it allows us to build trust, identify and address biases, and ultimately achieve better AI systems.

There are different techniques for achieving XAI, depending on the type of AI model you’re creating. Some methods involve visualizing how different factors contribute to the final decision, while others focus on creating simpler models that are easier to interpret. Let’s explore various XAI techniques using an analogy that’s sure to make you hungry for more.

Everyone’s favorite foodie friend

We all have that one friend who is a massive foodie. They have an uncanny ability to recommend delicious restaurants that never disappoint, and you're curious about what goes into their choices. XAI is like having your friend explain their thought process.

Explaining those thought processes—or in this case, the way these AI models work—requires us to understand both the overall behavior of the AI model (global explainability) and the individual predictions made by the model (local explainability).

Global explainability: what are their recommendation habits?

Global explainability aims to understand your foodie friend’s recommendation habits. You’re looking to uncover:

  • Feature importance: This is like asking your friend, "What factors do you consider most when recommending a restaurant?" They might say things like "cuisine type," "ratings," or "dietary restrictions."
  • Partial dependence plots: Imagine telling your friend you're craving spicy food. This technique would show how the chance of them recommending a Thai restaurant increases as spiciness preference rises.
  • Tree-based explainability: Perhaps your friend has a series of questions they ask themselves—“Do you prefer something casual? Yes/No.” “Are you looking for vegetarian options? Yes/No.” XAI can reveal these steps, making their logic easier to follow. (A short code sketch after this list illustrates all three ideas.)
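
To make these habits concrete, here is a minimal Python sketch using scikit-learn on an invented “restaurant” dataset (the feature names and data are hypothetical, purely for illustration). It touches all three ideas above: built-in feature importances, a partial dependence calculation, and a shallow decision tree whose question chain can be printed.

```python
# Minimal sketch of global explainability on a toy, hypothetical "restaurant" dataset.
# Assumes scikit-learn; the feature names and data are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy stand-in for the friend's history: each row is a past restaurant visit,
# and the label is whether it turned out to be a good recommendation.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["spiciness", "price", "rating", "distance"]  # hypothetical factors

model = RandomForestClassifier(random_state=0).fit(X, y)

# 1. Feature importance: which factors the model leans on most overall.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")

# 2. Partial dependence: how the predicted chance of a good recommendation
#    changes as "spiciness" (feature 0) varies, averaged over the other features.
pdp = partial_dependence(model, X, features=[0])
print(pdp["average"])

# 3. Tree-based explainability: a shallow decision tree whose yes/no question
#    chain can be read directly, like the friend's mental checklist.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
```

In practice, you would run the same calls against your own trained model and real training data.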

Local explainability: why that specific restaurant?

Local explainability focuses on understanding the individual predictions made by the model…or in this case, your food-loving friend’s specific restaurant recommendations. Techniques include the following (a short code sketch after the list shows the first two in action):

  • SHAP (SHapley Additive exPlanations): This method assigns credit for a prediction to different features based on game theory concepts, providing a detailed breakdown of how each feature contributed to the outcome. Consider this the same as asking your friend, "Why did you recommend that specific Italian place?" SHAP would show how much weight they gave to your love of pasta (a big factor) versus your usual preference for trying new cuisines.
  • LIME (Local Interpretable Model-Agnostic Explanations): LIME creates a simplified local model around a specific prediction, explaining that particular instance. Imagine your friend makes a simplified recommendation for a single occasion (e.g., anniversary dinner). This is akin to a mini-version of their usual thought process. LIME does this for AI models, explaining a specific recommendation.
  • Anchors: Anchors are short explanations for why the AI made a certain recommendation. This is like your friend pointing at a restaurant's menu and saying, "This extensive pasta selection tells me you'd love this place!"
  • Counterfactual explanations: This technique explores how your preferences would have to change for the model to suggest something else. Imagine asking your friend, "What would I have to tell you differently for you to recommend a different type of cuisine?"
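
Here is a similarly minimal sketch of the first two techniques, SHAP and LIME, applied to the same kind of toy model used in the earlier sketch. It assumes the open-source shap and lime packages are installed; the dataset, model, and feature names are again invented for illustration.

```python
# Minimal sketch: local explanations for one prediction using SHAP and LIME.
# Assumes the open-source `shap` and `lime` packages; data is invented for illustration.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Same toy setup as the global-explainability sketch above.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["spiciness", "price", "rating", "distance"]  # hypothetical factors
model = RandomForestClassifier(random_state=0).fit(X, y)

# SHAP: per-feature credit for one specific prediction ("why this restaurant?").
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X[:1])
print(shap_values)  # how much each feature pushed this one prediction up or down

# LIME: fit a simple surrogate model around the same instance and read its weights.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      mode="classification")
explanation = lime_explainer.explain_instance(X[0], model.predict_proba,
                                              num_features=4)
print(explanation.as_list())  # (feature condition, weight) pairs for this prediction
```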

A variety of open-source libraries, toolkits, and cloud-based solutions exist to help model developers apply the above techniques.

Informed opinions: the role of data profiling

Now, how does your friend form these opinions in the first place? This is where data profiling comes in. Imagine your friend keeps a detailed record of every restaurant they've tried, including factors like cuisine, atmosphere, price, and your feedback. This data is their "training set." Analyzing this data (data profiling) helps us understand the underlying patterns and biases that shape their recommendations.
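
As a small sketch of what that profiling step can look like in practice, the snippet below summarizes a tiny, made-up "restaurant log" with pandas; the columns and values are purely illustrative.

```python
# Minimal data-profiling sketch with pandas on a made-up "restaurant log".
# Column names and values are purely illustrative.
import pandas as pd

visits = pd.DataFrame({
    "cuisine":  ["thai", "italian", "thai", "mexican", "italian"],
    "price":    [22, 45, 18, 15, 60],
    "rating":   [4.5, 4.0, 4.8, 3.9, 4.6],
    "liked_it": [True, True, True, False, True],
})

print(visits.describe(include="all"))                # ranges, counts, and basic statistics
print(visits["cuisine"].value_counts())              # is one cuisine over-represented?
print(visits.isna().sum())                           # missing values that could skew the model
print(visits.groupby("cuisine")["liked_it"].mean())  # outcome rates by group: a quick bias check
```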

A strong data profile helps us identify potential biases in the model's recommendations, understand the features that truly matter, and ultimately trust the model's output. Your friend's knowledge of restaurants is built on years of experience and data; AI models rely on high-quality data to make accurate and explainable predictions.

Food for thought: the importance of XAI

XAI helps build trust in AI recommendations, giving programs—and the constituents they serve—greater confidence in the AI solutions they employ. It aids in the development of more efficient and responsible AI systems, helping to mitigate bias, promote transparency, and cultivate trust. Given its potential benefits, XAI offers a feast of possibilities that federal agencies cannot afford to ignore.
