Quick Setup: SHAP and LIME in Five Minutes

Install both libraries and train a model you can actually explain:

pip install shap lime xgboost scikit-learn pandas matplotlib
import shap
import lime
import lime.lime_tabular
import xgboost as xgb
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.datasets import fetch_openml

# Load a credit scoring dataset
data = fetch_openml("credit-g", version=1, as_frame=True)
X = pd.get_dummies(data.data, drop_first=True)
y = (data.target == "good").astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train an XGBoost classifier
model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
    random_state=42,
)
model.fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.3f}")

That gives you a working credit scoring model. Now you need to answer the question every regulator, product manager, and end user asks: why did the model make that decision?

SHAP: Global and Local Explanations from Game Theory

SHAP assigns each feature an importance value for a specific prediction. It’s grounded in Shapley values from cooperative game theory — each feature is a “player” contributing to the prediction. The math is solid, and the API is straightforward.

TreeExplainer for XGBoost and LightGBM

TreeExplainer is the fast path. It’s polynomial-time for tree-based models instead of the exponential brute-force approach.

# TreeExplainer — optimized for tree models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

import matplotlib.pyplot as plt

# Summary plot: global feature importance with direction
shap.summary_plot(shap_values, X_test, show=False)
plt.tight_layout()
plt.savefig("shap_summary.png", dpi=150)
plt.close()

The summary plot shows which features push predictions up or down across the entire test set. Red dots on the right mean high feature values increase the prediction. This is far more informative than raw feature importance because you see the direction of each feature’s effect.
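The bar variant of the summary plot boils each column down to a single number: the mean absolute SHAP value per feature. That statistic is easy to compute yourself — a sketch using a made-up SHAP matrix and illustrative feature labels, not output from the model above:

```python
import numpy as np

# Mean |SHAP| per feature — the statistic the bar-style summary shows.
# The matrix and labels are invented for illustration.
shap_matrix = np.array([[ 0.2, -0.1, 0.05],
                        [-0.3,  0.0, 0.10],
                        [ 0.1, -0.2, 0.02]])
names = ["checking_status", "duration", "age"]

importance = np.abs(shap_matrix).mean(axis=0)  # average over samples (rows)
for name, imp in sorted(zip(names, importance), key=lambda t: -t[1]):
    print(f"{name:>16}: {imp:.3f}")
```

With real data you would pass the `shap_values` array from TreeExplainer instead of the toy matrix.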

Waterfall and Force Plots for Single Predictions

When a loan application gets denied, you need to explain that specific decision:

# Explain a single prediction
idx = 0  # first test sample
explanation = shap.Explanation(
    values=shap_values[idx],
    base_values=explainer.expected_value,
    data=X_test.iloc[idx],
    feature_names=X_test.columns.tolist(),
)

# Waterfall plot: breakdown of one prediction
shap.waterfall_plot(explanation, show=False)
plt.tight_layout()
plt.savefig("shap_waterfall.png", dpi=150)
plt.close()

# Force plot: compact single-prediction view
shap.force_plot(
    explainer.expected_value,
    shap_values[idx],
    X_test.iloc[idx],
    matplotlib=True,
    show=False,
)
plt.tight_layout()
plt.savefig("shap_force.png", dpi=150)
plt.close()

The waterfall plot reads top-to-bottom: the base value (average model output) gets pushed up or down by each feature until you reach the final prediction. Force plots pack the same information into a horizontal bar.
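The arithmetic behind that walk is just a running sum starting from the base value. A sketch with invented SHAP values (not output from the model above), ordered largest-effect-first the way a waterfall plot draws them:

```python
import numpy as np

base_value = 0.45                                # invented average model output
shap_row = np.array([0.12, -0.05, 0.20, -0.02])  # invented per-feature SHAP values
names = ["credit_amount", "duration", "checking_status", "age"]

# Walk from the base value to the final prediction, biggest effects first.
running = base_value
for name, v in sorted(zip(names, shap_row), key=lambda t: -abs(t[1])):
    running += v
    print(f"{name:>16}: {v:+.2f} -> running total {running:.2f}")

prediction = base_value + shap_row.sum()
print(f"final prediction: {prediction:.2f}")
```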

KernelExplainer for Any Model

If you’re not using a tree model — maybe it’s a logistic regression, SVM, or some scikit-learn pipeline — KernelExplainer handles anything with a predict function. The tradeoff is speed: it’s model-agnostic but much slower.

# KernelExplainer — works with any model
# Use a background sample to speed things up
background = shap.sample(X_train, 100)
kernel_explainer = shap.KernelExplainer(model.predict_proba, background)

# Explain a small batch (KernelExplainer is slow on large sets)
shap_values_kernel = kernel_explainer.shap_values(X_test.iloc[:10])

Use KernelExplainer when you have no other option. For tree models, always prefer TreeExplainer. For deep learning, use DeepExplainer.
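That decision rule is small enough to encode directly. A hypothetical helper — the class names are shap's real explainers, but this function is not part of the library:

```python
def pick_explainer(model_kind: str) -> str:
    """Map a model family to the explainer suggested above."""
    return {"tree": "TreeExplainer", "neural": "DeepExplainer"}.get(
        model_kind, "KernelExplainer")

print(pick_explainer("tree"))    # TreeExplainer
print(pick_explainer("svm"))     # KernelExplainer (the fallback)
```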

DeepExplainer for Neural Networks

import torch
import torch.nn as nn

class CreditNet(nn.Module):
    def __init__(self, input_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

net = CreditNet(X_train.shape[1])
# ... train the network ...

# DeepExplainer for PyTorch/TF models
background_tensor = torch.tensor(X_train.values[:100], dtype=torch.float32)
deep_explainer = shap.DeepExplainer(net, background_tensor)

test_tensor = torch.tensor(X_test.values[:5], dtype=torch.float32)
deep_shap_values = deep_explainer.shap_values(test_tensor)

DeepExplainer uses DeepLIFT under the hood. It’s faster than KernelExplainer for neural networks but slightly less exact. For most practical purposes the difference doesn’t matter.

LIME: Local Explanations for Individual Predictions

LIME takes a different approach. It perturbs the input, observes how predictions change, and fits a simple interpretable model (usually linear regression) around that single data point. The result is a local approximation that tells you which features mattered for this specific prediction.

# Create a LIME explainer for tabular data
lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=X_test.columns.tolist(),
    class_names=["bad", "good"],
    mode="classification",
    random_state=42,
)

# Explain one prediction
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values,
    model.predict_proba,
    num_features=10,
    num_samples=5000,
)

# Save the explanation as HTML
lime_exp.save_to_file("lime_explanation.html")

# Or get it as a list of (feature, weight) tuples
for feature, weight in lime_exp.as_list():
    print(f"{feature}: {weight:+.4f}")

LIME explanations are intuitive — you get a list of features with positive or negative weights. Positive means the feature pushed toward the predicted class.
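Because those (feature, weight) pairs are plain Python data, turning them into a one-line summary for a customer-facing message is straightforward. A sketch with invented pairs, not real LIME output:

```python
# Invented LIME-style output: (condition, weight) pairs.
pairs = [
    ("duration > 24", -0.18),
    ("checking_status=none", 0.22),
    ("credit_amount <= 2000", 0.07),
]

# Pick the single strongest factor and phrase its direction.
top_feature, top_weight = max(pairs, key=lambda p: abs(p[1]))
direction = "toward" if top_weight > 0 else "against"
print(f"Strongest factor: '{top_feature}' pushed {direction} "
      f"the predicted class ({top_weight:+.2f})")
```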

When to Use SHAP vs LIME

Pick SHAP when you need global explanations across the full dataset, when you want mathematically consistent feature attributions, or when you’re working with tree models (TreeExplainer is fast). SHAP values have nice theoretical properties: they always sum to the difference between the prediction and the base value.
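That additivity property is worth seeing concretely. Below is a brute-force Shapley computation for a toy three-feature linear model — pure numpy, not the shap library — verifying that the values sum exactly to prediction minus base value:

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(0)
w = np.array([0.5, -1.2, 2.0])         # toy linear model: f(x) = w @ x
background = rng.normal(size=(50, 3))  # background data for "absent" features
x = np.array([1.0, 0.3, -0.7])         # instance to explain
n = len(x)

def value(mask):
    # Model output when only masked features are present;
    # absent features are replaced by their background mean.
    filled = np.where(mask, x, background.mean(axis=0))
    return float(w @ filled)

phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(n):
        for subset in itertools.combinations(others, r):
            mask = np.zeros(n, dtype=bool)
            for j in subset:
                mask[j] = True
            weight = math.factorial(r) * math.factorial(n - r - 1) / math.factorial(n)
            with_i = mask.copy()
            with_i[i] = True
            phi[i] += weight * (value(with_i) - value(mask))

base_value = value(np.zeros(n, dtype=bool))  # all features "absent"
prediction = value(np.ones(n, dtype=bool))   # all features "present"
assert np.isclose(phi.sum(), prediction - base_value)  # additivity holds exactly
```

For this linear model the Shapley values reduce to w_i * (x_i - mean_i); TreeExplainer computes the same kind of attribution for trees without the exponential enumeration.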

Pick LIME when you need quick local explanations, when your stakeholders want a simple “these five features mattered” answer, or when you need model-agnostic explanations and speed isn’t critical. LIME is easier to explain to non-technical audiences.

In practice, use both. SHAP for your dashboards and monitoring. LIME for ad-hoc explanations in customer-facing tools.
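A quick sanity check when you run both: compare their top-k feature lists for the same prediction. The lists below are invented for illustration:

```python
# Invented top-5 lists from the two explainers for one prediction.
shap_top = ["checking_status", "duration", "credit_amount", "age", "savings"]
lime_top = ["checking_status", "credit_amount", "purpose", "duration", "housing"]

overlap = sorted(set(shap_top) & set(lime_top))
print(f"top-5 overlap: {len(overlap)}/5 -> {overlap}")
```

Low overlap is a signal to investigate — correlated features or unstable local fits — before shipping either explanation to users.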

Integrating Explanations into Production APIs

Don’t just generate plots during development. Ship explanations alongside predictions:

from fastapi import FastAPI
from pydantic import BaseModel
import shap
import numpy as np

app = FastAPI()

# Load model and explainer at startup
explainer = shap.TreeExplainer(model)
feature_names = X_train.columns.tolist()

class PredictionRequest(BaseModel):
    features: list[float]

class PredictionResponse(BaseModel):
    prediction: int
    probability: float
    top_factors: list[dict]

@app.post("/predict", response_model=PredictionResponse)
def predict(req: PredictionRequest):
    arr = np.array(req.features).reshape(1, -1)
    pred = model.predict(arr)[0]
    prob = model.predict_proba(arr)[0].max()

    sv = explainer.shap_values(arr)[0]
    # Return top 5 features by absolute SHAP value
    top_idx = np.argsort(np.abs(sv))[-5:][::-1]
    top_factors = [
        {
            "feature": feature_names[i],
            "shap_value": round(float(sv[i]), 4),
            "direction": "increases" if sv[i] > 0 else "decreases",
        }
        for i in top_idx
    ]

    return PredictionResponse(
        prediction=int(pred),
        probability=round(float(prob), 4),
        top_factors=top_factors,
    )

This gives every API response a top_factors field explaining why the model made its decision. Cache the explainer — creating a new TreeExplainer per request is wasteful. For KernelExplainer, precompute the background dataset at startup and consider async processing since it’s slow.
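One way to cache the explainer is a memoized factory, sketched here with a stand-in object since the real construction depends on how you load the model:

```python
from functools import lru_cache

calls = []

@lru_cache(maxsize=1)
def get_explainer():
    # Stand-in for shap.TreeExplainer(load_model()); the point is that
    # the expensive construction runs once per process, not per request.
    calls.append(1)
    return object()

first, second = get_explainer(), get_explainer()
assert first is second and len(calls) == 1  # built once, then reused
```

Module-level globals (as in the FastAPI example above) achieve the same thing; the memoized factory is handy when construction must be deferred until after startup configuration is available.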

Common Errors

shap.TreeExplainer raises XGBoostError: Invalid feature_names

This happens when you pass a numpy array but the model was trained with a DataFrame that had feature names. Fix it by passing a DataFrame with the same column names:

# Wrong — loses feature names
shap_values = explainer.shap_values(X_test.values)

# Right — preserves feature names
shap_values = explainer.shap_values(X_test)

LIME gives inconsistent explanations across runs

LIME is stochastic. It samples perturbations randomly, so two runs can produce different explanations. Set random_state in the constructor and increase num_samples to reduce variance:

lime_explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=X_train.values,
    feature_names=feature_names,
    mode="classification",
    random_state=42,  # fix this
)
exp = lime_explainer.explain_instance(
    row, model.predict_proba, num_samples=10000  # increase from default 5000
)

KernelExplainer is unbearably slow

KernelExplainer's cost scales with the number of sampled feature coalitions times the size of the background dataset — every coalition is evaluated against every background row. Use a small, summarized background dataset and explain fewer samples at a time:

# Use kmeans to summarize background data
background = shap.kmeans(X_train, 10)  # 10 clusters instead of full dataset
explainer = shap.KernelExplainer(model.predict_proba, background)

SHAP summary plot is blank or crashes with matplotlib backend errors

This usually happens in headless environments (CI, Docker, SSH). Set the matplotlib backend before importing shap:

import matplotlib
matplotlib.use("Agg")  # must be before importing pyplot or shap
import shap