
Google’s PAIR (People + AI Research) team has come out with a new tool called “What-if”. It is a new feature of the open-source TensorBoard web application that allows users to analyze an ML model without writing code. It also provides an interactive visual interface that lets you explore the model’s results.

The “What-if” tool comes packed with two major features: counterfactual analysis, and performance and algorithmic fairness analysis.

Let’s have a look at these two features.


Counterfactuals

The What-if tool allows you to compare a datapoint with the most similar datapoint for which your model predicts a different result. Such points are known as “counterfactuals”.

It also lets you edit a datapoint by hand and explore how the model’s prediction changes. In the figure below, the What-if tool is used on a binary classification model that predicts whether a person’s income is more than $50k, based on public census data from the UCI census dataset.

Comparing counterfactuals

This is a prediction task often used by ML researchers when analyzing algorithmic fairness. Here, the model predicted that the income of the person at the selected datapoint is more than $50k. The tool then automatically locates the most similar person in the dataset for whom the model predicted an income of less than $50k, and compares the two cases side by side.
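Conceptually, finding a counterfactual boils down to a nearest-neighbor search restricted to datapoints the model classifies differently. The sketch below is a minimal, hypothetical illustration of that idea (the one-feature toy model and data are made up, not the What-If Tool’s actual implementation):

```python
def nearest_counterfactual(selected, candidates, predict):
    """Return the candidate most similar (by L1 distance) to `selected`
    among those for which the model predicts a different result."""
    target = predict(selected)
    best, best_dist = None, float("inf")
    for point in candidates:
        if predict(point) == target:
            continue  # same prediction -- not a counterfactual
        dist = sum(abs(a - b) for a, b in zip(selected, point))
        if dist < best_dist:
            best, best_dist = point, dist
    return best

# toy "model": predict the positive class when the single feature exceeds 10
predict = lambda p: p[0] > 10
points = [(3,), (9,), (12,), (20,)]
print(nearest_counterfactual((12,), points, predict))  # -> (9,)
```

The selected point (12,) is classified positive, so the search returns (9,), the closest point the toy model classifies negative.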

Performance and Algorithmic Fairness Analysis

With the What-if tool, you can also explore the effects of different classification thresholds, taking into account constraints such as different numerical fairness criteria. The figure below presents the results of a smile-detector model trained on the open-source CelebA dataset, which comprises annotated face images of celebrities.

Comparing the performance of two slices of data in a smile detection model

In the figure above, the dataset has been divided by whether the people have brown hair. Each of the two groups has its own ROC curve and confusion matrix of the model’s predictions, along with sliders for setting how confident the model must be before it determines that a face is smiling. Here, the What-if tool automatically sets the confidence thresholds for the two groups in order to optimize for equal opportunity.
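“Equal opportunity” here means choosing per-group thresholds so that both groups end up with (roughly) the same true positive rate. A minimal sketch of that tuning step, with made-up scores and labels (the What-If Tool’s own optimizer is more sophisticated):

```python
def tpr(scores, labels, threshold):
    """True positive rate: fraction of actual positives scored at or
    above the threshold."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    if not positives:
        return 0.0
    return sum(s >= threshold for s in positives) / len(positives)

def threshold_for_tpr(scores, labels, target_tpr):
    """Highest threshold (among observed scores) reaching target_tpr."""
    for t in sorted(set(scores), reverse=True):
        if tpr(scores, labels, t) >= target_tpr:
            return t
    return 0.0

# illustrative confidence scores and ground-truth labels for two groups
a_scores, a_labels = [0.9, 0.7, 0.4, 0.2], [1, 1, 1, 0]
b_scores, b_labels = [0.8, 0.6, 0.3, 0.1], [1, 0, 1, 1]

target = tpr(a_scores, a_labels, 0.5)  # group A's TPR at threshold 0.5
print(threshold_for_tpr(b_scores, b_labels, target))  # -> 0.3
```

Group A reaches a TPR of 2/3 at a threshold of 0.5, so group B’s threshold is lowered to 0.3 to match it, at the cost of possibly admitting more false positives in that group.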

Apart from these major features, the What-if tool also lets you visualize your dataset directly using Facets, manually edit examples from your dataset, and automatically generate partial dependence plots (which show how the model’s predictions change as any single feature is varied).
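A partial dependence curve is simple to compute in principle: fix one feature to each value on a grid, overwrite it across the whole dataset, and average the model’s predictions. A minimal sketch with a hypothetical linear “model” and toy data:

```python
def partial_dependence(model, data, feature_index, grid):
    """For each grid value, set that feature to the value in every row
    and average the model's predictions over the dataset."""
    curve = []
    for value in grid:
        total = 0.0
        for row in data:
            modified = list(row)
            modified[feature_index] = value
            total += model(modified)
        curve.append(total / len(data))
    return curve

# toy linear "model": prediction = 2*x0 + x1
model = lambda row: 2 * row[0] + row[1]
data = [[1, 4], [3, 0], [5, 2]]
print(partial_dependence(model, data, 0, [0, 1, 2]))  # -> [2.0, 4.0, 6.0]
```

For this linear toy model the curve’s slope (2 per unit of x0) directly reflects the feature’s effect; for a real model the curve can reveal non-linear or threshold behavior.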

Additionally, Google’s PAIR team has released a set of demos that use pre-trained models to illustrate the capabilities of the What-If Tool. These demos include detecting misclassifications (a multiclass classification model), assessing fairness in binary classification (an image classification model), and investigating model performance across different subgroups (a regression model).

“We look forward to people inside and outside of Google using this tool to better understand ML models and to begin assessing fairness,” says the PAIR team.

For more information on What-if, be sure to check out the official Google AI blog.


Tech writer at the Packt Hub. Dreamer, book nerd, lover of scented candles, karaoke, and Gilmore Girls.