Evaluate Model

During the Evaluate Model stage, you apply holdout data, review general information about the model pipeline, assess performance, gain insights, and simulate potential cases your model might encounter in production.

To use holdout data we’ve prepared for you, built from the original data you provided, select Evaluate Model.

Important

You must apply your holdout data by selecting Evaluate Model before you can see evaluations specific to your model in the General, Advanced Insights, and Simulations panels.

General

The General panel contains information that allows you to review your model at a high level:

  • The cross-validation and holdout scores show how well the model has performed when scored against the validation and holdout data, based on the ranking metric you've selected.

  • The positive or negative performance score shows how much better or worse your model performed when compared with a baseline model, which simulates random guessing.

  • Pipeline Highlights show the different operations we used to build the model.

  • For Regression and Classification problem types, use Feature Importance to see which features contribute most to the model's predictions. You can also use this measure to identify features that could put your model at risk of generalization error by associating too weakly or too strongly with the target.

  • For Time Series problem types, use the predicted versus actual values graph to visualize how well your model tracks the target.
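One common way a feature-importance ranking like this can be computed is permutation importance: shuffle one feature at a time and measure how much the holdout score drops. A minimal sketch under that assumption (the toy data, stand-in model, and R² scorer below are illustrative, not this product's internals):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy holdout set: two informative features and one pure-noise feature.
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.1 * rng.normal(size=200)

def predict(X):
    # Stand-in for a trained pipeline's predict().
    return 2.0 * X[:, 0] - 1.0 * X[:, 1]

def r2(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

def permutation_importance(predict, X, y, n_repeats=10):
    """Importance of feature j = average score drop when column j is shuffled."""
    baseline = r2(y, predict(X))
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            drops.append(baseline - r2(y, predict(X_perm)))
        importances.append(np.mean(drops))
    return np.array(importances)

imp = permutation_importance(predict, X, y)
# The noise feature (index 2) should score near zero.
```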

Advanced Insights

The Advanced Insights panel contains in-depth information about how well your model performed based on your problem type.

Classification

Metrics

Compare how the model performed based on different ranking metrics. To learn more about specific ranking metrics, select the book icon in the upper-right corner.

The Metrics tab also displays a confusion matrix. The confusion matrix shows how frequently an algorithm's predicted values match the actual values in the training data. Use the confusion matrix to identify what categories the model accurately predicts.
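Conceptually, a confusion matrix just tallies actual-versus-predicted label pairs. A minimal sketch with made-up labels:

```python
def confusion_matrix(actual, predicted, labels):
    """Rows are actual labels, columns are predicted labels."""
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for a, p in zip(actual, predicted):
        matrix[index[a]][index[p]] += 1
    return matrix

# Hypothetical labels for five rows
actual    = ["cat", "cat", "dog", "dog", "dog"]
predicted = ["cat", "dog", "dog", "dog", "cat"]
cm = confusion_matrix(actual, predicted, labels=["cat", "dog"])
# cm[0][0] counts cats correctly predicted as cats;
# off-diagonal cells are the model's mistakes.
```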

ROC Curve

For binary classification problems, you can see an ROC curve. Use the ROC (receiver operating characteristic) curve to determine how well your model performs compared to random guessing. Use the ROC curve with the AUC (area under the curve) measure to identify how well your model makes predictions across classification thresholds. The AUC measure ranges from 0 to 1: 0 means all model predictions are incorrect, and 1 means all model predictions are correct.
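AUC has a useful interpretation: it is the probability that the model scores a randomly chosen positive row above a randomly chosen negative row, so 0.5 corresponds to random guessing. A small illustrative computation (not the product's implementation):

```python
def auc(y_true, scores):
    """AUC as the fraction of positive/negative pairs the model ranks
    correctly (ties count as half a win)."""
    pos = [s for s, y in zip(scores, y_true) if y == 1]
    neg = [s for s, y in zip(scores, y_true) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that separates the classes perfectly scores 1.0;
# one that ranks them exactly backwards scores 0.0.
perfect = auc([0, 0, 1, 1], [0.1, 0.2, 0.8, 0.9])
```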

Probability Distributions

Use the Probability Distributions Chart to visualize how well a binary classifier can separate your holdout data between the actual positives and actual negatives (rows from your dataset whose actual values are true and false, respectively). The actual positives are blue, and the actual negatives are rose. The vertical line represents the decision boundary that we tuned to optimize your chosen metric.
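The decision boundary is a probability threshold chosen to optimize a metric. A sketch of how such a threshold could be tuned, here maximizing F1 on hypothetical holdout probabilities (an illustration of the idea, not the product's actual tuning procedure):

```python
def tune_threshold(y_true, probs):
    """Scan candidate thresholds and keep the one with the best F1 score."""
    best_t, best_f1 = 0.5, -1.0
    for t in sorted(set(probs)):
        preds = [p >= t for p in probs]
        tp = sum(p and y for p, y in zip(preds, y_true))
        fp = sum(p and not y for p, y in zip(preds, y_true))
        fn = sum(not p and y for p, y in zip(preds, y_true))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        if f1 > best_f1:
            best_t, best_f1 = t, f1
    return best_t, best_f1

# Hypothetical predicted probabilities for four holdout rows
t, f1 = tune_threshold([0, 0, 1, 1], [0.2, 0.4, 0.6, 0.9])
# The boundary lands where the positive and negative rows separate.
```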

Partial Dependence

The Partial Dependence plot uses the trained model to show the association between the feature you select and the target. For numeric features, the plot shows what happens to the target as the value of the feature changes. For categorical features, the plot shows the association between the categories and the target. Use the plot to learn what kinds of associations individual features have with the target. Switch from Absolute to Relative to make the visualization fit the data, rather than display the whole plot.
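Mechanically, partial dependence pins the selected feature to each value on a grid across all rows and averages the model's predictions. A numpy sketch with a toy linear model standing in for the trained pipeline (illustrative only):

```python
import numpy as np

def partial_dependence(predict, X, feature, grid):
    """Average prediction with the chosen feature pinned to each grid value."""
    averages = []
    for value in grid:
        X_fixed = X.copy()
        X_fixed[:, feature] = value
        averages.append(predict(X_fixed).mean())
    return np.array(averages)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))

# Toy stand-in for a trained model's predict()
predict = lambda X: 3.0 * X[:, 0] + X[:, 1]

pd_curve = partial_dependence(predict, X, feature=0, grid=[0.0, 1.0, 2.0])
# For a linear model, the curve's slope recovers the feature's coefficient.
```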

Prediction Explanations

For binary classification problems, use the Prediction Explanations tab to find out how the feature values for a single row explain the prediction. The tab displays some representative rows along with the most important features for each of those rows. Different combinations of features can influence each prediction.
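For a linear model, one simple way to explain a single row is to split its prediction into per-feature contributions relative to a baseline (for example, an average) row; more complex models need dedicated techniques such as SHAP. A hypothetical sketch of the per-feature-contribution idea:

```python
def explain_row(weights, row, baseline):
    """Per-feature contribution: weight * (row value - baseline value)."""
    return {name: w * (row[name] - baseline[name])
            for name, w in weights.items()}

# Hypothetical linear-model weights and rows (names are made up)
weights  = {"age": 2.0, "income": 0.5}
row      = {"age": 40, "income": 10}
baseline = {"age": 30, "income": 10}

contributions = explain_row(weights, row, baseline)
# Here "age" drives the prediction up; "income" matches the baseline,
# so it contributes nothing to this row's prediction.
```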

Regression

Metrics

Compare how the model performed based on different ranking metrics. To learn more about specific ranking metrics, select the book icon in the upper-right corner.

Partial Dependence

The Partial Dependence plot uses the trained model to show the association between the feature you select and the target. For numeric features, the plot shows what happens to the target as the value of the feature changes. For categorical features, the plot shows the association between the categories and the target. Use the plot to learn what kinds of associations individual features have with the target. Switch from Absolute to Relative to make the visualization fit the data, rather than display the whole plot.

Time Series

Metrics

Compare how the model performed based on different ranking metrics. To learn more about specific ranking metrics, select the book icon in the upper-right corner.

Simulations

The Simulations panel allows you to choose a row of data, then manipulate different features to see how they affect the prediction the model makes for that row.

You can select a specific row by its number, or you can have us pick a Random row for you.

Make sure to select Run each time you make a change to a feature.
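Under the hood, a what-if simulation of this kind amounts to copying the row, overwriting one feature, and re-running the model's predict for each candidate value. A minimal sketch with a made-up model and feature names:

```python
def simulate(predict, row, feature, values):
    """Return the model's prediction for each candidate value of one feature."""
    results = {}
    for value in values:
        candidate = dict(row)        # copy so the original row is untouched
        candidate[feature] = value
        results[value] = predict(candidate)
    return results

# Hypothetical trained model and input row
predict = lambda r: 2 * r["tenure"] + r["purchases"]
row = {"tenure": 1, "purchases": 3}

outcomes = simulate(predict, row, "tenure", [0, 1, 2])
# Each entry shows how the prediction shifts as "tenure" changes.
```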