Metrics used to evaluate a classifier’s performance.


struct MLClassifierMetrics


Use MLClassifierMetrics to evaluate your model’s ability to distinguish between different categories when it’s classifying data.

You can determine the model’s overall accuracy using the classificationError property. For information about how your model mislabels or misses a particular category, use the precisionRecall table. To find specific cases where your model mistakes one label for another, use the confusion property.

Accuracy can be a misleading metric if you have unbalanced data, which means some categories have many more examples than others. In that case, use precisionRecall or confusion instead.
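As a sketch of how these properties fit together, the following example trains a classifier and inspects its evaluation metrics. It assumes you’ve already split your data into two MLDataTable instances named `trainingData` and `testingData`, with a label column named `"label"` — adjust those names to match your own data.

```swift
import CreateML

// Train a classifier on a data table whose "label" column holds the
// category for each example. (Assumes `trainingData` and `testingData`
// are MLDataTable values you've prepared elsewhere.)
let classifier = try MLClassifier(trainingData: trainingData,
                                  targetColumn: "label")

// Evaluate on held-out data to get an MLClassifierMetrics value.
let metrics = classifier.evaluation(on: testingData)

// Accuracy is 1 minus the fraction of incorrectly labeled examples.
let accuracy = (1.0 - metrics.classificationError) * 100
print("Accuracy: \(accuracy)%")

// Per-category precision and recall, plus the confusion table that
// compares actual labels with predicted labels.
print(metrics.precisionRecall)
print(metrics.confusion)
```

Printing precisionRecall and confusion shows one row per category, which is more informative than a single accuracy number when your categories are unbalanced.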


Understanding the Model

var classificationError: Double

The fraction of incorrectly labeled examples.

var precisionRecall: MLDataTable

A data table listing the precision and recall percentages for each category.

var confusion: MLDataTable

A table comparing the actual and predicted labels for each classification category.

Handling Errors

var isValid: Bool

A Boolean value indicating whether the classifier model was able to calculate metrics.

var error: Error?

The underlying error present when the metrics are invalid.
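Before reading any of the metric properties, you can confirm that evaluation succeeded. A minimal sketch, assuming `metrics` is an MLClassifierMetrics value returned by an earlier evaluation:

```swift
import CreateML

// Check validity before trusting the metric values.
if metrics.isValid {
    print("Classification error: \(metrics.classificationError)")
} else if let underlyingError = metrics.error {
    // The metrics couldn't be computed; inspect the underlying error.
    print("Evaluation failed: \(underlyingError)")
}
```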

Creating Metrics

init(classificationError: Double, confusion: MLDataTable, precisionRecall: MLDataTable)

Creates classifier metrics describing the quality of your model.

Describing Metrics

var description: String

A text representation of the classifier metrics.

var debugDescription: String

A text representation of the classifier metrics that’s suitable for output during debugging.

var playgroundDescription: Any

A description of the classifier metrics shown in a playground.

See Also

Model Accuracy

Improving Your Model’s Accuracy

Use metrics to tune the performance of your machine learning model.

struct MLRegressorMetrics

Metrics used to evaluate a regressor’s performance.