Metrics used to evaluate a classifier’s performance.
Use MLClassifierMetrics to evaluate your model’s ability to distinguish between different categories when it is classifying data.
You can determine the model’s overall accuracy using the classificationError property. For information about how your model mislabels or misses a certain category, use the precisionRecall property. To find specific cases where your model mistakes one label for another, use the confusion property.
If your data is unbalanced, meaning there’s a large difference in the number of examples per category, accuracy can be a misleading metric. Instead, use precisionRecall or confusion.
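For example, you might inspect these metrics after evaluating a trained classifier on held-out data. The sketch below assumes a hypothetical CSV file path and target column name; it trains an MLClassifier and reads the resulting MLClassifierMetrics.

```swift
import CreateML
import Foundation

// Load a data table from a CSV file (hypothetical path and column names).
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "animals.csv"))

// Hold out 20 percent of the rows for evaluation.
let (trainingData, testingData) = data.randomSplit(by: 0.8, seed: 5)

// Train a classifier to predict the "label" column.
let classifier = try MLClassifier(trainingData: trainingData,
                                  targetColumn: "label")

// Evaluate the model; the result is an MLClassifierMetrics instance.
let metrics = classifier.evaluation(on: testingData)

// Overall accuracy is 1 minus the classification error.
print("Accuracy: \(1.0 - metrics.classificationError)")

// Per-category precision and recall, useful for unbalanced data.
print(metrics.precisionRecall)

// A table showing which labels the model confuses with which.
print(metrics.confusion)
```

Both precisionRecall and confusion are MLDataTable values, so you can filter or sort them like any other data table to focus on the categories that matter most.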
This documentation contains preliminary information about an API or technology in development. This information is subject to change, and software implemented according to this documentation should be tested with final operating system software.