Framework

Core ML

Integrate machine learning models into your app.

Overview

Use Core ML to integrate machine learning models into your app. Core ML provides a unified representation for all models. Your app uses Core ML APIs and user data to make predictions, and to train or fine-tune models, all on the user’s device.

A flow diagram, from left to right: a Core ML model file flows into the Core ML framework, which in turn feeds your app.

A model is the result of applying a machine learning algorithm to a set of training data. You use a model to make predictions based on new input data. Models can accomplish a wide variety of tasks that would be difficult or impractical to write in code. For example, you can train a model to categorize photos, or detect specific objects within a photo directly from its pixels.
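For example, once Xcode generates a Swift class from a model, making a prediction is a single method call. The sketch below is illustrative only: it assumes a hypothetical `HousePricer` regression model whose inputs are `squareFeet` and `bedrooms` and whose output is `price`; the generated method and parameter names always mirror the model's own feature names.

```swift
import CoreML

// A minimal sketch, assuming a hypothetical Xcode-generated `HousePricer` class
// built from a HousePricer.mlmodel regression model.
func estimatePrice(squareFeet: Double, bedrooms: Double) throws -> Double {
    let model = try HousePricer(configuration: MLModelConfiguration())

    // The generated `prediction` method's parameters match the model's input features.
    let output = try model.prediction(squareFeet: squareFeet, bedrooms: bedrooms)
    return output.price
}
```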

You can build and train a model with the Create ML app bundled with Xcode. Models trained using Create ML are in the Core ML model format and are ready to use in your app. Alternatively, you can use a wide variety of other machine learning libraries and then use Core ML Tools to convert the model into the Core ML format. Once a model is on a user’s device, you can use Core ML to retrain or fine-tune it on-device, with that user’s data.
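As a sketch of what on-device fine-tuning can look like, the code below uses `MLUpdateTask` with an updatable model and a training batch assembled from the user's data; the model URL, batch provider, and save location shown here are assumptions for illustration.

```swift
import CoreML

// A minimal sketch of on-device fine-tuning, assuming an updatable Core ML model
// and an MLBatchProvider built from the user's data.
func personalize(modelAt modelURL: URL,
                 with trainingData: MLBatchProvider,
                 savingTo updatedModelURL: URL) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: nil) { context in
        // Persist the retrained model so later predictions use the personalized weights.
        try? context.model.write(to: updatedModelURL)
    }
    task.resume()
}
```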

Core ML optimizes on-device performance by leveraging the CPU, GPU, and Neural Engine while minimizing its memory footprint and power consumption. Running a model strictly on the user’s device removes any need for a network connection, which helps keep the user’s data private and your app responsive.
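If you want to influence where a model runs, `MLModelConfiguration` exposes a `computeUnits` option. The sketch below assumes a hypothetical Xcode-generated `ActivityClassifier` class.

```swift
import CoreML

// A minimal sketch: steer where Core ML runs the model by setting compute units.
// `ActivityClassifier` is a hypothetical Xcode-generated model class.
func loadModelPreferringNeuralEngine() throws -> ActivityClassifier {
    let configuration = MLModelConfiguration()
    // Other options include .all (the default), .cpuOnly, and .cpuAndGPU.
    configuration.computeUnits = .cpuAndNeuralEngine
    return try ActivityClassifier(configuration: configuration)
}
```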

Core ML is the foundation for domain-specific frameworks and functionality. Core ML supports Vision for analyzing images, Natural Language for processing text, Speech for converting audio to text, and Sound Analysis for identifying sounds in audio. Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders.

A block diagram of the machine learning stack. At the top is a single block labeled "Your App," spanning the full width. The second layer has four blocks: "Vision," "Natural Language," "Speech," and "Sound Analysis." The third layer, "Core ML," also spans the full width. The bottom layer has two blocks: "Accelerate and BNNS" and "Metal Performance Shaders."
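For instance, an image classifier is often driven through Vision rather than called directly, because Vision handles resizing and cropping the input image for the model. The sketch below assumes a hypothetical Xcode-generated `FlowerClassifier` class.

```swift
import CoreGraphics
import CoreML
import Vision

// A minimal sketch of running a Core ML image classifier through Vision.
// `FlowerClassifier` is a hypothetical Xcode-generated model class.
func classify(cgImage: CGImage) throws {
    let coreMLModel = try FlowerClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        // Print the top three labels with their confidence scores.
        for observation in results.prefix(3) {
            print(observation.identifier, observation.confidence)
        }
    }
    // Vision scales and crops the image to the model's expected input size.
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage)
    try handler.perform([request])
}
```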

Topics

First Steps

Getting a Core ML Model

Obtain a Core ML model to use in your app.

Integrating a Core ML Model into Your App

Add a simple model to an app, pass input data to the model, and process the model’s predictions.

Converting Trained Models to Core ML

Convert trained models created with third-party machine learning tools to the Core ML model format.

Computer Vision

Classifying Images with Vision and Core ML

Preprocess photos using the Vision framework and classify them with a Core ML model.

Understanding a Dice Roll with Vision and Object Detection

Detect dice position and values shown in a camera frame, and determine the end of a roll by leveraging a dice detection model.

Natural Language

Finding Answers to Questions in a Text Document

Locate relevant passages in a document by asking the Bidirectional Encoder Representations from Transformers (BERT) model a question.

App Size Management

Reducing the Size of Your Core ML App

Reduce the storage used by the Core ML model inside your app bundle.

Core ML API

Core ML API

Use the Core ML API directly to support custom workflows and advanced use cases.