Build more intelligent apps with machine learning.

Take advantage of Core ML, a new foundational machine learning framework used across Apple products, including Siri, Camera, and QuickType. Core ML delivers blazingly fast performance and easy integration of machine learning models, enabling you to build apps with intelligent new features using just a few lines of code.

Overview

Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models. Because it's built on top of low-level technologies like Metal and Accelerate, Core ML seamlessly takes advantage of the CPU and GPU to provide maximum performance and efficiency. Machine learning models run entirely on the device, so data doesn't need to leave it to be analyzed.
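Running a model really does take only a few lines. The sketch below is illustrative, assuming a MobileNet.mlmodel file has been added to an Xcode project; Xcode then generates a `MobileNet` class with a typed `prediction` method (the `image` parameter name and `classLabel` output match the model's generated interface, but vary per model):

```swift
import CoreML
import CoreVideo

// Sketch: classify an image with a bundled Core ML model.
// Assumes MobileNet.mlmodel is in the project, so Xcode has
// generated the `MobileNet` class automatically.
func classify(_ pixelBuffer: CVPixelBuffer) {
    let model = MobileNet()
    do {
        // The generated interface takes a CVPixelBuffer and returns
        // the top label plus a probability for every category.
        let output = try model.prediction(image: pixelBuffer)
        print("Top label: \(output.classLabel)")
    } catch {
        print("Prediction failed: \(error)")
    }
}
```

Because inference runs on-device via Metal and Accelerate, no network call is involved anywhere in this path.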

Vision

You can easily build computer vision machine learning features into your app. Supported features include face tracking, face detection, facial landmark detection, text detection, rectangle detection, barcode detection, object tracking, and image registration.
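These features are exposed through the Vision framework's request/handler pattern. A minimal face-detection sketch, assuming a `CGImage` as input:

```swift
import Vision

// Sketch: run a Vision face-detection request on a CGImage.
func detectFaces(in image: CGImage) {
    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalized coordinates (0...1),
            // relative to the image's lower-left origin.
            print("Face at \(face.boundingBox)")
        }
    }
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    do {
        try handler.perform([request])
    } catch {
        print("Vision request failed: \(error)")
    }
}
```

The same handler can perform several requests in one pass, and Vision can also wrap a Core ML model via `VNCoreMLRequest` to drive custom classifiers with the same API.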

Natural Language Processing

The natural language processing APIs in Foundation use machine learning to deeply understand text, with features such as language identification, tokenization, lemmatization, part-of-speech tagging, and named entity recognition.
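For example, named entity recognition is available through Foundation's NSLinguisticTagger. A minimal sketch (the sample sentence is illustrative):

```swift
import Foundation

// Sketch: extract named entities (people, places, organizations)
// from a string with NSLinguisticTagger.
let text = "Apple introduced Core ML at WWDC in San Jose."
let tagger = NSLinguisticTagger(tagSchemes: [.nameType], options: 0)
tagger.string = text

let options: NSLinguisticTagger.Options = [.omitWhitespace, .omitPunctuation, .joinNames]
let range = NSRange(location: 0, length: text.utf16.count)
let entityTags: [NSLinguisticTag] = [.personalName, .placeName, .organizationName]

tagger.enumerateTags(in: range, unit: .word, scheme: .nameType, options: options) { tag, tokenRange, _ in
    if let tag = tag, entityTags.contains(tag) {
        let name = (text as NSString).substring(with: tokenRange)
        print("\(name): \(tag.rawValue)")
    }
}
```

Language identification, tokenization, lemmatization, and part-of-speech tagging use the same enumeration API with different tag schemes.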

Get started with Core ML.

Videos

Watch WWDC session videos to learn about using Core ML in your apps.

Documentation

Get detailed documentation on how to use machine learning in your app with the latest iOS SDK.

Journal

Read posts written by Apple engineers about their work using machine learning technologies.

View Apple Machine Learning Journal

Working with Models

Build your apps with the ready-to-use Core ML models below, or use Core ML Tools to easily convert models into the Core ML format.

Models

MobileNet

MobileNets are based on a streamlined architecture that uses depth-wise separable convolutions to build lightweight deep neural networks.

Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.

SqueezeNet

Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.

With an overall footprint of only 5 MB, SqueezeNet achieves accuracy similar to AlexNet's while using 50 times fewer parameters.

Places205-GoogLeNet

Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.

ResNet50

Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.

Inception v3

Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.

VGG16

Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.

Model Converters

Core ML Tools

Core ML Tools is a Python package for converting models from third-party machine learning toolboxes into the Core ML format.

Get Core ML Tools

Apache MXNet

Train machine learning models in MXNet and use its converter to bring them into the Core ML format.

Get MXNet model converter

TensorFlow

Train machine learning models in TensorFlow and easily convert them to the Core ML model format.

Get TensorFlow Converter

Build Your Own Model

Turi Create

Build your own custom machine learning models with Turi Create. You don't have to be a machine learning expert to add recommendations, object detection, image classification, image similarity, or activity classification to your app.

Get Turi Create