What's new in Machine Learning and Computer Vision

Whether you’re building a fitness coaching app, exploring new ways of interacting, or understanding text, you can create incredible experiences in your app with machine learning. Find out about the latest advancements to Create ML, Core ML, Natural Language, and the Vision framework, and how we’re making it even easier to incorporate machine learning models and technologies into your app.

  • WWDC20

Build an Action Classifier with Create ML

Discover how to build Action Classification models in Create ML. With a custom action classifier, your app can recognize and understand body movements in real time from videos or through a camera. We’ll show you how to use samples to easily train a Core ML model to identify human actions like...
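
For orientation, a rough sketch of what that training step can look like with the CreateML framework on macOS; the directory layout and file paths below are placeholders, and the Create ML app offers the same workflow without code:

```swift
import CreateML
import Foundation

// Rough sketch: train an action classifier from videos grouped into one
// folder per action label. All paths here are placeholders.
let trainingVideos = URL(fileURLWithPath: "/path/to/TrainingVideos")

// Each subfolder name ("jumping_jacks", "lunges", ...) becomes a class label.
let data = MLActionClassifier.DataSource.labeledDirectories(at: trainingVideos)

// Train with default parameters; adjust MLActionClassifier.ModelParameters as needed.
let classifier = try MLActionClassifier(trainingData: data)

// Export a Core ML model the app can use for on-device action classification.
try classifier.write(to: URL(fileURLWithPath: "/path/to/ActionClassifier.mlmodel"))
```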

  • WWDC20

Detect Body and Hand Pose with Vision

Explore how the Vision framework can help your app detect body and hand poses in photos and video. With pose detection, your app can analyze the poses, movements, and gestures of people to offer new video editing possibilities, or to perform action classification when paired with an action...
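
A minimal sketch of what those requests look like in code, run here on a single CGImage supplied by the app:

```swift
import Vision

// Minimal sketch: detect body and hand poses in a single image.
// `cgImage` stands in for an image the app already has.
func detectPoses(in cgImage: CGImage) throws {
    let bodyRequest = VNDetectHumanBodyPoseRequest()
    let handRequest = VNDetectHumanHandPoseRequest()
    handRequest.maximumHandCount = 2

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([bodyRequest, handRequest])

    // Observations expose named joints with normalized coordinates and confidence.
    for body in bodyRequest.results ?? [] {
        let joints = try body.recognizedPoints(.all)
        if let wrist = joints[.leftWrist], wrist.confidence > 0.3 {
            print("Left wrist at \(wrist.location)")
        }
    }
    for hand in handRequest.results ?? [] {
        let fingers = try hand.recognizedPoints(.all)
        if let indexTip = fingers[.indexTip], indexTip.confidence > 0.3 {
            print("Index fingertip at \(indexTip.location)")
        }
    }
}
```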

  • WWDC20

Use model deployment and security with Core ML

Discover how to deploy Core ML models outside of your app binary, giving you greater flexibility and control when bringing machine learning features to your app. And learn how Core ML Model Deployment enables you to deliver revised models to your app without requiring an app update. We’ll also...
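
As a rough sketch, fetching a deployed model collection and falling back to the copy bundled with the app might look like this; the collection identifier and model names are placeholders:

```swift
import CoreML

// Minimal sketch: prefer a model delivered via Core ML Model Deployment and
// fall back to the copy compiled into the app bundle. "SportAnalysisModels"
// and "ActionClassifier" are placeholder names, not real identifiers.
func loadActionClassifier(completion: @escaping (MLModel?) -> Void) {
    // beginAccessing returns a Progress object for the download; ignored here.
    _ = MLModelCollection.beginAccessing(identifier: "SportAnalysisModels") { result in
        if case .success(let collection) = result,
           let entry = collection.entries["ActionClassifier"],
           let model = try? MLModel(contentsOf: entry.modelURL) {
            completion(model)   // Latest revision delivered by Model Deployment.
        } else if let bundledURL = Bundle.main.url(forResource: "ActionClassifier",
                                                   withExtension: "mlmodelc"),
                  let model = try? MLModel(contentsOf: bundledURL) {
            completion(model)   // No deployment available yet; use the bundled model.
        } else {
            completion(nil)
        }
    }
}
```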

  • WWDC20

Explore Computer Vision APIs

Learn how to bring Computer Vision intelligence to your app when you combine the power of Core Image, Vision, and Core ML. Go beyond machine learning alone and gain a deeper understanding of images and video. Discover new APIs in Core Image and Vision to bring Computer Vision to your application...
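
As one example of combining the two frameworks, here is a minimal sketch that preprocesses an image with Core Image before running Vision's contour detection; the filter settings are illustrative only:

```swift
import CoreImage
import CoreImage.CIFilterBuiltins
import Vision

// Minimal sketch: preprocess an image with Core Image, then run Vision's
// contour detection on the result. `inputImage` is a placeholder CIImage.
func detectContours(in inputImage: CIImage) throws -> VNContoursObservation? {
    // Boost contrast and drop color so contours stand out.
    let filter = CIFilter.colorControls()
    filter.inputImage = inputImage
    filter.contrast = 2.0
    filter.saturation = 0.0
    guard let preprocessed = filter.outputImage else { return nil }

    let request = VNDetectContoursRequest()
    request.contrastAdjustment = 1.0
    request.detectsDarkOnLight = true

    let handler = VNImageRequestHandler(ciImage: preprocessed, options: [:])
    try handler.perform([request])

    // The observation exposes a hierarchy of contours and a CGPath for drawing.
    return request.results?.first
}
```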

  • WWDC20

Explore the Action & Vision app

It's now easy to create an app for fitness or sports coaching that takes advantage of machine learning — and to prove it, we built our own. Learn how we designed the Action & Vision app using Object Detection and Action Classification in Create ML along with the new Body Pose Estimation,...
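
One building block in that kind of pipeline is Vision's trajectory detection, a stateful request an app can run on successive video frames. A rough sketch, with the capture and frame-reading code omitted:

```swift
import AVFoundation
import Vision

// Minimal sketch of Vision trajectory detection across video frames.
// The same request object must be reused for every frame so it can
// accumulate evidence over time.
final class TrajectoryDetector {
    private lazy var request = VNDetectTrajectoriesRequest(
        frameAnalysisSpacing: .zero,      // Analyze every frame.
        trajectoryLength: 6               // Minimum points per trajectory.
    ) { request, _ in
        guard let trajectories = request.results as? [VNTrajectoryObservation] else { return }
        for trajectory in trajectories where trajectory.confidence > 0.9 {
            // Normalized points of the detected parabolic path, e.g. a thrown ball.
            print("Trajectory with \(trajectory.projectedPoints.count) points")
        }
    }

    // Call once per frame, in presentation order.
    func process(_ sampleBuffer: CMSampleBuffer) {
        let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer,
                                            orientation: .up,
                                            options: [:])
        try? handler.perform([request])
    }
}
```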

  • WWDC20

Make apps smarter with Natural Language

Explore how you can leverage the Natural Language framework to better analyze and understand text. Learn how to draw meaning from text using the framework's built-in word and sentence embeddings, and how to create your own custom embeddings for specific needs. We’ll show you how to use samples...
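
A minimal sketch of the built-in word and sentence embeddings; the example words and sentences are arbitrary:

```swift
import NaturalLanguage

// Minimal sketch: use the built-in English embeddings to compare meaning,
// not just spelling.
if let wordEmbedding = NLEmbedding.wordEmbedding(for: .english) {
    // Distance is small for semantically related words.
    let distance = wordEmbedding.distance(between: "bicycle", and: "motorcycle")
    print("bicycle vs. motorcycle distance: \(distance)")

    // Nearest neighbors in the embedding space.
    let neighbors = wordEmbedding.neighbors(for: "coffee", maximumCount: 5)
    print(neighbors)
}

if let sentenceEmbedding = NLEmbedding.sentenceEmbedding(for: .english) {
    // Whole sentences can be compared the same way.
    let d = sentenceEmbedding.distance(between: "How do I reset my password?",
                                       and: "I forgot my login credentials.")
    print("Sentence distance: \(d)")
}
```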

  • WWDC20

Build Image and Video Style Transfer models in Create ML

Bring stylized effects to your photos and videos with Style Transfer in Create ML. Discover how you can train models in minutes that make it easy to bring creative visual features to your app. Learn about the training process and the options you have for controlling the results. And we’ll explore...
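
The training itself happens in Create ML; once a style model is exported and added to an app, applying it typically goes through Vision. A rough sketch, where "StyleTransferModel" is a placeholder for a model you have trained:

```swift
import CoreML
import Vision

// Minimal sketch: apply a trained style transfer model to an image.
// "StyleTransferModel" is a placeholder; output handling depends on your model.
func stylize(_ image: CGImage) throws -> CVPixelBuffer? {
    let config = MLModelConfiguration()
    config.computeUnits = .all   // Let Core ML pick CPU, GPU, or Neural Engine.

    guard let modelURL = Bundle.main.url(forResource: "StyleTransferModel",
                                         withExtension: "mlmodelc") else { return nil }
    let coreMLModel = try MLModel(contentsOf: modelURL, configuration: config)
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    var stylized: CVPixelBuffer?
    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        // Style transfer models produce an image-sized pixel buffer as output.
        stylized = (request.results?.first as? VNPixelBufferObservation)?.pixelBuffer
    }
    request.imageCropAndScaleOption = .scaleFill

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
    return stylized
}
```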

  • WWDC20

Control training in Create ML with Swift

With the Create ML framework you have more power than ever to easily develop models and automate workflows. We'll show you how to explore and interact with your machine learning models while you train them, helping you get a better model quickly. Discover how training control in Create ML can...
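
As a rough sketch of that workflow using the asynchronous training-job API (MLJob) and Combine; the paths are placeholders and the exact publisher names should be treated as assumptions, not verbatim from the session:

```swift
import Combine
import CreateML
import Foundation

// Rough sketch: start an asynchronous training job on macOS, observe
// checkpoints while it runs, and save the finished model.
var subscriptions = Set<AnyCancellable>()

let data = MLActionClassifier.DataSource.labeledDirectories(
    at: URL(fileURLWithPath: "/path/to/TrainingVideos"))

// Session parameters let training checkpoint into (and resume from) a directory.
let sessionParameters = MLTrainingSessionParameters(
    sessionDirectory: URL(fileURLWithPath: "/path/to/Session"),
    reportInterval: 10,
    checkpointInterval: 100,
    iterations: 1000)

let job = try MLActionClassifier.train(trainingData: data,
                                       sessionParameters: sessionParameters)

// Checkpoints arrive while the job runs; each can be inspected or resumed from.
job.checkpoints.sink { checkpoint in
    print("Saved checkpoint: \(checkpoint)")
}
.store(in: &subscriptions)

// The result publisher delivers the trained model (or an error) when training ends.
job.result.sink { completion in
    print("Training finished: \(completion)")
} receiveValue: { classifier in
    try? classifier.write(to: URL(fileURLWithPath: "/path/to/ActionClassifier.mlmodel"))
}
.store(in: &subscriptions)
```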

  • WWDC20

Get models on device using Core ML Converters

With Core ML you can bring incredible machine learning models to your app and run them entirely on-device. And when you use Core ML Converters, you can incorporate almost any trained model from TensorFlow or PyTorch and take full advantage of the GPU, CPU, and Neural Engine. Discover everything you...
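
The conversion itself runs in Python with coremltools; on the app side, the converted model loads like any other Core ML model. A minimal sketch of loading one with all compute units enabled, where "ConvertedModel" is a placeholder name:

```swift
import CoreML

// Minimal sketch: load a converted model and let Core ML schedule it across
// the CPU, GPU, and Neural Engine.
func loadConvertedModel() throws -> MLModel? {
    guard let url = Bundle.main.url(forResource: "ConvertedModel",
                                    withExtension: "mlmodelc") else { return nil }

    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all        // CPU + GPU + Neural Engine.
    // Use .cpuOnly or .cpuAndGPU to constrain execution, e.g. for debugging.

    return try MLModel(contentsOf: url, configuration: configuration)
}
```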

  • WWDC20

Build customized ML models with the Metal Performance Shaders Graph

Discover the Metal Performance Shaders (MPS) Graph, which extends Metal’s compute capabilities to multi-dimensional tensors. MPS Graph builds on the highly tuned library of data parallel primitives that are vital to machine learning and leverages the tremendous power of the GPU. Explore how MPS...
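
A minimal sketch of the Swift API, building a toy graph that adds two tensors and runs it on the default Metal device:

```swift
import Metal
import MetalPerformanceShaders
import MetalPerformanceShadersGraph

// Minimal sketch: build a tiny MPSGraph that adds two tensors and run it on the GPU.
guard let device = MTLCreateSystemDefaultDevice() else { fatalError("No Metal device") }

let graph = MPSGraph()

// Two 1-D placeholder tensors of four floats each.
let a = graph.placeholder(shape: [4], dataType: .float32, name: "a")
let b = graph.placeholder(shape: [4], dataType: .float32, name: "b")
let sum = graph.addition(a, b, name: "sum")

// Wrap input values as tensor data the graph can consume.
func tensorData(_ values: [Float]) -> MPSGraphTensorData {
    let data = values.withUnsafeBufferPointer { Data(buffer: $0) }
    return MPSGraphTensorData(device: MPSGraphDevice(mtlDevice: device),
                              data: data,
                              shape: [4],
                              dataType: .float32)
}

// Execute the graph and read the result tensor back.
let results = graph.run(feeds: [a: tensorData([1, 2, 3, 4]),
                                b: tensorData([10, 20, 30, 40])],
                        targetTensors: [sum],
                        targetOperations: nil)

var output = [Float](repeating: 0, count: 4)
results[sum]?.mpsndarray().readBytes(&output, strideBytes: nil)
print(output)   // [11.0, 22.0, 33.0, 44.0]
```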