Streaming is available in most browsers, and in the Developer app.
Bring your machine learning and AI models to Apple silicon
Learn how to optimize your machine learning and AI models to leverage the power of Apple silicon. Review model conversion workflows to prepare your models for on-device deployment. Understand model compression techniques that are compatible with Apple silicon, and at what stages in your model deployment workflow you can apply them. We'll also explore the tradeoffs between storage size, latency, power usage, and accuracy.
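One compression technique discussed in this family of sessions is palettization: replacing a layer's weights with indices into a small learned lookup table. The toy NumPy sketch below illustrates the idea only — it is not the Core ML Tools API (in practice you would use the `coremltools.optimize` workflows covered in the related compression session), and the k-means routine and sizes here are illustrative assumptions.

```python
import numpy as np

def palettize(weights: np.ndarray, n_bits: int = 2, iters: int = 10):
    """Toy k-means palettization: map each weight to the nearest entry
    of a learned lookup table with 2**n_bits centroids.

    Returns (indices, table) such that table[indices] approximates weights.
    """
    k = 2 ** n_bits
    flat = weights.ravel()
    # Initialize centroids evenly across the observed weight range.
    centroids = np.linspace(flat.min(), flat.max(), k)
    for _ in range(iters):
        # Assign each weight to its nearest centroid, then recenter.
        idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
        for j in range(k):
            members = flat[idx == j]
            if members.size:
                centroids[j] = members.mean()
    idx = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    return idx.reshape(weights.shape), centroids

# Example: compress a random 64x64 float32 weight matrix to 2-bit indices.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64)).astype(np.float32)
indices, table = palettize(w, n_bits=2)
approx = table[indices]
```

The storage-vs-accuracy tradeoff the abstract mentions is visible here: the 2-bit version stores one small table plus 4-level indices instead of full float32 weights, at the cost of a quantization error that grows as `n_bits` shrinks.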
Chapters
- 0:00 - Introduction
- 2:47 - Model compression
- 13:35 - Stateful model
- 17:08 - Transformer optimization
- 26:24 - Multifunction model
Resources
Related Videos
WWDC24
- Deploy machine learning and AI models on-device with Core ML
- Explore machine learning on Apple platforms
- Support real-time ML inference on the CPU
WWDC23
- Improve Core ML integration with async prediction
- Use Core ML Tools for machine learning model compression