Core ML
Core ML is optimized for on-device performance of a broad variety of model types by leveraging Apple silicon and minimizing memory footprint and power consumption.
What’s new
Updates to Core ML will help you optimize and run advanced generative machine learning and AI models on device faster and more efficiently. Core ML Tools offers more granular and composable weight compression techniques to help you bring your large language models and diffusion models to Apple silicon. Models can now hold multiple functions and efficiently manage state, enabling more flexible and efficient execution of large language models and adapters. The Core ML framework also adds a new MLTensor type that provides an efficient, simple, and familiar API for expressing operations on multi-dimensional arrays. Core ML performance reports in Xcode have also been updated to provide more insight into the support for, and estimated cost of, each operation in your model.
Experience more with Core ML
Run models fully on-device
Core ML models run strictly on the user’s device, removing the need for a network connection and keeping your app responsive and your users’ data private.
Run advanced machine learning and AI models
Core ML supports generative AI models through advanced model compression, stateful models, and efficient execution of transformer model operations.
Convert models to Core ML
Core ML Tools makes it easier than ever to convert models from libraries like TensorFlow and PyTorch to the Core ML format.