Core ML is optimized for on-device performance of a broad variety of model types by leveraging Apple hardware and minimizing memory footprint and power consumption.
Updates to the Core ML framework bring even faster model loading and inference. The new Async Prediction API simplifies building interactive ML-powered experiences and helps maximize hardware utilization. The new Core ML Tools optimization module helps you compress and optimize your models for deployment on Apple hardware: weight pruning, quantization, and palettization utilities can be applied during model conversion, or during training in frameworks like PyTorch to help preserve accuracy under compression.
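To give a rough sense of what the quantization utilities do, here is a toy NumPy sketch of symmetric linear weight quantization. This is an illustration of the idea only, not the coremltools API; the function names are invented for this example.

```python
import numpy as np

def linear_quantize(weights, nbits=8):
    """Symmetric linear quantization: map floats to signed nbits integers."""
    levels = 2 ** (nbits - 1) - 1          # e.g. 127 for 8-bit
    scale = np.abs(weights).max() / levels  # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the quantized values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.0, 0.25, 0.75], dtype=np.float32)
q, scale = linear_quantize(w)
w_hat = dequantize(q, scale)
# per-weight reconstruction error is bounded by scale / 2
```

Pruning instead zeroes out small weights so they can be stored sparsely, and palettization replaces each weight with an index into a small lookup table of shared values; the `coremltools.optimize` module provides production implementations of all three techniques.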
Experience more with Core ML
Run models fully on-device
Core ML models run strictly on the user’s device and remove any need for a network connection, keeping your app responsive and your users’ data private.
Run advanced neural networks
Core ML supports the latest models, such as cutting-edge neural networks designed to understand images, video, sound, and other rich media.
Convert models to Core ML
Models from frameworks like TensorFlow or PyTorch can be converted to Core ML using Core ML Tools more easily than ever before.
Personalize models on-device
Models bundled in apps can be updated with user data on-device, helping models stay relevant to user behavior without compromising privacy.