
What’s new in Machine learning & AI
Dive into the latest key technologies and capabilities.
Foundation Models framework
Through the Foundation Models framework, you now have direct access to the on-device foundation model at the core of Apple Intelligence, so you can build experiences that are smart, private, and work without an internet connection. With native Swift support, you can tap into the model with as few as three lines of code. Use the framework to power intelligent features in your app, such as text extraction and summarization. Capabilities like guided generation and tool calling are built into the framework, making it easier than ever to bring intelligent experiences into your existing apps and games.
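A minimal sketch of what "a few lines of code" can look like with the Foundation Models framework. This assumes the `LanguageModelSession` API and runs in an async context; the prompt string is hypothetical and exact signatures may differ:

```swift
import FoundationModels

// Sketch, not a definitive implementation: create a session with the
// on-device model and ask for a response (must run in an async context).
let session = LanguageModelSession()
let response = try await session.respond(
    to: "Summarize this note in one sentence: picked up groceries, then hiked the ridge trail."
)
print(response.content)  // the model's generated text
```

Guided generation builds on the same session type, letting you request output decoded directly into your own Swift types instead of free-form text.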
Vision framework, SpeechAnalyzer, BNNSGraph, Metal4
The Vision framework now includes advanced document recognition and camera lens smudge detection, and the new SpeechAnalyzer API provides improved speech-to-text, helping your app or game better handle input from users. The BNNSGraph Builder API lets you write graphs of operations in Swift to build pre- and post-processing routines and small models. To boost performance, Metal 4 offers programmable ML execution within the Metal timeline, so you can optimize your app's performance and integrate powerful AI capabilities.
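As an illustration of the document recognition capability, here is a hedged sketch using Vision's Swift-native request style. The request and observation names (`RecognizeDocumentsRequest`, `transcript`) follow the new API, but treat them as assumptions, and the image path is hypothetical:

```swift
import Vision
import Foundation

// Sketch: recognize the structure and text of a document in an image.
// Runs in an async context; the file URL below is a placeholder.
let request = RecognizeDocumentsRequest()
let observations = try await request.perform(
    on: URL(fileURLWithPath: "/path/to/receipt.png")
)
for observation in observations {
    // Each observation describes a detected document; print its text content.
    print(observation.document.transcript)
}
```

The same async `perform(on:)` pattern applies across the modern Vision request types, so adopting document recognition looks much like any other Vision request in Swift.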
MLX
Improvements to the open source library MLX help you efficiently experiment with, fine-tune, and train large language models on Apple silicon for your applications and projects.
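For a feel of MLX from Swift, here is a tiny array-computation sketch, assuming the `mlx-swift` package and its `MLXArray` type; initializer and operator details may differ from the shipping API:

```swift
import MLX

// Sketch: MLX arrays are lazily evaluated on Apple silicon's unified memory.
let a = MLXArray([1, 2, 3, 4])
let b = a * 2 + 1   // element-wise arithmetic, evaluated on demand
print(b)
```

The same array primitives underpin the library's higher-level training and fine-tuning utilities for large language models.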