Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under the Machine Learning & AI topic. Each post below is followed by its reply count, boost count, view count, and most recent activity date.

InferenceError with Apple Foundation Model – Context Length Exceeded on macOS 26.0 Beta
Hello Team,

I'm currently working on a proof of concept using Apple's Foundation Models for a RAG-based chat system on my MacBook Pro with the M1 Max chip.

Environment details:
- macOS: 26.0 Beta
- Xcode: 26.0 beta 2 (17A5241o)
- Target platform: iPad (the iPhone simulator does not support Foundation Models)

While testing, even with very small input prompts to the LLM, I intermittently encounter the following error:

InferenceError::inference-Failed::Failed to run inference: Context length of 4096 was exceeded during singleExtend.

Has anyone else experienced this issue? Are there known limitations or workarounds for context-length handling in this setup? Any insights would be appreciated. Thank you! (See the token-budget sketch after this post.)
3
0
230
Jul ’25
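For the 4,096-token context window hit above (and discussed in the WWDC25 Group Lab summary below), a rough client-side budget check can catch oversized prompts before they ever reach the model. This is a minimal sketch, not Apple's API for token counting: it leans on the 3–4 characters-per-token rule of thumb from the Group Lab answers, and the trimmedContext helper name is illustrative.

import Foundation

/// Rough token estimate using the ~3 characters-per-token rule of thumb
/// mentioned in the Group Lab answers. A heuristic, not a real tokenizer.
func estimatedTokenCount(for text: String) -> Int {
    max(1, text.count / 3)
}

/// Illustrative helper: keeps instructions, retrieved RAG context, and the
/// question under a chosen input budget, leaving room for the model's output
/// inside the 4,096-token window.
func trimmedContext(_ chunks: [String], instructions: String, question: String,
                    inputBudget: Int = 3_000) -> [String] {
    var used = estimatedTokenCount(for: instructions) + estimatedTokenCount(for: question)
    var kept: [String] = []
    for chunk in chunks {
        let cost = estimatedTokenCount(for: chunk)
        guard used + cost <= inputBudget else { break }
        kept.append(chunk)
        used += cost
    }
    return kept
}

Anything the budget rejects can be summarized in an earlier pass or dropped; if the model still reports an exceeded context, starting a fresh LanguageModelSession (so earlier turns don't accumulate in the transcript) is the other lever the Group Lab answers mention.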
InferenceError referencing context length in FoundationModels framework
I'm experimenting with downloading an audio file of spoken content, using the Speech framework to transcribe it, then using FoundationModels to clean up the formatting to add paragraph breaks and such. I have this code to do that cleanup:

private func cleanupText(_ text: String) async throws -> String? {
    print("Cleaning up text of length \(text.count)...")
    let session = LanguageModelSession(instructions: "The content you read is a transcription of a speech. Separate it into paragraphs by adding newlines. Do not modify the content - only add newlines.")
    let response = try await session.respond(to: .init(text), generating: String.self)
    return response.content
}

The content length is about 29,000 characters. And I get this error:

InferenceError::inferenceFailed::Failed to run inference: Context length of 4096 was exceeded during singleExtend.

Is 4096 a reference to a max input length? Or is this a bug? This is running on an M1 iPad Air, with iPadOS 26 Seed 1. (See the chunking sketch after this post.)
5
0
314
Jul ’25
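A ~29,000-character transcript is roughly 7,000–10,000 tokens by the 3–4 characters-per-token rule of thumb, so it cannot fit in the 4,096-token window in one call. One workaround is to split the transcript into chunks that fit and run each chunk through its own session. This is a minimal sketch under that assumption; the chunk size and the sentence-based splitting are illustrative choices, not Apple-recommended values.

import Foundation
import FoundationModels

/// Splits a long transcript into pieces small enough for the ~4,096-token
/// context window, leaving headroom for instructions and output.
/// 6,000 characters ≈ 1,500–2,000 tokens by the rule-of-thumb estimate.
func chunked(_ text: String, maxChunkLength: Int = 6_000) -> [String] {
    var chunks: [String] = []
    var current = ""
    for sentence in text.split(separator: ".", omittingEmptySubsequences: true) {
        let piece = sentence.trimmingCharacters(in: .whitespaces) + "."
        if current.count + piece.count > maxChunkLength, !current.isEmpty {
            chunks.append(current)
            current = ""
        }
        current += (current.isEmpty ? "" : " ") + piece
    }
    if !current.isEmpty { chunks.append(current) }
    return chunks
}

func cleanupLongText(_ text: String) async throws -> String {
    var result: [String] = []
    for chunk in chunked(text) {
        // A fresh session per chunk keeps the transcript (and token count) small.
        let session = LanguageModelSession(instructions:
            "Separate this speech transcription into paragraphs by adding newlines. Do not modify the content.")
        let response = try await session.respond(to: chunk)
        result.append(response.content)
    }
    return result.joined(separator: "\n\n")
}

The trade-off is that paragraph decisions are made per chunk, so breaks near chunk boundaries may need a cheap post-processing pass.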
A Summary of the WWDC25 Group Lab - Machine Learning and AI Frameworks
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Machine Learning and AI Frameworks.

What are you most excited about in the Foundation Models framework?
The Foundation Models framework provides access to an on-device Large Language Model (LLM), enabling entirely on-device processing for intelligent features. This allows you to build features such as personalized search suggestions and dynamic NPC generation in games. The combination of guided generation and streaming capabilities is particularly exciting for creating delightful animations and features with reliable output. The seamless integration with SwiftUI and the new design material Liquid Glass is also a major advantage.

When should I still bring my own LLM via Core ML?
It's generally recommended to first explore Apple's built-in system models and APIs, including the Foundation Models framework, as they are highly optimized for Apple devices and cover a wide range of use cases. However, Core ML is still valuable if you need more control or choice over the specific model being deployed, such as customizing existing system models or augmenting prompts. Core ML provides the tools to get these models on-device, but you are responsible for model distribution and updates.

Should I migrate PyTorch code to MLX?
MLX is an open-source, general-purpose machine learning framework designed for Apple Silicon from the ground up. It offers a familiar API, similar to PyTorch, and supports C, C++, Python, and Swift. MLX emphasizes unified memory, a key feature of Apple Silicon hardware, which can improve performance. It's recommended to try MLX and see if its programming model and features better suit your application's needs. MLX shines when working with state-of-the-art, larger models.

Can I test Foundation Models in the Xcode simulator or on a device?
Yes, you can use the Xcode simulator to test Foundation Models use cases. However, your Mac must be running macOS Tahoe. You can test on a physical iPhone running iOS 26 by connecting it to your Mac and running Playgrounds or live previews directly on the device.

Which on-device models will be supported? Any open-source models?
The Foundation Models framework currently supports Apple's first-party models only. This allows for platform-wide optimizations, improving battery life and reducing latency. While Core ML can be used to integrate open-source models, it's generally recommended to first explore the built-in system models and APIs provided by Apple, including those in the Vision, Natural Language, and Speech frameworks, as they are highly optimized for Apple devices. For frontier models, MLX can run very large models.

How often will the Foundation Model be updated? How do we test for stability when the model is updated?
The Foundation Model will be updated in sync with operating system updates. You can test your app against new model versions during the beta period by downloading the beta OS and running your app. It is highly recommended to create an "eval set" of golden prompts and responses to evaluate the performance of your features as the model changes or as you tweak your prompts. Report any unsatisfactory or satisfactory cases using Feedback Assistant.
Which on-device model/API can I use to extract text data from images such as nutrition labels, ingredient lists, cashier receipts, etc.? Thank you.
The Vision framework offers RecognizeDocumentRequest, which is specifically designed for these use cases. It not only recognizes text in images but also provides the structure of the document, such as rows in a receipt or the layout of a nutrition label. It can also identify data like phone numbers, addresses, and prices.

What is the context window for the model? What are max tokens in and max tokens out?
The context window for the Foundation Model is 4,096 tokens. The split between input and output tokens is flexible. For example, if you input 4,000 tokens, you'll have 96 tokens remaining for the output. The API takes in text, converting it to tokens under the hood. When estimating token count, a good rule of thumb is 3-4 characters per token for languages like English, and 1 character per token for languages like Japanese or Chinese. Handle potential errors gracefully by asking for shorter prompts or starting a new session if the token limit is exceeded.

Is there a rate limit for the Foundation Models API that is limited by power or temperature conditions on the iPhone?
Yes, there are rate limits, particularly when your app is in the background. A budget is allocated for background app usage, but exceeding it will result in rate-limiting errors. In the foreground, there is no rate limit unless the device is under heavy load (e.g., camera open, game mode). The system dynamically balances performance, battery life, and thermal conditions, which can affect token throughput. Use appropriate quality-of-service settings for your tasks (e.g., background priority for background work) to help the system manage resources effectively.

Do the Foundation Models support languages other than English?
Yes, the on-device Foundation Model is multilingual and supports all languages supported by Apple Intelligence. To get the model to output in a specific language, prompt it with instructions indicating the user's preferred language using the locale API (e.g., "The user's preferred language is en-US"). Putting the instructions in English, but the user prompt in the desired output language, is a recommended practice.

Are larger server-based models available through Foundation Models?
No, the Foundation Models API currently only provides access to the on-device Large Language Model at the core of Apple Intelligence. It does not support server-side models. On-device models are preferred for privacy and performance reasons.

Is it possible to run Retrieval-Augmented Generation (RAG) using the Foundation Models framework?
Yes, it is possible to run RAG on-device, but the Foundation Models framework does not include a built-in embedding model. You'll need to use a separate database to store vectors and implement nearest-neighbor or cosine-distance searches. The Natural Language framework offers simple word and sentence embeddings that can be used. Consider using a combination of Foundation Models and Core ML, using Core ML for your embedding model. (See the embedding-search sketch after this post.)
1
0
695
Jun ’25
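For the RAG question above, here is a minimal sketch of the "bring your own embeddings" approach the panel describes, using the Natural Language framework's built-in sentence embeddings and a brute-force cosine-similarity search. The in-memory store and the English-only embedding are simplifying assumptions; a real app would likely persist vectors and pick the embedding for the user's language.

import Foundation
import NaturalLanguage

/// Tiny in-memory vector store built on NLEmbedding sentence vectors.
struct SimpleVectorStore {
    private let embedding = NLEmbedding.sentenceEmbedding(for: .english)
    private var entries: [(text: String, vector: [Double])] = []

    mutating func add(_ text: String) {
        guard let vector = embedding?.vector(for: text) else { return }
        entries.append((text, vector))
    }

    /// Returns the stored texts most similar to the query, by cosine similarity.
    func nearest(to query: String, count: Int = 3) -> [String] {
        guard let queryVector = embedding?.vector(for: query) else { return [] }
        return entries
            .map { (text: $0.text, score: cosineSimilarity($0.vector, queryVector)) }
            .sorted { $0.score > $1.score }
            .prefix(count)
            .map { $0.text }
    }

    private func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
        let dot = zip(a, b).map(*).reduce(0, +)
        let magA = sqrt(a.map { $0 * $0 }.reduce(0, +))
        let magB = sqrt(b.map { $0 * $0 }.reduce(0, +))
        return magA > 0 && magB > 0 ? dot / (magA * magB) : 0
    }
}

At query time you would run nearest(to:) over your documents, then hand the top results to a LanguageModelSession as context, keeping the combined prompt under the 4,096-token window discussed above.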
Swipe-to-Type Broken in iOS 26 Beta 1 & 2 Siri Typing Mode
I’ve been testing silent Siri engagement via typing on iOS 18 and also on iOS 26 beta 1 and beta 2. While normal typing works perfectly in type-to-Siri mode, I’ve noticed that swipe-to-type gestures don’t work within Siri’s input field. Interestingly, you still feel the usual haptic feedback associated with swipe typing, but no text appears in the Siri text box. Swipe-to-type continues to work flawlessly in other apps like Messages and Notes, so this seems to be an issue specific to Siri’s typing input handler in these betas. Hopefully, it will be fixed in the next release because swipe typing is essential to my silent Siri workflow.
1
0
101
Jun ’25
visionOS 26 beta 2: Symbol Not Found on Foundation Models
When I try to run visionOS 26 beta 2 on my device, the app crashes on launch:

dyld[904]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
  Referenced from: <A71932DD-53EB-39E2-9733-32E9D961D186> /private/var/containers/Bundle/Application/53866099-99B1-4BBD-8C94-CD022646EB5D/VisionPets.app/VisionPets.debug.dylib
  Expected in: <F68A7984-6B48-3958-A48D-E9F541868C62> /System/Library/Frameworks/FoundationModels.framework/FoundationModels

dyld config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/usr/lib/libLogRedirect.dylib:/usr/lib/libBacktraceRecording.dylib:/usr/lib/libMainThreadChecker.dylib:/usr/lib/libViewDebuggerSupport.dylib:/System/Library/PrivateFrameworks/GPUToolsCapture.framework/GPUToolsCapture

Message from debugger: Terminated due to signal 6
5
0
154
Jun ’25
How to pass data to FoundationModels with a stable identifier
For example: I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM model and have the model rewrite the items so they start with an action verb. I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed. I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model. I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably.

Any suggestions? I could do them one at a time, but I also have a use case where I'm asking for them to be organized in sections, and while I've instructed the model not to rename anything, it still happens. It's just all very nondeterministic. (See the guided-generation sketch after this post.)
2
0
214
Jun ’25
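One pattern worth trying for the post above: keep the GUIDs out of the prompt entirely, send the model only compact integer indices, and map back to the GUIDs on your side. This is a hedged sketch that assumes the @Generable and @Guide guided-generation macros and that an array of a @Generable type can be requested directly (otherwise wrap it in a containing @Generable struct); the type and field names are illustrative, not an Apple-documented recipe.

import Foundation
import FoundationModels

// Illustrative guided-generation type: the model only ever sees small
// integer indices, never the GUIDs themselves.
@Generable
struct RewrittenToDo {
    @Guide(description: "The index of the original to-do this rewrite corresponds to.")
    var index: Int
    @Guide(description: "The to-do rewritten to start with an action verb.")
    var text: String
}

func rewrite(_ todos: [(id: UUID, title: String)]) async throws -> [UUID: String] {
    let numbered = todos.enumerated()
        .map { "\($0.offset): \($0.element.title)" }
        .joined(separator: "\n")

    let session = LanguageModelSession(instructions:
        "Rewrite each numbered to-do so it starts with an action verb. Keep the same index for each item.")
    let response = try await session.respond(to: numbered, generating: [RewrittenToDo].self)

    // Map the model's indices back to the original GUIDs locally.
    var result: [UUID: String] = [:]
    for item in response.content where todos.indices.contains(item.index) {
        result[todos[item.index].id] = item.text
    }
    return result
}

Because the indices are tiny relative to GUIDs, they add little prompt noise, and the mapping back to identifiers never depends on the model echoing a GUID correctly.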
Failing to run SystemLanguageModel inference with custom adapter
Hi, I have trained a basic adapter using the adapter training toolkit. I am trying a very basic example of loading it and running inference with it, but am getting the following error:

Passing along InferenceError::inferenceFailed::loadFailed::Error Domain=com.apple.TokenGenerationInference.E5Runner Code=0 "Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006)." UserInfo={NSLocalizedDescription=Failed to load model: ANE adapted model load failure: createProgramInstanceWithWeights:modelToken:qos:baseModelIdentifier:owningPid:numWeightFiles:error:: Program load new instance failure (0x170006).} in response to ExecuteRequest

Any ideas / direction? For testing I am including the .fmadapter file inside the app bundle. This is where I load it:

@State private var session: LanguageModelSession? // = LanguageModelSession()

func loadAdapter() async throws {
    if let assetURL = Bundle.main.url(forResource: "qasc---afm---4-epochs-adapter", withExtension: "fmadapter") {
        print("Asset URL: \(assetURL)")
        let adapter = try SystemLanguageModel.Adapter(fileURL: assetURL)
        let adaptedModel = SystemLanguageModel(adapter: adapter)
        session = LanguageModelSession(model: adaptedModel)
        print("Loaded adapter and updated session")
    } else {
        print("Asset not found in the main bundle.")
    }
}

This seems to work fine, as I get to the "Loaded adapter and updated session" log. However, when the inference code below runs I get the aforementioned error:

func sendMessage(_ msg: String) {
    self.loading = true
    if let session = session {
        Task {
            do {
                let modelResponse = try await session.respond(to: msg)
                DispatchQueue.main.async {
                    self.response = modelResponse.content
                    self.loading = false
                }
            } catch {
                print("Error: \(error)")
                DispatchQueue.main.async {
                    self.loading = false
                }
            }
        }
    }
}
3
0
176
Jun ’25
Overly strict foundation model rate limit when used in app extension
I am calling into an app extension from a Safari Web Extension (sendNativeMessage, which in turn results in a call to NSExtensionRequestHandling’s beginRequest). My Safari extension aims to make use of the new foundation models for some of the features it provides. In my testing, I hit the rate limit by sending 4 requests, waiting 30 seconds between each. This makes the FoundationModels framework (which would otherwise serve my use case perfectly well) unusable in this context, because the model is called in response to user input, and this rate of user input is perfectly plausible in a real-world scenario. The error thrown as a result of the rate limit is "Safety guardrail was triggered after consecutive failures during streaming.", but looking at the system logs in Console.app shows the rate limit as the real culprit.

My suggestions:
- Please introduce sensible rate limits for app extensions, through an entitlement if need be. If it is rate limited to 1 request every couple of seconds, that would already fix the issue for me.
- Please document the rate limit.
- Please make the thrown error reflect that it is the result of a rate limit and not a generic guardrail violation.
- IMPORTANT: please indicate in the thrown error when it is safe to try again.

Filed a feedback here: FB18332004 (See the client-side throttling sketch after this post.)
3
1
173
Jun ’25
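Until the limits are documented, one mitigation for the situation above is to serialize model calls in the extension and retry with a growing delay when a request fails. This is a generic Swift concurrency sketch; the spacing and delay values are guesses, and it treats any thrown error as potentially rate-limit-related because the framework does not currently distinguish them.

import Foundation
import FoundationModels

/// Serializes Foundation Models requests and retries with backoff.
/// Delay values are illustrative, not documented limits.
actor ThrottledModelClient {
    private let session = LanguageModelSession()
    private var lastRequest = Date.distantPast
    private let minimumSpacing: TimeInterval = 5   // guess: space requests out

    func respond(to prompt: String, maxAttempts: Int = 3) async throws -> String {
        var delay: TimeInterval = 2
        for attempt in 1...maxAttempts {
            // Enforce minimum spacing between consecutive requests.
            let wait = minimumSpacing - Date().timeIntervalSince(lastRequest)
            if wait > 0 {
                try await Task.sleep(nanoseconds: UInt64(wait * 1_000_000_000))
            }
            lastRequest = Date()
            do {
                return try await session.respond(to: prompt).content
            } catch where attempt < maxAttempts {
                // Could be the rate limit surfacing as a guardrail error; back off and retry.
                try await Task.sleep(nanoseconds: UInt64(delay * 1_000_000_000))
                delay *= 2
            }
        }
        // Not reachable: the final attempt either returns or rethrows above.
        preconditionFailure("unreachable")
    }
}

This does not raise the budget; it only avoids burning attempts faster than the budget refills, so the feedback request above is still the real fix.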
Foundation Model Framework
Greetings! I was trying to get a response from the LanguageModelSession, but I just keep getting the following:

Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros. I was wondering if anyone had any advice? (See the availability-check sketch after this post.)
15
3
1k
Jun ’25
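Asset errors like the one above usually mean the on-device model isn't actually usable in that environment (Apple Intelligence off, unsupported device or simulator host, or assets still downloading). Checking model availability before creating a session gives a clearer signal than the raw asset error. This is a hedged sketch: it assumes the SystemLanguageModel.default.availability surface, and the unavailability reasons named in the comment are examples, not an exhaustive list.

import FoundationModels

/// Returns a session only when the on-device model reports itself available.
func makeSessionIfAvailable() -> LanguageModelSession? {
    let model = SystemLanguageModel.default
    if case .available = model.availability {
        return LanguageModelSession()
    } else {
        // Typically .unavailable(reason), e.g. Apple Intelligence disabled,
        // an unsupported device/simulator, or model assets still downloading.
        print("Foundation model unavailable: \(model.availability)")
        return nil
    }
}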
Foundation model sandbox restriction error
I'm seeing this error a lot in the console log of my iPhone 15 Pro (Apple Intelligence enabled):

com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1

Are there entitlements / permissions I need to enable in Xcode that I forgot to do?

Code example — here's how I'm initializing the language model session:

private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
        my instructions
        """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}
2
0
146
Jun ’25
Safety Guardrail errors for tiny prompt (dropped into large app)
I was able to open a new project and play around with the Foundation Model, but when I dropped this class into a production app (with a lot of files) I'm running into Safety Guardrail errors for this very small prompt. Specifically it's "Safety guardrail was triggered after consecutive failures during streaming." Does it have something to do with the size of the app? I don't know what else to try to get it to work.

import FoundationModels
import Playgrounds

@available(iOS 26.0, *)
#Playground {
    Task {
        do {
            let session = LanguageModelSession()
            let prompt = "Write a short story about a talking cat."
            let response = try await session.respond(to: prompt)
            print(response)
        } catch {
            print("Error: \(error)")
        }
    }
}
3
2
205
Jun ’25
macOS 26 Beta 2 - Foundation Models - Symbol not found
It seems like there was an undocumented change that made the Transcript.init(entries: [Transcript.Entry]) initializer private, which broke my application, which relies on (manual) reconstruction of Transcript entries. It worked fine on beta 1; on beta 2 there's this error:

dyld[72381]: Symbol not found: _$s16FoundationModels10TranscriptV7entriesACSayAC5EntryOG_tcfC
  Referenced from: <44342398-591C-3850-9889-87C9458E1440> /Users/mika/experiments/apple-on-device-ai/fm
  Expected in: <66A793F6-CB22-3D1D-A560-D1BD5B109B0D> /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels

Is this part of an API transition? If so, Apple, please update your documentation.
3
0
284
Jun ’25
FoundationModels not supported on Mac Catalyst?
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'. Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels I can create iOS builds using the FoundationModels framework without issues. Hope this will be fixed soon! (See the conditional-import sketch after this post.)

Config:
- Xcode 26.0 beta (17A5241e)
- macOS 26.0 Beta (25A5279m)
- 15-inch, M4, 2025 MacBook Air
2
1
210
Jun ’25
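While the Catalyst module issue stands, one way to keep a shared target building is to compile the Foundation Models code conditionally, so SDKs without the module still build. This is a standard Swift conditional-compilation sketch, not a fix for the missing Catalyst module; the fallback behavior is up to the app.

#if canImport(FoundationModels)
import FoundationModels
#endif

func summarize(_ text: String) async throws -> String {
    #if canImport(FoundationModels)
    if #available(iOS 26.0, macOS 26.0, *) {
        let session = LanguageModelSession(instructions: "Summarize the text in one sentence.")
        return try await session.respond(to: text).content
    }
    #endif
    // Fallback when the framework (or OS version) isn't available,
    // e.g. the Mac Catalyst build described above.
    return String(text.prefix(120))
}

This keeps a single target compiling for iOS and Catalyst while the SDK issue is open; it does not make the feature work on Catalyst.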
Foundation Models Error: Local Sanitizer Asset
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models session:

Error Resource (Local Sanitizer Asset) unavailable error.

import FoundationModels

#Playground {
    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "Tell me 3 colors")
        print(result.content)
    } catch {
        print("Error", error)
    }
}

I couldn't find any resource guiding me on how to solve this. Any help/workaround? Thank you!
1
4
367
Jun ’25
Apple Intelligence stuck at 100% on macOS 26 Beta 1
Hello, I'm unable to develop for Apple Intelligence on my Mac Studio, M1 Max, running macOS 26 beta 1. The models get downloaded and I can also verify that they exist in /System/Library/AssetsV2/, however the download progress remains stuck at 100%. Checking console logs shows the process generativeexperiencesd reporting the following:

My device region and language are set to English (India).

Things I've already tried:
- Changing language and region to English (US)
- Reinstalling macOS
- Trying with a different ISP via hotspot
6
10
358
Jun ’25
SpeechTranscriber time indexes - detect pauses?
I'm experimenting with the new SpeechTranscriber in macOS/iOS 26, transcribing speech from a prerecorded mp4 file. Speed and quality are amazing! I've told the transcriber to include time indexes. Each run is always exactly one word, which can be very useful. When I look at the indexes, the end of one run is always identical to the start of the next run, even if there's a pause. I'd like to identify pauses, perhaps to generate something like phrases for subtitling. With each run of text going into the next I can't do this, other than using punctuation - which might be rather rough. Any suggestions on detecting pauses, or getting that kind of metadata from the transcriber? (See the phrase-grouping sketch after this post.)

Here's a short sample, showing each run with the start, end, and characters in the run:

105.9 --> 107.04 I
107.04 --> 107.16 think
107.16 --> 108.0 more
108.0 --> 108.42 lighting
108.42 --> 108.6 is
108.6 --> 108.72 definitely
108.72 --> 109.2 needed,
109.2 --> 109.92 downtown.
109.98 --> 110.4 My
110.4 --> 110.52 only
110.52 --> 110.7 question
110.7 --> 111.06 is,
111.06 --> 111.48 poll
111.48 --> 111.78 five,
111.78 --> 111.84 that
111.84 --> 112.08 you're
112.08 --> 112.38 increasing
112.38 --> 112.5 the
112.5 --> 113.34 50,000?
113.4 --> 113.58 Where
113.58 --> 113.88 exactly
0
0
126
Jun ’25
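Since the run boundaries mostly butt up against each other, pure gap detection won't work, as the post above notes. A rough alternative is to group the word runs into phrases using the punctuation that is there, plus a duration heuristic: a one-word run that lasts much longer than its character count suggests has probably absorbed a pause. This is an illustrative heuristic only; the threshold values are guesses, not anything SpeechTranscriber exposes.

import Foundation

struct WordRun {
    let start: Double
    let end: Double
    let text: String
}

/// Groups one-word runs into phrases for subtitling.
/// Breaks on phrase punctuation, or when a run looks "too long" for its
/// word (a crude stand-in for a pause absorbed into the run's duration).
func phrases(from runs: [WordRun],
             secondsPerCharacter: Double = 0.08,   // guess: typical speaking rate
             pauseSlack: Double = 0.35) -> [[WordRun]] {
    var result: [[WordRun]] = []
    var current: [WordRun] = []

    for run in runs {
        current.append(run)
        let expected = Double(run.text.count) * secondsPerCharacter
        let looksLikePause = (run.end - run.start) > expected + pauseSlack
        let endsPhrase = run.text.hasSuffix(".") || run.text.hasSuffix("?")
            || run.text.hasSuffix("!") || run.text.hasSuffix(",")
        if endsPhrase || looksLikePause {
            result.append(current)
            current = []
        }
    }
    if !current.isEmpty { result.append(current) }
    return result
}

With the sample above, the punctuation check breaks after "needed,", "downtown.", "is,", "five,", and "50,000?", while the duration check flags runs like "more" (107.16–108.0), which is far longer than a four-character word normally needs and hints at an absorbed pause.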
Is there anywhere to get precompiled WhisperKit models for Swift?
If I try to dynamically load WhisperKit's models, as below, the download never occurs. No error or anything. And at the same time I can still get to the huggingface.co hosting site without any headaches, so it's not a blocking issue.

let config = WhisperKitConfig(
    model: "openai_whisper-large-v3",
    modelRepo: "argmaxinc/whisperkit-coreml"
)

So I have to default to the tiny model, as seen below. I have tried so many ways, using ChatGPT and others, to build the models on my Mac, but too many failures, because I have never dealt with builds like that before. Are there any hosting sites that have the models (small, medium, large) already built where I can download them and just bundle them into my project? I've wasted quite a large amount of time trying to get this done. (See the bundling sketch after this post.)

import Foundation
import WhisperKit

@MainActor
class WhisperLoader: ObservableObject {
    var pipe: WhisperKit?

    init() {
        Task {
            await self.initializeWhisper()
        }
    }

    private func initializeWhisper() async {
        do {
            Logging.shared.logLevel = .debug
            Logging.shared.loggingCallback = { message in
                print("[WhisperKit] \(message)")
            }

            let pipe = try await WhisperKit() // defaults to "tiny"
            self.pipe = pipe
            print("initialized. Model state: \(pipe.modelState)")

            guard let audioURL = Bundle.main.url(forResource: "44pf", withExtension: "wav") else {
                fatalError("not in bundle")
            }
            let result = try await pipe.transcribe(audioPath: audioURL.path)
            print("result: \(result)")
        } catch {
            print("Error: \(error)")
        }
    }
}
0
0
67
Jun ’25
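The prebuilt Core ML models live in the argmaxinc/whisperkit-coreml repository referenced in the post above, so one option is to download a model folder from there once, add it to the app bundle as a folder reference, and point WhisperKit at it instead of downloading at runtime. This sketch assumes the WhisperKit API accepts a local model folder; the modelFolder parameter name and the bundled folder name are assumptions to verify against the WhisperKit version you're using.

import Foundation
import WhisperKit

/// Loads WhisperKit from a model folder shipped inside the app bundle instead
/// of downloading at runtime. Assumes the folder (e.g. "openai_whisper-small",
/// downloaded once from argmaxinc/whisperkit-coreml) was added to the target
/// as a folder reference; `modelFolder` is an assumed parameter name.
func loadBundledWhisper() async throws -> WhisperKit {
    guard let modelFolder = Bundle.main.url(forResource: "openai_whisper-small",
                                            withExtension: nil) else {
        throw CocoaError(.fileNoSuchFile)
    }
    let config = WhisperKitConfig(modelFolder: modelFolder.path)
    return try await WhisperKit(config)
}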