Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore what's possible for your app here.

All subtopics
Posts under the Machine Learning & AI topic. Each post below lists its replies, boosts, views, and latest activity.

Cannot use Language Model in Xcode beta
I've downloaded the Xcode beta and run the sample project "FoundationModelsTripPlanner", but I got this error when trying to generate a response:

    InferenceError::inferenceFailed::Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog}

Device: M1 Pro. Question: is it because the M1 doesn't support this feature?
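A minimal availability check, sketched below using the availability API that other posts in this listing also use, can distinguish a missing model download from unsupported hardware before any request is made:

    import FoundationModels

    // Sketch: inspect availability before generating, so a missing asset
    // surfaces as a readable reason rather than an inference failure.
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Model is ready")
    case .unavailable(let reason):
        print("Model unavailable: \(reason)") // e.g. assets still downloading
    }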
Replies: 1 · Boosts: 1 · Views: 300 · Last activity: Jun ’25
Selecting an output language with Foundation Models
When using Foundation Models, is it possible to ask the model to produce output in a specific language, apart from giving an instruction like "Provide answers in ."? (I tried that and it kind of worked, but it seems fragile.) I haven't noticed an API to do so, and I have a use case where the output should be in a user-selectable language that is not the current system language.
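Absent a dedicated API, the instructions-based approach the poster describes can be made somewhat more robust by pinning the language constraint once per session. A sketch; the helper name and wording are illustrative, not an official API:

    import FoundationModels

    // Sketch: state the output language in the session instructions so every
    // prompt in the session inherits it.
    func makeSession(outputLanguage: String) -> LanguageModelSession {
        LanguageModelSession(instructions: Instructions {
            """
            Respond only in \(outputLanguage), regardless of the language
            of the user's prompt.
            """
        })
    }

    let session = makeSession(outputLanguage: "French")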
Replies: 3 · Boosts: 1 · Views: 551 · Last activity: Jul ’25
Model Rate Limits?
Trying the Foundation Models framework: when I run several sessions in a loop, an error is thrown saying I'm hitting a rate limit. Are these rate limits documented? What's the best practice here? I'm running the model against new content downloaded from a web service, where a single download might contain ~200 items. Each item is relatively small, but there can be that many to process in a loop.
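One common mitigation, shown here as a sketch rather than a documented best practice, is to process items serially and retry with exponential backoff when a request throws, assuming the rate-limit error is transient:

    import FoundationModels

    // Sketch: one item at a time, with exponential backoff on failure.
    func process(_ items: [String]) async {
        for item in items {
            var delay: UInt64 = 1_000_000_000 // 1 second, in nanoseconds
            for attempt in 1...3 {
                do {
                    let session = LanguageModelSession()
                    let response = try await session.respond(to: "Summarize: \(item)")
                    print(response)
                    break
                } catch {
                    if attempt == 3 { print("Giving up on item: \(error)") }
                    try? await Task.sleep(nanoseconds: delay)
                    delay *= 2 // back off before retrying
                }
            }
        }
    }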
Replies: 4 · Boosts: 1 · Views: 589 · Last activity: Jun ’25
Foundation Models flags 'Six Flags Great America' as unsafe
I'm working on a to-do list app that uses SpeechTranscriber and the Foundation Models framework to transcribe a user's voice into text and create to-do items based on it. After about 30 minutes looking at my code, I couldn't figure out why I was failing to generate a to-do for "I need to go to Six Flags Great America tomorrow at 3pm." It turns out I was consistently triggering the Foundation Models safety filter for unsafe content ("May contain unsafe content"). Lesson learned: consider comprehensively logging Foundation Models error states to quickly identify when safety filters are unexpectedly triggered.
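A sketch of the kind of logging the poster recommends. Here, transcribedText is a placeholder for the SpeechTranscriber output, and the error cases follow the GenerationError shape reported elsewhere on this forum:

    import FoundationModels

    // Sketch: surface guardrail violations explicitly instead of failing silently.
    func createToDo(from transcribedText: String) async {
        do {
            let session = LanguageModelSession()
            let response = try await session.respond(to: transcribedText)
            print("Generated: \(response)")
        } catch let error as LanguageModelSession.GenerationError {
            switch error {
            case .guardrailViolation(let context):
                print("Safety filter fired: \(context.debugDescription)")
            default:
                print("Generation failed: \(error)")
            }
        } catch {
            print("Unexpected error: \(error)")
        }
    }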
Replies: 3 · Boosts: 1 · Views: 509 · Last activity: Jul ’25
no tensorflow-metal past tf 2.18?
Hi, we're on TensorFlow 2.20, which now (finally!) supports Python 3.13. tensorflow-metal still only supports TF 2.18, which is over a year old. When can we expect tensorflow-metal support for TF 2.20 (or later)? I bought a Mac thinking I would get great performance from the M-series processors, but here I am using my CPU for my ML projects. And if it's taking this long to release, why not open-source it so the community can keep it up to date? Cheers, Matt
Replies: 1 · Boosts: 1 · Views: 383 · Last activity: Nov ’25
Defining instructions with the Content Tagging model
Hello. The Content Tagging model doesn't seem to obey when I define the type of tag I want in the instructions parameter; the output is always the main topics. The only way I've found to get other tag types, such as emotions, is to use Generable + Guided types. The documentation says instructions are recommended but not mandatory. Maybe I'm setting the instructions incorrectly; see the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and works, but in the example at the bottom I set the same description of emotions as an instruction and it doesn't work. I tried other statements, more and less verbose, and the model never outputs emotions. Could you provide an example using instructions that works? Is the current version of the model not working with instructions?
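For reference, a sketch of the Generable + Guided approach the poster says does work. Type and property names are illustrative, and the content-tagging model is assumed to be selected via SystemLanguageModel's use case:

    import FoundationModels

    @Generable
    struct EmotionTags {
        @Guide(description: "The most prominent emotions expressed in the text")
        var emotions: [String]
    }

    // Sketch: route the request through the content-tagging use case and let
    // the Generable schema steer the tag type.
    func tagEmotions(in text: String) async throws -> EmotionTags {
        let model = SystemLanguageModel(useCase: .contentTagging)
        let session = LanguageModelSession(model: model)
        return try await session.respond(to: text, generating: EmotionTags.self).content
    }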
Replies: 1 · Boosts: 0 · Views: 393 · Last activity: Oct ’25
Insufficient memory for Foundation Models adapter training
I have a MacBook Pro M3 Pro with 18 GB of RAM and was following the instructions to fine-tune the foundation model given here: https://developer.apple.com/apple-intelligence/foundation-models-adapter/ However, while following the code sample in the example Jupyter notebook, my Mac hangs on the second code cell. Specifically:

    from examples.generate import generate_content, GenerationConfiguration
    from examples.data import Message

    output = generate_content(
        [[
            Message.from_system("A conversation between a user and a helpful assistant. Taking the role as a play writer assistant for a kids' play."),
            Message.from_user("Write a script about penguins.")
        ]],
        GenerationConfiguration(temperature=0.0, max_new_tokens=128)
    )
    output[0].response

After some debugging, I was getting the following error:

    RuntimeError: MPS backend out of memory (MPS allocated: 22.64 GB, other allocations: 5.78 MB, max allowed: 22.64 GB). Tried to allocate 52.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).

So is my machine not capable enough to adapter-train Apple's Foundation Model? And if so, what's the recommended spec, and could this be specified somewhere? Thanks!
Replies: 8 · Boosts: 1 · Views: 338 · Last activity: Jul ’25
Making a model with MLLinearRegressor works on Sonoma, but after upgrading to 15.3.1 it no longer does anything
I was generating models using this code:

    import Foundation
    import CreateML
    import TabularData
    import CoreML
    ....

    func makeTheModel(columntopredict: String, training: DataFrame, colstouse: [String], numberofmodels: Int) -> [MLLinearRegressor] {
        var returnmodels = [MLLinearRegressor]()
        for _ in 0...numberofmodels {
            do {
                let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)
                returnmodels.append(tm)
            } catch let error as NSError {
                print("Error: \(error.localizedDescription)")
            }
        }
        return returnmodels
    }

This worked absolutely fine on Sonoma, but after upgrading the OS to 15.3.1 it does absolutely nothing. I get no error messages; the code just pauses. If I look at CPU usage, as soon as it hits the line

    let tm = try MLLinearRegressor(trainingData: training, targetColumn: columntopredict)

the CPU usage drops to 0%. What am I doing wrong? Is there a flag I need to set somewhere in Xcode? This is on an M1 MacBook Pro. Any help would be greatly appreciated.
Replies: 2 · Boosts: 1 · Views: 515 · Last activity: Mar ’25
Foundation Models unavailable for millions of users due to device language restriction - Need per-app language override
Hi everyone, I'm developing an iOS app using Foundation Models and I've hit a critical limitation that I believe affects many developers and millions of users.

The Issue
Foundation Models requires the device system language to be one of the supported languages. If a user has their device set to an unsupported language (Catalan, Dutch, Swedish, Polish, Danish, Norwegian, Finnish, Czech, Hungarian, Greek, Romanian, and many others), SystemLanguageModel.isSupported returns false and the framework is completely unavailable.

Why This Is Problematic
Scenario: a Catalan user has their iPhone set to Catalan (their native language). They want to use an AI chat app in Spanish or English, languages they speak fluently. Current situation:
❌ Foundation Models: completely unavailable
✅ OpenAI GPT-4: works perfectly
✅ Anthropic Claude: works perfectly
✅ Any cloud-based AI: works perfectly
The user must choose between keeping the device in Catalan (cannot use Foundation Models at all) or changing the entire device to Spanish (can use Foundation Models, but terrible UX).

Impact
This affects:
- Millions of users in regions where unsupported languages are official
- Multilingual users who prefer their device in their native language but can comfortably interact with AI in English or Spanish
- Developers who cannot deploy Foundation Models-based apps in these markets
- Privacy-conscious users who are, ironically, forced to use cloud AI instead of on-device AI

What We Need
One of these solutions would solve the problem.
Option 1: per-app language override (preferred):

    // Proposed API
    let session = try await LanguageModelSession(preferredLanguage: "es-ES")

Option 2: faster rollout of additional languages (particularly EU languages).
Option 3: allow fallback to a user-selected supported language when the system language is unsupported.

Technical Details
Current behavior:

    // Device in Catalan
    let isAvailable = SystemLanguageModel.isSupported // Returns false
    // No way to override or specify an alternative language

Why This Matters
Apple Intelligence and Foundation Models are amazing for privacy and performance. But this language restriction makes the most privacy-focused AI solution less accessible than cloud alternatives. This seems contrary to Apple's values of accessibility and user choice.

Questions for the Community
- Has anyone else encountered this limitation?
- Are there any workarounds I'm missing?
- Has anyone successfully filed feedback about this? (Please share the FB number so we can reference it.)
- Are there any sessions or labs where this has been discussed?

Thanks for reading. I'd love to hear if others are facing this and how you're handling it.
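As a partial workaround today, an app could at least detect whether a user-chosen language is one the model supports before offering on-device AI in it. A sketch that assumes SystemLanguageModel exposes its supported languages; availability gating may still follow the system language:

    import Foundation
    import FoundationModels

    // Sketch: test a user-selected language against the model's supported set.
    let chosen = Locale.Language(identifier: "es")
    if SystemLanguageModel.default.supportedLanguages.contains(chosen) {
        print("The on-device model supports the chosen language")
    }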
Replies: 1 · Boosts: 1 · Views: 398 · Last activity: Nov ’25
Siri 2.0 (suggestions and future updates)
Hey dear developers! This post is meant to collect suggestions and wishes for future Siri updates and improvements, so that everyone can share their opinions and ideas. Please stay friendly, and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri. One of my many wishes:

Wish update: Siri's language recognition capabilities have been significantly enhanced. Instead of manually setting the language, Siri can now automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri will recognize it and respond accordingly. Whether you speak English, German, or Japanese, Siri will respond in the language you chose.
Replies: 1 · Boosts: 1 · Views: 876 · Last activity: Oct ’25
FoundationModels guardrailViolation on Beta 3
Hello everybody! I'm encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn't occur on Beta 1 or Beta 2 using the same codebase.

Reproduction context
I'm developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes to the generation logic. To isolate the issue, I opened the official WWDC sample project from "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications.

Simplified working example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:

    import Foundation
    import Playgrounds
    import FoundationModels

    @Generable
    struct GeneableLandmark {
        @Guide(description: "Name of the landmark to visit")
        var name: String
    }

    final class LandmarkSuggestionGenerator {
        var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
        private var session: LanguageModelSession

        init() {
            self.session = LanguageModelSession(
                instructions: Instructions {
                    """
                    generate a list of landmarks to visit
                    """
                }
            )
        }

        func createLandmarkSuggestion(location: String) async throws {
            let stream = session.streamResponse(
                generating: GeneableLandmark.self,
                options: GenerationOptions(sampling: .greedy),
                includeSchemaInPrompt: false
            ) {
                """
                Generate a list of landmarks to visit in \(location)
                """
            }
            for try await partialResponse in stream {
                landmarkSuggestion = partialResponse
            }
        }
    }

    #Playground {
        let generator = LandmarkSuggestionGenerator()
        Task {
            do {
                try await generator.createLandmarkSuggestion(location: "New York")
                if let suggestion = generator.landmarkSuggestion {
                    print("Suggested landmark: \(suggestion)")
                } else {
                    print("No suggestion generated.")
                }
            } catch {
                print("Error generating landmark suggestion: \(error)")
            }
        }
    }

But as soon as I use the sample ItineraryPlanner:

    #Playground {
        // Example landmark for demonstration
        let exampleLandmark = Landmark(
            id: 1,
            name: "San Francisco",
            continent: "North America",
            description: "A vibrant city by the bay known for the Golden Gate Bridge.",
            shortDescription: "Iconic Californian city.",
            latitude: 37.7749,
            longitude: -122.4194,
            span: 0.2,
            placeID: nil
        )
        let planner = ItineraryPlanner(landmark: exampleLandmark)
        Task {
            do {
                try await planner.suggestItinerary(dayCount: 3)
                if let itinerary = planner.itinerary {
                    print("Suggested itinerary: \(itinerary)")
                } else {
                    print("No itinerary generated.")
                }
            } catch {
                print("Error generating itinerary: \(error)")
            }
        }
    }

the error pops up:

    Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))

Based on my tests:
- The error may not be tied to structure complexity (since more nested structures work)
- The issue may stem from the tools or prompt content used inside the ItineraryPlanner
- The guardrail sensitivity may have increased or changed in Beta 3, affecting models that worked in earlier betas

Thank you in advance for your help. Let me know if more details or reproducible code samples are needed; I'm happy to provide them.
Best, Sasha Morozov
Replies: 2 · Boosts: 1 · Views: 411 · Last activity: Jul ’25
What is the proper way to integrate a Core ML model into Xcode
Hi, I have been trying to integrate a Core ML model into Xcode. The model was made using TensorFlow layers. I have included both the model info and a link to the app repository. I am mainly just really confused about why it's not working. It seems to only print the result for case 1 (there are 4 cases, labeled case 0, case 1, case 2, and case 3). If someone could help work me through this error, that would be great! Here is the link to the repository: https://github.com/ShivenKhurana1/Detect-to-Protect-App The file with the model code is called SecondView.swift, and here is the model info:
Input: conv2d_input -> image (color 224x224)
Output: Identity -> MultiArray (Float32 1x4)
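Since the model's Identity output is a 1x4 Float32 MultiArray, one frequent cause of "always case 1" is reading a fixed index instead of taking the argmax over the four scores. A sketch; the function name is illustrative, not from the linked repository:

    import CoreML

    // Sketch: pick the class with the highest score from a 1x4 MLMultiArray,
    // rather than reading a fixed element.
    func predictedCase(from output: MLMultiArray) -> Int {
        var best = 0
        for i in 1..<output.count where output[i].doubleValue > output[best].doubleValue {
            best = i
        }
        return best
    }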
Replies: 1 · Boosts: 1 · Views: 214 · Last activity: Apr ’25
Rate limit exceeded when using Foundation Models framework
When I use the Foundation Models framework to generate long text, it always hits an error: "Passing along Client rate limit exceeded, try again later in response to ExecuteRequest", and stops generating. E.g. for the prompt "Write a long story", it will almost certainly hit that error after about 17 seconds of generation.

    do {
        let session = LanguageModelSession()
        let prompt: String = "Write a long story"
        let response = try await session.respond(to: prompt)
    } catch {}

If possible, I want to know how to prevent that error, or at least how to handle it.
Replies: 2 · Boosts: 1 · Views: 730 · Last activity: Jul ’25
Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 vs iOS 16, but I'm posting it here just in case.

TL;DR: the same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description
The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).
iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, resulting in fast prediction, load, and compilation times.
iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce
The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

    # Convert to Core ML using the Unified Conversion API
    coreml_model = ct.convert(
        model=traced_model,
        inputs=[image_input],
        outputs=[ct.TensorType(name="output")],
        classifier_config=ct.ClassifierConfig(class_names),
        convert_to="neuralnetwork",
        # compute_precision=ct.precision.FLOAT16,
        compute_units=ct.ComputeUnit.ALL
    )

System environment:
- Xcode version: 15.0
- coremltools version: 7.0.0
- OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
- Other relevant versions: PyTorch 2.0

Additional context
This happens across both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use it on iOS 16. If anyone has had a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
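When profiling a regression like this, it can also help to make the compute-unit request explicit at load time on the device. A sketch; compiledModelURL is a placeholder for the app's .mlmodelc location:

    import CoreML

    // Sketch: explicitly request all compute units (CPU, GPU, and ANE) when
    // loading, then compare Instruments traces across OS versions.
    let config = MLModelConfiguration()
    config.computeUnits = .all
    let model = try MLModel(contentsOf: compiledModelURL, configuration: config)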
Replies: 1 · Boosts: 1 · Views: 1.9k · Last activity: Mar ’25
ImageCreator fails with GenerationError Code=11 on Apple Intelligence-enabled device
When I ran the following code on a physical iPhone that supports Apple Intelligence, I encountered the error log below. What does this internal error code mean?

    Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 “(null)”, returning a generic error instead

    let imageCreator = try await ImageCreator()
    let style = imageCreator.availableStyles.first ?? .animation
    let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
    for try await result in stream {
        // error: ImagePlayground.ImageCreator.Error.creationFailed
        _ = result.cgImage
    }
Replies: 0 · Boosts: 1 · Views: 284 · Last activity: Jul ’25
iOS 26 beta breaking my model
I recently updated to iOS 26 beta (23A5336a) to test an app I am developing that runs an MLModel loaded from a .mlmodelc file. On the current iOS version, 18.6.2, the model runs as expected with no issues. However, on iOS 26 I now get an error when performing inference, where I pass a camera frame into the model. Below is the error I see when I attempt to run an inference; at the bottom it says:

    Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model

Does this indicate I need to convert my model or something? I don't understand, since it runs normally on iOS 18. Any help getting this to run again would be greatly appreciated. Thank you. Full log:

    processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3
    [Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}
    [Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1
    Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).
    Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
Replies: 1 · Boosts: 0 · Views: 1.2k · Last activity: Sep ’25
Xcode Beta 1 and FoundationModels access
I downloaded Xcode Beta 1 on my Mac (I did not upgrade the OS). The target OS level is iOS 26, and the iOS 26 device simulator is downloaded and selected as the target. When I try a simple playground (#Playground) in Xcode, I get a session error.

    #Playground {
        let avail = SystemLanguageModel.default.availability
        if avail != .available {
            print("SystemLanguageModel not available")
            return
        }
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: "Create a recipe for apple pie")
        } catch {
            print(error)
        }
    }

The error I get is:

    Asset com.apple.gm.safety_deny_input.foundation_models.framework.api not found in Model Catalog

Is there a way to test-drive the FoundationModels code without upgrading to macOS 26?
Replies: 1 · Boosts: 1 · Views: 364 · Last activity: Jun ’25