Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

Foundation Models / Playgrounds Hello World - Help!
I am using Foundation Models for the first time and no response is being provided to me.

Code:

import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}

[Screenshot: Canvas output]

What I did: new file, the code above. The canvas refreshes but nothing happens. Am I missing a step or some setup here? Please help; something this basic is not working and I do not know what to do.

Running on a MacBook Pro (40-core GPU, 16-core CPU), iOS 26 / Xcode 26 beta 2 / Tahoe, with 8 CPU cores and 48 GB of memory allocated to the Parallels VM.

[Screenshot: Settings for Playgrounds in Xcode]

Thank you for your help in advance.
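Editor's note: a quick first check in this situation is whether the on-device model is even available in the VM before prompting it. The sketch below is a minimal variation of the code above, using the SystemLanguageModel.default.availability check that appears in another post on this page; it only surfaces the availability state and does not address whether Playgrounds work inside a Parallels VM at all.

import Playgrounds
import FoundationModels

#Playground {
    // If the model is unavailable, respond(to:) never produces output and the
    // canvas appears to do nothing, as described above.
    guard case .available = SystemLanguageModel.default.availability else {
        print("SystemLanguageModel is not available on this machine/VM")
        return
    }

    let session = LanguageModelSession()
    do {
        let result = try await session.respond(to: "List all the states in the USA")
        print(result.content)
    } catch {
        print("Request failed: \(error)")
    }
}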
Replies: 5 · Boosts: 1 · Views: 260 · Last activity: Jul ’25
Rate limit exceeded when using the Foundation Models framework
When I use the FoundationModels framework to generate long text, it always hits an error:

"Passing along Client rate limit exceeded, try again later in response to ExecuteRequest"

and stops generating. For example, for the prompt "Write a long story", it will almost certainly hit that error after about 17 seconds of generation.

do {
    let session = LanguageModelSession()
    let prompt: String = "Write a long story"
    let response = try await session.respond(to: prompt)
} catch {}

If possible, I want to know how to prevent that error, or at least how to handle it.
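Editor's note: the limit itself is not documented, so a common defensive pattern is simply to catch the failure and retry with exponential backoff. A minimal sketch, assuming only the LanguageModelSession API already shown in the post; respondWithRetry and its retry parameters are illustrative, not part of the framework, and retrying does not guarantee the rate limit will not trigger again.

import FoundationModels

/// Retries a prompt a few times with exponential backoff when the session throws
/// (e.g. the undocumented "Client rate limit exceeded" failure described above).
func respondWithRetry(prompt: String, maxAttempts: Int = 3) async throws -> String {
    var delay: UInt64 = 2_000_000_000 // start with a 2-second pause, in nanoseconds

    for attempt in 1...maxAttempts {
        do {
            let session = LanguageModelSession()
            let response = try await session.respond(to: prompt)
            return response.content
        } catch {
            // Rethrow on the final attempt; otherwise back off and try again.
            if attempt == maxAttempts { throw error }
            try await Task.sleep(nanoseconds: delay)
            delay *= 2
        }
    }
    fatalError("unreachable: the loop always returns or throws")
}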
Replies: 2 · Boosts: 1 · Views: 665 · Last activity: Jul ’25
How to Fine-Tune the SNSoundClassifier for Custom Sound Classification in iOS?
Hi Apple Developer Community,

I'm exploring ways to fine-tune the SNSoundClassifier so that users of my iOS app can personalize the model by adding custom sounds or adjusting predictions. Apple's WWDC session on sound classification explains how to train from scratch, but I'm specifically interested in using SNSoundClassifier as the base model and building/fine-tuning on top of it. Here are a few questions I have:

1. Fine-tuning SNSoundClassifier: Is there a way to fine-tune this model programmatically through APIs? The manual approach using macOS, as shown in the documentation, is clear, but how can it be done dynamically, within the app for users or in a cloud backend (AWS/iCloud)? Are there APIs or classes that support such on-device or cloud-based fine-tuning or incremental learning? If not directly, can the classifier's embeddings be used to train a lightweight custom layer? Training is computationally intensive and drains too much battery on device, so doing it in the cloud may be the right way, but I need the right APIs to get this done. A sample code snippet would help.

2. Recommended approach for in-app model customization: If SNSoundClassifier doesn't support fine-tuning, would transfer learning on models like MobileNetV2, YAMNet, OpenL3, or FastViT be more suitable? Among these models (SNSoundClassifier, MobileNetV2, YAMNet, OpenL3, FastViT), which would be best for accuracy and performance/efficiency on iOS? I aim to maintain real-time performance without sacrificing battery life. It is also important to see architecture retention and accuracy after conversion to a Core ML model.

3. Cost-effective backend setup for training: Mac EC2 instances on AWS have a 24-hour minimum billing period, which can become expensive for limited user requests. Are there better alternatives for deploying and training models on demand, when a user uploads files (training data)?

4. TensorFlow vs PyTorch: Between TensorFlow and PyTorch, which framework would you recommend for iOS Core ML integration? TensorFlow Lite offers mobile-optimized models, but I'm also curious about PyTorch's performance when converted to Core ML.

5. Metrics: The metrics I have in mind while picking a model are: publisher, accuracy, fine-tuning capability, real-time/live use, suitability for iPhone 16, architectural retention after Core ML conversion, reasons for unsuitability, and recommended use case.

Any insights or recommended approaches would be greatly appreciated. Thanks in advance!
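Editor's note (not an official answer): the built-in classifier does not appear to expose a fine-tuning API, but SoundAnalysis accepts a custom Core ML sound classifier through SNClassifySoundRequest(mlModel:), so a model trained or updated off-device (for example with Create ML, or a converted YAMNet-style model) can be swapped in at runtime. A minimal sketch, with illustrative type and parameter names:

import SoundAnalysis
import CoreML
import AVFoundation

/// Runs a custom Core ML sound classifier (e.g. trained or updated off-device)
/// against an audio stream via SoundAnalysis.
final class CustomSoundClassifier: NSObject, SNResultsObserving {
    private let analyzer: SNAudioStreamAnalyzer

    init(model: MLModel, sampleRate: Double) throws {
        let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
        analyzer = SNAudioStreamAnalyzer(format: format)
        super.init()
        // Wrap the custom model instead of the built-in
        // SNClassifySoundRequest(classifierIdentifier: .version1).
        let request = try SNClassifySoundRequest(mlModel: model)
        try analyzer.add(request, withObserver: self)
    }

    /// Feed microphone or file buffers here (e.g. from an AVAudioEngine tap).
    func analyze(buffer: AVAudioPCMBuffer, at time: AVAudioTime) {
        analyzer.analyze(buffer, atAudioFramePosition: time.sampleTime)
    }

    func request(_ request: SNRequest, didProduce result: SNResult) {
        guard let result = result as? SNClassificationResult,
              let top = result.classifications.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
}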
Replies: 6 · Boosts: 1 · Views: 1.3k · Last activity: Dec ’24
Can't install tensorflow-metal
I was installing tensorflow-metal in an Anaconda environment called "arm64_tf" using the command

python -m pip install tensorflow-metal

in Terminal, and it shows:

ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal

I have already tried "conda install -c anaconda libffi", but it still doesn't work. Is there a solution? Thanks, and apologies for my bad English.
Replies: 3 · Boosts: 1 · Views: 750 · Last activity: Dec ’24
Apple Intelligence / Mac Mail: Summaries Unavailable
At one point, Apple added a Summarize feature to Mac Mail that worked. Now when I click Summarize, I get: "Summaries Unavailable. Mail summarization is unavailable at this time. Try again later." I've rebooted, stopped and restarted Apple Intelligence, waited a day to see if it was syncing things up, etc. I'm running the latest version of macOS (Version 15.1 (24B82)). Any ideas?
Replies: 2 · Boosts: 1 · Views: 1.2k · Last activity: Oct ’24
Insufficient memory for Foundation Model adapter training
I have a MacBook Pro M3 Pro with 18GB of RAM and was following the instructions to fine-tune the foundation model given here: https://developer.apple.com/apple-intelligence/foundation-models-adapter/

However, while following the code sample in the example Jupyter notebook, my Mac hangs on the second code cell. Specifically:

from examples.generate import generate_content, GenerationConfiguration
from examples.data import Message

output = generate_content(
    [[
        Message.from_system("A conversation between a user and a helpful assistant. Taking the role as a play writer assistant for a kids' play."),
        Message.from_user("Write a script about penguins.")
    ]],
    GenerationConfiguration(temperature=0.0, max_new_tokens=128)
)
output[0].response

After some debugging, I was getting the following error:

RuntimeError: MPS backend out of memory (MPS allocated: 22.64 GB, other allocations: 5.78 MB, max allowed: 22.64 GB). Tried to allocate 52.00 MB on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).

So is my machine not capable enough to adapter-train Apple's foundation model? And if so, what is the recommended spec, and could this be specified somewhere? Thanks!
Replies: 8 · Boosts: 1 · Views: 206 · Last activity: Jul ’25
Image Playground Supported Devices
I'm trying to determine the best practice for handling the case where Image Playground is available but not installed, versus simply not supported.

If ImagePlaygroundViewController.isAvailable is true, I will just display a button to start an Image Playground session. If it is false, does that mean Image Playground is supported but not installed?

If it's supported but not installed, instead of a button to launch it, I want to display something like "Enable Apple Intelligence in Settings" or, better yet, a button that opens the Intelligence settings. Is that possible? But if the device doesn't support it at all, of course, I don't want to instruct the user to enable it. How can I determine whether a device cannot install Image Playground?

I read that Apple Intelligence requires iPhone 15 Pro, iPhone 15 Pro Max, and all iPhone 16 models, with no mention of the M1 iPad Pro, yet Image Playground runs on my M1 iPad Pro. What are the hardware requirements for Image Playground?
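Editor's note: isAvailable alone doesn't say why the feature is unavailable, and Apple doesn't appear to expose a separate "supported but not installed" check on this API. A hedged sketch of the branch described above, where the fallback button simply deep-links to the app's page in Settings via UIApplication.openSettingsURLString (there is no public URL guaranteed to open the Apple Intelligence pane); names are illustrative.

import UIKit
import ImagePlayground

func makeImagePlaygroundButton(presenter: UIViewController) -> UIButton {
    let button = UIButton(type: .system)

    if ImagePlaygroundViewController.isAvailable {
        button.setTitle("Create with Image Playground", for: .normal)
        button.addAction(UIAction { [weak presenter] _ in
            let playground = ImagePlaygroundViewController()
            // Set playground.delegate in a real app to receive the generated image.
            presenter?.present(playground, animated: true)
        }, for: .touchUpInside)
    } else {
        // false does not distinguish "unsupported hardware" from "not yet enabled/installed",
        // so this fallback is only appropriate on devices known to support Apple Intelligence.
        button.setTitle("Enable Apple Intelligence in Settings", for: .normal)
        button.addAction(UIAction { _ in
            if let url = URL(string: UIApplication.openSettingsURLString) {
                UIApplication.shared.open(url)
            }
        }, for: .touchUpInside)
    }
    return button
}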
Replies: 2 · Boosts: 1 · Views: 1.5k · Last activity: Dec ’24
How to export a PyTorch model to a Core ML model for use in an iOS app
Hi, as shown in the course, I created the PyTorch model sample and want to export/convert this model to a Core ML model for iOS using coremltools. The input is a 224x224 image and the output is an image classification (3 different classes). I am using coremltools with this code:

import coremltools as ct

modelml = ct.convert(
    scripted_model,
    inputs=[ct.ImageType(shape=(1,3,224,244))]
)

I have working iOS app code that performs correctly with another model, which was created using Microsoft Azure Vision. The exported PyTorch model is loaded and a prediction is performed, but I am getting this error:

Foundation.MonoTouchException: Objective-C exception thrown. Name: NSInvalidArgumentException Reason: -[VNCoreMLFeatureValueObservation identifier]: unrecognized selector sent to instance 0x2805dd3b0

When I check the exported model in Xcode and compare it with the model that works with the sample iOS app code (created and exported from Microsoft Azure), the input (for image classification using the device camera) looks equivalent, but the output is totally different (see screenshots).

The working model has two outputs:
loss => Dictionary (String => Double)
classLabel => String

My model exported with coremltools has just one output: MultiArray (Float32) (named var_1620; I think this is the last feature-layer output of the EfficientNetB2).

How do I change my model or my coremltools export to get the correct output for the prediction? I read the coremltools documentation (https://coremltools.readme.io/docs/pytorch-conversion) and tried some GitHub samples, but I never get the correct output. How do I export the PyTorch model so that the output is correct and the prediction will work?

Best,
Marco
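Editor's note: the exception suggests Vision is returning VNCoreMLFeatureValueObservation objects (the raw MultiArray output) rather than VNClassificationObservation, which Vision produces when the Core ML model is exported as a classifier (for example by passing a classifier_config with class labels to ct.convert). On the app side, a hedged Swift sketch of handling both observation types; the function name is illustrative and the original poster's app is not necessarily Swift.

import Vision
import CoreML

/// Handles results from a VNCoreMLRequest whether the model was exported as a
/// classifier (VNClassificationObservation) or as a plain network whose last
/// layer comes back as a raw MultiArray (VNCoreMLFeatureValueObservation).
func handleResults(of request: VNRequest) {
    if let classifications = request.results as? [VNClassificationObservation] {
        // Model exported with class labels (e.g. via coremltools ClassifierConfig).
        for observation in classifications.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    } else if let features = request.results as? [VNCoreMLFeatureValueObservation],
              let scores = features.first?.featureValue.multiArrayValue {
        // Raw logits/probabilities; the class-label mapping must be applied manually.
        for index in 0..<scores.count {
            print("class \(index): \(scores[index].floatValue)")
        }
    }
}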
Replies: 2 · Boosts: 1 · Views: 1.5k · Last activity: Dec ’24
Overly strict Foundation Models rate limit when used in an app extension
I am calling into an app extension from a Safari Web Extension (sendNativeMessage, which in turn results in a call to NSExtensionRequestHandling's beginRequest). My Safari extension aims to make use of the new Foundation Models for some of the features it provides.

In my testing, I hit the rate limit by sending 4 requests, waiting 30 seconds between each. This makes the FoundationModels framework (which would otherwise serve my use case perfectly well) unusable in this context, because the model is called in response to user input, and this rate of user input is perfectly plausible in a real-world scenario. The error thrown as a result of the rate limit is "Safety guardrail was triggered after consecutive failures during streaming.", but looking at the system logs in Console.app shows the rate limit as the real culprit.

My suggestions:
Please introduce sensible rate limits for app extensions, through an entitlement if need be. If it is rate limited to 1 request every couple of seconds, that would already fix the issue for me.
Please document the rate limit.
Please make the thrown error reflect that it is the result of a rate limit and not a generic guardrail violation.
IMPORTANT: please indicate in the thrown error when it is safe to try again.

Filed a feedback here: FB18332004
Replies: 3 · Boosts: 1 · Views: 176 · Last activity: Jun ’25
Will Apple Intelligence gather feedback from users out of beta?
I had assumed that Apple Intelligence features would not allow users to give a thumbs up or down once they are released later this year. But I recently stumbled upon new marketing material for the iPad mini (A17 Pro), and an embedded video on the marketing page shows the ability to give a thumbs up or down on an image generated with Image Wand. https://www.apple.com/ipad-mini/ Was my assumption wrong that non-beta users cannot submit feedback on the model's outputs, or was Apple perhaps using a screen recording of an unreleased beta and forgot to disable the feedback UI? I assume it can't be the latter.
Replies: 1 · Boosts: 1 · Views: 430 · Last activity: Oct ’24
FoundationModels not supported on Mac Catalyst?
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately, I get an error when importing FoundationModels: No such module 'FoundationModels'. The documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels I can create iOS builds using the FoundationModels framework without issues. Hope this will be fixed soon!

Config: Xcode 26.0 beta (17A5241e), macOS 26.0 beta (25A5279m), 15-inch M4 2025 MacBook Air.
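Editor's note: a common stopgap while the Catalyst SDK is missing the module is to gate the feature behind canImport, so the shared target keeps building everywhere. This doesn't add Catalyst support, it only degrades gracefully; a minimal sketch (summarize is an illustrative helper, not framework API):

#if canImport(FoundationModels)
import FoundationModels
#endif

func summarize(_ text: String) async -> String? {
    #if canImport(FoundationModels)
    do {
        let session = LanguageModelSession()
        let response = try await session.respond(to: "Summarize: \(text)")
        return response.content
    } catch {
        return nil
    }
    #else
    // Mac Catalyst build where the module is currently unavailable: fall back gracefully.
    return nil
    #endif
}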
Replies: 2 · Boosts: 1 · Views: 213 · Last activity: Jun ’25
Xcode beta 1 and FoundationModels access
I downloaded Xcode beta 1 on my Mac (I did not upgrade the OS). The iOS 26 target OS level and the iOS 26 device simulator are downloaded, and the simulator is selected as the target. When I try a simple Playground in Xcode (#Playground), I get a session error.

#Playground {
    let avail = SystemLanguageModel.default.availability
    if avail != .available {
        print("SystemLanguageModel not available")
        return
    }
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "Create a recipe for apple pie")
    } catch {
        print(error)
    }
}

The error I get is:

Asset com.apple.gm.safety_deny_input.foundation_models.framework.api not found in Model Catalog

Is there a way to test-drive the FoundationModels code without upgrading to macOS 26?
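Editor's note: before chasing the Model Catalog asset error, it can help to log why the model reports itself unavailable in this simulator/host combination. A minimal sketch, assuming only the two-case availability value implied by the code above; the specific reason names mentioned in the comment are taken from the beta documentation and may change.

import FoundationModels

/// Logs why the on-device model is unavailable, so "device/OS not eligible"
/// can be told apart from "Apple Intelligence disabled" or "model still downloading".
func logModelAvailability() {
    switch SystemLanguageModel.default.availability {
    case .available:
        print("Foundation model is available")
    case .unavailable(let reason):
        // On the current beta SDK the reasons are expected to include cases such as
        // deviceNotEligible, appleIntelligenceNotEnabled and modelNotReady.
        print("Foundation model unavailable: \(reason)")
    }
}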
Replies: 1 · Boosts: 1 · Views: 301 · Last activity: Jun ’25
FoundationModels guardrailViolation on Beta 3
Hello everybody!

I'm encountering an unexpected guardrailViolation error when using Foundation Models on macOS Beta 3 (Tahoe) with an Apple M2 Pro chip. This issue didn't occur on Beta 1 or Beta 2 using the same codebase.

Reproduction context
I'm developing an app that leverages Foundation Models for structured generation, paired with a local database tool. After upgrading to macOS Beta 3, I started receiving this error consistently, despite no changes in the generation logic. To isolate the issue, I opened the official WWDC sample project from "Adding intelligent app features with generative models", and the same guardrailViolation error appeared without any modifications.

Simplified working example
I attempted to narrow down the issue by starting with a minimal prompt structure. This basic case works fine:

import Foundation
import Playgrounds
import FoundationModels

@Generable
struct GeneableLandmark {
    @Guide(description: "Name of the landmark to visit")
    var name: String
}

final class LandmarkSuggestionGenerator {
    var landmarkSuggestion: GeneableLandmark.PartiallyGenerated?
    private var session: LanguageModelSession

    init() {
        self.session = LanguageModelSession(
            instructions: Instructions {
                """
                generate a list of landmarks to visit
                """
            }
        )
    }

    func createLandmarkSuggestion(location: String) async throws {
        let stream = session.streamResponse(
            generating: GeneableLandmark.self,
            options: GenerationOptions(sampling: .greedy),
            includeSchemaInPrompt: false
        ) {
            """
            Generate a list of landmarks to viist in \(location)
            """
        }
        for try await partialResponse in stream {
            landmarkSuggestion = partialResponse
        }
    }
}

#Playground {
    let generator = LandmarkSuggestionGenerator()
    Task {
        do {
            try await generator.createLandmarkSuggestion(location: "New york")
            if let suggestion = generator.landmarkSuggestion {
                print("Suggested landmark: \(suggestion)")
            } else {
                print("No suggestion generated.")
            }
        } catch {
            print("Error generating landmark suggestion: \(error)")
        }
    }
}

But as soon as I use the sample ItineraryPlanner:

#Playground {
    // Example landmark for demonstration
    let exampleLandmark = Landmark(
        id: 1,
        name: "San Francisco",
        continent: "North America",
        description: "A vibrant city by the bay known for the Golden Gate Bridge.",
        shortDescription: "Iconic Californian city.",
        latitude: 37.7749,
        longitude: -122.4194,
        span: 0.2,
        placeID: nil
    )
    let planner = ItineraryPlanner(landmark: exampleLandmark)
    Task {
        do {
            try await planner.suggestItinerary(dayCount: 3)
            if let itinerary = planner.itinerary {
                print("Suggested itinerary: \(itinerary)")
            } else {
                print("No itinerary generated.")
            }
        } catch {
            print("Error generating itinerary: \(error)")
        }
    }
}

the error pops up:

Error generating itinerary: guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain sensitive or unsafe content", underlyingErrors: [FoundationModels.LanguageModelSession.GenerationError.guardrailViolation(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "May contain unsafe content", underlyingErrors: []))]))

Based on my tests:
The error may not be tied to structure complexity (since more nested structures work).
The issue may stem from the tools or prompt content used inside the ItineraryPlanner.
The guardrail sensitivity may have increased or changed in Beta 3, affecting prompts that worked in earlier betas.

Thank you in advance for your help. Let me know if more details or reproducible code samples are needed; I'm happy to provide them.
Best, Sasha Morozov
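Editor's note: the failure shown above is the guardrailViolation case of LanguageModelSession.GenerationError, so it can at least be caught separately from other generation errors while the Beta 3 behaviour is investigated. A minimal sketch built around the sample's ItineraryPlanner; the mitigation comments are suggestions, not documented behaviour.

import FoundationModels

func suggestItinerarySafely(planner: ItineraryPlanner, dayCount: Int) async {
    do {
        try await planner.suggestItinerary(dayCount: dayCount)
    } catch LanguageModelSession.GenerationError.guardrailViolation(let context) {
        // Beta 3 appears to trip this guardrail on prompts that passed in Beta 1/2.
        print("Guardrail violation: \(context.debugDescription)")
        // Possible mitigations: reword or shorten the prompt, drop tool output from the
        // transcript, or surface a user-facing "try different wording" message.
    } catch {
        print("Other generation error: \(error)")
    }
}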
Replies: 2 · Boosts: 1 · Views: 364 · Last activity: Jul ’25
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
Replies: 2 · Boosts: 1 · Views: 749 · Last activity: Dec ’24