Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

coreml Fetching decryption key from server failed
My iOS app supports iOS 18, and I’m using an encrypted Core ML model secured with a key generated from Xcode. Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:

coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID

To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while. This is a terrible experience for users and obviously not a sustainable solution. I want to understand:

- Why is this happening?
- Is there a known expiration or invalidation policy for Core ML encryption keys?
- How can I prevent this issue permanently?

Any insights or official guidance would be really appreciated.
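For reference, a minimal loading sketch, assuming the Xcode-generated class is called MyEncryptedModel (hypothetical name) and that this failure surfaces as MLModelError.modelDecryptionKeyFetch; it does not fix the key invalidation, but it lets the app fail gracefully instead of silently breaking:

import CoreML

// "MyEncryptedModel" stands in for the Xcode-generated class of the encrypted model (hypothetical name).
func loadEncryptedModel() async {
    do {
        // Encrypted models should be loaded asynchronously so Core ML can fetch
        // the decryption key from Apple's servers when needed.
        let model = try await MyEncryptedModel.load(configuration: MLModelConfiguration())
        _ = model // use the model
    } catch let error as MLModelError where error.code == .modelDecryptionKeyFetch {
        // The failure described above: the server-side key lookup found no record.
        // A graceful fallback (e.g. a bundled unencrypted model) could go here.
        print("Decryption key fetch failed: \(error.localizedDescription)")
    } catch {
        print("Model failed to load: \(error)")
    }
}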
Replies: 5 · Boosts: 2 · Views: 508 · Last activity: Jul ’25
Group AppIntents’ Searchable DynamicOptionsProvider in Sections
I’m trying to group my EntityPropertyQuery selection into sections as well as making it searchable. I know that the EntityStringQuery is used to perform the text search via entities(matching string: String). That works well enough and results in this modal: Though, when I’m using a DynamicOptionsProvider to section my EntityPropertyQuery, it doesn’t allow searching anymore and simply opens the sectioned list in a menu like so: How can I combine both? I’ve seen it in other apps, but can’t figure out why my code doesn’t allow sectioning the results while keeping them searchable. Any ideas?

My code (simplified):

struct MyIntent: AppIntent {
    @Parameter(title: "Meter", optionsProvider: MyOptionsProvider())
    var meter: MyIntentEntity?
    // …
}

struct MyOptionsProvider: DynamicOptionsProvider {
    func results() async throws -> ItemCollection<MyIntentEntity> {
        // Get all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Create arrays for sections
        let fooEntities = allData.filter { $0.type == .foo }
        let barEntities = allData.filter { $0.type == .bar }

        return ItemCollection(sections: [
            ItemSection("Foo", items: fooEntities),
            ItemSection("Bar", items: barEntities)
        ])
    }
}

struct MeterIntentQuery: EntityStringQuery {
    // entities(for identifiers: [UUID]) and suggestedEntities() functions omitted

    func entities(matching string: String) async throws -> [MyIntentEntity] {
        // Fetch all data
        let allData = try IntentsDataHandler.shared.getEntities()

        // Filter data by string
        let matchingData = allData.filter { data in
            return data.title.localizedCaseInsensitiveContains(string)
        }
        return matchingData
    }
}
Replies: 0 · Boosts: 2 · Views: 534 · Last activity: Mar ’25
Create ML model shows wrong output or predictions in Xcode
I am working on a Core ML image classification model in Xcode that takes a 299x299 image and attempts to classify hand-drawn sketches. The model was trained using Create ML and works perfectly when tested in the Create ML preview. However, when used in the Xcode application, the classification results are incorrect. I have already verified that the image is correctly resized to 299x299 pixels, matching the input size of the model. The classification always returns incorrect results, even when using images that were correctly classified during training. I originally used kCVPixelFormatType_32ARGB, but I read that Core ML typically expects BGRA format. I updated my conversion function to use kCVPixelFormatType_32BGRA and CGImageAlphaInfo.premultipliedLast, but the issue persists. This makes me suspect that either the pixel format is still incorrect or that something went wrong during the .mlmodelc compilation.
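For comparison, a sketch of the Vision route, which performs the resizing and pixel-format conversion internally (SketchClassifier is a hypothetical stand-in for the Create ML-generated class); if this path classifies correctly while the manual CVPixelBuffer path does not, the conversion code is the likely culprit:

import CoreML
import UIKit
import Vision

// "SketchClassifier" is a hypothetical name for the Create ML-generated model class.
func classifySketch(_ image: UIImage) throws {
    guard let cgImage = image.cgImage else { return }

    let coreMLModel = try SketchClassifier(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Vision scales, crops, and converts the pixel format before inference.
        if let results = request.results as? [VNClassificationObservation],
           let top = results.first {
            print("Top label: \(top.identifier) (confidence \(top.confidence))")
        }
    }
    request.imageCropAndScaleOption = .scaleFill // pick whichever matches how training images were framed

    try VNImageRequestHandler(cgImage: cgImage).perform([request])
}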
Replies: 1 · Boosts: 2 · Views: 421 · Last activity: Jan ’25
App Intents not working with placeholders and without app name
I tried this:

struct CarShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LockCarIntent(),
            phrases: ["Lock my car with \(.applicationName)", "Lock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lock Car"),
            systemImageName: "lock.fill"
        )
        AppShortcut(
            intent: UnlockCarIntent(),
            phrases: ["Unlock my car with \(.applicationName)", "Unlock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Unlock Car"),
            systemImageName: "lock.open.fill"
        )
    }
}

but Siri only understands "unlock my car", not the phrase with the placeholder. Siri then asks me for the car, and it understands it, but not in one sentence. Is there something wrong with my code? I also tried it without applicationName first, and then it didn't work with Siri at all. Is this a general limitation of App Intents? I thought the goal was to reduce friction. If the user has to mention the app name all the time, it adds friction.
Replies: 0 · Boosts: 2 · Views: 433 · Last activity: Nov ’24
Image Playground not available for "Designed for iPad" apps?
I'm currently trying to add support for Image Playground to our apps. It seems that it's not working in an app that is "Designed for iPad" and runs on a Mac. The modal just shows a spinner and the following is logged to the console:

Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
Private sandbox for com.apple.GenerativePlaygroundApp.remoteUIExtension : <none>
GP extension could not be loaded: Extension (platform: 2) could not be found (in update)
dealloc Query controller [C32BA176-6A3E-465D-B3C5-0F8D91068B89]

ImagePlaygroundViewController.isAvailable returns true, however. In a "real" Mac Catalyst app it works; just not when the app is actually an iPad app. Is this a bug?
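For context, the presentation path being described looks roughly like this; the delegate method name and the concepts API are assumptions on my part from the ImagePlayground framework, and the availability check is the same isAvailable call mentioned in the post:

import ImagePlayground
import UIKit

final class PlaygroundPresenter: NSObject, ImagePlaygroundViewController.Delegate {
    func present(from host: UIViewController) {
        // Reportedly returns true even in the "Designed for iPad" case where the sheet then fails to load.
        guard ImagePlaygroundViewController.isAvailable else { return }

        let playground = ImagePlaygroundViewController()
        playground.delegate = self
        playground.concepts = [.text("a sketch of a lighthouse")] // assumed API for seeding a concept
        host.present(playground, animated: true)
    }

    func imagePlaygroundViewController(_ playgroundViewController: ImagePlaygroundViewController, didCreateImageAt imageURL: URL) {
        // Assumed delegate callback: handle the generated image file at imageURL.
        print("Created image at \(imageURL)")
    }
}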
Replies: 4 · Boosts: 2 · Views: 1.4k · Last activity: Jan ’25
Conforming an existing AppIntent to the photos domain schema
I have an image-based app with albums, except in my app, albums are known as galleries. When I tried to conform my existing OpenGalleryIntent with @AssistantIntent(schema: .photos.openAlbum), I had to change my existing gallery parameter to be called target in order to fit the predefined shape of this domain. Previously, my intent was configured to display as “Open Gallery” with the description “Opens the selected Gallery” in the Shortcuts app. After conforming to the photos domain, it displays as “Open Album” with the description “Opens the Provided Album”. Shortcuts is ignoring my configured title and description now. My code builds, but with the following build warnings:

Parameter argument title of a required Assistant schema intent parameter target should not be overridden
Implementation of the property title of an AppIntent conforming to AssistantSchemaIntent should not be overridden
Implementation of the property description of an AppIntent conforming to AssistantSchemaIntent should not be overridden

Is my only option to change the concept of a Gallery inside of my app into an Album? I don't want to do this... Conceptually, my app aligns well with what this domain does, but I didn't consider that conforming to the shape of an AI schema intent would also dictate exactly how it's presented to the user. FB16283840
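For illustration, a rough sketch of the conformed shape being described, with the poster's naming kept where the schema allows it (GalleryEntity is a hypothetical stand-in for the app's own album-like entity); the schema is what pins the parameter name to target and supplies the "Open Album" title and description:

import AppIntents

@AssistantIntent(schema: .photos.openAlbum)
struct OpenGalleryIntent: OpenIntent {
    // The schema fixes this parameter name to `target`; a custom title on it triggers the warning above.
    // GalleryEntity would itself conform via @AssistantEntity(schema: .photos.album).
    var target: GalleryEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // App-specific navigation to the selected gallery would go here.
        .result()
    }
}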
Replies: 2 · Boosts: 2 · Views: 749 · Last activity: Jan ’25
Using Core ML in a .swiftpm file
Hi everyone, I've been struggling for a few weeks to integrate my Core ML image classifier model into my .swiftpm project, and I’m hoping someone can help.

Here’s what I’ve done so far:
- I converted my .mlmodel file to .mlmodelc manually via the terminal.
- In my Package.swift file, I tried both "copy" and "process" options for the resource.

The issues I’m facing:
- When using "process", Xcode gives me the error: "multiple resources named 'coremldata.bin' in target 'AppModule'."
- When using "copy", the app runs, but the model doesn’t work, and the terminal shows: "A valid manifest does not exist at path: .../Manifest.json."
- I even tried creating a Manifest.json manually to test, but this led to more errors, such as: "File format version must be in the form of major.minor.patch." and "Failed to look up root model."

To check if the problem was specific to my model, I tested other Core ML models in the same setup, but none of them worked either. I feel stuck and unsure of how to resolve these issues. Any guidance or suggestions would be greatly appreciated. Thanks in advance! :)
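For reference, a minimal Package.swift sketch of the two variants the post mentions (target and file names are hypothetical); .process hands the original .mlmodel to the build system to compile, while .copy ships an already compiled .mlmodelc folder that then has to be loaded manually with MLModel(contentsOf:):

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "AppModule",
    targets: [
        .executableTarget(
            name: "AppModule",
            resources: [
                // Variant 1: let the build system compile the original .mlmodel.
                .process("Resources/ImageClassifier.mlmodel")
                // Variant 2: copy a pre-compiled .mlmodelc directory verbatim and load it at runtime with
                //   MLModel(contentsOf: Bundle.module.url(forResource: "ImageClassifier", withExtension: "mlmodelc")!)
                // .copy("Resources/ImageClassifier.mlmodelc")
            ]
        )
    ]
)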
Replies: 2 · Boosts: 2 · Views: 1.1k · Last activity: Jan ’25
TimeSeriesClassifier
In the WWDC24 What’s New In Create ML at 6:03 the presenter introduced TimeSeriesClassifier as a new component of Create ML Components. Where are documentation and code examples for this feature? My app captures accelerometer time series data that I want to classify. Thank you so much!
Replies: 4 · Boosts: 2 · Views: 926 · Last activity: Oct ’24
WWDC24 - What's New in Create ML - Time Series Forecasting
The What’s New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned these new models, capabilities, and tools for iOS 18. So far, all I can find is API documentation. I don’t see any other WWDC24 session covering these new time-series forecasting Create ML features. Is there more substance/documentation on how to use these with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML. Is there any food truck / donut shop demo or sample code like in the video? It would be of great interest to get ahead of the curve on this in business applications that may take advantage of it with inventory/ordering data.
Replies: 3 · Boosts: 2 · Views: 1.4k · Last activity: Dec ’24
Almost 48 Hours, Still No Access to Playground.
Just wanted to reach out to see if this is the norm. I see several posts saying people are still waiting for the early access Playground app. What’s going on? It’s been almost 48 hours and I’ve received nothing. If this is the norm, then so be it... but even when I had to wait for Apple Intelligence early access, that was only a few hours. Hopefully this will be resolved quickly. I mean, what’s the point of being developer beta testers if we can’t test the beta?
Replies: 3 · Boosts: 2 · Views: 502 · Last activity: Oct ’24
Guardrail configuration options?
Is anything configurable for LanguageModelSession.Guardrails besides the default? I'm prototyping a camping app, and it's constantly slamming into guardrail errors when I use the new foundation model interface. Any subjects relating to fishing, survival, etc. won't generate. For example the prompt "How can I kill deer ticks using a clothing treatment?" returns a generation error. The results that I get are great when it works, but so far the local model sessions are extremely unreliable.
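For what it's worth, the only documented value I'm aware of is LanguageModelSession.Guardrails.default, which matches the question above. A small sketch of how the failure can be caught, under the assumption that these rejections surface as GenerationError.guardrailViolation:

import FoundationModels

@available(iOS 26.0, *)
func askCampingQuestion() async {
    let session = LanguageModelSession()
    do {
        let answer = try await session.respond(to: "How can I kill deer ticks using a clothing treatment?")
        print(answer.content)
    } catch let error as LanguageModelSession.GenerationError {
        if case .guardrailViolation = error {
            // Assumption: the rejections described above land here.
            print("Guardrail violation: \(error.localizedDescription)")
        } else {
            print("Other generation error: \(error)")
        }
    } catch {
        print("Unexpected error: \(error)")
    }
}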
Replies: 2 · Boosts: 2 · Views: 205 · Last activity: Jul ’25
Setting Required Capabilities for Foundation Models
Is there any way to ensure that iOS apps we develop using Foundation Models can only be purchased/downloaded from the App Store by folks with capable devices? I would've thought there would be a Required Device Capabilities value that the App Store would hook into, but I don't see one in the documentation here: https://developer.apple.com/documentation/bundleresources/information-property-list/uirequireddevicecapabilities The closest seems to be iphone-performance-gaming-tier, as that seems to target all M1 and above chips on iPhone & iPad. There is an ipad-minimum-performance-m1 that would more reasonably seem to ensure Foundation Models is likely available, but that doesn't help with iPhone. So far, it seems the only path is to set the minimum deployment target to iOS 26 and add iphone-performance-gaming-tier as a required capability, but I'm a bit worried that capability might diverge in the future from what is Foundation Models / Apple Intelligence capable. I understand that the majority of apps will want to selectively add Apple Intelligence features so they remain usable by folks whose devices don't support them, but the app experience I'm building doesn't make sense without Foundation Models, and I'd rather not have a large number of users download the app only to be told "Sorry, you're not Apple Intelligence capable".
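There doesn't appear to be a dedicated UIRequiredDeviceCapabilities key for this. As a runtime fallback for the "not capable" case mentioned above, a sketch using the FoundationModels availability API (the unavailability reasons named in the comment are my assumption):

import FoundationModels

@available(iOS 26.0, *)
func foundationModelsStatusMessage() -> String {
    switch SystemLanguageModel.default.availability {
    case .available:
        return "Apple Intelligence is ready."
    case .unavailable(let reason):
        // Assumed reasons include deviceNotEligible, appleIntelligenceNotEnabled, and modelNotReady.
        return "Foundation Models unavailable: \(reason)"
    @unknown default:
        return "Foundation Models availability unknown."
    }
}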
Replies: 2 · Boosts: 2 · Views: 198 · Last activity: Aug ’25
Safety Guardrail errors for tiny prompt (dropped into large app)
I was able to open a new project and play around with the Foundation Model, but when I dropped this class into a production app (with a lot of files) I'm running into safety guardrail errors for this very small prompt. Specifically it's "Safety guardrail was triggered after consecutive failures during streaming." Does it have something to do with the size of the app? I don't know what else to try to get it to work.

import FoundationModels
import Playgrounds

@available(iOS 26.0, *)
#Playground {
    Task {
        do {
            let session = LanguageModelSession()
            let prompt = "Write a short story about a talking cat."
            let response = try await session.respond(to: prompt)
            print(response)
        } catch {
            print("Error: \(error)")
        }
    }
}
Replies: 3 · Boosts: 2 · Views: 205 · Last activity: Jun ’25
existential any error in MLModel class
Problem

I have set SWIFT_UPCOMING_FEATURE_EXISTENTIAL_ANY at Build Settings > Swift Compiler - Upcoming Features to true to support the existential any proposal. The following errors then appear in the generated MLModel class, but since this is an auto-generated file, I don't know how to deal with it:

Use of protocol 'MLFeatureProvider' as a type must be written 'any MLFeatureProvider'
Use of protocol 'Error' as a type must be written 'any Error'

Environment

Xcode 16.0
Xcode 16.1 Beta 2

What I tried

- Deleted the DerivedData cache and regenerated the MLModel class files
- Also tried DepthAnythingV2SmallF16P6.mlpackage to verify whether the problem is specific to my mlmodel
- Tried the above after setting up Swift 6 in Xcode
- Also used coremlc to generate the MLModel class files with Swift 6 specified on the command line
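For illustration, the spelling the diagnostics ask for looks like this; a hand-written sketch of the kind of declaration the generated class contains, not the actual generated code:

import CoreML

// Before (what the generated file effectively declares, rejected under ExistentialAny):
//     func prediction(from input: MLFeatureProvider) throws -> MLFeatureProvider
// After (the spelling the diagnostic asks for):
func prediction(from input: any MLFeatureProvider, using model: MLModel) throws -> any MLFeatureProvider {
    // MLModel.prediction(from:) is the existing Core ML call; only the type spelling changes.
    try model.prediction(from: input)
}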
Replies: 2 · Boosts: 2 · Views: 655 · Last activity: Dec ’24
Create ML JSON format
I'm trying to generate a JSON file for my training data. I tried manually first and then tried using Roboflow, and I still get the same error:

_annotations.createml.json file contains field "Index 0" that is not of type String.

The JSON format provided by Roboflow was:

[
  {
    "image": "menu1_jpg.rf.44dfacc93487d5049ed82952b44c81f7.jpg",
    "annotations": [
      {
        "label": "100",
        "coordinates": { "x": 497, "y": 431.5, "width": 32, "height": 10 }
      }
    ]
  }
]

Any help would be greatly appreciated.
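As a sanity check, a small Codable sketch mirroring the object-detection annotation shape shown above (my reading of the Create ML format: one entry per image, boxes given as center x/y plus width/height in pixels); decoding the file with these types is a quick way to see whether it parses as expected before handing it to Create ML:

import Foundation

// One entry of the annotations file: an image file name plus its labeled boxes.
struct AnnotatedImage: Codable {
    let image: String
    let annotations: [Annotation]
}

struct Annotation: Codable {
    let label: String
    let coordinates: Coordinates
}

// Box center (x, y) plus width/height, in pixels.
struct Coordinates: Codable {
    let x: Double
    let y: Double
    let width: Double
    let height: Double
}

let url = URL(fileURLWithPath: "_annotations.createml.json")
let entries = try JSONDecoder().decode([AnnotatedImage].self, from: Data(contentsOf: url))
print("Parsed \(entries.count) annotated images")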
Replies: 4 · Boosts: 0 · Views: 1.2k · Last activity: Oct ’24
Developing apps with local LLM Foundation Models
I just watched the October 30 MacBook Pro announcement, where they talked about on-device local LLMs for the M4s. What developer training resources are available where we can learn how to use custom LLMs and build our Swift apps to use both Apple Intelligence and other LLMs on device? Is the guidance to follow the MLX GitHub repos, or were those experimental and there is now an approved workflow and tooling? https://www.youtube.com/watch?v=G0cmfY7qdmY
Replies: 0 · Boosts: 2 · Views: 1.2k · Last activity: Oct ’24
Apple Intelligence missing Image features released in 18.2 today
I'm using an iPhone 15 Pro Max running developer beta 18.2, released today. I've already been an Apple Intelligence user and have now been able to link it with my PAID ChatGPT account. However, I'm searching for these Image features everyone seems to be posting about and cannot find them anywhere. I'm apparently supposed to sign up for beta access to the Image features through some new Apple-released app that was supposedly included in this build update, which I cannot find. What gives??!!
Replies: 4 · Boosts: 2 · Views: 1.5k · Last activity: Oct ’24