I’m developing an activity classifier that I’d like to feed with CoreMotion data in JSON format.
I am getting the error:
Unable to parse /Users/DewG/Downloads/Testing/Step1/Testing.json. It does not appear to be in JSON record format. A SequenceType of dictionaries is expected
I've verified that the file is valid JSON via various JSON validators, so I suspect I'm just holding it wrong. Is there an example of a JSON file with CoreMotion data that I can model after?
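I haven't seen an official sample file either, but the error message itself is the main hint: the parser appears to want a top-level JSON array of dictionaries, one dictionary per motion sample. Here's a rough Swift sketch that writes data in that shape; the field names (accelerationX, rotationRateX, etc.) are placeholders to adapt to whatever feature columns you're actually training on.

import Foundation

// Hypothetical record shape: one dictionary per motion sample. The field
// names are assumptions; match them to the columns configured for your
// activity classifier data source.
struct MotionSample: Codable {
    let accelerationX: Double
    let accelerationY: Double
    let accelerationZ: Double
    let rotationRateX: Double
    let rotationRateY: Double
    let rotationRateZ: Double
}

let samples = [
    MotionSample(accelerationX: 0.01, accelerationY: -0.02, accelerationZ: 0.98,
                 rotationRateX: 0.10, rotationRateY: 0.00, rotationRateZ: -0.05),
    MotionSample(accelerationX: 0.03, accelerationY: -0.01, accelerationZ: 0.97,
                 rotationRateX: 0.12, rotationRateY: 0.01, rotationRateZ: -0.04)
]

do {
    // Encoding an array of Codable structs produces the top-level layout the
    // parser asks for: [ { "accelerationX": 0.01, ... }, { ... } ]
    let data = try JSONEncoder().encode(samples)
    try data.write(to: URL(fileURLWithPath: "Testing.json"))
} catch {
    print("Failed to write sample file: \(error)")
}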
While running an Apple foundation model in the iPhone simulator, I got this error:
IPC error: Underlying connection interrupted
What does this mean? Is it related to the foundation model?
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Do we know what a safe max token limit is? After some iterating, I have come to believe 4096 might be the limit on device.
Could you help me out by answering any of these questions:
Is 4096 the correct limit?
Do all devices have the same limit?
Will the limit change over time or by device?
The errors I get when going over the limit don't seem to say, "hey, you're over," so I'm figuring these issues out by trial and error.
Thanks for the fun new toys.
Regards,
Rob
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
So, I've declared an AppIntent that indicates my app can "Open files" that conform to UTType.Image.
I've got a @AssistantEntity(schema: .files.file) and a
@AssistantIntent(schema: .files.openFile) declared.
So I navigate to the Files app, Quick Look an image, and open Type to Siri.
I tell Siri "open this in " and all it does is act like I said "open ". No breakpoint is hit in my intent's perform method.
Am I doing something wrong? How can I test these cross-app behaviors?
Are they... not actually possible? Does an "OpenIntent" only work on my app's own URLs and not on file URLs from other apps?
When the context window size is exceeded, this error is not thrown (a different error shows up instead), so I can't catch it to start a new session:
LanguageModelSession.GenerationError.exceededContextWindowSize
Or am I doing things wrong?
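In case it's useful for comparison, here is the shape I'd expect the handling to take, assuming an async context and an existing session; if the overflow never matches the first catch and always lands in the generic one, that would confirm the behavior being described.

import FoundationModels

func ask(_ session: LanguageModelSession, _ prompt: String) async {
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch LanguageModelSession.GenerationError.exceededContextWindowSize {
        // Expected path: the caller tears this session down and starts a fresh
        // one, optionally seeding it with a condensed transcript.
        print("Context window exceeded; start a new session")
    } catch {
        // If the overflow surfaces here instead, the specific case above never
        // matched, which is the behavior being reported.
        print("Unexpected generation error: \(error)")
    }
}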
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
In this thread, I asked about adding parameters to App Shortcuts. The conclusion that I've drawn so far is that for App Shortcuts, there cannot be any parameters in the prompt, otherwise the system cannot find the AppShortcutsProvider. While this is fine for Shortcuts and non-voice interaction, I'd like to find a way to add parameters to the prompt. Here is the scenario:
My app controls a device that displays some content on "pages." The pages are defined in an AppEnum, which I use for Shortcuts integration via App Intents. The App Intent functions as expected and is able to change the page based on the user's selection within Shortcuts (or prompted if using the App Shortcut). What I'd like is for the user to be able to say "Siri, open with ."
So far, the closest I've come to understanding how this works is through the .intentsdefinition file you can create (and SiriKit in general). However, the part that really confused me there is a button in the file editor that says "Convert to App Intent." To me, this means I should be able to take the App Intent I've already authored and hook it into Siri, rather than writing an entirely new function/code block that does exactly the same thing. Ideally, that's what I want to do.
What's the right way to define this behavior?
p.s. If I had to pick an intent schema in the context of AssistantSchemas, I'd say it's closest to the "Open File" one, if that helps. I'd ultimately like to make the "pages" user-customizable so in the long run, that would be what I'd do.
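For what it's worth, the pattern I'd expect to work (and what the App Intents documentation describes for AppEnum parameters) is interpolating the parameter directly into the App Shortcut phrase. A rough sketch, where OpenPageIntent and its $page parameter are placeholders standing in for your existing intent and Page enum:

import AppIntents

struct MyAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: OpenPageIntent(),
            phrases: [
                // Parameter interpolation in phrases is limited to AppEnum /
                // static AppEntity parameters, which the Page enum satisfies.
                "Open \(\.$page) in \(.applicationName)",
                "Show the \(\.$page) page in \(.applicationName)"
            ],
            shortTitle: "Open Page",
            systemImageName: "doc.text"
        )
    }
}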
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Siri and Voice
SiriKit
App Intents
Apple Intelligence
Is there anywhere we can reference error codes? I'm getting this error: "The operation couldn’t be completed. (FoundationModels.LanguageModelSession.GenerationError error 4.)" and I have no idea what it means or how to fix it.
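I haven't found a published table of the numeric codes either. The closest I've gotten is catching the typed error and printing it, which at least surfaces the case name (exceededContextWindowSize is one case mentioned elsewhere in this forum; guardrail violations are another). A sketch, with the switch structure as my assumption:

import FoundationModels

func describeGenerationFailure(_ error: Error) {
    guard let generationError = error as? LanguageModelSession.GenerationError else {
        print("Not a generation error: \(error)")
        return
    }
    // Printing the typed error is more informative than the localized
    // "error 4" string; the case name tells you which failure you hit.
    switch generationError {
    case .exceededContextWindowSize:
        print("Prompt plus transcript no longer fit the context window")
    default:
        // Other cases (e.g. guardrail violations) land here; dumping the
        // value shows the case name and any attached context.
        print("Generation failed: \(generationError)")
    }
}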
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
Tags:
Machine Learning
Create ML
Apple Intelligence
Is anything configurable for LanguageModelSession.Guardrails besides the default? I'm prototyping a camping app, and it's constantly slamming into guardrail errors when I use the new foundation model interface. Any subjects relating to fishing, survival, etc. won't generate.
For example, the prompt "How can I kill deer ticks using a clothing treatment?" returns a generation error.
The results that I get are great when it works, but so far the local model sessions are extremely unreliable.
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
We are using VNRecognizeTextRequest to detect text in documents, and we have noticed that even in some very clear and well-formatted documents, there are still instances where text blocks are missed. Live Text also has the same issue.
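Not a full answer, but recall sometimes improves when the request is explicitly configured for the accurate path. A minimal sketch of the settings worth double-checking (the CGImage input and the language list are assumptions):

import Vision

func recognizeText(in image: CGImage) throws {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        for observation in observations {
            if let candidate = observation.topCandidates(1).first {
                print(candidate.string, candidate.confidence)
            }
        }
    }
    // .accurate trades speed for recall; .fast misses more blocks.
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = true
    // Constraining the languages can also help on mixed-language documents.
    request.recognitionLanguages = ["en-US"]

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
}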
Hey everyone, I am a beginner with developing and using Artificial Intelligence models.
How do I integrate my Create ML image classification model with Swift?
I already have an ML model and I want to integrate it into a SwiftUI app.
If anyone could help, that would be great.
Thank you, O3DP
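A rough sketch of the usual route, in case it helps: add the .mlmodel file to the app target so Xcode generates a Swift class for it, then run it through Vision. ImageClassifier is a placeholder for whatever class name Xcode generates from your model file.

import CoreML
import Vision

// Classify a CGImage with the generated model and report the top label.
func classify(_ cgImage: CGImage, completion: @escaping (String) -> Void) {
    do {
        // ImageClassifier is the class Xcode generates from ImageClassifier.mlmodel.
        let coreMLModel = try ImageClassifier(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            let top = (request.results as? [VNClassificationObservation])?.first
            completion(top.map { "\($0.identifier) (\($0.confidence))" } ?? "No result")
        }
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    } catch {
        completion("Classification failed: \(error)")
    }
}

From SwiftUI you'd call this from a Button action or a .task, store the result in an @State property, and display it in a Text view.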
For example:
I have a list of to-dos, each with a unique id (a GUID). I want to feed them to the LLM model and have the model rewrite the items so they start with an action verb.
I'd like to get them back and identify which rewritten item corresponds to which original item. I obviously can't compare the text, as it has changed.
I've tried passing the original GUIDs in with each to-do, but the extra GUID characters pollute the input and confuse the model.
I've tried numbering them in order and adding an originalSortOrder field to my generable type, but it doesn't work reliably.
Any suggestions?
I could do them one at a time, but I also have a use case where I ask for them to be organized into sections, and even though I've instructed the model not to rename anything, it still does. It's all very nondeterministic.
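For comparison, here is the index-only variant I'd try, keeping the GUIDs entirely on the client so the model only has to echo back a small integer. The type and property names are placeholders, and this assumes the @Generable/@Guide macros behave as in the Foundation Models examples.

import FoundationModels

@Generable
struct RewrittenToDo {
    @Guide(description: "The 1-based position of the original to-do in the numbered list")
    var originalIndex: Int

    @Guide(description: "The to-do rewritten to start with an action verb")
    var text: String
}

// Prompt with a plain numbered list ("1. Buy milk\n2. Call dentist\n...") and ask
// for [RewrittenToDo]; afterwards map each result back locally via
// todos[result.originalIndex - 1].id, so the GUIDs never enter the prompt.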
With respond() methods, the foundation model works well enough. With streamResponse() methods, the responses are very repetitive, verbose, and messy.
My app with the foundation model uses more than 500 MB of memory on an iPad Pro when running from Xcode. Devices supporting Apple Intelligence have at least 8 GB of memory. Should Apple use a bigger model (using 3 to 4 GB of memory) for better streamed responses?
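One thing worth ruling out before blaming the model size: as I understand the streaming API, each element yielded by streamResponse() is a cumulative snapshot of the response so far rather than a delta, so appending the elements to the UI produces exactly the repetitive, verbose output described. A sketch under that assumption:

import FoundationModels

// Assumption: each element from streamResponse(to:) is the full response
// generated so far, not an incremental chunk.
func streamAnswer(to prompt: String, in session: LanguageModelSession) async throws {
    var latest = ""
    for try await partial in session.streamResponse(to: prompt) {
        latest = "\(partial)"   // replace the displayed text, don't append
    }
    print(latest)
}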
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
It's been hours and I still have no access to Image Playground.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I waited a long time for approval, and it finally seems to have arrived, but now the software for working with Image Playground and Genmoji won't download. The phone has been connected to the network for the past 8 hours; the download has sat on "more than 20 minutes remaining" while far more time than that has passed. The most frustrating part is that there is no indication of how much is actually left to download.
WWDC25: Combine Metal 4 machine learning and graphics
The session demonstrated a way to combine a neural network with the graphics pipeline directly through shaders, using texture compression as an example. However, there is no mention of which ML technique is used to compress the texture.
Can anyone point me to some well-known model(s) for this particular use case shown at WWDC25?
Hey everyone
I'm Manish Mehta, field CTO at Centific. I recently read Apple's white paper, The Illusion of Thinking, and it got me thinking about the current state of AI reasoning. Who here has read it?
The paper highlights how LLMs often rely on pattern recognition rather than genuine understanding. When faced with complex tasks, their performance can degrade significantly.
I was just thinking that to move beyond this problem, we need to explore approaches that combine Deeper Reasoning Architectures for true cognitive capability with Deep Human Partnership to guide AI toward better judgment and understanding.
The first part means fundamentally rewiring AI to reason. This involves advancing deeper architectures like World Models, which can build internal simulations to understand real-world scenarios, and Neurosymbolic systems, which combine neural networks with symbolic reasoning for deeper self-verification.
Additionally, we need to look at deep human partnership and scalable oversight. An AI cannot learn certain things from data alone; it lacks real-world judgment. Among other things, deep domain-expert human partners are needed to instill this wisdom, validate the AI's entire reasoning process, build its ethical guardrails, and act as skilled adversaries to find hidden flaws before they can cause harm.
What do you all think? Is this focus on a deeper partnership between advanced AI reasoning and deep human judgment the right path forward?
Agree? Disagree?
Thanks
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
I'm seeing this error a lot in my console log of my iPhone 15 Pro (Apple Intelligence enabled):
com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1
Are there entitlements / permissions I need to enable in Xcode that I forgot to do?
Code example
Here's how I'm initializing the language model session:
private func setupLanguageModelSession() {
    if #available(iOS 26.0, *) {
        let instructions = """
            my instructions
            """
        do {
            languageModelSession = try LanguageModelSession(instructions: instructions)
            print("Foundation Models language model session initialized")
        } catch {
            print("Error creating language model session: \(error)")
            languageModelSession = nil
        }
    } else {
        print("Device does not support Foundation Models (requires iOS 26.0+)")
        languageModelSession = nil
    }
}
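Not a confirmed fix for the modelcatalog errors, but before calling setupLanguageModelSession() I would also check model availability explicitly, which at least separates "Apple Intelligence off or assets still downloading" from a real sandbox or entitlement problem. This sketch assumes the SystemLanguageModel.default.availability API; treat the case names as my assumption.

private func setUpSessionIfModelAvailable() {
    if #available(iOS 26.0, *) {
        // Assumed API: SystemLanguageModel.default.availability, with an
        // .unavailable(reason) case describing why the model can't be used.
        if case .unavailable(let reason) = SystemLanguageModel.default.availability {
            print("Foundation model unavailable: \(reason)")
        } else {
            setupLanguageModelSession()
        }
    }
}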
I'd love to add a feature based on FoundationModels to the Mac Catalyst version of my iOS app. Unfortunately I get an error when importing FoundationModels: No such module 'FoundationModels'.
Documentation says Mac Catalyst is supported: https://developer.apple.com/documentation/foundationmodels
I can create iOS builds using the FoundationModels framework without issues.
Hope this will be fixed soon!
Config:
Xcode 26.0 beta (17A5241e)
macOS 26.0 Beta (25A5279m)
15-inch, M4, 2025 MacBook Air
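Until the Catalyst SDK gap is fixed, one stopgap that keeps a shared target compiling is to guard the import and the call sites with conditional compilation:

#if canImport(FoundationModels)
import FoundationModels
#endif

// Compile-time check that mirrors the SDK gap: true on iOS builds,
// currently false when building for Mac Catalyst with this beta SDK.
var foundationModelsIsAvailableAtBuildTime: Bool {
    #if canImport(FoundationModels)
    return true
    #else
    return false
    #endif
}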
Topic:
Machine Learning & AI
SubTopic:
Foundation Models
When using the imagePlaygroundSheet modifier in SwiftUI, the system presents an image playground sheet at a fixed size. Especially on macOS, this modal is rather small and doesn't use the available space efficiently.
Is there a way to make the modal bigger, or allow the user to resize the dialog? I tried presentationDetents, but this would need to be applied to the content of the sheet, which is provided by the system...
I guess this question applies to other system-provided sheets like the photo picker as well.
Hi everyone,
I've been struggling for a few weeks to integrate my Core ML Image Classifier model into my .swiftpm project, and I’m hoping someone can help.
Here’s what I’ve done so far:
I converted my .mlmodel file to .mlmodelc manually via the terminal.
In my Package.swift file, I tried both "copy" and "process" options for the resource.
The issues I’m facing:
When using "process", Xcode gives me the error:
"multiple resources named 'coremldata.bin' in target 'AppModule'."
When using "copy", the app runs, but the model doesn’t work, and the terminal shows:
"A valid manifest does not exist at path: .../Manifest.json."
I even tried creating a Manifest.json manually to test, but this led to more errors, such as:
"File format version must be in the form of major.minor.patch."
"Failed to look up root model."
To check if the problem was specific to my model, I tested other Core ML models in the same setup, but none of them worked either.
I feel stuck and unsure of how to resolve these issues. Any guidance or suggestions would be greatly appreciated. Thanks in advance! :)
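In case it helps to compare notes, here is the combination I'd try, with the caveat that I haven't verified it inside a .swiftpm playground specifically: copy the already-compiled .mlmodelc directory as a package resource and load it by URL with MLModel(contentsOf:), bypassing the generated class entirely. MyClassifier and the Resources path are placeholders.

// Package.swift, inside the target declaration; names and paths are placeholders.
.executableTarget(
    name: "AppModule",
    resources: [
        .copy("Resources/MyClassifier.mlmodelc")
    ]
),

// Loading the compiled model by URL instead of relying on a generated class:
import CoreML

func loadClassifier() throws -> MLModel {
    guard let url = Bundle.module.url(forResource: "MyClassifier", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url)
}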
Topic:
Machine Learning & AI
SubTopic:
Core ML
Tags:
Swift Packages
Swift Student Challenge
Swift Playground
Core ML