Xcode Version: Version 15.2 (15C500b)
com.github.apple.coremltools.source: torch==1.12.1
com.github.apple.coremltools.version: 7.2
Compute: Mixed (Float16, Int32)
Storage: Float16
The input to the mlpackage is a MultiArray (Float16, 1 × 1 × 544 × 960).
The flexible shape options are: 1 × 1 × 544 × 960 | 1 × 1 × 384 × 640 | 1 × 1 × 736 × 1280 | 1 × 1 × 1088 × 1920
I tested this on iPhone XR, iPhone 11, iPhone 12, iPhone 13, and iPhone 14. On all devices except the iPhone 11, the model runs correctly on the NPU. However, on the iPhone 11, the model runs on the CPU instead.
Here is the CoreMLTools conversion code I used:
import coremltools as ct
import numpy as np

# `trace`, `input_shape`, and `output_shape` are defined earlier (not shown here).
mlmodel = ct.convert(
    trace,
    inputs=[ct.TensorType(shape=input_shape, name="input", dtype=np.float16)],
    outputs=[ct.TensorType(name="output", dtype=np.float16, shape=output_shape)],
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.iOS16,
)
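For reference, a minimal Swift loading sketch (MyModel is a placeholder for the class Xcode generates from the mlpackage) that requests all compute units so Core ML can schedule work onto the Neural Engine where it is supported:

import CoreML

// Minimal loading sketch. `MyModel` stands in for the generated model class;
// adjust the name to match the project.
func loadModel() throws -> MyModel {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all   // .cpuAndNeuralEngine is also worth testing
    return try MyModel(configuration: configuration)
}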
Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.
Every time I click "join the waitlist", there is no reaction.
My phone just redirects me back to this page.
Posting a follow-up question after the WWDC 2025 Machine Learning, AI & Frameworks Group Lab on June 12.
Regarding the on-device APIs of any of the AI frameworks (Foundation Models, Vision framework, etc.): is there a response condition or path where the API outsources its input to ChatGPT if the user has allowed this, the way Siri does?
If the answer is no, ignore this: is that handoff handled behind the scenes or by the developer?
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
Machine Learning
VisionKit
Apple Intelligence
The specific context is that I would like to build an agent that monitors my phone call (with customer support, for example), simply identifies whether or not I'm still on hold, and notifies me when I'm not.
After reading the docs, I don't think it's possible yet, but I'm so annoyed by customer-support calls that I'm willing to go the distance and see if there's any way.
I have seen a lot of tutorials on converting torchvision models to Core ML models, but I haven't been able to find any tutorials for torchaudio models. Is converting a torchaudio model to Core ML even possible? Does anybody have links that show how to do it?
Topic:
Machine Learning & AI
SubTopic:
Core ML
What am I not understanding here?
In short, the view loads text from the JSON descriptions and should then filter out certain words, returning and displaying a list of the most-used words. Debugging shows the words being identified by the code, but they are not filtered out.
private func loadWordCounts() {
    DispatchQueue.global(qos: .background).async {
        let fileManager = FileManager.default
        guard let documentsDirectory = try? fileManager.url(for: .documentDirectory,
                                                            in: .userDomainMask,
                                                            appropriateFor: nil,
                                                            create: false) else { return }
        let descriptions = loadDescriptions(fileManager: fileManager, documentsDirectory: documentsDirectory)
        var counts = countWords(in: descriptions)

        // Lexical classes that should be excluded from the displayed list.
        let tagsToRemove: Set<NLTag> = [
            .verb,
            .pronoun,
            .determiner,
            .particle,
            .preposition,
            .conjunction,
            .interjection,
            .classifier
        ]

        // Tag each word and zero out the count of any word in an excluded class.
        for (word, _) in counts {
            let tagger = NLTagger(tagSchemes: [.lexicalClass])
            tagger.string = word
            let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
            if let unwrappedTag = tag, tagsToRemove.contains(unwrappedTag) {
                counts[word] = 0
            }
        }

        DispatchQueue.main.async {
            self.wordCounts = counts
        }
    }
}
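One thing worth checking, shown in the hedged sketch below: setting a word's count to 0 leaves it in the dictionary, so it can still appear in the displayed list unless zero counts are filtered out later; removing the key drops the word entirely. The standalone helper (filteredCounts is an illustrative name) assumes the same counts dictionary and tag set as above.

import NaturalLanguage

// Hedged sketch: filter the counts by removing excluded words instead of
// zeroing their counts.
func filteredCounts(_ counts: [String: Int]) -> [String: Int] {
    let tagsToRemove: Set<NLTag> = [
        .verb, .pronoun, .determiner, .particle,
        .preposition, .conjunction, .interjection, .classifier
    ]
    let tagger = NLTagger(tagSchemes: [.lexicalClass])
    var result = counts
    for (word, _) in counts {
        tagger.string = word
        let (tag, _) = tagger.tag(at: word.startIndex, unit: .word, scheme: .lexicalClass)
        if let tag, tagsToRemove.contains(tag) {
            result.removeValue(forKey: word)   // drop the word rather than leaving a zero count
        }
    }
    return result
}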
I requested access to Playground when iOS 18.2 beta 1 was released, and now on beta 2 of iOS 18.2 it is still waiting. How long does this take?
Hello Apple Developer Community,
I'm investigating Core ML model loading behavior and noticed that even when the compiled model path remains unchanged after an app update, the first run still triggers an "uncached load". This adds an unnecessary delay that impacts the user experience.
Question: Does Core ML provide any public API to check whether a compiled model (from a specific .mlmodelc path) is already cached in the system?
If such an API exists, we'd like to use it in our pre-loading decision logic: only perform a background pre-load when the model isn't cached.
Has anyone encountered similar scenarios or found official solutions? Any insights would be greatly appreciated!
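For reference, a minimal sketch of the background pre-load itself (prewarmModel and modelURL are illustrative names); I'm not aware of a public API that reports whether a given .mlmodelc is already cached, so this version simply pre-loads unconditionally to warm the cache:

import Foundation
import CoreML

// Hedged sketch: warm Core ML's model cache by loading the compiled model once
// in the background and discarding it. `modelURL` points at the compiled .mlmodelc.
func prewarmModel(at modelURL: URL) {
    Task.detached(priority: .background) {
        do {
            let configuration = MLModelConfiguration()
            _ = try await MLModel.load(contentsOf: modelURL, configuration: configuration)
        } catch {
            print("Model pre-warm failed: \(error)")
        }
    }
}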
It's been more than three days that I've been waiting for the Playgrounds app to test these Apple Intelligence features. Not to mention that I updated the system and the Playgrounds app did not appear on my iPhone 15 Pro. Can anyone tell me where I can download the Playgrounds app?
Is the face and body detection service in the Vision framework a local model or a cloud model?
https://developer.apple.com/documentation/vision
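As far as I know, Vision's face and body detection requests run entirely on device; a minimal sketch (cgImage stands in for an image you already have):

import Vision
import CoreGraphics

// Minimal on-device face detection sketch; no network access is involved.
func detectFaces(in cgImage: CGImage) throws -> Int {
    let request = VNDetectFaceRectanglesRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.count ?? 0
}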
I signed up for Apple Intelligence on my iPad Air (M1) and then updated my phone today. It tells me that I've already been put on a waitlist I didn't even join, and it's been stuck on that for 2 days now.
Hardware: MacBook Pro (M4, Nov 2024)
Software: macOS Tahoe 26.0 & Xcode 26.0
Apple Intelligence is activated and the Image Playground macOS app works.
Running the following in Xcode throws ImagePlayground.ImageCreator.Error.creationFailed.
Any suggestions on how to make this work?
import Foundation
import ImagePlayground

Task {
    let creator = try await ImageCreator()
    guard let style = creator.availableStyles.first else {
        print("No styles available")
        exit(1)
    }
    let images = creator.images(
        for: [.text("A cat wearing mittens.")],
        style: style,
        limit: 1)
    for try await image in images {
        print("Generated image: \(image)")
    }
    exit(0)
}
RunLoop.main.run()
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Can vector embeddings be used in a SwiftData Model?
If yes, are there resources available to learn more about it, or at least a guided walkthrough of how to make it work?
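A hedged sketch of one approach, assuming SwiftData can persist a plain [Float] property (it stores Codable value types) and using NLEmbedding for on-device vectors; similarity is computed manually, since SwiftData has no built-in vector index:

import Foundation
import SwiftData
import NaturalLanguage

// Hedged sketch: store an embedding alongside the text it came from.
// Assumes SwiftData persists [Float] (Codable) properties.
@Model
final class Note {
    var text: String
    var embedding: [Float]

    init(text: String, embedding: [Float]) {
        self.text = text
        self.embedding = embedding
    }
}

// On-device sentence embedding via NaturalLanguage (returns nil if unavailable).
func embedding(for text: String) -> [Float]? {
    guard let model = NLEmbedding.sentenceEmbedding(for: .english),
          let vector = model.vector(for: text) else { return nil }
    return vector.map(Float.init)
}

// Cosine similarity for ranking stored notes against a query embedding.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    guard a.count == b.count, !a.isEmpty else { return 0 }
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return (magA > 0 && magB > 0) ? dot / (magA * magB) : 0
}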
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Tags:
iOS
Swift
SwiftData
Apple Intelligence
I keep looking through the documentation and some sample code but still haven't found an example of how this type of network regressor is used.
Does it take special parameters to run on the ANE? What size and format of DataFrame does it expect?
Hi everyone,
I'm working on an iOS app built in Swift using Xcode, where I'm integrating Roboflow's object detection API to extract items from grocery receipts. My goal is to identify key information (like items, total, tax, etc.) from the images of these receipts.
I'm successfully sending images to the Roboflow API and receiving predictions with bounding box data, but when I attempt to extract text from the detected regions (bounding boxes), it appears that the text extraction is failing—no text is being recognized. The issue seems to be that the bounding boxes are either not properly being handled or something is going wrong in the way I process the API response.
Here's a brief breakdown of what I'm doing:
The image is captured, converted to base64, and sent to the Roboflow API.
The API response comes back with bounding boxes for the detected elements (items, date, subtotal, etc.).
The problem occurs when I try to extract the text from the image using the bounding box data—it seems like the bounding boxes are being found, but no text is returned.
I suspect the issue might be happening because the app’s segue to the results view controller is triggered before the OCR extraction completes, or there might be a problem in my code handling the bounding box response.
Response Data:
{
  "inference_id": "77134cce-91b5-4600-a59b-fab74350ca06",
  "time": 0.09240847699993537,
  "image": {
    "width": 370,
    "height": 502
  },
  "predictions": [
    {
      "x": 163.5,
      "y": 250.5,
      "width": 313.0,
      "height": 127.0,
      "confidence": 0.9357666373252869,
      "class": "Item",
      "class_id": 1,
      "detection_id": "753341d5-07b6-42a1-8926-ecbc61128243"
    },
    {
      "x": 52.5,
      "y": 417.5,
      "width": 89.0,
      "height": 23.0,
      "confidence": 0.8819760680198669,
      "class": "Date",
      "class_id": 0,
      "detection_id": "b4681149-d538-47b1-8700-d9528bf1daa0"
    },
    ...
  ]
}
And the log showing bounding boxes:
Prediction: ["width": 313, "y": 250.5, "x": 163.5, "detection_id": 753341d5-07b6-42a1-8926-ecbc61128243, "class": Item, "height": 127, "confidence": 0.9357666373252869, "class_id": 1]
No bounding box found in prediction.
I've double-checked the bounding box coordinates, and everything seems fine. Does anyone have experience with using OCR alongside object detection APIs in Swift? Any help on how to ensure the bounding boxes are properly processed and used for OCR would be greatly appreciated!
Also, would it help to delay the segue to the results view controller until OCR is complete?
Thank you!
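For what it's worth, here is a hedged sketch of how the text extraction step could be driven from one bounding box using Vision's VNRecognizeTextRequest and its regionOfInterest. It converts the Roboflow-style center-based pixel box (top-left origin) into Vision's normalized, bottom-left-origin rect (recognizeText is an illustrative helper name):

import Foundation
import Vision
import CoreGraphics

// Hedged sketch: recognize text only inside one detected box. Roboflow predictions
// give a center x/y plus width/height in pixels with a top-left origin; Vision's
// regionOfInterest wants a normalized rect with a bottom-left origin, so y is flipped.
func recognizeText(in cgImage: CGImage,
                   boxCenterX x: CGFloat, boxCenterY y: CGFloat,
                   boxWidth w: CGFloat, boxHeight h: CGFloat,
                   completion: @escaping ([String]) -> Void) {
    let imageW = CGFloat(cgImage.width)
    let imageH = CGFloat(cgImage.height)

    let request = VNRecognizeTextRequest { request, _ in
        let lines = (request.results as? [VNRecognizedTextObservation])?
            .compactMap { $0.topCandidates(1).first?.string } ?? []
        completion(lines)   // fire only after OCR has finished (e.g. before performing the segue)
    }
    request.recognitionLevel = .accurate
    request.regionOfInterest = CGRect(x: (x - w / 2) / imageW,
                                      y: 1 - (y + h / 2) / imageH,
                                      width: w / imageW,
                                      height: h / imageH)

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            try handler.perform([request])
        } catch {
            completion([])
        }
    }
}

And yes, on the last point: deferring the segue until the completion closure fires (or until all boxes have been processed) avoids showing the results screen before OCR is done.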
I downloaded the new developer beta and then installed Xcode. The other downloads completed, but I couldn't download the Predictive Code Completion model. When I try to download it I get the error "The operation couldn’t be completed. (ModelCatalog.CatalogErrors.AssetErrors error 1.)". I am using the M3 Pro model.
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
Hi, while trying to diagnose why some of my Core ML models run slower with their configuration set to compute units .CPU_AND_GPU than with .CPU_ONLY, I've been attempting to create Core ML model performance reports in Xcode to identify the operations that are not compatible with the GPU. However, when selecting an iPhone as the connected device with a compute unit of 'All', 'CPU and GPU', or 'CPU and Neural Engine', Xcode displays one of the following two error messages:
“There was an error creating the performance report. The performance report has crashed on device”
"There was an error creating the performance report. Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."
The performance reports are successfully generated when selecting the connected device as iPhone with compute unit 'CPU only' or Mac with any combination of compute units.
Some of the models I have found the issue with are stateful, some are not. I have tried to replicate the issue with example models from the Core ML Tools stateful-model guide and the video "Bring your machine learning and AI models to Apple silicon". For a model generated from the Simple Accumulator example code, the performance report is created successfully with all compute-unit options, but for models from the toy attention and toy attention with kv-cache examples it only succeeds with 'CPU only' when choosing an iPhone as the device.
Versions I'm currently working with:
Xcode Version 16.0
macOS Sequoia 15.0.1
Core ML Tools 8.0
iPhone 16 Pro iOS 18.0.1
Is there a way to avoid these errors? Or is there another way to identify which operations within a Core ML model are supported to run on the iPhone's GPU/Neural Engine?
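One alternative that might help with the second question, if the deployment target allows it (if I'm reading the availability right, iOS 17.4 / macOS 14.4 and later): the MLComputePlan API can report, per operation of an ML Program, which compute devices support it and which one Core ML prefers. A hedged sketch (modelURL stands in for the compiled model's URL):

import CoreML

// Hedged sketch: inspect which compute device each operation of an ML Program
// can run on, without relying on Xcode performance reports.
func reportDeviceSupport(for modelURL: URL) async throws {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .all

    let plan = try await MLComputePlan.load(contentsOf: modelURL, configuration: configuration)

    guard case .program(let program) = plan.modelStructure,
          let mainFunction = program.functions["main"] else {
        print("Not an ML Program, or no main function found")
        return
    }

    for operation in mainFunction.block.operations {
        let usage = plan.deviceUsage(for: operation)
        print(operation.operatorName,
              "preferred:", usage?.preferred as Any,
              "supported:", usage?.supported as Any)
    }
}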
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip which will only show this Tip on devices where Visual Intelligence is supported (ex. not iPhone 14 Pro Max).
What is the best way for me to determine availability to set this TipKit rule?
Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
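Whatever the availability check ends up being, the TipKit side can be gated with a parameter-based rule; a hedged sketch where isVisualIntelligenceSupported is a placeholder flag the app sets from that check, not an Apple-provided API:

import SwiftUI
import TipKit

struct VisualIntelligenceTip: Tip {
    // Placeholder: set this from whatever availability check you settle on
    // (device capability, Apple Intelligence eligibility, etc.).
    @Parameter
    static var isVisualIntelligenceSupported: Bool = false

    var title: Text { Text("Look it up with Visual Intelligence") }
    var message: Text? { Text("Point the camera at an item to find it in the app.") }

    var rules: [Rule] {
        #Rule(Self.$isVisualIntelligenceSupported) { $0 == true }
    }
}

// At launch, once support has been determined:
// VisualIntelligenceTip.isVisualIntelligenceSupported = true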
I updated to iOS 18.2 beta 2, and with it, I am able to see behind the early access requested by just swiping down, but when I do swipe down, the app closes. Does this mean I'm close to getting access?
Hi there,
I have a custom keypoint detection model and want to use it via Vision's Core ML request (VNCoreMLRequest) API. There are some complications around input and output:
For input: my model expects a 512x512 image, which would be resized and padded from a 1920x1080 frame. I use the .scaleFit option, but can I also specify the color used for padding?
For output:
My model outputs a VNCoreMLFeatureValueObservation. Can I have it output in a format Vision recognizes, such as joints/keypoints?
If my model can output in a format Vision recognizes, would Vision take care of restoring the coordinates back to the original frame (undoing the padding)? If not, how do I restore them myself when using .scaleFit?
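For context, this is the direction I've sketched so far: drive the model with VNCoreMLRequest using .scaleFit, read the raw MLMultiArray from the VNCoreMLFeatureValueObservation, and undo the letterboxing by hand. It assumes the padded content is centered and that the output is laid out as [N, 2] with (x, y) in 512 × 512 pixel coordinates; both are placeholder assumptions to adapt.

import Vision
import CoreML
import CoreGraphics

// Hedged sketch: custom keypoint model (512x512 input) driven through Vision.
// Vision has no built-in keypoint observation for custom models, so the raw
// MLMultiArray is read back and the .scaleFit letterbox is undone manually.
func detectKeypoints(in pixelBuffer: CVPixelBuffer,
                     mlModel: MLModel,
                     frameSize: CGSize) throws -> [CGPoint] {
    let vnModel = try VNCoreMLModel(for: mlModel)
    let request = VNCoreMLRequest(model: vnModel)
    request.imageCropAndScaleOption = .scaleFit   // letterbox the 1920x1080 frame into 512x512

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])

    guard let observation = request.results?.first as? VNCoreMLFeatureValueObservation,
          let array = observation.featureValue.multiArrayValue else { return [] }

    // Placeholder decoding: assumes output shape [N, 2] with (x, y) in 512x512 pixels,
    // and that the padding is centered; verify both against your model.
    let inputSide: CGFloat = 512
    let scale = min(inputSide / frameSize.width, inputSide / frameSize.height)
    let padX = (inputSide - frameSize.width * scale) / 2
    let padY = (inputSide - frameSize.height * scale) / 2

    var points: [CGPoint] = []
    let count = array.shape[0].intValue
    for i in 0..<count {
        let x = CGFloat(array[[NSNumber(value: i), 0]].doubleValue)
        let y = CGFloat(array[[NSNumber(value: i), 1]].doubleValue)
        // Undo the letterbox to get back to original frame coordinates.
        points.append(CGPoint(x: (x - padX) / scale, y: (y - padY) / scale))
    }
    return points
}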
Best,