I have a model that uses a CoreML delegate, and I’m getting the following warning whenever I set the model to nil. My understanding is that CoreML is creating a cache in the app’s storage but is having issues clearing it. As a result, the app’s storage usage increases every time the model is loaded.
This StackOverflow post explains the problem in detail: App Storage Size Increases with CoreML usage
This is a critical issue because the cache will eventually fill up the phone’s storage (a possible cleanup sketch follows the log below):
doUnloadModel:options:qos:error:: model=_ANEModel: { modelURL=file:///var/mobile/Containers/Data/Application/22DDB13E-DABA-4195-846F-F884135F37FE/tmp/F38A9824-3944-420C-BD32-78CE598BE22D-10125-00000586EFDFD7D6.mlmodelc/ : sourceURL= (null) : key={"isegment":0,"inputs":{"0_0":{"shape":[256,256,1,3,1]}},"outputs":{"142_0":{"shape":[16,16,1,222,1]},"138_0":{"shape":[16,16,1,111,1]}}} : identifierSource=0 : cacheURLIdentifier=E0CD0F44FB0417936057FC6375770CFDCCC8C698592ED412DDC9C81E96256872_C9D6E5E73302943871DC2C610588FEBFCB1B1D730C63CA5CED15D2CD5A0AC0DA : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 } : state=3 : programHandle=6077141501305 : intermediateBufferHandle=6077142786285 : queueDepth=127 : attr={
ANEFModelDescription = {
ANEFModelInput16KAlignmentArray = (
);
ANEFModelOutput16KAlignmentArray = (
);
ANEFModelProcedures = (
{
ANEFModelInputSymbolIndexArray = (
0
);
ANEFModelOutputSymbolIndexArray = (
0,
1
);
ANEFModelProcedureID = 0;
}
);
kANEFModelInputSymbolsArrayKey = (
"0_0"
);
kANEFModelOutputSymbolsArrayKey = (
"138_0@output",
"142_0@output"
);
kANEFModelProcedureNameToIDMapKey = {
net = 0;
};
};
NetworkStatusList = (
{
LiveInputList = (
{
BatchStride = 393216;
Batches = 1;
Channels = 3;
Depth = 1;
DepthStride = 393216;
Height = 256;
Interleave = 1;
Name = "0_0";
PlaneCount = 3;
PlaneStride = 131072;
RowStride = 512;
Symbol = "0_0";
Type = Float16;
Width = 256;
}
);
LiveOutputList = (
{
BatchStride = 113664;
Batches = 1;
Channels = 111;
Depth = 1;
DepthStride = 113664;
Height = 16;
Interleave = 1;
Name = "138_0@output";
PlaneCount = 111;
PlaneStride = 1024;
RowStride = 64;
Symbol = "138_0@output";
Type = Float16;
Width = 16;
},
{
BatchStride = 227328;
Batches = 1;
Channels = 222;
Depth = 1;
DepthStride = 227328;
Height = 16;
Interleave = 1;
Name = "142_0@output";
PlaneCount = 222;
PlaneStride = 1024;
RowStride = 64;
Symbol = "142_0@output";
Type = Float16;
Width = 16;
}
);
Name = net;
}
);
} : perfStatsMask=0} was not loaded by the client.
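In case it helps anyone hitting the same growth: below is a minimal cleanup sketch, purely an assumption on my part rather than a documented fix, based on the modelURL in the log pointing into the app's tmp directory. It sweeps leftover compiled .mlmodelc bundles out of tmp after the MLModel has been released; adjust the location check if your cache ends up elsewhere.

import Foundation

/// Best-effort cleanup of compiled Core ML model bundles left behind in tmp.
/// Call this only after the MLModel instance has been set to nil / deallocated.
func removeStaleCompiledModels() {
    let fileManager = FileManager.default
    let tmpURL = fileManager.temporaryDirectory
    guard let contents = try? fileManager.contentsOfDirectory(
        at: tmpURL,
        includingPropertiesForKeys: nil
    ) else { return }

    for url in contents where url.pathExtension == "mlmodelc" {
        // Each .mlmodelc is a directory containing a compiled model; remove it wholesale.
        try? fileManager.removeItem(at: url)
    }
}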
Hi everyone,
Could someone confirm if it's currently possible, or if there are any plans, to restrict users from enabling Apple Intelligence altogether?
I understand that we can block individual features using MDM, but I'm interested in knowing if we can prevent users from toggling Apple Intelligence on and off in System Settings entirely.
Thanks!
Kind Regards,
Filipe Nogueira
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I noticed that ChatGPT is listed as an "Extension" in the Apple Intelligence settings on iOS 18.2 beta. Does this mean developers will be able to create their own extensions? Or will this be limited to larger companies incorporating their own models into Apple Intelligence?
Hi everyone,
I'm working on integrating object recognition from live video feeds into my existing app by following Apple's sample code. My original project captures video and records it successfully. However, after integrating the Vision-based object detection components (VNCoreMLRequest), no detections occur, and the callback for the request is never triggered.
To debug this issue, I’ve added the following functionality:
Set up AVCaptureVideoDataOutput for processing video frames.
Created a VNCoreMLRequest using my Core ML model.
The video recording functionality works as expected, but no object detection happens. I’d like to know:
How can I debug this further? Which debug points or logs could help identify where the issue lies?
Have I missed any key configurations? Below is a diff of the modifications I’ve made to my project for the new feature.
Diff of Changes:
(Attach the diff provided above)
Specific Observations:
The captureOutput method is invoked correctly, but there is no output or error from the Vision request callback.
Print statements in my setup function setForVideoClassify() show that the setup executes without errors.
Questions:
Could this be due to issues with my Core ML model compatibility or configuration?
Is the VNCoreMLRequest setup incorrect, or do I need to ensure specific image formats for processing?
Platform:
Xcode 16.1, iOS 18.1, Swift 5, SwiftUI, iPhone 11,
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any guidance or advice is appreciated! Thanks in advance.
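For comparison, here is a minimal sketch of the path I would expect each frame to take, assuming a generated model class named YourDetector (a placeholder, not the real class name) and that the Vision request is performed inside captureOutput; if handler.perform is never called, or throws, the request's completion handler will never fire.

import AVFoundation
import CoreML
import Vision

final class VideoClassifier: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    // Build the request once; YourDetector stands in for the Xcode-generated model class.
    private lazy var detectionRequest: VNCoreMLRequest? = {
        guard let coreMLModel = try? VNCoreMLModel(for: YourDetector(configuration: MLModelConfiguration()).model) else {
            print("Failed to create VNCoreMLModel")
            return nil
        }
        let request = VNCoreMLRequest(model: coreMLModel) { request, error in
            if let error { print("Vision error: \(error)") }
            print("Observations: \(request.results?.count ?? 0)")
        }
        request.imageCropAndScaleOption = .scaleFill
        return request
    }()

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let request = detectionRequest,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right)
        do {
            // Without this call the completion handler above is never triggered.
            try handler.perform([request])
        } catch {
            print("perform failed: \(error)")
        }
    }
}

A quick check worth making against the sketch: confirm that perform is actually reached for each frame, and that the pixel buffer's orientation matches how the camera is mounted, since a wrong orientation can silently produce zero detections.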
Hey, has anyone figured out how the “Persons” list in Genmoji/Playground actually works?
I’ve had a strange experience so far. When I first got access during Beta 2, the list randomly included about 10–15 people, even though my photo library contains many more recognizable faces. To try fixing this, I started naming faces in the Photos app, hoping they’d be added to the Genmoji/Playground list, but nothing changed.
Then, after updating to Beta 3, it added just 2–3 of the people I had named. Encouraged, I spent about an hour naming all the faces in my library. But a few hours later, the list unexpectedly removed around 10 people, leaving me with fewer than I had initially.
I’ve also read that leaving the phone locked and plugged into power should help sort people in the library, but that hasn’t worked for me yet.
Anyone else experienced this or found a way to make it work? Thanks!
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I tried this:
struct CarShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LockCarIntent(),
            phrases: ["Lock my car with \(.applicationName)", "Lock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lock Car"),
            systemImageName: "lock.fill"
        )
        AppShortcut(
            intent: UnlockCarIntent(),
            phrases: ["Unlock my car with \(.applicationName)", "Unlock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Unlock Car"),
            systemImageName: "lock.open.fill"
        )
    }
}
but Siri only understands "unlock my car", not the phrase with the placeholder. Siri then asks me for the car, and it understands my answer, but not in one sentence. Is there something wrong with my code?
I also tried it without applicationName first, and then it didn't work with Siri at all. Is this a general limitation of App Intents? I thought the goal was to reduce friction; if the user has to mention the app name every time, that adds friction.
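One hedged suggestion, in case it applies: as far as I know, for a placeholder like \(\.$car) to be understood in a single utterance, Siri has to learn the possible values ahead of time, which is what AppShortcutsProvider.updateAppShortcutParameters() is for. A minimal sketch, assuming the cars come from your own store:

import AppIntents

// Call this whenever the user's list of cars changes (added, renamed, deleted),
// so Siri can re-donate the shortcut phrases with the current \(\.$car) values.
func carsDidChange() {
    CarShortcutsProvider.updateAppShortcutParameters()
}

On the second point: as far as I know, every App Shortcut phrase is required to contain \(.applicationName), so phrases without the app name not working is expected behavior rather than a bug in your code.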
Topic:
Machine Learning & AI
SubTopic:
Apple Intelligence
I have followed https://apple.github.io/coremltools/docs-guides/source/installing-coremltools.html but it failed.
The doc looks outdated.
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hi, I found that continuously predicting with the same Core ML model at 120 FPS is faster than at 60 FPS.
On a MacBook Pro M2 with ProMotion turned on, running Core ML model prediction against a 120 FPS video, the average prediction time is 7.46 ms, as below:
But when I turn off ProMotion, set a 60 Hz refresh rate, and run Core ML model prediction with a 60 FPS video, the average prediction time is 10.91 ms, as below:
What could be the technical explanation for these results? Is there any documentation or technical literature that addresses this behavior?
Hi Everyone,
I'm currently facing an issue where TensorFlow is unable to detect the GPU on my M1 Mac for model training. When I run the following code to check for available GPUs:
import tensorflow as tf
print("Num GPUs Available: ", len(tf.config.list_physical_devices('GPU')))
The output is:
Num GPUs Available: 0
I have already applied the steps mentioned in the Apple developer documentation:
https://developer.apple.com/metal/tensorflow-plugin/
System Information:
Device: M1 Mac Pro Max
Python Version: 3.12.2
TensorFlow Version: 2.17.0
OS: macOS Sequoia (15.1)
Questions:
Is there any additional configuration required to enable GPU support on M1 Macs?
Are there specific TensorFlow versions that I should be using for better compatibility?
Has anyone else faced this issue, and how did you resolve it?
Topic:
Machine Learning & AI
SubTopic:
General
Tags:
Developer Tools
ML Compute
Core ML
tensorflow-metal
Hi, I am trying to create a multi label image classifier model using CreateML (the one included in Xcode 16.1).
However, my annotations.json file won't get accepted by the app.
I get the following error: annotations.json file contains field "Index 0" that is not of type String
Here is a JSON example which results in said error:
[
  {
    "image": "image1.jpg",
    "annotations": [
      {
        "label": "car-license-plate",
        "coordinates": {
          "x": 160, "y": 108, "width": 190, "height": 200
        }
      }
    ]
  },
  {
    "image": "image2.jpg",
    "annotations": [
      {
        "label": "car-license-plate",
        "coordinates": {
          "x": 250, "y": 150, "width": 100, "height": 98
        }
      }
    ]
  }
]
Topic:
Machine Learning & AI
SubTopic:
Create ML
https://developer.apple.com/machine-learning/models/
Adding the DepthAnythingV2SmallF16.mlpackage to a new project in Xcode 16.1 and invoking the class crashes the app.
Anyone else having the same issue?
I tried Xcode 16.2 beta and it has the same response.
Code
import UIKit
import CoreML
class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        do {
            // Use a default model configuration.
            let defaultConfig = MLModelConfiguration()
            // app crashes here
            let model = try DepthAnythingV2SmallF16(   // `try` (not `try?`) so the catch below is reachable
                configuration: defaultConfig
            )
        } catch {
            //
        }
    }
}
Response
/AppleInternal/Library/BuildRoots/4b66fb3c-7dd0-11ef-b4fb-4a83e32a47e1/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:129: failed assertion `Error: unhandled platform for MPSGraph serialization'
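Not a fix, but a hedged diagnostic sketch: since the assertion comes from MPSGraph serialization, it may be worth checking whether the load succeeds when the model is restricted to the CPU, which at least narrows down whether the GPU/Neural Engine path is the one failing (this only isolates the backend; it is not a documented workaround).

import CoreML

// Diagnostic only: force CPU execution to see whether the MPSGraph path is the culprit.
let cpuOnlyConfig = MLModelConfiguration()
cpuOnlyConfig.computeUnits = .cpuOnly

let cpuOnlyModel = try? DepthAnythingV2SmallF16(configuration: cpuOnlyConfig)
print(cpuOnlyModel == nil ? "still failing" : "loaded with CPU-only compute units")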
Topic:
Machine Learning & AI
SubTopic:
Core ML
Hello,
I’m attempting to convert a TensorFlow model to CoreML using the coremltools package, but I’m encountering an error during the conversion process. The error traceback points to an issue within the Cast operation in the MIL (Model Intermediate Language) when it tries to perform type inference:
AttributeError: 'float' object has no attribute 'astype'
Here is the relevant part of the error traceback:
File "~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/coremltools/converters/mil/mil/ops/defs/iOS15/elementwise_unary.py", line 896, in get_cast_value
return input_var.val.astype(dtype=type_map[dtype_val])
I’ve tried converting a model from the yamnet-tensorflow2 repository, and this error occurs when CoreML tries to cast a float type during the conversion of certain operations. I’m currently using Python 3.10 and coremltools version 6.0.1, with TensorFlow 2.x.
Has anyone encountered a similar issue or can offer suggestions on how to resolve this?
I’ve also considered that this might be related to mismatches in the model’s data types, but I’m not sure how to proceed.
Platform and package versions:
coremltools 6.1
tensorflow 2.10.0
tensorflow-estimator 2.10.0
tensorflow-hub 0.16.1
tensorflow-io-gcs-filesystem 0.37.1
Python 3.10.12
pip 24.3.1 from ~/.pyenv/versions/3.10.12/lib/python3.10/site-packages/pip (python 3.10)
Darwin MacBook-Pro.local 24.1.0 Darwin Kernel Version 24.1.0: Thu Oct 10 21:02:27 PDT 2024; root:xnu-11215.41.3~2/RELEASE_X86_64 x86_64
Any help or pointers would be greatly appreciated!
I am working on adding indexing to my App Entities via IndexedEntity. I already index my content separately via Spotlight.
Watching 'What's New in App Intents', this is covered well but I have a question.
Do I need to implement both CSSearchableItem's associateAppEntity AND also a custom implementation of attributeSet in my IndexedEntity conformance? It seems duplicative but I can't tell from the video if you're supposed to do both or just one or the other.
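Not an authoritative answer, but for context, here is the shape I currently assume from the session: the IndexedEntity conformance supplies the attributeSet, and CSSearchableItem's associateAppEntity matters mainly when you keep creating the searchable items yourself and want the entity tied to them. A rough sketch of the attributeSet side only, with MyEntity, displayName and summary as hypothetical names; the exact signatures should be checked against the current App Intents headers.

import AppIntents
import CoreSpotlight
import UniformTypeIdentifiers

// Hypothetical entity; the attributeSet augments what the framework would index by default.
extension MyEntity: IndexedEntity {
    var attributeSet: CSSearchableItemAttributeSet {
        let attributes = CSSearchableItemAttributeSet(contentType: .content)
        attributes.title = displayName          // hypothetical stored property
        attributes.contentDescription = summary // hypothetical stored property
        return attributes
    }
}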
Attempting to set up ComfyUI-CoreMLSuite on my Mac Studio.
ComfyUI starts but no Core nodes are in the add-node-list.
I cloned both ComfyUI-CoreMLSuite and ml-stable-diffusion into custom_nodes and bounced the ComfyUI server.
The startup complains that ml-stable-diffusion has no __init__.py:
FileNotFoundError: [Errno 2] No such file or directory: ... /ComfyUI/custom_nodes/ml-stable-diffusion/__init__.py'
It appears to be a show stopper.
What to do?
So when I was in the Settings app, I couldn't see it. I updated, but I still don't see it and I don't know why. Is this a glitch? Please fix it. Your friend, Isaiah.
I'm finding the model gives very jagged edges. This may be due to the output resolution: Grayscale16Half 518 × 392.
I have tried to re-convert this model on Colab but have not had much luck, as this is very much out of my comfort zone. Has anyone else dealt with this? The model would be perfect if I could just overcome this issue.
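In case it is useful, a minimal sketch of the first thing I would try: upsampling the low-resolution depth map back to the source image size with Core Image's bicubic scale filter before using it, which usually softens the stair-stepping that a 518 × 392 output produces. This assumes the depth output is available as a CVPixelBuffer and does not change the model itself.

import CoreImage
import CoreImage.CIFilterBuiltins

/// Upscale a low-resolution depth map to the target size with bicubic filtering.
func upscaleDepthMap(_ depthPixelBuffer: CVPixelBuffer, to targetSize: CGSize) -> CIImage {
    let depthImage = CIImage(cvPixelBuffer: depthPixelBuffer)
    let scaleY = targetSize.height / depthImage.extent.height
    let scaleX = targetSize.width / depthImage.extent.width

    let filter = CIFilter.bicubicScaleTransform()
    filter.inputImage = depthImage
    filter.scale = Float(scaleY)                // overall scale factor (vertical)
    filter.aspectRatio = Float(scaleX / scaleY) // additional horizontal scaling
    return filter.outputImage ?? depthImage
}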
Hi,
I'm trying to analyze images in my Photos library with the following code:
func analyzeImages(_ inputIDs: [String]) {
    let manager = PHImageManager.default()
    let option = PHImageRequestOptions()
    option.isSynchronous = true
    option.isNetworkAccessAllowed = true
    option.resizeMode = .none
    option.deliveryMode = .highQualityFormat

    let concurrentTasks = 1
    let clock = ContinuousClock()
    let duration = clock.measure {
        let group = DispatchGroup()
        let sema = DispatchSemaphore(value: concurrentTasks)
        for entry in inputIDs {
            if let asset = PHAsset.fetchAssets(withLocalIdentifiers: [entry], options: nil).firstObject {
                print("analyzing asset: \(entry)")
                group.enter()
                sema.wait()
                manager.requestImage(for: asset, targetSize: PHImageManagerMaximumSize, contentMode: .aspectFit, options: option) { (result, info) in
                    if let result = result {
                        Task {
                            print("retrieved asset: \(entry)")
                            let aestheticsRequest = CalculateImageAestheticsScoresRequest()
                            let fingerprintRequest = GenerateImageFeaturePrintRequest()
                            let inputImage = result.cgImage!
                            let handler = ImageRequestHandler(inputImage)
                            let (aesthetics, fingerprint) = try await handler.perform(aestheticsRequest, fingerprintRequest)
                            // save Results
                            print("finished asset: \(entry)")
                            sema.signal()
                            group.leave()
                        }
                    } else {
                        sema.signal() // also release the semaphore here, or the loop stalls after a nil result
                        group.leave()
                    }
                }
            }
        }
        group.wait()
    }
    print("analyzeImages: Duration \(duration)")
}
When running this code, only two requests are being processed simultaneously (due to the semaphore)... However, if I call the function with a large list of images (>100), memory usage balloons to over 1.6 GB and the app crashes. If I call it with a smaller number of images, the loop completes and the memory is freed.
When I use Instruments to look for memory leaks, it indicates no memory leaks are found, but there are 150+ VM: IOSurface allocations by CMPhoto, CoreVideo and CoreGraphics at 35 MB each. Shouldn't each surface be released when the task is complete?
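Not a definitive answer, but a sketch of the two mitigations I would try first, assuming the IOSurfaces come from decoding every asset at full resolution: request a bounded targetSize instead of PHImageManagerMaximumSize, and keep each synchronous request inside an autoreleasepool so decoded surfaces can be reclaimed per iteration rather than at the end of the whole loop. The 2048-pixel cap below is an arbitrary assumption, not a documented requirement of the Vision requests.

import Photos
import UIKit

/// Sketch: fetch one image at a bounded size inside an autoreleasepool.
func fetchBoundedImage(for localIdentifier: String,
                       using manager: PHImageManager,
                       options: PHImageRequestOptions,
                       completion: @escaping (CGImage?) -> Void) {
    autoreleasepool {
        guard let asset = PHAsset.fetchAssets(withLocalIdentifiers: [localIdentifier],
                                              options: nil).firstObject else {
            completion(nil)
            return
        }
        // Bounded target size: plenty for aesthetics / feature-print requests,
        // far smaller than a full-resolution decode.
        let boundedSize = CGSize(width: 2048, height: 2048)
        manager.requestImage(for: asset,
                             targetSize: boundedSize,
                             contentMode: .aspectFit,
                             options: options) { image, _ in
            completion(image?.cgImage)
        }
    }
}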
In the 2019 WWDC session Training Object Detection Models in Create ML, a JSON file named annotations_832_newdice_copy.json was shown alongside an images folder named Dice Training Images Two Sets.
Are these resources made available to devs?
I am also looking to understand whether the 6,000 annotations needed to be done manually.
Meaning, did they manually annotate around 1,000 images, adding 6 labels to each, to produce this dataset? The video shows around 1,000 images.
Can someone please clarify?
Hello! I've been trying to run TensorFlow on my MBA M3. I previously had an Intel Mac and was able to run TensorFlow without any problem. I've been working on a personal project in a directory I made on my previous Mac, which I was running through Jupyter Notebook. Now every time I try to run the code, the kernel dies and I'm unsure what to do.
I tried following tutorials, but every tutorial I've seen has me create a new environment to access Jupyter Notebook, without letting me access the notebooks and files that I've already created.
I tried to run the following command in Terminal and received the error below.
python -m pip install tensorflow-metal
ERROR: Could not find a version that satisfies the requirement tensorflow-metal (from versions: none)
ERROR: No matching distribution found for tensorflow-metal
I've installed miniforge, Xcode, and anaconda onto my computer already and wanted some assistance.
I'm trying to set up Facebook AI's "Segment Anything" MLModel to compare its performance and efficacy on-device against the Vision library's Foreground Instance Mask Request.
The Vision request accepts any reasonably-sized image for processing, and then has a method to produce an output at the same resolution as the input image. Conversely, the MLModel for Segment Anything accepts a 1024x1024 image for inference and produces a 1024x1024 output.
What is the best way to work with non-square images, such as 4:3 camera photos? I can basically think of 3 methods for accomplishing this:
1. Scale the image to 1024x1024, ignoring aspect ratio, then inversely scale the output back to the original size. However, I have a big concern that squashing the content will result in poor inference results.
2. Scale the image, preserving its aspect ratio so its minimum dimension is 1024, then run the model multiple times on a sliding 1024x1024 window and aggregate the results. My main concern here is the complexity of de-duping the output, when each run could make different outputs based on how objects are cropped.
3. Fit the image within 1024x1024 and pad with black pixels to make a square. I'm not sure if the border will muck up the inference.
Anyway, this seems like it must be a well-solved problem in ML, but I'm having difficulty finding an authoritative best practice.
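Since option 3 (aspect-fit plus padding) is what many SAM-style pipelines appear to use, here is a minimal sketch of letterboxing an image into a 1024 × 1024 square with UIKit/Core Graphics while remembering the placement rect, so the output mask can later be cropped and scaled back to the original size. This is an illustration of the general technique, not a statement about what this particular Core ML package expects.

import UIKit

/// Aspect-fit `image` into a square canvas, padding the remainder with black.
/// Returns the padded image plus the rect the original content occupies,
/// which is needed to map the model's mask back onto the source photo.
func letterbox(_ image: UIImage, to side: CGFloat = 1024) -> (padded: UIImage, contentRect: CGRect) {
    let scale = min(side / image.size.width, side / image.size.height)
    let fitted = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    let origin = CGPoint(x: (side - fitted.width) / 2, y: (side - fitted.height) / 2)
    let contentRect = CGRect(origin: origin, size: fitted)

    let format = UIGraphicsImageRendererFormat()
    format.scale = 1   // work in pixels rather than points
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: side, height: side), format: format)
    let padded = renderer.image { context in
        UIColor.black.setFill()
        context.fill(CGRect(x: 0, y: 0, width: side, height: side))
        image.draw(in: contentRect)
    }
    return (padded, contentRect)
}

// Usage sketch: run the model on `padded`, crop the resulting 1024×1024 mask to `contentRect`,
// then scale that crop back up to the original photo's dimensions.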