Hey all 👋🏼
We're currently working on a video processing project using the Vision framework (face, body, and hand pose detection), and we've encountered a couple of errors that I need help with. We are on Xcode 16 beta 3, testing on an iPhone 14 Pro running the iOS 18 beta.
The error messages are as follows:
[LOG_ERROR] /Library/Caches/com.apple.xbs/Sources/MediaAnalysis/VideoProcessing/VCPHumanPoseImageRequest.mm[85]: code 18,446,744,073,709,551,598
encountered an unexpected condition: *** -[__NSArrayM insertObject:atIndex:]: object cannot be nil
What we've tried:
Debugging: I’ve tried stepping through the code, but the errors occur before I can gather any meaningful insights.
Searching Documentation: Looked through Apple’s developer documentation and forums but couldn’t find anything related to these specific error codes.
Nil Check: Added checks to ensure objects are not nil before inserting them into arrays, but the error persists.
Here are my questions:
Has anyone encountered similar errors with the Vision framework, specifically related to VCPHumanPoseImageRequest and NSArray operations?
Is there any known issue or bug in the version of the framework I might be using? Could it also be related to the beta?
Are there any additional debug steps or logging mechanisms I can implement to narrow down the cause?
Any suggestions on how to handle nil objects more effectively in this context?
I would greatly appreciate any insights or suggestions you might have. Thank you in advance for your assistance!
Thanks all!
Core ML
Integrate machine learning models into your app using Core ML.
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to Core ML using coremltools version 6.2.
The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU) that demonstrates the outputs of the PyTorch model (on the left) and the CoreML model (on the right).
Could you please confirm if this drop in accuracy is expected and suggest any possible solutions to address this issue? Please note that all preprocessing and post-processing techniques remain consistent between the models.
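For reference, the conversion call is roughly of this form (the stand-in model, input shape, and deployment target below are placeholders, not my exact settings); converting once with FLOAT32 compute precision is one way to check whether float16 rounding explains the drop:

import coremltools as ct
import torch
import torch.nn as nn

# Stand-in for the real ViT pose model (which I can't share here).
model = nn.Sequential(nn.Conv2d(3, 17, kernel_size=1))

# Placeholder input resolution for a 17-joint pose model.
example_input = torch.rand(1, 3, 256, 192)
traced_model = torch.jit.trace(model.eval(), example_input)

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=example_input.shape)],
    convert_to="mlprogram",
    # FLOAT32 instead of the default FLOAT16 helps check whether
    # half-precision rounding is the source of the accuracy drop.
    compute_precision=ct.precision.FLOAT32,
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("pose_model.mlpackage")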
P.S. While converting, I also got the following warning:
TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
P.P.S. When we initialize the CoreML model on iOS 17.0, we get this error:
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
I have a couple of models that I want to migrate to .mlpackage but cannot find the resources of the session:
https://developer.apple.com/videos/play/wwdc2024/10159/
At 21:10 the video talks about modifications and optimizations, but I cannot even see the dependencies of the demo.
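For anyone else in the same situation, my current understanding is that an .mlpackage is simply what you get when you convert to an ML program, so when the original source model is still available the migration looks roughly like this (the model and names below are placeholders, not my actual models):

import coremltools as ct
import torch
import torchvision

# Placeholder source model; in practice this is whatever the existing
# .mlmodel was originally converted from.
torch_model = torchvision.models.mobilenet_v2(weights=None).eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(torch_model, example)

# convert_to="mlprogram" produces an ML Program, which is saved as an
# .mlpackage rather than a legacy .mlmodel.
mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="input", shape=example.shape)],
)
mlmodel.save("MyModel.mlpackage")

What I'm still missing is the session's own demo code and its dependencies.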
Thanks
https://developer.apple.com/videos/play/wwdc2024/10159/
This video references demo_utils, but I did not see any source code attached to the video. Does anyone have access to it?
"On the latest iOS 18 beta 2, the OCR API,the Translate App and Live Text performs very poorly in recognizing Japanese."
Hi, the following model does not run on the ANE. Inspecting with deCoreML, I see the error ane: Failed to retrieved zero_point.
import numpy as np
import coremltools as ct
from coremltools.converters.mil import Builder as mb
import coremltools.converters.mil as mil

B, CIN, COUT = 512, 1024, 1024 * 4

@mb.program(
    input_specs=[
        mb.TensorSpec((B, CIN), mil.input_types.types.fp16),
    ],
    opset_version=mil.builder.AvailableTarget.iOS18,
)
def prog_manual_dequant(
    x,
):
    qw = np.random.randint(0, 2 ** 4, size=(COUT, CIN), dtype=np.int8).astype(mil.mil.types.np_uint4_dtype)
    scale = np.random.randn(COUT, 1).astype(np.float16)
    offset = np.random.randn(COUT, 1).astype(np.float16)
    # offset = np.random.randint(0, 2 ** 4, size=(COUT, 1), dtype=np.uint8).astype(mil.mil.types.np_uint4_dtype)
    dqw = mb.constexpr_blockwise_shift_scale(data=qw, scale=scale, offset=offset)
    return mb.linear(x=x, weight=dqw)

cml_qmodel = ct.convert(
    prog_manual_dequant,
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    compute_precision=ct.precision.FLOAT16,
    minimum_deployment_target=ct.target.iOS18,
)
Whereas if I use an offset with the same dtype as the weights (uint4 in this case), it does run on the ANE.
Tested on coremltools 8.0b1, on macOS 15.0 beta 2/Xcode 15 beta 2, and macOS 15.0 beta 3/Xcode 15 beta 3.
The Keras Embedding layer cannot be computed on Metal because of the missing op StatelessRandomGetKeyCounter, as shown in this error message:
tensorflow.python.framework.errors_impl.InvalidArgumentError: Could not satisfy device specification '/job:localhost/replica:0/task:0/device:GPU:0'. enable_soft_placement=0. Supported device types [CPU]. All available devices [/job:localhost/replica:0/task:0/device:GPU:0, /job:localhost/replica:0/task:0/device:CPU:0]. [Op:StatelessRandomGetKeyCounter]
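For context, a minimal sketch of the kind of code that hits this for us (the model here is arbitrary, not our real one):

import numpy as np
import tensorflow as tf

# A tiny model whose Embedding layer triggers the missing
# StatelessRandomGetKeyCounter op when placed on the Metal GPU.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=1000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.randint(0, 1000, size=(32, 10))
y = np.random.rand(32, 1).astype(np.float32)
model.fit(x, y, epochs=1, verbose=0)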
A workaround is to enable soft placement, but this is obviously slower:
tf.config.set_soft_device_placement(True)
Reporting it here as recommended by the TensorFlow Plugin Metal team.
I was trying the latest coremltools-8.0b1 beta on macOS 15 Beta with the intent to try using the new stateful models api in CoreML.
But the conversion would always fail with the error:
/AppleInternal/Library/BuildRoots/<snip>/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:162: failed assertion `Error: the minimum deployment target for macOS is 14.0.0'
Here's a minimal repro, which works fine with both the stable version of coremltools (7.2) and the beta version (8.0b1) on macOS Sonoma 14.5, but fails with both versions on macOS 15.0 beta and Xcode 16.0 beta, which suggests this most likely isn't an issue with coremltools but with the native compilation toolchain.
from collections import OrderedDict

import coremltools as ct
import numpy as np
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head)
        self.ln_1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            OrderedDict(
                [
                    ("c_fc", nn.Linear(d_model, d_model * 4)),
                    ("gelu", nn.GELU()),
                    ("c_proj", nn.Linear(d_model * 4, d_model)),
                ]
            )
        )
        self.ln_2 = nn.LayerNorm(d_model)
        self.attn_mask = attn_mask

    def attention(self, x: torch.Tensor):
        self.attn_mask = (
            self.attn_mask.to(dtype=x.dtype, device=x.device)
            if self.attn_mask is not None
            else None
        )
        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]

    def forward(self, x: torch.Tensor):
        x = x + self.attention(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x

class Transformer(nn.Module):
    def __init__(
        self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None
    ):
        super().__init__()
        self.width = width
        self.layers = layers
        self.resblocks = nn.Sequential(
            *[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]
        )

    def forward(self, x: torch.Tensor):
        return self.resblocks(x)

transformer = Transformer(width=512, layers=12, heads=8)
emb_tokens = torch.rand((1, 512))

ct_model = ct.convert(
    torch.jit.trace(transformer.eval(), emb_tokens),
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.macOS14,
    inputs=[ct.TensorType(name="embIn", shape=[1, 512])],
    outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
)
I have several CoreML models that I've set up to run in sequence where one of the outputs from each model is passed as one of the inputs to the next.
For the most part, there is very little overhead between each sub-model "chunk":
However, a couple of the models (e.g., the first two above) spend a noticeable amount of time in "Prepare Neural Engine Request". From Instruments, it seems like this time is spent doing some sort of model loading.
Given that I'm calling these models in sequence and in a fixed order, is there some way to reduce or amortize this cost? Thanks!
The WWDC session "Deploy machine learning and AI models on-device with Core ML" says the performance report can show which unit each op runs on and why an op cannot run on the Neural Engine.
I tested my model and the report shows a gray checkmark at the Neural Engine, indicating it can run on the Neural Engine. However, it's not executing on the Neural Engine but on the CPU. Why is this happening?
I want to try a Core ML model that accepts image input at any resolution.
So I wrote the model following the Core ML Tools "Set the Range for Each Dimension" sample code, modified as below:
# Trace the model with random input.
example_input = torch.rand(1, 3, 50, 50)
traced_model = torch.jit.trace(model.eval(), example_input)

# Set the input_shape to use RangeDim for each dimension.
input_shape = ct.Shape(shape=(1,
                              3,
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45),
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45)))

scale = 1 / (0.226 * 255.0)
bias = [-0.485 / (0.229), -0.456 / (0.224), -0.406 / (0.225)]

# Convert the model with input_shape.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
                     outputs=[ct.TensorType(name="output")],
                     convert_to="mlprogram",
                     )

# Save the Core ML model.
mlmodel.save("image_resize_model.mlpackage")
It converts OK, but when I run predict() with an image I get the error below:
You will not be able to run predict() on this Core ML model. Underlying exception message was: {
NSLocalizedDescription = "Failed to build the model execution plan using a model architecture file '/private/var/folders/8z/vtz02xrj781dxvz1v750skz40000gp/T/model-small.mlmodelc/model.mil' with error code: -7.";
}
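For completeness, the prediction is invoked roughly like this (the file name and resolution are just examples):

from PIL import Image

# Any resolution within the RangeDim bounds should be accepted as input.
img = Image.open("test.jpg").resize((640, 480))
result = mlmodel.predict({"input": img})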
Where did I go wrong?
Hi,
I want to create a real-time sports analytics app that takes camera input and records basketball stats. I want to use pose estimation and object classification to record things such as dribbles, when the ball leaves a player's hands, etc.
Is it possible to have a model in Core ML that performs pose estimation on people but also does simple object detection on other classes (e.g., ball, hoop)?
Thanks
I get an error when trying to generate a Core ML performance report; the message says:
The data couldn't be written because it isn't in the correct format.
Here is the code to replicate the issue:
import numpy as np
import coremltools as ct
from coremltools.converters.mil import Builder as mb
import coremltools.converters.mil as mil

w = np.random.normal(size=(256, 128, 1))
wemb = np.random.normal(size=(1, 32000, 128))  # .astype(np.float16)
rope_emb = np.random.normal(size=(1, 2048, 128))
shapes = [(1, seqlen) for seqlen in (32, 64)]
enum_shape = mil.input_types.EnumeratedShapes(shapes=shapes)
fixed_shape = (1, 128)
max_length = 2048
dtype = np.float32

@mb.program(
    input_specs=[
        # mb.TensorSpec(enum_shape.symbolic_shape, dtype=mil.input_types.types.int32),
        mb.TensorSpec(enum_shape.symbolic_shape, dtype=mil.input_types.types.int32),
    ],
    opset_version=mil.builder.AvailableTarget.iOS17,
)
def flex_like(input_ids):
    indices = mb.fill_like(ref_tensor=input_ids, value=np.array(1, dtype=np.int32))
    causal_mask = np.expand_dims(
        np.triu(np.full((max_length, max_length), -np.inf, dtype=dtype), 1),
        axis=0,
    )
    mask = mb.gather(
        x=causal_mask,
        indices=indices,
        axis=2,
        batch_dims=1,
        name="mask_gather_0",
    )
    # mask = mb.gather(
    #     x=mask, indices=indices, axis=1, batch_dims=1, name="mask_gather_1"
    # )
    rope = mb.gather(x=rope_emb.astype(dtype), indices=indices, axis=1, batch_dims=1, name="rope")
    hidden_states = mb.gather(x=wemb.astype(dtype), indices=input_ids, axis=1, batch_dims=1, name="embedding")
    return (
        hidden_states,
        mask,
        rope,
    )

cml_flex_like = ct.convert(
    flex_like,
    compute_units=ct.ComputeUnit.ALL,
    compute_precision=ct.precision.FLOAT32,
    minimum_deployment_target=ct.target.iOS17,
    inputs=[
        ct.TensorType(name="input_ids", shape=enum_shape),
    ],
)
cml_flex_like.save("flex_like_32")
If I remove the hidden states from the return it does work, and it also works if I keep the hidden states but remove both mask and rope; i.e., the report is generated for either of these returns:
return (
    # hidden_states,
    mask,
    rope,
)

and

return (
    hidden_states,
    # mask,
    # rope,
)
It also works if I use a static shape instead of an EnumeratedShape
I'm using macOS 15.0 and Xcode 16.0
Edit 1:
Forgot to mention that although the performance report fails, the model is still able to make predictions
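For example, a quick check like this still returns outputs (using one of the enumerated sequence lengths):

import numpy as np

out = cml_flex_like.predict({"input_ids": np.zeros((1, 32), dtype=np.int32)})
print({name: value.shape for name, value in out.items()})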
Hi there,
I am trying to create a Message Filter app that uses a trained text classification model to predict scam texts (scam texting is common in my country and constantly evolving).
However, when I try to use the MLModel in the MessageFilterExtension class, I'm getting
initialization of text classifier model with model data failed
Here's how I initialize my MLModel that is created using Create ML.
do {
    let model = try MyModel(configuration: .init())
    let output = try model.prediction(text: text)
    guard !output.label.isEmpty else {
        return nil
    }
    return MessagePrediction(rawValue: output.label)
} catch {
    return nil
}
Is it impossible to use CoreML in Message Filter extensions?
Thank you
I created a model that classifies certain objects using YOLOv8. I noticed that the model is not working properly in my application. While the model works fine in Xcode preview, in the application it either returns the same result with 99% accuracy for each classification or does not provide any result.
In Preview it looks like this:
Predictions:
extension CameraVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        guard let data = photo.fileDataRepresentation() else {
            return
        }
        guard let image = UIImage(data: data) else {
            return
        }
        guard let cgImage = image.cgImage else {
            fatalError("Unable to create CIImage")
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: CGImagePropertyOrientation(image.imageOrientation))
        DispatchQueue.global(qos: .userInitiated).async {
            do {
                try handler.perform([self.viewModel.detectionRequest])
            } catch {
                fatalError("Failed to perform detection: \(error)")
            }
        }
    }
}

lazy var detectionRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: bestv720().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.processDetections(for: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()
This is where I print the recognized objects:
func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else {
            return
        }
        var label = ""
        var all_results = []
        var all_confidence = []
        var true_results = []
        var true_confidence = []
        for result in results {
            for i in 0...results.count {
                all_results.append(result.labels[i].identifier)
                all_confidence.append(result.labels[i].confidence)
                for confidence in all_confidence {
                    if confidence as! Float > 0.7 {
                        true_results.append(result.labels[i].identifier)
                        true_confidence.append(confidence)
                    }
                }
            }
            label = result.labels[0].identifier
        }
        print("True Results ", true_results)
        print("True Confidence ", true_confidence)
        self.output?.updateView(label: label)
    }
}
I converted the model like this:
from ultralytics import YOLO
model = YOLO(model_path)
model.export(format='coreml', nms=True, imgsz=[720,1280])
I am testing the new scaled dot product attention Core ML op on macOS 15 beta 1. Based on the session video, I was expecting to see a speedup when running on the GPU; however, I see roughly equivalent performance to the same model on macOS 14.
I ran tests with two models:
one that simply repeats y = sdpa(y, k, v) 50 times (a rough sketch of this toy model is below)
gpt2 124M converted from nanoGPT (the only change is not returning loss from the forward method)
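For reference, a minimal sketch of that first toy model (the head count and dimensions here are placeholders, not the exact ones I used):

import torch
import torch.nn as nn
import torch.nn.functional as F

class RepeatedSDPA(nn.Module):
    def forward(self, q, k, v):
        y = q
        for _ in range(50):
            # With an iOS 18 / macOS 15 deployment target this should map to
            # the new scaled_dot_product_attention op in the converted model.
            y = F.scaled_dot_product_attention(y, k, v)
        return y

# Placeholder shapes: (batch, heads, sequence, head_dim).
q = torch.rand(1, 8, 256, 64)
k = torch.rand(1, 8, 256, 64)
v = torch.rand(1, 8, 256, 64)
traced = torch.jit.trace(RepeatedSDPA().eval(), (q, k, v))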
I converted both models using coremltools 8.0b1 with minimum deployment targets of macOS 14 and also macOS 15. In Xcode, I can see that the new op was used for the macOS 15 target. Running on macOS 15 both target models take the same time, and that time matches the runtime on macOS 14.
Should I be seeing performance improvements?
Hi all,
I'm trying to build scam detection in a Message Filter extension powered by Core ML. I find the ML predictions reliable, and a solution for text frauds and scams is sorely needed.
I was able to create a trained MLModel and deploy it in the app. It works in my container app, but when I try to use and initialise the model in the Message Filter extension, I get an error:
initialization of text classifier model with model data failed
I have tried putting the model in the container app, in the extension, and even in a shared framework used by both, but to no avail. Every time I invoke the code to initialise my model from the extension, I am met with the same error.
Here's my code for initializing the model
do {
    let model = try Ace_v24_6(configuration: .init())
    let output = try model.prediction(text: text)
    guard !output.label.isEmpty else {
        return nil
    }
    return MessagePrediction(rawValue: output.label)
} catch {
    return nil
}
My question is: Is it impossible to use CoreML in MessageFilters?
Cheers
For some reason, YDF does not work with the ARM processor; there is an issue with a mutex and destruction.
for (int i = 0; i < 1000; i++) {
    double st_tmp = CFAbsoluteTimeGetCurrent();
    retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
    double et_tmp = CFAbsoluteTimeGetCurrent();
    NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
}
When I run a CoreML model using the above code, I notice that the runtime gradually decreases at the beginning.
output:
[enhance once] 14.965057 ms
[enhance once] 12.727022 ms
[enhance once] 12.818098 ms
[enhance once] 11.829972 ms
[enhance once] 11.461020 ms
[enhance once] 10.949016 ms
[enhance once] 10.712981 ms
[enhance once] 10.367990 ms
[enhance once] 10.077000 ms
[enhance once] 9.699941 ms
[enhance once] 9.370089 ms
[enhance once] 8.634090 ms
[enhance once] 7.659078 ms
[enhance once] 7.061005 ms
[enhance once] 6.729007 ms
[enhance once] 6.603003 ms
[enhance once] 6.427050 ms
[enhance once] 6.376028 ms
[enhance once] 6.509066 ms
[enhance once] 6.452084 ms
[enhance once] 6.549001 ms
[enhance once] 6.616950 ms
[enhance once] 6.471038 ms
[enhance once] 6.462932 ms
[enhance once] 6.443977 ms
[enhance once] 6.683946 ms
[enhance once] 6.538987 ms
[enhance once] 6.628990 ms
...
In most deep learning inference frameworks, there is usually a warmup process, but typically, only the first inference is slower. Why does CoreML have a decreasing runtime at the beginning? Is there a way to make only the first inference time longer, while keeping the rest consistent?
I use the CoreML model in the (void)display_pixels:(IJKOverlay *)overlay function.
I have a model that uses ‘flatten’, and when I converted it to a Core ML model and profiled it in Xcode with an iPhone XR, I noticed that ‘flatten’ was automatically converted to ‘reshape’. However, the NPU does not support ‘reshape’.
However, when I took the ResNet50 model from Apple's model gallery and profiled it in Xcode with the same iPhone XR, I can see the ‘flatten’ operator, and it runs on the NPU.
On the other hand, when I used the following code to convert ResNet50 from PyTorch and ran it through the Xcode performance report, the ‘flatten’ operation was converted to ‘reshape’, which then ran on the CPU.
So how can I keep the ‘flatten’ operator when converting to a Core ML model?
coremltools 7.1
iPhone XR
iOS 17.5.1
from torchvision import models
import coremltools as ct
import numpy as np
import torch
import torch.nn as nn

network_name = "my_resnet50"
torch_model = models.resnet50(pretrained=True)
torch_model.eval()

width = 224
height = 224
example_input = torch.rand(1, 3, height, width)
traced_model = torch.jit.trace(torch_model, (example_input))

model = ct.convert(
    traced_model,
    convert_to="neuralnetwork",
    inputs=[
        ct.TensorType(
            name="data",
            shape=example_input.shape,
            dtype=np.float32,
        )
    ],
    outputs=[
        ct.TensorType(
            name="output",
            dtype=np.float32,
        )
    ],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    minimum_deployment_target=ct.target.iOS14,
)
model.save("my_resnet.mlmodel")
ResNet50 on Resnet50.mlmodel
My conversion of ResNet50