Integrate machine learning models into your app using Core ML.

Core ML Documentation

Post

Replies

Boosts

Views

Activity

Core ML converts flatten to reshape, but the NPU does not support reshape
I have a model that uses 'flatten'. When I converted it to a Core ML model and ran it in Xcode on an iPhone XR, I noticed that 'flatten' was automatically converted to 'reshape', but the NPU does not support 'reshape'. However, when I took the ResNet50 model from Apple's model gallery and profiled it in Xcode on the same iPhone XR, I could see the 'flatten' operator running on the NPU. On the other hand, when I used the following code to convert ResNet50 from PyTorch and ran it through Xcode's Performance report, the 'flatten' operation was converted to 'reshape', which then ran on the CPU. So how can I keep the 'flatten' operator when converting to a Core ML model?

Environment: coremltools 7.1, iPhone XR, iOS 17.5.1

```python
from torchvision import models
import coremltools as ct
import numpy as np
import torch
import torch.nn as nn

network_name = "my_resnet50"
torch_model = models.resnet50(pretrained=True)
torch_model.eval()

width = 224
height = 224
example_input = torch.rand(1, 3, height, width)
traced_model = torch.jit.trace(torch_model, (example_input))

model = ct.convert(
    traced_model,
    convert_to="neuralnetwork",
    inputs=[
        ct.TensorType(
            name="data",
            shape=example_input.shape,
            dtype=np.float32,
        )
    ],
    outputs=[
        ct.TensorType(
            name="output",
            dtype=np.float32,
        )
    ],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    minimum_deployment_target=ct.target.iOS14,
)
model.save("my_resnet.mlmodel")
```

Performance report: ResNet50 from Apple's Resnet50.mlmodel vs. my conversion of ResNet50.
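One variation worth trying (my own sketch, not from the post, and with no guarantee that the reshape disappears or moves back onto the Neural Engine) is converting to the newer ML Program format with a higher deployment target, since the script above produces the legacy neuralnetwork format:

```python
import coremltools as ct
import numpy as np
import torch
from torchvision import models

torch_model = models.resnet50(pretrained=True).eval()
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, (example_input,))

# Convert to an ML Program (.mlpackage) instead of the legacy neuralnetwork
# format; the MIL optimizer may place the flatten/reshape op differently.
mlprogram = ct.convert(
    traced_model,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="data", shape=example_input.shape, dtype=np.float32)],
    compute_units=ct.ComputeUnit.CPU_AND_NE,
    minimum_deployment_target=ct.target.iOS16,
)
mlprogram.save("my_resnet.mlpackage")
```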
1
0
328
Jun ’24
Any luck with DINO conversion to MPS Graph/Core ML?
The DINO v1/v2 models are particularly interesting to me because they produce embeddings for the detected objects rather than ordinary classification indices. That makes them much more useful than the CNN-based models. I would like to prepare some of the models posted on Hugging Face to run on Apple Silicon, but it seems that the default conversion with TorchScript will not work. The other default conversions I've looked at so far also don't work; conversion based on an example input doesn't capture enough of the model. I know that some have managed to convert it, as I have seen a demo with a Core ML model that seems to work, but I would like to know how to do the conversion myself. Has anyone managed to convert any of the DINOv2 models?
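For reference, here is a minimal conversion sketch (my own, not from the post) that traces a DINOv2 backbone from torch.hub and hands it to coremltools; the hub entry point and input size are assumptions, and larger variants or the full pipeline may still hit control flow that tracing cannot capture:

```python
import coremltools as ct
import torch

# Assumption: the facebookresearch/dinov2 hub entry point; the patch size is 14,
# so spatial dimensions must be multiples of 14 (224 = 16 * 14).
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    inputs=[ct.TensorType(name="image", shape=example_input.shape)],
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("dinov2_vits14.mlpackage")
```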
0
0
248
Jun ’24
PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and then converted it to Core ML using coremltools version 6.2. The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU), which shows the outputs of the PyTorch model (on the left) and the Core ML model (on the right). Could you please confirm whether this drop in accuracy is expected and suggest any possible solutions? Please note that all preprocessing and post-processing remain consistent between the two models.

P.S. While converting, I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the Core ML model on iOS 17.0, we get these errors:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
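When chasing this kind of drift, one common first step (a sketch of my own, with hypothetical file names, input/output names, and shape, and assuming the traced model returns a single tensor) is to compare the PyTorch and Core ML outputs on the same input on macOS before involving the iOS app at all:

```python
import coremltools as ct
import numpy as np
import torch

# Hypothetical names: adjust the paths, the input/output names, and the
# input shape to match the actual converted model.
torch_model = torch.jit.load("pose_model_traced.pt").eval()
mlmodel = ct.models.MLModel("pose_model.mlpackage")

x = torch.rand(1, 3, 256, 192)
with torch.no_grad():
    ref = torch_model(x).numpy()

pred = mlmodel.predict({"input": x.numpy()})
out = list(pred.values())[0]

# A large max difference here points at the conversion itself (or the control
# flow frozen by the TracerWarning) rather than iOS-side pre/post-processing.
print("max abs diff:", np.abs(ref - out).max())
```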
2
0
636
Jun ’24
CoreML Crashed in iOS18 Beta
Here is an app using the Core ML API with the ML package format. It works fine on iOS 17, but it crashes when calling [MLModel modelWithContentsOfURL:] to load the model on iOS 18. It seems an exception is raised: "Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)". Is this a bug in the iOS 18 beta, and will it be fixed in the future? The stack is as below:

Exception Codes: #0 at 0x1e9280254
Crashed Thread: 49

Application Specific Information:
*** Terminating app due to uncaught exception 'NSGenericException', reason: 'Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)'

Last Exception Backtrace:
0 CoreFoundation 0x0000000199466418 __exceptionPreprocess + 164
1 libobjc.A.dylib 0x00000001967cde88 objc_exception_throw + 76
2 CoreFoundation 0x0000000199560794 -[NSException initWithCoder:]
3 CoreML 0x00000001b4fcfa8c -[MLE5ProgramLibraryOnDeviceAOTCompilationImpl createProgramLibraryHandleWithRespecialization:error:] + 1584
4 CoreML 0x00000001b4fcf3cc -[MLE5ProgramLibrary _programLibraryHandleWithForceRespecialization:error:] + 96
5 CoreML 0x00000001b4fc23d8 __44-[MLE5ProgramLibrary prepareAndReturnError:]_block_invoke + 60
6 libdispatch.dylib 0x00000001a12e1160 _dispatch_client_callout + 20
7 libdispatch.dylib 0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
8 CoreML 0x00000001b4fc3e98 -[MLE5ProgramLibrary prepareAndReturnError:] + 220
9 CoreML 0x00000001b4fc3bc0 -[MLE5Engine initWithContainer:configuration:error:] + 220
10 CoreML 0x00000001b4fc3888 +[MLE5Engine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 344
11 CoreML 0x00000001b4faf53c +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 364
12 CoreML 0x00000001b4faedd4 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:] + 540
13 CoreML 0x00000001b4f9b900 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 424
14 CoreML 0x00000001b4faaeac +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 460
15 CoreML 0x00000001b4fb0428 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:] + 240
16 CoreML 0x00000001b4fb00c4 +[MLLoader loadModelFromAssetAtURL:configuration:error:] + 104
17 CoreML 0x00000001b5314118 -[MLModelAssetResourceFactoryOnDiskImpl modelWithConfiguration:error:] + 116
18 CoreML 0x00000001b5418cc0 __60-[MLModelAssetResourceFactory modelWithConfiguration:error:]_block_invoke + 72
19 libdispatch.dylib 0x00000001a12e1160 _dispatch_client_callout + 20
20 libdispatch.dylib 0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
21 CoreML 0x00000001b5418b94 -[MLModelAssetResourceFactory modelWithConfiguration:error:] + 276
22 CoreML 0x00000001b542919c -[MLModelAssetModelVendor modelWithConfiguration:error:] + 152
23 CoreML 0x00000001b5380ce4 -[MLModelAsset modelWithConfiguration:error:] + 112
24 CoreML 0x00000001b4fb0b3c +[MLModel modelWithContentsOfURL:configuration:error:] + 168
2
0
463
Jun ’24
TensorFlow V2 to CoreML conversion fails
I'm trying to convert a TensorFlow model that I didn't create and know approximately nothing about to Core ML so that I can use it in some functional tests. I can't tell you much about the model, but you can read about it on the blog from the team that created it: https://research.google/blog/improving-mobile-app-accessibility-with-icon-detection/

I can't convert this model to a TensorFlow Lite model because it uses a few full TensorFlow operations (which I could work around) and it exceeds the 4-tensor output limit (which I can't, AFAIK). So instead, I'm trying to convert the model to Core ML so that I can run it on-device. The issue I'm running into is that every approach fails in a different way.

If I load the model with tf.saved_model.load and pass that as the first parameter to the convert call, it says:

NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got <tensorflow.python.trackable.autotrackable.AutoTrackable object at 0x30d90c250>

If I pass model.signatures['serving_default'] as the first parameter to convert, I get:

NotImplementedError: Expected model format: [SavedModel | concrete_function | tf.keras.Model | .h5 | GraphDef], got ConcreteFunction [...a page or two of info about the function here...]

If I try to wrap it in a Keras layer using the instructions provided in the converter, it fails because a sequential model can't have multiple outputs.

If I try to use a tf.keras.layers.TFSMLayer to load the model, it fails because there are multiple tags, and there's no way to specify tags when constructing the layer. (It tells me that I need to add 'tags' to load the model, but if I do that, it tells me that tags isn't a valid parameter to the call.)

If I load the model with tf.saved_model.load and specify a single tag, then re-save it in a different location with tf.saved_model.save to generate a new model with only a single tag, then do

```python
input_layer = tf.keras.Input(shape=(768, 768, 3), dtype="int8")
layer = tf.keras.layers.TFSMLayer("./serve_model", call_endpoint='serving_default')
outputs = layer(input_layer)
model = tf.keras.Model(input_layer, outputs)
```

I get:

AttributeError: 'Functional' object has no attribute '_get_save_spec'

At one point, I also tried this:

```python
class LayerFromSavedModel(tf.keras.layers.Layer):
    def __init__(self):
        super(LayerFromSavedModel, self).__init__()
        self.vars = legacy_model.variables

    def call(self, inputs):
        return legacy_model.signatures['serving_default'](inputs)

input = tf.keras.Input(shape=(3000, 3000, 3))
model = tf.keras.Model(input, LayerFromSavedModel()(input))
```

and saw a similar failure. I've run out of ideas here. Is there simply no support whatsoever in the converter for importing a TensorFlow 2 SavedModel into Core ML, or am I missing something fundamental?
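One more avenue that may be worth a try (my own sketch, not from the post): coremltools also accepts a path to a SavedModel directory directly, which sidesteps the Keras-wrapping step entirely. The path here is an assumption, and a multi-tag or dynamically shaped model may still need the re-save step or explicit input overrides:

```python
import coremltools as ct

# Assumption: the SavedModel has been re-saved with a single tag at this path
# (as described above); if the graph has fully dynamic input shapes, explicit
# inputs=[ct.TensorType(shape=...)] overrides may also be required.
mlmodel = ct.convert(
    "./serve_model",
    source="tensorflow",
    convert_to="mlprogram",
)
mlmodel.save("icon_detector.mlpackage")
```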
1
1
337
Jun ’24
CoreML Implementation of a Python Class
We are currently working on implementing a baby cry detection model in the front end of our app but have encountered some challenges with the mel spectrogram transformation. Our mel spectrogram class, developed in Python, uses librosa for generating mel spectrograms (librosa.feature.melspectrogram and librosa.power_to_db). While we have successfully exported the model to a .mlmodel file, the results we obtain in Swift differ significantly from those generated by our Python code. Could this discrepancy be due to the use of librosa in Python, which might not be directly reproducible in Swift? Or should the transformation be inherently consistent once exported to a .mlmodel file?
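For comparison, one thing that tends to matter here (a sketch with assumed parameter values, not taken from the post) is spelling out every librosa default explicitly, since the Swift-side front end has to reproduce each of them exactly; the .mlmodel itself only sees whatever spectrogram it is handed:

```python
import librosa
import numpy as np

# Hypothetical parameter values: whichever values the real pipeline uses, each
# of these (plus librosa's centering/padding and window choice) must be matched
# exactly by the Swift implementation feeding the Core ML model.
y, sr = librosa.load("cry_sample.wav", sr=16000)
mel = librosa.feature.melspectrogram(
    y=y, sr=sr,
    n_fft=1024, hop_length=512, n_mels=64,
    window="hann", center=True, power=2.0,
)
mel_db = librosa.power_to_db(mel, ref=np.max)
print(mel_db.shape)
```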
0
0
231
Jun ’24
Confidence of Vision different from CoreML output
Hi, I have a custom object detection Core ML model and I notice something strange when using the model with the Vision framework. I have tried two different approaches to processing an image and running inference on the Core ML model. The first one uses Core ML "raw": initialising the model, preparing the input image, and calling the model's .prediction() function to get the model's output. The second one uses Vision to wrap the Core ML model in a VNCoreMLModel, creating a VNCoreMLRequest and using a VNImageRequestHandler to actually perform the inference; the result of the VNCoreMLRequest is of type VNRecognizedObjectObservation. The issue I now face is the difference in the output of the two methods. The first method gives back the raw output of the Core ML model: confidence and coordinates. The confidence is an array with size equal to the number of classes in my model (3 in my case). The second method gives back the boundingBox, confidence, and labels; here the confidence is only the confidence for the most likely class (so its size is 1). But the confidence I get from the second approach is quite different from the confidence I get from the first approach. I can use either approach in my application, but I really want to find out what is going on and understand how this difference arises. Thanks!
5
0
1.2k
Aug ’22
CoreML model using excessive RAM during prediction
I have an ML Program of size 127.2 MB; it was created using TensorFlow and then converted to Core ML. When I request a prediction, memory usage shoots up to 2-2.5 GB every time. I've tried using the optimization techniques in coremltools, but nothing seems to work; it still shoots up to the same 2-2.5 GB of RAM every time. I've attached a graph; it doesn't seem to be a leak, as the memory then goes back down.
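For reference, this is the kind of coremltools weight-compression pass the post is presumably referring to (a minimal sketch with assumed file names and settings); note that it shrinks the weights on disk and can reduce load-time footprint, but it does not necessarily bound the intermediate-tensor memory used during a prediction:

```python
import coremltools as ct
from coremltools.optimize.coreml import (
    OpPalettizerConfig,
    OptimizationConfig,
    palettize_weights,
)

# Assumption: the converted model is an ML Program at this path.
mlmodel = ct.models.MLModel("model.mlpackage")

# Cluster each weight tensor into a 6-bit lookup table (palettization).
config = OptimizationConfig(global_config=OpPalettizerConfig(nbits=6, mode="kmeans"))
compressed = palettize_weights(mlmodel, config=config)
compressed.save("model_palettized.mlpackage")
```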
3
0
541
Apr ’24
How do we use the computational power of A17 Pro Neural Engine?
Hi. The A17 Pro Neural Engine has 35 TOPS of computational power, but many third-party benchmarks and articles suggest that it offers only a little more than the A16 Bionic. Some references are Geekbench ML and the Core ML performance benchmark, 2023 edition. How do we use the maximum power of the A17 Pro Neural Engine? For example, I guess that the ANE on the A17 Pro may expose two logical devices rather than one, so we may need to instantiate two Core ML models simultaneously for this purpose. Please let me know any technical hints.
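As a baseline, the documented knobs for steering work onto the Neural Engine are FP16 compute precision and the compute-units setting at conversion and load time; this is a minimal coremltools sketch of my own (assuming a traced PyTorch model), not anything A17-specific:

```python
import coremltools as ct
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights="DEFAULT").eval()
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

mlmodel = ct.convert(
    traced,
    convert_to="mlprogram",
    compute_precision=ct.precision.FLOAT16,   # the ANE executes FP16
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # prefer the Neural Engine, fall back to CPU
    inputs=[ct.TensorType(name="image", shape=example.shape)],
    minimum_deployment_target=ct.target.iOS16,
)
mlmodel.save("mobilenet_ane.mlpackage")
```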
1
0
1.5k
Oct ’23
PyTorch convert function for op 'intimplicit' not implemented
I am trying to coremltools.converters.convert a traced PyTorch model and I got an error: PyTorch convert function for op 'intimplicit' not implemented.

I am trying to convert an RVC model from GitHub. I traced the model with torch.jit.trace, and conversion fails. So I tracked the problematic part down to the ** layer: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/modules.py#L188

```python
import torch
import coremltools as ct
from infer.lib.infer_pack.modules import **

model = **(192, 5, dilation_rate=1, n_layers=16, ***_channels=256, p_dropout=0)
model.remove_weight_norm()
model.eval()

test_x = torch.rand(1, 192, 200)
test_x_mask = torch.rand(1, 1, 200)
test_g = torch.rand(1, 256, 1)
traced_model = torch.jit.trace(model, (test_x, test_x_mask, test_g), check_trace=True)

x = ct.TensorType(name='x', shape=test_x.shape)
x_mask = ct.TensorType(name='x_mask', shape=test_x_mask.shape)
g = ct.TensorType(name='g', shape=test_g.shape)
mlmodel = ct.converters.convert(traced_model, inputs=[x, x_mask, g])
```

I got an error: RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. How could I modify **::forward so it does not generate an intimplicit operator? Thanks, David
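Besides editing the forward pass, one avenue that may help is registering a composite implementation for the missing op with coremltools. This is a sketch under the assumption that IntImplicit is effectively an implicit int cast at this call site, which would need to be verified against the trace before trusting the converted output:

```python
from coremltools.converters.mil import Builder as mb
from coremltools.converters.mil.frontend.torch.ops import _get_inputs
from coremltools.converters.mil.frontend.torch.torch_op_registry import register_torch_op

@register_torch_op
def intimplicit(context, node):
    # Assumption: the op only performs an implicit cast to int, so forwarding
    # the single input through an explicit cast is enough here.
    x = _get_inputs(context, node, expected=1)[0]
    res = mb.cast(x=x, dtype="int32", name=node.name)
    context.add(res)
```

Whether this produces a correct model depends on what IntImplicit actually does in this model, so the cast assumption should be checked before relying on the result.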
1
0
674
Feb ’24
CoreML model crashes on a real Vision Pro device when reviewed via App Store Connect
I run a MiDaS Core ML model on the device. It runs well in the Vision Pro Simulator and on a real iOS device, but it crashes on a real Vision Pro. Crash message:

/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:550: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DA14.

Attached crash logs: Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-01-07.txt, Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-00-39.txt
2
0
690
Jan ’24
Vision Pro CoreML inference 10x slower than M1 Mac/seems to run on CPU
I have a Core ML model that I run in my app Spatial Media Toolkit, which lets you convert 2D photos to Spatial. Running the model on my 13" M1 Mac gives 70 ms inference. Running the exact same code on my Vision Pro takes 700 ms. I'm working on adding video support, but Vision Pro inference feels impossible at 700 ms per frame (about 20x slower than real time for 30 fps; 1 second of video takes 20 seconds!). There's an MLModelConfiguration you can provide, and when I force CPU I get exactly the same performance. Either it's only running on the CPU, the Neural Engine is being throttled, or maybe the GPU isn't allowed to help out. Disappointing, but it also feels like a software issue. I would be curious whether anyone else has hit this or has any workarounds.
0
0
543
Feb ’24