Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

107 Posts
Post not yet marked as solved
0 Replies
596 Views
Hello everyone, I encountered some compiler errors while following a WWDC video on converting a colorization PyTorch model to Core ML. I have followed all the steps correctly, but I'm facing issues with the following lines of code provided in the video.

In the colorize() method, there is this line:

let modelInput = try ColorizerInput(inputWith: lightness.cgImage!)

It passes a CGImage as input, but the auto-generated model class only accepts an MLMultiArray or MLShapedArray, not an image; the conversion step in the video did not cover setting the input or output as ImageType.

In the extractColorChannels() method, there are a couple of lines:

let outA: [Float] = output.output_aShapedArray.scalars
let outB: [Float] = output.output_bShapedArray.scalars

However, I only have output.var183_aShapedArray available; in other words, there is no var183_bShapedArray.

I would appreciate any thoughts or suggestions you may have regarding these issues. Thank you.

Link to WWDC22 session 10017: https://developer.apple.com/videos/play/wwdc2022/10017/
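Note: image inputs/outputs have to be chosen at conversion time (via coremltools' ImageType), so a useful first step on the Swift side is to confirm what the converted model actually declares, including the real output names. A minimal sketch, where "Colorizer" stands in for whatever class Xcode generated for the converted model:

```swift
import CoreML

// Minimal sketch: print the converted model's actual input and output feature
// names, types, and shapes so the session's code can be adapted to whatever
// names the converter generated (e.g. var183_a instead of output_a).
// "Colorizer" is assumed to be the class Xcode generated for this model.
let model = try Colorizer(configuration: MLModelConfiguration()).model
let description = model.modelDescription

for (name, feature) in description.inputDescriptionsByName {
    print("input:", name, feature.type, feature.multiArrayConstraint?.shape ?? [])
}
for (name, feature) in description.outputDescriptionsByName {
    print("output:", name, feature.type, feature.multiArrayConstraint?.shape ?? [])
}
```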
Post not yet marked as solved
2 Replies
819 Views
I have tried multiple playgrounds and consistently get the same error in any playground I create. There is a tabular-data sample playground that does work, but I can't see anything it does that I'm not doing. Here is the code that fails with "Error: cannot find 'MLDataTable' in scope":

/* code start */
import CoreML
import Foundation
import TabularData

let jsonFile = Bundle.main.url(forResource: "sentiment_analysis", withExtension: "json")!
let tempTable = try DataTable
let dataTable = try MLDataTable(contentsOf: jsonFile)
print(dataTable)
/* code end */
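One likely cause, offered as an assumption rather than a confirmed diagnosis: MLDataTable is declared in the CreateML framework (not CoreML or TabularData), and CreateML is only available on macOS. A minimal sketch for a macOS playground, with the explicit import and the stray DataTable line removed:

```swift
// Minimal sketch for a macOS playground. MLDataTable comes from the CreateML
// framework, so it must be imported explicitly; "sentiment_analysis.json" is
// assumed to be in the playground's Resources folder.
import CreateML
import Foundation

let jsonFile = Bundle.main.url(forResource: "sentiment_analysis", withExtension: "json")!
let dataTable = try MLDataTable(contentsOf: jsonFile)
print(dataTable)
```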
Post not yet marked as solved
0 Replies
344 Views
I downloaded the model from https://developer.apple.com/documentation/coreml/model_integration_samples/detecting_human_body_poses_in_an_image. I want to check the output confidence score and the x, y coordinates. Please tell me what I should do.
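If per-joint confidence and normalized x/y positions are all that's needed, one option is Vision's built-in body-pose request, which exposes both directly without decoding the sample model's raw outputs by hand. A minimal sketch, assuming a CGImage named image is already in hand:

```swift
import Vision

// Minimal sketch: run Vision's built-in body-pose detector on a CGImage
// (the `image` constant is assumed to exist) and print each joint's
// normalized x/y location and confidence.
let request = VNDetectHumanBodyPoseRequest()
let handler = VNImageRequestHandler(cgImage: image, options: [:])
try handler.perform([request])

if let observation = request.results?.first {
    let joints = try observation.recognizedPoints(.all)
    for (name, point) in joints {
        print(name.rawValue, point.location.x, point.location.y, point.confidence)
    }
}
```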
Post marked as solved
1 Reply
784 Views
In the video here, the speaker refers to MPSGraphTool, which is supposed to convert from CoreML and other formats to the new MPSGraphPackage format. Searching for MPSGraphTool on Google returns only that video, and there is no mention of it on the forums here or elsewhere. When can we expect the tool to be released? How can we find out more information about it? My use case is that the ANECompilerService that runs on the Mac / iOS devices to compile CoreML Models / Programs is extremely slow and unreliable for large models. It often crashes entirely, sitting at 100% CPU usage forever and never completing the task at hand, meaning the user is stuck in a loading state. This also applies in Xcode when running a performance test. I would really like to compile the graph once and just run it on device directly.
Post not yet marked as solved
0 Replies
449 Views
I am trying to convert a model I found on TensorFlow Hub to Core ML so I can use it in an iOS app I'm developing. Converting the model so far has been quite simple, except that I get a NotImplementedError when specifying ImageType as the output. This is the code I used:

model = tf.keras.Sequential([
    tf.keras.layers.InputLayer(input_shape=(256, 256, 3)),
    tf_hub.KerasLayer(
        "https://tfhub.dev/rishit-dagli/mirnet-tfjs/1"
    )
])
model.build([1, 256, 256, 3])  # Batch input shape.

mlmodel = ct.convert(model,
                     convert_to="mlprogram",
                     inputs=[ct.ImageType()],
                     outputs=[ct.ImageType()])

If only the inputs are specified as ImageType, no error occurs, but when I include a specification for the outputs as ImageType, I get this error:

NotImplementedError: Image output 'Identity' has symbolic dimensions in its shape

FYI: I'm using TensorFlow 2.12 and coremltools 6.3. Is there any way around this? Or am I doing this wrong? I'm quite new to machine learning and Core ML, so any helpful input is much appreciated. Thanks in advance!
Post not yet marked as solved
0 Replies
492 Views
Hi all, I just tried to integrate my ML model (TF to CoreML) into my Xcode project, but couldn't create a performance report. As far as I'm aware, you only need to drag your .mlmodel file into the Navigator. I took this model from TF Hub and converted it to CoreML, and it has images as inputs and MultiArray as outputs (don't know if that has any significance). Other than that, I haven't made any changes to the model itself. If anyone could point me in the right direction that would be very much appreciated! I've included a screenshot of the error here:
Post not yet marked as solved
0 Replies
480 Views
First of all, this Vision API is amazing; the OCR is very accurate. I've been looking into multiprocessing with the Vision API. I have about 2 million PDFs I want to OCR, and I want to run multiple threads / parallel processing to OCR each one. I tried PyObjC, but it does not work so well. Any suggestions on tackling this problem?
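Staying in Swift avoids the PyObjC friction entirely. One possible shape for this, assuming each PDF page has already been rendered to an image file, is a task group that OCRs a batch of page images concurrently:

```swift
import Foundation
import Vision

// Minimal sketch, assuming each PDF page has already been rendered to an
// image file on disk. It OCRs a batch of image URLs concurrently with a
// task group; throughput tuning (batch size, number of in-flight requests)
// is left out.
func recognizeText(at url: URL) throws -> [String] {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    let handler = VNImageRequestHandler(url: url, options: [:])
    try handler.perform([request])
    return request.results?.compactMap { $0.topCandidates(1).first?.string } ?? []
}

func recognizeAll(urls: [URL]) async -> [URL: [String]] {
    await withTaskGroup(of: (URL, [String]).self) { group in
        for url in urls {
            group.addTask { (url, (try? recognizeText(at: url)) ?? []) }
        }
        var results: [URL: [String]] = [:]
        for await (url, lines) in group {
            results[url] = lines
        }
        return results
    }
}
```

For 2 million documents, batching the URLs and capping the number of in-flight tasks (rather than adding everything at once) would matter in practice.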
Post not yet marked as solved
0 Replies
659 Views
Hello fellow developers, I am currently developing an application involving machine learning models, specifically Core ML models, and I have encountered an intriguing issue that I am hoping to get some insights on. In my current scenario, I'm planning to create a simple application with minimal UI, possibly using PyQt or similar tools, so I'm seeking a way to use the Neural Engine and GPU for Core ML model inference in Python. I discovered the predict API in coremltools, which allows for model inference, but I'm unsure whether its performance is on par with that of a properly built macOS application using Swift and the Neural Engine. Can anyone provide insights into whether there's a considerable difference in inference performance between these two methods? Is the performance of the coremltools predict API comparable to that of a full-fledged Swift macOS application leveraging the Neural Engine? Any clarification or guidance on this matter would be greatly appreciated. Thanks!
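For reference, the Swift baseline in that comparison is just a model loaded with an MLModelConfiguration whose computeUnits setting permits the Neural Engine; a minimal sketch (the model name is a placeholder), which can then be timed against coremltools' predict() on the same inputs:

```swift
import CoreML

// Minimal sketch of the Swift/macOS baseline: load a bundled, compiled model
// with all compute units (CPU, GPU, Neural Engine) enabled. "MyStyleModel"
// is a placeholder name.
let configuration = MLModelConfiguration()
configuration.computeUnits = .all  // .cpuAndNeuralEngine is also available on newer OSes

let modelURL = Bundle.main.url(forResource: "MyStyleModel", withExtension: "mlmodelc")!
let model = try MLModel(contentsOf: modelURL, configuration: configuration)
// Time model.prediction(from:) here and compare against coremltools' predict().
```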
Post not yet marked as solved
0 Replies
331 Views
I have a dataset with 3 columns: "item_id", "user_id", and "rating". I created a Core ML MLRecommender model from this dataset. I want to use this model to get the top 10 predictions for a new user (not in the original dataset) who has rated a subset of the items in the dataset. I don't see any API in the Apple docs to do this; both of the recommendations APIs only seem to accept an existing user ID and return recommendations for that user. The WWDC tutorial talks about a prediction API to achieve this, but I don't see it in the Apple API documentation, and the code below from the WWDC tutorial cannot be used as-is, since the tutorial does not explain how to create the HikingRouteRecommenderInput class it passes into the prediction API.

let hikes: [String: Double] = ["Granite Peak": 5, "Wildflower Meadows": 4]
let input = HikingRouteRecommenderInput(items: hikes, k: 5)
// Get results as sequence of recommended items
let results = try model.prediction(input: input)

Any pointers on how to get predictions for a new user would be greatly appreciated.
Post not yet marked as solved
1 Reply
517 Views
I am trying to convert my TensorFlow 2.0 model to a Core ML model so I can deploy it to a mobile app. However, I continually get the error:

ValueError: Converter was called with source="tensorflow", but missing tensorflow package

I am working in a virtual environment with Python 3.7, TensorFlow 2.11, and coremltools 5.3.1. I had saved the TensorFlow model using tensorflow.saved_model.save and was attempting to convert it with the following:

import coremltools as ct

image_input = ct.ImageType(shape=(1, 250, 250, 3,), bias=[-1,-1,-1], scale=1/255)
classifier_config = ct.ClassifierConfig(['Billy','Not_Billy'])
core_model = ct.convert(
    <path_to_saved_model>,
    convert_to='mlprogram',
    inputs=[image_input],
    classifier_config=classifier_config,
    source='tensorflow'
)

I keep receiving this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/7n/vj_bf6q122bg43h_xm957hp80000gn/T/ipykernel_11024/1565729572.py in
      6     inputs=[image_input],
      7     classifier_config=classifier_config,
----> 8     source='tensorflow'
      9 )

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    466     _validate_conversion_arguments(model, exact_source, inputs, outputs_as_tensor_or_image_types,
    467                                    classifier_config, compute_precision,
--> 468                                    exact_target, minimum_deployment_target)
    469
    470     if pass_pipeline is None:

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in _validate_conversion_arguments(model, exact_source, inputs, outputs, classifier_config, compute_precision, convert_to, minimum_deployment_target)
    722     if exact_source == "tensorflow" and not _HAS_TF_1:
    723         raise ValueError(
--> 724             'Converter was called with source="tensorflow", but missing ' "tensorflow package"
    725         )
    726

ValueError: Converter was called with source="tensorflow", but missing tensorflow package
Post not yet marked as solved
0 Replies
399 Views
Hello. I am manually constructing some models with the CoreML protobuf format. When the model has flexible input shapes, I am seeing unexpected output shapes in some cases after running prediction(from:). The model is a single matrix multiplication, A*B (one innerProduct layer), and the dynamic dimension is the first dimension of the only input A (B is constant). What I observe is that sometimes there are additional leading ones in the output shape. Some test program output showing the shapes:

running model: dynamic_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 1, 1, 1, 4]
running model: dynamic_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: dynamic_input_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]
running model: dynamic_input_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: static_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]

I've put the model generation and test code below. Am I specifying the dynamic input/output shapes correctly when creating the .mlmodel? Is the output shape given by CoreML expected, and if so, why are there leading ones? Would appreciate any input.

Python script to generate the .mlmodel files (coremltools version is 6.3.0):

from coremltools.proto.Model_pb2 import Model
from coremltools.proto.FeatureTypes_pb2 import ArrayFeatureType
from coremltools.proto.NeuralNetwork_pb2 import EXACT_ARRAY_MAPPING

def build_model(with_dynamic_input_shape: bool, with_dynamic_output_shape: bool):
    model = Model()
    model.specificationVersion = 4

    input = model.description.input.add()
    input.name = "A"
    input.type.multiArrayType.shape[:] = [1, 2]
    input.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_input_shape:
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 2
        range.upperBound = 2

    output = model.description.output.add()
    output.name = "Y"
    output.type.multiArrayType.shape[:] = [1, 4]
    output.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32
    if with_dynamic_output_shape:
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 4
        range.upperBound = 4

    layer = model.neuralNetwork.layers.add()
    layer.name = "MatMul"
    layer.input[:] = ["A"]
    layer.output[:] = ["Y"]
    layer.innerProduct.inputChannels = 2
    layer.innerProduct.outputChannels = 4
    layer.innerProduct.weights.floatValue[:] = [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]

    model.neuralNetwork.arrayInputShapeMapping = EXACT_ARRAY_MAPPING
    return model

if __name__ == "__main__":
    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=True)
    with open("dynamic_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=False)
    with open("dynamic_input_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=False, with_dynamic_output_shape=False)
    with open("static_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

Swift program to run the models and print the output shape:

import Foundation
import CoreML

func makeFloatShapedArray(shape: [Int]) -> MLShapedArray<Float> {
    let size = shape.reduce(1, *)
    let values = (0 ..< size).map { Float($0) }
    return MLShapedArray(scalars: values, shape: shape)
}

func runModel(model_path: URL, m: Int) throws {
    print("running model: \(model_path.lastPathComponent)")
    let compiled_model_path = try MLModel.compileModel(at: model_path)
    let model = try MLModel(contentsOf: compiled_model_path)
    let a = MLMultiArray(makeFloatShapedArray(shape: [m, 2]))
    print("A shape: \(a.shape)")
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["A": a])
    let outputs = try model.prediction(from: inputs)
    let y = outputs.featureValue(for: "Y")!.multiArrayValue!
    print("Y shape: \(y.shape)")
}

func modelUrl(_ model_file: String) -> URL {
    return URL(filePath: "/path/to/models/\(model_file)")
}

try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("static_shape.mlmodel"), m: 1)
Post not yet marked as solved
1 Reply
587 Views
I need to convert a super-resolution model to an mlmodel, but the input shape of the model is designed in the format [batch_size, height, width, 3]. I convert it with the following code:

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
tf.saved_model.save(model, "esrgan_saved_model")

input_type = ct.ImageType(shape=(1, 192, 192, 3), color_layout=ct.colorlayout.RGB)
output_type = ct.ImageType(color_layout=ct.colorlayout.RGB)

mlmodel = ct.convert(
    './esrgan_saved_model',
    inputs=[input_type],
    outputs=[output_type],
    source="tensorflow")
mlmodel.save('esrgan.mlmodel')

I got this error:

Shape of the RGB/BGR image output, must be of kind (1, 3, H, W), i.e., first two dimensions must be (1, 3)

ImageType only seems to support input and output in the [batch_size, 3, height, width] layout. What should I do to convert a model in the [batch_size, height, width, 3] format to an mlmodel?
Post not yet marked as solved
0 Replies
327 Views
Hi, does anyone have a good link for ML inference times on the M2 chip? The Coral Edge TPU was posted in a good format: the model, how much data was used in training, the model size, and its accuracy. It's just hard to find that info, or I'm looking in the wrong place. It would be good to have something like a cheat sheet of public optimised models, their use cases, and their model parameters, to find a perfect fit for my problem. Thanks for your time.
Post not yet marked as solved
3 Replies
696 Views
Hey all, we are currently training a Hand Pose model with the current release of CreateML, and during the feature extraction phase we get the following error:

Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]

We have tried to search for this online and mitigate the issue, but we are getting nowhere. Has anyone else experienced this issue?
Post marked as solved
3 Replies
1.3k Views
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak. The memory consumption is increasing rapidly with each new frame.

Concerning 2): There are a lot of CVPixelBuffers leaking during prediction. Those buffers somehow have references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
Post not yet marked as solved
1 Reply
606 Views
Hi there, I'm trying to convert my Core ML model (it's actually an .mlpackage) to .mpsgraphpackage so I can test the performance of my model with the MPSGraph API. I ran the command you provide in the terminal, but it just does nothing (the command executes forever), and in Activity Monitor the terminal uses 0.0% CPU. My Xcode version is 15.0 beta 6 (15A5219j), and I'm running macOS Sonoma 14.0 Beta (23A5312d).
Post not yet marked as solved
6 Replies
1.5k Views
Hey, I'm a web developer developing a macOS app for the first time. I need a vector database where the data will be stored on the user's machine. I'm familiar with libraries like FAISS, but I'm aware that it does not have Swift bindings and, from a brief look, it appears fairly annoying to get working with a macOS app. I'm wondering if Apple has a similar library available in their SDKs? I don't need much, just something to store the vectors in a database, run a cosine-similarity search on them, and maybe attach some additional metadata to each vector embedding. If not, is bridging libraries like this a common thing to do when developing iOS/macOS apps?
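As far as I know there is no first-party vector database in the Apple SDKs. For modest collections, a brute-force in-memory cosine-similarity search in plain Swift (persisted however you like, e.g. as a Codable file or in SQLite/Core Data) is often enough. A minimal sketch, where the Embedding type and its metadata field are assumptions:

```swift
import Foundation

// Minimal sketch of a brute-force in-memory vector store with cosine
// similarity. The Embedding struct is an assumption, not an Apple API;
// persistence is left out.
struct Embedding {
    let id: String
    let vector: [Float]
    let metadata: [String: String]
}

func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "vectors must have the same length")
    var dot: Float = 0, normA: Float = 0, normB: Float = 0
    for i in 0..<a.count {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    return dot / (normA.squareRoot() * normB.squareRoot() + .leastNonzeroMagnitude)
}

func nearestNeighbors(to query: [Float], in store: [Embedding], k: Int) -> [Embedding] {
    store
        .map { (embedding: $0, score: cosineSimilarity(query, $0.vector)) }
        .sorted { $0.score > $1.score }
        .prefix(k)
        .map { $0.embedding }
}
```

If the linear scan gets slow, the dot products can be vectorized with Accelerate's vDSP before reaching for bridged libraries like FAISS.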
Post not yet marked as solved
1 Reply
407 Views
Hi, As MLModelCollection is deprecated, I have created an mlarchive from my mlpackage files, which I can upload to cloud storage and download on the device at runtime. But how do I use the mlarchive to create an instance of MLModel? If this is not possible, then please guide me on what form I can upload an mlpackage to cloud storage in and then consume it in the app at runtime. I don't want to bundle the mlpackage inside the app, as it increases the app size, which is not acceptable to us.
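I can't speak to the .mlarchive format itself, but the long-standing pattern for cloud-hosted models is to upload the raw .mlmodel or .mlpackage, download it at runtime, compile it on device with MLModel.compileModel(at:), and cache the compiled result so it isn't recompiled on every launch. A minimal sketch:

```swift
import CoreML
import Foundation

// Minimal sketch, assuming the raw .mlmodel/.mlpackage has already been
// downloaded to `downloadedURL`. Compiling produces a temporary .mlmodelc
// directory, which is moved into Application Support so it can be reused
// on later launches instead of being recompiled every time.
func loadDownloadedModel(at downloadedURL: URL) throws -> MLModel {
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    let cacheDir = try FileManager.default.url(
        for: .applicationSupportDirectory,
        in: .userDomainMask,
        appropriateFor: nil,
        create: true
    )
    let permanentURL = cacheDir.appendingPathComponent(compiledURL.lastPathComponent)

    if FileManager.default.fileExists(atPath: permanentURL.path) {
        try FileManager.default.removeItem(at: permanentURL)
    }
    try FileManager.default.moveItem(at: compiledURL, to: permanentURL)

    return try MLModel(contentsOf: permanentURL)
}
```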
Post not yet marked as solved
0 Replies
462 Views
I'm trying to get the WWDC2020 Sports Analysis code running. It's the project named BuildingAFeatureRichAppForSportsAnalysis. It seems that the boardDetectionRequest now fails when running the code in the simulator. The main error I get is:

Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x6000024991d0 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline}}}

The problem is that I can't tell why the VNImageRequestHandler is failing when trying to detect the board. It doesn't say it got a bad image, and it doesn't say it didn't detect a board. I'm running the code against the sample movie provided, and I believe this used to work. The other error I see is at initialization in Common.warmUpVisionPipeline, when trying to load the model:

2023-09-07 12:58:59.239614-0500 ActionAndVision[3499:34083] [coreml] Failed to get the home directory when checking model path.

From what I can tell in the debugger, though, the board detection model did load. Thanks.
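One way to narrow down where it breaks is to run the board-detection model through a bare VNCoreMLRequest on a single frame, outside the sample's pipeline, and inspect the underlying error; a minimal sketch, where "BoardDetector" and cgImage stand in for the sample's generated model class and a test frame:

```swift
import CoreML
import Vision

// Minimal sketch to isolate the failure: load the detection model directly,
// run it through a bare VNCoreMLRequest on a single CGImage, and print
// either the results or the underlying error. "BoardDetector" and `cgImage`
// are placeholders for the sample's model class and a test frame.
func testBoardModel(on cgImage: CGImage) {
    do {
        let coreMLModel = try BoardDetector(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel)
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try handler.perform([request])
        print("results:", request.results ?? [])
    } catch {
        print("board detection failed:", error)
    }
}
```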