Core ML

Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

107 results found
Post marked as unsolved
21 Views

Is CoreML model encryption backwards compatible?

I tried out the Core ML model encryption feature that comes with Xcode 12. I generated the key (https://developer.apple.com/documentation/coreml/core_ml_api/generating_a_model_encryption_key) and added the --encrypt compiler flag (https://developer.apple.com/documentation/coreml/core_ml_api/encrypting_a_model_in_your_app) as described in the docs. Everything works on iOS 14. However, on an iOS 13 simulator I get a runtime error:

MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=3 "No known class for loading model type INVALID" UserInfo={NSLocalizedDescription=No known class for loading model type INVALID}

So iOS 13 can't handle the encrypted model and reports it as invalid, which makes sense I guess. But how are we supposed to maintain backwards compatibility with iOS 13? (As expected, the error goes away if I remove the --encrypt compiler flag again.) I want to add that I did NOT use the new async API (https://developer.apple.com/documentation/coreml/mlmodel/3600218-load) to load the model, but instead the automatically generated wrapper class for my local model and its init method.
Asked
by rikner.
Last updated .
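
For the encryption question above, one approach worth sketching: encrypted models require iOS 14, so an app that still supports iOS 13 can bundle an unencrypted copy alongside the encrypted one and pick at runtime. This is only a sketch; the model names (MyModelEncrypted, MyModelPlain) are placeholders, not anything from the original post.

import CoreML

// Hedged sketch: assumes the app bundles two compiled models,
// "MyModelEncrypted.mlmodelc" (built with --encrypt, loadable on iOS 14+)
// and "MyModelPlain.mlmodelc" (no encryption, loadable on iOS 13).
func loadModel(completion: @escaping (MLModel?) -> Void) {
    if #available(iOS 14.0, *),
       let url = Bundle.main.url(forResource: "MyModelEncrypted", withExtension: "mlmodelc") {
        // Encrypted models go through the asynchronous load API so the
        // decryption key can be fetched on first use.
        MLModel.load(contentsOf: url, configuration: MLModelConfiguration()) { result in
            completion(try? result.get())
        }
    } else if let url = Bundle.main.url(forResource: "MyModelPlain", withExtension: "mlmodelc") {
        // iOS 13 fallback: synchronous load of the unencrypted model.
        completion(try? MLModel(contentsOf: url))
    } else {
        completion(nil)
    }
}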
Post marked as unsolved
27 Views

unable to retrieve MLModel from CoreML Model Deployment

I am very new to Core ML and I want to retrieve a model from Core ML Model Deployment, which was released this year at WWDC. I made an app that classifies special and rare things, and I uploaded the model.archive to the Core ML Model Deployment dashboard. I successfully deployed the model and it shows as active. The problem is that I am unable to retrieve that model. I have tried a lot; I even watched all the WWDC sessions on this and copied the code from the session, but all in vain. Here is my whole model loading and retrieving code.

My classification code, which takes an image and does all the loading from Core ML and from Core ML Model Deployment:

func updateClassifications(for image: UIImage) {
    classificationLabel.text = "Classifying..."

    var models = try? VNCoreMLModel(for: SqueezeNet().model)

    if let modelsss = models {
        extensionofhandler(ciimage: image, vnmodel: modelsss)
        return
    }

    _ = MLModelCollection.beginAccessing(identifier: "TestingResnetModel") { [self] result in
        var modelUrl: URL?
        switch result {
        case .success(let collection):
            modelUrl = collection.entries["class"]?.modelURL
        case .failure(let error):
            fatalError("sorry \(error)")
        }

        let result = loadfishcallisier(from: modelUrl)

        switch result {
        case .success(let modelesss):
            models = try? VNCoreMLModel(for: modelesss)
            extensionofhandler(ciimage: image, vnmodel: models!)
        case .failure(let error):
            fatalError("plz \(error)")
        }
    }
}

func loadfishcallisier(from modelUrl: URL?) -> Result<MLModel, Error> {
    if let modelUrl = modelUrl {
        return Result { try MLModel(contentsOf: modelUrl) }
    } else {
        return Result { try MLModel(contentsOf: modelUrl!, configuration: .init()) }
    }
}

func extensionofhandler(ciimage: UIImage, vnmodel: VNCoreMLModel) {
    let orientation = CGImagePropertyOrientation(ciimage.imageOrientation)
    guard let ciImage = CIImage(image: ciimage) else {
        fatalError("Unable to create \(CIImage.self) from \(ciimage).")
    }

    DispatchQueue.global(qos: .userInitiated).async { [self] in
        let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
        do {
            try handler.perform([coremlmodel(using: vnmodel)])
        } catch {
            fatalError("Check the error")
        }
    }
}

My Vision request code:

func coremlmodel(using: VNCoreMLModel) -> VNCoreMLRequest {
    let request = VNCoreMLRequest(model: using, completionHandler: { [weak self] request, error in
        self?.processClassifications(for: request, error: error)
    })
    request.imageCropAndScaleOption = .centerCrop
    return request
}

My classification result code:

func processClassifications(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results else {
            self.classificationLabel.text = "Unable to classify image.\n\(error!.localizedDescription)"
            return
        }
        // The `results` will always be `VNClassificationObservation`s, as specified by the Core ML model in this project.
        let classifications = results as! [VNClassificationObservation]

        if classifications.isEmpty {
            self.classificationLabel.text = "Nothing recognized."
        } else {
            // Display top classifications ranked by confidence in the UI.
            let topClassifications = classifications.prefix(2)
            let descriptions = topClassifications.map { classification in
                // Formats the classification for display; e.g. "(0.37) cliff, drop, drop-off".
                return String(format: "\t(%.2f) %@", classification.confidence, classification.identifier)
            }
            self.classificationLabel.text = "Classification:\n" + descriptions.joined(separator: "\n")
        }
    }
}

I am pretty sure something is wrong with my model loading code. Xcode throws no error, but it isn't recognising anything. If I have done anything wrong in my code, I humbly ask you to show me and solve it. Is there any tutorial for retrieving a model from Core ML Model Deployment?
Asked
Last updated .
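
A minimal loading sketch for the deployment question above, reusing the poster's identifier ("TestingResnetModel") and entry key ("class"); whether those match the deployment dashboard is an assumption. The main point is that beginAccessing is asynchronous, so classification should be driven from its completion handler rather than returning early with a bundled model.

import CoreML

// Sketch: fetch the deployed model if available, otherwise report nil so the
// caller can fall back to a model bundled in the app.
func loadDeployedModel(completion: @escaping (MLModel?) -> Void) {
    _ = MLModelCollection.beginAccessing(identifier: "TestingResnetModel") { result in
        switch result {
        case .success(let collection):
            // The entry key must match the model name used in the dashboard.
            if let url = collection.entries["class"]?.modelURL {
                completion(try? MLModel(contentsOf: url))
            } else {
                completion(nil) // entry not present in this collection yet
            }
        case .failure(let error):
            print("Model collection unavailable: \(error)") // first launch, no network, etc.
            completion(nil)
        }
    }
}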
Post marked as unsolved
52 Views

auto workout detection with custom activity classifier

I'm building a watch app that uses a custom Core ML activity classifier to detect certain activities performed by the user. I want to prompt the user to start tracking an activity whenever it is detected by the classifier, even when the app is not running (like the automatic workout detection on Apple Watch). Any idea if/how this can be done? Thanks!
Asked
Last updated .
Post marked as unsolved
11 Views

CoreML Model spec - change output type to dictionary [Double : String]

Hello everybody,

For the past week I have been struggling to run inference on a classifier I built using Google's AutoML Vision tool. At first I thought everything would go smoothly because Google allows you to export a Core ML version of the final model, so I assumed I would only need Apple's Core ML library to make it work. When I export the model, Google provides a .mlmodel file and a dict.txt file with the classification labels. For the current model I have 100 labels.

This is my Swift code to run inference on the model:

private lazy var classificationRequest: VNCoreMLRequest = {
    do {
        let classificationModel = try VNCoreMLModel(for: NewGenusModel().model)
        let request = VNCoreMLRequest(model: classificationModel, completionHandler: { [weak self] request, error in
            self?.processClassifications(for: request, error: error)
        })
        request.imageCropAndScaleOption = .scaleFit
        return request
    }
    catch {
        fatalError("Error! Can't use Model.")
    }
}()

func classifyImage(receivedImage: UIImage) {
    let orientation = CGImagePropertyOrientation(rawValue: UInt32(receivedImage.imageOrientation.rawValue))
    if let image = CIImage(image: receivedImage) {
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: image, orientation: orientation!)
            do {
                try handler.perform([self.classificationRequest])
            }
            catch {
                fatalError("Error classifying image!")
            }
        }
    }
}

The problem started when I tried to pass a UIImage to run inference on the model. The input type of the original model was MultiArray (Float32 1 x 224 x 224 x 3). Using the coremltools library I was able to convert the input type to Image (Color 224 x 224) in Python. This worked, and here is my code:

import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft

spec = coremltools.utils.load_spec("model.mlmodel")
input = spec.description.input[0]
input.type.imageType.colorSpace = ft.ImageFeatureType.RGB
input.type.imageType.height = 224
input.type.imageType.width = 224
coremltools.utils.save_spec(spec, "newModel.mlmodel")

My problem now is with the output type. I want to be able to access the confidence of the classification as well as the resulting label. Again using coremltools I was able to access the output description, and I got this:

name: "scores"
type {
  multiArrayType {
    dataType: FLOAT32
  }
}

I am trying to change it this way:

f = open("dict.txt", "r")
labels = f.read()
class_labels = labels.splitlines()
print(class_labels)
class_labels = class_labels[1:]
assert len(class_labels) == 57

for i, label in enumerate(class_labels):
    if isinstance(label, bytes):
        class_labels[i] = label.decode("utf8")

classifier_config = ct.ClassifierConfig(class_labels)

output = spec.description.output[0]
output.type = ft.DictionaryFeatureType

Unfortunately this is not working, and I can't find information online that can help me, so I don't know what to do next. Thank you for your help!
Asked
by tmsm1999.
Last updated .
Post marked as solved
44 Views

VNContourObservations are outside Region of Interest

I have set the region of interest on my VNDetectContoursRequest, but the observations I am getting back all seem to be outside of the region of interest. So my question is: if the region of interest is set, should I only get back observations in that region, or should I still get back all of the contour observations for the image?
Asked
by bbarry.
Last updated .
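
For reference, a bare-bones setup sketch of a contour request with a region of interest; the rectangle below is an arbitrary example in normalized image coordinates (origin at the lower left), not a recommendation, and it does not settle the question of which coordinate space the observations are reported in.

import Vision

// Sketch: detect contours only in the centre quarter of the image.
func detectContours(in cgImage: CGImage) throws -> VNContoursObservation? {
    let request = VNDetectContoursRequest()
    request.regionOfInterest = CGRect(x: 0.25, y: 0.25, width: 0.5, height: 0.5)

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNContoursObservation
}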
Post marked as unsolved
22 Views

CoreML - Label presented as random string

Hello everybody,

I used Google Cloud Platform to create a machine learning model for computer vision. I downloaded the Core ML model from the cloud platform website and followed the instructions in the Google tutorial for iOS model deployment. This is my code currently:

class Classification {

    private lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: AutoML().model)
            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                if let classifications = request.results as? [VNClassificationObservation] {
                    print(classifications.first ?? "No classification!")
                }
            })
            request.imageCropAndScaleOption = .scaleFit
            return request
        }
        catch {
            fatalError("Error! Can't use Model.")
        }
    }()

    func classifyImage(receivedImage: UIImage) {
        let orientation = CGImagePropertyOrientation(rawValue: UInt32(receivedImage.imageOrientation.rawValue))
        if let image = CIImage(image: receivedImage) {
            DispatchQueue.global(qos: .userInitiated).async {
                let handler = VNImageRequestHandler(ciImage: image, orientation: orientation!)
                do {
                    try handler.perform([self.classificationRequest])
                }
                catch {
                    fatalError("Error classifying image!")
                }
            }
        }
    }
}

My code executes and I receive this:

<VNClassificationObservation: 0x600002091d40> A7DBD70C-541C-4112-84A4-C6B4ED2EB7E2 requestRevision=1 confidence=0.332127 "CICAgICAwPmveRIJQWdsYWlzX2lv"

I receive a confidence value, but I don't receive a readable label string. Is there any step I am not taking? With the model there is also a dict.txt file. Is there anything I have to do with that file that I am not doing? Thank you!
Asked
by tmsm1999.
Last updated .
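
Regarding the question above: identifiers like "CICAgICAwPmveRIJQWdsYWlzX2lv" look base64-encoded, and decoding that particular string does yield the readable name "Aglais_io" in its trailing bytes. A hedged sketch of that decoding follows; the assumption that every AutoML label is packed this way is unverified, so cross-check the result against dict.txt.

import Foundation

// Sketch: base64-decode the identifier and keep the trailing run of printable
// ASCII, which for the example above is the class name "Aglais_io".
func readableLabel(from identifier: String) -> String? {
    guard let data = Data(base64Encoded: identifier) else { return nil }
    let trailing = data.reversed().prefix { $0 >= 0x20 && $0 < 0x7F }
    guard !trailing.isEmpty else { return nil }
    return String(decoding: trailing.reversed(), as: UTF8.self)
}

// readableLabel(from: "CICAgICAwPmveRIJQWdsYWlzX2lv")  // "Aglais_io"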
Post marked as unsolved
9 Views

processRequest:qos:qIndex:error:: 0x1: Program Inference overflow

The mlmodel runs fine on a Mac, but the same mlmodel run on iOS with the Vision framework produces errors. Log:

2020-09-16 10:22:58.441664+0800 benchmark[18467:15060162] Metal API Validation Enabled
faceAlign Execution time: 7.557034492492676
2020-09-16 10:22:59.794836+0800 benchmark[18467:15060162] [espresso] [Espresso::ANERuntimeEngine::__forward_segment 3] evaluate[RealTime]WithModel returned 0; code=5 err=Error Domain=com.apple.appleneuralengine Code=5 "processRequest:qos:qIndex:error:: 0x1: Program Inference overflow" UserInfo={NSLocalizedDescription=processRequest:qos:qIndex:error:: 0x1: Program Inference overflow}
2020-09-16 10:22:59.794881+0800 benchmark[18467:15060162] [espresso] [Espresso::overflow_error] /private/var/containers/Bundle/Application/568B5CEB-345B-48FC-969A-EF3ACE632A50/benchmark.app/fe1.mlmodelc/model.espresso.net:3
2020-09-16 10:22:59.816024+0800 benchmark[18467:15060162] [espresso] [Espresso::ANERuntimeEngine::__forward_segment 4] evaluate[RealTime]WithModel returned 0; code=5 err=Error Domain=com.apple.appleneuralengine Code=5 "processRequest:qos:qIndex:error:: 0x1: Program Inference overflow" UserInfo={NSLocalizedDescription=processRequest:qos:qIndex:error:: 0x1: Program Inference overflow}
2020-09-16 10:22:59.816053+0800 benchmark[18467:15060162] [espresso] [Espresso::overflow_error] /private/var/containers/Bundle/Application/568B5CEB-345B-48FC-969A-EF3ACE632A50/benchmark.app/fe1.mlmodelc/model.espresso.net:4
2020-09-16 10:22:59.851771+0800 benchmark[18467:15060162] [espresso] [Espresso::ANERuntimeEngine::__forward_segment 5] evaluate[RealTime]WithModel returned 0; code=5 err=Error Domain=com.apple.appleneuralengine Code=5 "processRequest:qos:qIndex:error:: 0x1: Program Inference overflow" UserInfo={NSLocalizedDescription=processRequest:qos:qIndex:error:: 0x1: Program Inference overflow}
2020-09-16 10:22:59.851810+0800 benchmark[18467:15060162] [espresso] [Espresso::overflow_error] /private/var/containers/Bundle/Application/568B5CEB-345B-48FC-969A-EF3ACE632A50/benchmark.app/fe1.mlmodelc/model.espresso.net:5
fd:1813843988, fv:70, detect:1813844058, landmark:82, fea:943
2020-09-16 10:23:00.296230+0800 benchmark[18467:15060223] [si_destination_compare] send failed: Invalid argument
2020-09-16 10:23:00.296519+0800 benchmark[18467:15060223] [si_destination_compare] send failed: Undefined error: 0
Asked
by AAACoreML.
Last updated .
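
The overflow above is reported by the Apple Neural Engine backend (com.apple.appleneuralengine), which evaluates in reduced precision. One commonly tried workaround, not a fix for the model itself, is to restrict Core ML to CPU and GPU; fe1 below stands in for the Xcode-generated model class suggested by the log paths and is an assumption.

import CoreML
import Vision

// Sketch: load the model with a configuration that skips the Neural Engine.
func makeVisionModel() throws -> VNCoreMLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // avoid the ANE and its reduced-precision arithmetic

    let coreMLModel = try fe1(configuration: config).model
    return try VNCoreMLModel(for: coreMLModel)
}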
Post marked as unsolved
9 Views

running yolov5 in swift vs yolov2

I've started to look into the new YOLOv5 model, but I can't figure out the new data output format when I call it from Swift. The old YOLOv2 model gave me results as VNRecognizedObjectObservation objects, which contained labels, confidence, and coordinates for each detected object. The new YOLOv5 model gives me exactly three VNCoreMLFeatureValueObservation objects, which contain none of the above; instead they contain a featureName (which is some Int number) and a featureValue, which is a multidimensional array of some sort. I have no idea what to do with the new data. Does someone have any information that can help?
Asked
Last updated .
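
On the question above: Vision only produces VNRecognizedObjectObservation when the model itself ends in a box-decoding plus non-maximum-suppression stage, as the Turi Create YOLOv2 export does; a plain YOLOv5 conversion exposes its raw feature maps instead. A small sketch of inspecting those raw outputs (decoding them into boxes is model-specific and not shown):

import Vision

// Sketch: dump the name and shape of each raw output tensor.
func inspectRawOutputs(of request: VNRequest) {
    guard let features = request.results as? [VNCoreMLFeatureValueObservation] else { return }
    for feature in features {
        if let array = feature.featureValue.multiArrayValue {
            // For YOLO-style heads the shape typically encodes grid cells,
            // anchors, and (4 box values + objectness + class scores).
            print(feature.featureName, array.shape)
        }
    }
}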
Post marked as unsolved
2.4k Views

Tensorflow running on Metal MPS with AMD GPU

In a WWDC 2018 video there's a live demo of TensorFlow running on Metal Performance Shaders on an AMD Vega eGPU. Googling this, there seems to be no news or mention of it. Does anyone know when this will be released? As NVIDIA is not supported in Mojave, I'm looking into getting a Vega card instead. The only thing I've seen so far is that Turi Create uses the GPU with Metal for some tasks, and some people have got Keras working with Metal using PlaidML as a backend.
Asked
Last updated .
Post marked as unsolved
11 Views

MLModel class only available in iOS 13.0

I followed the coremltools quickstart (https://coremltools.readme.io/docs/introductory-quickstart) to generate the mlmodel, but when I import the model into my project, the autogenerated class is only available in iOS 13.0. Can anyone tell me how to change this? I want it to work on iOS 11.
Asked
by tornador.
Last updated .
Post marked as unsolved
31 Views

is Core ML Model Deployment free?

At WWDC 2020 Apple introduced Core ML Model Deployment. I have created a Core ML model which is around 20 GB. I just want to know: (1) is it free to deploy this model on the Core ML Model Deployment platform, and (2) if it's not free, what are the charges?
Asked
Last updated .
Post marked as unsolved
18 Views

How do I port a yolov4 or yolov5 to CoreML

I am using Turi Create to train a custom object detection model; however, it only supports YOLOv2. Has anyone tried to port v4 or v5 to Core ML? Is there a utility to do this? I have a couple of v4 PyTorch examples I was going to train and then try to convert.
Asked
Last updated .
Post marked as unsolved
47 Views

CreateML Classifier: Predictions Probability

I am creating a binary tabular classifier with Create ML in a Playground (using code, not the Create ML UI). After creating the model, I would like to be able to test it. Currently, I am able to get discrete labels (0 or 1) when I run the .predictions method (https://developer.apple.com/documentation/createml/mlclassifier/3005436-predictions) on the test set. However, I would like to get the predictions as probabilities, not discrete labels calculated from a threshold of 0.5. I know I can export my model from the Playground and put it into a Swift file, where I can load the model, call Core ML's .predictions method, and then access the actual probabilities for each class as well as the rounded values. But while trying to tune my models, it is a pain to do this for each model, not to mention I would have to write extra code to parse my test CSV file, convert the labels column to an array, run it through the model, and so on. In the Playground it is much easier to just run it through the MLDataTable instead. Please let me know if there is a way to access the probabilities of each class within Create ML!
Asked
by reidf.
Last updated .
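
One workaround sketch for the question above, staying inside the playground: write the trained classifier out, compile and reload it as a plain MLModel, and read the probability dictionary that classifier models expose alongside the discrete label. The row dictionary and the temporary file name are placeholders; whether this is convenient enough for a tuning loop is a judgment call.

import CreateML
import CoreML
import Foundation

// Sketch: get per-class probabilities for one test example from an MLClassifier.
func probabilities(for row: [String: MLFeatureValue],
                   from classifier: MLClassifier) throws -> [AnyHashable: NSNumber]? {
    // Export and compile the model so it can be queried through Core ML.
    let modelURL = FileManager.default.temporaryDirectory.appendingPathComponent("clf.mlmodel")
    try classifier.write(to: modelURL)
    let compiledURL = try MLModel.compileModel(at: modelURL)
    let model = try MLModel(contentsOf: compiledURL)

    // Run a single prediction and pull out the probability output by name.
    let input = try MLDictionaryFeatureProvider(dictionary: row)
    let output = try model.prediction(from: input)
    guard let probName = model.modelDescription.predictedProbabilitiesName else { return nil }
    return output.featureValue(for: probName)?.dictionaryValue
}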
Post marked as unsolved
34 Views

coremltools neural_network build model can not inspect error

I'm using the coremltools neural_network builder to build a model that implements the SSD-Caffe detection output layer. Detailed information is here: https://github.com/apple/coremltools/issues/879. The MLModel::compileModelAtURL compile error message is too vague; I cannot find out what's going wrong. Is there any way to show a verbose model compile message?
Asked
by AAACoreML.
Last updated .
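
On the question above: I'm not aware of a verbose flag on MLModel.compileModel, but the thrown NSError sometimes carries an underlying error in its userInfo that is more specific than the top-level message. A small sketch of surfacing it:

import CoreML
import Foundation

// Sketch: compile a .mlmodel and print whatever detail the error carries.
func compileAndReport(_ modelURL: URL) {
    do {
        let compiledURL = try MLModel.compileModel(at: modelURL)
        print("Compiled to:", compiledURL.path)
    } catch let error as NSError {
        print("Compile failed:", error.domain, error.code, error.localizedDescription)
        if let underlying = error.userInfo[NSUnderlyingErrorKey] as? NSError {
            print("Underlying error:", underlying)
        }
    }
}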