Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

109 Posts
Post not yet marked as solved
11 Replies
2.5k Views
When I access the Core ML Model Deployment dashboard with my developer account, the page returns a bad request error saying "Your request was invalid". Also, when I try to create a Model Collection, it fails with the error "One of the fields was invalid".
Posted
by
Post not yet marked as solved
6 Replies
2.3k Views
Hi, I have a Core ML model. When I print modelPrediction?.labelProbability, which is of type [String: Double] and contains every label with its corresponding probability, the Double values come back as NaN: rest = nan, right = nan, up = nan. Sometimes restarting the app makes it work again; sometimes it takes many restarts before it starts working. The same thing happens even after deleting and reinstalling the app. I also tried changing the deployment version, but that didn't seem to fix it. Any help is appreciated.
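For reference, a minimal defensive sketch (the dictionary shape matches the post; the helper name is made up) that filters out NaN probabilities before picking the top label, so the failure at least surfaces cleanly instead of propagating into the UI:

```swift
import Foundation

// Returns the most probable label, ignoring NaN probabilities.
// If every value is NaN (as described above), it returns nil.
func bestLabel(from labelProbability: [String: Double]) -> (label: String, probability: Double)? {
    let valid = labelProbability.filter { !$0.value.isNaN }
    guard let best = valid.max(by: { $0.value < $1.value }) else { return nil }
    return (best.key, best.value)
}

// Usage sketch: if bestLabel(from: modelPrediction?.labelProbability ?? [:]) == nil, log and re-run.
```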
Posted
by
Post not yet marked as solved
7 Replies
2.8k Views
We have 10 Core ML models in our app, each encrypted with a separate key generated in Xcode. After opening and closing the app 6-7 times, the app crashes at model initialization with this error:

    2021-04-21 13:52:47.711729+0300 MyApp[95443:7341643] Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905" UserInfo={NSLocalizedDescription=Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905}: file MyApp/Model.swift, line 43

It looks like the iPhone is blocking the app for suspicious behavior and the app then fails to decrypt the models. We noticed that after roughly 10 hours the app is unblocked, and it successfully decrypts and initializes the models again. Opening and closing the app many times in a short period is admittedly unnatural, but the important question is how to avoid being blocked. Would Apple block the app if a user opens and closes it 10 times during a day? How does the number of models in the app affect the probability that the app will be blocked? Thanks!
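As a side note, a minimal sketch (the generated class name MyModel is a placeholder) of loading an encrypted model with the asynchronous load API instead of try!, so a failed key request degrades gracefully and can be retried rather than crashing:

```swift
import CoreML

func loadEncryptedModel(completion: @escaping (MyModel?) -> Void) {
    // The async load API surfaces key-fetch/decryption failures as an Error
    // (e.g. com.apple.CoreML Code=9) instead of trapping like `try!`.
    MyModel.load(configuration: MLModelConfiguration()) { result in
        switch result {
        case .success(let model):
            completion(model)
        case .failure(let error):
            print("Model load failed, consider retrying later: \(error)")
            completion(nil)
        }
    }
}
```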
Posted
by
Post not yet marked as solved
2 Replies
1.5k Views
With the release of Xcode 13, a large section of my Vision framework processing code produces errors and no longer compiles; the APIs it uses have become deprecated. This is my original code:

    do {
        // Perform VNDetectHumanHandPoseRequest
        try handler.perform([handPoseRequest])
        // Continue only when a hand was detected in the frame.
        // Since we set the maximumHandCount property of the request to 1, there will be at most one observation.
        guard let observation = handPoseRequest.results?.first else {
            self.state = "no hand"
            return
        }
        // Get points for thumb and index finger.
        let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
        let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
        let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
        let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
        let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
        let wristPoints = try observation.recognizedPoints(forGroupKey: .all)

        // Look for tip points.
        guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
              let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
              let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
              let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
            self.state = "no tip"
            return
        }

        guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
              let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
              let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
              let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
            self.state = "no index"
            return
        }

        guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
              let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
              let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
              let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
            self.state = "no middle"
            return
        }

        guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
              let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
              let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
              let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
            self.state = "no ring"
            return
        }

        guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
              let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
              let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
              let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
            self.state = "no little"
            return
        }

        guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
            self.state = "no wrist"
            return
        }
        ...
    }

Now every line from thumbPoints onwards results in an error. I have rewritten the first part (not sure whether it is correct, since it still cannot compile) to:

    let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
    let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)

I tried many different things but just could not get retrieving the individual points to work. Can anyone help me fix this?
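A hedged sketch of how this typically looks with the typed API (iOS 14/macOS 11 and later): recognizedPoints(_:) takes a JointsGroupName directly and returns a dictionary keyed by VNHumanHandPoseObservation.JointName, so the old string-based landmark keys are no longer needed. The confidence threshold and the subset of joints shown are illustrative:

```swift
import Vision

func processHandPose(_ observation: VNHumanHandPoseObservation) throws {
    // Group lookups now take the enum directly (no rawValue, no string keys).
    let thumbPoints = try observation.recognizedPoints(.thumb)
    let indexPoints = try observation.recognizedPoints(.indexFinger)
    let wristPoints = try observation.recognizedPoints(.all)

    // Individual joints are indexed with JointName values.
    guard let thumbTip = thumbPoints[.thumbTip],
          let thumbIP = thumbPoints[.thumbIP],
          let indexTip = indexPoints[.indexTip],
          let wrist = wristPoints[.wrist],
          thumbTip.confidence > 0.3, indexTip.confidence > 0.3 else {
        return
    }
    print("thumb tip:", thumbTip.location, "index tip:", indexTip.location, "wrist:", wrist.location)
}
```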
Posted
by
Post not yet marked as solved
1 Reply
807 Views
I implemented a custom PyTorch layer on both CPU and GPU following Hollemans' excellent blog post (https://machinethink.net/blog/coreml-custom-layers). The CPU version works well, but when I implement the op on the GPU, the "encode" function is never called; the layer always runs on the CPU. I have checked the coremltools.convert() options and set compute_units=coremltools.ComputeUnit.CPU_AND_GPU, but it still doesn't work. The same problem is mentioned in https://stackoverflow.com/questions/51019600/why-i-enabled-metal-api-but-my-coreml-custom-layer-still-run-on-cpu and https://developer.apple.com/forums/thread/695640. Any help with this would be appreciated. System information: macOS 11.6.1 Big Sur, Xcode 12.5.1, coremltools 5.1.0, test device: iPhone 11.
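For context, a skeleton (names illustrative, not the poster's code) of the MLCustomLayer conformance involved: evaluate(inputs:outputs:) is the required CPU path, while encode(commandBuffer:inputs:outputs:) is optional and is only called when Core ML decides to schedule that layer on the GPU:

```swift
import CoreML
import Metal

@objc(MyCustomLayer) final class MyCustomLayer: NSObject, MLCustomLayer {
    required init(parameters: [String: Any]) throws {
        super.init()
    }

    func setWeightData(_ weights: [Data]) throws { }

    func outputShapes(forInputShapes inputShapes: [[NSNumber]]) throws -> [[NSNumber]] {
        return inputShapes   // this sketch assumes a shape-preserving op
    }

    // Required CPU fallback: Core ML can always fall back to this.
    func evaluate(inputs: [MLMultiArray], outputs: [MLMultiArray]) throws {
        // CPU implementation goes here.
    }

    // Optional GPU path: only invoked when Core ML runs the layer on Metal.
    func encode(commandBuffer: MTLCommandBuffer,
                inputs: [MTLTexture],
                outputs: [MTLTexture]) throws {
        // Encode a compute pass into commandBuffer here.
    }
}
```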
Posted
by
Post not yet marked as solved
12 Replies
4.1k Views
Hello, I'm new to Core ML and I'm building a test app with the models that already exist. When I try to classify an image I get this error: [coreml] Failed to get the home directory when checking model path. Any help solving this would be appreciated. Thanks.
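For comparison, a minimal classification sketch with one of Apple's published models (MobileNetV2 here is just an example; the image name is made up). It assumes the .mlmodel file has been added to the app target so Xcode compiles and bundles it:

```swift
import CoreML
import Vision
import UIKit

func classify(_ image: UIImage) {
    guard let ciImage = CIImage(image: image),
          let mobileNet = try? MobileNetV2(configuration: MLModelConfiguration()),
          let model = try? VNCoreMLModel(for: mobileNet.model) else {
        print("Failed to load model or image")
        return
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let top = (request.results as? [VNClassificationObservation])?.first else { return }
        print("\(top.identifier): \(top.confidence)")
    }
    try? VNImageRequestHandler(ciImage: ciImage).perform([request])
}
```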
Posted
by
Post not yet marked as solved
4 Replies
1k Views
I have tried many times. Whenever I change the file or re-create it, the upload fails with the following 400 error:

    {
      "code": 400,
      "message": "InvalidArgumentError: Unable to unzip MLArchive",
      "reason": "There was a problem with your request.",
      "detailedMessage": "InvalidArgumentError: Unable to unzip MLArchive",
      "requestUuid": "699afb97-8328-4a83-b186-851f797942aa"
    }
Posted
by
Post not yet marked as solved
3 Replies
1.5k Views
I am trying to train an image classification network in Keras with tensorflow-metal. The training freezes after the first 2-3 epochs if image augmentation layers are used (RandomFlip, RandomContrast, RandomBrightness). The system appears to use both GPU as well as CPU (as indicated by Activity Monitor). Also, warnings appear both in Jupyter and Terminal (see below). When the image augmentation layers are removed (i.e. we only rebuild the head and feed images from disk), CPU appears to be idle, no warnings appear, and training completes successfully. Versions: python 3.8, tensorflow-macos 2.11.0, tensorflow-metal 0.7.1. Sample code:

    img_augmentation = Sequential(
        [
            layers.RandomFlip(),
            layers.RandomBrightness(factor=0.2),
            layers.RandomContrast(factor=0.2)
        ],
        name="img_augmentation",
    )
    inputs = layers.Input(shape=(384, 384, 3))
    x = img_augmentation(inputs)
    model = tf.keras.applications.EfficientNetV2S(include_top=False, input_tensor=x, weights='imagenet')
    model.trainable = False
    x = tf.keras.layers.GlobalAveragePooling2D(name="avg_pool")(model.output)
    x = tf.keras.layers.BatchNormalization()(x)
    top_dropout_rate = 0.2
    x = tf.keras.layers.Dropout(top_dropout_rate, name="top_dropout")(x)
    outputs = tf.keras.layers.Dense(179, activation="softmax", name="pred")(x)
    newModel = Model(inputs=model.input, outputs=outputs, name="EfficientNet_DF20M_species")
    reduce_lr = tf.keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.9,
                                                     patience=2, verbose=1, min_lr=0.000001)
    optimizer = tf.keras.optimizers.legacy.SGD(learning_rate=0.01, momentum=0.9)
    newModel.compile(optimizer=optimizer, loss='categorical_crossentropy', metrics=['accuracy'])
    history = newModel.fit(x=train_ds, validation_data=val_ds, epochs=30, verbose=2, callbacks=[reduce_lr])

During training with image augmentation, Jupyter prints the following warnings while training the first epoch:

    WARNING:tensorflow:Using a while_loop for converting Bitcast cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformV2 cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting RngReadAndSkip cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting StatelessRandomUniformFullIntV2 cause there is no registered converter for this op.
    WARNING:tensorflow:Using a while_loop for converting StatelessRandomGetKeyCounter cause there is no registered converter for this op.
    ...

During training with image augmentation, Terminal keeps spamming the following warning:

    2023-02-21 23:13:38.958633: I metal_plugin/src/kernels/stateless_random_op.cc:282] Note the GPU implementation does not produce the same series as CPU implementation.
    2023-02-21 23:13:38.958920: I metal_plugin/src/kernels/stateless_random_op.cc:282] Note the GPU implementation does not produce the same series as CPU implementation.
    ...

Any suggestions?
Posted
by
Post not yet marked as solved
1 Reply
965 Views
I am working on the updatable neural network classifier described on coremltools.readme.io (https://coremltools.readme.io/docs/updatable-neural-network-classifier-on-mnist-dataset). I am using the same code, but I get an error saying that coremltools.converters.keras.convert does not exist. I know this can be a coremltools version issue; I am currently using coremltools 6.2. I converted the model with coremltools.convert() instead, and it converted successfully. However, the make_updatable function then fails with an error saying the loss layer input must be a softmax output. According to the coremltools API reference, this is because the layer type is softmaxND when it should be softmax. So the problem is that when I convert the Keras Sequential model to a Core ML model, the layer names and types change, and softmax becomes softmaxND. Has anyone faced this issue? If I execute builder.inspect_layers(last=4) I get this output:

    [Id: 32], Name: sequential/dense_1/Softmax (Type: softmaxND)
              Updatable: False
              Input blobs: ['sequential/dense_1/MatMul']
              Output blobs: ['Identity']
    [Id: 31], Name: sequential/dense_1/MatMul (Type: batchedMatmul)
              Updatable: False
              Input blobs: ['sequential/dense/Relu']
              Output blobs: ['sequential/dense_1/MatMul']
    [Id: 30], Name: sequential/dense/Relu (Type: activation)
              Updatable: False
              Input blobs: ['sequential/dense/MatMul']
              Output blobs: ['sequential/dense/Relu']

In the make_updatable function, when I execute builder.set_categorical_cross_entropy_loss(name='lossLayer', input='Identity'), I get this error:

    ValueError: Categorical Cross Entropy loss layer input (Identity) must be a softmax layer output.
Posted
by
Post not yet marked as solved
2 Replies
735 Views
In the ml-ane-transformers repo, there is a custom LayerNorm implementation for the Neural Engine-optimized shape of (B, C, 1, S). The coremltools documentation makes it sound like the layer_norm MIL op would support this natively. In fact, the following code works on CPU:

    B, C, S = 1, 768, 512
    g, b = 1, 0

    @mb.program(input_specs=[mb.TensorSpec(shape=(B, C, 1, S)),])
    def ln_prog(x):
        gamma = (torch.ones((C,), dtype=torch.float32) * g).tolist()
        beta = (torch.ones((C), dtype=torch.float32) * b).tolist()
        return mb.layer_norm(x=x, axes=[1], gamma=gamma, beta=beta, name="y")

However, it fails when run on the Neural Engine, giving results that are scaled by an incorrect value. Should this work on the Neural Engine?
Posted
by
Post marked as solved
1 Reply
855 Views
Hello, I am reaching out for some assistance regarding integrating a CoreML action classifier into a SwiftUI app. Specifically, I am trying to implement this classifier to work with the live camera of the device. I have been doing some research, but unfortunately, I have not been able to find any relevant information on this topic. I was wondering if you could provide me with any examples, resources, or information that could help me achieve this integration? Any guidance you can offer would be greatly appreciated. Thank you in advance for your help and support.
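In the meantime, here is a rough sketch of the pipeline usually used for Create ML action classifiers with a live camera feed (the class name ActionClassifier, the input name poses, and the 60-frame window are assumptions, not confirmed details): camera frames go through VNDetectHumanBodyPoseRequest, the pose keypoints are collected into a fixed-length window, and each full window is fed to the classifier:

```swift
import AVFoundation
import CoreML
import Vision

final class ActionRecognizer: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    private let poseRequest = VNDetectHumanBodyPoseRequest()
    private var poseWindow: [MLMultiArray] = []
    private let windowSize = 60   // assumption: must match the model's prediction window

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .up)
        _ = try? handler.perform([poseRequest])
        guard let observation = poseRequest.results?.first,
              let keypoints = try? observation.keypointsMultiArray() else { return }

        poseWindow.append(keypoints)
        guard poseWindow.count == windowSize else { return }

        // Concatenate the window into the (frames x joints x coordinates) input
        // the classifier expects, then run one prediction per window.
        if let input = try? MLMultiArray(concatenating: poseWindow, axis: 0, dataType: .float32),
           let classifier = try? ActionClassifier(configuration: MLModelConfiguration()),
           let result = try? classifier.prediction(poses: input) {
            print("Predicted action: \(result.label)")
        }
        poseWindow.removeAll()
    }
}
```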
Posted
by
Post not yet marked as solved
1 Reply
929 Views
Hello, I am a student doing research for my thesis on Create ML, shape recognition, and image processing. For this subject I want to find details of the steps Create ML uses, such as the pre-processing techniques, the feature extraction methods, the filters applied, etc.
Posted
by
Post not yet marked as solved
1 Reply
577 Views
We have Core ML models in our app, each encrypted with a separate key generated in Xcode. After an app update we are receiving the following error:

    [coreml] Could not create persistent key blob for EFD428E8-CDE7-4E0A-B379-FC169E50DE4D : error=Error Domain=com.apple.CoreML Code=8 "Fetching decryption key from server failed." UserInfo={NSLocalizedDescription=Fetching decryption key from server failed., NSUnderlyingError=0x281d80ab0 {Error Domain=CKErrorDomain Code=6 "CKInternalErrorDomain: 2022" UserInfo={NSDebugDescription=CKInternalErrorDomain: 2022, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503, CKErrorDescription=Request failed with http status code 503, CKRetryAfter=35, NSUnderlyingError=0x281d80000 {Error Domain=CKInternalErrorDomain Code=2022 "Request failed with http status code 503" UserInfo={CKRetryAfter=35, CKHTTPStatus=503, CKErrorDescription=Request failed with http status code 503, RequestUUID=D5CF13CF-6A10-436B-AB93-4C5C04859FFE, NSLocalizedDescription=Request failed with http status code 503}}, CKHTTPStatus=503}}}

We tried deleting the app and restarting the device, but nothing works. This was released on the App Store earlier and was working fine; it stopped working after the update. Any help is appreciated.
Posted
by
Post not yet marked as solved
0 Replies
778 Views
When I used Xcode to generate a model encryption key, an error was reported: 'Failed to Generate Encryption Key and Sign in with your iCloud account in System Preferences and Retry'. This worked fine a month or two ago, but now it suddenly fails, and the already-encrypted model can't be decrypted. iCloud is working normally (I logged out and back in), and other Xcode teams have been tried, but none of them work. Xcode version 14.2, macOS Monterey 12.6.2.
Posted
by
Post not yet marked as solved
1 Reply
484 Views
I want to know how the preview function is implemented. I have an mlmodel for object detection, and I found that when I open the model in Xcode, Xcode provides a preview: I put a photo into it and get the predicted bounding boxes drawn on the image. I would like to know how this visualization is implemented. At present, in a playground I can only get the Label, Confidence, and BoundingBox values, and drawing the prediction boxes still requires me to write my own code.

    import Vision

    func performObjectDetection() {
        do {
            let model = try VNCoreMLModel(for: court().model)
            let request = VNCoreMLRequest(model: model) { (request, error) in
                if let error = error {
                    print("Failed to perform request: \(error)")
                    return
                }
                guard let results = request.results as? [VNRecognizedObjectObservation] else {
                    print("No results found")
                    return
                }
                for result in results {
                    print("Label: \(result.labels.first?.identifier ?? "No label")")
                    print("Confidence: \(result.labels.first?.confidence ?? 0.0)")
                    print("BoundingBox: \(result.boundingBox)")
                }
            }
            guard let image = UIImage(named: "nbaPics.jpeg"),
                  let ciImage = CIImage(image: image) else {
                print("Failed to load image")
                return
            }
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: .up, options: [:])
            try handler.perform([request])
        } catch {
            print("Failed to load model: \(error)")
        }
    }

    performObjectDetection()

This is my code and the results it prints.
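A sketch of one way to get a preview-style overlay (not necessarily how Xcode itself does it, which isn't documented): convert each normalized, bottom-left-origin boundingBox to image coordinates with VNImageRectForNormalizedRect and stroke it over the source image:

```swift
import UIKit
import Vision

func drawBoxes(_ observations: [VNRecognizedObjectObservation], on image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { context in
        image.draw(at: .zero)
        context.cgContext.setStrokeColor(UIColor.red.cgColor)
        context.cgContext.setLineWidth(3)
        for observation in observations {
            // Map the 0...1 normalized box to pixel coordinates, then flip the
            // y-axis because Vision's origin is the bottom-left corner.
            var rect = VNImageRectForNormalizedRect(observation.boundingBox,
                                                    Int(image.size.width),
                                                    Int(image.size.height))
            rect.origin.y = image.size.height - rect.origin.y - rect.height
            context.cgContext.stroke(rect)
        }
    }
}
```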
Posted
by
Post marked as solved
3 Replies
887 Views
Is it possible to create an updatable sound classifier model that uses Apple's built-in MLSoundClassifier available via Create ML and that can then be trained/personalized on device using Core ML? I have looked in quite a few places for a while now. I know that when on-device training was initially announced in 2019, updatable models were restricted to non-built-in classifiers, but any additional information that may have come out since 2019 has been hard to find.
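For reference, a sketch of the generic Core ML on-device update path for an updatable model. This only shows the Core ML side; whether an MLSoundClassifier-based model can be exported as updatable is exactly the open question above, and the file names here are placeholders:

```swift
import CoreML

func personalize(modelURL: URL, trainingData: MLBatchProvider) {
    do {
        let task = try MLUpdateTask(forModelAt: modelURL,
                                    trainingData: trainingData,
                                    configuration: nil) { context in
            // Persist the personalized model and use it for future predictions.
            let updatedURL = modelURL.deletingLastPathComponent()
                .appendingPathComponent("PersonalizedSoundClassifier.mlmodelc")
            try? context.model.write(to: updatedURL)
        }
        task.resume()
    } catch {
        print("Could not start update task: \(error)")
    }
}
```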
Posted
by
Post not yet marked as solved
0 Replies
823 Views
Hello, I have my largely iOS app running using Mac Catalyst, but I need to limit what Macs will be able to install it from the Mac App Store based on the GPU Family like MTLGPUFamily.mac2. Is that possible? Or I could limit it to Apple Silicon using the Designed for iPad target, but I would prefer to use Mac Catalyst instead of Designed for iPad. Is it possible to limit Mac Catalyst installs to Apple Silicon Macs? Side question: what capabilities are supported by MTLGPUFamily.mac2? I can't find it. My main interest is in CoreML inference acceleration. Thank you.
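On the runtime side (this does not restrict who can install from the Mac App Store; it only gates features in-app), a small sketch of checking GPU family support under Mac Catalyst:

```swift
import Metal

/// Returns true when the default Metal device reports MTLGPUFamily.mac2 support.
func supportsMac2Family() -> Bool {
    guard let device = MTLCreateSystemDefaultDevice() else { return false }
    return device.supportsFamily(.mac2)
}
```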
Posted
by
Post not yet marked as solved
0 Replies
621 Views
I successfully converted and loaded the EfficientDet feature-vector model from https://tfhub.dev/tensorflow/efficientdet/lite0/feature-vector/1, fine-tuned using the approach from https://github.com/google/automl/tree/master/efficientdet. The model works correctly in Python (it detects objects) and converts with tensorflow==2.12.0 and coremltools==6.3.0 (to both "mlprogram" and "neuralnetwork", with the same result):

    img_type = ct.ImageType(name='image_arrays', shape=(1, height, width, 3))
    mlmodel = ct.convert(
        [detect_fn],
        inputs=[img_type],
        outputs=[ct.TensorType(name="detections")],
        source="tensorflow",
        convert_to="neuralnetwork")

In the app, the Core ML model loads with no errors and its metadata is correctly parsed, but during prediction the following error occurs:

    2023-06-05 09:10:56.942952+0200 xxxx[1619:435597] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid state": Stack_nd layer: Invalid shapes of input tensors. status=-5
    2023-06-05 09:10:56.943016+0200 xxxx[1619:435597] [coreml] Error computing NN outputs -5
    2023-06-05 09:10:56.943088+0200 xxxx[1619:435597] [coreml] Failure in -executePlan:error:.
    Unable to classify image.
    The VNCoreMLTransform request failed
    Error: Vision request failed with error "Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x2801267f0 {Error Domain=com.apple.CoreML Code=0 "Error computing NN outputs." UserInfo={NSLocalizedDescription=Error computing NN outputs.}}}"
    2023-06-05 09:10:56.991190+0200 xxxx[1619:435607] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid state": Stack_nd layer: Invalid shapes of input tensors. status=-5
    2023-06-05 09:10:56.991256+0200 xxxx[1619:435607] [coreml] Error computing NN outputs -5
    2023-06-05 09:10:56.991285+0200 xxxx[1619:435607] [coreml] Failure in -executePlan:error:.
    Unable to classify image.
    The VNCoreMLTransform request failed

The input shape (the 'image_arrays' layer) is (1, 320, 320, 3), an image type. The output shape (the 'detections' layer) is (1, 100, 7), i.e. 100 rows each containing bounding-box coordinates, class_id, score, and one more variable, 7 floats altogether. Judging by the "Stack_nd layer" error message, the relevant part of the network is the region shown in the attached graph screenshot. Has anyone seen a similar error, or been able to run this model successfully with Core ML? Can anyone confirm whether this is the root cause and how to fix it?

Tested on: macOS (dev machine: Intel MacBook Pro 16, macOS 13.3, Xcode 14.3) and a mobile device (deployment target iOS 15.6; test device: iPhone 13 Pro Max, iOS 16.5).
Posted
by
Post not yet marked as solved
1 Reply
738 Views
Hey, are there any limits on the windowDuration property of the AudioFeaturePrint transformer, such as a minimum or maximum value? If we create a model with the Create ML app and select AudioFeaturePrint as the feature extractor, we cannot go below 0.5 seconds for the window duration. Does the same limit apply if we create the model programmatically using AudioFeaturePrint?
Posted
by