Integrate machine learning models into your app using Core ML.

Posts under Core ML tag

118 Posts

Sports Analysis Code
I'm trying to get the WWDC 2020 Sports Analysis code running (the project named BuildingAFeatureRichAppForSportsAnalysis). It seems that the boardDetectionRequest now fails when running the code in the simulator. The main error I get is:

Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x6000024991d0 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline}}}

The problem is that I can't tell why the VNImageRequestHandler is failing when trying to detect the board: it doesn't say it got a bad image, and it doesn't say it failed to detect a board. I'm running the code against the sample movie provided, and I believe this used to work. The other error I see, on initialization in Common.warmUpVisionPipeline while the models load, is:

2023-09-07 12:58:59.239614-0500 ActionAndVision[3499:34083] [coreml] Failed to get the home directory when checking model path.

From what I can tell in the debugger, though, the board detection model did load. Thanks.
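
For anyone hitting the same wall, here is a minimal sketch of how the underlying Core ML error can be pulled out of the Vision failure for better diagnostics; the request and handler names mirror the sample project but are assumptions here:

import Vision

func performBoardDetection(_ request: VNCoreMLRequest, on handler: VNImageRequestHandler) {
    do {
        try handler.perform([request])
    } catch let error as NSError {
        // Vision wraps the Core ML failure under NSUnderlyingErrorKey; printing the whole
        // chain usually says more than the top-level "VNCoreMLTransform request failed".
        print("Vision error:", error.domain, error.code, error.localizedDescription)
        if let underlying = error.userInfo[NSUnderlyingErrorKey] as? NSError {
            print("Underlying Core ML error:", underlying.domain, underlying.code, underlying.userInfo)
        }
    }
}
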
0 replies · 0 boosts · 564 views · Sep ’23
FAISS vs Apple vector search library?
Hey, I'm a web developer building a macOS app for the first time. I need a vector database where the data is stored on the user's machine. I'm familiar with libraries like FAISS, but it doesn't have Swift bindings and, from a brief look, appears fairly annoying to get working in a macOS app. I'm wondering whether Apple has a similar library available in its SDKs? I don't need much: just something to store the vectors in a database, run a cosine-similarity search over them, and maybe attach some additional metadata to each vector embedding. If not, is bridging libraries like this a common thing to do when developing iOS/macOS apps?
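
As far as I know there is no drop-in vector database in the SDK, but for modest collections a brute-force cosine search in plain Swift goes a long way. A minimal sketch; the Embedded type and the top-k search are illustrations, not an Apple API:

import Foundation

// Minimal in-memory vector store; plain Swift for illustration only.
struct Embedded<Metadata> {
    let vector: [Float]
    let metadata: Metadata
}

func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    precondition(a.count == b.count, "vectors must have the same dimension")
    var dot: Float = 0
    var normA: Float = 0
    var normB: Float = 0
    for i in 0..<a.count {
        dot += a[i] * b[i]
        normA += a[i] * a[i]
        normB += b[i] * b[i]
    }
    let denominator = normA.squareRoot() * normB.squareRoot()
    return denominator > 0 ? dot / denominator : 0
}

// Brute-force top-k search; fine for thousands of vectors, revisit for millions.
func topMatches<M>(for query: [Float], in store: [Embedded<M>], k: Int) -> [(score: Float, item: Embedded<M>)] {
    let scored = store.map { (score: cosineSimilarity(query, $0.vector), item: $0) }
    return Array(scored.sorted { $0.score > $1.score }.prefix(k))
}

Persisting the vectors and their metadata can then be as simple as rows in SQLite or Core Data holding the Float arrays; Accelerate's vDSP routines can speed up the dot products if the collection grows.
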
6 replies · 1 boost · 1.9k views · Sep ’23
ANE-Optimized Layer Norm Fails on ANE
In the ml-ane-transformers repo, there is a custom LayerNorm implementation for the Neural Engine-optimized shape of (B, C, 1, S). The coremltools documentation makes it sound like the layer_norm MIL op would support this natively. In fact, the following code works on CPU:

B, C, S = 1, 768, 512
g, b = 1, 0

@mb.program(input_specs=[mb.TensorSpec(shape=(B, C, 1, S))])
def ln_prog(x):
    gamma = (torch.ones((C,), dtype=torch.float32) * g).tolist()
    beta = (torch.ones((C), dtype=torch.float32) * b).tolist()
    return mb.layer_norm(x=x, axes=[1], gamma=gamma, beta=beta, name="y")

However, it fails when run on the Neural Engine, giving results that are scaled by an incorrect value. Should this work on the Neural Engine?
2 replies · 0 boosts · 844 views · Sep ’23
CoreML fails to decrypt a model
We have 10 Core ML models in our app, each encrypted with a separate key generated in Xcode. After opening and closing the app 6-7 times, the app crashes at model initialization with this error:

2021-04-21 13:52:47.711729+0300 MyApp[95443:7341643] Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905" UserInfo={NSLocalizedDescription=Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905}: file MyApp/Model.swift, line 43

It looks like the iPhone is blocking the app for suspicious behavior, so the app fails to decrypt the model. We noticed that after ~10 hours the app is unblocked and successfully decrypts and initializes the models. Opening and closing the app many times in a short period is indeed unnatural, but the most important question is how to avoid being blocked. Would Apple block the app if a user opens and closes it 10 times during a day? How does the number of models in the app affect the probability that the app will be blocked? Thanks!
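
One observation from the crash log: the failure surfaces through a try! in the generated Model.swift. A hedged sketch of loading the model with the asynchronous API instead, so a failed key request degrades gracefully rather than crashing; MyEncryptedModel stands in for the Xcode-generated class name:

import CoreML

// MyEncryptedModel is a placeholder for the class Xcode generates from the encrypted .mlmodel.
func loadEncryptedModel(completion: @escaping (MyEncryptedModel?) -> Void) {
    let configuration = MLModelConfiguration()
    // The asynchronous load fetches the decryption key when needed and reports failures
    // (such as error -42905) through a Result instead of trapping on try!.
    MyEncryptedModel.load(configuration: configuration) { result in
        switch result {
        case .success(let model):
            completion(model)
        case .failure(let error):
            print("Model decryption/load failed: \(error)")
            completion(nil)   // back off and retry later instead of crashing
        }
    }
}
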
7 replies · 0 boosts · 3.0k views · Aug ’23
Type 'VNRecognizedPointKey' has no member 'thumbTip'
With the release of Xcode 13, a large section of my Vision framework processing code produces errors and no longer compiles; all of these APIs have become deprecated. This is my original code:

do {
    // Perform VNDetectHumanHandPoseRequest
    try handler.perform([handPoseRequest])
    // Continue only when a hand was detected in the frame.
    // Since we set the maximumHandCount property of the request to 1, there will be at most one observation.
    guard let observation = handPoseRequest.results?.first else {
        self.state = "no hand"
        return
    }
    // Get points for thumb and index finger.
    let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
    let wristPoints = try observation.recognizedPoints(forGroupKey: .all)

    // Look for tip points.
    guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
          let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
          let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
          let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
        self.state = "no tip"
        return
    }

    guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
          let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
          let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
          let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
        self.state = "no index"
        return
    }

    guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
          let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
          let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
          let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
        self.state = "no middle"
        return
    }

    guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
          let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
          let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
          let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
        self.state = "no ring"
        return
    }

    guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
          let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
          let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
          let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
        self.state = "no little"
        return
    }

    guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
        self.state = "no wrist"
        return
    }
    ...
}

Now every line from thumbPoints onwards results in an error. I have fixed the first part (not sure whether it is correct, as it still does not compile) to:

let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)

I have tried many different things but just could not get retrieving the individual points to work. Can anyone help with fixing this?
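
For comparison, here is a minimal sketch of how the typed hand-pose API (iOS 14 and later) addresses the same joints: recognizedPoints(_:) takes a JointsGroupName and returns a dictionary keyed by VNHumanHandPoseObservation.JointName, so the string-style keys are no longer needed. The confidence threshold is arbitrary:

import Vision

func handleHandPose(_ observation: VNHumanHandPoseObservation) throws {
    // Typed group lookup replaces the deprecated recognizedPoints(forGroupKey:) string keys.
    let thumbPoints = try observation.recognizedPoints(.thumb)
    let indexFingerPoints = try observation.recognizedPoints(.indexFinger)
    let wristPoints = try observation.recognizedPoints(.all)

    // Individual joints are addressed with JointName values instead of VNRecognizedPointKey.
    guard let thumbTip = thumbPoints[.thumbTip],
          let thumbIP = thumbPoints[.thumbIP],
          let indexTip = indexFingerPoints[.indexTip],
          let wrist = wristPoints[.wrist],
          thumbTip.confidence > 0.3, indexTip.confidence > 0.3 else {
        return
    }

    // Locations are normalized image coordinates with the origin in the lower-left corner.
    print("thumb tip: \(thumbTip.location), thumb IP: \(thumbIP.location), index tip: \(indexTip.location), wrist: \(wrist.location)")
}
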
2 replies · 0 boosts · 1.7k views · Aug ’23
CreateML Assertion Failure when training Hand Pose model with 5k+ static images
Hey all, we are currently training a Hand Pose model with the current release of Create ML, and during the feature extraction phase we get the following error:

Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]

We have tried searching for this online and mitigating the issue, but we are getting nowhere. Has anyone else experienced this?
3 replies · 4 boosts · 819 views · Aug ’23
AudioFeaturePrint Create ML Components Property Limits
Hey, are there any limits on the windowDuration property of the AudioFeaturePrint transformer, such as a minimum or maximum value? If we create a model with the Create ML app, upon selecting AudioFeaturePrint as the feature extractor we cannot go below 0.5 seconds for the window duration. Is the limit the same if we create a model programmatically using AudioFeaturePrint?
1 reply · 0 boosts · 802 views · Aug ’23
Updatable model using built-in Create ML classifiers
Is it possible to create an updatable sound classifier model that uses Apple's built-in MLSoundClassifier (available via Create ML) and can be trained/personalized on device using Core ML? I have looked in quite a few places for a long while. I know that when on-device training was initially announced in 2019, updatable models were restricted to classifiers that are not built in, but any additional information that may have come out since 2019 on this has been hard to find.
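
For context, the generic on-device update path in Core ML looks like the sketch below. It only works if the compiled model was exported as updatable, which is exactly the open question for the built-in MLSoundClassifier; the URLs and training batch are placeholders:

import CoreML

// Sketch of Core ML's on-device update API; it requires a compiled model (.mlmodelc)
// that was exported as updatable. The URLs and training batch are placeholders.
func personalize(modelAt modelURL: URL, with trainingData: MLBatchProvider) throws {
    let task = try MLUpdateTask(forModelAt: modelURL,
                                trainingData: trainingData,
                                configuration: MLModelConfiguration()) { context in
        // context.model holds the updated parameters; persist them for the next launch.
        let updatedURL = modelURL.deletingLastPathComponent()
            .appendingPathComponent("PersonalizedSoundClassifier.mlmodelc")
        try? context.model.write(to: updatedURL)
    }
    task.resume()
}
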
3 replies · 0 boosts · 991 views · Aug ’23
ActionClassifier in SwiftUI App
Hello, I am reaching out for some assistance with integrating a Core ML action classifier into a SwiftUI app. Specifically, I am trying to make the classifier work with the device's live camera. I have been doing some research, but unfortunately I have not been able to find any relevant information on this topic. Could you provide any examples, resources, or information that would help me achieve this integration? Any guidance you can offer would be greatly appreciated. Thank you in advance for your help and support.
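
In case it helps others, here is a rough sketch of the usual pipeline for a Create ML action classifier on live camera frames: run body-pose detection per frame, collect a fixed-length window of keypoints, then feed the window to the classifier. MyActionClassifier, its poses input, its label output, and the window size of 60 are placeholders that must match your trained model:

import Vision
import CoreML
import CoreVideo

final class ActionPredictor {
    private let poseRequest = VNDetectHumanBodyPoseRequest()
    private var poseWindow: [MLMultiArray] = []
    private let windowSize = 60   // must match the prediction window length used in training

    // Call this with each camera frame (e.g. from an AVCaptureVideoDataOutput delegate).
    func process(_ pixelBuffer: CVPixelBuffer) throws -> String? {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
        try handler.perform([poseRequest])
        guard let observation = poseRequest.results?.first else { return nil }

        // Each observation becomes a small keypoints multiarray; keep a sliding window of them.
        poseWindow.append(try observation.keypointsMultiArray())
        guard poseWindow.count == windowSize else { return nil }
        defer { poseWindow.removeFirst(windowSize / 2) }   // overlap consecutive windows

        // Stack the window along the frame axis and classify it.
        let poses = MLMultiArray(concatenating: poseWindow, axis: 0, dataType: .float32)
        let classifier = try MyActionClassifier(configuration: MLModelConfiguration())
        let output = try classifier.prediction(poses: poses)   // input/output names depend on your model
        return output.label
    }
}
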
1 reply · 0 boosts · 941 views · Aug ’23
ML inference on the ANE
Hi, does anyone have a good link for ML inference times on the M2 chip? The Coral Edge pages are a good format to follow: they list how much data was used in training, the model size, and the accuracy. It's just hard to find this info, or I'm looking in the wrong place. It would be good to have something like a cheat sheet of public optimized models, their use cases, and their model parameters, so it's easy to find a perfect fit for my problem. Thanks for your time.
0 replies · 0 boosts · 403 views · Aug ’23
coremltools convert: ImageType input with shape [batch_size, height, width, 3]
I need to convert a super-resolution model to an mlmodel, but the input shape of the model is designed in the format [batch_size, height, width, 3]. I then convert with the following code:

model = hub.load("https://tfhub.dev/captain-pool/esrgan-tf2/1")
tf.saved_model.save(model, "esrgan_saved_model")

input_type = ct.ImageType(shape=(1, 192, 192, 3), color_layout=ct.colorlayout.RGB)
output_type = ct.ImageType(color_layout=ct.colorlayout.RGB)
mlmodel = ct.convert(
    './esrgan_saved_model',
    inputs=[input_type],
    outputs=[output_type],
    source="tensorflow")
mlmodel.save('esrgan.mlmodel')

I got this error:

Shape of the RGB/BGR image output, must be of kind (1, 3, H, W), i.e., first two dimensions must be (1, 3)

ImageType only seems to support input and output in the [batch_size, 3, height, width] layout. What should I do to convert a model in the [batch_size, height, width, 3] format to an mlmodel?
1 reply · 0 boosts · 743 views · Aug ’23
CoreML gives unexpected output shape for a model with dynamic input shape
Hello. I am manually constructing some models with the CoreML protobuf format. When the model has flexible input shapes, I am seeing unexpected output shapes in some cases after running prediction(from:). The model is a single matrix multiplication, A*B (one innerProduct layer), and the dynamic dimension is the first dimension of the only input A (B is constant). What I observe is that sometimes there are additional leading ones in the output shape. Some test program output showing the shapes:

running model: dynamic_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 1, 1, 1, 4]
running model: dynamic_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: dynamic_input_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]
running model: dynamic_input_shape.mlmodel
A shape: [2, 2]
Y shape: [1, 1, 1, 2, 4]
running model: static_shape.mlmodel
A shape: [1, 2]
Y shape: [1, 4]

I've put the model generation and test code below. Am I specifying the dynamic input/output shapes correctly when creating the .mlmodel? Is the output shape given by CoreML expected, and if so, why are there leading ones? Would appreciate any input.

Python script to generate the .mlmodel files (coremltools version is 6.3.0):

from coremltools.proto.Model_pb2 import Model
from coremltools.proto.FeatureTypes_pb2 import ArrayFeatureType
from coremltools.proto.NeuralNetwork_pb2 import EXACT_ARRAY_MAPPING


def build_model(with_dynamic_input_shape: bool, with_dynamic_output_shape: bool):
    model = Model()
    model.specificationVersion = 4

    input = model.description.input.add()
    input.name = "A"
    input.type.multiArrayType.shape[:] = [1, 2]
    input.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32

    if with_dynamic_input_shape:
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = input.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 2
        range.upperBound = 2

    output = model.description.output.add()
    output.name = "Y"
    output.type.multiArrayType.shape[:] = [1, 4]
    output.type.multiArrayType.dataType = ArrayFeatureType.FLOAT32

    if with_dynamic_output_shape:
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.upperBound = -1
        range = output.type.multiArrayType.shapeRange.sizeRanges.add()
        range.lowerBound = 4
        range.upperBound = 4

    layer = model.neuralNetwork.layers.add()
    layer.name = "MatMul"
    layer.input[:] = ["A"]
    layer.output[:] = ["Y"]
    layer.innerProduct.inputChannels = 2
    layer.innerProduct.outputChannels = 4
    layer.innerProduct.weights.floatValue[:] = [0.0, 4.0, 1.0, 5.0, 2.0, 6.0, 3.0, 7.0]

    model.neuralNetwork.arrayInputShapeMapping = EXACT_ARRAY_MAPPING
    return model


if __name__ == "__main__":
    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=True)
    with open("dynamic_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=True, with_dynamic_output_shape=False)
    with open("dynamic_input_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

    model = build_model(with_dynamic_input_shape=False, with_dynamic_output_shape=False)
    with open("static_shape.mlmodel", mode="wb") as f:
        f.write(model.SerializeToString(deterministic=True))

Swift program to run the models and print the output shapes:
import Foundation
import CoreML

func makeFloatShapedArray(shape: [Int]) -> MLShapedArray<Float> {
    let size = shape.reduce(1, *)
    let values = (0 ..< size).map { Float($0) }
    return MLShapedArray(scalars: values, shape: shape)
}

func runModel(model_path: URL, m: Int) throws {
    print("running model: \(model_path.lastPathComponent)")
    let compiled_model_path = try MLModel.compileModel(at: model_path)
    let model = try MLModel(contentsOf: compiled_model_path)
    let a = MLMultiArray(makeFloatShapedArray(shape: [m, 2]))
    print("A shape: \(a.shape)")
    let inputs = try MLDictionaryFeatureProvider(dictionary: ["A": a])
    let outputs = try model.prediction(from: inputs)
    let y = outputs.featureValue(for: "Y")!.multiArrayValue!
    print("Y shape: \(y.shape)")
}

func modelUrl(_ model_file: String) -> URL {
    return URL(filePath: "/path/to/models/\(model_file)")
}

try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 1)
try runModel(model_path: modelUrl("dynamic_input_shape.mlmodel"), m: 2)
try runModel(model_path: modelUrl("static_shape.mlmodel"), m: 1)
0 replies · 0 boosts · 483 views · Aug ’23
Core ML model: label probabilities returning NaN
Hi, I have a Core ML model. When I print modelPrediction?.labelProbability, which is of type [String: Double] and contains all the class labels with their corresponding probabilities, the Double values come back as NaN:

rest = nan
right = nan
up = nan

Sometimes restarting makes it work again; sometimes it can take a lot of restarts before it starts working again. The same thing happens even after deleting and reinstalling the app. I also tried changing the deployment version, but that didn't seem to fix it. Any help is appreciated.
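
Not a root-cause fix, but while debugging it can help to guard against the non-finite values so the app picks the best finite label instead of propagating NaN; a small sketch following the post's labelProbability naming:

import Foundation

// Drop non-finite probabilities and pick the best remaining label, if any.
func bestLabel(from labelProbability: [String: Double]) -> (label: String, probability: Double)? {
    let finite = labelProbability.filter { $0.value.isFinite }
    guard let best = finite.max(by: { $0.value < $1.value }) else {
        return nil   // every probability was NaN or infinite; treat as "no prediction"
    }
    return (best.key, best.value)
}

// Usage, following the post's naming:
// if let result = bestLabel(from: modelPrediction?.labelProbability ?? [:]) { ... }
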
6 replies · 0 boosts · 2.4k views · Aug ’23
CoreML Converter Missing Tensorflow Package
I am trying to convert my TensorFlow 2.0 model to a Core ML model so I can deploy it to a mobile app. However, I continually get the error:

ValueError: Converter was called with source="tensorflow", but missing tensorflow package

I am working in a virtual environment with Python 3.7, TensorFlow 2.11, and coremltools 5.3.1. I saved the TensorFlow model using tensorflow.saved_model.save and was attempting to convert it with the following:

import coremltools as ct

image_input = ct.ImageType(shape=(1, 250, 250, 3,), bias=[-1, -1, -1], scale=1/255)
classifier_config = ct.ClassifierConfig(['Billy', 'Not_Billy'])
core_model = ct.convert(
    <path_to_saved_model>,
    convert_to='mlprogram',
    inputs=[image_input],
    classifier_config=classifier_config,
    source='tensorflow'
)

I keep receiving this error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/7n/vj_bf6q122bg43h_xm957hp80000gn/T/ipykernel_11024/1565729572.py in
      6     inputs=[image_input],
      7     classifier_config=classifier_config,
----> 8     source='tensorflow'
      9 )

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in convert(model, source, inputs, outputs, classifier_config, minimum_deployment_target, convert_to, compute_precision, skip_model_load, compute_units, package_dir, debug, pass_pipeline)
    466     _validate_conversion_arguments(model, exact_source, inputs, outputs_as_tensor_or_image_types,
    467                                    classifier_config, compute_precision,
--> 468                                    exact_target, minimum_deployment_target)
    469
    470     if pass_pipeline is None:

~/Documents/Python/.venv/lib/python3.7/site-packages/coremltools/converters/_converters_entry.py in _validate_conversion_arguments(model, exact_source, inputs, outputs, classifier_config, compute_precision, convert_to, minimum_deployment_target)
    722     if exact_source == "tensorflow" and not _HAS_TF_1:
    723         raise ValueError(
--> 724             'Converter was called with source="tensorflow", but missing ' "tensorflow package"
    725         )
    726

ValueError: Converter was called with source="tensorflow", but missing tensorflow package
1 reply · 0 boosts · 582 views · Jul ’23
How to get recommendations for new user in MLRecommender model
I have a dataset with three columns: "item_id", "user_id", and "rating". I created a Core ML MLRecommender model from this dataset. I want to use this model to get the top 10 predictions for a new user (not in the original dataset) who has rated a subset of the items in the dataset. I don't see any API in the Apple docs to do this; both of the recommendations APIs seem to accept only an existing user ID and return recommendations for that user. The WWDC tutorial talks about a prediction API to achieve this, but I don't see it in the Apple API documentation, and the code below from the WWDC tutorial cannot be used as-is since it does not explain how to create the HikingRouteRecommenderInput class it passes into the prediction API.

let hikes: [String: Double] = ["Granite Peak": 5, "Wildflower Meadows": 4]
let input = HikingRouteRecommenderInput(items: hikes, k: 5)
// Get results as a sequence of recommended items
let results = try model.prediction(input: input)

Any pointers on how to get predictions for a new user would be greatly appreciated.
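
For what it's worth, here is a sketch of calling a recommender .mlmodel through the generic MLModel API, which is what a generated input class like HikingRouteRecommenderInput wraps. The feature names used here ("items" and "k") are assumptions taken from the tutorial snippet; the real input names and the output feature should be checked against the model's modelDescription:

import CoreML

// Generic invocation of a Create ML recommender model without its generated input class.
// Input names ("items", "k") are assumptions; verify them via model.modelDescription.
func recommendations(from model: MLModel, ratings: [String: Double], k: Int) throws -> MLFeatureProvider {
    var items: [AnyHashable: NSNumber] = [:]
    for (itemID, rating) in ratings {
        items[itemID] = NSNumber(value: rating)
    }
    let itemsFeature = try MLFeatureValue(dictionary: items)
    let provider = try MLDictionaryFeatureProvider(dictionary: [
        "items": itemsFeature,
        "k": MLFeatureValue(int64: Int64(k))
    ])
    // The returned provider's featureNames reveal how the recommendations are exposed.
    return try model.prediction(from: provider)
}
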
0 replies · 0 boosts · 405 views · Jul ’23