Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

109 Posts
Post marked as solved
3 Replies
1.3k Views
There seems to be a new MLE5Engine in iOS 17 and macOS 14 that causes issues with our style transfer models:

1. The output is wrong (just gray pixels) and not the same as on iOS 16.
2. There is a large memory leak: memory consumption increases rapidly with each new frame.

Concerning 2): a lot of CVPixelBuffers leak during prediction. Those buffers somehow hold references to themselves and are not released properly. Here is a stack trace of how the buffers are created:

0 _malloc_zone_malloc_instrumented_or_legacy
1 _CFRuntimeCreateInstance
2 CVObject::alloc(unsigned long, _CFAllocator const*, unsigned long, unsigned long)
3 CVPixelBuffer::alloc(_CFAllocator const*)
4 CVPixelBufferCreate
5 +[MLMultiArray(ImageUtils) pixelBufferBGRA8FromMultiArrayCHW:channelOrderIsBGR:error:]
6 MLE5OutputPixelBufferFeatureValueByCopyingTensor
7 -[MLE5OutputPortBinder _makeFeatureValueFromPort:featureDescription:error:]
8 -[MLE5OutputPortBinder _makeFeatureValueAndReturnError:]
9 __36-[MLE5OutputPortBinder featureValue]_block_invoke
10 _dispatch_client_callout
11 _dispatch_lane_barrier_sync_invoke_and_complete
12 -[MLE5OutputPortBinder featureValue]
13 -[MLE5OutputPort featureValue]
14 -[MLE5ExecutionStreamOperation outputFeatures]
15 -[MLE5Engine _predictionFromFeatures:options:usingStream:operation:error:]
16 -[MLE5Engine _predictionFromFeatures:options:error:]
17 -[MLE5Engine predictionFromFeatures:options:error:]
18 -[MLDelegateModel predictionFromFeatures:options:error:]
19 StyleModel.prediction(input:options:)

When manually disabling the use of the MLE5Engine, the models run as expected. Is this an issue caused by our model, or is it a bug in Core ML?
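For what it's worth, a common way to sidestep an engine-specific regression while the framework issue is investigated is to restrict the compute units when loading the model. This is only a hedged sketch: it is not a documented switch for MLE5Engine specifically, and it assumes the Xcode-generated StyleModel class named in the stack trace above.

import CoreML

// Sketch: push Core ML onto a different execution path by limiting compute units.
// .cpuAndGPU avoids the default .all; try .cpuOnly if the GPU path also misbehaves.
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU

do {
    let styleModel = try StyleModel(configuration: config)
    // let output = try styleModel.prediction(input: someInput)
} catch {
    print("Failed to load StyleModel:", error)
}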
Posted. Last updated.
Post not yet marked as solved
0 Replies
936 Views
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 as on iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic).

iOS 16 - iPhone SE 3rd gen (A15 Bionic): iOS 16 uses the ANE, which results in fast prediction, load, and compilation times.

iOS 17 - iPhone 13 Pro (A15 Bionic): iOS 17 doesn't seem to use the ANE, so the prediction, load, and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS (e.g. macOS version or Linux type): Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Any other relevant version information (e.g. PyTorch or TensorFlow version): PyTorch 2.0

Additional context: This happens with both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use the ANE on iOS 16.

If anyone has a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
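As a quick on-device sanity check (not part of the original post), one can load the same compiled model with different compute-unit settings and compare prediction latency; if .cpuAndNeuralEngine is no faster than .cpuOnly, the ANE is probably not being used. The model URL and feature provider below are placeholders.

import CoreML
import Foundation

// Sketch: time a single prediction under a given compute-unit setting.
// modelURL must point at a compiled .mlmodelc (or a model compiled at runtime).
func timePrediction(modelURL: URL, units: MLComputeUnits, input: MLFeatureProvider) throws -> TimeInterval {
    let config = MLModelConfiguration()
    config.computeUnits = units
    let model = try MLModel(contentsOf: modelURL, configuration: config)

    let start = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    return CFAbsoluteTimeGetCurrent() - start
}

// Usage idea: compare timePrediction(..., units: .cpuOnly, ...) against
// timePrediction(..., units: .cpuAndNeuralEngine, ...) on iOS 16 and iOS 17.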
Posted by mrdbourke. Last updated.
Post not yet marked as solved
1 Reply
605 Views
Hello, I've been working on an app that involves training a neural network model on the iPhone. I've been using the Metal Performance Shaders Graph (MPS Graph) for this purpose. During training, the loss becomes NaN on iOS 17 (21A329). I noticed that the official sample code for Training a Neural Network using MPS Graph (link) works perfectly fine on Xcode 14.3.1 with iOS 16.6.1. However, when I run the same code on Xcode 15.0 beta 8 with iOS 17.0 (21A329), the training process produces a NaN loss in the function updateProgressCubeAndLoss. The official sample code and my own app exhibit the same issue. Has anyone else experienced this? Is this a known bug, or is there something specific that needs to be adjusted for iOS 17? Any guidance would be greatly appreciated. Thank you!
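When debugging this kind of regression it can help to trap the first non-finite loss value rather than letting training continue. A hedged sketch, assuming the loss comes back as a single-element MPSGraphTensorData (the names here are placeholders, not from the sample project):

import MetalPerformanceShadersGraph

// Copy a scalar loss back to the CPU and check it for NaN/inf so the failing
// training step can be isolated.
func lossIsFinite(_ lossData: MPSGraphTensorData) -> Bool {
    let ndArray = lossData.mpsndarray()
    var value = [Float](repeating: 0, count: 1)   // assumes a single scalar loss
    ndArray.readBytes(&value, strideBytes: nil)
    return value.allSatisfy { $0.isFinite }
}

// Usage idea: if !lossIsFinite(results[lossTensor]!) { /* dump inputs, lower the learning rate, etc. */ }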
Posted by shenjiaqi. Last updated.
Post not yet marked as solved
2 Replies
636 Views
I accidentally updated to Sonoma and found that my Core ML models have been running nearly 7x slower since the update. I also no longer get the verbose information in the terminal (i.e. time taken per cycle, deviation from the actual result, etc.). This is an app built with Xcode and Swift for macOS. The M1 laptop I am using is also under considerably less stress (i.e. it is no longer getting warm). Is there a flag I need to set or a button I need to press to restore performance? Any suggestions would be helpful. Please note this is 24hr+ since the update, so it should no longer be affected by the usual background tasks that follow an upgrade.
Posted. Last updated.
Post not yet marked as solved
0 Replies
613 Views
I am trying to scan credit cards, but I am having issues with embossed (pressed) card numbers, and cards with dark numbers on dark backgrounds are not scanning at all. Please help me get a smooth scan for every card. Thanks.
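The post doesn't say which API is being used, so purely as a hedged sketch: Vision's text recognition with the accurate recognition level and language correction disabled tends to cope better with embossed digits, though dark-on-dark cards may still need better lighting or image preprocessing.

import Vision
import CoreGraphics
import Foundation

// Sketch: recognize candidate card-number strings in a CGImage.
// The image source and result filtering are placeholders.
func recognizeCardText(in image: CGImage, completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, _ in
        let observations = request.results as? [VNRecognizedTextObservation] ?? []
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate       // better for low-contrast, embossed digits
    request.usesLanguageCorrection = false     // don't "correct" digit sequences

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}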
Posted. Last updated.
Post not yet marked as solved
0 Replies
335 Views
annotation.js file:

[
    { "filename": "image1.jpg", "annotations": ["terminal airport", "two people"] },
    { "filename": "image2.jpg", "annotations": ["airport", "two people"] },
    { "filename": "image3.jpg", "annotations": ["airport", "one person"] },
    { "filename": "image4.jpg", "annotations": ["airport", "two people", "more people"] },
    { "filename": "image5.jpg", "annotations": ["airport", "one person"] }
]
Posted. Last updated.
Post not yet marked as solved
0 Replies
586 Views
Unfortunately, NLTagger's enumerateTags() API is broken. Even after following Apple's example from the developer documentation, which is meant to demonstrate how to identify parts of speech in a given text, the only tag returned for each word is OtherWord. I thought maybe it was a Simulator issue, so I quickly implemented an iOS app and copied over the code, but it didn't help. I even tried Xcode 15.0 beta 7: same result. Dear Core ML team, please fix/restore this functionality. Thanks!
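For reference, here is a self-contained version of the documented parts-of-speech pattern the post refers to; if this still prints only OtherWord on a given Simulator/OS build, the tagger's language assets may simply be unavailable there (a guess, not a confirmed cause).

import NaturalLanguage

// Standard parts-of-speech enumeration from the NLTagger documentation.
let text = "The ripe taste of cheese improves with age."
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: options) { tag, tokenRange in
    if let tag = tag {
        print("\(text[tokenRange]): \(tag.rawValue)")
    }
    return true
}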
Posted by knyisztor. Last updated.
Post not yet marked as solved
12 Replies
4.1k Views
Hello, I'm new to Core ML and I'm building a test app with the models that already exist. I'm getting the following error when classifying an image: [coreml] Failed to get the home directory when checking model path. I would appreciate your help solving this error. Thanks.
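The log line about the home directory may be unrelated to the classification code itself. For comparison, a minimal sketch of a typical classification call, assuming MobileNetV2.mlmodel has been added to the project so Xcode generates the MobileNetV2 class (any bundled classifier works the same way):

import CoreML
import Vision
import CoreGraphics

// Sketch: classify a CGImage with a bundled Core ML classifier via Vision.
func classify(_ cgImage: CGImage) throws {
    let coreMLModel = try MobileNetV2(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)

    let request = VNCoreMLRequest(model: vnModel) { request, _ in
        if let best = (request.results as? [VNClassificationObservation])?.first {
            print("\(best.identifier): \(best.confidence)")
        }
    }
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
}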
Posted. Last updated.
Post not yet marked as solved
4 Replies
1k Views
I have tried many times. When I change the file or re-create it, it returns the following error:

{
    "code": 400,
    "message": "InvalidArgumentError: Unable to unzip MLArchive",
    "reason": "There was a problem with your request.",
    "detailedMessage": "InvalidArgumentError: Unable to unzip MLArchive",
    "requestUuid": "699afb97-8328-4a83-b186-851f797942aa"
}
Posted by Abenx. Last updated.
Post not yet marked as solved
1 Reply
407 Views
Hi, since MLModelCollection is deprecated, I have created an mlarchive from my mlpackage files, which I can upload to cloud storage and download on the device at runtime. But how do I use the mlarchive to create an instance of MLModel? If this is not possible, then please tell me in what form I can upload an mlpackage to cloud storage and then consume it in the app at runtime. I don't want to bundle the mlpackage inside the app, as it increases the app size, which is not acceptable to us.
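One hedged alternative (this does not answer the mlarchive question itself): upload the raw .mlmodel or .mlpackage to cloud storage, download it at runtime, compile it on device with MLModel.compileModel(at:), and load the result. The URLs below are placeholders.

import CoreML
import Foundation

// Sketch: compile a downloaded model on device and load it.
// downloadedURL is assumed to point at a .mlmodel/.mlpackage already fetched
// from cloud storage (e.g. via URLSession).
func loadDownloadedModel(at downloadedURL: URL) throws -> MLModel {
    // Produces a compiled .mlmodelc in a temporary location.
    let compiledURL = try MLModel.compileModel(at: downloadedURL)

    // Consider moving compiledURL somewhere permanent before loading,
    // since the temporary copy may be cleaned up by the system.
    return try MLModel(contentsOf: compiledURL, configuration: MLModelConfiguration())
}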
Posted. Last updated.
Post not yet marked as solved
0 Replies
464 Views
I'm trying to get the WWDC2020 Sports Analysis code running. It's the project named BuildingAFeatureRichAppForSportsAnalysis. It seems that now the boardDetectionRequest fails when trying to run the code in the simulator. The main error that I get is:

Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x6000024991d0 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline}}}

The problem is that I can't tell why the VNImageRequestHandler is failing when trying to detect the board. It doesn't say that it got a bad image. It doesn't say that it didn't detect a board. I'm running the code against the sample movie provided. I believe this used to work.

The other error that I see, upon initialization in Common.warmUpVisionPipeline when trying to load, is:

2023-09-07 12:58:59.239614-0500 ActionAndVision[3499:34083] [coreml] Failed to get the home directory when checking model path.

From what I can tell in the debugger, though, the board detection model did load. Thanks.
Posted by markj9. Last updated.
Post not yet marked as solved
6 Replies
1.5k Views
Hey, I'm a web developer developing a macOS app for the first time. I need a vector database where data will be stored on the user's machine. I'm familiar with libraries like FAISS, but I'm aware that it does not have Swift bindings and, from a brief look, appears fairly annoying to get working with a macOS app. I'm wondering if Apple has a similar library available in their dev kit? I don't need much, just something to store the vectors in a database, do a cosine similarity search on them, and maybe add some additional metadata to each vector embedding. If not, is bridging libraries like this a common thing to do when developing iOS/macOS apps?
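As far as I know the Apple SDKs don't ship a vector database, but for modest collections a brute-force cosine-similarity search over stored embeddings is straightforward with Accelerate; persistence (Core Data, SQLite, flat files) is a separate choice. A hedged sketch:

import Accelerate

// Brute-force cosine-similarity search over in-memory embeddings.
// How the vectors and their metadata are persisted is up to the app.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = vDSP.dot(a, b)
    let normA = vDSP.sumOfSquares(a).squareRoot()
    let normB = vDSP.sumOfSquares(b).squareRoot()
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

func nearest(to query: [Float],
             in corpus: [(id: String, vector: [Float])],
             topK: Int = 5) -> [(id: String, score: Float)] {
    corpus
        .map { (id: $0.id, score: cosineSimilarity(query, $0.vector)) }
        .sorted { $0.score > $1.score }
        .prefix(topK)
        .map { $0 }
}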
Posted by jonflynng. Last updated.
Post not yet marked as solved
2 Replies
733 Views
In the ml-ane-transformers repo, there is a custom LayerNorm implementation for the Neural Engine-optimized shape of (B,C,1,S). The coremltools documentation makes it sound like the layer_norm MIL op would support this natively. In fact, the following code works on CPU:

B,C,S = 1,768,512
g,b = 1, 0

@mb.program(input_specs=[mb.TensorSpec(shape=(B,C,1,S)),])
def ln_prog(x):
    gamma = (torch.ones((C,), dtype=torch.float32) * g).tolist()
    beta = (torch.ones((C), dtype=torch.float32) * b).tolist()
    return mb.layer_norm(x=x, axes=[1], gamma=gamma, beta=beta, name="y")

However, it fails when run on the Neural Engine, giving results that are scaled by an incorrect value. Should this work on the Neural Engine?
Posted by smpanaro. Last updated.
Post not yet marked as solved
7 Replies
2.8k Views
We have 10 Core ML models in our app, each encrypted with a separate key generated in Xcode. After opening and closing the app 6-7 times, the app crashes at model initialization with this error:

2021-04-21 13:52:47.711729+0300 MyApp[95443:7341643] Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=9 "Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905" UserInfo={NSLocalizedDescription=Failed to generate key request for 08494FB2-B070-440F-A8A5-CBD0823A258E with error: -42905}: file MyApp/Model.swift, line 43

It looks like the iPhone is blocking the app for suspicious behavior, and the app then fails to decrypt the model. We noticed that after ~10 hours the app is unblocked, and it successfully decrypts and initializes the model. Opening and closing the app many times in a short period of time is indeed unnatural, but the most important question is how to avoid being blocked. Would Apple block the app if a user opens and closes it 10 times during a day? How does the number of models in the app affect the probability that the app will be blocked? Thanks!
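Separately from the throttling question, the crash itself comes from a 'try!' at model initialization. A hedged sketch of loading an encrypted model through the asynchronous API so a key-fetch failure becomes a recoverable error (MyEncryptedModel is a placeholder for the Xcode-generated class):

import CoreML

// Sketch: surface decryption/key-fetch failures instead of crashing on try!.
func loadEncryptedModel(completion: @escaping (Result<MyEncryptedModel, Error>) -> Void) {
    MyEncryptedModel.load(configuration: MLModelConfiguration()) { result in
        switch result {
        case .success(let model):
            completion(.success(model))
        case .failure(let error):
            // e.g. schedule a retry with backoff, or fall back to a bundled model.
            completion(.failure(error))
        }
    }
}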
Posted by alievk91. Last updated.
Post not yet marked as solved
1 Reply
607 Views
Hi there, I'm trying to convert my Core ML model (it's actually an .mlpackage) to .mpsgraphpackage so I can test the performance of my model with the MPSGraph API. I run the command you provide in the terminal, but it just does nothing (the command executes forever). In Activity Monitor, the terminal uses 0.0% of the CPU. My Xcode version is 15.0 beta 6 (15A5219j) and I'm running macOS Sonoma 14.0 beta (23A5312d).
Posted by ohnatiuk. Last updated.
Post not yet marked as solved
2 Replies
1.5k Views
With the release of Xcode 13, a large section of my Vision framework processing code produces errors and cannot compile; all of these calls are now deprecated. This is my original code:

do {
    // Perform VNDetectHumanHandPoseRequest
    try handler.perform([handPoseRequest])
    // Continue only when a hand was detected in the frame.
    // Since we set the maximumHandCount property of the request to 1, there will be at most one observation.
    guard let observation = handPoseRequest.results?.first else {
        self.state = "no hand"
        return
    }
    // Get points for thumb and index finger.
    let thumbPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyThumb)
    let indexFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyIndexFinger)
    let middleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyMiddleFinger)
    let ringFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyRingFinger)
    let littleFingerPoints = try observation.recognizedPoints(forGroupKey: .handLandmarkRegionKeyLittleFinger)
    let wristPoints = try observation.recognizedPoints(forGroupKey: .all)

    // Look for tip points.
    guard let thumbTipPoint = thumbPoints[.handLandmarkKeyThumbTIP],
          let thumbIpPoint = thumbPoints[.handLandmarkKeyThumbIP],
          let thumbMpPoint = thumbPoints[.handLandmarkKeyThumbMP],
          let thumbCMCPoint = thumbPoints[.handLandmarkKeyThumbCMC] else {
        self.state = "no tip"
        return
    }

    guard let indexTipPoint = indexFingerPoints[.handLandmarkKeyIndexTIP],
          let indexDipPoint = indexFingerPoints[.handLandmarkKeyIndexDIP],
          let indexPipPoint = indexFingerPoints[.handLandmarkKeyIndexPIP],
          let indexMcpPoint = indexFingerPoints[.handLandmarkKeyIndexMCP] else {
        self.state = "no index"
        return
    }

    guard let middleTipPoint = middleFingerPoints[.handLandmarkKeyMiddleTIP],
          let middleDipPoint = middleFingerPoints[.handLandmarkKeyMiddleDIP],
          let middlePipPoint = middleFingerPoints[.handLandmarkKeyMiddlePIP],
          let middleMcpPoint = middleFingerPoints[.handLandmarkKeyMiddleMCP] else {
        self.state = "no middle"
        return
    }

    guard let ringTipPoint = ringFingerPoints[.handLandmarkKeyRingTIP],
          let ringDipPoint = ringFingerPoints[.handLandmarkKeyRingDIP],
          let ringPipPoint = ringFingerPoints[.handLandmarkKeyRingPIP],
          let ringMcpPoint = ringFingerPoints[.handLandmarkKeyRingMCP] else {
        self.state = "no ring"
        return
    }

    guard let littleTipPoint = littleFingerPoints[.handLandmarkKeyLittleTIP],
          let littleDipPoint = littleFingerPoints[.handLandmarkKeyLittleDIP],
          let littlePipPoint = littleFingerPoints[.handLandmarkKeyLittlePIP],
          let littleMcpPoint = littleFingerPoints[.handLandmarkKeyLittleMCP] else {
        self.state = "no little"
        return
    }

    guard let wristPoint = wristPoints[.handLandmarkKeyWrist] else {
        self.state = "no wrist"
        return
    }
    ...
}

Now every line from thumbPoints onwards results in an error. I have fixed the first part (not sure if it is correct, as it cannot compile) to:

let thumbPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.thumb.rawValue)
let indexFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.indexFinger.rawValue)
let middleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.middleFinger.rawValue)
let ringFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.ringFinger.rawValue)
let littleFingerPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)
let wristPoints = try observation.recognizedPoints(forGroupKey: VNHumanHandPoseObservation.JointsGroupName.littleFinger.rawValue)

I tried many different things but just could not get the retrieval of individual points to work. Can anyone help me fix this?
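For reference, a hedged sketch of the typed API that replaced the deprecated string group keys (the forGroupKey:/handLandmarkKey... constants): recognizedPoints(_:) takes a JointsGroupName and returns a dictionary keyed by JointName. Note that the updated snippet above fetches wristPoints with the little-finger group, which looks like a copy-paste slip; .all (and then looking up .wrist) is probably what was intended.

import Vision

// Sketch, not the poster's code: typed joint-group and joint-name lookups.
func handlePose(_ observation: VNHumanHandPoseObservation) throws {
    let thumbPoints = try observation.recognizedPoints(.thumb)
    let indexPoints = try observation.recognizedPoints(.indexFinger)
    let wristPoints = try observation.recognizedPoints(.all)

    guard let thumbTip = thumbPoints[.thumbTip],
          let thumbIP = thumbPoints[.thumbIP],
          let indexTip = indexPoints[.indexTip],
          let wrist = wristPoints[.wrist] else {
        return
    }

    // Only use points the framework is reasonably confident about.
    let confident = [thumbTip, thumbIP, indexTip, wrist].filter { $0.confidence > 0.3 }
    print("Confident joint locations:", confident.map(\.location))
}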
Posted. Last updated.
Post not yet marked as solved
3 Replies
698 Views
Hey all, we are currently training a Hand Pose model with the current release of Create ML, and during the feature extraction phase we get the following error:

Assertion failed: (/AppleInternal/Library/BuildRoots/d9889869-120b-11ee-b796-7a03568b17ac/Library/Caches/com.apple.xbs/Sources/TuriCore/turicreate_oss/src/core/storage/DataTable_data/DataColumn_v2_block_manager.cpp:105): seg->blocks.size()>column_id [0 > 0]

We have tried to search this online and mitigate the issue, but we are getting nowhere - has anyone else experienced this issue?
Posted by dougkilby. Last updated.
Post not yet marked as solved
1 Reply
737 Views
Hey, are there any limits on the windowDuration property of the AudioFeaturePrint transformer, such as a minimum or maximum value? If we create a model with the Create ML UI app, upon selecting AudioFeaturePrint as the feature extractor we cannot go below 0.5 seconds for the window duration. Is the limit the same if we programmatically create a model using AudioFeaturePrint?
Posted by mspattan. Last updated.
Post marked as solved
3 Replies
886 Views
Is it possible to create an updatable sound classifier model that uses Apple's built-in MLSoundClassifier (available via Create ML) and can be trained/personalized on device using Core ML? I have looked in quite a few places for a long while; I know that when on-device training was initially announced in 2019, updatable models were restricted to non-built-in classifiers, but any additional information that may have come out after 2019 in this regard has been hard to find.
Posted by mspattan. Last updated.
Post not yet marked as solved
1 Reply
928 Views
Hello, I am a student doing research for my thesis on Create ML, shape recognition, and image processing. For this subject I want to find the details of the steps Create ML uses, such as the pre-processing techniques, the feature-extraction methods, and the filters applied, etc.
Posted. Last updated.