Integrate machine learning models into your app using Core ML.

Posts under Core ML tag

118 Posts

FAISS vs Apple vector search library?
Hey, I'm a web developer building a macOS app for the first time. I need a vector database where the data is stored on the user's machine. I'm familiar with libraries like FAISS, but it has no Swift bindings and, from a brief look, appears fairly annoying to get working in a macOS app. Does Apple provide a similar library in its SDKs? I don't need much: something to store the vectors in a database, run a cosine-similarity search on them, and maybe attach some additional metadata to each vector embedding. If not, is bridging libraries like this a common thing to do when developing iOS/macOS apps?
Replies: 6 · Boosts: 1 · Views: 1.9k · Sep ’23
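Apple's SDKs don't offer a drop-in vector database, but for the modest scale described above, a brute-force cosine-similarity search over stored embeddings is easy to write with the Accelerate framework. A minimal sketch, assuming the embeddings are already loaded into memory as [Float] arrays (the persistence layer, e.g. SQLite or flat files, is left out):

import Accelerate

// Cosine similarity between two equal-length Float vectors, using vDSP.
func cosineSimilarity(_ a: [Float], _ b: [Float]) -> Float {
    let dot = vDSP.dot(a, b)
    let normA = vDSP.sumOfSquares(a).squareRoot()
    let normB = vDSP.sumOfSquares(b).squareRoot()
    guard normA > 0, normB > 0 else { return 0 }
    return dot / (normA * normB)
}

// Brute-force top-k search over an in-memory collection of embeddings.
// The (id, vector) tuples stand in for whatever storage scheme you choose.
func topMatches(query: [Float],
                embeddings: [(id: String, vector: [Float])],
                k: Int = 5) -> [(id: String, score: Float)] {
    return embeddings
        .map { (id: $0.id, score: cosineSimilarity(query, $0.vector)) }
        .sorted { $0.score > $1.score }
        .prefix(k)
        .map { $0 }
}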
How to consume an mlarchive in app without MLModelCollection
Hi, since MLModelCollection is deprecated, I have created mlarchive files from my mlpackage files, which I can upload to cloud storage and download to the device at runtime. But how do I use the mlarchive to create an instance of MLModel? If this is not possible, please tell me in what form I can upload an mlpackage to cloud storage and then consume it in the app at runtime. I don't want to bundle the mlpackage inside the app, as it increases the app size, which is not acceptable to us.
Replies: 1 · Boosts: 0 · Views: 484 · Sep ’23
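There is no obvious public API for loading an mlarchive directly outside of the MLModelCollection deployment workflow; a commonly used alternative is to upload the uncompiled model, compile it on device, and cache the result. A minimal sketch of that pattern, with a placeholder file name and the download step omitted:

import CoreML

// Compiles a downloaded model (.mlmodel or .mlpackage) on device, caches the
// compiled .mlmodelc in Caches, and loads it. `downloadedURL` is assumed to
// point at a model file already fetched from your cloud storage.
func loadDownloadedModel(at downloadedURL: URL) throws -> MLModel {
    let cachesDir = try FileManager.default.url(for: .cachesDirectory,
                                                in: .userDomainMask,
                                                appropriateFor: nil,
                                                create: true)
    let compiledURL = cachesDir.appendingPathComponent("DownloadedModel.mlmodelc")  // placeholder name

    if !FileManager.default.fileExists(atPath: compiledURL.path) {
        // compileModel(at:) writes to a temporary location; move the result somewhere permanent.
        let tempURL = try MLModel.compileModel(at: downloadedURL)
        try FileManager.default.moveItem(at: tempURL, to: compiledURL)
    }

    return try MLModel(contentsOf: compiledURL, configuration: MLModelConfiguration())
}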
Sports Analysis Code
I'm trying to get the WWDC 2020 sports analysis code running; it's the project named BuildingAFeatureRichAppForSportsAnalysis. It seems that the boardDetectionRequest now fails when running the code in the simulator. The main error I get is:

Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x6000024991d0 {Error Domain=com.apple.CoreML Code=0 "Failed to evaluate model 0 in pipeline" UserInfo={NSLocalizedDescription=Failed to evaluate model 0 in pipeline}}}

The problem is that I can't tell why the VNImageRequestHandler is failing when trying to detect the board. It doesn't say that it got a bad image, and it doesn't say that it didn't detect a board. I'm running the code against the sample movie provided, and I believe this used to work. The other error I see is during initialization in Common.warmUpVisionPipeline, when loading:

2023-09-07 12:58:59.239614-0500 ActionAndVision[3499:34083] [coreml] Failed to get the home directory when checking model path.

From what I can tell in the debugger, though, the board-detection model did load. Thanks.
Replies: 0 · Boosts: 0 · Views: 564 · Sep ’23
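One way to narrow this down is to run the board-detection model against a single frame directly, outside the sample's request pipeline, and see whether the Core ML evaluation itself fails. A rough sketch of that kind of check; BoardDetector is a placeholder, not the sample's actual generated class name:

import Vision
import CoreML

// Runs a Core ML detector on one CGImage to check whether the model evaluates
// at all, independent of the sample's pipeline. `BoardDetector` is a placeholder
// for the generated model class used by the sample project.
func debugDetectBoard(in image: CGImage) {
    do {
        let coreMLModel = try BoardDetector(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel) { request, error in
            if let error = error {
                print("Request failed:", error)
            } else {
                print("Observations:", request.results ?? [])
            }
        }
        let handler = VNImageRequestHandler(cgImage: image, options: [:])
        try handler.perform([request])
    } catch {
        print("Setup or perform failed:", error)
    }
}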
MPS Graph Neural Network Training Produces NaN Loss on Xcode 15.0 beta 8 + iOS 17.0
Hello, I've been working on an app that involves training a neural network model on the iPhone, using the Metal Performance Shaders Graph (MPSGraph). During training, the loss becomes NaN on iOS 17 (21A329). The official sample code for training a neural network using MPSGraph (link) works perfectly fine on Xcode 14.3.1 with iOS 16.6.1, but when I run the same code on Xcode 15.0 beta 8 with iOS 17.0 (21A329), the training process produces a NaN loss in the function updateProgressCubeAndLoss. The official sample code and my own app exhibit the same issue. Has anyone else experienced this? Is this a known bug, or is there something specific that needs to be adjusted for iOS 17? Any guidance would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 732 · Oct ’23
NLTagger's part-of-speech identification is broken
Unfortunately, NLTagger's enumerateTags() API appears to be broken. Even after following Apple's example from the developer documentation, which demonstrates how to identify parts of speech in a given text, the only tag returned for each word is OtherWord. I thought it might be a Simulator issue, so I quickly implemented an iOS app and copied the code over, but that didn't help. I even tried Xcode 15.0 beta 7: same result. Dear Core ML team, please fix/restore this functionality. Thanks!
Replies: 0 · Boosts: 1 · Views: 675 · Sep ’23
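For reference, the documented part-of-speech enumeration the post refers to looks roughly like this; on the affected OS builds it reportedly yields only .otherWord for every token instead of nouns, verbs, and so on:

import NaturalLanguage

let text = "The quick brown fox jumps over the lazy dog."
let tagger = NLTagger(tagSchemes: [.lexicalClass])
tagger.string = text

let options: NLTagger.Options = [.omitPunctuation, .omitWhitespace]
tagger.enumerateTags(in: text.startIndex..<text.endIndex,
                     unit: .word,
                     scheme: .lexicalClass,
                     options: options) { tag, range in
    if let tag = tag {
        // Expected output: parts of speech such as Determiner, Adjective, Noun, Verb.
        print("\(text[range]): \(tag.rawValue)")
    }
    return true
}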
Error in Core ML data training: Data Analysis stopped
annotation.js file:

[
  { "filename": "image1.jpg", "annotations": ["terminal airport", "two people"] },
  { "filename": "image2.jpg", "annotations": ["airport", "two people"] },
  { "filename": "image3.jpg", "annotations": ["airport", "one person"] },
  { "filename": "image4.jpg", "annotations": ["airport", "two people", "more people"] },
  { "filename": "image5.jpg", "annotations": ["airport", "one person"] }
]
Replies: 0 · Boosts: 0 · Views: 414 · Sep ’23
Generating models with CoreML in Sonoma slowed significantly
I accidentally updated to Sonoma and found that my Core ML models generate nearly 7x slower since the update. I also no longer get the verbose information in the terminal (i.e. time taken per cycle, deviation from the actual result, etc.). This is using Xcode and Swift, developed for macOS. The M1 laptop I am using is also under considerably less stress (i.e. it is no longer getting warm). Is there a flag I need to set or a button I need to press to restore performance? Any suggestions would be helpful. Please note this is 24+ hours since the update, so it should no longer be affected by the usual post-upgrade background tasks.
Replies: 3 · Boosts: 2 · Views: 742 · Oct ’23
Core ML Model performance far lower on iOS 17 vs iOS 16 (iOS 17 not using Neural Engine)
Hello, I posted an issue on the coremltools GitHub about my Core ML models not performing as well on iOS 17 as on iOS 16, but I'm posting it here just in case.

TL;DR: The same model on the same device/chip performs far slower (doesn't use the Neural Engine) on iOS 17 compared to iOS 16.

Longer description: The following screenshots show the performance of the same model (a PyTorch computer vision model) on an iPhone SE 3rd gen and an iPhone 13 Pro (both use the A15 Bionic). On iOS 16 (iPhone SE 3rd gen, A15 Bionic), the ANE is used, resulting in fast prediction, load, and compilation times. On iOS 17 (iPhone 13 Pro, A15 Bionic), the ANE doesn't seem to be used, so the prediction, load, and compilation times are all slower.

Code to reproduce: The following is the code I'm using to export my PyTorch vision model (using coremltools). I've used the same code for the past few months with sensational results on iOS 16.

# Convert to Core ML using the Unified Conversion API
coreml_model = ct.convert(
    model=traced_model,
    inputs=[image_input],
    outputs=[ct.TensorType(name="output")],
    classifier_config=ct.ClassifierConfig(class_names),
    convert_to="neuralnetwork",
    # compute_precision=ct.precision.FLOAT16,
    compute_units=ct.ComputeUnit.ALL
)

System environment:
Xcode version: 15.0
coremltools version: 7.0.0
OS: Linux Ubuntu 20.04 (for exporting), macOS 13.6 (for testing in Xcode)
Other relevant versions: PyTorch 2.0

Additional context: This happens across both "neuralnetwork" and "mlprogram" model types; neither uses the ANE on iOS 17, but both use it on iOS 16. If anyone has had a similar experience, I'd love to hear more. Otherwise, if I'm doing something wrong when exporting models for iOS 17+, please let me know. Thank you!
Replies: 0 · Boosts: 1 · Views: 1.1k · Oct ’23
Xcode stuck with certain bundle ID
In the midst of developing an app, Xcode suddenly can't run it on the device I have been using and gives the error below. To fix it, I have reset my laptop and iPhone, deleted the files in ~/Library/Developer/Xcode, and rebuilt, multiple times; it just doesn't work. I CAN run on another handset, but that handset is older and doesn't have enough RAM, so I can't depend on it. Then I found out how to fix it: I just change to ANOTHER bundle ID, and it works. The question of this post is that I want to use the original bundle ID; if this happens again after I have uploaded the app to the App Store, I can't just keep changing the bundle ID. I think the problem is on the handset, maybe some settings data loaded onto it. Is there any way I can fix this?

An error occurred while communicating with a remote process.
Domain: com.apple.dt.CoreDeviceError
Code: 3
User Info: {
    DVTErrorCreationDateKey = "2023-10-15 10:51:36 +0000";
    IDERunOperationFailingWorker = IDEInstallCoreDeviceWorker;
}
--
The connection was interrupted.
Domain: com.apple.Mercury.error
Code: 1000
User Info: {
    XPCConnectionDescription = "<SystemXPCPeerConnection 0x60000338fe70> { <connection: 0x600001c6a0d0> { name = com.apple.CoreDevice.CoreDeviceService, listener = false, pid = 0, euid = 4294967295, egid = 4294967295, asid = 4294967295 } }";
}
--
Event Metadata: com.apple.dt.IDERunOperationWorkerFinished : {
    "device_isCoreDevice" = 1; "device_model" = "iPhone15,2"; "device_osBuild" = "17.1 (21B5066a)"; "device_platform" = "com.apple.platform.iphoneos"; "dvt_coredevice_version" = "348.1"; "dvt_mobiledevice_version" = "1643.2.4"; "launchSession_schemeCommand" = Run; "launchSession_state" = 1; "launchSession_targetArch" = arm64; "operation_duration_ms" = 300973; "operation_errorCode" = 1000; "operation_errorDomain" = "com.apple.dt.CoreDeviceError.3.com.apple.Mercury.error"; "operation_errorWorker" = IDEInstallCoreDeviceWorker; "operation_name" = IDERunOperationWorkerGroup; "param_debugger_attachToExtensions" = 0; "param_debugger_attachToXPC" = 1; "param_debugger_type" = 3; "param_destination_isProxy" = 0; "param_destination_platform" = "com.apple.platform.iphoneos"; "param_diag_MainThreadChecker_stopOnIssue" = 0; "param_diag_MallocStackLogging_enableDuringAttach" = 0; "param_diag_MallocStackLogging_enableForXPC" = 1; "param_diag_allowLocationSimulation" = 1; "param_diag_checker_tpc_enable" = 0; "param_diag_gpu_frameCapture_enable" = 3; "param_diag_gpu_shaderValidation_enable" = 0; "param_diag_gpu_validation_enable" = 1; "param_diag_memoryGraphOnResourceException" = 0; "param_diag_queueDebugging_enable" = 1; "param_diag_runtimeProfile_generate" = 0; "param_diag_sanitizer_asan_enable" = 0; "param_diag_sanitizer_tsan_enable" = 0; "param_diag_sanitizer_tsan_stopOnIssue" = 0; "param_diag_sanitizer_ubsan_stopOnIssue" = 0; "param_diag_showNonLocalizedStrings" = 0; "param_diag_viewDebugging_enabled" = 1; "param_diag_viewDebugging_insertDylibOnLaunch" = 1; "param_install_style" = 0; "param_launcher_UID" = 2; "param_launcher_allowDeviceSensorReplayData" = 0; "param_launcher_kind" = 0; "param_launcher_style" = 99; "param_launcher_substyle" = 8192; "param_runnable_appExtensionHostRunMode" = 0; "param_runnable_productType" = "com.apple.product-type.application"; "param_structuredConsoleMode" = 1; "param_testing_launchedForTesting" = 0; "param_testing_suppressSimulatorApp" = 0; "param_testing_usingCLI" = 0; "sdk_canonicalName" = "iphoneos17.0"; "sdk_osVersion" = "17.0"; "sdk_variant" = iphoneos;
}
--
System Information
macOS Version 14.0 (Build 23A344)
Xcode 15.0 (22265) (Build 15A240d)
Timestamp: 2023-10-15T18:51:36+08:00
Replies: 0 · Boosts: 0 · Views: 721 · Oct ’23
NaN values when predicting with a ResNet model
coremltools: 6.2.0

When I run the Core ML model in Python, the result is good:

{'var_840': array([[-8.15439941e+02, 2.88793579e+02, -3.83110474e+02, -8.95208740e+02, -3.53131561e+02, -3.65339783e+02, -4.94590851e+02, 6.24686813e+01, -5.92614822e+01, -9.67470627e+01, -4.30247498e+02, -9.27047348e+01, 2.19661942e+01, -2.96691345e+02, -4.26566772e+02........

But when I run it in Xcode, the result looks like:

[-inf,inf,nan,-inf,nan,nan,nan,nan,nan,-inf,-inf,-inf,-inf,-inf,-inf,nan,-inf,-inf,nan,-inf,nan,nan,-inf,nan,-inf,-inf,-inf,nan,nan,nan,nan,nan,nan,nan,nan,nan,nan,-inf,nan,nan,nan,nan,-inf,nan,-inf .......

Step 1: Convert ResNet-50 to Core ML:

import torch
import torchvision

# Load a pre-trained version of the ResNet-50 model.
torch_model = torchvision.models.resnet50(pretrained=True)
# Set the model in evaluation mode.
torch_model.eval()

# Trace the model with random data.
example_input = torch.rand(1, 3, 224, 224)
traced_model = torch.jit.trace(torch_model, example_input)
out = traced_model(example_input)

# Download class labels in ImageNetLabel.txt.
# Set the image scale and bias for input image preprocessing.
import coremltools as ct

image_input = ct.ImageType(shape=example_input.shape, )

# Using image_input in the inputs parameter:
# Convert to Core ML using the Unified Conversion API.
model = ct.convert(
    traced_model,
    inputs=[image_input],
    compute_units=ct.ComputeUnit.CPU_ONLY,
)

# Save the converted model.
model.save("resnettest.mlmodel")
# Print a confirmation message.
print('model converted and saved')

Step 2: Test the Core ML model in Python:

import coremltools as ct
import PIL
import numpy as np

# Load the model
model = ct.models.MLModel('/Users/ngoclh/Downloads/resnettest.mlmodel')
print(model)

img_path = "/Users/ngoclh/gitlocal/DetectCirtochApp/DetectCirtochApp/resources/image.jpg"
img = PIL.Image.open(img_path)
img = img.resize([224, 224], PIL.Image.ANTIALIAS)

coreml_out_dict = model.predict({"input_1" : img})
print(coreml_out_dict)

Step 3: Test the Core ML model in Xcode:

func getFeature() {
    do {
        let deepLab = try VGG_emb.init() //mobilenet_emb.init()//cirtorch_emb.init()
        let image = UIImage(named: "image.jpg")
        let pixBuf = image!.pixelBuffer(width: 224, height: 224)!
        guard let output = try? deepLab.prediction(input_1: pixBuf) else {
            return
        }
        let names = output.featureNames
        print("ngoc names: ", names)
        for name in names {
            let feature = output.featureValue(for: name)
            print("ngoc feature: ", feature)
        }
    } catch {
        print(error)
    }
}
Replies: 1 · Boosts: 0 · Views: 688 · Nov ’23
Why does the model still run on the GPU when every operator is supported by the ANE?
Hi everyone, does anyone know how the device decides which compute unit (GPU, CPU, or ANE) to use when the compute units are set to ALL? I'm working on optimizing a GPT-2 model to run on the ANE. I ran the performance report for the existing model and it showed operators not supported by the ANE. I then removed these operators and converted the model to Core ML again. This time the performance report showed that every operator is supported by the ANE, but the device still prefers the GPU when the compute units are set to ALL, and prefers the CPU when the compute units are set to CPU and ANE. [Performance report screenshots: ALL; CPU and ANE] Does anyone know why? Thank you in advance!
Replies: 0 · Boosts: 0 · Views: 623 · Oct ’23
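The compute-unit setting only constrains which units Core ML is allowed to use; the runtime's own cost model still decides where each operation is dispatched, so ANE-compatible ops can legitimately end up on the GPU. A minimal sketch of loading the same compiled model under two different constraints so their dispatch behaviour can be compared in the performance report (the model URL is a placeholder):

import CoreML

// Load one compiled .mlmodelc under two different compute-unit constraints so
// their dispatch behaviour can be compared in Xcode's Core ML performance report.
// `modelURL` is a placeholder for your own compiled model.
func loadForComparison(modelURL: URL) throws -> (all: MLModel, cpuAndANE: MLModel) {
    let allConfig = MLModelConfiguration()
    allConfig.computeUnits = .all                     // CPU, GPU, and ANE allowed

    let aneConfig = MLModelConfiguration()
    aneConfig.computeUnits = .cpuAndNeuralEngine      // GPU excluded (iOS 16+ / macOS 13+)

    let allModel = try MLModel(contentsOf: modelURL, configuration: allConfig)
    let aneModel = try MLModel(contentsOf: modelURL, configuration: aneConfig)
    return (allModel, aneModel)
}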
How do we use the computational power of A17 Pro Neural Engine?
Hi. The A17 Pro Neural Engine has 35 TOPS of computational power, but many third-party benchmarks and articles suggest that it is only a little faster than the A16 Bionic. Some references: Geekbench ML; Core ML performance benchmark, 2023 edition. How do we use the maximum power of the A17 Pro Neural Engine? For example, I guess that the ANE on the A17 Pro may expose two logical devices rather than one, so we may need to instantiate two Core ML models simultaneously for that purpose. Please let me know of any technical hints.
Replies: 1 · Boosts: 0 · Views: 1.5k · May ’24
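Apple does not publicly document the ANE's internal topology, so the two-logical-devices idea remains speculation; but if one wanted to test the hypothesis empirically, a rough sketch of timing two model instances predicting concurrently (versus one) might look like this. The model URL and input provider are placeholders:

import Foundation
import CoreML

// Experimental sketch: issue predictions from two separate MLModel instances in
// parallel and compare the combined throughput against a single instance.
// `modelURL` and `makeInput()` are placeholders for your own model and inputs.
func timeConcurrentPredictions(modelURL: URL,
                               makeInput: @escaping () -> MLFeatureProvider,
                               iterations: Int = 100) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine

    let models = [try MLModel(contentsOf: modelURL, configuration: config),
                  try MLModel(contentsOf: modelURL, configuration: config)]

    let start = CFAbsoluteTimeGetCurrent()
    DispatchQueue.concurrentPerform(iterations: models.count) { index in
        for _ in 0..<iterations {
            _ = try? models[index].prediction(from: makeInput())
        }
    }
    let elapsed = CFAbsoluteTimeGetCurrent() - start
    print("\(models.count) x \(iterations) predictions in \(elapsed) s")
}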
The GPU and ANE implementations of the MIL resample op
Hello, I am a machine learning engineer. Recently I needed to run PyTorch's grid_sample operation on an iPhone, so I used coremltools to convert PyTorch grid_sample to the MIL resample op, which is officially supported. But when running on the phone, execution is switched to the CPU instead of the GPU or ANE (Xcode connected to the phone, running the official performance benchmark). I would like to ask why there is no efficient GPU implementation. What I am hoping for is around 2 ms, but it takes 8 ms on the CPU.
Replies: 2 · Boosts: 1 · Views: 520 · Oct ’23
CoreML PyTorch Conversion More Samples?
I'm trying to convert a PyTorch forward Transformer model to Core ML but am running into several issues, like these errors:

For mlprogram, inputs with infinite upper_bound is not allowed. Please set upper_bound to a positive value in "RangeDim()" for the "inputs" param in ct.convert().

NotImplementedError: inplace_ops pass doesn't yet support append op inside conditional

Are there any more samples besides https://developer.apple.com/videos/play/tech-talks/10154? In that video's sample an ImageType is used as the input, but in my model text is the input (and the output). I also get a warning that converting TorchScript is experimental, yet the video says a TorchScript model is required for conversion (though I know the video is a few years old).
Replies: 1 · Boosts: 0 · Views: 747 · Nov ’23
CoreML model load failed with this error : Failed to set up decrypt context for /private/var/mobile/Containers/Data/Application/ACB94507-F8DE-494B-8499-B0CF75FC3B55/Library/Caches/temp.m/***.mlmodelc. error:-42905"
Hi there. We use a Core ML model for image processing, and because loading the model takes a long time (~10 s), we preload it at app start. But on some devices, loading the model fails with the error in the title. We download the model from our server and then load it from local storage; the loading code is typical:

MLModel.load(contentsOf: compliedUrl, configuration: config)

Once this error happens, it keeps failing until we restart the device. In this thread I saw that it may be related to some limitation on decrypt sessions: https://developer.apple.com/forums/thread/707622. But it also happens in in-house TestFlight builds that are used by fewer than five people. Can anyone tell me why this happens?
Replies: 3 · Boosts: 1 · Views: 730 · Nov ’23
Memory „Leak“ when using cpu+gpu
My app allows the user to select different Stable Diffusion models, and I noticed a very strange issue concerning memory management. When using the StableDiffusionPipeline (https://github.com/apple/ml-stable-diffusion) with cpu+gpu, around 1.5 GB of memory is not properly released after generateImages is called and the pipeline is released. When generating more images with a new StableDiffusionPipeline object, memory is reused and stays stable at around 1.5 GB after inference is complete. Everything, especially the MLModels, is released properly. My guess is that MLModel creates a persistent cache.

Here is the problem: when a different MLModel is used afterwards, another 1.5 GB is not released and stays resident. With a third model, this totals 4.5 GB of unreleased, persistent memory.

At first I thought this was a bug in the StableDiffusionPipeline, but I was able to reproduce the behaviour in a very minimal Objective-C sample without ARC:

MLArrayBatchProvider *batchProvider = [[MLArrayBatchProvider alloc] initWithFeatureProviderArray:@[<VALID FEATURE PROVIDER>]];
MLModelConfiguration *config = [[MLModelConfiguration alloc] init];
config.computeUnits = MLComputeUnitsCPUAndGPU;
MLModel *model = [[MLModel modelWithContentsOfURL:[NSURL fileURLWithPath:<VALID PATH TO .mlmodelc SD 1.5 FILE>] configuration:config error:&error] retain];
id<MLBatchProvider> returnProvider = [model predictionsFromBatch:batchProvider error:&error];
[model release];
[config release];
[batchProvider release];

After running this minimal code, 1.5 GB of persistent memory is present that is not released during the lifetime of the app. This only happens on macOS 14(.1) Sonoma and on iOS 17(.1), but not on macOS 13 Ventura. On Ventura, everything works as expected and the memory is released when predictionsFromBatch: is done and the model is released.

Some observations:
This only happens using cpu+gpu, not cpu+ane (since that memory is allocated out of process) and not using cpu-only.
It does not matter which Stable Diffusion model is used; I tried custom SD-derived models as well as the Apple-provided SD 1.5 models.
I reproduced the issue on a MacBook Pro 16" M1 Max with macOS 14.1, an iPhone 12 mini with iOS 17.0.3, and an iPad Pro M2 with iPadOS 17.1.
The memory that "leaks" is mostly huge malloc blocks of 100-500 MB in size, or IOSurfaces.
This memory is allocated during predictionsFromBatch:, not while loading the model.
Loading and unloading a model does not leak memory; only when predictionsFromBatch: is called is the huge memory chunk allocated, and it is never freed during the lifetime of the app.

Does anybody have any clue what is going on? I highly suspect that I am missing something crucial, but my colleagues and I have looked everywhere trying to find a way to release this leaked/cached memory.
Replies: 2 · Boosts: 0 · Views: 824 · Nov ’23
Core ML resample operation does not support GPU
Hello! I have converted a single grid_sample operation from PyTorch to an mlpackage using coremltools, and opened it in Xcode for benchmarking. There is only one op, called resample. I ran it on my M1 Pro Mac, but found that it only runs on the CPU, so the latency does not meet my requirements. Can you support resample on the GPU, or can I implement it with Metal myself?
Replies: 1 · Boosts: 0 · Views: 485 · Nov ’23
Object recognition and tracking on visionOS
Hello! I would like to develop a visionOS application that tracks a single object in the user's environment. Skimming through the documentation, I found that this feature is currently unsupported in ARKit (we can only recognize images), but it seems it should be doable by combining the Core ML and Vision frameworks. So I have a few questions: Is that the best approach, or is there a simpler solution? What is the best way to train a Core ML model without access to the device? Will videos recorded by an iPhone 15 be enough? Thank you in advance for all the answers.
Replies: 1 · Boosts: 0 · Views: 600 · Nov ’23
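Setting aside how camera frames are obtained on visionOS, one way to combine the two frameworks, as the post suggests, is to detect the object once with a custom Core ML model and then hand the resulting bounding box to Vision's object tracker for subsequent frames. A rough sketch of that pattern; MyObjectDetector is a placeholder for your own trained detection model, not an existing API:

import Vision
import CoreML

final class ObjectTracker {
    private let sequenceHandler = VNSequenceRequestHandler()
    private var lastObservation: VNDetectedObjectObservation?

    // Detect the object once with a Core ML model wrapped in Vision.
    // Assumes an object-detection model whose results are VNRecognizedObjectObservations.
    func detect(in pixelBuffer: CVPixelBuffer) throws {
        let coreMLModel = try MyObjectDetector(configuration: MLModelConfiguration()).model
        let visionModel = try VNCoreMLModel(for: coreMLModel)
        let request = VNCoreMLRequest(model: visionModel)
        try VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
        if let first = request.results?.first as? VNRecognizedObjectObservation {
            lastObservation = VNDetectedObjectObservation(boundingBox: first.boundingBox)
        }
    }

    // Track the previously detected bounding box into the next frame.
    func track(in pixelBuffer: CVPixelBuffer) throws -> CGRect? {
        guard let observation = lastObservation else { return nil }
        let request = VNTrackObjectRequest(detectedObjectObservation: observation)
        request.trackingLevel = .accurate
        try sequenceHandler.perform([request], on: pixelBuffer)
        if let result = request.results?.first as? VNDetectedObjectObservation {
            lastObservation = result
            return result.boundingBox
        }
        return nil
    }
}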