Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

116 Posts
Post not yet marked as solved
0 Replies
65 Views
Hey, I'm training an MLImageClassifier via the train() method:

```swift
guard let job = try? MLImageClassifier.train(trainingData: trainingData,
                                             parameters: modelParameter,
                                             sessionParameters: sessionParameters) else {
    debugPrint("Training failed")
    return
}
```

Unfortunately, the metrics of the MLProgress that I create from the returned MLJob while training are empty. Here is the code listening for progress:

```swift
job.progress.publisher(for: \.fractionCompleted)
    .sink { [weak job] fractionCompleted in
        guard let job = job else {
            debugPrint("failure in creating job")
            return
        }
        guard let progress = MLProgress(progress: job.progress) else {
            debugPrint("failure in creating progress")
            return
        }
        print("ProgressPROGRESS: \(progress)")
        print("Progress: \(fractionCompleted)")
    }
    .store(in: &subscriptions)
```

Printing the progress gives:

```
MLProgress(elapsedTime: 2.2328420877456665, phase: CreateML.MLPhase.extractingFeatures, itemCount: 32, totalItemCount: Optional(39), metrics: [:])
```

I get the same result when listening for MLCheckpoints; the metrics are empty as well:

```
MLCheckpoint(url: URLPATH.checkpoint, phase: CreateML.MLPhase.extractingFeatures, iteration: 32, date: 2024-04-18 11:21:18 +0000, metrics: [:])
```

Can someone tell me how I can access the metrics while training? Thanks!
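A minimal sketch of one thing to check, assuming `job` is the MLJob returned by MLImageClassifier.train(...), `subscriptions` is a Set<AnyCancellable>, and MLPhase exposes a `.training` case: the dumps above show the job is still in the extractingFeatures phase, and metrics are typically only populated once training iterations actually begin, so filtering on the phase before reading them may be all that is needed.

```swift
import Combine
import CreateML

// Sketch only: `job` and `subscriptions` come from the surrounding training code.
// Skip progress updates from the feature-extraction phase and read metrics once
// the reported phase has moved on to training.
job.progress.publisher(for: \.fractionCompleted)
    .compactMap { [weak job] _ in job.flatMap { MLProgress(progress: $0.progress) } }
    .filter { $0.phase == .training }            // ignore .extractingFeatures updates
    .sink { progress in
        print("phase: \(progress.phase), metrics: \(progress.metrics)")
    }
    .store(in: &subscriptions)
```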
Posted. Last updated.
Post not yet marked as solved
1 Reply
92 Views
I'm using FileMaker with the Core ML features of Monkey Bread Software's plugin, only to find that it can only write to .mlmodelc. Are these the same thing (.mlmodel = .mlmodelc)? If not, how do you generate a .mlmodelc using Xcode? Please let me know, thanks.
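They are not the same: .mlmodel is the source model format, and .mlmodelc is the compiled bundle that Core ML actually loads. Xcode compiles an .mlmodel automatically when it is added to an app target, and you can also compile one in code; a minimal sketch, assuming `modelURL` points at an .mlmodel file on disk:

```swift
import CoreML

// Sketch only: compiles an .mlmodel into a temporary .mlmodelc directory and loads it.
func loadCompiledModel(from modelURL: URL) async throws -> MLModel {
    let compiledURL = try await MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}
```

From the command line, `xcrun coremlcompiler compile Model.mlmodel OutputFolder` produces the same kind of .mlmodelc folder.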
Posted. Last updated.
Post not yet marked as solved
0 Replies
102 Views
I recently watched the session "Improve Core ML Integration with Async Prediction" and was impressed by the depth of information and the practical demonstration. As I am keen to implement the demonstrated workflows and techniques in my own work, I am reaching out to request access to the source code and any related material presented during the session. Being able to review and experiment with the actual code would help me understand the concepts, apply them in real-world scenarios, and serve as a reference for best practices in Core ML integration and async prediction. Thank you very much for considering my request. Best regards, Fabio G.
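For readers landing here, a minimal sketch of the async prediction pattern the session covers, not the session's actual sample code; it assumes `model` is a loaded MLModel, `input` is an MLFeatureProvider matching its input description, and the async overload of prediction(from:options:) from iOS 17 / macOS 14 is available:

```swift
import CoreML

// Sketch only: awaiting the async overload lets several predictions run concurrently
// without blocking a thread, which is the core idea behind async prediction.
func predict(on model: MLModel, input: MLFeatureProvider) async throws -> MLFeatureProvider {
    let options = MLPredictionOptions()
    return try await model.prediction(from: input, options: options)
}
```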
Posted by fguzman82. Last updated.
Post not yet marked as solved
0 Replies
110 Views
I have an ML program of 127.2 MB that was created with TensorFlow and then converted to Core ML. When I request a prediction, the amount of memory shoots up to 2-2.5 GB every time. I've tried the optimization techniques in coremltools, but nothing seems to work; it still shoots up to the same 2-2.5 GB of RAM every time. I've attached a graph; it doesn't seem to be a leak, as the memory then goes back down.
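One knob worth comparing, as a minimal sketch rather than a guaranteed fix: the compute units a model is allowed to use can change how much working memory the Core ML runtime reserves. `ConvertedModel` is a placeholder for the Xcode-generated class of the ML program.

```swift
import CoreML

// Sketch only: compare peak memory with different compute-unit settings.
func makeModel() throws -> ConvertedModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine   // also try .all and .cpuOnly
    return try ConvertedModel(configuration: config)
}
```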
Posted by Michi314. Last updated.
Post not yet marked as solved
1 Reply
142 Views
The Core ML model worked correctly in the Preview pane of Create ML. However, after it was put into the Xcode project and replaced MobileNetV2, it did not classify the images correctly; it returned the same result with high confidence all the time, no matter what the image was. The same code works fine when executed on a real device. Can someone please assist with this?
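A minimal sanity-check sketch, assuming `MyClassifier` stands in for the Create ML generated class and `cgImage` is the test image: a mismatch in how the image is scaled or oriented before it reaches the model is a common reason for getting the same label every time, so it is worth pinning those options down explicitly.

```swift
import CoreML
import Vision

// Sketch only: classify one image with explicit crop/scale and orientation settings.
func classify(_ cgImage: CGImage) throws -> [VNClassificationObservation] {
    let coreMLModel = try MyClassifier(configuration: MLModelConfiguration()).model
    let vnModel = try VNCoreMLModel(for: coreMLModel)
    let request = VNCoreMLRequest(model: vnModel)
    request.imageCropAndScaleOption = .centerCrop      // match what the preview does
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
    try handler.perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}
```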
Posted. Last updated.
Post not yet marked as solved
0 Replies
182 Views
Hi everyone, is it possible to use a 3D USDZ file to train a model in Create ML? I see there is an image option, but it would be good to use these files from Reality Composer's Object Capture. Or is this in the works for forthcoming Xcode updates? Many thanks, Stuart
Posted. Last updated.
Post not yet marked as solved
0 Replies
196 Views
Hello Developers, we are trying to convert PyTorch models to Core ML using coremltools. While converting, we used jit.trace to create a trace of the model and encountered a warning that if the model has control flow and conditions, it is not advisable to use trace; instead, the model should be converted to TorchScript using jit.script. However, after successfully converting the model to TorchScript, the next step of converting from TorchScript to Core ML with the coremltools Python package fails with the error below. The root error is so abstract that we are not able to trace back where it occurs.

AssertionError: Item selection is supported only on python list/tuple objects

We tried feeding the error into ChatGPT and got roughly the response below, but unfortunately it hasn't helped: "The error indicates that the Core ML converter encountered a TorchScript operation involving item selection (indexing or slicing) on an object that it doesn't recognize as a Python list or tuple. The converter supports item selection only on these Python container types. This could happen if your model uses indexing on tensors or other types not recognized as list or tuple by the Core ML tools. You may need to revise the TorchScript code to ensure it only performs item selection on supported types or adjust the way tensors are indexed."
Posted by ChiragJoc. Last updated.
Post not yet marked as solved
0 Replies
193 Views
I've recently been working on a visionOS app that uses Core ML to identify specific body parts and display a window with information about the identified body part. Since the use of the Vision Pro's cameras is blocked, I'm using an iPhone to perform the image classification and then sending the label to the headset using Multipeer Connectivity. I'd like to display a volume once the user selects a body part. Could my iPhone return enough spatial information for me to fully take advantage of the Vision Pro's mixed reality capabilities?
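A minimal sketch of the phone-to-headset hand-off described above, assuming `session` is an already-connected MCSession and `label` is the classification result; the code cannot answer the spatial question itself, since the iPhone only contributes a 2D classification label, not anchors in the headset's space.

```swift
import MultipeerConnectivity

// Sketch only: send the classification label to every connected peer (the headset).
func send(label: String, over session: MCSession) throws {
    let payload = Data(label.utf8)
    try session.send(payload, toPeers: session.connectedPeers, with: .reliable)
}
```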
Posted. Last updated.
Post not yet marked as solved
0 Replies
354 Views
Hello, I am making a rock paper scissors game using object detection, with a model I made in Create ML and a dataset I found online. The trained model works, and I tried to implement it in Xcode, but when I run my app I get this error:

This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.

I am still new to Create ML, and I cannot seem to find anything about making my model updatable in Create ML.
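A minimal diagnostic sketch, with `RPSDetector` as a placeholder for the generated model class: the message suggests something in the pipeline is requesting the 'precisionRecallCurves' parameter, which only updatable models expose and only inside an MLUpdateTask handler, so printing the model description first shows what the compiled model actually supports.

```swift
import CoreML

// Sketch only: inspect whether the compiled model is updatable and which
// parameter keys it declares, before anything asks for parameter values.
func inspectModel() throws {
    let model = try RPSDetector(configuration: MLModelConfiguration()).model
    print("isUpdatable:", model.modelDescription.isUpdatable)
    print("parameter keys:", Array(model.modelDescription.parameterDescriptionsByKey.keys))
}
```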
Posted by 1emil. Last updated.
Post not yet marked as solved
0 Replies
261 Views
I'm working with MLSoundClassifier to try to detect two different sounds in a live audio stream. The team has been debating whether it is better to train two separate models, one for each sound, or one model on both sounds. Has anyone had any experience with this? Some of us believe we have gotten better results with the separate models, and some with a single model trained on both sounds. Thank you!
Posted by mhbucklin. Last updated.
Post not yet marked as solved
1 Reply
396 Views
How do I add an already-made Core ML model to my playground? I tried what people recommend online: building a test project to get the .mlmodelc file and putting that in the playground along with the autogenerated class for the model. However, I keep getting errors like these:

```
Unexpected duplicate tasks
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Unexpected duplicate tasks
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
ZooClassifier.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.
```
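A minimal sketch for an App Playground (.swiftpm): keep the already-compiled ZooClassifier.mlmodelc in the package's resources and load it directly, instead of also shipping the raw .mlmodel that Xcode then tries to compile itself (which is what the duplicate CoreMLModelCompile tasks above point to). Whether the resource ends up in Bundle.main or Bundle.module depends on how it is declared, so adjust accordingly.

```swift
import CoreML

// Sketch only: load a pre-compiled .mlmodelc resource without relying on Xcode's
// model code generation inside the playground package.
func loadZooClassifier() throws -> MLModel {
    guard let url = Bundle.main.url(forResource: "ZooClassifier", withExtension: "mlmodelc") else {
        throw CocoaError(.fileNoSuchFile)
    }
    return try MLModel(contentsOf: url)
}
```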
Posted. Last updated.
Post marked as solved
1 Reply
347 Views
I am trying to convert a traced PyTorch model with coremltools.converters.convert and I got an error: PyTorch convert function for op 'intimplicit' not implemented. I am trying to convert an RVC model from GitHub. I traced the model with torch.jit.trace and the conversion fails, so I traced the problematic part down to the ** layer: https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI/blob/main/infer/lib/infer_pack/modules.py#L188

```python
import torch
import coremltools as ct
from infer.lib.infer_pack.modules import **

model = **(192, 5, dilation_rate=1, n_layers=16, ***_channels=256, p_dropout=0)
model.remove_weight_norm()
model.eval()

test_x = torch.rand(1, 192, 200)
test_x_mask = torch.rand(1, 1, 200)
test_g = torch.rand(1, 256, 1)

traced_model = torch.jit.trace(model,
                               (test_x, test_x_mask, test_g),
                               check_trace=True)

x = ct.TensorType(name='x', shape=test_x.shape)
x_mask = ct.TensorType(name='x_mask', shape=test_x_mask.shape)
g = ct.TensorType(name='g', shape=test_g.shape)

mlmodel = ct.converters.convert(traced_model, inputs=[x, x_mask, g])
```

I got the error RuntimeError: PyTorch convert function for op 'intimplicit' not implemented. How could I modify **::forward so it does not generate an intimplicit operator? Thanks, David
Posted by r2d3. Last updated.
Post not yet marked as solved
2 Replies
456 Views
I run a MiDaS Core ML model on device. It runs well in the Vision Pro simulator and on a real iOS device, but it crashes on the Vision Pro device. Crash message:

/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShaders/MPSCore/Utility/MPSLibrary.mm:550: failed assertion `MPSKernel MTLComputePipelineStateCache unable to load function ndArrayConvolution2DA14.

Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-01-07.txt
Crashlog_com.moemiku.VisionMagicPhoto_2024-01-21-16-00-39.txt
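A minimal sketch of a workaround worth trying, not a confirmed fix: the assertion comes from a Metal Performance Shaders convolution kernel, so steering the model away from the GPU path may sidestep it on device. `MiDaSModel` is a placeholder for the generated model class.

```swift
import CoreML

// Sketch only: restrict compute units so the GPU/MPS path that asserts is not used.
func makeDepthModel() throws -> MiDaSModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndNeuralEngine
    return try MiDaSModel(configuration: config)
}
```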
Posted by wudijimao. Last updated.
Post not yet marked as solved
0 Replies
273 Views
Hi, I'm using the apple/ml-stable-diffusion package with Core ML models running in GPU mode in my SwiftUI app. The problem I have, and every other implementation I have tested has, is that every time the model is changed the old model still persists in memory until the app is restarted. This is the same for every app I have tested that uses this package. So my question is: can I kill the subprocesses or flush the memory? The package has memory-freeing functions, but they don't affect the loaded model. Any clues as to where I might start looking?
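A minimal sketch of a pattern to try, not the package's own API: hold the loaded model behind an optional and drop it inside an autoreleasepool before loading its replacement, so the previous model's buffers have a chance to be reclaimed. The names here are illustrative placeholders.

```swift
import CoreML

// Sketch only: release the old model before creating the new one.
final class ModelHolder {
    private var currentModel: MLModel?

    func swapModel(to compiledModelURL: URL) throws {
        autoreleasepool {
            currentModel = nil              // drop the last strong reference first
        }
        currentModel = try MLModel(contentsOf: compiledModelURL)
    }
}
```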
Posted by BloggsMr. Last updated.
Post not yet marked as solved
0 Replies
263 Views
I have multiple ML models along with a collection of supporting code designed to enhance their effectiveness. I want to encapsulate these assets within a package so I can add it to a few of my projects. Is it possible to encrypt the ML models when including them as resources within the package?
Posted by DmytroN. Last updated.
Post not yet marked as solved
0 Replies
315 Views
I have a Core ML model that I run in my app Spatial Media Toolkit, which lets you convert 2D photos to Spatial. Running the model on my 13" M1 Mac gives 70 ms inference. Running the exact same code on my Vision Pro takes 700 ms. I'm working on adding video support, but Vision Pro inference is feeling impossible at 700 ms per frame (20x slower than real time for 30 fps: 1 second of video takes 20 seconds!). There's an MLModelConfiguration you can provide, and when I force CPU I get the exact same performance. Either it's only running on the CPU, the Neural Engine is throttled, or maybe the GPU isn't allowed to help out. Disappointing, but it also feels like a software issue. I would be curious if anyone else has hit this or has any workarounds.
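A minimal timing sketch, with `SpatialModel` and `input` as placeholders: comparing explicit compute-unit settings on device at least shows whether the Neural Engine or GPU is contributing at all, since matching times for .all and .cpuOnly would mean the model is effectively running on the CPU.

```swift
import CoreML
import Foundation

// Sketch only: time one prediction under a given compute-unit setting.
func timePrediction(units: MLComputeUnits, input: MLFeatureProvider) throws -> TimeInterval {
    let config = MLModelConfiguration()
    config.computeUnits = units                  // .all, .cpuAndNeuralEngine, .cpuOnly
    let model = try SpatialModel(configuration: config).model
    let start = CFAbsoluteTimeGetCurrent()
    _ = try model.prediction(from: input)
    return CFAbsoluteTimeGetCurrent() - start
}
```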
Posted. Last updated.
Post not yet marked as solved
0 Replies
360 Views
Is 30x30 the maximum grid size in the Create ML app? The input allows me to set any number higher than that, but when training starts, the number falls back to 30x30. Is that a limitation or a bug in the app?
Posted by gcstr. Last updated.