Core ML

Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

114 results found
Post marked as solved
292 Views

Data Science on Apple Silicon: new distros and builds for R, Python, Julia?

This question does not come from a developer working on any of these languages; I am a data scientist working *in* these languages. But I'd like some clarity on how these ecosystems will transition from Intel to Apple Silicon. Intel has built tools specifically for Python lately. R became much more efficient once Revolution (now Microsoft) bundled Intel's Math Kernel Library (and more) into R. R can also be much faster on the Mac with the Accelerate framework (especially BLAS and LAPACK from vecLib, though these are not the officially supported default for the Mac build). As we are investing in these platforms (both Apple hardware and our own codebase, not to mention human capital), it would be great to get more advance guidance on what performance we can expect and on which fronts. Data scientists are more than pro consumers waiting for an Adobe update for the new architecture (though for MATLAB or Stata the situation is similar), but less than full-blown developers who will use Swift anyway. Converters from coremltools can save some models (say, scikit-learn models under Python) for use in apps. Does this promise any further optimization and support for Python on Apple Silicon?
Asked
Last updated .
Post marked as solved
45 Views

On watchOS, is it possible to classify gestures in real time in the background?

I am thinking of developing a watchOS app that records someone's audio whenever they raise their wrist to their face (so as to classify coughs and sneezes and help detect flu outbreaks). Of course, this is possible first-party (Siri's raise-to-wake and handwashing detection), but I am wondering if it's possible third-party. I have seen some similar questions asked on StackExchange, but they all seem to suggest that starting a WatchKit workout session would do the trick. However, this presents a UI and precludes the user from using the app as normal. Moreover, if it is possible to classify gestures in the background, is it possible to subsequently record audio, or would that require the app to be opened? My sense upon researching this is that it's probably not possible, but I'm really excited about this idea, and I know there's a lot out there with ResearchKit that I'm not aware of, so I wanted to ask.
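For reference, a hedged sketch of the usual third-party approach on watchOS: keep the app alive with an HKWorkoutSession and stream Core Motion updates into a Core ML activity classifier. The CoughGestureClassifier model and its input windowing are hypothetical placeholders, and this does not give you the always-on, raise-to-wrist trigger that first-party features use.

    import HealthKit
    import CoreMotion
    import CoreML

    final class GestureMonitor: NSObject, HKWorkoutSessionDelegate {
        private let healthStore = HKHealthStore()
        private let motionManager = CMMotionManager()
        private var session: HKWorkoutSession?
        // Hypothetical Core ML model trained on windows of device-motion samples:
        // private let classifier = try? CoughGestureClassifier()

        func start() throws {
            // HealthKit authorization must have been requested before this point.
            let config = HKWorkoutConfiguration()
            config.activityType = .other
            config.locationType = .unknown

            // A workout session keeps the app running in the background on watchOS.
            let session = try HKWorkoutSession(healthStore: healthStore, configuration: config)
            session.delegate = self
            session.startActivity(with: Date())
            self.session = session

            // Stream device motion and hand fixed-length windows to the classifier.
            motionManager.deviceMotionUpdateInterval = 1.0 / 50.0
            motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
                guard let motion = motion else { return }
                // Append motion.userAcceleration / motion.rotationRate to a buffer,
                // then run classifier.prediction(...) on each completed window.
            }
        }

        func workoutSession(_ workoutSession: HKWorkoutSession,
                            didChangeTo toState: HKWorkoutSessionState,
                            from fromState: HKWorkoutSessionState,
                            date: Date) {}

        func workoutSession(_ workoutSession: HKWorkoutSession, didFailWithError error: Error) {}
    }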
Asked
Last updated .
Post marked as unsolved
19 Views

Model Collections failed to upload model

Hello, I have a problem uploading a model (243 MB) to a model collection. When I do the same for my other collections, where the models are smaller, it works, but with this 243 MB model the upload fails and no deployment is created on the collection. I am not sure if there is a size limit or some other problem. If anyone has an idea of what I can try to get my model uploaded, I'd appreciate it.
Asked
by Wado88.
Last updated .
Post marked as unsolved
5 Views

"[MPSCNNConvolution encode...] Error: destination may not be nil"

I try to run an mlmodel prediction and get the error below: /Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-124.0.29/MPSNeuralNetwork/Filters/MPSCNNKernel.mm:752: failed assertion `[MPSCNNConvolution encode...] Error: destination may not be nil. I am using iOS 14 and implementing the same method as before. I have attached a screenshot of the code where the error occurs; I'd appreciate your advice. mlmodel's Swift file screenshot where the error occurs - https://developer.apple.com/forums/content/attachment/1117d26a-6db8-4cbd-9d5e-616aac7ad308
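One way to narrow this kind of failure down is to take the prediction off the GPU path with MLModelConfiguration and see whether the assertion disappears; a minimal sketch, where MyMLModel is a hypothetical stand-in for the generated model class in the screenshot.

    import CoreML

    // Hypothetical generated class name; substitute the actual mlmodel class.
    func makeCPUOnlyModel() throws -> MyMLModel {
        let config = MLModelConfiguration()
        // Restricting compute units bypasses the MPSCNN/Metal path that raises
        // the "destination may not be nil" assertion, at the cost of speed.
        config.computeUnits = .cpuOnly
        return try MyMLModel(configuration: config)
    }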
Asked
by fidelis.
Last updated .
Post marked as unsolved
17 Views

CoreML model for classifying images of Mac computers?

Is there an image classification model available for identifying images of MacBooks (MBA/MBP), iMacs, Mac minis, and Mac Pros? Background: I'm interested in developing an app that classifies objects in still images (specifically not in prerecorded video or live vision) with model information such as "MacBook Pro" or "iMac", in as much detail as possible.
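Apple does not ship a ready-made classifier for Mac models, so the usual route is to train your own (for example with Create ML) and run it through Vision. A hedged classification sketch, where MacClassifier is a hypothetical model you would train yourself on labeled photos of each Mac line.

    import Vision
    import CoreML

    // `MacClassifier` is a hypothetical image classifier trained on labeled
    // photos of MacBook Air/Pro, iMac, Mac mini, and Mac Pro.
    func classifyMac(in cgImage: CGImage, completion: @escaping (String?) -> Void) {
        guard let coreMLModel = try? MacClassifier(configuration: MLModelConfiguration()).model,
              let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
            completion(nil)
            return
        }
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            let best = (request.results as? [VNClassificationObservation])?.first
            completion(best?.identifier)   // e.g. "MacBook Pro"
        }
        let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
        try? handler.perform([request])
    }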
Asked
Last updated .
Post marked as unsolved
55 Views

Using an encrypted model with the dashboard: can't get the archive file.

I have watched the WWDC 2020 video. I use the Objective-C method beginAccessingModelCollectionWithIdentifier:completionHandler: to retrieve the archive file from the dashboard, but I receive this error message: [coreml] MLModelCollection: namespace (87HGLU5YX9.DeploymentModelCollection) download failed. Error Domain=com.apple.trial Code=0 "Unknown error." UserInfo={NSLocalizedDescription=Unknown error.} The second time I run the app, I get: [coreml] MLModelCollection: experimentId unexpectedly not returned. Device: iPhone 7, iOS 14.0.1. Xcode: Version 12.0.1 (12A7300), iOS deployment target 14.0. Can you help me? Thanks.
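The "Unknown error" comes from Apple's deployment service, so there is little the client can do beyond handling the failure gracefully. A hedged Swift sketch of retrieving the collection and falling back to a model shipped in the app bundle when the download fails; the collection identifier and the BundledModel class are assumptions.

    import CoreML

    func loadDeployedOrBundledModel(completion: @escaping (MLModel?) -> Void) {
        MLModelCollection.beginAccessing(identifier: "DeploymentModelCollection") { result in
            switch result {
            case .success(let collection):
                // Pick an entry and load the compiled model it points to.
                if let entry = collection.entries.values.first,
                   let model = try? MLModel(contentsOf: entry.modelURL) {
                    completion(model)
                    return
                }
                completion(nil)
            case .failure(let error):
                print("Model collection download failed: \(error)")
                // Fall back to a model bundled with the app (hypothetical class).
                completion(try? BundledModel(configuration: MLModelConfiguration()).model)
            }
        }
    }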
Asked
by akeem313.
Last updated .
Post marked as unsolved
16 Views

"No training data found. 0 invalid files found." when training a set of images into Core ML

I am attempting to load a set of images into Create ML so that I can train an image classifier model. However, when I select the folder that my training data is in, I get the error "No training data found. 0 invalid files found." My data is laid out as follows:

    Parent folder
        Training data (selected in Create ML)
            Individual folders for each class
                Image files
        Test data
            Individual folders for each class
                Image files

My dataset can be viewed here - https://github.com/ivyjsgit/HOMUS-Bitmap/tree/master/Splitted
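Create ML expects the folder you select to directly contain one subfolder per class label, with that class's images inside each, so it may be worth double-checking that the selected "Training data" folder has exactly that shape and that the images are in a supported format. For reference, a hedged sketch of training the same layout programmatically with the CreateML framework on macOS; the paths are placeholders.

    import CreateML
    import Foundation

    // Placeholder paths: each directory must contain one subfolder per class,
    // and each class subfolder holds that class's images.
    let trainingDir = URL(fileURLWithPath: "/path/to/Training data")
    let testingDir  = URL(fileURLWithPath: "/path/to/Test data")

    let classifier = try MLImageClassifier(trainingData: .labeledDirectories(at: trainingDir))
    let evaluation = classifier.evaluation(on: .labeledDirectories(at: testingDir))
    print(evaluation)

    try classifier.write(to: URL(fileURLWithPath: "/path/to/SymbolClassifier.mlmodel"))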
Asked
by AprilZion.
Last updated .
Post marked as unsolved
12 Views

Is it possible to support a user input for style via Style Transfer?

Lots of popular web apps (e.g. deepdreamgenerator.com) support Style Transfer in a way that lets the user upload an image to be used as the "input style". Based on the current Style Transfer model creation flow in Create ML, it seems that you can only train a model on a specific input style; a model that accepts arbitrary style inputs doesn't seem possible from the Create ML interface. Is there a way to do it? Maybe I would just need to download a deep-dream-style-transfer model that accepts any style and convert it into a Core ML model?
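Create ML bakes a single style into the trained model, so accepting a user-supplied style generally means converting an arbitrary-style-transfer network with coremltools and calling it with two image inputs. A hedged sketch of the inference side only; ArbitraryStyleTransfer, its content/style input names, and the stylizedImage output name are all assumptions about such a converted model.

    import CoreML

    // Hypothetical model converted with coremltools; assumed to take a content
    // image and a style image and to return a stylized image.
    func stylize(content: CVPixelBuffer, style: CVPixelBuffer) -> CVPixelBuffer? {
        guard let model = try? ArbitraryStyleTransfer(configuration: MLModelConfiguration()) else {
            return nil
        }
        let output = try? model.prediction(content: content, style: style)
        return output?.stylizedImage   // assumed output feature name
    }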
Asked
Last updated .
Post marked as unsolved
18 Views

MPSNNReduceFeatureChannelsArgumentMax Catalyst issue

I'm using Core ML for image segmentation. I have a VNCoreMLRequest to run a model that returns an MLMultiArray. To accelerate processing the model output, I use MPSNNReduceFeatureChannelsArgumentMax to reduce the multiarray output to a 2D array, which I then convert to a grayscale image. This works great on iOS, but when running on the Mac as a Catalyst build, the output 2D array is all zeros. I'm running Version 12.2 beta 2 (12B5025f) on an iMac Pro. I'm not seeing any runtime errors. MPSNNReduceFeatureChannelsArgumentMax appears to not work on Mac Catalyst. I'm able to reduce the channels directly on the CPU by looping through all the array dimensions, but it's very slow. This proves the model output works; just the Metal reduce-features step fails. Anyone else using Core ML and Catalyst? Here's the bit of code that doesn't work:

    let buffer = self.queue.makeCommandBuffer()
    let filter = MPSNNReduceFeatureChannelsArgumentMax(device: self.device)
    filter.encode(commandBuffer: buffer!, sourceImage: probs, destinationImage: classes)

    // add a callback to handle the buffer's completion and commit the buffer
    buffer?.addCompletedHandler({ (_buffer) in
        let argmax = try! MLMultiArray(shape: [1, softmax.shape[1], softmax.shape[2]], dataType: .float32)
        classes.readBytes(argmax.dataPointer,
                          dataLayout: .featureChannelsxHeightxWidth,
                          imageIndex: 0)

        // unmap the discrete segmentation to RGB pixels
        guard var mask = codesToMask(argmax) else {
            return
        }

        // display image in view
        DispatchQueue.main.async {
            self.imageView.image = mask
        }
    })
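For completeness, a hedged sketch of the CPU fallback the post mentions: an argmax over the channel dimension of the model's MLMultiArray output. It assumes the softmax output is laid out as [channels, height, width] (adjust the indices if there is a leading batch dimension), and it will be much slower than the Metal path.

    import CoreML

    // CPU argmax over the channel axis of a [C, H, W] float32 MLMultiArray.
    func argmaxOverChannels(_ softmax: MLMultiArray) -> [[Int]] {
        let channels = softmax.shape[0].intValue
        let height = softmax.shape[1].intValue
        let width = softmax.shape[2].intValue
        let ptr = softmax.dataPointer.assumingMemoryBound(to: Float32.self)
        let channelStride = softmax.strides[0].intValue
        let rowStride = softmax.strides[1].intValue
        let colStride = softmax.strides[2].intValue

        var classes = Array(repeating: Array(repeating: 0, count: width), count: height)
        for y in 0..<height {
            for x in 0..<width {
                var bestClass = 0
                var bestScore = -Float32.greatestFiniteMagnitude
                for c in 0..<channels {
                    let value = ptr[c * channelStride + y * rowStride + x * colStride]
                    if value > bestScore {
                        bestScore = value
                        bestClass = c
                    }
                }
                classes[y][x] = bestClass
            }
        }
        return classes
    }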
Asked
Last updated .
Post marked as unsolved
11 Views

LeakyReLU Bug on A12/A13 iPhone Devices when using ANE

Running an mlmodel's prediction on the ANE, the leakyReLU layer produces output feature maps that differ greatly from running with cpuAndGPU or cpuOnly. The results are totally wrong when using the ANE, but correct when using the GPU or CPU. If I just replace the leakyReLU with ReLU in the mlmodel, the output feature maps show only small differences and the outputs are all correct. This problem occurs on iPhones with A12 and A13 chips, regardless of whether the OS is iOS 12, iOS 13, or iOS 14. On A14 devices the problem is gone. Is this a bug in the Core ML framework on A12/A13 devices?
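Until this is addressed at the framework level, the usual workaround is to keep the affected model off the Neural Engine via MLModelConfiguration; a minimal sketch, with MyModel standing in for the actual generated class.

    import CoreML

    // Hypothetical generated model class; keep inference on CPU/GPU so the
    // leakyReLU layer is not scheduled on the ANE on A12/A13 devices.
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU
    let model = try MyModel(configuration: config)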
Asked
Last updated .
Post marked as solved
95 Views

Run model inference in background every x minutes

Hi! I have written an app that downloads data every x minutes; that data needs to be interpreted by a Core ML model and stored on device so that, when the user enters the app again, they can check what happened during the last hours. Is this possible? Can it be done throughout the whole day? And which background capability do I need for that? Thanks in advance!
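On iOS this is typically done with the BackgroundTasks framework; note that the system decides when app refresh tasks actually run, so "every x minutes" is only a request, not a guarantee. A hedged sketch, where the task identifier and runModelOnLatestData() are placeholders.

    import BackgroundTasks

    // Placeholder for: download the latest data, run the Core ML model,
    // and persist the results for the next app launch.
    func runModelOnLatestData() {}

    // Register once at launch. The identifier must also be listed in Info.plist
    // under "Permitted background task scheduler identifiers".
    func registerRefreshTask() {
        _ = BGTaskScheduler.shared.register(forTaskWithIdentifier: "com.example.modelRefresh",
                                            using: nil) { task in
            scheduleRefreshTask()            // queue the next run
            runModelOnLatestData()
            task.setTaskCompleted(success: true)
        }
    }

    func scheduleRefreshTask() {
        let request = BGAppRefreshTaskRequest(identifier: "com.example.modelRefresh")
        request.earliestBeginDate = Date(timeIntervalSinceNow: 15 * 60)  // a hint, not a schedule
        try? BGTaskScheduler.shared.submit(request)
    }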
Asked
Last updated .
Post marked as unsolved
12 Views

ML Model is not working

https://developer.apple.com/documentation/vision/recognizing_objects_in_live_capture We have downloaded the object recognition sample source code and it works. When we try to import our own model, however, the labels are not shown and it does not work.
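The live-capture sample expects an object detection model whose results arrive as VNRecognizedObjectObservation; a plain image classifier produces VNClassificationObservation instead, which the sample's drawing code does not handle, so nothing is overlaid. A hedged sketch for checking what your model actually returns, with YourModel as a placeholder for the imported class.

    import Vision
    import CoreML

    // Print what Vision returns so you can tell a detector (boxes + labels)
    // from a classifier (labels only).
    func inspectResults(for cgImage: CGImage) throws {
        let visionModel = try VNCoreMLModel(for: YourModel(configuration: MLModelConfiguration()).model)
        let request = VNCoreMLRequest(model: visionModel) { request, _ in
            for case let observation as VNRecognizedObjectObservation in request.results ?? [] {
                print("box:", observation.boundingBox,
                      "label:", observation.labels.first?.identifier ?? "?")
            }
            for case let observation as VNClassificationObservation in request.results ?? [] {
                print("class:", observation.identifier, observation.confidence)
            }
        }
        try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    }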
Asked
by arn_swift.
Last updated .
Post marked as unsolved
25 Views

MLModelCollection beginAccessing returns empty deployments

Hi, I'm experimenting with the new MLModelCollection support in Xcode 12 and iOS 14 and I'm having trouble. I've created a model collection named "Collection" using the Core ML Model Deployment dashboard, added a model to the collection, and created a deployment with an uploaded .mlarchive file. In the dashboard I can navigate to the deployment and see Model Assets, and I can download the assets (it appears to be a zip file that contains weights and such). The mlmodel that I created the .mlarchive from works great when I drag and drop it into my project. In my iOS app, following the instructions in the MLModelCollection documentation, I call:

    MLModelCollection.beginAccessing(identifier: "iNatTestCollectionID", completionHandler: modelCollectionAvailable)

My modelCollectionAvailable method looks like:

    func modelCollectionAvailable(result: Result<MLModelCollection, Error>) {
        switch result {
        case .success(let collection):
            print("Model collection `\(collection.identifier)` is available.")
            print("Model deployment id is \(collection.deploymentID)")
            print("Model collection entries are \(collection.entries)")
        case .failure(let error):
            print("got \(error) when accessing collection")
        }
    }

Unfortunately, the collection I get in the callback always has an empty deployment id and there are no entries in the collection. I've tried waiting a few days in case there were syncing or deployment issues, but still nothing. I do see this error in the console:

    2020-11-03 15:12:27.600165-0800 ModelDeploymentTest[15009:910661] [coreml] MLModelCollection: experimentId unexpectedly not returned

But I'm not sure what that means. Anyone have any advice? Thanks, alex
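One thing worth trying while waiting on the deployment: the collection can arrive empty at first and be populated later, so observing MLModelCollection.didChangeNotification and reloading the entries when it fires helps distinguish "not synced yet" from "never arrives". A hedged sketch reusing the identifier from the post:

    import CoreML
    import Foundation

    // Reload the collection whenever Core ML reports that a collection changed.
    let observer = NotificationCenter.default.addObserver(
        forName: MLModelCollection.didChangeNotification,
        object: nil,
        queue: .main
    ) { _ in
        MLModelCollection.beginAccessing(identifier: "iNatTestCollectionID") { result in
            if case .success(let collection) = result {
                print("deployment:", collection.deploymentID, "entries:", collection.entries.keys)
                for entry in collection.entries.values {
                    _ = try? MLModel(contentsOf: entry.modelURL)   // load the compiled model
                }
            }
        }
    }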
Asked
Last updated .
Post marked as unsolved
65 Views

How to release memory leaked by CoreML Model

I have written a very simple test app using DeepLabV3 from the Apple website to recognize face segmentation in an image. In Instruments, I found that when the prediction is done, the MLModel object is not released: a VM: IOAccelerator allocation of about 50 MB remains in memory. Stack trace:

    IOAccelResourceCreate
    [MLModel modelWithContentsOfURL:error:]
    @nonobjc MLModel.__allocating_init(contentsOf:)
    DeepLabV3FP16.__allocating_init()
    ......

The original MLModel and DeepLabV3FP16 instances are already released, but the VM allocation is still there. How can I solve the memory leak?
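The large IOAccelerator allocation is generally tied to the lifetime of the MLModel (its compiled weights stay mapped while any reference is alive), so the usual mitigations are to keep one shared instance rather than recreating the model per prediction, and to wrap one-off predictions in an autoreleasepool so bridged Objective-C objects are released promptly. A hedged sketch; the prediction(image:) input name is assumed from the model's generated interface.

    import CoreML

    // Scope the model so its backing GPU/ANE resources can be freed once the
    // prediction work is finished; autoreleasepool drains bridged ObjC objects.
    func runSegmentationOnce(on pixelBuffer: CVPixelBuffer) {
        autoreleasepool {
            guard let model = try? DeepLabV3FP16(configuration: MLModelConfiguration()) else { return }
            let output = try? model.prediction(image: pixelBuffer)  // input name assumed
            _ = output   // consume the segmentation result here
        }
        // Once `model` and `output` go out of scope, Core ML can release the
        // associated memory. If you predict repeatedly, keep one long-lived
        // instance instead of re-initializing the model each time.
    }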
Asked
Last updated .
Post marked as unsolved
41 Views

Classifying SwiftUI Paths using Core ML

I am working on an app that classifies drawings made by the user. Currently, the drawings are represented as a set of SwiftUI Paths, and I have a trained Core ML model that takes images and outputs class names. I have written some code that can take images in the form of UIImages and feed them into my classifier, but I am unsure how I should adapt my code to take Paths. Here is my current code:

    import UIKit
    import CoreML
    import Vision
    import ImageIO
    import SwiftUI

    struct ImageClassifier {
        var classifier = SymbolClassifier()

        func classify(image: CGImage) -> String? {
            let pixelBuffer = image.pixelBuffer(width: 300, height: 300, orientation: CGImagePropertyOrientation.up)!
            let output = try? self.classifier.prediction(image: pixelBuffer)
            return output?.classLabel
        }

        func classifyUIImage(image: UIImage) -> String? {
            guard let imageAsCGImage: CGImage = convertUIImageToCGImage(image: image) else {
                return nil
            }
            return classify(image: imageAsCGImage)
        }

        func classifyPath(path: Path) -> String? {
            //???
            return nil
        }

        func convertUIImageToCGImage(image: UIImage) -> CGImage? {
            let inputImage = CIImage(image: image)!
            let context = CIContext(options: nil)
            return context.createCGImage(inputImage, from: inputImage.extent)
        }
    }

Here - https://github.com/ivyjsgit/Manuscript-iOS/blob/main/Manuscript/Libraries/CGImage%2BCVPixelBuffer.swift is the image.pixelBuffer extension and here - https://github.com/ivyjsgit/Manuscript-iOS/blob/main/Manuscript/Manuscript/SymbolClassifier.mlmodel is the model
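One way to fill in classifyPath is to rasterize the Path into a CGImage with UIGraphicsImageRenderer and reuse the existing classify(image:) method. A hedged sketch of a replacement for the stub inside ImageClassifier; the 300x300 canvas, stroke width, and colors are assumptions that should be matched to the classifier's training data.

    func classifyPath(path: Path) -> String? {
        let size = CGSize(width: 300, height: 300)   // match the model's input size
        let renderer = UIGraphicsImageRenderer(size: size)
        let image = renderer.image { context in
            // White background, black stroke; adjust to match the training images.
            UIColor.white.setFill()
            context.fill(CGRect(origin: .zero, size: size))
            UIColor.black.setStroke()
            context.cgContext.setLineWidth(4)
            // If the path's coordinates don't fit the canvas, scale and translate
            // path.boundingRect into `size` before adding it.
            context.cgContext.addPath(path.cgPath)
            context.cgContext.strokePath()
        }
        guard let cgImage = image.cgImage else { return nil }
        return classify(image: cgImage)
    }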
Asked
by AprilZion.
Last updated .