Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

130 Posts
Post not yet marked as solved
1 Reply
314 Views
🐞 Describe the bug

When converting a PyTorch model where GroupNorm (GN) is used and the input is dynamic, the GN conversion fails. The problem happens when converting a PyTorch traced model -> Core ML. It seems that here h and w are expected to be integers, but a dynamic-input model's h and w are placeholders. Any advice/quick hack would be really appreciated.

Trace: please run the code below to see the error.

To Reproduce

Here is the minimum code to reproduce the error:

```python
import torch
import torch.nn as nn
import coremltools as ct


class DynamicGN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
                              stride=1, padding=1, bias=False)
        self.gn = nn.GroupNorm(num_groups=4, num_channels=16)

    def forward(self, x):
        y = self.gn(self.conv(x))
        return y


def main():
    model = DynamicGN()
    input = torch.ones((1, 3, 16, 16))
    output = model(input)
    traced_model = torch.jit.trace(model, input, check_trace=True)
    img_shape = ct.Shape(shape=(1, 3, ct.RangeDim(8, 128), ct.RangeDim(8, 128)))
    img = ct.TensorType(name='image', shape=img_shape)
    mlmodel = ct.convert(model=traced_model, inputs=[img])


if __name__ == '__main__':
    main()
```

Error:

```
ValueError: Cannot add const [1, 4, 4, is2, is3]
```

System environment:
- coremltools version: 5.0b2
- OS: macOS 11.3.1
- Xcode version (if applicable):
- How you installed Python (anaconda, virtualenv, system):
- Python version: 3.8.5
- Other relevant information: PyTorch 1.9.0
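For intuition about why baked-in spatial sizes break here: GroupNorm's statistics are reductions over (C/G, H, W) computed at run time, so nothing in the math itself requires H and W as compile-time constants. A pure-Python reference implementation (an illustration only, not the coremltools kernel):

```python
import math

def group_norm(x, num_groups, eps=1e-5):
    """Reference GroupNorm over a nested list x of shape [N][C][H][W].

    The mean/variance are computed over each group's (C/G, H, W) values
    at run time, so the same code handles any H and W -- which is why a
    converter that bakes H and W into a const fails on dynamic inputs.
    """
    n, c = len(x), len(x[0])
    cg = c // num_groups
    out = [[[[0.0] * len(row) for row in ch] for ch in sample] for sample in x]
    for b in range(n):
        for g in range(num_groups):
            # flatten all values belonging to this group
            vals = [v for ch in x[b][g * cg:(g + 1) * cg] for row in ch for v in row]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            inv = 1.0 / math.sqrt(var + eps)
            for ci in range(g * cg, (g + 1) * cg):
                for hi, row in enumerate(x[b][ci]):
                    for wi, v in enumerate(row):
                        out[b][ci][hi][wi] = (v - mean) * inv
    return out
```

Because the reduction extent is just len(vals), H and W only ever appear as runtime quantities, which is the behaviour one would want the converter to preserve.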
Post not yet marked as solved
3 Replies
666 Views
Hi, I found that Core ML on iOS 15 consumes almost 450 MB more memory than on iOS 14 for a prediction with my model. It can easily cause an out-of-memory crash on devices with little memory. To reproduce the issue, I created the sample code below. It shows around 450 MB on iOS 15 but only about 4 MB on iOS 14.x. I have already submitted the issue through Feedback Assistant, but does anyone have any idea how to work around it? I've confirmed the issue still exists on iOS 15.1 beta 2. Any advice would be appreciated.

```swift
import CoreML
import UIKit

class ViewController: UIViewController {
    @IBOutlet weak var memoryLabel: UILabel!

    @IBAction func predAction(_ sender: Any) {
        let modelConf = MLModelConfiguration()
        modelConf.computeUnits = .cpuOnly
        do {
            let input = UnsafeMutablePointer<Float>.allocate(capacity: 512 * 1024 * 2)
            input.initialize(repeating: 0.0, count: 512 * 1024 * 2)
            let inputMultiArray = try MLMultiArray(dataPointer: input,
                                                  shape: [1, 512, 1024, 2],
                                                  dataType: .float32,
                                                  strides: [(512 * 1024 * 2) as NSNumber,
                                                            (1024 * 2) as NSNumber, 2, 1],
                                                  deallocator: nil)
            let model = try TestModel(configuration: modelConf)
            let memBefore = os_proc_available_memory()
            let _ = try model.prediction(input: TestModelInput(strided_slice_3_0: inputMultiArray))
            let memAfter = os_proc_available_memory()
            print("memory size for prediction", memBefore - memAfter)
            memoryLabel.text = String(describing: (memBefore - memAfter))
        } catch {
            print("error")
        }
    }

    override func viewDidLoad() {
        super.viewDidLoad()
    }
}
```
Post marked as solved
1 Reply
403 Views
I have hundreds of thousands of image files, cropped and grouped into class folders appropriately, that I would like to use in Create ML to train an object detection model. I do not have .json annotation files for any of those cropped images. Q1: Am I required to create a .json annotation file for each individual image and just set the bounding box coordinates to the four corners of the image, since the full image is the already-cropped object? Or is there a way to leverage what I have directly without creating all those .json files? Q2: Does anyone have a handy script to help automate the creation of those files? :-) Thanks everyone.
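For Q2, a sketch of such a script: it walks the class folders, reads each image's size (PNG shown here; swap in another reader for JPEG), and writes one full-frame box per image. It assumes Create ML's center-based coordinate convention (x/y is the box center in pixels); the function names are mine, so double-check the format against the Create ML docs before training:

```python
import json
import os
import struct

def png_size(path):
    """Read width/height straight from the PNG IHDR chunk (bytes 16-24)."""
    with open(path, "rb") as f:
        header = f.read(24)
    if header[:8] != b"\x89PNG\r\n\x1a\n":
        raise ValueError("not a PNG file: " + path)
    return struct.unpack(">II", header[16:24])

def build_annotations(root):
    """Walk class folders under root; emit one full-frame box per image.

    Assumes Create ML's center-based convention: x/y is the box center,
    width/height the box size, all in pixels.
    """
    entries = []
    for label in sorted(os.listdir(root)):
        folder = os.path.join(root, label)
        if not os.path.isdir(folder):
            continue
        for name in sorted(os.listdir(folder)):
            if not name.lower().endswith(".png"):
                continue
            w, h = png_size(os.path.join(folder, name))
            entries.append({
                "image": name,
                "annotations": [{
                    "label": label,
                    "coordinates": {"x": w / 2, "y": h / 2,
                                    "width": w, "height": h},
                }],
            })
    return entries

def write_annotations(root, out_path):
    """Dump the annotation list next to the images, Create ML style."""
    with open(out_path, "w") as f:
        json.dump(build_annotations(root), f, indent=2)
```

Running write_annotations(root, os.path.join(root, "annotations.json")) then produces one entry per image with the box covering the whole frame.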
Post not yet marked as solved
0 Replies
315 Views
A heads-up to other developers using Core ML: make sure to test your apps and Core ML models on an A15 device like the new iPhone 13. My app uses Core ML for a custom image segmentation model, which runs fine on all previous devices but hangs/crashes on my iPhone 13 Pro and (based on customer reports) on other devices with the A15. The error seems to happen when part of the model is executing on the Neural Engine. I worked around it for now by not using the Neural Engine when running on A15 devices:

```swift
modelConfig.computeUnits = UIDevice.current.hasA15Chip() ? .cpuAndGPU : .all
```

where hasA15Chip() is a custom helper method. For Apple engineers: I provided additional information in FB9665812.
Post not yet marked as solved
1 Reply
380 Views
I have been asked to start using Core ML and convert our TensorFlow ML model, so I wanted to follow the quick-start example documentation to see how to do this. Here are the steps I took:

1. Install the Conda package installer:
   - Download the Miniconda3 Python 3.9 package: https://repo.anaconda.com/miniconda/Miniconda3-py39_4.10.3-MacOSX-x86_64.pkg
   - Run the package installation.
   - Create an environment: conda create --name coremltools-env (created in <user>/opt/miniconda3/envs/coremltools-env)
   - Activate the environment: conda activate coremltools-env
   - Install pip for this environment: conda install pip
2. Install Core ML Tools in the conda environment: pip install coremltools==5.0b5 (note this is a beta version; I could not find the previous stable version)
3. Install TensorFlow within the conda environment: pip install tensorflow
4. Install h5py: pip install h5py
5. Run the Python script to load the TensorFlow model (from a remote location) and convert it to Core ML.

I then ran the Python scripts as described in the documentation (except the test), and I get this error message:

```
RuntimeWarning: You will not be able to run predict() on this Core ML model. Underlying exception message was: Error compiling model: "Error reading protobuf spec. validator error: Model specification version field missing or corrupt.".
_warnings.warn(
```

Note: I have included the full text of the message. Is this a problem with Core ML Tools v5.0 beta 5? Should I try a previous version of Core ML Tools, and if so, where can I find the correct version numbers?
Post not yet marked as solved
0 Replies
366 Views
Hi guys! I'm studying Core ML conversion now. I want to convert a model that handles 3D point cloud data, but I can't write the code that determines the input shape. The shape of the 3D data depends on the number of points, and that varies every time the LiDAR captures data. Is there any way I can do this?
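Two common routes here: declare the point dimension as flexible when converting (coremltools exposes ct.RangeDim for this in its flexible-shape support), or resample every cloud to a fixed point count before inference so the model always sees a static shape. A minimal pure-Python sketch of the latter (the helper name is my own):

```python
import random

def resample_points(points, n, seed=0):
    """Return exactly n points from a variable-size cloud.

    Subsamples without replacement when the cloud is too large, and pads
    by repeating randomly chosen points when it is too small -- a common
    trick for feeding variable LiDAR clouds to a fixed-shape model.
    """
    rng = random.Random(seed)
    if len(points) >= n:
        return rng.sample(points, n)
    extra = [rng.choice(points) for _ in range(n - len(points))]
    return points + extra
```

With this in the preprocessing path, the converted model can use a plain static input shape such as (1, n, 3).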
Post not yet marked as solved
1 Reply
501 Views
My app uses Core ML to run a neural network (Core ML uses the CPU for most layers). Sometimes performance is very good, but after 30 seconds it becomes slow (fps drops significantly). I profiled it and found that iOS uses the performance cores for my app at the beginning; after 30 seconds it stops using the performance cores and starts using the efficiency cores (whose frequency is lower). You can see this in the screenshot. The QoS of my queue is .userInitiated. I also tried .userInteractive, but it doesn't change anything. I assume this is a core-scheduling feature of iOS, but I cannot find any information about it. Is there documentation that describes this behavior? Can I make iOS use the performance cores for my app all the time? I use an iPhone XR with iOS 14.7.1.
Post not yet marked as solved
0 Replies
239 Views
Good morning everybody, I am looking for an API to apply neural style transfer to a video. You can easily do it using Xcode or Create ML, but I was not able to find an API to do it inside an app. There is an API to do it for photos using a CVPixelBuffer (or an image / image URL), but I cannot find anything related to a video URL. Has anyone ever tried neural style transfer with videos? If yes, could you please let me know how you were able to do it? Thank you in advance for your help.
Post marked as solved
2 Replies
632 Views
In my mobile application, I observe a memory leak when running inference with my image convolution model. The leak occurs when getting predictions from the model. Given a pointer to a loaded MLModel object called module and an input feature provider feature_provider (of type MLDictionaryFeatureProvider*), the memory leak is observed each time a prediction is made by calling:

```objc
[module predictionFromFeatures:feature_provider error:NULL];
```

The amount of memory leaked per iteration appears to be related to the output size of the model. Assuming the mobile GPU backend is running in half precision (float16), I observe the following for the given output sizes:

- Output image of dimension [1,3,3840,2160] (of size 1*3*3840*2160*16 bits / (8 bits * 1000^2) == 49.7664 MB): constant increase in memory of approximately 91.7 MB after each image prediction.
- Output image of dimension [1,3,2048,1080] (of size 1*3*2048*1080*16 bits / (8 bits * 1000^2) == 13.27104 MB): constant increase in memory of approximately 23.7 MB after each image prediction.

Is there a known issue with Core ML's MLModel predictionFromFeatures that allocates memory each time it is called? Or is this the intended behaviour? At the moment this is preventing me from running inference on mobile devices, and I was wondering if anyone has a suggested workaround, patch, or advice. Thank you in advance, and please find the information to reproduce the issue below.

To Reproduce

To reproduce the problem, a simple model with three convolutions and one pixel-shuffle layer was converted from PyTorch to an MLModel. The MLModel was then run with a debugger in a mobile application. A breakpoint was set on the line computing the predictions in a loop, and the memory use after each iteration was observed to increase.
As an alternative to setting a breakpoint, the number of prediction iterations can be set to 50 (assuming the output size is [1,3,3840,2160] and phone memory is 4 GB), which causes the application to run out of memory at runtime.

The PyTorch model:

```python
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        upscale_factor = 8
        self.Conv1 = nn.Conv2d(in_channels=48, out_channels=48, kernel_size=3, stride=1)
        self.Conv2 = nn.Conv2d(48, 48, 3, 1)
        self.Conv3 = nn.Conv2d(48, 3 * (upscale_factor * upscale_factor), 3, 1)
        self.PS = nn.PixelShuffle(upscale_factor)

    def forward(self, x):
        Conv1 = self.Conv1(x)
        Conv2 = self.Conv2(Conv1)
        Conv3 = self.Conv3(Conv2)
        y = self.PS(Conv3)
        return y
```

The PyTorch to MLModel converter:

```python
import torch
import coremltools

def convert_torch_to_coreml(torch_model, input_shapes, save_path):
    torchscript_model = torch.jit.script(torch_model)
    mlmodel = coremltools.converters.convert(
        torchscript_model,
        inputs=[coremltools.TensorType(name=f'input_{i}', shape=input_shape)
                for i, input_shape in enumerate(input_shapes)],
    )
    mlmodel.save(save_path)
```

Generate the MLModel using the above definitions:

```python
if __name__ == "__main__":
    torch_model = Model()
    # input_shapes = [[1, 48, 256, 135]]  # 2K
    input_shapes = [[1, 48, 480, 270]]  # 4K
    coreml_model_path = "./toy.mlmodel"
    convert_torch_to_coreml(torch_model, input_shapes, coreml_model_path)
```

Mobile application: the mobile application was generated using PyTorch's iOS TestApp and adapted for our use case. The adapted TestApp is available here.
The most relevant lines in the application for loading the model and running inference are included below.

Set the MLMultiArray contents from the input tensor's data pointer:

```objc
+ (MLMultiArray*)tensorToMultiArray:(at::Tensor)input {
    float* input_ptr = input.data_ptr<float>();
    int batch = (int)input.size(0);
    int ch = (int)input.size(1);
    int height = (int)input.size(2);
    int width = (int)input.size(3);
    int pixels = ch * height * width;
    NSArray* shape = @[[NSNumber numberWithInt:batch],
                       [NSNumber numberWithInt:ch],
                       [NSNumber numberWithInt:height],
                       [NSNumber numberWithInt:width]];
    MLMultiArray* output = [[MLMultiArray alloc] initWithShape:shape
                                                      dataType:MLMultiArrayDataTypeFloat32
                                                         error:NULL];
    float* output_ptr = (float*)output.dataPointer;
    for (int pixel_index = 0; pixel_index < pixels; ++pixel_index) {
        output_ptr[pixel_index] = input_ptr[pixel_index];
    }
    return output;
}
```

Load the model, set the input feature provider, and run inference over multiple iterations:

```objc
NSError* __autoreleasing __nullable* __nullable error = nil;
NSString* modelPath = [NSString stringWithUTF8String:model_path.c_str()];
NSURL* modelURL = [NSURL fileURLWithPath:modelPath];
NSURL* compiledModel = [MLModel compileModelAtURL:modelURL error:error];
MLModel* module = [MLModel modelWithContentsOfURL:compiledModel error:NULL];

NSMutableDictionary* feature_inputs = [[NSMutableDictionary alloc] init];
for (int i = 0; i < inputs.size(); ++i) {
    NSString* key = [NSString stringWithFormat:@"input_%d", i];
    [feature_inputs setValue:[Converter tensorToMultiArray:inputs[i].toTensor()] forKey:key];
}
MLDictionaryFeatureProvider* feature_provider =
    [[MLDictionaryFeatureProvider alloc] initWithDictionary:feature_inputs error:NULL];

// Running inference on the model results in a memory leak
for (int i = 0; i < iter; ++i) {
    [module predictionFromFeatures:feature_provider error:NULL];
}
```

Complete example source: the complete minimal example of both the MLModel generation and the TestApp is available here.
System environment:

Original environment:
- coremltools version: 5.0b5
- OS: built on macOS, targeting iOS for the mobile application
- macOS version: Big Sur (11.4)
- iOS version: 14.7.1 (run on iPhone 12)
- Xcode version: 12.5.1 (12E507)
- How Python was installed: from source
- Python version: 3.8.10
- How PyTorch was installed: from source
- PyTorch version: 1.8.1

Updated ("latest") environment:
- coremltools version: 5.0b5
- OS: built on macOS, targeting iOS for the mobile application
- macOS version: Big Sur (11.4)
- iOS version: 15.0.2 (run on iPhone 12)
- Xcode version: 13.0 (13A233)
- How Python was installed: from source
- Python version: 3.8.10
- How PyTorch was installed: from source
- PyTorch version: 1.10.0-rc2

Additional information:

Given the model definition and tensor output shapes above, the corresponding tensor input shapes for the model are as follows:
- Output shape [1,3,3840,2160] has input shape [1,48,480,270]
- Output shape [1,3,2048,1080] has input shape [1,48,256,135]
Post not yet marked as solved
2 Replies
218 Views
Hi, I have a custom YOLOv4 MLModel, and when I try to open it on an iPhone 12 or iPhone 12 Pro I get an instant crash with this:

```
[espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid blob shape": ANE does not support blob rank < 0 status=-7
```

It works fine on older devices, and I have no iPhone 13 to test on. I can't find more information on this crash and I was wondering if someone has a clue to help me. Thanks!
Post not yet marked as solved
0 Replies
252 Views
When trying to train an image classifier with Create ML, I hit the train button and, after the feature-extraction phase, the training tab chart is empty. I have tried with different images and even training different models (one of them the typical dog vs. cat model), but the result is the same. How can I get this to work?
Post not yet marked as solved
0 Replies
299 Views
I have trained a model with Create ML. If I test the results with the Preview option that comes with the mlmodel, it shows me some predictions with a given confidence, but if I go through Vision + Core ML to check the predictions for the same images, the confidence is totally different. Here is an example of the output; the console output is from the playground with Vision + Core ML, and the image footer is from the preview of the model itself. I have sent this model to a colleague who uses coremltools in Python, and the results are also different. Does the prediction depend on where you execute the model?
Post not yet marked as solved
0 Replies
228 Views
Hi, in the Linux/Unix world we prefer to use Nvidia GPUs for machine learning research, as the GPU cores can speed up processing significantly. How does it work in the Mac world? Since the M1 MacBook Pro does not support Nvidia GPUs, what sort of hardware is recommended for machine learning research on the Mac using third-party APIs or Apple's APIs? For this case, is the M1 Max better than the M1 Pro? Is it better to get a system with 32 GPU cores rather than 24? Do Apple's ML APIs take advantage of these GPU cores?
Post marked as solved
1 Reply
233 Views
I'm a beginner; I learned Core ML a few months ago. I remember that double-clicking the model class (red box in Figure 1) or clicking the arrow (green box in Figure 2) used to show the mlmodel code, as in Figure 3. But now I can't view the mlmodel code; I can only write "let model = YOLOv3Tiny()" in Swift and Command-click it to view it. Why? Is there some setup I'm missing? I use Xcode 13.
Post not yet marked as solved
2 Replies
402 Views
Hi. I would like to use (as I thought was possible) a Core ML model to identify the main colors of an image. The idea is to detect the colors used in fashion images to derive a kind of "color trend" across a set of images. I found this question in the forum already, but it never got an answer (follow-up questions were not answered by the original poster): https://developer.apple.com/forums/thread/94324 Maybe Core ML models are not the way to do this (are they more about objects and text)? Any hints at other techniques are welcome, too. The only approach I do not want to follow is using online services, as the images would have to be delivered to them and are usually kept there. I want to realize an on-premise approach. Thanks for any hints!
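For an on-premise approach, you may not need a Core ML model at all: dominant colors can be extracted with a small k-means over the image's RGB pixels, which runs entirely offline. A pure-Python sketch (function names are mine; a real app would get the pixel list from a CGImage/CVPixelBuffer):

```python
def _dist2(a, b):
    # squared Euclidean distance between two RGB triples
    return sum((a[d] - b[d]) ** 2 for d in range(3))

def kmeans_colors(pixels, k, iters=10):
    """Find k dominant colors in a list of (r, g, b) tuples.

    Uses farthest-point initialization to keep the demo deterministic,
    then standard Lloyd iterations: assign each pixel to its nearest
    center, recompute each center as its cluster's mean color.
    """
    centers = [pixels[0]]
    while len(centers) < k:
        centers.append(max(pixels, key=lambda p: min(_dist2(p, c) for c in centers)))
    for _ in range(iters):
        buckets = [[] for _ in range(k)]
        for p in pixels:
            buckets[min(range(k), key=lambda i: _dist2(p, centers[i]))].append(p)
        centers = [tuple(sum(p[d] for p in b) / len(b) for d in range(3)) if b else c
                   for b, c in zip(buckets, centers)]
    return centers
```

Aggregating the returned centers over a whole image set then gives the raw material for a "color trend" without any image leaving the device.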
Post not yet marked as solved
1 Reply
272 Views
I am trying to develop an app that classifies an image taken from the camera or chosen from the image library, using a model trained with Apple's Core ML. The model is properly trained and tested; it showed no problem when I tested it using Preview after adding it to the Xcode project. But when I try to get the prediction using Swift, the results are wrong and completely different from what Preview showed. It feels like the model is untrained.

This is my code to access the prediction made by the model:

```swift
let pixelImage = buffer(from: (image ?? UIImage(named: "imagePlaceholder"))!)
self.imageView.image = image
guard let result = try? imageClassifier!.prediction(image: pixelImage!) else {
    fatalError("unexpected error happened")
}
let className: String = result.classLabel
let confidence: Double = result.classLabelProbs[result.classLabel] ?? 1.0
classifier.text = "\(className)\nWith Confidence:\n\(confidence)"
print("the classification result is: \(className)\nthe confidence is: \(confidence)")
```

imageClassifier is the model, created with this line of code before the segment above:

```swift
let imageClassifier = try? myImageClassifier(configuration: MLModelConfiguration())
```

myImageClassifier is the name of the ML model I created using Core ML. The image is correct, but the prediction shows a different result from Preview even for the same image. The image had to be converted from type UIImage to CVPixelBuffer, since prediction only accepts input of type CVPixelBuffer; pixelImage in the code above is the image after that conversion. I don't know which part causes this error. I downloaded the sample project from this tutorial, and the code executed without error and with correct results when I instead used MobileNet and its Core ML model. Is there something wrong with my code or with the Core ML model I created? Any form of help would be appreciated.
Post not yet marked as solved
1 Reply
333 Views
In a section of my app I would like to recommend restaurants to users based on certain parameters, some of which have a higher weighting than others. In this WWDC video a very similar app is made: if a user likes a dish, a value of 1.0 is assigned to the specific keywords that apply to the dish, with a value of -1.0 for all other keywords that don't apply. For my app, if the user has ordered I apply a value of 1.0 (to the keywords that apply), but if a user has only expressed interest (without ordering), can I apply a value of 0.5 (and -0.5) instead? Would the model adapt to this?
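For intuition about whether graded values work: in a linear keyword model like the one in that session, a 0.5 update simply contributes half as strongly as a 1.0 update, so graded feedback is coherent. A toy sketch (not Create ML's recommender, just the arithmetic; the function names are hypothetical):

```python
def update_preferences(prefs, all_keywords, item_keywords, feedback):
    """Update per-keyword weights from one interaction.

    feedback is graded: e.g. 1.0 for an order, 0.5 for expressed
    interest (the values the post proposes). Keywords on the item move
    up by feedback; all other keywords move down by the same amount,
    mirroring the +1/-1 scheme but scaled.
    """
    for k in all_keywords:
        delta = feedback if k in item_keywords else -feedback
        prefs[k] = prefs.get(k, 0.0) + delta
    return prefs

def score(prefs, item_keywords):
    # Higher score = better match between the user and this restaurant.
    return sum(prefs.get(k, 0.0) for k in item_keywords)
```

An order and a mere interest in opposite directions partially cancel, with the order counting twice as much, which is exactly the behaviour the graded scheme is after.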