Integrate machine learning models into your app using Core ML.

Core ML Documentation

Posts under Core ML tag

130 Posts
Post marked as solved
2 Replies
272 Views
Hi everyone, I'm not totally new to Swift anymore, but my skills are still limited. I'm stuck on this Apple tutorial: https://developer.apple.com/documentation/createml/creating_an_image_classifier_model/#overview. I have successfully created several .mlmodel files as described there. However, when I reach the step of integrating one into Xcode, I don't get any predictions from my model. I followed every step in the tutorial and downloaded the example code. The tutorial says to just swap in your own model on the indicated line, but for me that doesn't work: I can build the app and test it in the iPhone simulator, but the only output I get is "No predictions. (Check console log.)". Digging through the code, I found that this message comes from MainViewController.swift (lines 99–103):

```swift
private func imagePredictionHandler(_ predictions: [ImagePredictor.Prediction]?) {
    guard let predictions = predictions else {
        updatePredictionLabel("No predictions. (Check console log.)")
        return
    }
    // …
}
```

As I understand the code, it shows this message when no predictions come back from the model. If I use a model provided by Apple (such as MobileNetV2), the example code works every time and returns predictions, so I'm fairly sure the issue is on my side, but I can't figure it out. My model is trained on images from the Fruits-360 dataset plus some images of charts I added myself; to keep the classes balanced I used 70 pictures of each class. In the Create ML preview the model predicts my validation pictures correctly, but once it's integrated into Xcode it can't produce predictions for the exact same images. Does anyone know how to resolve this? I'm using the latest Xcode version. Thanks in advance.
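For anyone tracing the same path: the handler logic quoted above can be reproduced in isolation, which makes it easy to confirm the fallback message only appears when the predictions array is nil or empty. A minimal sketch, where the `Prediction` type and its field names are hypothetical stand-ins for the sample's `ImagePredictor.Prediction`:

```swift
import Foundation

// Hypothetical stand-in for the sample's ImagePredictor.Prediction type.
struct Prediction {
    let classification: String
    let confidencePercentage: String
}

// Mirrors the handler's logic: nil (or empty) predictions produce the
// fallback message; otherwise the top results are joined for the label.
func predictionText(_ predictions: [Prediction]?) -> String {
    guard let predictions = predictions, !predictions.isEmpty else {
        return "No predictions. (Check console log.)"
    }
    return predictions
        .map { "\($0.classification) - \($0.confidencePercentage)%" }
        .joined(separator: "\n")
}
```

So if the label shows the fallback text, the classifier really did return nothing (or nil), and the console log is the next place to look.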
Post not yet marked as solved
0 Replies
169 Views
I have set up a project using DeepLabV3 for image semantic segmentation using the pre-trained DeepLabV3 model from Apple. This works consistently well for delineating (creating a black/white mask) the main object in a given image, but only as long as that main object is a person. When I try other random objects (coffee cups, tractor, stapler), with very clear outline/contrast, I get very poor results, if not a completely black mask. Is the pre-trained DeepLabV3 model just for people? If I need to do something (re-training, config, etc), what is it? Thanks, Neal
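For context on why unlisted objects can produce an empty mask: segmentation models of this kind emit a per-pixel class-index map over a fixed label set chosen at training time, and the black/white mask is just a threshold over that map. A minimal sketch of that final step (the use of index 15 for "person" is an assumption based on PASCAL VOC labeling, not something confirmed by Apple's model page):

```swift
import Foundation

// A DeepLabV3-style model outputs a 2-D map of class indices, one per pixel.
// Classes absent from the training label set can never appear in the map,
// so thresholding for them yields an all-black mask.
func binaryMask(labels: [[Int]], forClass target: Int) -> [[UInt8]] {
    labels.map { row in
        row.map { $0 == target ? UInt8(255) : UInt8(0) }
    }
}
```

If the objects you need aren't in the model's label set, retraining (or a different segmentation model) is the only way to get masks for them.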
Post not yet marked as solved
3 Replies
291 Views
I am trying to deploy our first Core ML model through the dashboard at https://ml.developer.apple.com/. Immediately after I go into the site, I get a message that reads "Your request is invalid.". I click "+" to add a model collection, I enter a collection ID, a description and a single model ID and then tap on "Create". I get a message that reads "One of the fields was invalid.". I have tried changing the IDs and the description, but there is no way to make it work. I have been trying this for hours. Could you please guide me on how to make it work?
Post not yet marked as solved
3 Replies
352 Views
Hi there, I'm trying to create a Core ML custom layer that runs on the GPU, using Objective-C for the Core ML setup and Metal for the GPU programming. I have created the Core ML model with the custom layer and can successfully execute it on the GPU. In my actual GPU execution setup I want to create an MTLBuffer from an input MTLTexture, but I can't seem to do so, or get access to the memory address of the MTLTexture's storage. When defining a custom layer in Core ML to run on the GPU, the following method must be implemented, with the given prototype:

```objc
- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
                       inputs:(NSArray<id<MTLTexture>> *)inputs
                      outputs:(NSArray<id<MTLTexture>> *)outputs
                        error:(NSError * _Nullable __autoreleasing *)error {
    // GPU setup, moving data, encoding, execution, and so on
}
```

Here the inputs are passed as an NSArray of MTLTextures, which we then pass on to the Metal shader for computation. My problem is that I want to pass the Metal shader an MTLBuffer that points to the input data, say inputs[0], but I'm having trouble copying the input MTLTexture into an MTLBuffer. I have tried using an MTLBlitCommandEncoder to copy the data from the MTLTexture to an MTLBuffer like so:

```objc
id<MTLBuffer> test_buffer = [command_PSO.device newBufferWithLength:8
                                                            options:MTLResourceStorageModeShared];
id<MTLBlitCommandEncoder> blitCommandEncoder = [commandBuffer blitCommandEncoder];
[blitCommandEncoder copyFromTexture:inputs[0]
                        sourceSlice:0
                        sourceLevel:0
                       sourceOrigin:MTLOriginMake(0, 0, 0)
                         sourceSize:MTLSizeMake(1, 1, 1)
                           toBuffer:test_buffer
                  destinationOffset:0
             destinationBytesPerRow:8
           destinationBytesPerImage:8];
[blitCommandEncoder endEncoding];
```

This should copy a single pixel from the MTLTexture inputs[0] into the MTLBuffer test_buffer, but it does not. MTLTexture's getBytes also doesn't work, since the inputs have MTLResourceStorageModePrivate set.
When I inspect the input MTLTexture, I notice that the attribute buffer = <null>, and I'm wondering if this could be the issue: since the texture was not created from a buffer, perhaps it doesn't expose its memory address directly. Surely there must be some way to get at the data? For reference, here is the input MTLTexture description:

```
<CaptureMTLTexture: 0x282469500> -> <AGXA14FamilyTexture: 0x133d9bb00>
    label = <none>
    textureType = MTLTextureType2DArray
    pixelFormat = MTLPixelFormatRGBA16Float
    width = 8
    height = 1
    depth = 1
    arrayLength = 1
    mipmapLevelCount = 1
    sampleCount = 1
    cpuCacheMode = MTLCPUCacheModeDefaultCache
    storageMode = MTLStorageModePrivate
    hazardTrackingMode = MTLHazardTrackingModeTracked
    resourceOptions = MTLResourceCPUCacheModeDefaultCache MTLResourceStorageModePrivate MTLResourceHazardTrackingModeTracked
    usage = MTLTextureUsageShaderRead MTLTextureUsageShaderWrite
    shareable = 0
    framebufferOnly = 0
    purgeableState = MTLPurgeableStateNonVolatile
    swizzle = [MTLTextureSwizzleRed, MTLTextureSwizzleGreen, MTLTextureSwizzleBlue, MTLTextureSwizzleAlpha]
    isCompressed = 0
    parentTexture = <null>
    parentRelativeLevel = 0
    parentRelativeSlice = 0
    buffer = <null>
    bufferOffset = 0
    bufferBytesPerRow = 0
    iosurface = 0x0
    iosurfacePlane = 0
    allowGPUOptimizedContents = YES
    label = <none>
```
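For what it's worth, buffer = <null> is expected here: that field is only set for textures created from a buffer (via MTLBuffer's newTextureWithDescriptor:offset:bytesPerRow:), so for a private-storage texture there is no CPU-visible address to recover, and a blit into a shared buffer is the right approach. One thing worth double-checking is the blit sizing: MTLPixelFormatRGBA16Float stores 4 channels of 2 bytes each, i.e. 8 bytes per pixel, so a full-row copy of this 8x1 texture needs more than the 8-byte buffer in the snippet. A small arithmetic helper (hypothetical, just to reason about the sizes):

```swift
import Foundation

// Sizing a texture-to-buffer blit. RGBA16Float = 4 channels x 2 bytes
// = 8 bytes per pixel; the destination buffer must hold at least
// bytesPerRow * height bytes for a full-texture copy.
func blitSizes(width: Int, height: Int, bytesPerPixel: Int)
    -> (bytesPerRow: Int, bufferLength: Int) {
    let bytesPerRow = width * bytesPerPixel
    return (bytesPerRow, bytesPerRow * height)
}

let sizes = blitSizes(width: 8, height: 1, bytesPerPixel: 8)
// For the texture in the post: bytesPerRow 64 and bufferLength 64,
// larger than the 8-byte buffer allocated in the snippet above.
```

The single-pixel copy with destinationBytesPerRow:8 is at least internally consistent; for a full-texture copy, sourceSize would also need to be MTLSizeMake(8, 1, 1) with the destination sized accordingly.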
Post not yet marked as solved
0 Replies
203 Views
Following the guide found here, I've been able to preview image classification in Create ML and Xcode. However, when I swap out the MobileNet model for my own and run it as an app, images are not classified accurately. When I check the same images with my model in its Xcode preview tab, the guesses are accurate. I've tried changing this line to each of the available options, but it doesn't seem to help:

```swift
imageClassificationRequest.imageCropAndScaleOption = .centerCrop
```

Does anyone know why a model would work well in preview but not while running in the app? Thanks in advance.
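One thing that can explain a preview/app mismatch of this kind is preprocessing: with .centerCrop, Vision only feeds the model the largest centered square of the image, so anything near the edges never reaches the classifier, while the preview tab may preprocess differently. A rough sketch of the crop geometry (my own illustration of the idea, not Vision's actual implementation):

```swift
import Foundation

// With a center-crop policy, the model sees only the largest centered
// square of the input; the square is then scaled to the model's input size.
// Returns the origin and side length of that square in image coordinates.
func centerCropRegion(imageWidth: Double, imageHeight: Double)
    -> (x: Double, y: Double, side: Double) {
    let side = min(imageWidth, imageHeight)
    return ((imageWidth - side) / 2, (imageHeight - side) / 2, side)
}
```

So for a wide photo with the subject off to one side, the cropped square may simply not contain the subject, even though the same image classifies fine when preprocessed whole.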
Post not yet marked as solved
0 Replies
220 Views
I converted a PyTorch model to a Core ML model and want to use it on an iOS device. I wrote the Swift code, but while it works normally in the simulator, it fails on a real device. My Core ML model information and test environment are as follows:

- Model Type: Neural Network
- Size: 60.8 MB
- Document Type: Core ML Model
- Availability: iOS 14.0+
- torch == 1.8.1
- coremltools == 5.2.0
- iOS version tested: 15.0
- Device: iPhone 12 Pro

The error messages are:

```
2022-04-19 10:36:24.404508+0900 test[2488:1369794] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid blob shape": generic_elementwise_kernel: cannot broadcast: (1, 256, 128, 130) (1, 256, 128, 132)  status=-7
2022-04-19 10:36:24.404716+0900 test[2488:1369794] [coreml] Error in adding network -7.
2022-04-19 10:36:24.405000+0900 test[2488:1369794] [coreml] MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
2022-04-19 10:36:24.405029+0900 test[2488:1369794] [coreml] MLModelAsset: modelWithError: load failed with error Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
test/ContentView.swift:16: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=com.apple.CoreML Code=0 "Error in declaring network." UserInfo={NSLocalizedDescription=Error in declaring network.}
```

I would appreciate it if you could share a solution or point me to related references. Thank you for reading.
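For anyone decoding the same message: the shapes in the exception are the giveaway. An elementwise layer received tensors of shape (1, 256, 128, 130) and (1, 256, 128, 132), which differ on a non-1 axis and therefore cannot broadcast; this often points at a padding or output-size mismatch introduced during conversion, for example when the on-device plan assumes a different input size than the one the model was traced with. A NumPy-style broadcast check for reasoning about such shapes offline (an illustration of the general rule, not Core ML's internal implementation):

```swift
import Foundation

// Two shapes broadcast if, aligned from the trailing axis, each pair of
// dimensions is either equal or one of them is 1. (130, 132) fails this.
func canBroadcast(_ a: [Int], _ b: [Int]) -> Bool {
    let n = max(a.count, b.count)
    for i in 0..<n {
        let da = i < a.count ? a[a.count - 1 - i] : 1
        let db = i < b.count ? b[b.count - 1 - i] : 1
        if da != db && da != 1 && db != 1 { return false }
    }
    return true
}
```

If the model was exported with a flexible or mismatched input size, re-tracing and converting with the exact input shape the app uses is worth trying.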
Post not yet marked as solved
0 Replies
299 Views
I'm trying to transfer this Xcode sample project to a Swift Playgrounds app project. However, when I move all the Swift files and the Core ML file 'ExerciseClassifier.mlmodel' from the folder into the Playgrounds project, I get the error "Cannot find type 'ExerciseClassifier' in scope". What can I do to remove this error and get a properly working project in Playgrounds?
Post not yet marked as solved
0 Replies
185 Views
The release version of my app will not collect user data. I wanted to know whether I can ask beta users on TestFlight for permission to collect ML models trained on their data, after presenting them with the necessary documentation, such as a privacy policy. I also don't want users to be able to use the beta app if they don't agree to it. Are these two things possible? Thanks
Post not yet marked as solved
0 Replies
219 Views
I created a style transfer model using Create ML and cannot save the generated stylized image to the temporary directory. I'm unsure if it has to do with the way I create the pixel buffer (below):

```swift
import Vision
import CoreML
import CoreVideo

let model = style1()

// Set the input size of the model.
var modelInputSize = CGSize(width: 512, height: 512)

// Create a CVPixelBuffer.
var pixelBuffer: CVPixelBuffer?
let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
             kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(modelInputSize.width),
                    Int(modelInputSize.height),
                    kCVPixelFormatType_32BGRA,
                    attrs,
                    &pixelBuffer)

// Put bytes into the pixel buffer.
let context = CIContext()
let argPathUrl = "file:///pathhere"
let modelImageUrl: URL = URL(string: argPathUrl)!
guard let CiImageData = CIImage(contentsOf: modelImageUrl) else { return }
context.render(CiImageData, to: pixelBuffer!)

// Predict the image.
let output = try? model.prediction(image: pixelBuffer!)
let predImage = CIImage(cvPixelBuffer: (output?.stylizedImage)!)
let context2 = CIContext()
let format = kCIFormatRGBA16
try! context2.writePNGRepresentation(of: predImage,
                                     to: FileManager.default.temporaryDirectory.appendingPathComponent("testcgi.png"),
                                     format: format,
                                     colorSpace: CGColorSpace(name: CGColorSpace.sRGB)!,
                                     options: [:])
let saveUrl = "testcgi.png"
return
```
Post not yet marked as solved
4 Replies
437 Views
Good morning, I'm not sure whether I'm alone in this, but BERTSQUAD (https://developer.apple.com/machine-learning/models/#text) seems to have stopped working since the iOS 15.4 update. I tried different configurations as well as the basic example model, and it does not work at all. Are you seeing this issue too? If so, is there a workaround to make it work after the iOS update? Thank you in advance for your help.
Post not yet marked as solved
0 Replies
350 Views
Hi everyone, I'm trying to use the Create ML hand action classifier to detect some simple actions. I'm having some trouble because the model only detects one hand at a time in the scene (even in the model's preview, without any coding), and I need both hands. Is this a bug, or am I doing something wrong? Thank you in advance.
Post not yet marked as solved
0 Replies
276 Views
I would like to know if there are some best practices for integrating Core ML models into a Core Image pipeline, especially when it comes to support for tiling. We are using a CIImageProcessorKernel for integrating an MLModel-based filtering step into our filter chain. The wrapping CIFilter that actually calls the kernel handles the scaling of the input image to the size the model input requires. In the roi(forInput:arguments:outputRect:) method the kernel signals that it always requires the full extent of the input image in order to produce an output (since MLModels don't support tiling). In the process(with:arguments:output:) method, the kernel is performing the prediction of the model on the input pixel buffer and then copies the result into the output buffer. This works well until the filter chain is getting more and more complex and input images become larger. At this point, Core Image wants to perform tiling to stay within the memory limits. It can't tile the input image of the kernel since we defined the ROI to be the whole image. However, it is still calling the process(…) method multiple times, each time demanding a different tile/region of the output to be rendered. But since the model doesn't support producing only a part of the output, we effectively have to process the whole input image again for each output tile that should be produced. We already tried caching the result of the model run between consecutive calls to process(…). However, we are unable to identify that the next call still belongs to the same rendering call, but for a different tile, instead of being a different rendering entirely, potentially with a different input image. If we'd have access to the digest that Core Image computes for an image during processing, we would be able to detect if the input changed between calls to process(…). But this is not part of the CIImageProcessorInput. What is the best practice here to avoid needless reevaluation of the model? 
How does Apple handle that in their ML-based filters like CIPersonSegmentation?
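Without access to Core Image's internal digest, one workaround is to fingerprint the input yourself (for example, the input extent plus a content hash computed once per render) and memoize the model output across consecutive process(…) calls. A hypothetical single-entry cache along those lines:

```swift
import Foundation

// Keeps the most recent model result keyed by an input fingerprint.
// Consecutive tile calls with the same fingerprint reuse the cached value;
// a new fingerprint (a genuinely different render) recomputes and replaces it.
final class SingleEntryCache<Key: Equatable, Value> {
    private var entry: (key: Key, value: Value)?

    func value(for key: Key, compute: () -> Value) -> Value {
        if let cached = entry, cached.key == key { return cached.value }
        let fresh = compute()
        entry = (key, fresh)
        return fresh
    }
}
```

This sketches the caching idea only; computing a cheap, reliable fingerprint for the input pixel buffer (and invalidating safely across renders) is the hard part, and I don't know of a documented way to hook into Core Image's own digest.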
Post not yet marked as solved
0 Replies
289 Views
My activity classifier is used in tennis sessions, where there are necessarily multiple people on the court. There is also a decent chance other courts' players will be in the shot, depending on the angle and lens. For my training data, would it be best to crop out adjacent courts?
Post not yet marked as solved
0 Replies
289 Views
For a Create ML activity classifier, I’m classifying “playing” tennis (the points or rallies) and a second class “not playing” to be the negative class. I’m not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since it seems like the average duration for both the “playing” and “not playing” labels. When choosing this parameter however, I’m wondering if it affects performance, both speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
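On the speed question: assuming the classifier slides a fixed-length window over the clip (my reading of how action classification is evaluated, not a documented guarantee), a shorter action duration means more windows over the same footage, hence more Vision observations and more processing per video. Back-of-the-envelope arithmetic:

```swift
import Foundation

// Number of sliding prediction windows over a clip, with window length and
// stride in seconds. Halving the window (and stride) roughly doubles the
// number of observations returned for the same clip.
func windowCount(clipSeconds: Double, windowSeconds: Double, strideSeconds: Double) -> Int {
    guard clipSeconds >= windowSeconds, strideSeconds > 0 else { return 0 }
    return Int((clipSeconds - windowSeconds) / strideSeconds) + 1
}
```

For a 60-second rally, a 10-second window with a 10-second stride yields 6 predictions, while 5-second windows yield 12, so the duration parameter trades temporal resolution against processing cost as well as matching the typical action length.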
Post not yet marked as solved
1 Reply
288 Views
When importing a CSV file with ~50 columns and ~200 rows, the MLDataTable(contentsOf: inputDataPath, options: parsingOptions) call has parsing issues. Much of the data is "0", but sporadically there are decimal values. If a column has, say, 180 "0"s and the last 20 rows contain decimal values, the column is identified as Int, and those lines are dropped during parsing. Is there a way to provide column type hints? Is there a way to force a column type? Does MLDataTable only look at a handful of rows when determining the column type?
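The dropped lines are consistent with prefix-based type inference: if the parser samples only the first rows of a column, 180 leading "0"s make the column look like Int, and later decimal values then fail to parse under that type. A toy illustration of the effect (my own sketch, not MLDataTable's actual algorithm):

```swift
import Foundation

// Infer a column type from only the first `sampleSize` values. If every
// sampled value parses as Int, the column is typed Int, and any later
// "3.14"-style value would fail to parse and be dropped.
func inferredType(column: [String], sampleSize: Int) -> String {
    let sample = column.prefix(sampleSize)
    return sample.allSatisfy { Int($0) != nil } ? "Int" : "Double"
}
```

A workaround worth trying while waiting for an answer: write the zeros as "0.0" in the CSV so the decimal type is visible from the first row.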
Post not yet marked as solved
1 Reply
382 Views
I need to build a model to add to my app and tried following the Apple docs here. No luck, because I get an error that is discussed in this thread on the forum. I'm still not clear on why the error occurs and can't resolve it. I wonder if Create ML inside Playgrounds is still supported at all? I tried using the Create ML app that you can access through the developer tools, but it just crashes my Mac (a 2017 MBP; is it just too much of a brick to use for ML at this point? I'd think not, because I've recently built and trained relatively simple models using TensorFlow + Python on this machine, and the classifier I'm trying to make now is really simple and doesn't have a huge dataset).