Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

65 Posts
Post not yet marked as solved
0 Replies
624 Views
I am trying to train an object detection model using transfer learning with a small dataset (roughly 650 images and two classes) in Create ML v2.0 (53.2.2) with "Prefer External GPU" checked. I am using a 2018 Mac mini (3.2 GHz i7, 16 GB RAM) with an AMD Radeon Pro 580 eGPU. The problem is that I can only run about 3,500 iterations before I run out of memory and need to pause training. When I resume, my loss increases again and takes a while to get back down to where it was before the pause. Is there a better way to set up the hardware, or any other suggestion, so I can get through all of the iterations without pausing? I don't recall having this issue with Create ML v1.0, so any suggestions would be appreciated.
Posted by bbarry. Last updated.
Post not yet marked as solved
0 Replies
428 Views
I tried Create ML to train on the MNIST dataset, which consists of very small images of the digits 0–9. It's my first time using Create ML, but the training speed is far slower than I expected, given that MNIST is a very small dataset. I am using a 16-inch 2021 MacBook Pro with an M1 Pro, 16 GB RAM, and a 1 TB SSD. Activity Monitor shows the CPU at 100%, 14 of 16 GB of memory in use, 2 GB cached, and 12.5 GB of swap used. Memory used by the MLRecipeExecutionService process is 19.55 GB, and double-clicking for details shows a virtual memory size of 410 GB. I ran sudo powermetrics and observed a GPU power of roughly 50–60 mW, which means the GPU is not being used for training. Disk usage in Activity Monitor shows that MLRecipeExecutionService has written 1.1 TB, while the entire MNIST dataset is only 17.5 MB. I don't understand why training is so slow and uses so many resources; based on what I've learned about machine learning, this is irregular.
Posted by Huakun. Last updated.
Post not yet marked as solved
2 Replies
429 Views
Being brand new to Create ML, I tried to run my own ML project. When creating my own image classifier (the same happens with tabular classification), I fail from the start: after selecting valid training data, Create ML says "Data Analysis stopped". I'm using Create ML Version 3.0 (78.7). Any suggestions?
Posted by MarcoGMuc. Last updated.
Post not yet marked as solved
0 Replies
299 Views
I have trained a model with Create ML. If I test the results with the Preview option that comes with the mlmodel, it shows me some predictions with a given confidence, but if I run the same images through Vision + Core ML, the confidence is totally different. Here is an example of the output: the console output is from a playground using Vision + Core ML, and the image footer is from the model's own preview. I have also sent this model to a colleague who uses coremltools in Python, and his results differ again. Do the predictions depend on where you execute the model?
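One possible cause of such mismatches (offered as a hedged suggestion, not a confirmed diagnosis) is that Vision's default image preprocessing differs from the preprocessing the Create ML preview applies. A sketch, assuming an image-classifier model, of pinning the crop-and-scale option so both paths see the same pixels:

```swift
import Vision
import CoreML

// Vision defaults to .centerCrop, which can discard parts of the image
// and change predictions relative to the Create ML preview. Trying
// .scaleFit or .scaleFill and comparing against the preview can reveal
// whether preprocessing explains the difference.
func makeClassificationRequest(model: VNCoreMLModel) -> VNCoreMLRequest {
    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        for observation in results.prefix(3) {
            print("\(observation.identifier): \(observation.confidence)")
        }
    }
    request.imageCropAndScaleOption = .scaleFit  // compare with .centerCrop / .scaleFill
    return request
}
```

The same image run with different crop-and-scale options can legitimately produce different confidences, because the model receives different pixel data in each case.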
Posted. Last updated.
Post not yet marked as solved
0 Replies
252 Views
When trying to train an image classifier with Create ML, I hit the Train button, and after the feature-extraction phase the chart in the training tab is empty. I have tried different images and even different models (one of them the typical dog-vs-cat model), but the result is the same. How can I get this to work?
Posted. Last updated.
Post not yet marked as solved
1 Reply
364 Views
Hi, is it possible to get the code for the demo app used in this presentation for the dynamic style transfer example, please? Thanks.
Posted by Saddif. Last updated.
Post marked as solved
6 Replies
1.2k Views
Just updated to Xcode 13. A Swift source file imports CreateML but fails with: "Failed to build module 'CreateML'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 5.4 (swiftlang-1205.0.24.14 clang-1205.0.19.54)', while this compiler is 'Apple Swift version 5.5 (swiftlang-1300.0.31.1 clang-1300.0.29.1)'). Please select a toolchain which matches the SDK." Any idea what this is?
Posted by po chun. Last updated.
Post not yet marked as solved
1 Reply
379 Views
I'm getting an error very early in the process, and these tutorials seem very simple, so I'm stumped. This tutorial seems straightforward, but I can't make it past the step where I drag the image sets in: https://developer.apple.com/documentation/createml/creating_an_image_classifier_model (video tutorial: https://www.youtube.com/watch?v=DSOknwpCnJ4). I have one folder titled "Training Data" with two sub-folders, "img1" and "img2". When I drag my "Training Data" folder into the Training Data section, I get the error: "No training data found. 0 invalid files found." I have no idea what is causing this. The images are .jpg files taken from my phone, and I only have six in this initial test. I've tried it with and without an annotations.json file created in COCO Annotator; that made no difference, the error is the same either way. Big Sur 11.5.2, Create ML 3.0.
Posted. Last updated.
Post marked as solved
1 Reply
404 Views
I have hundreds of thousands of cropped image files, grouped appropriately into class folders, that I would like to use in Create ML to train an object detection model. I do not have .json annotation files for any of those cropped images. Q1: Am I required to create a .json annotation file for each individual image, with the bounding-box coordinates set to the four corners of the image (since the full image is the already-cropped object)? Or is there a way to use what I have directly without creating all those .json files? Q2: Does anyone have a handy script to help automate the creation of those files? :-) Thanks, everyone.
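If full-frame boxes turn out to be acceptable for Q1, a script along these lines could cover Q2. This is a sketch, untested against real data, assuming Create ML's object-detection JSON format (one entry per image, with the box center and size given in pixels) and a `root/<label>/<image>` folder layout:

```swift
import Foundation
import ImageIO

// Build one Create ML annotation whose bounding box spans the whole image.
// Coordinates are the box *center* plus width/height, in pixels.
func annotation(for file: String, label: String, width: Int, height: Int) -> [String: Any] {
    return [
        "image": file,
        "annotations": [[
            "label": label,
            "coordinates": [
                "x": width / 2, "y": height / 2,   // box center
                "width": width, "height": height   // full image
            ]
        ]]
    ]
}

// Read pixel dimensions without decoding the whole image.
func pixelSize(of url: URL) -> (Int, Int)? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let props = CGImageSourceCopyPropertiesAtIndex(source, 0, nil) as? [CFString: Any],
          let w = props[kCGImagePropertyPixelWidth] as? Int,
          let h = props[kCGImagePropertyPixelHeight] as? Int else { return nil }
    return (w, h)
}

// Walk "root/<label>/<image>" and write annotations.json into root.
func writeAnnotations(root: URL) throws {
    var entries: [[String: Any]] = []
    let fm = FileManager.default
    for label in try fm.contentsOfDirectory(atPath: root.path) {
        let folder = root.appendingPathComponent(label)
        var isDir: ObjCBool = false
        guard fm.fileExists(atPath: folder.path, isDirectory: &isDir), isDir.boolValue else { continue }
        for file in try fm.contentsOfDirectory(atPath: folder.path) {
            guard let (w, h) = pixelSize(of: folder.appendingPathComponent(file)) else { continue }
            entries.append(annotation(for: file, label: label, width: w, height: h))
        }
    }
    let data = try JSONSerialization.data(withJSONObject: entries, options: [.prettyPrinted])
    try data.write(to: root.appendingPathComponent("annotations.json"))
}
```

Worth noting: with every box covering the full frame, the detector never sees negative or background context, so an image classifier may be the better fit for data in this shape.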
Posted by brockgs. Last updated.
Post not yet marked as solved
0 Replies
299 Views
(My English is not good.) I'm using Create ML with Xcode, and my CSV file cannot be read. The same thing happens with the Core ML API. The structure of the CSV is as follows:

text,labelA,labelB,labelC,labelD
hello,2,3,4,5
good,3,4,5,6

I have tried a few things, and I don't think the structure is wrong; I think it's a settings problem. Please help me.
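A minimal sketch of loading a headed CSV like the one above with CreateML (the file path is hypothetical; substitute the real location):

```swift
import CreateML
import Foundation

// Runs in a macOS playground or command-line target that links CreateML.
let url = URL(fileURLWithPath: "/path/to/data.csv")  // hypothetical path
do {
    // CreateML reads a headed CSV straight into an MLDataTable. If this
    // throws, the usual culprits are the header row, the delimiter, or the
    // encoding (the file should be plain UTF-8 with no stray characters,
    // such as the "**" markers above, around the data).
    let table = try MLDataTable(contentsOf: url)
    print(table.columnNames)

    // A text classifier trains against exactly one label column at a time:
    let classifier = try MLTextClassifier(trainingData: table,
                                          textColumn: "text",
                                          labelColumn: "labelA")
    print(classifier)
} catch {
    print("Failed to load or train: \(error)")
}
```

Note that each MLTextClassifier takes a single label column; predicting labelA through labelD would mean training one classifier per column.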
Posted by Senki. Last updated.
Post not yet marked as solved
0 Replies
347 Views
Hello, I've tried to label keypoints in my data with labelme, VGG, Make Sense, etc., but their annotation output (JSON or CSV) does not work with Create ML. Is there another tool, or a conversion method? By the way, I also could not find any example of the keypoints JSON format. Does anyone know? Thanks!
Posted by Robers. Last updated.
Post not yet marked as solved
2 Replies
649 Views
I would like to generate and run an ML program inside an app. I am familiar with coremltools and the MIL format, but I can't find any resources on how to generate mlmodel/mlpackage files using Swift on the device. Is there a Swift equivalent of coremltools? Or is there a way to translate the MIL description of an ML program into an instance of MLModel, or something similar?
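As far as I know there is no Swift counterpart to coremltools. The closest on-device capability is compiling a .mlmodel file at runtime; a sketch, assuming the model's serialized protobuf bytes can be obtained some other way (downloaded, or produced with a third-party protobuf encoder):

```swift
import CoreML
import Foundation

// Core ML can compile a .mlmodel (protobuf) file into a runnable
// .mlmodelc bundle on device; it cannot author the protobuf for you.
func loadModel(from modelURL: URL) throws -> MLModel {
    // compileModel(at:) writes the .mlmodelc into a temporary directory;
    // move it somewhere permanent to reuse it across launches.
    let compiledURL = try MLModel.compileModel(at: modelURL)
    return try MLModel(contentsOf: compiledURL)
}
```

So the authoring step (MIL to mlmodel) still has to happen outside Swift, but loading a freshly produced model file does not require shipping it in the app bundle.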
Posted by mlajtos. Last updated.
Post not yet marked as solved
2 Replies
536 Views
Is there any way to set the number of threads used during Core ML inference? My model is relatively small, and the overhead of launching new threads is too expensive. When using the TensorFlow C API, forcing a single thread results in a significant decrease in CPU usage. (So far, Core ML with multiple threads has three times the CPU usage of TensorFlow with a single thread.) Also, has anyone compared the performance of TensorFlow in C against Core ML?
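Core ML does not expose a thread-count setting, as far as I can tell; the nearest public knob is MLModelConfiguration.computeUnits, which at least constrains where the work is scheduled. A sketch (the model class name is a placeholder):

```swift
import CoreML

// Restrict inference to the CPU; the other options are .cpuAndGPU and
// .all. This does not set a thread count, but it removes GPU/ANE
// dispatch overhead, which can matter for very small models.
let config = MLModelConfiguration()
config.computeUnits = .cpuOnly

// Hypothetical generated model class; substitute your own:
// let model = try MyModel(configuration: config)
// let output = try model.prediction(input: ...)
```

Whether this reduces CPU usage to TensorFlow-single-thread levels would have to be measured; Core ML manages its own internal parallelism.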
Posted by Brianyan. Last updated.
Post not yet marked as solved
1 Reply
587 Views
I have a MacBook Pro M1 (16 GB RAM), and I am testing Create ML's style transfer model training. When I press Train, it starts processing and then fails with the error "Could not create buffer with format BGRA -6662". During processing it allocates about 4.5 GB of RAM. I guessed it was running out of memory; however, I've closed all other programs and can see that there is plenty of free RAM when it fails. It happens even if I use just three small (500×500) images, one each for training, content, and validation. So, how do I fix this?
Posted by int_32. Last updated.
Post not yet marked as solved
1 Reply
377 Views
I am trying to build an app that uses Core ML. However, I would like the data used to build the model to grow, and the model's predictions to take that growth into account, so that the more the user uses the app, the smarter it gets at predicting what the user will select next. For example: if the user is presented with a variety of clothes and selects pants, the app presents a list of colors; say the user chooses blue, then the next time the user chooses pants, blue is ranked higher than it was the previous time. Is this possible to do? And how do I make the selection updates? Thanks in advance for any ideas or suggestions.
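Core ML's on-device update path is MLUpdateTask, which only works for models exported as updatable (for example, an updatable k-nearest-neighbors classifier, which fits a "rank what the user picked before" pattern). A sketch; the save-in-place strategy and the batch provider are placeholders:

```swift
import CoreML

// Retrain an updatable model on device with new user selections and
// persist the result so the next prediction uses it. The model at `url`
// must have been exported with update support; `batch` wraps the new
// training examples (e.g. garment -> chosen color).
func update(modelAt url: URL, with batch: MLBatchProvider,
            completion: @escaping (URL?) -> Void) throws {
    let task = try MLUpdateTask(forModelAt: url,
                                trainingData: batch,
                                configuration: nil) { context in
        // Placeholder strategy: overwrite the compiled model in place.
        try? context.model.write(to: url)
        completion(url)
    }
    task.resume()
}
```

An alternative, if the ranking logic is simple (counting past choices per category), is to skip ML entirely and keep the counts in app storage.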
Posted by iakar. Last updated.
Post not yet marked as solved
3 Replies
2.2k Views
I have trouble importing CreateMLUI in a playground. What I have tried:
– Creating a blank macOS playground
– Restarting Xcode more than 5 times
– Removing all the other boilerplate code
– Reinstalling Xcode completely
– Checking the playground's platform setting
But the result is always the same: No such module "CreateMLUI". How do I import this? Any suggestions?
Posted by maxkalik. Last updated.
Post not yet marked as solved
0 Replies
485 Views
Hello, I have an object detection model that I integrated into an app. When I put an image into the preview for the object detection file, it classifies the image correctly. However, if I put the same image into the app, it classifies it differently, with different values. I am confused as to how this is happening. Here is my code:

import UIKit
import CoreML
import Vision
import ImageIO

class SecondViewController: UIViewController, UINavigationControllerDelegate {

    @IBOutlet weak var photoImageView: UIImageView!
    @IBOutlet weak var results: UILabel!

    lazy var detectionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: EarDetection2().model)
            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processDetections(for: request, error: error)
            })
            request.imageCropAndScaleOption = .scaleFit
            return request
        } catch {
            fatalError("Failed to load Vision ML model: \(error)")
        }
    }()

    @IBAction func testPhoto(_ sender: UIButton) {
        let vc = UIImagePickerController()
        vc.sourceType = .photoLibrary
        vc.delegate = self
        present(vc, animated: true)
    }

    func updateDetections(for image: UIImage) {
        let orientation = CGImagePropertyOrientation(rawValue: UInt32(image.imageOrientation.rawValue))
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create \(CIImage.self) from \(image).")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation!)
            do {
                try handler.perform([self.detectionRequest])
            } catch {
                print("Failed to perform detection.\n\(error.localizedDescription)")
            }
        }
    }

    func processDetections(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to detect anything.\n\(error!.localizedDescription)")
                return
            }
            let detections = results as! [VNRecognizedObjectObservation]
            self.drawDetectionsOnPreview(detections: detections)
        }
    }

    func drawDetectionsOnPreview(detections: [VNRecognizedObjectObservation]) {
        guard let image = self.photoImageView?.image else { return }
        let imageSize = image.size
        let scale: CGFloat = 0
        UIGraphicsBeginImageContextWithOptions(imageSize, false, scale)
        for detection in detections {
            image.draw(at: CGPoint.zero)
            print(detection.labels.map({ "\($0.identifier) confidence: \($0.confidence)" }).joined(separator: "\n"))
            print("------------")
            results.text = detection.labels.map({ "\($0.identifier) confidence: \($0.confidence)" }).joined(separator: "\n")
            // The coordinates are normalized to the dimensions of the processed image,
            // with the origin at the image's lower-left corner.
            let boundingBox = detection.boundingBox
            let rectangle = CGRect(x: boundingBox.minX * image.size.width,
                                   y: (1 - boundingBox.minY - boundingBox.height) * image.size.height,
                                   width: boundingBox.width * image.size.width,
                                   height: boundingBox.height * image.size.height)
            UIColor(red: 0, green: 1, blue: 0, alpha: 0.4).setFill()
            UIRectFillUsingBlendMode(rectangle, CGBlendMode.normal)
        }
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        self.photoImageView?.image = newImage
    }
}

extension SecondViewController: UIImagePickerControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        self.photoImageView?.image = image
        updateDetections(for: image)
    }
}

I attached pictures of the model preview and the app preview (it may be hard to tell, but they are the same image). I have also attached pictures of my files and storyboard. Any help would be great! Thanks in advance!
Posted by SkillzApp. Last updated.