Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

65 Posts
Post marked as solved
1 Reply
404 Views
I have hundreds of thousands of image files that are cropped images, grouped appropriately into class folders, that I would like to use in Create ML to train an object detection model. I do not have .json annotation files for any of those cropped images. Q1: Am I required to create a .json annotation file for each individual image and just set the bounding box coordinates to the four corners of the image, since the full image is already the cropped object? Or is there a way to leverage what I have directly without creating all those .json files? Q2: Anyone have a handy script to help automate the creation of those files? :-) Thanks everyone.
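(Not an official answer, but a sketch of the Q1 approach.) To my knowledge Create ML's object detection format expects one annotation entry per image, with center-based pixel coordinates, so for pre-cropped images the box is simply the full frame. A quick Python sketch of a generator script - the dataset layout, file names, and the use of Pillow to read image sizes are my assumptions:

```python
import json
from pathlib import Path

def full_image_annotation(filename, label, width, height):
    # One Create ML object-detection entry whose bounding box covers the
    # whole (already cropped) image. Coordinates are the box center plus
    # its size, in pixels.
    return {
        "image": filename,
        "annotations": [{
            "label": label,
            "coordinates": {"x": width / 2, "y": height / 2,
                            "width": width, "height": height},
        }],
    }

def build_annotations(root, image_size):
    # Walk class folders (root/<label>/<image>.jpg) and build one entry
    # per image. `image_size(path)` must return (width, height), e.g.
    # lambda p: PIL.Image.open(p).size.
    entries = []
    for class_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        for img in sorted(class_dir.glob("*.jpg")):
            w, h = image_size(img)
            entries.append(full_image_annotation(img.name, class_dir.name, w, h))
    return entries

# Usage (assumed layout dataset/<class>/<image>.jpg):
#   from PIL import Image
#   entries = build_annotations("dataset", lambda p: Image.open(p).size)
#   Path("dataset/annotations.json").write_text(json.dumps(entries, indent=2))
```

Whether Create ML will train a useful detector from full-frame boxes on tight crops is a separate question, but this at least removes the hand-labeling step.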
Posted
by
Post not yet marked as solved
0 Replies
299 Views
I'm not a native English speaker... I'm using Create ML in Xcode, but my CSV file cannot be read. The same thing happens with the Core ML API. The structure of the CSV is as follows:

text,labelA,labelB,labelC,labelD
hello,2,3,4,5
good,3,4,5,6

I've tried a few things, and I don't think the structure is wrong... I think it's a settings problem. Please help me.
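Before suspecting Create ML settings, it can help to verify the CSV's shape mechanically: a header row, the same number of columns on every line, and no stray byte-order mark at the start of the file. A small stdlib checker (just a sketch; the column names are taken from the post above):

```python
import csv
import io

def check_csv(text):
    # Return (header, rows) if every row has the same number of columns
    # as the header; raise ValueError otherwise. A leading UTF-8 BOM,
    # a common cause of "unreadable" CSVs, is stripped first.
    reader = csv.reader(io.StringIO(text.lstrip("\ufeff")))
    header = next(reader)
    rows = [row for row in reader if row]
    for i, row in enumerate(rows, start=2):
        if len(row) != len(header):
            raise ValueError(f"line {i}: {len(row)} columns, expected {len(header)}")
    return header, rows

header, rows = check_csv("text,labelA,labelB,labelC,labelD\nhello,2,3,4,5\ngood,3,4,5,6\n")
```

If a file passes a check like this, the problem is more likely the training configuration (e.g. which column is selected as the label) than the file itself.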
Post marked as solved
6 Replies
1.2k Views
Just updated to Xcode 13. A Swift source file imports CreateML, but I got "Failed to build module 'CreateML'; this SDK is not supported by the compiler (the SDK is built with 'Apple Swift version 5.4 (swiftlang-1205.0.24.14 clang-1205.0.19.54)', while this compiler is 'Apple Swift version 5.5 (swiftlang-1300.0.31.1 clang-1300.0.29.1)'). Please select a toolchain which matches the SDK." Any idea what this is?
Post not yet marked as solved
0 Replies
347 Views
Hello, I've tried to label keypoints in my data with labelme, VGG, Make Sense, etc., but their output annotation JSON or CSV files did not work with Create ML. Is there any other tool or conversion method? By the way, I also could not find any sample format for a keypoints JSON file. Does anyone know one? Thanks!
Post not yet marked as solved
2 Replies
647 Views
I would like to generate and run an ML program inside an app. I'm familiar with coremltools and the MIL format; however, I can't seem to find any resources on how to generate mlmodel/mlpackage files using Swift on the device. Is there any Swift equivalent of coremltools? Or is there a way to translate a MIL description of an ML program into an instance of MLModel? Or something similar.
Post not yet marked as solved
1 Reply
376 Views
I am trying to build an app that uses Core ML. However, I would like the data that was used to build the model to grow, and the model to predict taking that growth into account. So, at the end of the day, the more the user uses the app, the smarter it gets at predicting what the user will select next. For example: if the user is presented with a variety of clothes and selects pants, the app presents a list of colors; say the user chooses blue. The next time the user chooses pants, blue is ranked higher than it was the previous time. Is this possible to do? And how do I make selection updates? Thanks in advance for any ideas or suggestions.
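Core ML does support on-device updates for some model types (see MLUpdateTask and updatable models, e.g. k-nearest-neighbor classifiers), but for "rank what the user picked before higher" you may not need retraining at all: keeping per-user selection counts and re-ranking the model's candidates is often enough. A sketch of that idea in Python (the class and method names are my own, not a Core ML API):

```python
from collections import Counter

class PreferenceRanker:
    # Re-rank a model's candidate list by how often the user picked each
    # item before - a tiny stand-in for on-device personalization.
    def __init__(self):
        self.picks = Counter()

    def record(self, item):
        # Call this whenever the user actually selects an item.
        self.picks[item] += 1

    def rank(self, candidates):
        # Stable sort: most-picked first, original (model) order as tiebreak.
        return sorted(candidates, key=lambda c: -self.picks[c])

r = PreferenceRanker()
r.record("blue")
print(r.rank(["red", "blue", "green"]))  # → ['blue', 'red', 'green']
```

The counts can be persisted in UserDefaults or a file, so the "learning" survives app restarts without ever touching the model itself.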
Post not yet marked as solved
2 Replies
535 Views
Is there any way to set the number of threads used during Core ML inference? My model is relatively small, and the overhead of launching new threads is too expensive. When using the TensorFlow C API, forcing a single thread results in a significant decrease in CPU usage. (So far, Core ML with multiple threads has 3 times the CPU usage compared to TensorFlow with a single thread.) Also, I'm wondering if anyone has compared the performance between TensorFlow in C and Core ML?
Post not yet marked as solved
0 Replies
485 Views
Hello, I have an object detection model that I integrated into an app. When I put an image on the preview for the object detection file, it classifies the image correctly. However, if I put the same image into the app, it classifies it differently, with different values. I am confused as to how this is happening. Here is my code:

import UIKit
import CoreML
import Vision
import ImageIO

class SecondViewController: UIViewController, UINavigationControllerDelegate {

    @IBOutlet weak var photoImageView: UIImageView!
    @IBOutlet weak var results: UILabel!

    lazy var detectionRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: EarDetection2().model)
            let request = VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processDetections(for: request, error: error)
            })
            request.imageCropAndScaleOption = .scaleFit
            return request
        } catch {
            fatalError("Failed to load Vision ML model: \(error)")
        }
    }()

    @IBAction func testPhoto(_ sender: UIButton) {
        let vc = UIImagePickerController()
        vc.sourceType = .photoLibrary
        vc.delegate = self
        present(vc, animated: true)
    }

    func updateDetections(for image: UIImage) {
        let orientation = CGImagePropertyOrientation(rawValue: UInt32(image.imageOrientation.rawValue))
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create \(CIImage.self) from \(image).")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation!)
            do {
                try handler.perform([self.detectionRequest])
            } catch {
                print("Failed to perform detection.\n\(error.localizedDescription)")
            }
        }
    }

    func processDetections(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to detect anything.\n\(error!.localizedDescription)")
                return
            }
            let detections = results as! [VNRecognizedObjectObservation]
            self.drawDetectionsOnPreview(detections: detections)
        }
    }

    func drawDetectionsOnPreview(detections: [VNRecognizedObjectObservation]) {
        guard let image = self.photoImageView?.image else { return }
        let imageSize = image.size
        let scale: CGFloat = 0
        UIGraphicsBeginImageContextWithOptions(imageSize, false, scale)
        for detection in detections {
            image.draw(at: CGPoint.zero)
            print(detection.labels.map({ "\($0.identifier) confidence: \($0.confidence)" }).joined(separator: "\n"))
            print("------------")
            results.text = detection.labels.map({ "\($0.identifier) confidence: \($0.confidence)" }).joined(separator: "\n")
            // The coordinates are normalized to the dimensions of the processed image,
            // with the origin at the image's lower-left corner.
            let boundingBox = detection.boundingBox
            let rectangle = CGRect(x: boundingBox.minX * image.size.width,
                                   y: (1 - boundingBox.minY - boundingBox.height) * image.size.height,
                                   width: boundingBox.width * image.size.width,
                                   height: boundingBox.height * image.size.height)
            UIColor(red: 0, green: 1, blue: 0, alpha: 0.4).setFill()
            UIRectFillUsingBlendMode(rectangle, CGBlendMode.normal)
        }
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        self.photoImageView?.image = newImage
    }
}

extension SecondViewController: UIImagePickerControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)
        guard let image = info[.originalImage] as? UIImage else { return }
        self.photoImageView?.image = image
        updateDetections(for: image)
    }
}

I attached pictures of the model preview and the app preview (it may be hard to tell, but they are the same image). I have also attached pictures of my files and storyboard. Any help would be great! Thanks in advance!
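When the Create ML preview and in-app results disagree, two of the usual suspects are the imageCropAndScaleOption (the preview may crop or scale differently than .scaleFit) and the coordinate conversion, since Vision's boundingBox is normalized with a lower-left origin while UIKit draws from the top-left. The conversion used in drawDetectionsOnPreview can be isolated and sanity-checked on its own; here is the same math as a small Python sketch:

```python
def pixel_rect(box, image_w, image_h):
    # Convert a normalized lower-left-origin box (x, y, w, h), as Vision
    # reports it, into a top-left-origin pixel rect, as UIKit drawing expects.
    x, y, w, h = box
    return (x * image_w, (1 - y - h) * image_h, w * image_w, h * image_h)

# Top-left quarter of a 100x200 image: in lower-left-origin coordinates
# that quarter's origin is (0, 0.5).
print(pixel_rect((0.0, 0.5, 0.5, 0.5), 100, 200))  # → (0.0, 0.0, 50.0, 100.0)
```

If a hand-checked box round-trips correctly through this math, attention shifts to how the image was scaled before it reached the model.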
Post marked as solved
1 Reply
677 Views
I am trying to make an Image Classifier, but I keep getting a warning: "'init()' is deprecated: Use init(configuration:) instead and handle errors appropriately." I was wondering if it matters, because the app gets built and the classifier works. Just wondering what the warning means. Thanks in advance!
Post not yet marked as solved
0 Replies
487 Views
Create a sample project in Create ML and choose Activity Classifier. When lowering the sample rate, the preview timeline gets shorter instead of getting longer. Furthermore, it seems the entire preview timeline breaks (can't scroll) if the sample rate is anything other than 50. Try it out: train a model with sample rate 50, then try training it with sample rate 10.
Post not yet marked as solved
1 Reply
551 Views
DataFrame(contentsOfJSONFile:) (and its MLDataTable equivalent) assumes that the rows to be imported are at the root level of the JSON structure. However, I'm downloading a fairly large dataset that has "header" information, such as date created and validity period, with the targeted array at a subsidiary level. The DataFrame initialiser has JSONReadingOptions, but these don't apply to this situation (i.e. there's nothing that helps). It seems I'm faced with the options of 1) stripping the extraneous data from the JSON file, to leave just the array, or 2) decoding the JSON file into a bespoke struct and then converting its array into a DataFrame - which removes a lot of the benefit of using DataFrame in the first place. Any thoughts? Cheers, Michaela
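For option 1, the stripping doesn't have to be done by hand: parse the full document once, pull out the nested array, and re-serialize just that array to a temporary file that the DataFrame initialiser will accept. A Python sketch of the reshaping step (the key names here are hypothetical):

```python
import json

def extract_rows(document, path):
    # Drill into a nested JSON document and return the row array found
    # under the given key path, e.g. path=["payload", "records"].
    node = document
    for key in path:
        node = node[key]
    if not isinstance(node, list):
        raise TypeError("expected a list at " + "/".join(path))
    return node

doc = {
    "created": "2021-08-01",          # header info the importer can't skip
    "payload": {"records": [{"a": 1}, {"a": 2}]},
}
rows = extract_rows(doc, ["payload", "records"])
payload = json.dumps(rows)  # write this to a temp file, then load that file
```

The same two-step shape (deserialize, extract, re-serialize the array) translates directly to Swift with JSONSerialization before handing the temp file to DataFrame.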
Post not yet marked as solved
0 Replies
527 Views
Hi, I'm trying the activity classifier and want to show the results with an app on an iPhone SE. I'm following the example at https://apple.github.io/turicreate/docs/userguide/activity_classifier/export_coreml.html. When calling the prediction function performModelPrediction(), EXC_BAD_ACCESS (code=1, address=0x0) is thrown at this line:

    let modelPrediction = try! activityClassificationModel.prediction(   // <- the error is shown here

I can track it down to this part of the generated mlmodel source:

    func prediction(input: MyActivityClassifier2Input, options: MLPredictionOptions) throws -> MyActivityClassifier2Output {
        let outFeatures = try model.prediction(from: input, options: options)
        return MyActivityClassifier2Output(features: outFeatures)
    }

I created the model with Create ML, working with Xcode 13 beta. What am I doing wrong? Do you have any hints? Is there a better example for the activity classifier? Don't hesitate to ask for further details. -Hans

Here is my code:

//
//  ViewController.swift
//  motionstor4yboard
//
//  Created by Hans Regler on 19.07.21.
//

import Foundation
import UIKit
import CoreML
import CoreMotion

class ViewController: UIViewController {

    override func viewDidLoad() {
        debugPrint("info: start viewDidLoad ... ")
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        // Connect data:
        self.startDeviceMotion()
    }

    // Define some ML model constants for the recurrent network
    struct ModelConstants {
        static let numOfFeatures = 6
        // Must be the same value you used while training
        static let predictionWindowSize = 100
        static let sensorsUpdateInterval = 1.0 / 10.0
        static let stateInLength = 400
    }

    // Initialize the model, layers, and sensor data arrays
    let activityClassificationModel = MyActivityClassifier2()
    var currentIndexInPredictionWindow = 0
    let accelDataX = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    let accelDataY = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    let accelDataZ = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    let gyroDataX = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    let gyroDataY = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    let gyroDataZ = try! MLMultiArray(shape: [ModelConstants.predictionWindowSize] as [NSNumber], dataType: MLMultiArrayDataType.double)
    var stateOutput = try! MLMultiArray(shape: [ModelConstants.stateInLength as NSNumber], dataType: MLMultiArrayDataType.double)

    // Initialize the Core Motion manager
    let motionManager = CMMotionManager()

    func startDeviceMotion() {
        // guard motionManager.isDeviceMotionAvailable else {
        guard motionManager.isAccelerometerAvailable, motionManager.isGyroAvailable else {
            debugPrint("Core Motion Data Unavailable!")
            return
        }
        motionManager.accelerometerUpdateInterval = TimeInterval(ModelConstants.sensorsUpdateInterval)
        motionManager.gyroUpdateInterval = TimeInterval(ModelConstants.sensorsUpdateInterval)
        motionManager.startAccelerometerUpdates(to: .main) { accelerometerData, error in
            guard let accelerometerData = accelerometerData else {
                print("Error: no accelerometer data")
                return
            }
            // Add the current data sample to the data array
            self.addAccelSampleToDataArray(accelSample: accelerometerData)
        }
    }

    func addAccelSampleToDataArray(accelSample: CMAccelerometerData) {
        // Add the current accelerometer reading to the data array
        accelDataX[[currentIndexInPredictionWindow] as [NSNumber]] = accelSample.acceleration.x as NSNumber
        accelDataY[[currentIndexInPredictionWindow] as [NSNumber]] = accelSample.acceleration.y as NSNumber
        accelDataZ[[currentIndexInPredictionWindow] as [NSNumber]] = accelSample.acceleration.z as NSNumber
        // Update the index in the prediction window data array
        currentIndexInPredictionWindow += 1
        // If the data array is full, call the prediction method to get a new model prediction.
        // We assume here for simplicity that the gyro data was added to the data arrays as well.
        if currentIndexInPredictionWindow == ModelConstants.predictionWindowSize {
            if let predictedActivity = performModelPrediction() {
                // Use the predicted activity here
                // ...
                // Start a new prediction window
                currentIndexInPredictionWindow = 0
            }
        }
    }

    func performModelPrediction() -> String? {
        // Perform the model prediction
        let modelPrediction = try! activityClassificationModel.prediction(
            acceleration_x: accelDataX,
            acceleration_y: accelDataY,
            acceleration_z: accelDataZ,
            rotation_x: gyroDataX,
            rotation_y: gyroDataY,
            rotation_z: gyroDataZ,
            stateIn: stateOutput)
        // Update the state vector
        stateOutput = modelPrediction.stateOut
        // Return the predicted activity - the activity with the highest probability
        return modelPrediction.label
    }
}
Post not yet marked as solved
0 Replies
795 Views
Most examples, including those in the documentation, of using Core ML with iOS involve creating the model with Xcode on a Mac and then including the Xcode-generated MLFeatureProvider class in the iOS app and (re)compiling the app. However, it's also possible to download an uncompiled model directly into an iOS app and then compile it (in a background task) - but then there's no MLFeatureProvider class. The same applies when using Create ML in an iOS app (iOS 15 beta): there's no automatically generated MLFeatureProvider. So how do you get one? I've seen a few queries here and elsewhere related to this problem, but couldn't find any clear examples of a solution. So, after some experimentation, here's my take on how to go about it.

Firstly, if you don't know what features the model uses, print the model description, e.g. print("Model: ", mlModel!.modelDescription), which gives:

Model:
  inputs: (
    "course : String",
    "lapDistance : Double",
    "cumTime : Double",
    "distance : Double",
    "lapNumber : Double",
    "cumDistance : Double",
    "lapTime : Double"
  )
  outputs: (
    "duration : Double"
  )
  predictedFeatureName: duration
  ............

A prediction is created by

    guard let durationOutput = try? mlModel!.prediction(from: runFeatures) ......

where runFeatures is an instance of a class that provides a set of feature names and the value of each feature to be used in making a prediction. So, for my model that predicts run duration from course, lap number, lap time etc., the RunFeatures class is:

class RunFeatures: MLFeatureProvider {
    var featureNames: Set<String> = ["course", "distance", "lapNumber", "lapDistance", "cumDistance", "lapTime", "cumTime", "duration"]
    var course: String = "n/a"
    var distance: Double = -0.0
    var lapNumber: Double = -0.0
    var lapDistance: Double = -0.0
    var cumDistance: Double = -0.0
    var lapTime: Double = -0.0
    var cumTime: Double = -0.0

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "distance":
            return MLFeatureValue(double: distance)
        case "lapNumber":
            return MLFeatureValue(double: lapNumber)
        case "lapDistance":
            return MLFeatureValue(double: lapDistance)
        case "cumDistance":
            return MLFeatureValue(double: cumDistance)
        case "lapTime":
            return MLFeatureValue(double: lapTime)
        case "cumTime":
            return MLFeatureValue(double: cumTime)
        case "course":
            return MLFeatureValue(string: course)
        default:
            return MLFeatureValue(double: -0.0)
        }
    }
}

Then in my data model, prior to prediction, I create an instance of RunFeatures with the input values on which I want to base the prediction:

var runFeatures = RunFeatures()
runFeatures.distance = 3566.0
runFeatures.lapNumber = 1.0
runFeatures.lapDistance = 1001.0
runFeatures.lapTime = 468.0
runFeatures.cumTime = 468.0
runFeatures.cumDistance = 1001.0
runFeatures.course = "Wishing Well Loop"

NOTE: there's no need to provide the output feature ("duration") here, nor in the featureValue method above, but it is required in featureNames. Then get the prediction with:

guard let durationOutput = try? mlModel!.prediction(from: runFeatures)

Regards, Michaela
Post not yet marked as solved
1 Reply
587 Views
I have a MacBook Pro M1 (16 GB RAM) and am testing Create ML's style transfer model training. When I press "Train", it starts processing and fails with the error "Could not create buffer with format BGRA -6662". During the processing it allocates about 4.5 GB of RAM. I guessed it runs out of memory; however, I've closed all other programs and I can see that there's a lot of free RAM when it fails. It happens even if I use just 3 small (500x500) images, one each for training, content, and validation. So, how do I fix it?
Post not yet marked as solved
0 Replies
624 Views
I am trying to train an object detection model using transfer learning with a small dataset (roughly 650 images and two classes) using Create ML v2.0 (53.2.2) with "prefer external GPU" checked. I am using a 2018 Mac mini (3.2 GHz i7, 16 GB of RAM) and an AMD Radeon Pro 580 eGPU. The problem is that I can only do about 3500 iterations before I run out of memory and need to pause the training. When I resume training, my loss increases again and it takes a while to get back down to where it was before I paused. So I am wondering if there is a better way to set up the hardware, or any other suggestions, so I can get through all of the iterations without having to pause. I don't recall having this issue with Create ML v1.0, so any suggestions would be appreciated.
Post not yet marked as solved
18 Replies
2.3k Views
Hello, when I used Xcode to generate the model encryption key, an error was reported: "Failed to Generate Encryption Key. Sign in with your Apple ID in the Apple ID pane in System Preferences and retry." But I have logged in with my Apple ID in System Preferences, and this error still occurs. I reinstalled Xcode and re-logged in to my Apple ID, but the error persists. Xcode Version 12.4, macOS Catalina 10.15.7. Thanks
Post not yet marked as solved
3 Replies
965 Views
Hi, I'm new to Create ML. I was trying to create a simple sentiment analysis model. My input file is cleaned JSON data from Apple reviews with two fields, "label" (pos/neg) and "text" (the content with sentiment). Training runs successfully. However, when I try to perform a test with similar data with the same fields in either JSON or CSV format, I continue to get the following error. Testing Error: Expected directory at URL at "filename.json" I've tried using different data sets but always receive the same error. Does anyone have any idea what I'm doing wrong?
Post marked as solved
5 Replies
1.5k Views
Hello everybody, I am trying to run inference on a Core ML model I created using Create ML. I am following the sample code provided by Apple on the Core ML documentation page, and every time I try to classify an image I get this error: "Could not create Espresso context". Has this ever happened to anyone? How did you solve it? Here is my code:

import Foundation
import Vision
import UIKit
import ImageIO

final class ButterflyClassification {

    var classificationResult: Result?

    lazy var classificationRequest: VNCoreMLRequest = {
        do {
            let model = try VNCoreMLModel(for: ButterfliesModel_1(configuration: MLModelConfiguration()).model)
            return VNCoreMLRequest(model: model, completionHandler: { [weak self] request, error in
                self?.processClassification(for: request, error: error)
            })
        } catch {
            fatalError("Failed to load model.")
        }
    }()

    func processClassification(for request: VNRequest, error: Error?) {
        DispatchQueue.main.async {
            guard let results = request.results else {
                print("Unable to classify image.")
                return
            }
            let classifications = results as! [VNClassificationObservation]
            if classifications.isEmpty {
                print("No classification was provided.")
                return
            } else {
                let firstClassification = classifications[0]
                self.classificationResult = Result(speciesName: firstClassification.identifier, confidence: Double(firstClassification.confidence))
            }
        }
    }

    func classifyButterfly(image: UIImage) -> Result? {
        guard let ciImage = CIImage(image: image) else {
            fatalError("Unable to create ciImage")
        }
        DispatchQueue.global(qos: .userInitiated).async {
            let handler = VNImageRequestHandler(ciImage: ciImage, options: [:])
            do {
                try handler.perform([self.classificationRequest])
            } catch {
                print("Failed to perform classification.\n\(error.localizedDescription)")
            }
        }
        return classificationResult
    }
}

Thank you for your help!