Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

65 Posts
Post not yet marked as solved
1 Reply
384 Views
I need to build a model to add to my app and tried following the Apple docs here. No luck, because I get an error that is discussed in this thread on the forum. I'm still not clear on why the error occurs and can't resolve it. Is Create ML inside Playgrounds still supported at all? I also tried the Create ML app that you can access through the developer tools, but it just crashes my Mac (a 2017 MBP: is it just too much of a brick to use for ML at this point? I'd think not, because I've recently built and trained relatively simple models using TensorFlow + Python on this machine, and the classifier I'm trying to make now is really simple and doesn't have a huge dataset).
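If Playgrounds itself is the bottleneck, one workaround is to drive Create ML's programmatic API from a plain macOS command-line target (or `swift script.swift` in Terminal), sidestepping both Playgrounds and the Create ML app. A sketch with placeholder paths:

```swift
import CreateML
import Foundation

// Placeholder path: a folder with one subfolder per class label.
let trainingDir = URL(fileURLWithPath: "/path/to/TrainingData")

// Train a simple image classifier outside Playgrounds entirely.
let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: trainingDir),
    parameters: MLImageClassifier.ModelParameters(maxIterations: 20)
)

print(classifier.trainingMetrics)
print(classifier.validationMetrics)

// Export the trained model for use in an app.
try classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
```

This keeps the training loop in plain Swift, so a crash would at least produce a console error rather than taking down the Playgrounds live view.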
Post not yet marked as solved
0 Replies
239 Views
I am using the tabular regression method of Create ML. I see where it prints a couple of metrics, such as root mean square error on the validation data, but I don't see any way to export the raw fitted numbers (e.g. training predictions), the validation numbers (e.g. validation predictions), or the out-of-sample "testing" numbers (from the testing data set). Is this possible in Create ML directly? This is necessary because you sometimes want to plot actual versus predicted and compute other metrics for regressions.
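If I'm reading the MLDataTable API right, the programmatic route can get you most of the way there: regressors expose `predictions(from:)`, and the resulting column can be joined back onto a table and written out as CSV. A sketch with placeholder paths and a placeholder target column name:

```swift
import CreateML
import Foundation

// Placeholder path and column name — substitute your own.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "/path/to/data.csv"))
let (training, testing) = data.randomSplit(by: 0.8, seed: 5)

let regressor = try MLLinearRegressor(trainingData: training, targetColumn: "target")

// predictions(from:) returns the raw predicted values as a column;
// attach it to the test table and export for plotting elsewhere.
var results = testing
results.addColumn(try regressor.predictions(from: testing), named: "predicted")
try results.writeCSV(to: URL(fileURLWithPath: "/path/to/predictions.csv"))
```

The same `predictions(from:)` call against the training table gives you the fitted values for an actual-versus-predicted plot.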
Post not yet marked as solved
0 Replies
290 Views
For a Create ML activity classifier, I'm classifying "playing" tennis (the points or rallies), with a second class "not playing" as the negative class. I'm not sure what to specify for the action duration parameter given how variable a tennis point or rally can be, but I went with 10 seconds since that seems like the average duration for both the "playing" and "not playing" labels. I'm wondering whether this parameter affects performance, both speed of video processing and accuracy. Would the Vision framework return more results with smaller action durations?
Post marked as solved
3 Replies
432 Views
Hi, I have been following WWDC21 "dynamic training on iOS". I have been able to get training working, with the iterations etc. printed to the console as training progresses. However, I am unable to retrieve the checkpoints or the result/model once training has completed (or while it is in progress); nothing in the callback fires. If I try to create a model from the sessionDirectory, it returns nil (even though training has clearly completed). Can someone help or provide pointers on how to access the results/checkpoints so that I can make an MLModel and use it?

var subscriptions = [AnyCancellable]()
let job = try! MLStyleTransfer.train(trainingData: datasource,
                                     parameters: trainingParameters,
                                     sessionParameters: sessionParameters)
job.result.sink { result in
    print("result ", result)
} receiveValue: { model in
    try? model.write(to: sessionDirectory)
    let compiledURL = try? MLModel.compileModel(at: sessionDirectory)
    let mlModel = try? MLModel(contentsOf: compiledURL!)
}
.store(in: &subscriptions)

This also does not work:

job.checkpoints.sink { checkpoint in
    // Process checkpoint
    let model = MLStyleTransfer(trainingData: checkpoint)
}
.store(in: &subscriptions)

This is the printout in the console:

Using CPU to create model
+--------------+--------------+--------------+--------------+--------------+
| Iteration    | Total Loss   | Style Loss   | Content Loss | Elapsed Time |
+--------------+--------------+--------------+--------------+--------------+
| 1            | 64.9218      | 54.9499      | 9.97187      | 3.92s        |
2022-02-20 15:14:37.056251+0000 DynamicStyle[81737:9175431] [ServicesDaemonManager] interruptionHandler is called. -[FontServicesDaemonManager connection]_block_invoke
| 2            | 61.7283      | 24.6832      | 8.30343      | 9.87s        |
| 3            | 59.5098      | 27.7834      | 11.7603      | 16.19s       |
| 4            | 56.2737      | 16.163       | 10.985       | 22.35s       |
| 5            | 53.0747      | 12.2062      | 12.0783      | 28.08s       |
+--------------+--------------+--------------+--------------+--------------+

Any help would be appreciated on how to retrieve models. Thanks
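Two things may be worth double-checking here (guesses, not confirmed fixes): MLModel.compileModel(at:) expects the path of a .mlmodel file, whereas the snippet passes the session directory itself, and the subscriptions collection must outlive the training run (for example as a stored property, not a local) or the sink closures are cancelled before they ever fire. A sketch of that variant, reusing the job and sessionDirectory from the question:

```swift
import CreateML
import CoreML
import Combine
import Foundation

// Assumes `job`, `sessionDirectory`, and `subscriptions` exist as in the
// post, with `subscriptions` kept alive for the duration of training.
let modelURL = sessionDirectory.appendingPathComponent("StyleTransfer.mlmodel")

job.result.sink { completion in
    print("completion:", completion)   // surfaces training errors too
} receiveValue: { model in
    do {
        try model.write(to: modelURL)                     // a file URL, not the directory
        let compiledURL = try MLModel.compileModel(at: modelURL)
        let mlModel = try MLModel(contentsOf: compiledURL)
        print("loaded:", mlModel.modelDescription)
    } catch {
        print("failed:", error)        // `do/catch` instead of `try?` so errors aren't swallowed
    }
}
.store(in: &subscriptions)
```

Replacing the `try?` chain with `do/catch` also reveals which step fails, rather than silently producing nil.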
Post not yet marked as solved
0 Replies
291 Views
My activity classifier is used in tennis sessions, where there are necessarily multiple people on the court. There is also a decent chance other courts' players will be in the shot, depending on the angle and lens. For my training data, would it be best to crop out adjacent courts?
Post not yet marked as solved
0 Replies
245 Views
Hi all, I ordered a Mac Pro (8-core & 580X) to use Create ML. The start of the training is flawless, but as soon as I pause the training and then want to resume it, I get the messages "Unable to resume training" and "Archive does not contain an DataTable". The same problem also occurred with a 16" M1 Max... I'm frustrated. What am I doing wrong? Is there a problem with Create ML? Thanks in advance for the support.
Post marked as solved
1 Reply
304 Views
Hello. I created a Core ML app for iOS with my machine learning classification model. However, I'm trying to make a macOS version of the app, but the analysis relies heavily on UIKit, which is not available on macOS. How should I apply the image analysis to image files on macOS?
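Since Vision accepts CGImages and file URLs directly, the macOS analysis path can skip UIImage entirely; NSImage is only needed for display. A minimal sketch (model loading omitted; `model` is whatever VNCoreMLModel already wraps the classifier):

```swift
import Foundation
import CoreML
import Vision

// Classify an image file on macOS without touching UIKit: hand Vision
// the file URL and read back the classification observations.
func classify(fileURL: URL, model: VNCoreMLModel) throws -> [VNClassificationObservation] {
    let request = VNCoreMLRequest(model: model)
    let handler = VNImageRequestHandler(url: fileURL, options: [:])
    try handler.perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}
```

The same function works on iOS, so the analysis layer can be shared between both targets and only the UI layer stays platform-specific.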
Post not yet marked as solved
1 Reply
193 Views
Hello, is there a possibility to use the action classifier in Create ML to create a fitness app that can recognize the action AND give correction feedback to the user using the recognized keypoints? Maybe take 3 keypoints as an angle and give feedback? How can I access those joints in Xcode?
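The raw joints are available at prediction time through Vision's body-pose request, separately from the action classifier. A sketch: a pure helper that computes the angle at a middle joint, plus a function pulling three arm joints from a VNHumanBodyPoseObservation (joint names per Apple's JointName enum; the elbow example is illustrative):

```swift
import Foundation
import CoreGraphics
import Vision

// Pure geometry: the angle in degrees at a middle joint given three points.
func angle(at joint: CGPoint, from a: CGPoint, to b: CGPoint) -> CGFloat {
    let v1 = CGVector(dx: a.x - joint.x, dy: a.y - joint.y)
    let v2 = CGVector(dx: b.x - joint.x, dy: b.y - joint.y)
    let dot = v1.dx * v2.dx + v1.dy * v2.dy
    let mags = hypot(v1.dx, v1.dy) * hypot(v2.dx, v2.dy)
    return acos(max(-1, min(1, dot / mags))) * 180 / .pi
}

// Feed it from a body-pose observation, obtained by running a
// VNDetectHumanBodyPoseRequest alongside the action classifier.
func rightElbowAngle(in observation: VNHumanBodyPoseObservation) throws -> CGFloat {
    let shoulder = try observation.recognizedPoint(.rightShoulder)
    let elbow = try observation.recognizedPoint(.rightElbow)
    let wrist = try observation.recognizedPoint(.rightWrist)
    return angle(at: elbow.location, from: shoulder.location, to: wrist.location)
}
```

Comparing that angle against a target range per exercise is one simple way to turn the keypoints into correction feedback.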
Post not yet marked as solved
0 Replies
204 Views
Following the guide found here, I've been able to preview image classification in Create ML and Xcode. However, when I swap out the MobileNet model for my own and try running it as an app, images are not classified accurately. When I check the same images using my model in its Xcode preview tab, the guesses are accurate. I've tried changing this line to the different available options, but it doesn't seem to help: imageClassificationRequest.imageCropAndScaleOption = .centerCrop Does anyone know why a model would work well in preview but not while running in the app? Thanks in advance.
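A preview-versus-app mismatch is often a preprocessing difference, so it can help to A/B the crop/scale options systematically: run the same CGImage through each option and compare the results against the preview tab. A sketch:

```swift
import Vision
import CoreML

// Run one image through the classifier under a specific crop/scale option,
// so each option's output can be compared against the Xcode preview tab.
func classifications(of image: CGImage, model: VNCoreMLModel,
                     option: VNImageCropAndScaleOption) throws -> [VNClassificationObservation] {
    let request = VNCoreMLRequest(model: model)
    request.imageCropAndScaleOption = option
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try handler.perform([request])
    return request.results as? [VNClassificationObservation] ?? []
}

// for option in [VNImageCropAndScaleOption.centerCrop, .scaleFit, .scaleFill] {
//     print(option.rawValue, try classifications(of: image, model: model, option: option).first as Any)
// }
```

If none of the options reproduce the preview's accuracy, the difference may instead be in how the app captures or orients the image before it reaches Vision.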
Post not yet marked as solved
1 Reply
229 Views
Hello everyone, I am working on a simple ML project. I trained a custom model on classifying the images of US dollar bill notes. Everything seems good to me and I don't know why the classification label isn't being updated with any value. Files: https://codeshare.io/OdXzMW
Post not yet marked as solved
3 Replies
284 Views
I'm trying to make an app that will suggest a color based on a keyword the user inputs. I store the prediction from the model in a [String: Double] array and then compute the output color. However, I'm stuck on two strange errors. Here's my code:

extension Array where Element == [String: Double] {
    func average() -> RGBList {
        var returnValue = (r: 0, g: 0, b: 0)
        var r = 0
        var g = 0
        var b = 0
        for key in self {
            r = Int(key[0].split(",")[0]) * key[1]
            ...
        }
        ...
    }
}
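For what it's worth, two likely compile errors in the snippet above are that a [String: Double] dictionary can't be subscripted with 0, and that split needs its `separator:` label (and yields Substrings, not Strings). A hedged rework, assuming the keys look like "255,128,0" and using a plain tuple in place of the post's RGBList type:

```swift
// Weighted average of RGB colors keyed as "r,g,b" strings, weighted by
// each key's prediction score.
typealias RGB = (r: Int, g: Int, b: Int)

extension Array where Element == [String: Double] {
    func average() -> RGB {
        var r = 0.0, g = 0.0, b = 0.0, total = 0.0
        for prediction in self {
            // Iterate key/value pairs instead of subscripting with an Int.
            for (key, weight) in prediction {
                let parts = key.split(separator: ",").compactMap { Double(String($0)) }
                guard parts.count == 3 else { continue }  // skip malformed keys
                r += parts[0] * weight
                g += parts[1] * weight
                b += parts[2] * weight
                total += weight
            }
        }
        guard total > 0 else { return (0, 0, 0) }
        return (Int(r / total), Int(g / total), Int(b / total))
    }
}
```

Doing the arithmetic in Double and converting to Int once at the end also avoids the Int-times-Double type mismatch in the original.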
Post marked as solved
2 Replies
275 Views
Hi everyone, I am not exactly new to Swift anymore, but I still don't have many skills with it. I am facing some difficult issues with this tutorial from the Apple docs: https://developer.apple.com/documentation/createml/creating_an_image_classifier_model/#overview

I have already successfully created several mlmodels as described in the tutorial. However, when I come to the step of integrating one into Xcode, I face the issue that I don't get any predictions from my mlmodel. I followed all the steps in this tutorial and downloaded the example code. The tutorial says "just change this model line with your model and it works", but indeed it doesn't. By "doesn't work" I mean that I don't get any predictions back when I use this example. I can start the application and test it in the iPhone simulator, but the only output I get is "no predictions, please check console log". I searched through the code and found that this is the error message produced in MainViewController.swift (99:103):

private func imagePredictionHandler(_ predictions: [ImagePredictor.Prediction]?) {
    guard let predictions = predictions else {
        updatePredictionLabel("No predictions. (Check console log.)")
        return
    }

As I understand the code, it returns this message when no predictions come back from the mlmodel. If I use an mlmodel provided by Apple (such as MobileNetV2), the example code works every time (predictions come back). That's why I'm pretty sure the issue has to be somewhere on my side, but I can't figure it out.

The mlmodel is trained with images from the fruits360 dataset, plus some images of charts I added myself. To balance the classes I took 70 pictures of each. If I try this model in the Create ML preview, it is able to predict my validation pictures, but when I integrate the model into Xcode it can't give me predictions for the exact same images.

Does anyone know how to get this issue resolved? I'm using the latest Xcode version. Thanks in advance.
Post not yet marked as solved
2 Replies
230 Views
What's New with Create ML discusses repetition counting and says to see the sample code and the article linked to this session. There is no mention of repetition counting in any documentation; it is not linked in the article related to the session, nor is it anywhere to be found in the WWDC22 sample code. Rumor was that the sample code was called "CountMyActions", but it is nowhere to be found. Please link the sample code to the reference, and include it in the list of WWDC sample code. -- Glen
Post not yet marked as solved
1 Reply
185 Views
Hello, I'm pretty new to Create ML and machine learning as a whole, but surprise surprise, I'm trying to train a model. I have a bunch of annotated images exported from IBM Cloud Annotations for use with Create ML, and I have no problem using them as training data. Unfortunately, I have no idea where to apply augmentation settings. I'm aware that they're available in the Playground implementation of Create ML, but I haven't tried it, nor do I really want to. In the Create ML app, though, I see no setting where I can enable augmentation, nor anywhere I can directly modify the code to enable it that way. Again, this is an object detection project. If I'm missing something, help would be greatly appreciated. Thanks!
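For the image classifier templates, the programmatic Create ML API does expose augmentation options directly; I'm not aware of an equivalent switch for the object detection template, so whether the same pattern applies there is version-dependent. A sketch for the classifier case (placeholder paths, and note that the exact parameter label for the augmentation options has shifted across SDK releases, so check the header of the SDK you're building against):

```swift
import CreateML
import Foundation

// Augmentations are requested through ModelParameters rather than a UI
// setting; the option set below covers the common transforms.
let parameters = MLImageClassifier.ModelParameters(
    augmentation: [.crop, .rotation, .blur, .noise, .exposure, .flip]
)

let classifier = try MLImageClassifier(
    trainingData: .labeledDirectories(at: URL(fileURLWithPath: "/path/to/TrainingData")),
    parameters: parameters
)
```

If the object detection trainer turns out not to take augmentation options at all, augmenting the images (and their annotations) offline before training is the fallback.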
Post not yet marked as solved
0 Replies
142 Views
I've been using VNRecognizeTextRequest, VNImageRequestHandler, VNRecognizedTextObservation, and VNRecognizedText successfully (in Objective-C) to identify about 25% of bright LED/LCD characters depicting a number string (arranged in several date formats) on a scanned photograph. I first crop to the constant area where the characters are located, and apply some Core Image filters to render the characters in black and white and remove as much background clutter as possible. Only when the characters are nearly perfect, neither over- nor under-exposed, do I get a return string with all the characters. As an example, an LED image of 93 5 22 will often return 93 S 22, and a 97 4 14 may return 97 Y 14. I can easily substitute the commonly confused letters with numbers, but I would prefer to raise the text recognition to something more than 25% (it will probably never be greater than 50%-75%). So I thought I could use Create ML to create a model (based on the text recognition model Apple has already created), with training folders labeled with each numeric LED/LCD character (1, 2, 3, ...), blurred, with noise, over/under-exposed, etc., to improve the recognition. Can I use Create ML to do this? Do I use Image Object Detection, or is it Text Classification, to return a text string like "93 5 22" that I can manipulate later with regular expressions?
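Two cheaper levers may be worth trying before training anything (assumptions, not tested on this data): setting usesLanguageCorrection to false on VNRecognizeTextRequest, since language correction is what tends to "correct" digit strings toward letters, and mapping the commonly confused letters back to digits in post-processing. The substitution half is plain Swift; the mapping below is a guess seeded from the examples in the post, to be extended from real data:

```swift
import Foundation

// Letters the recognizer commonly substitutes for seven-segment digits.
// This table is illustrative; build yours from observed confusions.
let ledSubstitutions: [Character: Character] = [
    "S": "5", "Y": "4", "O": "0", "I": "1", "B": "8", "Z": "2"
]

// Map each confused letter back to its digit, leaving everything else alone.
func normalizeLEDString(_ raw: String) -> String {
    String(raw.map { ledSubstitutions[$0] ?? $0 })
}
```

As for the Create ML question: Create ML can't fine-tune Apple's built-in text recognizer, so the closest route would be an image classifier over cropped single-character glyphs (one folder per digit, with blurred/noisy/exposure-varied examples) run after a segmentation step, rather than object detection or text classification over whole strings.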