Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

59 Posts
Post not yet marked as solved
2 Replies
332 Views
I encountered an error while experimenting with the new CreateMLComponents in a playground with the following code:

import CreateMLComponents
import CoreML

var fullyConnected = FullyConnectedNetworkRegressor<Float>.init()
fullyConnected.hiddenUnitCounts = [2]

let feature: AnnotatedFeature<MLShapedArray<Float>, Float> = .init(
    feature: .init(scalars: [2, 3], shape: [2]),
    annotation: 5)

let fitted = try? await fullyConnected.fitted(to: [feature, feature])
print(fitted)

The generated error message is included (partially) at the end of this post. I later found out that the same code works fine in an actual app. Any insights?

The error message:

Playground execution failed: error: Execution was interrupted, reason: EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0).
The process has been left at the point where it was interrupted, use "thread return -x" to return to the state before expression evaluation.
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_INSTRUCTION (code=EXC_I386_INVOP, subcode=0x0)
  * frame #0: 0x00007ffb2093728d SwiftNN`SwiftNN.Tensor.scalar<τ_0_0 where τ_0_0: SwiftNN.TensorScalar>(as: τ_0_0.Type) -> τ_0_0 + 157
    frame #1: 0x00007ffb20937cbb SwiftNN`SwiftNN.Tensor.playgroundDescription.getter : Any + 91
    frame #2: 0x000000010da43ac4 PlaygroundLogger`___lldb_unnamed_symbol491 + 820
    frame #3: 0x000000010da45dbd PlaygroundLogger`___lldb_unnamed_symbol505 + 189
    (some more lines ...)
    PlaygroundLogger`___lldb_unnamed_symbol428 + 124
    frame #65: 0x000000010da41dad PlaygroundLogger`playground_log_hidden + 269
    frame #66: 0x000000010ca59aba $__lldb_expr14`async_MainTY1_ at CreateMLComp.xcplaygroundpage:12:5
    frame #67: 0x000000010ca59fb0 $__lldb_expr14`thunk for @escaping @convention(thin) @async () -> () at <compiler-generated>:0
    frame #68: 0x000000010ca5a0c0 $__lldb_expr14`partial apply for thunk for @escaping @convention(thin) @async () -> () at <compiler-generated>:0
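Since the post notes that the same code runs fine in an actual app, and the stack frames point into PlaygroundLogger, here is a minimal sketch of the same fit run from an app target instead of a playground. The regressor, feature types, and values are taken from the post; the function name and the Task wrapper are my own additions, not confirmed by the poster.

import CreateMLComponents
import CoreML

func trainRegressor() {
    Task {
        var fullyConnected = FullyConnectedNetworkRegressor<Float>()
        fullyConnected.hiddenUnitCounts = [2]

        let feature = AnnotatedFeature<MLShapedArray<Float>, Float>(
            feature: MLShapedArray(scalars: [2, 3], shape: [2]),
            annotation: 5)

        // fitted(to:) is async and throwing, so await it inside the Task.
        let fitted = try? await fullyConnected.fitted(to: [feature, feature])
        print(fitted as Any)
    }
}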
Posted by Alan_Z. Last updated.
Post not yet marked as solved
3 Replies
71 Views
I've restarted my ColorProposer app and renamed it Chroma; the app suggests a color for a string. My main function is here:

import Foundation
import NaturalLanguage
import CoreML
import SwiftUI

func predict(for string: String) -> SingleColor? {
    var model: MLModel
    var predictor: NLModel
    do {
        model = try ChromaClassifier(configuration: .init()).model
    } catch {
        print("NIL MDL")
        return nil
    }
    do {
        predictor = try NLModel(mlModel: model)
    } catch {
        print("NIL PREDICT")
        return nil
    }
    let colorKeys = predictor.predictedLabelHypotheses(for: string, maximumCount: 1) // set the maximumCount to 1...7
    print(colorKeys)
    var color: SingleColor = .init(red: 0, green: 0, blue: 0)
    for i in colorKeys {
        color.morphing((ColorKeys.init(rawValue: i.key) ?? .white).toColor().percentage(of: i.value))
        print(color)
    }
    return color
}

extension SingleColor {
    mutating func morphing(_ color: SingleColor) {
        self.blue += color.blue
        self.green += color.green
        self.red += color.red
    }

    func percentage(of percentage: Double) -> SingleColor {
        return .init(red: self.red * percentage, green: self.green * percentage, blue: self.blue * percentage)
    }
}

struct SingleColor: Codable, Hashable, Identifiable {
    var id: UUID {
        get {
            return .init()
        }
    }
    var red: Double
    var green: Double
    var blue: Double
    var color: Color {
        get {
            return Color(red: red / 255, green: green / 255, blue: blue / 255)
        }
    }
}

enum ColorKeys: String, CaseIterable {
    case red = "RED"
    case orange = "ORG"
    case yellow = "YLW"
    case green = "GRN"
    case mint = "MNT"
    case blue = "BLU"
    case violet = "VLT"
    case white = "WHT"
}

extension ColorKeys {
    func toColor() -> SingleColor {
        print(self)
        switch self {
        case .red:
            return .init(red: 255, green: 0, blue: 0)
        case .orange:
            return .init(red: 255, green: 125, blue: 0)
        case .yellow:
            return .init(red: 255, green: 255, blue: 0)
        case .green:
            return .init(red: 0, green: 255, blue: 0)
        case .mint:
            return .init(red: 0, green: 255, blue: 255)
        case .blue:
            return .init(red: 0, green: 0, blue: 255)
        case .violet:
            return .init(red: 255, green: 0, blue: 255)
        case .white:
            return .init(red: 255, green: 255, blue: 255)
        }
    }
}

Here's my view, quite simple:

import SwiftUI
import Combine

struct ContentView: View {
    @AppStorage("Text") var text: String = ""
    let timer = Timer.publish(every: 1, on: .main, in: .common).autoconnect()
    @State var color: Color? = .white

    var body: some View {
        TextField("Text...", text: $text).padding().background(color).onReceive(timer) { _ in
            color = predict(for: text)?.color
            print(color)
        }
    }
}

But the problem of the view not updating still persists. In the printed output, I discovered a really strange issue: the output of print(colorKeys) is always the same.
Posted. Last updated.
Post not yet marked as solved
3 Replies
185 Views
COCO has become a standard dataset format for object detection tasks. Is there a tool for translating this format into the Create ML format, so the data can be used in Create ML for training and evaluation? I've found Roboflow, but I would rather use a Python script than that platform, as it seems too complex for my needs.
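For reference, a minimal sketch of what such a converter needs to produce, assuming the Create ML object detection annotations.json layout (an array of entries with a label and center-based pixel coordinates) and COCO's top-left-origin [x, y, width, height] boxes. The type names here are made up for illustration.

import Foundation

// Hypothetical types for illustration; the JSON layout follows the Create ML
// object detection annotations format as I understand it.
struct CreateMLAnnotation: Codable {
    struct Box: Codable {
        var x: Double      // center x, in pixels
        var y: Double      // center y, in pixels
        var width: Double
        var height: Double
    }
    var label: String
    var coordinates: Box
}

struct CreateMLEntry: Codable {
    var image: String                     // image file name
    var annotations: [CreateMLAnnotation]
}

// COCO stores boxes as [xMin, yMin, width, height]; Create ML expects the box center.
func convert(cocoBBox: [Double], label: String) -> CreateMLAnnotation {
    let (xMin, yMin, w, h) = (cocoBBox[0], cocoBBox[1], cocoBBox[2], cocoBBox[3])
    return CreateMLAnnotation(
        label: label,
        coordinates: .init(x: xMin + w / 2, y: yMin + h / 2, width: w, height: h))
}

// Writing the entries out as annotations.json next to the images:
// let data = try JSONEncoder().encode(entries)
// try data.write(to: URL(fileURLWithPath: "annotations.json"))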
Posted by derrkater. Last updated.
Post not yet marked as solved
1 Reply
157 Views
I'm facing an "Unexpected error" in Create ML while training an Action Classification model. I've added more than 100 videos and followed the same steps as in the Apple docs, but the error still occurs and there isn't much of a description for it either. I've tried different things but nothing worked, so if anyone has faced this kind of issue before and resolved it, please let me know.
Posted by JolChr. Last updated.
Post not yet marked as solved
1 Reply
170 Views
I am trying to build a "Sing that Tune" game. For example: the app will tell the user to sing "Row row your boat", the user will sing "Row row your boat" into the microphone, and if the user's melody is close enough to the actual melody, the game is won. My question: since I'm dealing with live audio that might be "correct" but not "exact," is the best strategy to use ShazamKit and an SHCustomCatalog, or is it better to use Create ML and sound classification? I know a Create ML model can learn the difference between a baby and a firetruck, but can it learn the difference between a good guess and a wrong guess of a sung melody? Thank you, Eli
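For the ShazamKit route, here is a minimal sketch of building an SHCustomCatalog from a reference recording of the tune and matching live microphone buffers against it. The buffer handling, names, and titles are assumptions, and whether signature matching is tolerant enough of off-key singing is exactly the open question above.

import ShazamKit
import AVFoundation

// Build a custom catalog from a reference recording of the target tune.
func makeCatalog(referenceBuffer: AVAudioPCMBuffer) throws -> SHCustomCatalog {
    let generator = SHSignatureGenerator()
    try generator.append(referenceBuffer, at: nil)
    let signature = generator.signature()

    let catalog = SHCustomCatalog()
    let item = SHMediaItem(properties: [.title: "Row Row Your Boat"])
    try catalog.addReferenceSignature(signature, representing: [item])
    return catalog
}

// Match live microphone audio against the catalog.
final class TuneMatcher: NSObject, SHSessionDelegate {
    private let session: SHSession

    init(catalog: SHCustomCatalog) {
        session = SHSession(catalog: catalog)
        super.init()
        session.delegate = self
    }

    // Feed audio buffers from the tap on the audio engine's input node.
    func process(_ buffer: AVAudioPCMBuffer, at time: AVAudioTime?) {
        session.matchStreamingBuffer(buffer, at: time)
    }

    func session(_ session: SHSession, didFind match: SHMatch) {
        print("Close enough: matched \(match.mediaItems.first?.title ?? "tune")")
    }

    func session(_ session: SHSession, didNotFindMatchFor signature: SHSignature, error: Error?) {
        print("No match yet")
    }
}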
Posted. Last updated.
Post not yet marked as solved
1 Reply
986 Views
I am using Turi Create to train a custom object detection model; however, it only supports YOLOv2. Has anyone tried to port v4 or v5 to Core ML? Is there a utility to do this? I have a couple of v4 PyTorch examples I was going to train and then try to convert.
Posted. Last updated.
Post not yet marked as solved
1 Reply
191 Views
Hi, I'm trying to build an object detector model using Create ML. I have updated my Xcode version from 13.4 to 13.4.1. The model created in Create ML version 12.20 works, and the model created in version 12.40 does not. No build errors occurred in either case. After upgrading the Xcode version, the model can be built, but it is not recognized and does not respond when the camera is pointed at the object. I would like to know if there is a way to deal with this problem.
Posted by trtrsc. Last updated.
Post not yet marked as solved
0 Replies
237 Views
I've been using VNRecognizeTextRequest, VNImageRequestHandler, VNRecognizedTextObservation, and VNRecognizedText successfully (in Objective-C) to identify about 25% of bright LED/LCD characters depicting a number string (arranged in several date formats) on a scanned photograph. I first crop to the constant area where the characters are located, then apply some Core Image filters to render the characters in black and white and remove background clutter as much as possible. Only when the characters are nearly perfect, not over- or under-exposed, do I get a return string with all the characters. As an example, an LED image of 93 5 22 will often return 93 S 22, or a 97 4 14 may return 97 Y 14. I can easily substitute the letters with commonly confused numbers, but I would prefer to raise the text recognition above 25% (it will probably never be greater than 50%-75%). So I thought I could use Create ML to create a model (based on the text recognition model Apple has already created), with training folders labeled with each numeric LED/LCD character (1, 2, 3, ...), blurred, with noise, over/under exposed, etc., and improve the recognition. Can I use Create ML to do this? Do I use Image Object Detection, or is it Text Classification, to return a text string like "93 5 22" that I can manipulate later with regular expressions?
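For context, here is a minimal sketch of the Vision request described above, written in Swift rather than Objective-C, with language correction disabled and a post-processing substitution for the commonly confused characters the post mentions. The substitution table and function name are illustrative assumptions.

import Vision

func recognizeLEDDigits(in cgImage: CGImage, completion: @escaping (String?) -> Void) {
    let request = VNRecognizeTextRequest { request, error in
        guard let observations = request.results as? [VNRecognizedTextObservation] else {
            completion(nil)
            return
        }
        let raw = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")

        // Substitute characters that LED/LCD digits are commonly misread as.
        let fixes: [Character: Character] = ["S": "5", "O": "0", "I": "1", "B": "8", "Y": "4"]
        let cleaned = String(raw.map { fixes[$0] ?? $0 })
        completion(cleaned)
    }
    request.recognitionLevel = .accurate
    request.usesLanguageCorrection = false   // digits, not words

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}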
Posted by DrMiller. Last updated.
Post not yet marked as solved
1 Reply
269 Views
Hello, pretty new to Create ML and machine learning as a whole, but surprise surprise, I'm trying to train a model. I have a bunch of annotated images exported from IBM Cloud Annotations for use with Create ML, and I have no problem using them as training data. Unfortunately, I have no idea where to implement augmentation settings. I'm aware that they're available in the playground implementation of Create ML, but I haven't tried it, nor do I really want to. In the Create ML app, though, I see no setting where I can enable augmentation, nor anywhere I can directly modify the code to enable it that way. Again, this is an object detection project. If I'm missing something, help would be greatly appreciated. Thanks!
Posted by gluebaby. Last updated.
Post not yet marked as solved
2 Replies
290 Views
What's New with Create ML discusses Repetition Counting and says to see the sample code and the article linked to this session. There is no mention of Repetition Counting in any documentation, it is not linked in the article related to the session, nor is it anywhere to be found in the WWDC22 sample code. Rumor was that the sample code was called "CountMyActions", but it is nowhere to be found. Please link the sample code to the reference, and include it in the list of WWDC sample code. -- Glen
Posted. Last updated.
Post marked as solved
2 Replies
331 Views
Hi everyone, I am not that new to Swift anymore, but I still don't have many skills with it. I am facing some difficult issues with this tutorial from the Apple docs: https://developer.apple.com/documentation/createml/creating_an_image_classifier_model/#overview I have already successfully created several mlmodels as described in the tutorial. However, when I come to the step of integrating one into Xcode, I am facing the issue that I don't get any predictions from my mlmodel. I followed all the steps in the tutorial and downloaded the example code. The tutorial says "just change this model line with your model and it works", but indeed it doesn't. By "doesn't work" I mean that I don't get any predictions back when I use this example. I can start the application and test it with the iPhone simulator, but the only output I get is "no predictions, please check console log". I searched through the code and found out that this is an error message which comes from MainViewController.swift (lines 99-103):

private func imagePredictionHandler(_ predictions: [ImagePredictor.Prediction]?) {
    guard let predictions = predictions else {
        updatePredictionLabel("No predictions. (Check console log.)")
        return
    }
    // ...
}

As I understand the code, it returns this message when no predictions come back from the mlmodel. If I use an mlmodel provided by Apple (such as MobileNetV2), the example code works every time (it gives predictions back). That's why I am pretty sure the issue has to be somewhere on my side, but I can't figure it out. The mlmodel is trained with images from the fruits360 dataset plus some self-added images of charts. To balance the classes I took 70 pictures of each class. If I try the model in the Create ML preview, I can see it is able to predict my validation pictures, but when I integrate the model in Xcode it isn't able to give me those predictions for the exact same images. Does anyone know how to resolve this issue? I'm using the latest Xcode version. Thanks in advance
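Not the tutorial's code, but a minimal, generic sketch of classifying one image directly with Vision and a Create ML-trained model, which can help check whether the model itself returns predictions outside the sample app. The FruitClassifier class name is an assumption; substitute the generated class for your own mlmodel.

import Vision
import CoreML
import UIKit

// Assumes the generated model class is called FruitClassifier; substitute your own.
func classify(_ image: UIImage) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? FruitClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        print("Could not load model")
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, error in
        if let error = error {
            print("Request failed: \(error)")
            return
        }
        let results = request.results as? [VNClassificationObservation] ?? []
        // Print every candidate with its confidence instead of filtering,
        // so low-confidence predictions are still visible while debugging.
        for result in results.prefix(5) {
            print(result.identifier, result.confidence)
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}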
Posted by Benhuen. Last updated.
Post not yet marked as solved
3 Replies
331 Views
I'm trying to make an app that'll suggest a color based on a keyword the user inputs. I store the model's predictions in an array of [String: Double] dictionaries and then compute the output color. However, I'm stuck on two strange errors. Here's my code:

extension Array where Element == [String: Double] {
    func average() -> RGBList {
        var returnValue = (r: 0, g: 0, b: 0)
        var r = 0
        var g = 0
        var b = 0
        for key in self {
            r = Int(key[0].split(",")[0]) * key[1]
            ...
        }
        ...
    }
}
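A minimal sketch of one way the averaging could be written, assuming the dictionary keys are strings of the form "r,g,b" and the values are prediction weights. That layout is inferred from the snippet, not confirmed by the post, and the return type is simplified to a tuple.

// Hypothetical: assumes keys look like "255,0,0" and values are weights.
extension Array where Element == [String: Double] {
    func averageColor() -> (r: Double, g: Double, b: Double) {
        var r = 0.0, g = 0.0, b = 0.0
        var totalWeight = 0.0

        for prediction in self {
            for (key, weight) in prediction {
                // A dictionary has no integer subscript; split the key string instead.
                let parts = key.split(separator: ",").compactMap { Double($0) }
                guard parts.count == 3 else { continue }
                r += parts[0] * weight
                g += parts[1] * weight
                b += parts[2] * weight
                totalWeight += weight
            }
        }

        guard totalWeight > 0 else { return (0, 0, 0) }
        return (r / totalWeight, g / totalWeight, b / totalWeight)
    }
}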
Posted. Last updated.
Post not yet marked as solved
1 Reply
278 Views
Hello everyone, I am working on a simple ML project. I trained a custom model on classifying the images of US dollar bill notes. Everything seems good to me and I don't know why the classification label isn't being updated with any value. Files: https://codeshare.io/OdXzMW
Posted. Last updated.
Post not yet marked as solved
18 Replies
2.5k Views
Hello, when I used Xcode to generate the model encryption key, an error was reported: 'Failed to Generate Encryption Key' and 'Sign in with your Apple ID in the Apple ID pane in System Preferences and retry'. But I have logged in with my Apple ID in System Preferences, and this error still occurs. I reinstalled Xcode and re-logged in to my Apple ID; the error still exists. Xcode Version 12.4, macOS Catalina 10.15.7. Thanks
Posted by lake-tang. Last updated.
Post not yet marked as solved
0 Replies
282 Views
Following the guide found here, I've been able to preview image classification in Create ML and Xcode. However, when I swap out the MobileNet model for my own and try running it as an app, images are not classified accurately. When I check the same images using my model in its Xcode preview tab, the guesses are accurate. I've tried changing this line to the different available options, but it doesn't seem to help: imageClassificationRequest.imageCropAndScaleOption = .centerCrop Does anyone know why a model would work well in preview but not while running in the app? Thanks in advance.
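For comparison, a small generic sketch of how a Vision classification request like the one quoted above is typically set up, passing the image orientation explicitly when the handler is created. This is not the guide's exact code; the function name and the orientation-mismatch angle are assumptions, offered only because preview images and live camera frames can differ in orientation and scaling.

import Vision
import ImageIO

// Generic sketch: the key points are the crop/scale option on the request
// and giving the handler the image orientation explicitly.
func classify(cgImage: CGImage, orientation: CGImagePropertyOrientation,
              with visionModel: VNCoreMLModel) {
    let imageClassificationRequest = VNCoreMLRequest(model: visionModel) { request, _ in
        let results = request.results as? [VNClassificationObservation] ?? []
        print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
    }
    imageClassificationRequest.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cgImage: cgImage,
                                        orientation: orientation,
                                        options: [:])
    try? handler.perform([imageClassificationRequest])
}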
Posted. Last updated.
Post not yet marked as solved
1 Reply
236 Views
Hello, is there a possibility to use the action classifier in Create ML to create a fitness app that can recognize the action AND give correction feedback to the user by using the recognized keypoints? Maybe use three keypoints as an angle and give feedback? How can I access those joints in Xcode?
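The action classifier itself only outputs a label, but body-pose keypoints are available from Vision; here is a minimal sketch of reading three joints and computing the angle between them, which is the kind of feedback signal described above. The joint choice, confidence threshold, and angle math are illustrative assumptions.

import Vision
import CoreGraphics

// Computes the elbow angle (shoulder-elbow-wrist) from a body-pose observation.
func elbowAngle(from observation: VNHumanBodyPoseObservation) -> CGFloat? {
    guard let shoulder = try? observation.recognizedPoint(.leftShoulder),
          let elbow = try? observation.recognizedPoint(.leftElbow),
          let wrist = try? observation.recognizedPoint(.leftWrist),
          shoulder.confidence > 0.3, elbow.confidence > 0.3, wrist.confidence > 0.3 else {
        return nil
    }

    // Angle at the elbow between the two limb segments, in degrees.
    let upper = atan2(shoulder.location.y - elbow.location.y,
                      shoulder.location.x - elbow.location.x)
    let lower = atan2(wrist.location.y - elbow.location.y,
                      wrist.location.x - elbow.location.x)
    var degrees = abs(upper - lower) * 180 / .pi
    if degrees > 180 { degrees = 360 - degrees }
    return degrees
}

// Running the pose request on a camera frame.
func bodyPose(in pixelBuffer: CVPixelBuffer) -> VNHumanBodyPoseObservation? {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
    return request.results?.first as? VNHumanBodyPoseObservation
}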
Posted by Ticallist. Last updated.
Post not yet marked as solved
0 Replies
856 Views
Most examples, including those in the documentation, of using Core ML with iOS involve creating the model with Xcode on a Mac and then including the Xcode-generated MLFeatureProvider class in the iOS app and (re)compiling the app. However, it's also possible to download an uncompiled model directly into an iOS app and then compile it (as a background task), but then there's no MLFeatureProvider class. The same applies when using CreateML in an iOS app (iOS 15 beta): there's no automatically generated MLFeatureProvider. So how do you get one? I've seen a few queries on here and elsewhere related to this problem, but couldn't find any clear examples of a solution. So after some experimentation, here's my take on how to go about it.

Firstly, if you don't know what features the model uses, print the model description, e.g.

print("Model: ", mlModel!.modelDescription)

Which gives:

Model:
inputs: (
    "course : String",
    "lapDistance : Double",
    "cumTime : Double",
    "distance : Double",
    "lapNumber : Double",
    "cumDistance : Double",
    "lapTime : Double"
)
outputs: (
    "duration : Double"
)
predictedFeatureName: duration
............

A prediction is created by

guard let durationOutput = try? mlModel!.prediction(from: runFeatures) ......

where runFeatures is an instance of a class that provides a set of feature names and the value of each feature to be used in making a prediction. So, for my model that predicts run duration from course, lap number, lap time, etc., the RunFeatures class is:

class RunFeatures: MLFeatureProvider {
    var featureNames: Set<String> = ["course", "distance", "lapNumber", "lapDistance", "cumDistance", "lapTime", "cumTime", "duration"]
    var course: String = "n/a"
    var distance: Double = -0.0
    var lapNumber: Double = -0.0
    var lapDistance: Double = -0.0
    var cumDistance: Double = -0.0
    var lapTime: Double = -0.0
    var cumTime: Double = -0.0

    func featureValue(for featureName: String) -> MLFeatureValue? {
        switch featureName {
        case "distance":
            return MLFeatureValue(double: distance)
        case "lapNumber":
            return MLFeatureValue(double: lapNumber)
        case "lapDistance":
            return MLFeatureValue(double: lapDistance)
        case "cumDistance":
            return MLFeatureValue(double: cumDistance)
        case "lapTime":
            return MLFeatureValue(double: lapTime)
        case "cumTime":
            return MLFeatureValue(double: cumTime)
        case "course":
            return MLFeatureValue(string: course)
        default:
            return MLFeatureValue(double: -0.0)
        }
    }
}

Then in my data model, prior to prediction, I create an instance of RunFeatures with the input values on which I want to base the prediction:

var runFeatures = RunFeatures()
runFeatures.distance = 3566.0
runFeatures.lapNumber = 1.0
runFeatures.lapDistance = 1001.0
runFeatures.lapTime = 468.0
runFeatures.cumTime = 468.0
runFeatures.cumDistance = 1001.0
runFeatures.course = "Wishing Well Loop"

NOTE: there's no need to provide the output feature ("duration") here, nor in the featureValue method above, but it is required in featureNames. Then get the prediction with

guard let durationOutput = try? mlModel!.prediction(from: runFeatures)

Regards, Michaela
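As a small follow-up to the write-up above: prediction(from:) returns another MLFeatureProvider, so the predicted value can be read back by feature name. A brief sketch, where the wrapping function and its else clause are my own additions for completeness:

import CoreML

func predictedDuration(from mlModel: MLModel, with runFeatures: RunFeatures) -> Double? {
    guard let durationOutput = try? mlModel.prediction(from: runFeatures) else {
        return nil
    }
    // The output is itself an MLFeatureProvider; look the value up by feature name.
    return durationOutput.featureValue(for: "duration")?.doubleValue
}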
Posted. Last updated.