Create ML


Create machine learning models for use in your app using Create ML.

Create ML Documentation

Posts under Create ML tag

44 Posts
Post not yet marked as solved
0 Replies
31 Views
Hi everyone, is it possible to use a 3D USDZ file to train a model in Create ML? I see there is an image option, but it would be good to use the USDZ files produced by Object Capture in Reality Composer. Or is this in the works for forthcoming Xcode updates? Many thanks, Stuart
Posted. Last updated.
Post not yet marked as solved
2 Replies
89 Views
Note: I posted this to Feedback Assistant but haven't gotten a response for 3 months =( FB13482199

I am trying to train a large image classifier. I have a training run for ~300,000 images. Each class has a folder, and the file names within the folders are somewhat random. 381 classes. I am on an M2 Pro, Sonoma 14.0, running Create ML Version 5.0 (121.1). I would prefer not to pursue the PyTorch/HF -> coremltools route.

Create ML seems to consistently crash ~25,000-30,000 images in, during the feature extraction phase, with "Unexpected Error". It does not seem to be an out-of-memory issue. I am looking for some guidance, since it seems impossible to debug why this is consistently crashing.

My initial assumption was that it could be due to blank/corrupt files. I do not think that is the case. I also checked whether there were any special characters in the data/folders. I wasn't able to go through all of them, but I did try some programmatic regex checks; I don't think this is the case either. I attached the sysdiagnose results in Feedback Assistant after the crash happened. I did notice, when going into /var/logs, a write issue saying that the Mac had written too much to disk. Note: I also tried Xcode 15.2 beta this time and the associated Core ML version.

My questions:
How can I fix this?
How should I go about debugging Create ML errors in the future?
"Unexpected Error": where can I get the exact Create ML logs on my device? This is far too broad an error statement.

Please let me know. As a note, I did successfully train a past model on ~100,000 images, and I am planning to 10-15x that if this run is successful. Please help; I've spent a lot of time gathering the extra data and to date have been an occasional power user of Create ML. I haven't heard back from Apple since December =/. I assume I'm not the only one with this problem, so I'm looking for any instructions to hands-on debug and help others. Thx!
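One way to rule out the blank/corrupt-file theory mentioned above is to try decoding every file before handing the dataset to Create ML. The sketch below is not from the post; it assumes a hypothetical dataset location (one sub-folder per class) and simply lists files that Image I/O cannot parse or decode.

import Foundation
import ImageIO

// Hypothetical dataset root: one sub-folder per class, images inside.
let datasetURL = URL(fileURLWithPath: "/path/to/training-data")

let keys: [URLResourceKey] = [.isRegularFileKey]
guard let enumerator = FileManager.default.enumerator(at: datasetURL,
                                                      includingPropertiesForKeys: keys) else {
    fatalError("Could not enumerate \(datasetURL.path)")
}

var badFiles: [URL] = []
for case let fileURL as URL in enumerator {
    guard (try? fileURL.resourceValues(forKeys: Set(keys)))?.isRegularFile == true else { continue }
    // Ask Image I/O to parse the header and decode the first frame;
    // a nil source, zero frames, or a failed decode suggests a corrupt file.
    guard let source = CGImageSourceCreateWithURL(fileURL as CFURL, nil),
          CGImageSourceGetCount(source) > 0,
          CGImageSourceCreateImageAtIndex(source, 0, nil) != nil else {
        badFiles.append(fileURL)
        continue
    }
}
print("Found \(badFiles.count) unreadable files")
badFiles.forEach { print($0.path) }

Running this over the full 300k-image set at least confirms whether the crash correlates with undecodable files or lies elsewhere in the feature extraction phase.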
Posted by jzooms. Last updated.
Post not yet marked as solved
0 Replies
160 Views
Context
I've trained my model for object detection with 4k+ images. Under Preview I'm able to check the prediction for image "A", which detects two labels at 100%, and its bounding boxes look accurate.

The problem
However, inside a Swift Playground, when I try to perform object detection using the same model and the same image, I don't get the same results.

What I expected
That after performing the request and processing the array of VNRecognizedObjectObservation, I would see the very same results that appear in the Create ML Preview.

Notes:
I import the model into the playground by drag and drop. I trained the images in JPEG format. The test image is rotated so that it looks vertical using the macOS Finder rotation tool. I've tried passing a different orientation while creating the VNImageRequestHandler, with the same result.

Swift Playground code
This is the code I'm using:

import UIKit
import CoreML
import Vision

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else {
            return
        }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}

Additional notes & uncertainties
Not sure if this is relevant, but just in case: I trained the model using pictures I took from my iPhone in 48 MP HEIC format. All photos were in vertical orientation. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to be able to annotate the images using the CVAT tool and then convert to the Create ML annotation format.

Assumption #1
I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the input image, meaning I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2
That also makes me assume the same resizing happens when I make a prediction?
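One detail worth keeping in mind with the snippet above: a raw cgImage carries no orientation metadata, while Create ML Preview reads the file (and its EXIF orientation) directly. A minimal sketch of handing Vision an explicit orientation follows; the cgOrientation helper is hypothetical (not from the post) and just maps UIImage.Orientation to the value Vision expects.

import UIKit
import Vision
import ImageIO

// Hypothetical mapping from UIImage.Orientation to the EXIF-style orientation Vision uses.
func cgOrientation(from orientation: UIImage.Orientation) -> CGImagePropertyOrientation {
    switch orientation {
    case .up: return .up
    case .down: return .down
    case .left: return .left
    case .right: return .right
    case .upMirrored: return .upMirrored
    case .downMirrored: return .downMirrored
    case .leftMirrored: return .leftMirrored
    case .rightMirrored: return .rightMirrored
    @unknown default: return .up
    }
}

// Usage: give Vision both the pixels and how they are rotated.
let image = UIImage(named: "TEST_IMAGE.HEIC")!
let handler = VNImageRequestHandler(cgImage: image.cgImage!,
                                    orientation: cgOrientation(from: image.imageOrientation),
                                    options: [:])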
Posted by joe_dev. Last updated.
Post not yet marked as solved
2 Replies
345 Views
I have trained a model to classify some symbols using Create ML. In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data.

If I use a CVPixelBuffer obtained from an AVCaptureSession, the classifier runs as I would expect. If I point it at the symbols it works fairly accurately, so I know the model is trained correctly and works in my app.

If I try to use a cgImage obtained by cropping a section out of a larger image (from the gallery), the classifier does not work. It always seems to return the same result (although the confidence is not 1.0 and varies for each image, it is within several decimal places of it, e.g. 0.9999).

If I pause the app when I have the cropped image, use the debugger to obtain it (via the little eye icon and then Open in Preview), and then drop the image into the Preview section of the .mlmodel file or into Create ML, the model classifies the image correctly.

If I scale the cropped image to the same size I get from my camera, and convert the cgImage to a CVPixelBuffer with the same size and colour space as the camera (1504 x 1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the 'centerCrop' or 'scaleFit' options. So I know that 'something' is happening, but it's not the correct thing.

I was under the impression that passing a cgImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML, this conversion is obviously being done behind the scenes, because the cropped part is detected.

What am I doing wrong?

tl;dr:
My model works, as backed up by using video input directly and also by dropping cropped images into the preview sections.
Passing the cropped images directly to the VNImageRequestHandler does not work.
Modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results.

I'd like my app to behave the same way the preview does: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns the same result as in Create ML.
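For reference, a minimal sketch (not the poster's code) of classifying a cropped CGImage while letting Vision do the scaling, rather than pre-converting to a video-range YCbCr pixel buffer. The model class name SymbolClassifier and the function signature are assumptions for illustration.

import UIKit
import CoreML
import Vision

// Sketch: classify a cropped CGImage, controlling how Vision scales it to the model's input.
func classify(cropped cgImage: CGImage, orientation: CGImagePropertyOrientation) throws {
    // SymbolClassifier is a placeholder for the Create ML-generated class.
    let coreMLModel = try VNCoreMLModel(for: SymbolClassifier(configuration: MLModelConfiguration()).model)
    let request = VNCoreMLRequest(model: coreMLModel) { request, _ in
        guard let results = request.results as? [VNClassificationObservation] else { return }
        results.prefix(3).forEach { print($0.identifier, $0.confidence) }
    }
    // Let Vision crop/scale to the model's input size instead of doing it by hand.
    request.imageCropAndScaleOption = .centerCrop

    // Pass the orientation explicitly; a bare CGImage has no orientation metadata.
    let handler = VNImageRequestHandler(cgImage: cgImage, orientation: orientation, options: [:])
    try handler.perform([request])
}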
Posted by Bergasms. Last updated.
Post not yet marked as solved
0 Replies
215 Views
Hello, I am making a rock paper scissors game using object detection, with a model I made in Create ML and a dataset I found online. The trained model works, and I tried to implement it in Xcode, but when I run my app I get this error:

This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.

I am still new to Create ML and I cannot seem to find anything about making my model updatable in Create ML.
Posted by 1emil. Last updated.
Post not yet marked as solved
1 Replies
280 Views
How do I add an already made Core ML model to my playground? I tried what people recommended online: building a test project to get the .mlmodelc file, then putting that in the playground along with the autogenerated class for the model. However, I keep getting many errors.

The errors:

Unexpected duplicate tasks
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb

Unexpected duplicate tasks
Showing Recent Issues
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel

ZooClassifier.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.
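The duplicate-task errors reference the uncompiled ZooClassifier.mlmodel in Resources, which Xcode tries to compile and code-generate on top of the hand-copied class. A minimal sketch, assuming only the compiled ZooClassifier.mlmodelc is kept in the playground's Resources (an assumption, not something stated in the post), loads it directly with MLModel(contentsOf:) so no code generation is needed at all:

import Foundation
import CoreML

do {
    // Look up the compiled model in the app package's main bundle.
    guard let modelURL = Bundle.main.url(forResource: "ZooClassifier", withExtension: "mlmodelc") else {
        fatalError("ZooClassifier.mlmodelc not found in the bundle")
    }
    let model = try MLModel(contentsOf: modelURL)
    // Inspect the expected inputs to confirm the model loaded correctly.
    print(model.modelDescription.inputDescriptionsByName)
} catch {
    print("Failed to load model:", error)
}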
Posted. Last updated.
Post not yet marked as solved
0 Replies
264 Views
Is 30x30 the maximum grid size in the Create ML app? The input field lets me enter numbers higher than that, but when training starts, the value falls back to 30x30. Is that a limitation or a bug in the app?
Posted by gcstr. Last updated.
Post not yet marked as solved
0 Replies
172 Views
Where can I find Create ML logs? I'd like to inspect log lines, if they exist, to diagnose what kind of error the app encounters when I provide training data for a multi-label image classifier and the UI displays "Data Analysis stopped". I do see some crash reports for "MLRecipeExecutionService" in the Console app which seem related, but I haven't spotted anything useful there yet.
Posted. Last updated.
Post marked as solved
5 Replies
874 Views
I have been attempting to debug this for over 10 hours... I am working on implementing Apple's MobileNetV2 Core ML model in a Swift Playground. I performed the following steps:

Compiled the Core ML model in a regular Xcode project
Moved the compiled model (MobileNetV2.mlmodelc) to the Resources folder of the Swift Playground
Copied the model class (MobileNetV2.swift) into the Sources folder of the Swift Playground
Used UIImage extensions to resize and convert the UIImage into a CVPixelBuffer
Implemented basic code to run the model

However, every time I run this, it keeps giving me this error:

MobileNetV2.swift:100: Fatal error: Unexpectedly found nil while unwrapping an Optional value

from the automatically generated model class function:

/// URL of model assuming it was installed in the same bundle as this class
class var urlOfModelInThisBundle: URL {
    let bundle = Bundle(for: self)
    return bundle.url(forResource: "MobileNetV2", withExtension: "mlmodelc")!
}

The model builds perfectly. This is my ContentView code:

import SwiftUI

struct ContentView: View {
    func test() -> String {
        // 1. Load the image from the 'Resources' folder.
        let newImage = UIImage(named: "img")

        // 2. Resize the image to the required input dimension of the Core ML model.
        //    Method from UIImage+Extension.swift
        let newSize = CGSize(width: 224, height: 224)
        guard let resizedImage = newImage?.resizeImageTo(size: newSize) else {
            fatalError("⚠️ The image could not be found or resized.")
        }

        // 3. Convert the resized image to a CVPixelBuffer, the required input type
        //    of the Core ML model. Method from UIImage+Extension.swift
        guard let convertedImage = resizedImage.convertToBuffer() else {
            fatalError("⚠️ The image could not be converted to a CVPixelBuffer.")
        }

        // 1. Create the ML model instance from the model class in the 'Sources' folder.
        let mlModel = MobileNetV2()

        // 2. Get the prediction output.
        guard let prediction = try? mlModel.prediction(image: convertedImage) else {
            fatalError("⚠️ The model could not return a prediction.")
        }

        // 3. Check the results of the prediction.
        let mostLikelyImageCategory = prediction.classLabel
        let probabilityOfEachCategory = prediction.classLabelProbs
        var highestProbability: Double {
            let probability = probabilityOfEachCategory[mostLikelyImageCategory] ?? 0.0
            let roundedProbability = (probability * 100).rounded(.toNearestOrEven)
            return roundedProbability
        }
        return "\(mostLikelyImageCategory): \(highestProbability)%"
    }

    var body: some View {
        VStack {
            let _ = print(test())
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
            Image(uiImage: UIImage(named: "img")!)
        }
    }
}

Upon printing my bundle contents, I get:

["_CodeSignature", "metadata.json", "__PlaceholderAppIcon76x76@2x~ipad.png", "Info.plist", "__PlaceholderAppIcon60x60@2x.png", "coremldata.bin", "{App Name}", "PkgInfo", "Assets.car", "embedded.mobileprovision"]

Anything would help 🙏

For additional reference, here are my UIImage extensions in ExtImage.swift:

// Huge thanks to @mprecke on GitHub for these UIImage extension functions.
import Foundation
import UIKit

extension UIImage {
    func resizeImageTo(size: CGSize) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func convertToBuffer() -> CVPixelBuffer? {
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary

        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)
        guard status == kCVReturnSuccess else { return nil }

        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)

        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)

        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()

        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pixelBuffer
    }
}
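A workaround sometimes suggested for app playgrounds (an assumption, not something from the post): the generated urlOfModelInThisBundle uses Bundle(for: self), while a .swiftpm package places its resources in Bundle.main, so locating the compiled model there explicitly can avoid the force-unwrap crash. The loader below is hypothetical and assumes MobileNetV2.mlmodelc really is copied into the package as a folder.

import Foundation
import CoreML

// Hypothetical loader for a compiled model bundled with a Swift Playgrounds app package.
func loadMobileNetV2() throws -> MLModel {
    // Look in Bundle.main instead of Bundle(for:), which the generated class uses.
    guard let url = Bundle.main.url(forResource: "MobileNetV2", withExtension: "mlmodelc") else {
        // Print what actually got copied so a flattened or missing .mlmodelc folder is obvious;
        // a loose coremldata.bin at the bundle root (as in the listing above) hints at flattening.
        let contents = (try? FileManager.default.contentsOfDirectory(atPath: Bundle.main.bundlePath)) ?? []
        fatalError("MobileNetV2.mlmodelc not found in bundle. Bundle contents: \(contents)")
    }
    return try MLModel(contentsOf: url)
}

// Usage sketch: wrap the raw MLModel back into the generated class if desired.
// let mobileNet = MobileNetV2(model: try loadMobileNetV2())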
Posted. Last updated.
Post not yet marked as solved
0 Replies
369 Views
In theory, sending signals from iPhone apps to and from the brain with non-invasive technology could be achieved through a combination of brain-computer interface (BCI) technologies, machine learning algorithms, and mobile app development.

Brain-Computer Interface (BCI): BCI technology can be used to record brain signals and translate them into commands that can be understood by a computer or a mobile device. Non-invasive BCIs, such as electroencephalography (EEG), can track brain activity using sensors placed on or near the head[6]. For instance, a portable, non-invasive, mind-reading AI developed by UTS uses an AI model called DeWave to translate EEG signals into words and sentences[3].

Machine Learning Algorithms: Machine learning algorithms can be used to analyze and interpret the brain signals recorded by the BCI. These algorithms can learn from large quantities of EEG data to translate brain signals into specific commands[3].

Mobile App Development: A mobile app can be developed to receive these commands and perform specific actions on the iPhone. The app could also potentially send signals back to the brain using technologies like transcranial magnetic stimulation (TMS), which can deliver information to the brain[5].

However, it's important to note that while this technology is theoretically possible, it's still in the early stages of development and faces significant technical and ethical challenges. Current non-invasive BCIs do not have the same level of fidelity as invasive devices, and the practical application of these systems is still limited[1][3]. Furthermore, ethical considerations around privacy, consent, and the potential for misuse of this technology must also be addressed[13].

Sources
[1] You can now use your iPhone with your brain after a major breakthrough | Semafor https://www.semafor.com/article/11/01/2022/you-can-now-use-your-iphone-with-your-brain
[2] ! Are You A Robot? https://www.sciencedirect.com/science/article/pii/S1110866515000237
[3] Portable, non-invasive, mind-reading AI turns thoughts into text https://techxplore.com/news/2023-12-portable-non-invasive-mind-reading-ai-thoughts.html
[4] Elon Musk's Neuralink implants brain chip in first human https://www.reuters.com/technology/neuralink-implants-brain-chip-first-human-musk-says-2024-01-29/
[5] BrainNet: A Multi-Person Brain-to-Brain Interface for Direct Collaboration Between Brains - Scientific Reports https://www.nature.com/articles/s41598-019-41895-7
[6] Brain-computer interfaces and the future of user engagement https://www.fastcompany.com/90802262/brain-computer-interfaces-and-the-future-of-user-engagement
[7] Mobile App + Wearable For Neurostimulation - Accion Labs https://www.accionlabs.com/mobile-app-wearable-for-neurostimulation
[8] Signal Generation, Acquisition, and Processing in Brain Machine Interfaces: A Unified Review https://www.frontiersin.org/articles/10.3389/fnins.2021.728178/full
[9] Mind-reading technology has arrived https://www.vox.com/future-perfect/2023/5/4/23708162/neurotechnology-mind-reading-brain-neuralink-brain-computer-interface
[10] Synchron Brain Implant - Breakthrough Allows You to Control Your iPhone With Your Mind - Grit Daily News https://gritdaily.com/synchron-brain-implant-controls-tech-with-the-mind/
[11] Mind uploading - Wikipedia https://en.wikipedia.org/wiki/Mind_uploading
[12] BirgerMind - Express your thoughts loudly https://birgermind.com
[13] Elon Musk wants to merge humans with AI. How many brains will be damaged along the way? https://www.vox.com/future-perfect/23899981/elon-musk-ai-neuralink-brain-computer-interface
[14] Models of communication and control for brain networks: distinctions, convergence, and future outlook https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7655113/
[15] Mind Control for the Masses—No Implant Needed https://www.wired.com/story/nextmind-noninvasive-brain-computer-interface/
[16] Elon Musk unveils Neuralink’s plans for brain-reading ‘threads’ and a robot to insert them https://www.theverge.com/2019/7/16/20697123/elon-musk-neuralink-brain-reading-thread-robot
[17] Essa and Kotte https://arxiv.org/pdf/2201.04229.pdf
[18] Synchron's Brain Implant Breakthrough Lets Users Control iPhones And iPads With Their Mind https://hothardware.com/news/brain-implant-breakthrough-lets-you-control-ipad-with-your-mind
[19] An Apple Watch for Your Brain https://www.thedeload.com/p/an-apple-watch-for-your-brain
[20] Toward an information theoretical description of communication in brain networks https://direct.mit.edu/netn/article/5/3/646/97541/Toward-an-information-theoretical-description-of
[21] A soft, wearable brain–machine interface https://news.ycombinator.com/item?id=28447778
[22] Portable neurofeedback App https://www.psychosomatik.com/en/portable-neurofeedback-app/
[23] Intro to Brain Computer Interface http://learn.neurotechedu.com/introtobci/
Posted by ztick. Last updated.
Post not yet marked as solved
0 Replies
272 Views
After training on my dataset, the training, validation, and testing sets all show 0% detection accuracy, and all my test photos come back as false negatives. The dataset has 1032 photos and 2 classes, and I used Roboflow for the image annotation. For the network, I chose the full network. Is there any way to fix this?
Posted. Last updated.
Post not yet marked as solved
0 Replies
322 Views
I created a Hand Pose model using CreateML and integrated it into my SwiftUI project app. While coding, I referred to the Apple Developer documentation app for the necessary code. However, when I ran the app on an iPhone 14, the camera didn't display any effects or finger numbers as expected.

Note: I've already tested the ML model separately, and it works fine.

The code:

import CoreML
import SceneKit
import SwiftUI
import Vision
import ARKit

struct ARViewContainer: UIViewControllerRepresentable {
    let arViewController: ARViewController
    let model: modelHand

    func makeUIViewController(context: UIViewControllerRepresentableContext<ARViewContainer>) -> ARViewController {
        arViewController.model = model
        return arViewController
    }

    func updateUIViewController(_ uiViewController: ARViewController, context: UIViewControllerRepresentableContext<ARViewContainer>) {
        // Update the view controller if needed
    }
}

class ARViewController: UIViewController, ARSessionDelegate {
    var frameCounter = 0
    let handPosePredictionInterval = 10
    var model: modelHand!
    var effectNode: SCNNode?

    override func viewDidLoad() {
        super.viewDidLoad()
        let arView = ARSCNView(frame: view.bounds)
        view.addSubview(arView)

        let session = ARSession()
        session.delegate = self

        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        let pixelBuffer = frame.capturedImage
        let handPoseRequest = VNDetectHumanHandPoseRequest()
        handPoseRequest.maximumHandCount = 1
        handPoseRequest.revision = VNDetectHumanHandPoseRequestRevision1

        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
        do {
            try handler.perform([handPoseRequest])
        } catch {
            assertionFailure("Hand Pose Request failed: \(error)")
        }

        guard let handPoses = handPoseRequest.results, !handPoses.isEmpty else { return }

        if frameCounter % handPosePredictionInterval == 0 {
            if let handObservation = handPoses.first as? VNHumanHandPoseObservation {
                do {
                    let keypointsMultiArray = try handObservation.keypointsMultiArray()
                    let handPosePrediction = try model.prediction(poses: keypointsMultiArray)
                    let confidence = handPosePrediction.labelProbabilities[handPosePrediction.label]!
                    print("Confidence: \(confidence)")
                    if confidence > 0.9 {
                        print("Rendering hand pose effect: \(handPosePrediction.label)")
                        renderHandPoseEffect(name: handPosePrediction.label)
                    }
                } catch {
                    fatalError("Failed to perform hand pose prediction: \(error)")
                }
            }
        }
    }

    func renderHandPoseEffect(name: String) {
        switch name {
        case "One":
            print("Rendering effect for One")
            if effectNode == nil {
                effectNode = addParticleNode(for: "One")
            }
        default:
            print("Removing all particle nodes")
            removeAllParticleNode()
        }
    }

    func removeAllParticleNode() {
        effectNode?.removeFromParentNode()
        effectNode = nil
    }

    func addParticleNode(for poseName: String) -> SCNNode {
        print("Adding particle node for pose: \(poseName)")
        let particleNode = SCNNode()
        return particleNode
    }
}

struct ContentView: View {
    let model = modelHand()

    var body: some View {
        ARViewContainer(arViewController: ARViewController(), model: model)
    }
}

#Preview {
    ContentView()
}
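One detail worth checking in the snippet above (an observation, not something stated in the post): the delegate is set on a locally created ARSession that is never run, while the configuration runs on arView.session, so session(_:didUpdate:) may never fire. A minimal sketch of wiring the delegate to the session that actually runs, with the frame counter advanced per frame:

import UIKit
import ARKit

// Sketch only: the ARSCNView's own session is the one that runs, so it should get the delegate.
class HandPoseViewController: UIViewController, ARSessionDelegate {
    private var frameCounter = 0
    private let arView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        arView.session.delegate = self   // delegate on the session that will actually run

        let configuration = ARWorldTrackingConfiguration()
        configuration.frameSemantics = .personSegmentationWithDepth
        arView.session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        frameCounter += 1   // advance the counter so the prediction interval check is meaningful
        // ... run the VNDetectHumanHandPoseRequest on frame.capturedImage here ...
    }
}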
Posted by rimah. Last updated.
Post marked as solved
1 Replies
366 Views
Hello Apple Developer community,

I hope this message finds you well. I am currently facing an issue with Create ML in Xcode, and I am seeking assistance from the knowledgeable members of this forum. Any help or guidance would be greatly appreciated.

Problem Description:
I am encountering an unexpected issue when attempting to create a classification model for images using Create ML in Xcode. Upon opening Create ML, the application closes unexpectedly when I choose to create a new image classification model.

Steps I Have Taken:
I have already tried the following steps to troubleshoot the issue:
Updated Xcode and macOS to the latest versions.
Restarted Xcode and my computer.
Created a new sample project to isolate the issue.
Despite these efforts, the problem persists.

System Information:
Xcode Version: 15.2
macOS Version: Sonoma 14.0

I am on a tight deadline for a project, and resolving this issue quickly is crucial. Your help is invaluable, and I thank you in advance for any support you can provide.

Best regards.
Posted by JuanLos. Last updated.
Post not yet marked as solved
3 Replies
712 Views
I am trying to implement an ML model with Core ML in a playground for a Student Challenge project, but I cannot get it to work. I have already tried everything I found online, but nothing seems to work (the tutorials were posted a long time ago). Does anyone know how to do this with Xcode 15 and the most recent updates?
Posted. Last updated.
Post not yet marked as solved
1 Replies
528 Views
Hi, in Xcode 14 I was able to train linear regression models with Create ML using large CSV files (I tested with about 30,000 items and 5 features). However, in Xcode 15 (I tested 15.0.1 and 15.1), training stays in the "Processing" state indefinitely. When using a dataset with 900 items, everything works fine. I filed feedback for this issue: FB13516799. Does anybody else have this issue / can reproduce it?
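As a possible workaround while the Create ML app misbehaves, the same regressor can be trained from Swift. A minimal sketch, assuming a macOS command-line target with placeholder file paths and a placeholder target column name; the MLLinearRegressor/DataFrame combination follows the usage quoted in another post on this page.

import Foundation
import CreateML
import TabularData

do {
    // Load the CSV into a DataFrame (placeholder path).
    let csvURL = URL(fileURLWithPath: "/path/to/training.csv")
    let data = try DataFrame(contentsOfCSVFile: csvURL)

    // Train a linear regressor on the placeholder "target" column.
    let regressor = try MLLinearRegressor(trainingData: data, targetColumn: "target")
    print("Training RMSE:", regressor.trainingMetrics.rootMeanSquaredError)

    // Save the trained model (placeholder path).
    try regressor.write(to: URL(fileURLWithPath: "/path/to/Regressor.mlmodel"))
} catch {
    print("Training failed:", error)
}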
Posted by CMDdev. Last updated.
Post not yet marked as solved
4 Replies
447 Views
I'm following the Apple WWDC video (https://developer.apple.com/videos/play/wwdc2021/10037/) about how to create a recommendation model, but I'm getting this error when I run the project, on that line of code from their tutorial:

"Column keywords has element of unsupported type Dictionary<String, Double>."

Here is the block of code, taken from the transcript of the WWDC video, that causes the issue:

func featuresFromMealAndKeywords(meal: String, keywords: [String]) -> [String: Double] {
    // Capture interactions between content (the dish keywords) and context (meal) by
    // adding a copy of each keyword modified to include the meal.
    let featureNames = keywords + keywords.map { meal + ":" + $0 }
    // For each keyword, create an entry in a dictionary of features with a value of 1.0.
    return featureNames.reduce(into: [:]) { features, name in
        features[name] = 1.0
    }
}

var trainingKeywords: [[String: Double]] = []
var trainingTargets: [Double] = []

for item in userPurchasedItems {
    // Add in the positive example.
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: item.keywords))
    trainingTargets.append(1.0)

    // Add in the negative example.
    let negativeKeywords = allKeywords.subtracting(item.keywords)
    trainingKeywords.append(
        featuresFromMealAndKeywords(meal: item.meal, keywords: Array(negativeKeywords)))
    trainingTargets.append(-1.0)
}

// Create the training data.
var trainingData = DataFrame()
trainingData.append(column: Column(name: "keywords", contents: trainingKeywords))
trainingData.append(column: Column(name: "target", contents: trainingTargets))

// Create the model.
let model = try MLLinearRegressor(trainingData: trainingData, targetColumn: "target")

Has the DataFrame implementation changed since then so that it no longer supports Dictionary? I'm at a loss right now on how to reproduce their example.
Posted. Last updated.
Post not yet marked as solved
1 Replies
381 Views
I created a word tagging model in CreateML and am trying to make predictions with it using the following code:

let text = "$30.00 7/1/2023"
let model = TaggingModel()
let input = TaggingModelInput(text: text)
guard let output = try? model.prediction(input: input) else {
    fatalError("Unexpected runtime error.")
}

However, the output separates "$" and "30.00" as separate tokens, as well as "7", "/", "1", "/", etc. Is there any way to make sure prices and dates get grouped together, and to simply separate tokens based on whitespace? Any help is appreciated!
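A hedged sketch of one possible approach (not from the post): pre-tokenize on whitespace so "$30.00" and "7/1/2023" stay whole, then ask the tagger for one label per token through NLModel. This assumes the generated class is named TaggingModel, as above, and that NLModel's token-based prediction API (predictedLabels(forTokens:)) is available on the deployment target.

import CoreML
import NaturalLanguage

do {
    let text = "$30.00 7/1/2023"
    // Whitespace tokenization: keep prices and dates as single tokens.
    let tokens = text.split(separator: " ").map(String.init)

    // Wrap the Create ML word tagger in an NLModel and label the pre-split tokens.
    let nlModel = try NLModel(mlModel: TaggingModel(configuration: MLModelConfiguration()).model)
    let labels = nlModel.predictedLabels(forTokens: tokens)

    for (token, label) in zip(tokens, labels) {
        print(token, "->", label ?? "unknown")
    }
} catch {
    print("Prediction failed:", error)
}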
Posted by esch. Last updated.
Post not yet marked as solved
1 Replies
464 Views
Hello, I'm trying to train an MLImageClassifier dataset using Swift, using the function MLImageClassifier.train. Changing the dataset size makes no difference (I have the same problem with a smaller one): when training reaches 9 of 10 completedUnitCount, even though CPU usage is still high, a soft lock seems to occur and the training never reaches completion (or error). The dataset is made of JPG images, and training the same data in the Create ML app shows no problem. Is there any known issue with the Create ML training APIs at part 9 of the process? Is there any information about this part of the training job? Thank you
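Not a known fix, but a hedged sketch of the asynchronous training API, which returns an MLJob whose progress and result can be observed; that at least surfaces where training stalls and whether an error is ever delivered. Paths are placeholders, and the exact MLTrainingSessionParameters defaults may differ from what is shown; this assumes the checkpointing/async API introduced alongside MLJob.

import Foundation
import CreateML
import Combine

var subscriptions = Set<AnyCancellable>()

do {
    // Placeholder data source: one labeled directory per class.
    let trainingData = MLImageClassifier.DataSource.labeledDirectories(
        at: URL(fileURLWithPath: "/path/to/training-data"))

    // Session directory lets training checkpoint and, in principle, resume.
    let sessionParameters = MLTrainingSessionParameters(
        sessionDirectory: URL(fileURLWithPath: "/path/to/session"))

    let job = try MLImageClassifier.train(
        trainingData: trainingData,
        parameters: MLImageClassifier.ModelParameters(),
        sessionParameters: sessionParameters)

    // job.progress is a standard Progress object that can be inspected while training runs.
    job.result.sink { completion in
        print("Training finished:", completion)   // .finished or .failure(error)
    } receiveValue: { classifier in
        try? classifier.write(to: URL(fileURLWithPath: "/path/to/Classifier.mlmodel"))
    }
    .store(in: &subscriptions)
} catch {
    print("Could not start training:", error)
}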
Posted. Last updated.