Create ML


Create machine learning models for use in your app using Create ML.

Posts under the Create ML tag

50 Posts

Post · Replies · Boosts · Views · Activity (each post below is followed by these counts and its latest activity date)

Object Tracking with Rotation of Objects
Hey, in the "Explore object tracking for visionOS" session we see how a globe can be tracked and how objects can be anchored to various positions on it. My question: if the physical globe is rotated, will the anchored objects respond in real time? I would like to overlay a virtual map on top of a physical globe, so that when the user rotates the physical globe, the virtual map follows seamlessly. Is this possible using Object Tracking? Thanks
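For context, a minimal sketch of how the globe tracking might be wired up with ARKit's ObjectTrackingProvider on visionOS; mapEntity and the .referenceobject path are placeholders, and whether the anchor's pose updates arrive fast enough to feel seamless is exactly the open question:

import ARKit
import RealityKit

// Reference object trained with Create ML's object-tracking template (path is hypothetical).
let referenceObject = try await ReferenceObject(from: URL(fileURLWithPath: "Globe.referenceobject"))
let tracking = ObjectTrackingProvider(referenceObjects: [referenceObject])

let session = ARKitSession()
try await session.run([tracking])

for await update in tracking.anchorUpdates where update.anchor.isTracked {
    // The anchor's transform includes rotation, so a virtual overlay
    // can follow the physical globe as it spins (mapEntity is assumed).
    mapEntity.transform = Transform(matrix: update.anchor.originFromAnchorTransform)
}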
2
1
74
7h
Hundreds of AI models mining and indexing data on macOS
Hi, this is the third time I'm trying to post this on the forum; the Apple moderators keep ignoring it. I'm a deep learning expert specializing in image processing. I want to know why I have hundreds of AI models on my Mac that are indexing everything on my computer while it is idle, using programs like neuralhash that I can't find any information about. I can understand if they are being used to enhance the user experience in Spotlight, Siri, Photos, and other applications, but I couldn't find the necessary information on the web. Usually, (spyware) software like this uses them to classify files in an X/Y coordinate system. This feels like a more advanced version of Stuxnet.

find / -type f -name "*.weights" > ai_models.txt
find / -type f -name "*labels*.txt" > ai_model_labels.txt

Some of the classes from the files:

file_name: SCL_v0.3.1_9c7zcipfrc_558001-labels-v3.txt
document_boarding_pass, document_check_or_checkbook, document_currency_or_bill, document_driving_license, document_office_badge, document_passport, document_receipt, document_social_security_number, hier_curation, hier_document, hier_negative, curation_meme

file_name: SceneNet5_detection_labels-v8d.txt
CVML_UNKNOWN_999999, aircraft, automobile, bicycle, bird, bottle, bus, canine, consumer_electronics, feline, fruit, furniture, headgear, kite, fish, computer_monitor, motorcycle, musical_instrument, document, people, food, sign, watersport, train, ungulates, watercraft, flower, appliance, sports_equipment, tool
4
2
799
1d
WWDC24 - What's New in Create ML - Time Series Forecasting
The "What's New in Create ML" session at WWDC24 went into some depth on time-series forecasting models (beginning at 15:14) and mentioned new models, capabilities, and tools for iOS 18. So far, all I can find is the API documentation; I don't see any other WWDC24 session covering these new time-series forecasting features in Create ML. Is there more substantive documentation on how to use them? Maybe I am looking in the wrong place, but I am fairly new to ML. Is there any food truck / donut shop demo or sample code like in the video? It would be of great interest to get ahead of the curve on this in business applications that could take advantage of it with inventory and ordering data.
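In the meantime, a minimal sketch of loading and ordering a series with the TabularData framework, which is the natural staging step regardless of which forecasting API the new documentation ends up describing; the CSV path and column names are assumptions:

import TabularData
import Foundation

// Hypothetical CSV with "date", "item", and "unitsSold" columns.
let url = URL(fileURLWithPath: "/path/to/donut_sales.csv")
var frame = try DataFrame(contentsOfCSVFile: url,
                          types: ["unitsSold": .integer])

// Sort chronologically so the series is well ordered before training.
frame.sort(on: "date")
print(frame.description)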
2
1
151
2d
"accelerate everyday tasks" in apps without intents?
From https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/:

"Powered by Apple Intelligence, Siri becomes more deeply integrated into the system experience. With richer language-understanding capabilities, Siri is more natural, more contextually relevant, and more personal, with the ability to simplify and accelerate everyday tasks."

From https://developer.apple.com/apple-intelligence/:

"Siri is more natural, more personal, and more deeply integrated into the system. Apple Intelligence provides Siri with enhanced action capabilities, and developers can take advantage of pre-defined and pre-trained App Intents across a range of domains to not only give Siri the ability to take actions in your app, but to make your app's actions more discoverable in places like Spotlight, the Shortcuts app, Control Center, and more. SiriKit adopters will benefit from Siri's enhanced conversational capabilities with no additional work. And with App Entities, Siri can understand content from your app and provide users with information from your app from anywhere in the system."

Based on this, as well as the video at https://developer.apple.com/videos/play/wwdc2024/10133/, my understanding is that for Siri to execute tasks in an application, that application must implement the App Intents API. Can someone at Apple please clarify: will it be possible for Siri or some other aspect of Apple Intelligence / Core ML / Create ML to take actions in applications that do not support these APIs (e.g. web apps, Citrix apps, legacy apps)? Thank you!
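For reference, a minimal sketch of the kind of App Intent the quoted pages describe; the struct and its strings are hypothetical, but the AppIntent protocol, the title requirement, and the shape of perform() come from the App Intents framework:

import AppIntents

// Hypothetical intent exposing one in-app action to Siri, Spotlight, and Shortcuts.
struct OpenTodaysAgendaIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Today's Agenda"

    func perform() async throws -> some IntentResult {
        // Navigate to the app's agenda screen here.
        return .result()
    }
}

Apps that expose nothing like this (web apps in a browser, Citrix-published apps) give Siri no declared action to call, which is exactly what the question is probing.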
0
1
213
3d
CreateML hyperparameters
Hi, I'm trying to create a machine learning model for each stock in the S&P 500 index. When creating the model (a boosted tree model) I try to improve it by tuning hyperparameters with GridSearchCV. It takes so long to create one model that I don't want to think about creating models for all the stocks. I tried working with Create ML in Swift, but it seems to run even slower than sklearn in Python. My questions: how can I make the process faster? Are there any hyperparameters in Create ML in Swift (I couldn't find them in the docs)? And how can I run this code on my GPU (which should be much faster)?
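On the hyperparameter question: Create ML does expose boosted-tree knobs through a ModelParameters value, though there is no built-in grid search, so candidate combinations have to be looped over manually. A minimal sketch, with the CSV path and column names assumed (whether tree training is GPU-accelerated is not something the docs spell out):

import CreateML
import Foundation

// Hypothetical per-stock feature table with a "nextClose" target column.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "AAPL_features.csv"))

// Hyperparameters analogous to GridSearchCV's grid entries.
let params = MLBoostedTreeRegressor.ModelParameters(maxDepth: 6,
                                                    maxIterations: 200,
                                                    stepSize: 0.1)
let model = try MLBoostedTreeRegressor(trainingData: data,
                                       targetColumn: "nextClose",
                                       parameters: params)
print(model.trainingMetrics)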
0
0
260
May ’24
MLUpdateTask returning no model
Hello, I have created a Neural Network → k-Nearest Neighbors classifier with Python:

# SqueezeNet feature extractor followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/ It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

var batchInputs: [MLFeatureProvider] = []
let imageConstraint = model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue
]

// URLs where the images are stored.
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL,
                                              constraint: imageConstraint!,
                                              options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
let trainingBatch = MLArrayBatchProvider(array: batchInputs)

When calling the MLUpdateTask as follows, the context.model from the completion handler is nil. Unfortunately there is no other information available from the compiler.

do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()

I get the following error when I try to access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into train and test sets; the only preprocessing I'm doing is scaling the images down to 227x227 pixels. Thanks!
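For comparison, a minimal sketch of the full MLUpdateTask call that the snippet above truncates; the compiled-model step and the guard on the task state are the parts most likely missing (modelURL is derived here at runtime, and trainingBatch / ModelManager.targetURL are taken from the code above):

import CoreML

// MLUpdateTask needs the *compiled* model (.mlmodelc), not the raw .mlmodel file.
let modelURL = try MLModel.compileModel(at: Bundle.main.url(forResource: "FaceDetection",
                                                            withExtension: "mlmodel")!)
let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                  trainingData: trainingBatch,
                                  configuration: nil,
                                  completionHandler: { context in
    // Only touch context.model once the task has actually completed.
    guard context.task.state == .completed else {
        debugPrint("Update failed: \(String(describing: context.task.error))")
        return
    }
    try? context.model.write(to: ModelManager.targetURL)
})
updateTask.resume()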
0
0
276
Apr ’24
CreateML Preview Tab Miscalculating Sample Duration
I'm training an activity classifier with Create ML, and when I add samples to the Preview tab, the length the preview displays does not match the sample's actual length. I have set the prediction window size to 15 and the sample rate to 10. The activity is roughly 1.5 seconds. When I put a 1.49-second sample into the preview, it says it is 00:00.06 seconds; and when I put a 12.91-second sample into the preview, it says it is 00:00.52 seconds. Here is the code I am using to print out sensor data in CSV format:

if motionManager.isDeviceMotionAvailable {
    motionManager.deviceMotionUpdateInterval = 0.1
    motionManager.startDeviceMotionUpdates(to: .main) { data, error in
        guard let data = data, let startTime = self.startTime else { return }
        let timestamp = Date().timeIntervalSince(startTime)
        let xAcc = data.userAcceleration.x
        let yAcc = data.userAcceleration.y
        let zAcc = data.userAcceleration.z
        let xRotRate = data.rotationRate.x
        let yRotRate = data.rotationRate.y
        let zRotRate = data.rotationRate.z
        let roll = data.attitude.roll
        let pitch = data.attitude.pitch
        let yaw = data.attitude.yaw
        let row = "\(timestamp),\(xAcc),\(yAcc),\(zAcc),\(xRotRate),\(yRotRate),\(zRotRate),\(roll),\(pitch),\(yaw)"
        print(row)
    }
}

And here is the data for the 1.49-second sample mentioned above:
0
0
250
Apr ’24
Add new labels to an MLImageClassifier from an existing checkpoint/session
Hey, I just created and trained an MLImageClassifier via the MLImageClassifier.train() method (https://developer.apple.com/documentation/createml/mlimageclassifier/train(trainingdata:parameters:sessionparameters:)). For my training data (MLImageClassifier.DataSource) I am using my directory structure, so I have an images folder with subfolders person1, person2, person3, etc., each containing images of the labeled person (https://developer.apple.com/documentation/createml/mlimageclassifier/datasource/labeleddirectories(at:)). I am saving the checkpoints and sessions in my app directory, so I can create an MLImageClassifier from an existing MLSession and/or MLCheckpoint. My question: is there any way to add new labels, ideally from my directory structure, to an MLImageClassifier created from an existing MLCheckpoint/MLSession? For example, adding a person4 and training my pretrained classifier on only that person4. Or is it simply not possible, so that I have to train from scratch every time I want to add a new label? Unfortunately I cannot find anything about this in the API. Thanks!
0
0
373
Apr ’24
No Metrics available in MLJob
Hey, I'm training an MLImageClassifier via the train() method:

guard let job = try? MLImageClassifier.train(trainingData: trainingData,
                                             parameters: modelParameter,
                                             sessionParameters: sessionParameters) else {
    debugPrint("Training failed")
    return
}

Unfortunately the metrics of my MLProgress, which is created from the MLJob returned during training, are empty. Code for listening to progress:

job.progress.publisher(for: \.fractionCompleted)
    .sink { [weak job] fractionCompleted in
        guard let job = job else {
            debugPrint("failure in creating job")
            return
        }
        guard let progress = MLProgress(progress: job.progress) else {
            debugPrint("failure in creating progress")
            return
        }
        print("ProgressPROGRESS: \(progress)")
        print("Progress: \(fractionCompleted)")
    }
    .store(in: &subscriptions)

Printing the progress yields:

MLProgress(elapsedTime: 2.2328420877456665, phase: CreateML.MLPhase.extractingFeatures, itemCount: 32, totalItemCount: Optional(39), metrics: [:])

I get the same result when listening to MLCheckpoints; the metrics are empty as well:

MLCheckpoint(url: URLPATH.checkpoint, phase: CreateML.MLPhase.extractingFeatures, iteration: 32, date: 2024-04-18 11:21:18 +0000, metrics: [:])

Can someone tell me how I can access the metrics while training? Thanks!
0
0
312
Apr ’24
What is the maximum data processing speed?
For example: we use DockKit for birdwatching, so we have an unknown field distance and direction. Distance = ? Direction = ? For example, the rock from which the observation is made. The task is to recognize the number of birds caught in the frame, add a detection frame, and collect statistics. Question: what is the maximum number of frames per second that can be processed with custom object recognition? If that is not enough, can I do the calculations myself and hand the results to DockKit for fast movement?
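Since the practical ceiling depends on the device and the model, one way to answer the throughput question is to measure it; a minimal sketch timing a Vision request per frame, where BirdDetector is a hypothetical Create ML detector class:

import Vision
import CoreML

let model = try VNCoreMLModel(for: BirdDetector(configuration: MLModelConfiguration()).model)
let request = VNCoreMLRequest(model: model)

func framesPerSecond(on pixelBuffer: CVPixelBuffer, frames: Int = 100) throws -> Double {
    let start = Date()
    for _ in 0..<frames {
        let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer)
        try handler.perform([request])
    }
    // Frames divided by elapsed seconds gives a rough throughput figure.
    return Double(frames) / Date().timeIntervalSince(start)
}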
0
0
316
Apr ’24
CreateML crashes with Unexpected Error on Feature Extraction
Note: I posted this to Feedback Assistant but haven't gotten a response for three months =( (FB13482199). I am trying to train a large image classifier: a training run of ~300,000 images across 381 classes, where each class has a folder and the file names within the folders are somewhat random. I am on an M2 Pro, Sonoma 14.0, running Create ML Version 5.0 (121.1). I would prefer not to pursue the PyTorch/HF -> coremltools route.

Create ML consistently crashes ~25,000-30,000 images in, during the feature-extraction phase, with "Unexpected Error". It does not seem to be an out-of-memory issue. I am looking for some guidance, since it seems impossible to debug why this is consistently crashing. My initial assumption was that it could be due to blank/corrupt files; I do not think that is the case. I also checked whether there were any special characters in the data/folders. I wasn't able to go through all of them, but I did try some programmatic regex checks, and I don't think this is the case either. I attached the sysdiagnose results in Feedback Assistant after the crash happened. I did notice, when going into /var/logs, a write issue saying that the Mac had written too much to disk. Note: I also tried the Xcode 15.2 beta this time and the associated Core ML version.

My questions: How can I fix this? How should I go about debugging Create ML errors in the future? "Unexpected Error" is far too broad an error statement; where can I get the exact Create ML logs on my device?

Please let me know. As a note, I did successfully train a past model on ~100,000 images, and I am planning to 10-15x that if this run is successful. Please help; I've spent a lot of time gathering the extra data and to date have been an occasional power user of Create ML. I haven't heard back from Apple since December =/. I assume I'm not the only one with this problem, so I'm looking for any instructions to hands-on debug and help others. Thx!
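One way to rule out undecodable files before Create ML ever sees them is a quick ImageIO pass over the dataset; a minimal sketch, assuming the images live under datasetRoot:

import Foundation
import ImageIO

// Walk the dataset and flag any file that CGImageSource cannot decode.
let datasetRoot = URL(fileURLWithPath: "/path/to/dataset")
let files = FileManager.default.enumerator(at: datasetRoot,
                                           includingPropertiesForKeys: nil)!

for case let url as URL in files where !url.hasDirectoryPath {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          CGImageSourceCreateImageAtIndex(source, 0, nil) != nil else {
        print("Undecodable image: \(url.path)")
        continue
    }
}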
2
0
418
Mar ’24
Object detection using Vision performs differently than in the Create ML preview
Context: I've trained my model for object detection with 4k+ images. In the Preview tab I can check the prediction for image "A", which detects two labels at 100% with accurate-looking bounding boxes.

The problem: inside a Swift playground, when I perform object detection using the same model and the same image, I don't get the same results.

What I expected: that after performing the request and processing the array of VNRecognizedObjectObservation, I would see the very same results that appear in the Create ML preview.

Notes: I imported the model into the playground by drag and drop. I trained on images in JPEG format. The test image was rotated so that it looks vertical using the macOS Finder rotation tool. I've tried passing a different orientation when creating the VNImageRequestHandler, with the same result.

Swift playground code:

import UIKit
import Vision

do {
    let model = try MYMODEL_FROMCREATEML(configuration: MLModelConfiguration())
    let mlModel = model.model
    let coreMLModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: coreMLModel) { request, error in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        results.forEach { result in
            print(result.labels)
            print(result.boundingBox)
        }
    }

    let image = UIImage(named: "TEST_IMAGE.HEIC")!
    let requestHandler = VNImageRequestHandler(cgImage: image.cgImage!)
    try requestHandler.perform([request])
} catch {
    print(error)
}

Additional notes and uncertainties. Not sure if this is relevant, but just in case: I trained the model using pictures I took with my iPhone in 48 MP HEIC format. All photos were in vertical position. With a Python script I overwrote the EXIF orientation to 1 (Normal), in order to annotate the images with the CVAT tool and then convert to the Create ML annotation format.

Assumption #1: I've read that object detection in Create ML is based on the YOLOv3 architecture, whose first layer resizes the input image, meaning I don't have to worry about using very large images to train my model. Is this correct?

Assumption #2: That also makes me assume the same resizing happens when I make a prediction?
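One common culprit with UIImage-backed CGImages is that the handler ignores the image's orientation unless told; a minimal sketch of passing it explicitly (the CGImagePropertyOrientation mapping below is the usual hand-written bridge, not part of the poster's model):

import UIKit
import Vision
import ImageIO

extension CGImagePropertyOrientation {
    // Bridge UIKit's orientation enum to the EXIF-style values Vision expects.
    init(_ orientation: UIImage.Orientation) {
        switch orientation {
        case .up: self = .up
        case .down: self = .down
        case .left: self = .left
        case .right: self = .right
        case .upMirrored: self = .upMirrored
        case .downMirrored: self = .downMirrored
        case .leftMirrored: self = .leftMirrored
        case .rightMirrored: self = .rightMirrored
        @unknown default: self = .up
        }
    }
}

let image = UIImage(named: "TEST_IMAGE.HEIC")!
let handler = VNImageRequestHandler(cgImage: image.cgImage!,
                                    orientation: CGImagePropertyOrientation(image.imageOrientation))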
0
0
492
Mar ’24
CoreML Image Classification Model - What Preprocessing Is Required For Static Images
I have trained a model to classify some symbols using Create ML. In my app I am using VNImageRequestHandler and VNCoreMLRequest to classify image data. If I use a CVPixelBuffer obtained from an AVCaptureSession, the classifier runs as I would expect: if I point it at the symbols, it works fairly accurately, so I know the model is trained correctly and works in my app.

If I use a CGImage obtained by cropping a section out of a larger image (from the gallery), the classifier does not work. It always seems to return the same result (although the confidence is not 1.0 and varies for each image, it is within several decimal places of it, e.g. 0.99999).

If I pause the app when I have the cropped image, use the debugger to extract it (via the little eye icon, then Open in Preview), and drop the image into the Preview section of the .mlmodel file or into Create ML, the model classifies it correctly.

If I scale the cropped image to the same size I get from my camera and convert the CGImage to a CVPixelBuffer with the same size and color space as the camera (1504x1128, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange), then I get some difference in output. It's not accurate, but it returns different results if I specify the centerCrop or scaleFit options. So I know that something is happening, but it's not the correct thing.

I was under the impression that passing a CGImage to the VNImageRequestHandler would perform the necessary conversions, but experimentation shows this is not the case. However, when using the preview tool on the model or in Create ML, this conversion is obviously being done behind the scenes, because the cropped part is detected. What am I doing wrong?

tl;dr: my model works, as backed up by using video input directly and by dropping cropped images into the preview sections; passing the cropped images directly to the VNImageRequestHandler does not work; modifying the cropped images can produce different results, but I cannot see what I should be doing to get reliable results. I'd like my app to behave the same way the preview behaves: I give it a cropped part of an image, it does some processing, it goes to the classifier, and it returns the same result as in Create ML.
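One knob worth checking given that centerCrop and scaleFit produce different results: VNCoreMLRequest's imageCropAndScaleOption defaults to centerCrop, which can silently cut away part of a cropped symbol. A minimal sketch of setting it together with an explicit orientation, where SymbolClassifier and croppedCGImage are assumed names:

import Vision
import CoreML

let coreMLModel = try VNCoreMLModel(for: SymbolClassifier(configuration: MLModelConfiguration()).model)

let request = VNCoreMLRequest(model: coreMLModel) { request, _ in
    guard let results = request.results as? [VNClassificationObservation] else { return }
    print(results.prefix(3).map { "\($0.identifier): \($0.confidence)" })
}
// scaleFit keeps the whole crop visible to the model instead of
// center-cropping it before classification.
request.imageCropAndScaleOption = .scaleFit

let handler = VNImageRequestHandler(cgImage: croppedCGImage, orientation: .up)
try handler.perform([request])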
2
0
658
Mar ’24
This neural network model does not have a parameter for requested key 'precisionRecallCurves'
Hello, I am making a rock-paper-scissors game using object detection, with a model I made in Create ML from a dataset I found online. The trained model works, and I tried to implement it in Xcode, but when I run my app I get this error:

This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.

I am still new to Create ML, and I cannot seem to find anything about making my model updatable in Create ML.
0
2
608
Mar ’24
CoreML in playgrounds
How do I add an already-made Core ML model to my playground? I tried what people recommend online: building a test project to get the .mlmodelc file, then putting that in the playground along with the autogenerated class for the model. However, I keep getting errors:

Unexpected duplicate tasks
Target 'help' (project 'help') has write command with output /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Intermediates.noindex/help.build/Debug-iphonesimulator/help.build/adc7818afdf4ae03fd98cdd618954541.sb
Target 'help' (project 'help'): CoreMLModelCompile /Users/cpulipaka/Library/Developer/Xcode/DerivedData/help-appuguzbduqvojfwkaxtnqkozecv/Build/Intermediates.noindex/Previews/help/Products/Debug-iphonesimulator/help.app/ /Users/cpulipaka/Desktop/help.swiftpm/Resources/ZooClassifier.mlmodel
ZooClassifier.mlmodel: No predominant language detected. Set COREML_CODEGEN_LANGUAGE to preferred language.
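A workaround that sidesteps Xcode's code generation in an app playground is to ship the raw .mlmodel as a plain resource and compile plus load it at runtime with MLModel APIs; a minimal sketch, using the ZooClassifier name from the log and assuming the playground copies the file without processing it:

import CoreML

// Compile the bundled .mlmodel at runtime instead of relying on
// the code-generated class, which .swiftpm playgrounds handle poorly.
let rawURL = Bundle.main.url(forResource: "ZooClassifier", withExtension: "mlmodel")!
let compiledURL = try MLModel.compileModel(at: rawURL)
let model = try MLModel(contentsOf: compiledURL)
print(model.modelDescription)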
1
0
577
Feb ’24