Integrate machine learning models into your app using Core ML.

Posts under Core ML tag

118 Posts
Add new Labels to MLImageClassifier of existing Checkpoint/Session
Hey, I just created and trained an MLImageClassifier via the MLImageClassifier.train() method (https://developer.apple.com/documentation/createml/mlimageclassifier/train(trainingdata:parameters:sessionparameters:)). For my training data (MLImageClassifier.DataSource) I am using my directory structure: an images folder with subfolders person1, person2, person3, etc., each containing images of that labeled person (https://developer.apple.com/documentation/createml/mlimageclassifier/datasource/labeleddirectories(at:)). I save the checkpoints and sessions in my app directory, so I can create an MLImageClassifier from an existing MLSession and/or MLCheckpoint. My question: is there any way to add new labels, ideally from my directory structure, to an MLImageClassifier created from an existing MLCheckpoint/MLSession? For example, adding a person4 and training my pretrained classifier on only that person4. Or is this simply not possible, so I have to train from scratch every time I want to add a new label? Unfortunately I cannot find anything in the API. Thanks!
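For reference, below is a minimal sketch of the checkpointed training setup described above, not the poster's code: the paths are made up, and the exact MLTrainingSessionParameters fields and MLJob result handling are from memory and may differ slightly from the current API. As far as I can tell, resuming from a session directory continues the same training run with the label set fixed by the original data source, so a new person4 class cannot be appended to an existing checkpoint.

import CreateML
import Combine
import Foundation

// Hypothetical locations: person1/, person2/, person3/ subfolders under imagesURL.
let imagesURL = URL(fileURLWithPath: "/path/to/images")
let sessionURL = URL(fileURLWithPath: "/path/to/session")   // checkpoints are written here

let dataSource = MLImageClassifier.DataSource.labeledDirectories(at: imagesURL)
let sessionParameters = MLTrainingSessionParameters(sessionDirectory: sessionURL,
                                                    reportInterval: 10,
                                                    checkpointInterval: 100,
                                                    iterations: 500)

// Re-running this with the same sessionDirectory resumes from the latest checkpoint,
// but only for the same set of labels that the session was created with.
let job = try MLImageClassifier.train(trainingData: dataSource,
                                      sessionParameters: sessionParameters)

// Keep a reference to the subscription while training runs.
let cancellable = job.result.sink { completion in
    print("Training finished: \(completion)")
} receiveValue: { classifier in
    try? classifier.write(to: sessionURL.appendingPathComponent("Classifier.mlmodel"))
}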
Replies: 0 | Boosts: 0 | Views: 484 | Activity: Apr ’24
MLUpdateTask returning no model
Hello, I have created a neural network followed by a k-Nearest Neighbors classifier with Python:

# followed by k-Nearest Neighbors for classification.
import coremltools
import coremltools.proto.FeatureTypes_pb2 as ft
from coremltools.models.nearest_neighbors import KNearestNeighborsClassifierBuilder
import copy

# Take the SqueezeNet feature extractor from the Turi Create model.
base_model = coremltools.models.MLModel("SqueezeNet.mlmodel")
base_spec = base_model._spec
layers = copy.deepcopy(base_spec.neuralNetworkClassifier.layers)

# Delete the softmax and innerProduct layers. The new last layer is
# a "flatten" layer that outputs a 1000-element vector.
del layers[-1]
del layers[-1]

preprocessing = base_spec.neuralNetworkClassifier.preprocessing

# The Turi Create model is a classifier, which is treated as a special
# model type in Core ML. But we need a general-purpose neural network.
del base_spec.neuralNetworkClassifier.layers[:]
base_spec.neuralNetwork.layers.extend(layers)

# Also copy over the image preprocessing options.
base_spec.neuralNetwork.preprocessing.extend(preprocessing)

# Remove other classifier stuff.
base_spec.description.ClearField("metadata")
base_spec.description.ClearField("predictedFeatureName")
base_spec.description.ClearField("predictedProbabilitiesName")

# Remove the old classifier outputs.
del base_spec.description.output[:]

# Add a new output for the feature vector.
output = base_spec.description.output.add()
output.name = "features"
output.type.multiArrayType.shape.append(1000)
output.type.multiArrayType.dataType = ft.ArrayFeatureType.FLOAT32

# Connect the last layer to this new output.
base_spec.neuralNetwork.layers[-1].output[0] = "features"

# Create the k-NN model.
knn_builder = KNearestNeighborsClassifierBuilder(input_name="features",
                                                 output_name="label",
                                                 number_of_dimensions=1000,
                                                 default_class_label="???",
                                                 number_of_neighbors=3,
                                                 weighting_scheme="inverse_distance",
                                                 index_type="linear")

knn_spec = knn_builder.spec
knn_spec.description.input[0].shortDescription = "Input vector"
knn_spec.description.output[0].shortDescription = "Predicted label"
knn_spec.description.output[1].shortDescription = "Probabilities for each possible label"

knn_builder.set_number_of_neighbors_with_bounds(3, allowed_range=(1, 10))

# Use the same name as in the neural network models, so that we
# can use the same code for evaluating both types of model.
knn_spec.description.predictedProbabilitiesName = "labelProbability"
knn_spec.description.output[1].name = knn_spec.description.predictedProbabilitiesName

# Put it all together into a pipeline.
pipeline_spec = coremltools.proto.Model_pb2.Model()
pipeline_spec.specificationVersion = coremltools._MINIMUM_UPDATABLE_SPEC_VERSION
pipeline_spec.isUpdatable = True
pipeline_spec.description.input.extend(base_spec.description.input[:])
pipeline_spec.description.output.extend(knn_spec.description.output[:])
pipeline_spec.description.predictedFeatureName = knn_spec.description.predictedFeatureName
pipeline_spec.description.predictedProbabilitiesName = knn_spec.description.predictedProbabilitiesName

# Add inputs for training.
pipeline_spec.description.trainingInput.extend([base_spec.description.input[0]])
pipeline_spec.description.trainingInput[0].shortDescription = "Example image"
pipeline_spec.description.trainingInput.extend([knn_spec.description.trainingInput[1]])
pipeline_spec.description.trainingInput[1].shortDescription = "True label"

pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(base_spec)
pipeline_spec.pipelineClassifier.pipeline.models.add().CopyFrom(knn_spec)
pipeline_spec.pipelineClassifier.pipeline.names.extend(["FeatureExtractor", "kNNClassifier"])

coremltools.utils.save_spec(pipeline_spec, "../Models/FaceDetection.mlmodel")

It is from the following tutorial: https://machinethink.net/blog/coreml-training-part3/. It works, and I was able to include it in my project. I want to train the model via MLUpdateTask:

var batchInputs: [MLFeatureProvider] = []
let imageconstraint = (model.model.modelDescription.inputDescriptionsByName["image"]?.imageConstraint)
let imageOptions: [MLFeatureValue.ImageOption: Any] = [
    .cropAndScale: VNImageCropAndScaleOption.scaleFill.rawValue]
var featureProviders = [MLFeatureProvider]()

// URLs where images are stored
let trainingData = ImageManager.getImagesAndLabel()
for data in trainingData {
    let label = data.key
    for imgURL in data.value {
        let featureValue = try MLFeatureValue(imageAt: imgURL, constraint: imageconstraint!, options: imageOptions)
        if let pixelBuffer = featureValue.imageBufferValue {
            let featureProvider = FaceDetectionTrainingInput(image: pixelBuffer, label: label)
            batchInputs.append(featureProvider)
        }
    }
}
let trainingData = MLArrayBatchProvider(array: batchInputs)

When I call the MLUpdateTask as follows, context.model in the completionHandler is null. Unfortunately there is no other information available from the compiler.

do {
    debugPrint(context)
    try context.model.write(to: ModelManager.targetURL)
} catch {
    debugPrint("Error saving the model \(error)")
}
})
updateTask.resume()

I get the following error when I try to access context.model: Thread 5: EXC_BAD_ACCESS (code=1, address=0x0). Can someone more experienced tell me how to fix this? It seems like I am missing some parameters? I am currently not splitting the data into training and test sets; the only preprocessing I do is scaling the images down to 227x227 pixels. Thanks!
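For comparison, here is a hedged sketch (not the poster's exact code) of how an update task for an updatable pipeline model is usually set up. One common cause of a bad or empty MLUpdateContext is passing the URL of the raw .mlmodel file: the URL must point to a compiled model (.mlmodelc), for example from MLModel.compileModel(at:) or the generated class's urlOfModelInThisBundle. The resource name "FaceDetection" and the save location are assumptions for illustration only.

import CoreML

// Compile the bundled .mlmodel (or use the generated class's urlOfModelInThisBundle).
let rawModelURL = Bundle.main.url(forResource: "FaceDetection", withExtension: "mlmodel")!
let compiledModelURL = try MLModel.compileModel(at: rawModelURL)

// Replace with the feature providers built from your training images (see the post above).
let batchInputs: [MLFeatureProvider] = []
let batchProvider = MLArrayBatchProvider(array: batchInputs)

let updateTask = try MLUpdateTask(forModelAt: compiledModelURL,
                                  trainingData: batchProvider,
                                  configuration: nil,
                                  completionHandler: { context in
    // context.model is only meaningful when the update finished without error.
    guard context.task.error == nil else {
        print("Update failed: \(String(describing: context.task.error))")
        return
    }
    let saveURL = FileManager.default.temporaryDirectory
        .appendingPathComponent("FaceDetectionUpdated.mlmodelc")
    try? context.model.write(to: saveURL)
})
updateTask.resume()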
Replies: 0 | Boosts: 0 | Views: 371 | Activity: Apr ’24
CreateML hyperparameters
Hi, I am trying to create a machine learning model for each stock in the S&P 500 index. When creating the model (a boosted tree model) I try to improve it by tuning hyperparameters with GridSearchCV. It takes so long to create one model that I don't want to imagine building models for all the stocks. I tried working with Create ML and Swift, but it seems to run even slower than sklearn in Python. My questions: how can I make the process faster? Are there any hyperparameters in Create ML for Swift (I couldn't find them in the docs)? And how can I run this code on my GPU (which should be much faster)? A rough sketch of tuning in Create ML is shown below.
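A hedged sketch of what hyperparameter tuning can look like in Create ML: the boosted-tree models expose hyperparameters through a ModelParameters value (the parameter names below are from memory and may differ slightly; check the MLBoostedTreeRegressor documentation). There is no built-in grid search, so you loop over candidate values yourself. The CSV path and column names here are made up, and as far as I know tabular boosted-tree training in Create ML runs on the CPU, so there is no switch to move it to the GPU.

import CreateML
import Foundation

// Load a table and split it, then try a small grid of hyperparameter combinations.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "prices.csv"))
let (train, test) = data.randomSplit(by: 0.8, seed: 42)

var bestRMSE = Double.infinity
for depth in [4, 6, 8] {
    for iterations in [100, 300] {
        let params = MLBoostedTreeRegressor.ModelParameters(maxDepth: depth,
                                                            maxIterations: iterations)
        let model = try MLBoostedTreeRegressor(trainingData: train,
                                               targetColumn: "close",
                                               parameters: params)
        // Evaluate on the held-out split and keep the best configuration.
        let rmse = model.evaluation(on: test).rootMeanSquaredError
        if rmse < bestRMSE { bestRMSE = rmse }
    }
}
print("Best RMSE: \(bestRMSE)")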
Replies: 0 | Boosts: 0 | Views: 377 | Activity: May ’24
The CoreML runtime is inconsistent.
When I run a Core ML model with the following code, I notice that the runtime gradually decreases over the first calls.

for (int i = 0; i < 1000; i++) {
    double st_tmp = CFAbsoluteTimeGetCurrent();
    retBuffer = [self.enhancer enhance:pixelBuffer error:&error];
    double et_tmp = CFAbsoluteTimeGetCurrent();
    NSLog(@"[enhance once] %f ms ", (et_tmp - st_tmp) * 1000);
}

Output:

[enhance once] 14.965057 ms
[enhance once] 12.727022 ms
[enhance once] 12.818098 ms
[enhance once] 11.829972 ms
[enhance once] 11.461020 ms
[enhance once] 10.949016 ms
[enhance once] 10.712981 ms
[enhance once] 10.367990 ms
[enhance once] 10.077000 ms
[enhance once] 9.699941 ms
[enhance once] 9.370089 ms
[enhance once] 8.634090 ms
[enhance once] 7.659078 ms
[enhance once] 7.061005 ms
[enhance once] 6.729007 ms
[enhance once] 6.603003 ms
[enhance once] 6.427050 ms
[enhance once] 6.376028 ms
[enhance once] 6.509066 ms
[enhance once] 6.452084 ms
[enhance once] 6.549001 ms
[enhance once] 6.616950 ms
[enhance once] 6.471038 ms
[enhance once] 6.462932 ms
[enhance once] 6.443977 ms
[enhance once] 6.683946 ms
[enhance once] 6.538987 ms
[enhance once] 6.628990 ms
...

In most deep learning inference frameworks there is usually a warmup process, but typically only the first inference is slower. Why does Core ML show a gradually decreasing runtime at the beginning? Is there a way to make only the first inference slower, while keeping the rest consistent? I use the Core ML model in the (void)display_pixels:(IJKOverlay *)overlay function.
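A hedged workaround sketch, shown in Swift although the post's code is Objective-C: the gradual speed-up usually comes from caching and specialization inside the Core ML runtime, so running a handful of throwaway predictions right after loading the model, before timing real frames, typically gets you to the steady state up front. The "Enhancer"/"EnhancerInput" names stand in for your generated model class and are assumptions.

import CoreML
import CoreVideo

let config = MLModelConfiguration()
config.computeUnits = .all
let model = try Enhancer(configuration: config)

// Dummy 1280x720 BGRA buffer used only to warm the model up.
var warmupBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, 1280, 720, kCVPixelFormatType_32BGRA, nil, &warmupBuffer)

if let buffer = warmupBuffer {
    // Several warm-up calls, not just one, before measuring real inputs.
    for _ in 0..<10 {
        _ = try? model.prediction(input: EnhancerInput(image: buffer))
    }
}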
Replies: 1 | Boosts: 1 | Views: 504 | Activity: Jun ’24
Loading CoreML model increases app size?
Hi, I have been noticing some strange behavior when using Core ML models in my app. I am using the whisper.cpp implementation, which has a Core ML option; this speeds up transcription compared to Metal. However, every time I use it, the app size shown in iPhone Settings -> General -> Storage increases, specifically the "Documents and Data" part, while the bundle size stays constant. The app size seems to grow by roughly the size of the Core ML model, and after a few reloads it can exceed 3-4 GB! I thought the Core ML model (which is in the bundle) might be getting saved to a file somewhere, but I can't see where. I have tried Instruments and Xcode, plus lots of printing of the cache and temp directories, deleting the caches, etc., with no effect. I downloaded the app's container from Xcode and inspected it: there are some files stored in the cache, but only a few KB, and even though Settings -> Storage shows a few GB, the container is only a few MB. Can someone help or give me some guidance on how to figure out why Documents and Data keeps increasing? Where could this data live if it is not in the container downloaded from Xcode? This is the repo I am using: https://github.com/ggerganov/whisper.cpp. The SwiftUI app and the Objective-C app both show the same behavior when using Core ML. Thanks in advance for any help, I am totally baffled by this behaviour.
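A hedged debugging sketch: walk the app's home directory at runtime and print any file over roughly 50 MB, to see what actually accounts for the growing "Documents and Data" number. Note that the container downloaded from Xcode can omit some content (for example parts of tmp/ and Library/Caches), which might explain why it looks much smaller than what Settings reports; that is an assumption, not a confirmed explanation.

import Foundation

let home = URL(fileURLWithPath: NSHomeDirectory())
let keys: [URLResourceKey] = [.fileSizeKey, .isRegularFileKey]
if let files = FileManager.default.enumerator(at: home, includingPropertiesForKeys: keys) {
    for case let url as URL in files {
        guard let values = try? url.resourceValues(forKeys: Set(keys)),
              values.isRegularFile == true,
              let size = values.fileSize, size > 50_000_000 else { continue }
        // Print size in MB and the full path of each large file.
        print("\(size / 1_000_000) MB  \(url.path)")
    }
}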
Replies: 4 | Boosts: 2 | Views: 493 | Activity: 3d
Hundreds of AI models mining and indexing data on macOS
Hi, this is the third time I am trying to post this on the forum; the Apple moderators keep ignoring it. I am a deep learning expert with a specialization in image processing. I want to know why I have hundreds of AI models on my Mac that are indexing everything on my computer while it is idle, using programs like neuralhash that I can't find any information about. I could understand if they are being used to enhance the user experience in Spotlight, Siri, Photos, and other applications, but I couldn't find the necessary information on the web. Usually, (spyware) software like this uses them to classify files in an X/Y coordinate system. This feels like a more advanced version of Stuxnet.

find / -type f -name "*.weights" > ai_models.txt
find / -type f -name "*labels*.txt" > ai_model_labels.txt

Some of the classes from the files:

file_name: SCL_v0.3.1_9c7zcipfrc_558001-labels-v3.txt
document_boarding_pass, document_check_or_checkbook, document_currency_or_bill, document_driving_license, document_office_badge, document_passport, document_receipt, document_social_security_number, hier_curation, hier_document, hier_negative, curation_meme

file_name: SceneNet5_detection_labels-v8d.txt
CVML_UNKNOWN_999999, aircraft, automobile, bicycle, bird, bottle, bus, canine, consumer_electronics, feline, fruit, furniture, headgear, kite, fish, computer_monitor, motorcycle, musical_instrument, document, people, food, sign, watersport, train, ungulates, watercraft, flower, appliance, sports_equipment, tool
Replies: 4 | Boosts: 2 | Views: 1.2k | Activity: Jun ’24
PyTorch to CoreML Model inaccuracy
I am currently working on a 2D pose estimator. I developed a PyTorch vision-transformer-based model with 17 joints in COCO format and converted it to Core ML using coremltools version 6.2. The model was trained on a custom dataset. However, upon running the converted model on iOS, I observed a significant drop in accuracy. You can see it in this video (https://youtu.be/EfGFrOZQGtU), which demonstrates the outputs of the PyTorch model (on the left) and the Core ML model (on the right). Could you please confirm whether this drop in accuracy is expected and suggest possible solutions to address the issue? Please note that all preprocessing and post-processing steps remain consistent between the models.

P.S. While converting I also got the following warning:

TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):

P.P.S. When we initialize the Core ML model on iOS 17.0, we get these errors:

Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (9), must be [1-8] or 20.
Validation failure: Invalid Pool kernel width (13), must be [1-8] or 20.
This neural network model does not have a parameter for requested key 'precisionRecallCurves'. Note: only updatable neural network models can provide parameter values and these values are only accessible in the context of an MLUpdateTask completion or progress handler.
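A hedged first debugging step for this kind of accuracy drop: load the converted model with CPU-only compute units, which executes in float32. If accuracy recovers, the drop is likely caused by float16 execution on the GPU/ANE rather than by the conversion itself (in that case, re-converting with float32 compute precision in coremltools is worth trying). "PoseModel" below is a placeholder for your generated model class.

import CoreML

let config = MLModelConfiguration()
config.computeUnits = .cpuOnly   // forces float32 execution for comparison
let poseModel = try PoseModel(configuration: config)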
Replies: 2 | Boosts: 0 | Views: 653 | Activity: 8h
"accelerate everyday tasks" in apps without intents?
From https://www.apple.com/newsroom/2024/06/introducing-apple-intelligence-for-iphone-ipad-and-mac/:

"Powered by Apple Intelligence, Siri becomes more deeply integrated into the system experience. With richer language-understanding capabilities, Siri is more natural, more contextually relevant, and more personal, with the ability to simplify and accelerate everyday tasks."

From https://developer.apple.com/apple-intelligence/:

"Siri is more natural, more personal, and more deeply integrated into the system. Apple Intelligence provides Siri with enhanced action capabilities, and developers can take advantage of pre-defined and pre-trained App Intents across a range of domains to not only give Siri the ability to take actions in your app, but to make your app’s actions more discoverable in places like Spotlight, the Shortcuts app, Control Center, and more. SiriKit adopters will benefit from Siri’s enhanced conversational capabilities with no additional work. And with App Entities, Siri can understand content from your app and provide users with information from your app from anywhere in the system."

Based on this, as well as the video at https://developer.apple.com/videos/play/wwdc2024/10133/, my understanding is that in order for Siri to be able to execute tasks in applications, those applications must implement the Siri Intents API. Can someone at Apple please clarify: will it be possible for Siri or some other aspect of Apple Intelligence / Core ML / Create ML to take actions in applications which do not support these APIs (e.g. web apps, Citrix apps, legacy apps)? Thank you!
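For context, a minimal App Intent sketch: this is roughly the kind of declaration an app has to ship for Siri / Apple Intelligence to invoke its actions. The intent name and behavior here are made up for illustration. Apps that expose nothing like this (web apps, Citrix-delivered apps, legacy binaries) have no equivalent hook, which is what the question above is asking about.

import AppIntents

struct OpenLatestReportIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Latest Report"

    func perform() async throws -> some IntentResult {
        // The app-specific action would go here.
        return .result()
    }
}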
Replies: 1 | Boosts: 1 | Views: 438 | Activity: Jun ’24
WWDC24 - What's New in Create ML - Time Series Forecasting
The What’s New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned new models, capabilities, and tools for iOS 18. So far, all I can find is the API documentation. I don't see any other WWDC24 session covering these new time-series forecasting Create ML features. Is there more substance or documentation on how to use them with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML. Is there any food truck / donut shop demo or sample code like in the video? It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory and ordering data.
Replies: 2 | Boosts: 2 | Views: 522 | Activity: Jun ’24
Custom Model Not Working Correctly in the Application #56
I created a model that classifies certain objects using YOLOv8. I noticed that the model is not working properly in my application: while the model works fine in the Xcode preview, in the application it either returns the same result with 99% confidence for every classification or does not return any result at all. In the preview it looks fine (predictions screenshot not included). This is the capture and classification code:

extension CameraVC: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: (any Error)?) {
        guard let data = photo.fileDataRepresentation() else { return }
        guard let image = UIImage(data: data) else { return }
        guard let cgImage = image.cgImage else { fatalError("Unable to create CIImage") }
        let handler = VNImageRequestHandler(cgImage: cgImage, orientation: CGImagePropertyOrientation(image.imageOrientation))
        DispatchQueue.global(qos: .userInitiated).async {
            do {
                try handler.perform([self.viewModel.detectionRequest])
            } catch {
                fatalError("Failed to perform detection: \(error)")
            }
        }
    }
}

lazy var detectionRequest: VNCoreMLRequest = {
    do {
        let model = try VNCoreMLModel(for: bestv720().model)
        let request = VNCoreMLRequest(model: model) { [weak self] request, error in
            self?.processDetections(for: request, error: error)
        }
        request.imageCropAndScaleOption = .centerCrop
        return request
    } catch {
        fatalError("Failed to load Vision ML model: \(error)")
    }
}()

This is where I print the recognized objects:

func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        var label = ""
        var all_results = []
        var all_confidence = []
        var true_results = []
        var true_confidence = []
        for result in results {
            for i in 0...results.count {
                all_results.append(result.labels[i].identifier)
                all_confidence.append(result.labels[i].confidence)
                for confidence in all_confidence {
                    if confidence as! Float > 0.7 {
                        true_results.append(result.labels[i].identifier)
                        true_confidence.append(confidence)
                    }
                }
            }
            label = result.labels[0].identifier
        }
        print("True Results ", true_results)
        print("True Confidence ", true_confidence)
        self.output?.updateView(label: label)
    }
}

I converted the model like this:

from ultralytics import YOLO
model = YOLO(model_path)
model.export(format='coreml', nms=True, imgsz=[720,1280])
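A hedged sketch of a simpler way to read the observations, meant as a drop-in replacement for the processDetections method above (it assumes the same surrounding class, so self.output comes from the post's code). In the original, the inner loop runs i over 0...results.count while indexing result.labels[i], which can go out of bounds, and it re-scans all_confidence on every pass so duplicates pile up; that muddles the printed results, though it may not be the only cause of the 99% symptom. The 0.7 threshold is kept from the post.

func processDetections(for request: VNRequest, error: Error?) {
    DispatchQueue.main.async {
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results {
            // labels are already sorted by confidence, highest first
            for label in observation.labels where label.confidence > 0.7 {
                print(label.identifier, label.confidence)
            }
        }
        // Show the top label of the first observation, if any.
        self.output?.updateView(label: results.first?.labels.first?.identifier ?? "")
    }
}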
Replies: 2 | Boosts: 1 | Views: 442 | Activity: Jun ’24
CoreML Crashed in iOS18 Beta
Here is an app using the Core ML API with the ML package format. It works fine on iOS 17, but it crashes when calling [MLModel modelWithContentsOfURL:] to load the model on iOS 18. It seems an exception is raised: "Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)". Is this a bug in the iOS 18 beta, and will it be fixed in a future release? The stack is as below:

Exception Codes: #0 at 0x1e9280254
Crashed Thread: 49

Application Specific Information:
*** Terminating app due to uncaught exception 'NSGenericException', reason: 'Failed to set compute_device_types_mask E5RT: Cannot provide zero compute device types. (1)'

Last Exception Backtrace:
0  CoreFoundation     0x0000000199466418 __exceptionPreprocess + 164
1  libobjc.A.dylib    0x00000001967cde88 objc_exception_throw + 76
2  CoreFoundation     0x0000000199560794 -[NSException initWithCoder:]
3  CoreML             0x00000001b4fcfa8c -[MLE5ProgramLibraryOnDeviceAOTCompilationImpl createProgramLibraryHandleWithRespecialization:error:] + 1584
4  CoreML             0x00000001b4fcf3cc -[MLE5ProgramLibrary _programLibraryHandleWithForceRespecialization:error:] + 96
5  CoreML             0x00000001b4fc23d8 __44-[MLE5ProgramLibrary prepareAndReturnError:]_block_invoke + 60
6  libdispatch.dylib  0x00000001a12e1160 _dispatch_client_callout + 20
7  libdispatch.dylib  0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
8  CoreML             0x00000001b4fc3e98 -[MLE5ProgramLibrary prepareAndReturnError:] + 220
9  CoreML             0x00000001b4fc3bc0 -[MLE5Engine initWithContainer:configuration:error:] + 220
10 CoreML             0x00000001b4fc3888 +[MLE5Engine loadModelFromCompiledArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 344
11 CoreML             0x00000001b4faf53c +[MLLoader _loadModelWithClass:fromArchive:modelVersionInfo:compilerVersionInfo:configuration:error:] + 364
12 CoreML             0x00000001b4faedd4 +[MLLoader _loadModelFromArchive:configuration:modelVersion:compilerVersion:loaderEvent:useUpdatableModelLoaders:loadingClasses:error:] + 540
13 CoreML             0x00000001b4f9b900 +[MLLoader _loadWithModelLoaderFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 424
14 CoreML             0x00000001b4faaeac +[MLLoader _loadModelFromArchive:configuration:loaderEvent:useUpdatableModelLoaders:error:] + 460
15 CoreML             0x00000001b4fb0428 +[MLLoader _loadModelFromAssetAtURL:configuration:loaderEvent:error:] + 240
16 CoreML             0x00000001b4fb00c4 +[MLLoader loadModelFromAssetAtURL:configuration:error:] + 104
17 CoreML             0x00000001b5314118 -[MLModelAssetResourceFactoryOnDiskImpl modelWithConfiguration:error:] + 116
18 CoreML             0x00000001b5418cc0 __60-[MLModelAssetResourceFactory modelWithConfiguration:error:]_block_invoke + 72
19 libdispatch.dylib  0x00000001a12e1160 _dispatch_client_callout + 20
20 libdispatch.dylib  0x00000001a12f07b8 _dispatch_lane_barrier_sync_invoke_and_complete + 56
21 CoreML             0x00000001b5418b94 -[MLModelAssetResourceFactory modelWithConfiguration:error:] + 276
22 CoreML             0x00000001b542919c -[MLModelAssetModelVendor modelWithConfiguration:error:] + 152
23 CoreML             0x00000001b5380ce4 -[MLModelAsset modelWithConfiguration:error:] + 112
24 CoreML             0x00000001b4fb0b3c +[MLModel modelWithContentsOfURL:configuration:error:] + 168
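A hedged thing to try while waiting on a fix (shown in Swift, and not a confirmed workaround): load the model with an explicit compute-units setting, since the exception is thrown while the framework is working out which compute devices to use. The model path below is a placeholder for the URL you already pass to [MLModel modelWithContentsOfURL:]. Filing a Feedback with the crash log is worthwhile either way.

import CoreML

let modelURL = URL(fileURLWithPath: "/path/to/Model.mlmodelc")   // your compiled model URL
let config = MLModelConfiguration()
config.computeUnits = .cpuAndGPU   // also worth trying .cpuOnly
let model = try MLModel(contentsOf: modelURL, configuration: config)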
Replies: 2 | Boosts: 0 | Views: 472 | Activity: Jun ’24
No Speedup with CoreML SDPA
I am testing the new scaled dot product attention Core ML op on macOS 15 beta 1. Based on the session video I was expecting to see a speedup when running on the GPU; however, I see roughly equivalent performance to the same model on macOS 14. I ran tests with two models: one that simply repeats y = sdpa(y, k, v) 50 times, and GPT-2 124M converted from nanoGPT (the only change is not returning the loss from the forward method). I converted both models using coremltools 8.0b1 with minimum deployment targets of macOS 14 and also macOS 15. In Xcode, I can see that the new op was used for the macOS 15 target. Running on macOS 15, both target models take the same time, and that time matches the runtime on macOS 14. Should I be seeing performance improvements?
Replies: 2 | Boosts: 3 | Views: 408 | Activity: Jun ’24