Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics

Each post below is listed with its reply, boost, and view counts and the time of its latest activity.

App Intents "Text" parameters seem broken in iOS 17.6 beta
I'm getting widespread reports from users trialling the iOS 17.6 public beta that Siri Shortcuts are failing whenever they enter any text that looks like a URL. It's getting reported to me because my app happens to have an app intent with a string parameter which can contain a URL in some circumstances. However, it's easily reproducible outside of my app: just create a two-line shortcut like the one below. If you change "This is some text" to "https://www.apple.com", the shortcut will fail. In iOS 17.5, entering "https://www.apple.com" works fine. I've raised feedback on this (FB14206088), but can anyone confirm that this is indeed a bug and not some weird new feature of Shortcuts where the contents of a variable can somehow change the type of the variable? It would be very, very bad if this were so.
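For reference, the kind of intent involved is nothing exotic; a minimal sketch of an App Intent with a plain String parameter (names here are illustrative, not the actual intent from the app above) looks like this:

import AppIntents

// Hypothetical minimal intent with a free-form String parameter, similar in
// shape to the kind of intent described above; names are illustrative only.
struct SaveNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Save Note"

    // Free-form text; in the report above this sometimes contains a URL.
    @Parameter(title: "Text")
    var text: String

    func perform() async throws -> some IntentResult & ReturnsValue<String> {
        // Echo the text back so the shortcut can pass it to the next action.
        return .result(value: text)
    }
}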
Replies: 1 · Boosts: 0 · Views: 239 · Activity: 1w
MultivariateLinearRegressor problem training
Hi everyone, I attempted to use the MultivariateLinearRegressor from the Create ML Components framework to fit some multi-dimensional data linearly (4 dimensions in my example). I aim to obtain multi-dimensional output points (2 points in my example). However, when I fit the model with my training data and test it, it appears that only the first element of my training data is used for training, regardless of whether I use CreateMLComponents.AnnotatedBatch or [CreateMLComponents.AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] as input.

let sourceMatrix: [[Double]] = [
    [0, 0.1, 0.2, 0.3],
    [0.5, 0.2, 0.6, 0.2]
]
let referenceMatrix: [[Double]] = [
    [0.2, 0.7],
    [0.9, 0.1]
]

Below is test code for the function (iOS 18.0 beta, Xcode 16.0 beta). In this example I train the model to learn 2 multi-dimensional points (4 dimensions each), and here are the results of the predictions:

▿ 2 elements
  ▿ 0 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
    ▿ prediction : 0.20000000298023224 0.699999988079071
      ▿ _storage : <StandardStorage<Double>: 0x600002ad8270>
    ▿ annotation : 0.2 0.7
      ▿ _storage : <StandardStorage<Double>: 0x600002b30600>
  ▿ 1 : AnnotatedPrediction<MLShapedArray<Double>, MLShapedArray<Double>>
    ▿ prediction : 0.23158159852027893 0.9509953260421753
      ▿ _storage : <StandardStorage<Double>: 0x600002ad8c90>
    ▿ annotation : 0.9 0.1
      ▿ _storage : <StandardStorage<Double>: 0x600002b55f20>

The prediction 0.23158159852027893 0.9509953260421753 is totally random and should be far closer to [0.9, 0.1]. Here is the test code (I run it on "My Mac (Designed for iPad)"):

ContentView.swift

import SwiftUI   // needed for ContentView / View below
import CoreImage
import CoreImage.CIFilterBuiltins
import UIKit
import CoreGraphics
import Accelerate
import Foundation
import CoreML
import CreateML
import CreateMLComponents

func createMLShapedArray(from array: [Double], shape: [Int]) -> MLShapedArray<Double> {
    return MLShapedArray<Double>(scalars: array, shape: shape)
}

func calculateTransformationMatrixWithNonlinearity(sourceRGB: [[Double]],
                                                   referenceRGB: [[Double]],
                                                   degree: Int = 3) async throws -> MultivariateLinearRegressor<Double>.Model {
    let annotatedFeatures2 = zip(sourceRGB, referenceRGB).map { (featureArray, targetArray) -> AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>> in
        let featureMLShapedArray = createMLShapedArray(from: featureArray, shape: [featureArray.count])
        let targetMLShapedArray = createMLShapedArray(from: targetArray, shape: [targetArray.count])
        return AnnotatedFeature(feature: featureMLShapedArray, annotation: targetMLShapedArray)
    }

    // Flatten the sourceRGBPoly into a single-dimensional array
    var flattenedArray = sourceRGB.flatMap { $0 }
    let featuresMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 4])
    flattenedArray = referenceRGB.flatMap { $0 }
    let targetMLShapedArray = createMLShapedArray(from: flattenedArray, shape: [2, 2])

    // Create AnnotatedFeature instances
    /* let annotatedFeatures2: [AnnotatedFeature<MLShapedArray<Double>, MLShapedArray<Double>>] = [
        AnnotatedFeature(feature: featuresMLShapedArray, annotation: targetMLShapedArray)
    ] */

    let annotatedBatch = AnnotatedBatch(features: featuresMLShapedArray, annotations: targetMLShapedArray)

    var regressor = MultivariateLinearRegressor<Double>()
    regressor.configuration.learningRate = 0.1
    regressor.configuration.maximumIterationCount = 5000
    regressor.configuration.batchSize = 2

    let model = try await regressor.fitted(to: annotatedBatch, validateOn: nil)
    //var model = try await regressor.fitted(to: annotatedFeatures2)

    // Proceed to prediction once the model is fitted
    let predictions = try await model.prediction(from: annotatedFeatures2)

    // Process or use the predictions
    print(predictions)
    print("Predictions:", predictions)
    return model
}

struct ContentView: View {
    var body: some View {
        VStack {}
            .onAppear {
                Task {
                    do {
                        let sourceMatrix: [[Double]] = [
                            [0, 0.1, 0.2, 0.3],
                            [0.5, 0.2, 0.6, 0.2]
                        ]
                        let referenceMatrix: [[Double]] = [
                            [0.2, 0.7],
                            [0.9, 0.1]
                        ]
                        let model = try await calculateTransformationMatrixWithNonlinearity(
                            sourceRGB: sourceMatrix,
                            referenceRGB: referenceMatrix,
                            degree: 2
                        )
                        print("Model fitted successfully:", model)
                    } catch {
                        print("Error:", error)
                    }
                }
            }
    }
}
Replies: 3 · Boosts: 0 · Views: 243 · Activity: 1w
FactoryInstall Unable to query results, error: 5
I am developing an iPhone application. When I start testing in a simulator or on an actual device, I get the following messages, depending on the model. I don't see any problem with the actual operation of the app, but I don't know how to resolve this error.

#FactoryInstall Unable to query results, error: 5
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder
Unable to list voice folder

I have tried to resolve the problem by following the steps below, but it hasn't had any effect on the error. Is it OK to leave this error as it is? If you know how to resolve it, please let me know.

Clear the Xcode cache: go to the path of the DerivedData folder and delete all of its contents.
Clean the build folder: select "Product" -> "Clean Build Folder" from the Xcode menu.
Restart: restart Xcode and the simulator.
Software update: make sure you are using the latest versions of Xcode and macOS.
Reinstallation: uninstall Xcode and reinstall it.
Reset the simulator.
Replies: 0 · Boosts: 0 · Views: 205 · Activity: 2w
CoreML 6 beta 2 - Failed to create CVPixelBufferPool
Hello everyone, I am trying to train using Create ML Version 6.0 Beta (146.1) with the Image Feature Print v2 feature extractor. I am using 100K images, for a total of ~4 GB, on my M3 Max with 48 GB (macOS 15.0 Beta (24A5279h)). The images seem to be correctly read and visualized in the Data Source section (no images with corrupted data appear to be there). When I start the training it's all fine for the first 6k ~ 7k pictures, then I receive the following error:

Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000

It is the first time I am using it, so I don't really have much experience. Could you help me understand what the problem could be? Thanks a lot
Replies: 2 · Boosts: 0 · Views: 196 · Activity: 2w
On device training of text classifier model
I have made a text classifier model, but I want to train it on device too. When text is classified incorrectly, the user can update the model on device.

Code:

//
//  SpamClassifierHelper.swift
//  LearningML
//
//  Created by Himan Dhawan on 7/1/24.
//

import Foundation
import CreateMLComponents
import CoreML
import NaturalLanguage

enum TextClassifier: String {
    case spam = "spam"
    case notASpam = "ham"
}

class SpamClassifierModel {
    // MARK: - Private Type Properties

    /// The updated Spam Classifier model.
    private static var updatedSpamClassifier: SpamClassifier?

    /// The default Spam Classifier model.
    private static var defaultSpamClassifier: SpamClassifier {
        do {
            return try SpamClassifier(configuration: .init())
        } catch {
            fatalError("Couldn't load SpamClassifier due to: \(error.localizedDescription)")
        }
    }

    // The Spam Classifier model currently in use.
    static var liveModel: SpamClassifier {
        updatedSpamClassifier ?? defaultSpamClassifier
    }

    /// The location of the app's Application Support directory for the user.
    private static let appDirectory = FileManager.default.urls(for: .applicationSupportDirectory,
                                                                in: .userDomainMask).first!

    class var urlOfModelInThisBundle: URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "SpamClassifier", withExtension: "mlmodelc")!
    }

    /// The default Spam Classifier model's file URL.
    private static let defaultModelURL = urlOfModelInThisBundle

    /// The permanent location of the updated Spam Classifier model.
    private static var updatedModelURL = appDirectory.appendingPathComponent("personalized.mlmodelc")

    /// The temporary location of the updated Spam Classifier model.
    private static var tempUpdatedModelURL = appDirectory.appendingPathComponent("personalized_tmp.mlmodelc")

    // MARK: - Public Type Methods

    static func predictLabelFor(_ value: String) throws -> (predication: String?, confidence: String) {
        let spam = try NLModel(mlModel: liveModel.model)
        let result = spam.predictedLabel(for: value)
        let confidence = spam.predictedLabelHypotheses(for: value, maximumCount: 1).first?.value ?? 0
        return (result, String(format: "%.2f", confidence * 100))
    }

    static func updateModel(newEntryText: String, spam: TextClassifier) throws {
        guard let modelURL = Bundle.main.url(forResource: "SpamClassifier", withExtension: "mlmodelc") else {
            fatalError("Could not find model in bundle")
        }

        // Create a feature provider for the new text entry.
        let featureProvider = try MLDictionaryFeatureProvider(dictionary: [
            "label": MLFeatureValue(string: newEntryText),
            "text": MLFeatureValue(string: spam.rawValue)
        ])
        let batchProvider = MLArrayBatchProvider(array: [featureProvider])

        let updateTask = try MLUpdateTask(forModelAt: modelURL,
                                          trainingData: batchProvider,
                                          configuration: nil,
                                          completionHandler: { context in
            let updatedModel = context.model
            let fileManager = FileManager.default
            do {
                // Create a directory for the updated model.
                try fileManager.createDirectory(at: tempUpdatedModelURL,
                                                withIntermediateDirectories: true,
                                                attributes: nil)

                // Save the updated model to a temporary filename.
                try updatedModel.write(to: tempUpdatedModelURL)

                // Replace any previously updated model with this one.
                _ = try fileManager.replaceItemAt(updatedModelURL, withItemAt: tempUpdatedModelURL)

                loadUpdatedModel()
                print("Updated model saved to:\n\t\(updatedModelURL)")
            } catch let error {
                print("Could not save updated model to the file system: \(error)")
                return
            }
        })
        updateTask.resume()
    }

    /// Loads the updated Spam Classifier, if available.
    /// - Tag: LoadUpdatedModel
    private static func loadUpdatedModel() {
        guard FileManager.default.fileExists(atPath: updatedModelURL.path) else {
            // The updated model is not present at its designated path.
            return
        }

        // Create an instance of the updated model.
        guard let model = try? SpamClassifier(contentsOf: updatedModelURL) else {
            return
        }

        // Use this updated model to make predictions in the future.
        updatedSpamClassifier = model
    }
}
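For completeness, a hypothetical call site for the helper above (assuming the bundled SpamClassifier model supports updating) might look like:

// Hypothetical call site, e.g. from a "This is not spam" button in the UI.
func userCorrectedClassification(for text: String) {
    do {
        // Feed the corrected label back into the on-device update task...
        try SpamClassifierModel.updateModel(newEntryText: text, spam: .notASpam)
        // ...and classify again with whichever model is currently live.
        let (label, confidence) = try SpamClassifierModel.predictLabelFor(text)
        print("Predicted \(label ?? "unknown") with confidence \(confidence)%")
    } catch {
        print("On-device model update failed: \(error)")
    }
}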
Replies: 1 · Boosts: 0 · Views: 204 · Activity: 2w
Neural Engine Request Overhead
I have several Core ML models that I've set up to run in sequence, where one of the outputs from each model is passed as one of the inputs to the next. For the most part, there is very little overhead between each sub-model "chunk". However, a couple of the models (e.g. the first two above) spend a noticeable amount of time in "Prepare Neural Engine Request". From Instruments, it seems like this time is spent doing some sort of model loading. Given that I'm calling these models in sequence and in a fixed order, is there some way to reduce or amortize this cost? Thanks!
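If the time really is model loading, one thing worth checking is that each MLModel is created once and reused across calls rather than being re-initialized per request. A rough sketch of that "load once, reuse" pattern (placeholder model names; it also assumes each stage's output feature names match the next stage's inputs):

import CoreML

// Sketch: create each MLModel once, up front, and reuse the same instances
// for every inference pass, so compile/load work isn't repeated per request.
final class ModelPipeline {
    private let stage1: MLModel
    private let stage2: MLModel

    init() throws {
        let config = MLModelConfiguration()
        config.computeUnits = .all   // or .cpuAndNeuralEngine, depending on the models

        // Loading happens once here instead of on every request.
        stage1 = try MLModel(contentsOf: Bundle.main.url(forResource: "Stage1", withExtension: "mlmodelc")!,
                             configuration: config)
        stage2 = try MLModel(contentsOf: Bundle.main.url(forResource: "Stage2", withExtension: "mlmodelc")!,
                             configuration: config)
    }

    func run(_ input: MLFeatureProvider) throws -> MLFeatureProvider {
        // Assumes stage 2's input feature names match stage 1's output names.
        let intermediate = try stage1.prediction(from: input)
        return try stage2.prediction(from: intermediate)
    }
}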
Replies: 0 · Boosts: 0 · Views: 217 · Activity: 2w
openAppWhenRun makes AppIntent crash when launched from Control Center.
Adding the openAppWhenRun property to an AppIntent for a ControlWidgetButton causes the following error when the control is tapped in Control Center:

Unknown NSError The operation couldn’t be completed. (LNActionExecutorErrorDomain error 2018.)

Here’s the full ControlWidget and AppIntent code that causes the error. Should controls be able to open apps after the AppIntent runs, or is this a bug?
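Roughly, the setup in question looks like the following sketch (illustrative names, iOS 18 WidgetKit Controls API; not the poster's exact code):

import AppIntents
import SwiftUI
import WidgetKit

// Illustrative approximation of a Control Center button whose intent sets
// openAppWhenRun; kind string and names are placeholders.
struct OpenAppControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.open-app-control") {
            ControlWidgetButton(action: OpenFromControlIntent()) {
                Label("Open App", systemImage: "arrow.up.forward.app")
            }
        }
    }
}

struct OpenFromControlIntent: AppIntent {
    static var title: LocalizedStringResource = "Open From Control"

    // The property the post is about: ask the system to foreground the app
    // after the intent runs.
    static var openAppWhenRun: Bool = true

    func perform() async throws -> some IntentResult {
        return .result()
    }
}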
Replies: 3 · Boosts: 1 · Views: 345 · Activity: 2w
Unable to convert models with coremltools on macOS 15 Beta
I was trying the latest coremltools-8.0b1 beta on macOS 15 Beta with the intent of using the new stateful models API in Core ML. But the conversion would always fail with the error:

/AppleInternal/Library/BuildRoots/<snip>/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:162: failed assertion `Error: the minimum deployment target for macOS is 14.0.0'

Here's a minimal repro, which works fine with both the stable version of coremltools (7.2) and the beta version (8.0b1) on macOS Sonoma 14.5, but fails with both versions of coremltools on macOS 15.0 Beta and Xcode 16.0 Beta, which means that this most likely isn't an issue with coremltools, but with the native compilation toolchain.

from collections import OrderedDict

import coremltools as ct
import numpy as np
import torch
import torch.nn as nn


class ResidualAttentionBlock(nn.Module):
    def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_head)
        self.ln_1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(
            OrderedDict(
                [
                    ("c_fc", nn.Linear(d_model, d_model * 4)),
                    ("gelu", nn.GELU()),
                    ("c_proj", nn.Linear(d_model * 4, d_model)),
                ]
            )
        )
        self.ln_2 = nn.LayerNorm(d_model)
        self.attn_mask = attn_mask

    def attention(self, x: torch.Tensor):
        self.attn_mask = (
            self.attn_mask.to(dtype=x.dtype, device=x.device)
            if self.attn_mask is not None
            else None
        )
        return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0]

    def forward(self, x: torch.Tensor):
        x = x + self.attention(self.ln_1(x))
        x = x + self.mlp(self.ln_2(x))
        return x


class Transformer(nn.Module):
    def __init__(
        self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None
    ):
        super().__init__()
        self.width = width
        self.layers = layers
        self.resblocks = nn.Sequential(
            *[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]
        )

    def forward(self, x: torch.Tensor):
        return self.resblocks(x)


transformer = Transformer(width=512, layers=12, heads=8)
emb_tokens = torch.rand((1, 512))

ct_model = ct.convert(
    torch.jit.trace(transformer.eval(), emb_tokens),
    convert_to="mlprogram",
    minimum_deployment_target=ct.target.macOS14,
    inputs=[ct.TensorType(name="embIn", shape=[1, 512])],
    outputs=[ct.TensorType(name="embOutput", dtype=np.float32)],
)
Replies: 2 · Boosts: 0 · Views: 278 · Activity: 2w
Flexible Input Shapes of Core ML Model
I want to try a Core ML model that accepts an image input of any resolution. So I wrote the model following the Core ML Tools "Set the Range for Each Dimension" sample code, modified as below:

# Trace the model with random input.
example_input = torch.rand(1, 3, 50, 50)
traced_model = torch.jit.trace(model.eval(), example_input)

# Set the input_shape to use RangeDim for each dimension.
input_shape = ct.Shape(shape=(1,
                              3,
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45),
                              ct.RangeDim(lower_bound=25, upper_bound=1920, default=45)))

scale = 1 / (0.226 * 255.0)
bias = [-0.485 / 0.229, -0.456 / 0.224, -0.406 / 0.225]

# Convert the model with input_shape.
mlmodel = ct.convert(traced_model,
                     inputs=[ct.ImageType(shape=input_shape, name="input", scale=scale, bias=bias)],
                     outputs=[ct.TensorType(name="output")],
                     convert_to="mlprogram")

# Save the Core ML model
mlmodel.save("image_resize_model.mlpackage")

It converts OK, but when I run a prediction with an image I get the error below:

You will not be able to run predict() on this Core ML model. Underlying exception message was: {
    NSLocalizedDescription = "Failed to build the model execution plan using a model architecture file '/private/var/folders/8z/vtz02xrj781dxvz1v750skz40000gp/T/model-small.mlmodelc/model.mil' with error code: -7.";
}

Where did I go wrong?
Replies: 1 · Boosts: 0 · Views: 223 · Activity: 2w
Multi Task Models in CoreML
Hi, I want to create a real-time sports analytics app that takes camera input and records basketball stats. I want to use pose estimation and object classification to record things such as dribbles, when the ball leaves someone's hands, etc. Is it possible to have a model in Core ML that performs pose estimation on people but also does simple object detection on other classes (i.e. ball, hoop)? Thanks
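It doesn't necessarily have to be a single multi-task model; one common pattern is to run Vision's built-in body-pose request alongside a separate custom object detector on each frame. A rough sketch, where BallHoopDetector is a hypothetical Core ML object-detection model class:

import Vision
import CoreML

// Sketch: run body-pose estimation and a custom detector on the same frame.
func analyze(pixelBuffer: CVPixelBuffer) throws {
    let poseRequest = VNDetectHumanBodyPoseRequest()

    // BallHoopDetector is a placeholder for a hypothetical detection model.
    let detector = try VNCoreMLModel(for: BallHoopDetector(configuration: .init()).model)
    let detectionRequest = VNCoreMLRequest(model: detector)

    // Both requests run against the same camera frame.
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .up)
    try handler.perform([poseRequest, detectionRequest])

    let poses = poseRequest.results ?? []
    let objects = detectionRequest.results as? [VNRecognizedObjectObservation] ?? []

    // Downstream logic (e.g. "the ball left the hands") would compare wrist
    // joints from `poses` against the ball bounding boxes in `objects`.
    print("poses: \(poses.count), detected objects: \(objects.count)")
}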
Replies: 0 · Boosts: 0 · Views: 245 · Activity: 3w
Question about ARKit Object Tracking Capabilities
Hi everyone, I'm curious about the capabilities of ARKit's object tracking feature. Specifically, I'd like to know:

Is there a size limit for the objects that can be tracked?
Can ARKit differentiate between two objects with the same shape but different models (e.g., different colors)?
Are objects with single colors and generic shapes (like squares or circles) effectively trackable?

Any insights or examples from your experiences would be greatly appreciated! Thanks in advance.
Replies: 1 · Boosts: 0 · Views: 273 · Activity: 3w
Can you match a new photo with existing images?
I'm looking for a solution where I can take a picture, or point the camera at a piece of clothing, and match that image against the images the user has stored in my app. I'm storing the data in a Core Data database as a Binary Data object. Since the user also takes the pictures they store in the database, I think I cannot use pre-trained Core ML models. I would like the matching to be done on device if possible, instead of going to an external service. Such a service would probably describe the item based on what the AI sees, but then I still couldn't match the item with the stored images in the app. Does anyone know if this is possible with frameworks such as Vision or VisionKit?
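For on-device matching without a custom-trained model, Vision's image feature prints are one option worth trying; a rough sketch follows (the UIImage-to-CGImage plumbing and any similarity threshold are assumptions, not a drop-in solution):

import Vision
import UIKit

// Sketch: compute a Vision feature print for an image, then compare prints.
func featurePrint(for image: UIImage) throws -> VNFeaturePrintObservation? {
    guard let cgImage = image.cgImage else { return nil }
    let request = VNGenerateImageFeaturePrintRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results?.first as? VNFeaturePrintObservation
}

// Smaller distance means more similar images; a suitable threshold would need
// to be tuned against real data.
func distance(between a: VNFeaturePrintObservation, and b: VNFeaturePrintObservation) throws -> Float {
    var result: Float = 0
    try a.computeDistance(&result, to: b)
    return result
}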
Replies: 2 · Boosts: 0 · Views: 318 · Activity: 3w