Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity
CoreML 6 beta 2 - Failed to create CVPixelBufferPool
Hello everyone, I am trying to train using Create ML Version 6.0 Beta (146.1) with the Image Feature Print v2 feature extractor. I am using 100K images, about 4 GB in total, on my M3 Max with 48 GB of RAM (macOS 15.0 Beta (24A5279h)). The images seem to be read and displayed correctly in the Data Source section (none appear to have corrupted data). When I start the training, everything is fine for the first 6k-7k pictures, then I receive the following error:

Failed to create CVPixelBufferPool. Width = 0, Height = 0, Format = 0x00000000

It is the first time I am using it, so I don't have much experience. Could you help me understand what the problem could be? Thanks a lot.
6 replies · 1 boost · 1.2k views · Dec ’24
Image Playground Error: Cannot find protocol declaration for 'ImageGenerationViewControllerDelegate'
@available(macCatalyst 18.1, *)
@available(iOS 18.1, *)
extension CKImageSelectionManager: ImagePlaygroundViewController.Delegate {
    public func imagePlaygroundViewController(_ imagePlaygroundViewController: ImagePlaygroundViewController, didCreateImageAt imageURL: URL) {
    }

    func presentImagePlayground() {
        let imagePlaygroundVC = ImagePlaygroundViewController()
        // Set delegate to self to receive the callback
        imagePlaygroundVC.delegate = self
        imagePlaygroundVC.isModalInPresentation = true // Prevents dismissal with swipe if needed
        self.delegate?.presentImageSelectionViewController(imagePlaygroundVC)
    }
}

This generates an error in the Xcode-generated Swift header.
3 replies · 0 boosts · 1.1k views · Dec ’24
Apple Intelligence crashed/stopped working
Hi everyone, I’m currently using macOS Version 15.3 Beta (24D5034f), and I’m encountering an issue with Apple Intelligence. The image generation tools seem to work fine, but everything else shows a message saying that it’s “not available at this time.” I’ve tried restarting my Mac and double-checked my settings, but the problem persists. Is anyone else experiencing this issue on the beta version? Are there any fixes or settings I might be overlooking? Any help or insights would be greatly appreciated! Thanks in advance!
2 replies · 1 boost · 760 views · Dec ’24
existential any error in MLModel class
Problem: I have set SWIFT_UPCOMING_FEATURE_EXISTENTIAL_ANY to true under Build Settings > Swift Compiler - Upcoming Features to adopt the existential any proposal. The following errors then appear in the MLModel class, but this is an auto-generated file, so I don't know how to deal with them:

Use of protocol 'MLFeatureProvider' as a type must be written 'any MLFeatureProvider'
Use of protocol 'Error' as a type must be written 'any Error'

Environment: Xcode 16.0, Xcode 16.1 Beta 2.

What I tried: deleting the DerivedData cache and regenerating the MLModel class files; using DepthAnythingV2SmallF16P6.mlpackage to verify whether there is a problem with my own mlmodel; repeating the above after switching the project to Swift 6 in Xcode; and generating the MLModel class files with coremlc while specifying Swift 6 on the command line.
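For illustration only, a minimal hand-written sketch (not Apple's generated code) of the spelling the diagnostic asks for: once the upcoming feature is enabled, protocol types used as values must be written with the any keyword.

import CoreML

// With SWIFT_UPCOMING_FEATURE_EXISTENTIAL_ANY enabled, existential types are
// spelled `any MLFeatureProvider` and `any Error` rather than the bare protocol name.
func predict(with model: MLModel, input: any MLFeatureProvider) throws -> any MLFeatureProvider {
    try model.prediction(from: input)
}

func logFailure(_ error: any Error) {
    print("Prediction failed: \(error)")
}

Note that edits made directly to the Xcode-generated MLModel class are overwritten on the next build, since the file is regenerated in DerivedData.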
2 replies · 2 boosts · 668 views · Dec ’24
Problems creating a PipelineRegressor from a PyTorch converted model
I am trying to create a Pipeline with 3 sub-models: a Feature Vectorizer -> a NN regressor converted from PyTorch -> a Feature Extractor (to convert the output tensor to a Double value). The pipeline works fine when I use just a Vectorizer and an Extractor; this is the code:

vectorizer = models.feature_vectorizer.create_feature_vectorizer(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],  # Multiple input features
    output_feature_name="input"
)
preProc_spec = vectorizer[0]
ct.utils.convert_double_to_float_multiarray_type(preProc_spec)

extractor = models.array_feature_extractor.create_array_feature_extractor(
    input_features=[("input", datatypes.Array(3,))],  # Multiple input features
    output_name="output",
    extract_indices=1
)
ct.utils.convert_double_to_float_multiarray_type(extractor)

pipeline_network = pipeline.PipelineRegressor(
    input_features=["windSpeed", "theoreticalPowerCurve", "windDirection"],
    output_features=["output"]
)
pipeline_network.add_model(preProc_spec)
pipeline_network.add_model(extractor)
ct.utils.convert_double_to_float_multiarray_type(pipeline_network.spec)
ct.utils.save_spec(pipeline_network.spec, "Final.mlpackage")

This model works OK. I created a regression NN using PyTorch and converted it to Core ML:

import torch
import torch.nn as nn

class TurbinePowerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear1 = nn.Linear(3, 4)
        self.activation1 = nn.ReLU()
        #self.linear2 = nn.Linear(5, 4)
        #self.activation2 = nn.ReLU()
        self.output = nn.Linear(4, 1)

    def forward(self, x):
        #x = F.normalize(x, dim=0)
        x = self.linear1(x)
        x = self.activation1(x)
        # x = self.linear2(x)
        # x = self.activation2(x)
        x = self.output(x)
        return x

    def forward_inference(self, windSpeed, theoreticalPowerCurve, windDirection):
        input_tensor = torch.tensor([windSpeed, theoreticalPowerCurve, windDirection], dtype=torch.float32)
        return self.forward(input_tensor)

model = torch.load('TurbinePowerRegression-1layer.pt', weights_only=False)

import coremltools as ct
print(ct.__version__)

import pandas as pd
from sklearn.preprocessing import StandardScaler

df = pd.read_csv('T1_clean.csv', delimiter=';')
X = df[['WindSpeed', 'TheoreticalPowerCurve', 'WindDirection']]
y = df[['ActivePower']]

scaler = StandardScaler()
X = scaler.fit_transform(X)
y = scaler.fit_transform(y)

X_tensor = torch.tensor(X, dtype=torch.float32)
y_tensor = torch.tensor(y, dtype=torch.float32)

traced_model = torch.jit.trace(model, X_tensor[0])
mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input", shape=X_tensor[0].shape)],
    classifier_config=None  # Optional, for classification tasks
)
mlmodel.save("TurbineBase.mlpackage")

This model has a MultiArray (Float32, 3) as input and a MultiArray (Float32, 1) as output. When I try to include it in the middle of the pipeline (adjusting the output and input types of the other models accordingly), the process runs OK, but I get an error when opening the generated model in Xcode. What is missing in the models? How can I set or adjust this metadata properly? Thanks!!!
1 reply · 0 boosts · 610 views · Dec ’24
Permanent location for CoreML models
The Core ML developer guide recommends saving reusable compiled Core ML models to a permanent location to avoid unnecessary rebuilds when creating a Core ML model instance. However, there is no location that remains consistent across app updates, since each update changes the UUID in the app's sandbox path:

/var/mobile/Containers/Data/Application/<UUID>/Library/Application Support/

As a result, Core ML rebuilds models even if they are unchanged and located in the same relative directory within the app's file structure.
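A minimal sketch of the caching pattern the guide describes, for context: compile the bundled .mlmodel once, keep the compiled .mlmodelc under Application Support by resolving the directory URL at run time (rather than storing an absolute path), and reuse it on later launches. The file and model names here are placeholders, not anything from the original post.

import CoreML
import Foundation

// Illustrative sketch: cache a compiled model under Application Support.
// "MyModel" is a placeholder name.
func loadCachedModel(compiling sourceURL: URL) throws -> MLModel {
    let supportDir = try FileManager.default.url(for: .applicationSupportDirectory,
                                                 in: .userDomainMask,
                                                 appropriateFor: nil,
                                                 create: true)
    let cachedURL = supportDir.appendingPathComponent("MyModel.mlmodelc")

    if !FileManager.default.fileExists(atPath: cachedURL.path) {
        // compileModel(at:) writes the compiled model to a temporary location.
        let compiledURL = try MLModel.compileModel(at: sourceURL)
        try FileManager.default.moveItem(at: compiledURL, to: cachedURL)
    }
    return try MLModel(contentsOf: cachedURL)
}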
0 replies · 0 boosts · 529 views · Dec ’24
Cmake build unable to 'find' Foundation framework
I'm trying to build llama.cpp, a popular tool for running LLMs locally, with CMake on macOS 15.1.1 (24B91), but am encountering errors. Here is the Stack Overflow post regarding the issue: https://stackoverflow.com/questions/79304015/cmake-unable-to-find-foundation-framework-on-macos-15-1-1-24b91?noredirect=1#comment139853319_79304015
0 replies · 0 boosts · 541 views · Dec ’24
WWDC24 - What's New in Create ML - Time Series Forecasting
The What's New in Create ML session at WWDC24 went into great depth on time-series forecasting models (beginning at 15:14) and mentioned new models, capabilities, and tools for iOS 18. So far, all I can find is the API documentation. I don't see any other WWDC24 session covering these new time-series forecasting Create ML features. Is there more substance or documentation on how to use them with Create ML? Maybe I am looking in the wrong place, but I am fairly new to ML. Is there any food truck / donut shop demo or sample code like in the video? It would be of great interest to get ahead of the curve on this for business applications that could take advantage of it with inventory and ordering data.
3 replies · 2 boosts · 1.4k views · Dec ’24
iOS 18 App Intents while supporting iOS 17
Hello, I have an existing app that supports iOS 17. I already have three App Intents but would like to add some of the new iOS 18 app intents like ShowInAppSearchResultsIntent. However, I am having a hard time using #available or @available to limit this ShowInAppSearchResultsIntent to iOS 18 only while still supporting iOS 17. Obviously, ShowInAppSearchResultsIntent needs to use @AssistantIntent, which is iOS 18 only, so I mark that struct as @available(iOS 18, *). That works as expected. The trouble begins when I need to add this SearchSnippetIntent to the AppShortcutsProvider. See the code below:

struct SnippetsShortcutsAppShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder
    static var appShortcuts: [AppShortcut] {
        // iOS 17+
        AppShortcut(intent: SnippetsNewSnippetShortcutsAppIntent(),
                    phrases: ["Create a New Snippet in \(.applicationName) Studio"],
                    shortTitle: "New Snippet",
                    systemImageName: "rectangle.fill.on.rectangle.angled.fill")

        AppShortcut(intent: SnippetsNewLanguageShortcutsAppIntent(),
                    phrases: ["Create a New Language in \(.applicationName) Studio"],
                    shortTitle: "New Language",
                    systemImageName: "curlybraces")

        AppShortcut(intent: SnippetsNewTagShortcutsAppIntent(),
                    phrases: ["Create a New Tag in \(.applicationName) Studio"],
                    shortTitle: "New Tag",
                    systemImageName: "tag.fill")

        // iOS 18 only
        AppShortcut(intent: SearchSnippetIntent(),
                    phrases: ["Search \(.applicationName) Studio",
                              "Search \(.applicationName)"],
                    shortTitle: "Search",
                    systemImageName: "magnifyingglass")
    }

    let shortcutTileColor: ShortcutTileColor = .blue
}

The iOS 18-only AppShortcut shows the following error, but none of the suggested fixes seem to work. Maybe I am going about it the wrong way.

'SearchSnippetIntent' is only available in iOS 18 or newer
Add 'if #available' version check
Add @available attribute to enclosing static property
Add @available attribute to enclosing struct

Thanks in advance for your help.
4 replies · 3 boosts · 2k views · Jan ’25
Attempts to install Tensorflow on Mac Studio M1 fail
I am attempting to install TensorFlow on my M1 and I seem to be unable to find the correct matching versions of jax, jaxlib, and numpy to make it all work. I am in Bash because the default shell gave me issues. I downgraded to Python 3.10 because with 3.13 I could not get anything to work. Current actions:

bash-3.2$ python3.10 -m venv ~/venv-metal
bash-3.2$ python --version
Python 3.10.16

python3.10 -m venv ~/venv-metal
source ~/venv-metal/bin/activate
python -m pip install -U pip
python -m pip install tensorflow-macos

And here I keep running into errors like:

(venv-metal):~$ pip install tensorflow-macos tensorflow-metal
ERROR: Could not find a version that satisfies the requirement tensorflow-macos (from versions: none)
ERROR: No matching distribution found for tensorflow-macos

What is wrong here? How can I fix that? It seems like the system wants to use the x86 version of Python, which can't be right.
4 replies · 0 boosts · 1.8k views · Jan ’25
NLModel won't initialize in MessageFilterExtension
I'm trying to create an NLModel within a MessageFilterExtension handler. The code works fine in the main app, but when I try to use it in the extension it fails to initialize. Even the single line below fails with the error that follows. SMS_Classifier is the class Xcode generated for my model; this line works fine in the main app.

let mlModel = try SMS_Classifier(configuration: MLModelConfiguration()).model

Error:

Unable to locate Asset for contextual word embedding model for local en.
MLModelAsset: load failed with error Error Domain=com.apple.CoreML Code=0 "initialization of text classifier model with model data failed" UserInfo={NSLocalizedDescription=initialization of text classifier model with model data failed}

Any ideas?
3 replies · 1 boost · 988 views · Jan ’25
Create ML how to handle polygon annotations?
I have images that I annotated with polygons (actually simple trapezoids, so 4 points each). I have been trying and trying but can't get Create ML to work. I am attempting Object Detection. I am not a real programmer, so I would greatly appreciate some guidance to help get this model created. I think I made a Detectron2 model and tried to get that converted into the mlmodel I need for Xcode, but had trouble there as well. Thank you.

{
    "annotation": "IMG_1803.JPG",
    "annotations": [
        {
            "label": "court",
            "coordinates": {
                "x": [ 187, 3710, 2780, 929 ],
                "y": [ 1689, 1770, 478, 508 ]
            }
        }
    ]
},
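For reference, Create ML's Object Detection data source expects axis-aligned bounding boxes rather than arrays of polygon corners, with x and y giving the box center in pixels. A minimal sketch of one annotation in that shape, using the rectangle that encloses the four corner points above (x spans 187 to 3710, y spans 478 to 1770); the "image" key and exact values should be checked against the current Create ML documentation:

[
    {
        "image": "IMG_1803.JPG",
        "annotations": [
            {
                "label": "court",
                "coordinates": { "x": 1948.5, "y": 1124, "width": 3523, "height": 1292 }
            }
        ]
    }
]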
2 replies · 0 boosts · 704 views · Jan ’25
Help Needed: SwiftUI View with Camera Integration and Core ML Object Recognition
Hi everyone, I'm working on a SwiftUI app and need help building a view that integrates the device's camera and uses a pre-trained Core ML model for real-time object recognition. Here's what I want to achieve:

Open the device's camera from a SwiftUI view.
Capture frames from the camera feed and analyze them using a Create ML-trained Core ML model.
If a specific figure/object is recognized, automatically close the camera view and navigate to another screen in my app.

I'm looking for guidance on:

Setting up live camera capture in SwiftUI.
Using the Core ML and Vision frameworks for real-time object recognition in this context.
Managing navigation between views when the recognition condition is met.

Any advice, code snippets, or examples would be greatly appreciated! Thanks in advance!
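A minimal sketch of the Core ML / Vision half of this pipeline, assuming an image classifier trained in Create ML; MyObjectClassifier is a hypothetical generated class name and the 0.8 threshold is arbitrary. A SwiftUI camera layer (an AVCaptureSession behind a UIViewRepresentable preview) would feed pixel buffers into a function like this and drive navigation off the returned label.

import Vision
import CoreML

// Classify one camera frame; calls back with the top label if it clears a threshold.
// "MyObjectClassifier" is a placeholder for an Xcode-generated model class.
func classify(pixelBuffer: CVPixelBuffer, completion: @escaping (String?) -> Void) {
    guard let coreMLModel = try? MyObjectClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top.flatMap { $0.confidence > 0.8 ? $0.identifier : nil })
    }
    request.imageCropAndScaleOption = .centerCrop

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .right, options: [:])
    try? handler.perform([request])
}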
1 reply · 0 boosts · 709 views · Jan ’25
How to Retrieve VisualLookUp Results (e.g., Object Name) in VisionKit?
Hi everyone, I'm working on an iOS app that uses VisionKit and I'm exploring the .visualLookUp feature. Specifically, I want to extract the detailed information that Visual Look Up provides after identifying an object in an image (e.g., if the object is a flower, retrieve its name; if it’s a clothing tag, get the tag's content).
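For context, a minimal sketch of how Visual Look Up is typically enabled through VisionKit's ImageAnalyzer and ImageAnalysisInteraction, assuming a UIKit-hosted UIImageView. Note this surfaces the system Look Up UI; as far as I know it does not expose the identified object's name programmatically, which is the detail the post is asking about.

import UIKit
import VisionKit

@MainActor
func enableVisualLookUp(on imageView: UIImageView, image: UIImage) async throws {
    // Attach the interaction that drives the Live Text / Visual Look Up UI.
    let interaction = ImageAnalysisInteraction()
    imageView.addInteraction(interaction)

    // Analyze the image with Visual Look Up enabled.
    let analyzer = ImageAnalyzer()
    let configuration = ImageAnalyzer.Configuration([.visualLookUp])
    let analysis = try await analyzer.analyze(image, configuration: configuration)

    interaction.analysis = analysis
    interaction.preferredInteractionTypes = .automatic
}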
1 reply · 0 boosts · 594 views · Jan ’25
BarcodeObservation Orientation
Hi, I'm working with the Vision framework to detect barcodes. I tested both EAN-13 and Data Matrix detection and both work fine except for the QuadrilateralProviding values in the returned BarcodeObservation. The topLeft, topRight, bottomRight, and bottomLeft coordinates are rotated 90° counterclockwise (the physical bottom left of the Data Matrix, the corner of the "L", is returned as the topLeft point in the observation). The same behaviour happens with the EAN-13 barcode. Has anyone else experienced the same issue with orientation? Is this normal behaviour, or should we expect a fix in future releases of the Vision framework?
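For reference, a minimal sketch of reading the corner points with the long-standing VN-prefixed API (the post refers to the newer Swift observation types, but the reported points are the same idea); the points are normalized, and the orientation passed to the handler affects how they map back onto the physical code.

import Vision

// Detect EAN-13 and Data Matrix codes and print the reported corner points.
func detectBarcodes(in pixelBuffer: CVPixelBuffer,
                    orientation: CGImagePropertyOrientation) throws {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.ean13, .dataMatrix]

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: orientation,
                                        options: [:])
    try handler.perform([request])

    for observation in request.results ?? [] {
        // Normalized corner points, origin in the lower-left of the image.
        print(observation.symbology,
              observation.topLeft, observation.topRight,
              observation.bottomRight, observation.bottomLeft)
    }
}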
4 replies · 0 boosts · 569 views · Jan ’25
Can I Perform Hybrid Execution on Neural Engine and CPU with 16-bit Precision?
Hello, I have a question regarding hybrid execution for deep learning models on Apple's Neural Engine and CPU. I am aware that setting the precision of some layers to 32-bit allows hybrid execution across both the Neural Engine and the CPU. However, I would like to know if it is possible to achieve the same with 16-bit precision. Is there any specific configuration or workaround to enable hybrid execution in this case? Any guidance or documentation references would be greatly appreciated. Thank you!
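For what it's worth, the only knob exposed at load time is which compute units Core ML may use; per-layer numeric precision is fixed when the model is converted (for example with coremltools), not in MLModelConfiguration. A minimal Swift sketch of requesting CPU-plus-Neural-Engine execution, with a hypothetical compiled model URL:

import CoreML

// Ask Core ML to schedule work on the CPU and Neural Engine only (no GPU).
let configuration = MLModelConfiguration()
configuration.computeUnits = .cpuAndNeuralEngine

// `compiledModelURL` is a placeholder for the app's compiled .mlmodelc location.
// let model = try MLModel(contentsOf: compiledModelURL, configuration: configuration)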
0 replies · 0 boosts · 421 views · Jan ’25
What is experimentalMLE5EngineUsage?
@property (assign, nonatomic) long long experimentalMLE5EngineUsage; //@synthesize experimentalMLE5EngineUsage=_experimentalMLE5EngineUsage - In the implementation block

What is it, and why would disabling it fix NMS for an MLProgram? Is there any way to signal this flag from model metadata? Is there any way to signal or disable it from a global, system-level scope? It's extremely easy to reproduce, but I do not know how to investigate the drastic regression between toggling this flag:

let config = MLModelConfiguration()
config.setValue(1, forKey: "experimentalMLE5EngineUsage")
0 replies · 1 boost · 634 views · Jan ’25