Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics

Posts under Machine Learning & AI topic
(Each post below is listed with its reply count, boost count, view count, and latest activity.)

A specific mlmodelc model runs on iPhone 15, but not on iPhone 16
As described in the title, a model I built runs fine on iPhone 15 (A16 Bionic) but does not run on iPhone 16 (A18), failing with the following error:

E5RT encountered an STL exception. msg = MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED.
E5RT: MILCompilerForANE error: failed to compile ANE model using ANEF. Error=_ANECompiler : ANECCompile() FAILED (11)

On both iPhone 15 and iPhone 16, loading the model consumes about 1.5–1.6 GB of RAM, after which usage drops below 100 MB. Only on the iPhone 16, the error above then appears in the Xcode log, memory surges to 5–6 GB, and the system kills the app. The model works correctly only on the iPhone 15.

The model is built with Core ML Tools. So far I have tried deployment targets from iOS 16 through iOS 18 and the compute units CPU_AND_NE and ALL, but nothing has resolved the issue. What kind of fix should I try?

minimum_deployment_target = ct.target.iOS18
compute_units = ct.ComputeUnit.ALL
compute_precision = ct.precision.FLOAT16

(A sketch of one possible diagnostic step follows this entry.)
2
0
117
May ’25
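For the ANE compilation question above: one way to confirm the failure is specific to the Neural Engine compiler is to load the same compiled model with compute units restricted to CPU and GPU. This is a minimal diagnostic sketch, not a fix; the model name and bundle path are placeholders.

import CoreML

// Hypothetical path to the compiled model bundled with the app.
guard let modelURL = Bundle.main.url(forResource: "MyModel", withExtension: "mlmodelc") else {
    fatalError("Model not found in bundle")
}

let config = MLModelConfiguration()
// Skip the ANE entirely; if loading and prediction now succeed on the A18 device,
// the problem is isolated to ANE compilation of this particular model.
config.computeUnits = .cpuAndGPU

do {
    let model = try MLModel(contentsOf: modelURL, configuration: config)
    print("Loaded without the Neural Engine:", model.modelDescription)
} catch {
    print("Load still fails off the ANE:", error)
}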
ImagePlayground API not working on Xcode Simulator Devices
Hi! I'm trying to use the ImagePlayground API in SwiftUI with the .imagePlaygroundSheet modifier. However, when the sheet is shown (in the preview or in the simulator), it displays the following message: "Image Playground is not available. Image Playground is not available on this iPhone." I'm using an iPhone 16 Pro simulator running iOS 18.3.1 in Xcode 16.2. Is anyone else having this problem? How can I fix it? (See the availability-check sketch after this entry.)
1
0
127
Apr ’25
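For the Image Playground question above: Image Playground depends on Apple Intelligence, which is generally not available in the Simulator, so a common pattern is to gate the feature on the framework's availability flag and test generation on a physical device. A minimal sketch, assuming the SwiftUI supportsImagePlayground environment value provided by the ImagePlayground framework:

import SwiftUI
import ImagePlayground

struct GenerateButton: View {
    @Environment(\.supportsImagePlayground) private var supportsImagePlayground
    @State private var showingPlayground = false

    var body: some View {
        Group {
            if supportsImagePlayground {
                Button("Create Image") { showingPlayground = true }
                    // Attach the poster's .imagePlaygroundSheet(isPresented:...) modifier here.
            } else {
                // Simulator and unsupported devices land here.
                Text("Image Playground isn't available on this device.")
            }
        }
    }
}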
ILMessageFilterExtension memory limit
I’m considering creating an ILMessageFilterExtension that uses a mini LLM/SLM to detect fraud. I’ve read that message filter extensions have strict memory limits, yet I can’t find the figure in the documentation. What is the limit, and are there other constraints that would affect the feasibility of running a 100–500 MB model? (See the measurement sketch after this entry.)
0
0
46
Apr ’25
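For the memory-limit question above: Apple does not publish a number for most extension types, but you can measure the ceiling empirically from inside the running extension. A hedged sketch, assuming os_proc_available_memory() is callable from the extension process; the helper name is illustrative:

import Foundation
import os

/// Illustrative helper: call this from inside the message filter extension
/// (for example at the start of the query handler) to see how much headroom
/// remains before the system's memory limit.
func logAvailableExtensionMemory() {
    // os_proc_available_memory() reports how many bytes this process may still
    // allocate before being terminated; logging it before and after loading the
    // model gives a practical upper bound on the model size you can afford.
    let availableBytes = UInt64(os_proc_available_memory())
    os_log("Message filter extension headroom: %{public}llu bytes", availableBytes)
}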
Question about Apple Intelligence
I downloaded the RC beta on my MacBook and joined the waitlist. So far I haven't received any message or notification that I'm in, but I have a question, kind of silly, that I just want confirmed: since I already joined the Apple Intelligence beta through my MacBook, will I have Apple Intelligence on my iPhone whenever the official version is released? Kinda curious about it.
1
0
326
Oct ’24
Create ML Trouble Loading CSV to Train Word Tagger With Commas in Training Data
I'm using Numbers to build a spreadsheet that I'm exporting as a CSV, which I then import into Create ML to train a word tagger model. Everything has worked for all the models I've trained so far, but I've now hit a use case that breaks the import process: commas within the training data. This is a case that none of Apple's examples show. My project takes Navajo text that has been tokenized by syllables and labels the parts of speech.

Case that works...
Raw text: Naaltsoos yídéeshtah.
Tokens column: Naal,tsoos, ,yí,déesh,tah,.
Labels column: NObj,NObj,Space,Verb,Verb,VStem,Punct

Case that breaks...
Raw text: óola, béésh łigaii, tłʼoh naadą́ą́ʼ, wáin, akʼah, dóó á,shįįh
Tokens column with tokenized text (commas quoted): óo,la,",", ,béésh, ,łi,gaii,",", ,tłʼoh, ,naa,dą́ą́ʼ,",", ,wáin,",", ,a,kʼah,",", ,dóó, ,á,shįįh (Create ML reports mismatched columns)
Tokens column with tokenized text (commas escaped): óo,la,\,, ,béésh, ,łi,gaii,\,, ,tłʼoh, ,naa,dą́ą́ʼ,\,, ,wáin,\,, ,a,kʼah,\,, ,dóó, ,á,shįįh (Create ML reports mismatched columns)
Tokens column with tokenized text (commas escape-quoted): óo,la,\",\", ,béésh, ,łi,gaii,\",\", ,tłʼoh, ,naa,dą́ą́ʼ,\",\", ,wáin,\",\", ,a,kʼah,\",\", ,dóó, ,á,shįįh (record not detected by Create ML)
Tokens column with tokenized text (commas double-quoted): óo,la,"","", ,béésh, ,łi,gaii,"","", ,tłʼoh, ,naa,dą́ą́ʼ,"","", ,wáin,"","", ,a,kʼah,"","", ,dóó, ,á,shįįh (Create ML reports mismatched columns)
Labels column: NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Space,NSub,NSub,Punct,Space,NSub,Punct,Space,NSub,NSub,Punct,Space,Conj,Space,NSub,NSub

Sample From Spreadsheet

Solution needed: It's simple enough to escape commas within CSV files, but the format needed by Create ML essentially combines entire CSV records into single columns, so I end up needing a CSV record that mixes commas used as delimiters with commas used as character literals. That's where this gets complicated. For this particular use case (which seems like it would arise frequently when training a word tagger model), how should I properly escape a comma literal? (See the sketch of an alternative format after this entry.)
6
0
749
Jan ’25
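For the word-tagger escaping question above: one way to sidestep the CSV ambiguity entirely is to supply the training data as JSON, where tokens and labels are string arrays and a literal comma token needs no escaping. This is a hedged sketch; it assumes the word tagger accepts JSON records with "tokens" and "labels" keys, as in Apple's word-tagging examples, so verify the key names against the current Create ML documentation.

import Foundation

// One training record for the word tagger; a comma is just another token here.
struct TaggedSentence: Codable {
    let tokens: [String]
    let labels: [String]
}

let record = TaggedSentence(
    tokens: ["óo", "la", ",", " ", "béésh", " ", "łi", "gaii", ","],
    labels: ["NSub", "NSub", "Punct", "Space", "NSub", "Space", "NSub", "NSub", "Punct"]
)

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted]
let data = try encoder.encode([record])          // the training file is an array of records
try data.write(to: URL(fileURLWithPath: "tags.json"))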
Apple Intelligence & iMac M1
Trying to get on the waitlist for the above, and the computer is saying: “Apple Intelligence is not available when Mac is set to English (Singapore)”, even though just a few bullet points below, my Language selection shows “English (United States)”. That’s the only thing I can see; of course you guys are the experts. I would like to be part of this AI experiment/experience. Thanks for any help you can give to this 35+ year Mac user. Lee W
0
0
366
Nov ’24
Visual Intelligence -- Make OpenIntent show a sheet rather than open my App
The developer tutorial for Visual Intelligence indicates that the way to detect and handle taps on a displayed entity from the Search section is via an "OpenIntent" associated with your entity. However, running this intent executes code from within my app, and if I have the perform() method display UI, it always displays UI from within my app. I noticed that the Google app's integration with Visual Intelligence behaves differently: tapping on an entity does not take you to the Google app; instead, a web view is presented sheet-style WITHIN the Visual Intelligence environment (screenshot in the original post). How is that accomplished?
0
0
214
1d
Foundational Model - Image as Input? Timeline
Hi all, I am interested in unlocking unique applications with the new foundation models, and I have a few questions about the availability of the following features:

Image input: The June 2025 update mentions "image" 44 times (https://machinelearning.apple.com/research/apple-foundation-models-2025-updates), but I can't find any information about using images as input/prompts for the foundation models. When will this be available? I understand there are existing Vision ML APIs, but I want image input to a multimodal on-device LLM (VLM) for image-understanding features such as "Which player is holding the ball in the image?"

Cloud foundation model: when will this be available?

Thanks! Clement :)
1
0
368
1w
MLModel crashes when it is released on some iOS systems
We use MLModel in our app with two file formats: mlmodel and mlpackage. We find that when a model is released, models in the mlmodel format crash with a certain probability, and the majority of those crashes (over 85%) occur on iOS 16.x. Here is the crash stack:

Exception Type: SIGTRAP
Exception Codes: TRAP_BRKPT at 0x1b48e855c
Crashed Thread: 5

Thread 5 Crashed:
0 libdispatch.dylib 0x00000001b48e855c _dispatch_semaphore_dispose.cold.1 + 40
1 libdispatch.dylib 0x00000001b48b2b28 _dispatch_semaphore_signal_slow
2 libdispatch.dylib 0x00000001b48b0e58 _dispatch_dispose + 208
3 AppleNeuralEngine 0x00000001ef07b51c -[_ANEProgramForEvaluation .cxx_destruct] + 32
4 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
5 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
6 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
7 AppleNeuralEngine 0x00000001ef079e04 -[_ANEProgramForEvaluation dealloc] + 72
8 AppleNeuralEngine 0x00000001ef07ca70 -[_ANEModel .cxx_destruct] + 44
9 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
10 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
11 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
12 AppleNeuralEngine 0x00000001ef07bd7c -[_ANEModel dealloc] + 136
13 CoreFoundation 0x00000001ad4563cc cow_cleanup + 168
14 CoreFoundation 0x00000001ad49044c -[__NSDictionaryM dealloc] + 148
15 Espresso 0x00000001bb19c7a4 Espresso::ANERuntimeEngine::compiler::reset() + 1340
16 Espresso 0x00000001bb19cac8 Espresso::ANERuntimeEngine::compiler::~compiler() + 108
17 Espresso 0x00000001bacd69e4 std::__1::__shared_weak_count::__release_shared() + 84
18 Espresso 0x00000001ba944d00 std::__1::__hash_table<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::__unordered_map_hasher<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::hash<Espresso::platform>, std::__1::equal_to<Espresso::platform>, true>, std::__1::__unordered_map_equal<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::equal_to<Espresso::platform>, std::__1::hash<Espresso::platform>, true>, std::__1::allocator<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>>>::__deallocate_node(std::__1::__hash_node_base<std::__1::__hash_node<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, void*>*>*) + 40
19 Espresso 0x00000001ba8ea640 std::__1::__hash_table<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::__unordered_map_hasher<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::hash<Espresso::platform>, std::__1::equal_to<Espresso::platform>, true>, std::__1::__unordered_map_equal<Espresso::platform, std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>, std::__1::equal_to<Espresso::platform>, std::__1::hash<Espresso::platform>, true>, std::__1::allocator<std::__1::__hash_value_type<Espresso::platform, std::__1::shared_ptr<Espresso::net_compiler>>>>::~__hash_table() + 28
20 Espresso 0x00000001ba8e5750 Espresso::net::~net() + 396
21 Espresso 0x00000001bacd69e4 std::__1::__shared_weak_count::__release_shared() + 84
22 Espresso 0x00000001bad750e4 std::__1::__vector_base<std::__1::shared_ptr<Espresso::net>, std::__1::allocator<std::__1::shared_ptr<Espresso::net>>>::clear() + 52
23 Espresso 0x00000001ba902448 std::__1::__vector_base<std::__1::shared_ptr<Espresso::net>, std::__1::allocator<std::__1::shared_ptr<Espresso::net>>>::~__vector_base() + 36
24 Espresso 0x00000001ba8ed99c std::__1::unique_ptr<EspressoLight::espresso_plan::priv_t, std::__1::default_delete<EspressoLight::espresso_plan::priv_t>>::reset(EspressoLight::espresso_plan::priv_t*) + 188
25 Espresso 0x00000001ba95b7fc EspressoLight::espresso_plan::~espresso_plan() + 72
26 Espresso 0x00000001ba902078 EspressoLight::espresso_plan::~espresso_plan() + 16
27 Espresso 0x00000001ba8e690c espresso_plan_destroy + 372
28 CoreML 0x00000001c48c45cc -[MLNeuralNetworkEngine _deallocContextAndPlan] + 40
29 CoreML 0x00000001c48c43bc -[MLNeuralNetworkEngine dealloc] + 40
30 libobjc.A.dylib 0x00000001a67ed4a4 object_cxxDestructFromClass(objc_object*, objc_class*) + 116
31 libobjc.A.dylib 0x00000001a67f221c objc_destructInstance + 80
32 libobjc.A.dylib 0x00000001a67fb9d0 _objc_rootDealloc + 80
~~~~ Our code that releases the MLModel object ~~~~

Moreover, we use a synchronization mechanism to ensure that the release of the MLModel and the data processing of the model (calling [model predictionFromFeatures:]) do not occur simultaneously. What could be the possible causes of this problem, and how can we prevent it? Any advice would be appreciated. (See the confinement sketch after this entry.)
1
0
571
Dec ’24
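For the dealloc crash above: one pattern that prevents prediction and release from ever overlapping is to confine the MLModel instance to a single serial queue and drop the last reference on that same queue. This is a minimal sketch of that idea, under the assumption that the crash comes from a race between an in-flight prediction and deallocation; it is not a confirmed root cause.

import CoreML

final class ModelHolder {
    private let queue = DispatchQueue(label: "com.example.mlmodel.serial") // hypothetical label
    private var model: MLModel?

    init(model: MLModel) {
        self.model = model
    }

    func predict(_ input: MLFeatureProvider) throws -> MLFeatureProvider? {
        // All predictions run on the confinement queue.
        try queue.sync {
            try model?.prediction(from: input)
        }
    }

    func release() {
        // Releasing the last reference on the same queue that runs predictions
        // guarantees the dealloc never races an in-flight prediction.
        queue.sync { model = nil }
    }
}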
Example Usage of sliceUpdateDataTensor
Where can I find an example of using this MPSGraph function? I'm trying to use it to paste an image into a larger canvas at certain coordinates. (See the sketch after this entry.)

func sliceUpdateDataTensor(
    _ dataTensor: MPSGraphTensor,
    update updateTensor: MPSGraphTensor,
    starts: [NSNumber],
    ends: [NSNumber],
    strides: [NSNumber],
    startMask: UInt32,
    endMask: UInt32,
    squeezeMask: UInt32,
    name: String?
) -> MPSGraphTensor
0
0
515
Nov ’24
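For the sliceUpdateDataTensor question above, here is a hedged sketch of pasting a patch into a larger canvas at (x, y), assuming NHWC tensor layout and the signature quoted in the post; the shapes and offsets are illustrative, so verify the behavior against the MPSGraph documentation.

import MetalPerformanceShadersGraph

let graph = MPSGraph()

// Canvas: 1 x 512 x 512 x 3, patch: 1 x 128 x 128 x 3 (NHWC, both float32).
let canvas = graph.placeholder(shape: [1, 512, 512, 3], dataType: .float32, name: "canvas")
let patch  = graph.placeholder(shape: [1, 128, 128, 3], dataType: .float32, name: "patch")

// Paste the patch with its top-left corner at (x: 64, y: 32) on the canvas.
let x = 64, y = 32
let pasted = graph.sliceUpdateDataTensor(
    canvas,
    update: patch,
    starts: [0, NSNumber(value: y), NSNumber(value: x), 0],
    ends: [1, NSNumber(value: y + 128), NSNumber(value: x + 128), 3],   // exclusive upper bounds
    strides: [1, 1, 1, 1],
    startMask: 0,
    endMask: 0,
    squeezeMask: 0,
    name: "pasteIntoCanvas"
)
// `pasted` has the canvas shape with the patch region overwritten; feed canvas/patch
// data and fetch `pasted` when running the graph.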
Error in Xcode console
Lately I have been getting this error: GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en. Does anyone know what this is and how it can be resolved? The error does not crash the app.
1
0
725
3w
Core ML Async API Seems to Not Work Properly
I'm experiencing issues with the Core ML async API; it doesn't seem to be working correctly. It consistently hangs at the "03 performInference, after get smallInput, before prediction" step, as shown in the attached log1.txt and log2.txt. Below is my code. Could you please advise on how I should modify it? (One possible cause and a sketch follow this entry.)

private func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = sampleBuffer.imageBuffer else { return }
    Task {
        print("**** createFrameAsync before performInference")
        do {
            try await runModelAsync(on: pixelBuffer)
        } catch {
            print("Error processing frame: \(error)")
        }
        print("**** createFrameAsync after performInference")
    }
}

func runModelAsync(on pixelbuffer: CVPixelBuffer) async {
    print("01 performInference, before resizeFrame")
    guard let data = metalResizeFrame(sourcePixelFrame: pixelbuffer, targetSize: MTLSize.init(width: InputWidth, height: InputHeight, depth: 1), resizeMode: .scaleToFill) else {
        os_log("Preprocessing failed", type: .error)
        return
    }
    print("02 performInference, after resizeFrame, before get smallInput")
    let input = model_smallInput(input: data)
    print("03 performInference, after get smallInput, before prediction")
    if let prediction = try? await mlmodel!.model.prediction(from: input) {
        print("04 performInference, after prediction, before get result")
        var results: [Float] = []
        let output = prediction.featureValue(for: "output")?.multiArrayValue
        if let bufferPointer = try? UnsafeBufferPointer<Float>(output!) {
            results = Array(bufferPointer)
        }
        print("05 performInference, after get result, before setRenderData")
        let localResults = results
        await MainActor.run {
            ScreenRecorder.shared
                .setRenderDataNormalized(
                    screenImage: pixelbuffer,
                    depthData: localResults
                )
        }
        print("06 performInference, after setRenderData")
    }
}
1
0
646
Oct ’24
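For the async prediction hang above: one common cause with camera feeds is that createFrameAsync spawns a new unstructured Task for every sample buffer, so predictions pile up against the same model. A hedged sketch of dropping frames while a prediction is in flight, using a hypothetical FrameGate actor (not the poster's types):

// A tiny gate that lets only one inference run at a time and drops the rest.
actor FrameGate {
    private var busy = false

    /// Returns true if the caller may start an inference; call finish() when done.
    func tryBegin() -> Bool {
        if busy { return false }
        busy = true
        return true
    }

    func finish() { busy = false }
}

// Usage inside the capture callback (sketch; runModelAsync is the poster's method):
// let gate = FrameGate()
// func createFrameAsync(for sampleBuffer: CMSampleBuffer) {
//     guard let pixelBuffer = sampleBuffer.imageBuffer else { return }
//     Task {
//         guard await gate.tryBegin() else { return }   // a prediction is in flight: drop this frame
//         await runModelAsync(on: pixelBuffer)
//         await gate.finish()
//     }
// }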
AttributedString in App Intents
In this WWDC25 session, it is explicitly mentioned that apps should support AttributedString for text parameters to their App Intents. However, I have not gotten this to work. Whenever I pass rich text (either generated by the new "Use Model" intent or generated manually, for example using "Make Rich Text from Markdown"), my intent receives an AttributedString with the correct characters but with all attributes stripped (so, in effect, plain text).

struct TestIntent: AppIntent {
    static var title = LocalizedStringResource(stringLiteral: "Test Intent")
    static var description = IntentDescription("Tests Attributed Strings in Intent Parameters.")

    @Parameter var text: AttributedString

    func perform() async throws -> some IntentResult & ReturnsValue<AttributedString> {
        return .result(value: text)
    }
}

Is there anything else I am missing?
0
0
179
Jul ’25
Depth Anything V2 Core ML Model not working with Xcode 16.1
https://developer.apple.com/machine-learning/models/

Adding the DepthAnythingV2SmallF16.mlpackage to a new project in Xcode 16.1 and invoking the class crashes the app. Is anyone else having the same issue? I tried the Xcode 16.2 beta and it has the same response. (See the compute-units sketch after this entry.)

Code:

import UIKit
import CoreML

class ViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view.
        do {
            // Use a default model configuration.
            let defaultConfig = MLModelConfiguration()
            // app crashes here
            let model = try? DepthAnythingV2SmallF16(
                configuration: defaultConfig
            )
        } catch {
            //
        }
    }
}

Response:

/AppleInternal/Library/BuildRoots/4b66fb3c-7dd0-11ef-b4fb-4a83e32a47e1/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphExecutable.mm:129: failed assertion `Error: unhandled platform for MPSGraph serialization'
1
0
785
Nov ’24
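For the Depth Anything crash above: since the assertion comes from MPSGraph, one hedged way to isolate the problem is to load the model with compute units that exclude the GPU, so the MPSGraph path is not exercised. A sketch of that check, not a confirmed fix; DepthAnythingV2SmallF16 is the generated class from the post.

import CoreML

let config = MLModelConfiguration()
// Avoid the GPU/MPSGraph backend; if the crash disappears, the issue is in the
// Metal path for this model on that Xcode/OS combination.
config.computeUnits = .cpuAndNeuralEngine

do {
    let model = try DepthAnythingV2SmallF16(configuration: config)
    print("Loaded:", model)
} catch {
    print("Still failing without the GPU:", error)
}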
Foundation Model - Change LLM
Almost everywhere else you see Apple Intelligence, you get to select whether it's on device, private cloud compute, or ChatGPT. Is there a way to do that via code in the Foundation Model? I searched through the docs and couldn't find anything, but maybe I missed it.
2
0
121
Jul ’25
From Apple Dev.
Early access to Image Playground, Genmoji, and Image Wand
Apple, Oct 25, 2024 at 5:58 PM

With the iOS & iPadOS 18.2 and macOS Sequoia 15.2 betas, you can join the waitlist for early access to Image Playground, Genmoji, and Image Wand in order to test and help improve these features. You can request access within any one of these experiences: the Image Playground app, Image Playground integration in Messages or Freeform, Genmoji integration in the emoji keyboard, or Image Wand within the Apple Pencil tool palette in Notes.

We will roll out access to Image Playground, Genmoji, and Image Wand over the coming weeks. When the features are ready for you to test, you will be notified. After you receive access, you can tap the thumbs up or thumbs down that appear with each result in Image Playground, Genmoji, and Image Wand in order to provide feedback.
0
0
386
Oct ’24
How to use Core ML outside of Xcode as a library?
I'm working on a cross-platform AI app. It is a CMake project. The inference part should be built as a separate library on Windows and macOS; on macOS it should be built with Objective-C and Core ML. Roughly, my steps are:

Create an Xcode project for the Core ML inference and build it as a static library. Models are compiled to ".mlmodelc", and the code is compiled into a binary ".a" library.
Create a CMake project for the app and link the ".a" library built by Xcode.
Run the app.

I initialize the Core ML model like this (just for demonstration):

#include "det.h" // the model header generated by Xcode
auto url = [[NSURL alloc] initFileURLWithPath:[NSString stringWithFormat:@"%@/%@", dir, @"det.mlmodelc"]];
auto model = [[det alloc] initWithContentsOfURL:url error:&error]; // no error

The URL is valid and the initialization doesn't report any error. However, when I try to run inference with code like this:

auto cvPixelBuffer = createCVPixelBuffer(960, 960); // util function
auto preds = [model predictionFromImage:cvPixelBuffer error:NULL];

the output preds is null and I get these errors:

2024-12-10 14:52:37.678201+0800 望言OCR[50204:5615023] [e5rt] E5RT encountered unknown exception.
2024-12-10 14:52:37.678237+0800 望言OCR[50204:5615023] [coreml] E5RT: E5RT encountered an unknown exception. (11)
2024-12-10 14:52:37.870739+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870758+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870760+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870769+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870845+0800 望言OCR[50204:5615023] H11ANEDevice::H11ANEDeviceOpen kH11ANEUserClientCommand_DeviceOpen call failed result=0xe00002e2
2024-12-10 14:52:37.870848+0800 望言OCR[50204:5615023] Device Open failed - status=0xe00002e2
2024-12-10 14:52:37.870849+0800 望言OCR[50204:5615023] (Single-ANE System) Critical Error: Could not open the only H11ANE device
2024-12-10 14:52:37.870853+0800 望言OCR[50204:5615023] H11ANEDeviceOpen failed: 0x17
2024-12-10 14:52:37.870857+0800 望言OCR[50204:5615023] [common] start: ANEDeviceOpen() failed : ret=23 :

It seems that Core ML failed to open the ANE device. Is there anything that needs to be done before using a Core ML model as a library in a CMake (or other non-Xcode) project? By the way, the code above works in an Xcode native app with Core ML (I tested this before), so I guess I'm missing some environment initialization in my non-Xcode project?
1
0
622
Dec ’24