Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

All subtopics
Posts under the Machine Learning & AI topic

Post · Replies · Boosts · Views · Activity
Core ML Model Performance report shows prediction speed much faster than actual app runs
Hi all, I'm tuning my app's prediction speed with a Core ML model. I watched and tried the methods in the videos Improve Core ML integration with async prediction and Optimize your Core ML usage, and I also used Instruments to find the bottleneck keeping my predictions from getting faster. Below is the Instruments result for my app: its prediction duration is 10.29 ms. The performance report, however, shows an average prediction time of 5.55 ms, about half of what my app achieves! Below is part of my Instruments trace; the predictions run quite frequently. Could they be faster? How can I reach the same prediction speed as the performance report? Also, the prediction speed on a MacBook Pro (M2) is nearly the same as on a MacBook Air (M1)!
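For reference, a minimal sketch of the setup those two sessions recommend; the model class name is a placeholder, and the async prediction call assumes iOS 17 / macOS 14 or later:

```swift
import CoreML

// Rough sketch, assuming a generated model class named "MyModel" (placeholder).
func makePredictor() throws -> MLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML schedule across CPU, GPU, and Neural Engine
    return try MyModel(configuration: config).model
}

// Async predictions can overlap instead of serializing on one thread,
// which is one common reason in-app timings trail the performance report.
func predict(with model: MLModel, input: MLFeatureProvider) async throws -> MLFeatureProvider {
    try await model.prediction(from: input, options: MLPredictionOptions())
}
```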
5 replies · 0 boosts · 1.3k views · Oct ’25
Foundation Models / Playgrounds Hello World - Help!
I am using Foundation Models for the first time and no response is being provided to me.

Code:

```swift
import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}
```

What I did:
1. New file
2. Entered the code above
3. Canvas refreshes but nothing happens (no Canvas output)

Am I missing a step or some setup here? Please help; something this basic is not working and I do not know what to do. Running a MacBook Pro (40-core GPU, 16-core CPU) with iOS 26 / Xcode 26 beta 2 / macOS Tahoe, and 8 CPU cores and 48 GB of memory allocated to a Parallels VM. Screenshots of the Playgrounds settings in Xcode are attached. Thank you for your help in advance.
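One thing worth checking first (an assumption about the cause, not a confirmed diagnosis): Foundation Models only responds when Apple Intelligence is available, which is typically not the case inside a VM. A minimal availability probe:

```swift
import FoundationModels

// If the on-device model is unavailable (as is typical in a VM),
// respond(to:) can yield nothing useful in a Playground canvas.
switch SystemLanguageModel.default.availability {
case .available:
    print("On-device model is ready")
case .unavailable(let reason):
    print("On-device model unavailable: \(reason)")
}
```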
5 replies · 1 boost · 360 views · Jul ’25
coreml Fetching decryption key from server failed
My iOS app supports iOS 18, and I'm using an encrypted Core ML model secured with a key generated from Xcode. Every few months (around every 3 months), the encrypted model fails to load for both me and my users. When I investigate, I find this error:

coreml Fetching decryption key from server failed: noEntryFound("No records found"). Make sure the encryption key was generated with correct team ID

To temporarily fix it, I delete the old key, generate a new one, re-encrypt the model, and submit an app update. This resolves the issue, but only for a while. This is a terrible experience for users and obviously not a sustainable solution. I want to understand:

Why is this happening?
Is there a known expiration or invalidation policy for Core ML encryption keys?
How can I prevent this issue permanently?

Any insights or official guidance would be really appreciated.
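For what it's worth, a hedged sketch of loading the encrypted model through the async API so a failed key fetch surfaces as a retryable error rather than a hard failure; the class name is a placeholder, and this only mitigates transient network failures, not the recurring server-side "No records found" described above:

```swift
import CoreML

// "MyEncryptedModel" is a placeholder for the generated model class.
func loadEncryptedModel(retries: Int = 3) async throws -> MyEncryptedModel {
    for attempt in 1...retries {
        do {
            return try await MyEncryptedModel.load(configuration: MLModelConfiguration())
        } catch {
            if attempt == retries { throw error }
            // The decryption key fetch needs network; brief backoff before retrying.
            try await Task.sleep(nanoseconds: UInt64(500_000_000) << attempt)
        }
    }
    fatalError("unreachable")
}
```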
5 replies · 2 boosts · 629 views · Jul ’25
Issues with using ClassifyImageRequest() on an Xcode simulator
Hello, I am developing an app for the Swift Student Challenge; however, I keep encountering an error when using ClassifyImageRequest from the Vision framework in Xcode:

VTEST: error: perform(_:): inside 'for await result in resultStream' error: internalError("Error Domain=NSOSStatusErrorDomain Code=-1 \"Failed to create espresso context.\" UserInfo={NSLocalizedDescription=Failed to create espresso context.}")

It works perfectly when testing it on a physical device, and I saw on another thread that ClassifyImageRequest doesn't work on simulators. Will this cause problems with my submission to the challenge? Thanks
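A common workaround sketch (not an official fix): guard the request so simulator builds skip classification entirely; perform(on:) here assumes the iOS 18 Vision Swift API the post is using:

```swift
import Vision
import CoreGraphics

// On the simulator the "espresso" inference context can't be created,
// so return no observations there and run the real request on device.
func classify(_ image: CGImage) async throws -> [ClassificationObservation] {
    #if targetEnvironment(simulator)
    return []  // ClassifyImageRequest needs device hardware
    #else
    let request = ClassifyImageRequest()
    return try await request.perform(on: image)
    #endif
}
```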
5 replies · 1 boost · 821 views · Feb ’25
Will Apple Intelligence Support Third-Party LLMs or Custom AI Agent Integrations?
Hi everyone, I’m an AI engineer working on autonomous AI agents and exploring ways to integrate them into the Apple ecosystem, especially via Siri and Apple Intelligence. I was impressed by Apple’s integration of ChatGPT and its privacy-first design, but I’m curious to know:

• Are there plans to support third-party LLMs?
• Could Siri or Apple Intelligence call external AI agents or allow extensions to plug in alternative models for reasoning, scheduling, or proactive suggestions?

I’m particularly interested in building event-driven, voice-triggered workflows where Apple Intelligence could act as a front-end for more complex autonomous systems (possibly local or cloud-based). This kind of extensibility would open up incredible opportunities for personalized, privacy-friendly use cases — while aligning with Apple’s system architecture. Is anything like this on the roadmap? Or is there a suggested way to prototype such integrations today? Thanks in advance for any thoughts or pointers!
4 replies · 0 boosts · 500 views · May ’25
Using RAG on local documents from Foundation Model
I am watching a few WWDC sessions on the Foundation Models framework and its usage, and it looks pretty cool. I was wondering whether it is possible to perform RAG over the user's documents on the device, and eventually on iCloud. Let's say I have a lot of Pages documents about me, and I want the foundation model to access the information in those documents to answer questions about me. How can this be done? Thanks
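As a rough sketch of what on-device RAG over such documents could look like, using NLEmbedding for retrieval and a plain prompt for grounding; chunking, persistence, and iCloud access are omitted, and all names are illustrative:

```swift
import FoundationModels
import NaturalLanguage

// Cosine similarity between two embedding vectors.
func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map { $0 * $1 }.reduce(0, +)
    let normA = a.map { $0 * $0 }.reduce(0, +).squareRoot()
    let normB = b.map { $0 * $0 }.reduce(0, +).squareRoot()
    return dot / (normA * normB)
}

func answer(question: String, documentChunks: [String]) async throws -> String {
    guard let embedder = NLEmbedding.sentenceEmbedding(for: .english),
          let questionVector = embedder.vector(for: question) else { return "" }

    // Retrieve the three chunks most similar to the question.
    let context = documentChunks
        .compactMap { chunk in
            embedder.vector(for: chunk).map { (chunk, cosineSimilarity(questionVector, $0)) }
        }
        .sorted { $0.1 > $1.1 }
        .prefix(3)
        .map(\.0)
        .joined(separator: "\n")

    let session = LanguageModelSession()
    let prompt = "Using only the context below, answer the question.\nContext:\n\(context)\n\nQuestion: \(question)"
    return try await session.respond(to: prompt).content
}
```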
4 replies · 2 boosts · 454 views · Jun ’25
Error in Xcode console
Lately I am getting this error:

GenerativeModelsAvailability.Parameters: Initialized with invalid language code: en-GB. Expected to receive two-letter ISO 639 code. e.g. 'zh' or 'en'. Falling back to: en

Does anyone know what this is and how it can be resolved? The error does not crash the app.
4 replies · 1 boost · 1.5k views · 2d
missing CreateML frameworks
I have reinstalled everything, including the command line tools, but the CreateML frameworks fail to install. I need the framework so that I can train my auto-categorization model, which predicts a category based on descriptions. I need that framework because I want to use revision 4. Please advise on how to proceed.
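For context, a minimal sketch of the kind of training run this post is blocked on, assuming a CSV with description and category columns (illustrative names):

```swift
import CreateML
import Foundation

// Train a text classifier that predicts a category from a description.
let data = try MLDataTable(contentsOf: URL(fileURLWithPath: "descriptions.csv"))
let model = try MLTextClassifier(trainingData: data,
                                 textColumn: "description",
                                 labelColumn: "category")
try model.write(to: URL(fileURLWithPath: "Categorizer.mlmodel"))
```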
4 replies · 0 boosts · 807 views · Mar ’25
DataScannerViewController doesn't recognize currencies less than 1.00
Hi, DataScannerViewController doesn't recognize currencies less than 1.00 (e.g. 0.59 USD, 0.99 EUR, etc.). Why? How can I solve this? This limitation is not described in the Apple documentation; is there a solution? This is my code:

```swift
func makeUIViewController(context: Context) -> DataScannerViewController {
    let dataScanner = DataScannerViewController(recognizedDataTypes: [
        .text(textContentType: .currency)
    ])
    return dataScanner
}
```
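A hedged workaround sketch: scan unfiltered text and match sub-1.00 amounts manually, since the .currency content type appears to drop them; the regex is illustrative and not exhaustive across locales:

```swift
import VisionKit

@MainActor
func makeScanner() -> DataScannerViewController {
    // Scan plain text instead of .text(textContentType: .currency).
    DataScannerViewController(recognizedDataTypes: [.text()])
}

// Pull amounts like "0.59" or "€0.99" out of recognized text.
func currencyAmounts(in text: String) -> [String] {
    guard let regex = try? Regex(#"(?:[$€£]\s?)?\d+[.,]\d{2}"#) else { return [] }
    return text.matches(of: regex).map { String(text[$0.range]) }
}
```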
4 replies · 0 boosts · 246 views · Apr ’25
Python 3.13
Hello, are there any plans to compile a Python 3.13 version of tensorflow-metal? I just got my new Mac mini, and the version of Python automatically installed by brew is 3.13. If I were in a hurry I could manage to get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry. Many thanks, Alan
4 replies · 5 boosts · 1.6k views · Dec ’25
Support for Content Exclusion Files in Apple Intelligence
I am writing to inquire about content exclusion capabilities within Apple Intelligence, particularly regarding the use of configuration files such as .aiignore or .aiexclude, similar to what exists in other AI-assisted coding tools. These mechanisms are highly valuable in managing what content AI systems can access, especially in environments that involve sensitive code or proprietary frameworks. I would appreciate it if anyone could clarify whether Apple Intelligence currently supports any exclusion configuration for AI-assisted features. If so, could you kindly provide documentation or guidance on how developers can implement these controls? If not, is there any plan to include such a feature in future updates?
4 replies · 0 boosts · 794 views · Nov ’25
Crash when testing Speech sample app with FoundationModels on macOS 26.0 beta and iOS 26.0 beta
Hello, I am testing the sample project provided here: Bringing advanced speech-to-text capabilities to your app. On both macOS 26.0 beta and iOS 26.0 beta, the app crashes immediately on launch with a dyld "Symbol not found" error related to FoundationModels.framework. It feels like this may be related to the sample being tested primarily on newer Apple Silicon devices, as I am seeing consistent crashes on an Intel MacBook and on an older iPhone. I would appreciate any insight, confirmation, or guidance on whether this is a known limitation or whether there is a workaround. Is it planned to be resolved soon?

Environment

macOS:
Device: MacBook Pro (Intel)
Processor: 2 GHz Quad-Core Intel Core i5
Graphics: Intel Iris Plus Graphics 1536 MB
Memory: 16 GB 3733 MHz LPDDR4X
OS: macOS Tahoe Version 26.0 Beta (25A5338b)

iOS:
Device: iPhone 11
Model Number: MHDD3HN/A
OS: iOS 26.0

Xcode:
Version: 26.0 beta 3 (17A5276g)

Crash (macOS)

Abort signal received. Excerpt from crash dump:

```
dyld`__abort_with_payload:
    0x7ff80e3ad4a0 <+0>:  movl   $0x2000209, %eax
    0x7ff80e3ad4a5 <+5>:  movq   %rcx, %r10
    0x7ff80e3ad4a8 <+8>:  syscall
->  0x7ff80e3ad4aa <+10>: jae    0x7ff80e3ad4b4
```

Console:

```
dyld[9819]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /Users/userx/Library/Developer/Xcode/DerivedData/SwiftTranscriptionSampleApp-*/Build/Products/Debug/SwiftTranscriptionSampleApp.app/Contents/MacOS/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/Versions/A/FoundationModels
```

Crash (iOS)

Abort signal received. Excerpt from crash dump:

```
dyld`__abort_with_payload:
    0x18f22b4b0 <+0>: mov x16, #0x209
    0x18f22b4b4 <+4>: svc #0x80
->  0x18f22b4b8 <+8>: b.lo 0x18f22b4d8
```

Console:

```
dyld[2080]: Symbol not found: _$s16FoundationModels20LanguageModelSessionC5model10guardrails5tools12instructionsAcA06SystemcD0C_AC10GuardrailsVSayAA4Tool_pGAA12InstructionsVSgtcfC
Referenced from: /private/var/containers/Bundle/Application/.../SwiftTranscriptionSampleApp.app/SwiftTranscriptionSampleApp.debug.dylib
Expected in: /System/Library/Frameworks/FoundationModels.framework/FoundationModels
```

Question

Is this crash expected on Intel Macs and older iPhone models with the beta SDKs?
Is there an official statement on whether macOS 26.x releases support Intel, or does Intel support exist only until macOS 26.1?
Any suggested workarounds for testing this sample project on current hardware?
Is this a known limitation of the 26.0 beta, and if so, should we expect a fix in 26.0 or only in subsequent releases?

Attaching screenshots for reference. Thank you in advance.
4 replies · 0 boosts · 559 views · Aug ’25
Missing module 'coremltools.libmilstoragepython'
Hello! I'm following the Foundation Models adapter training guide (https://developer.apple.com/apple-intelligence/foundation-models-adapter/) on my NVIDIA DGX Spark box. I'm able to train on my own data, but the example notebook fails when I try to export the artifact as an fmadapter. I get the following error for the code block I'm trying to run. I haven't touched any of the code in the export folder. I tried exporting it on my Mac too and got the same error (given below). Would appreciate some more clarity around this. Thank you.

Code block:

```python
from export.export_fmadapter import Metadata, export_fmadapter

metadata = Metadata(
    author="3P developer",
    description="An adapter that writes play scripts.",
)

export_fmadapter(
    output_dir="./",
    adapter_name="myPlaywritingAdapter",
    metadata=metadata,
    checkpoint="adapter-final.pt",
    draft_checkpoint="draft-model-final.pt",
)
```

Error:

```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[10], line 1
----> 1 from export.export_fmadapter import Metadata, export_fmadapter
      3 metadata = Metadata(
      4     author="3P developer",
      5     description="An adapter that writes play scripts.",
      6 )
      8 export_fmadapter(
      9     output_dir="./",
     10     adapter_name="myPlaywritingAdapter",
    (...)
     13     draft_checkpoint="draft-model-final.pt",
     14 )

File /workspace/export/export_fmadapter.py:11
      8 from typing import Any
     10 from .constants import BASE_SIGNATURE, MIL_PATH
---> 11 from .export_utils import AdapterConverter, AdapterSpec, DraftModelConverter, camelize
     13 logger = logging.getLogger(__name__)
     16 class MetadataKeys(enum.StrEnum):

File /workspace/export/export_utils.py:15
     13 import torch
     14 import yaml
---> 15 from coremltools.libmilstoragepython import _BlobStorageWriter as BlobWriter
     16 from coremltools.models.neural_network.quantization_utils import _get_kmeans_lookup_table_and_weight
     17 from coremltools.optimize._utils import LutParams

ModuleNotFoundError: No module named 'coremltools.libmilstoragepython'
```
4 replies · 0 boosts · 622 views · Oct ’25
Avoid hallucinations and information from training data
Hi, for certain tasks, such as qualitative analysis or tagging, it is advisable to give the AI the option to respond with a joker / wild-card answer when it has difficulty tagging or scoring. For instance, you can include this slot in the prompt as follows: output must be "no data to score" when there isn't enough information to score. In the absence of this kind of slot, the AI tends to provide an answer even when there is insufficient information. Foundation Models is said to work best with simple prompts, so I wonder:

Is it recommended to keep this slot even though it adds verbosity and complexity?
Is the best place for it the description of a guided attribute?
Any other tips?

Another use case is when you want the AI to stick to the information provided in the prompt and not pull information from its training data. What is the best approach for this?

Thanks in advance for any suggestions.
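One pattern that may help (a sketch; property names are illustrative): encode the joker as a structural opt-out in the Generable itself rather than as extra prompt prose, so the model has an explicit way to decline:

```swift
import FoundationModels

// The guided descriptions give the model an explicit way to decline to score,
// instead of forcing it to invent a value when the input lacks information.
@Generable
struct TagResult {
    @Guide(description: "True only when the input has no information to score")
    var noDataToScore: Bool

    @Guide(description: "Score from 1 to 5; set to 1 and ignore when noDataToScore is true")
    var score: Int
}
```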
4 replies · 0 boosts · 824 views · Oct ’25
LanguageModelStream and collecting the final output
I have a Generable type with many elements. I am using stream() to incrementally process the partially generated output (Generable.PartiallyGenerated?). At the end, I want to pass the final version (not the partially generated one) to another function. I cannot seem to find a good way to convert a MyGenerable.PartiallyGenerated into a MyGenerable. Am I missing some functionality in the APIs?
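A sketch of one approach, assuming ResponseStream's collect() API: iterate the partials for UI updates, then ask the stream itself for the completed, fully typed value instead of converting the final partial:

```swift
import FoundationModels

@Generable
struct Itinerary {  // stand-in for the post's MyGenerable
    var title: String
}

func run(session: LanguageModelSession) async throws -> Itinerary {
    let stream = session.streamResponse(to: "Plan a day in Rome", generating: Itinerary.self)
    for try await partial in stream {
        _ = partial  // Itinerary.PartiallyGenerated snapshots for the UI
    }
    // collect() returns the completed response, whose content is a full Itinerary.
    return try await stream.collect().content
}
```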
4 replies · 0 boosts · 588 views · Jul ’25
Does Generable support recursive schemas?
I've run into an issue with a small Foundation Models test with Generable. I'm getting a strange error message with this Generable. I was able to get simpler ones to work. Is this because the Generable is recursive, with a property of [HTMLDiv]?

The error message is:

```
FoundationModels/SchemaAugmentor.swift:209: Fatal error: 'try!' expression unexpectedly raised an error: FoundationModels.GenerationSchema.SchemaError.undefinedReferences(schema: Optional("SafeResponse<HTMLDiv>"), references: ["HTMLDiv"], context: FoundationModels.GenerationSchema.SchemaError.Context(debugDescription: "Undefined types: [HTMLDiv]", underlyingErrors: []))
```

The code is:

```swift
import FoundationModels
import Playgrounds

@Generable
struct HTMLDiv {
    @Guide(description: "Optional named ID, useful for nicknames")
    var id: String? = nil

    @Guide(description: "Optional visible HTML text")
    var textContent: String? = nil

    @Guide(description: "Any child elements", .count(0...10))
    var children: [HTMLDiv] = []

    static var sample: HTMLDiv {
        HTMLDiv(
            id: "profileToolbar",
            children: [
                HTMLDiv(textContent: "Log in"),
                HTMLDiv(textContent: "Sign up"),
            ]
        )
    }
}

#Playground {
    do {
        let session = LanguageModelSession {
            "Your job is to generate simple HTML markup"
            "Here is an example response to the prompt: 'Make a profile toolbar':"
            HTMLDiv.sample
        }
        let response = try await session.respond(
            to: "Make a sign up form",
            generating: HTMLDiv.self
        )
        print(response.content)
    } catch {
        print(error)
    }
}
```
4 replies · 0 boosts · 170 views · Jul ’25
All generations in #Playground macro are throwing "unsafe" Generation Errors
I'm using Xcode 26 beta 5 and get "unsafe" generation errors on anything I try, however harmless, when wrapped in the #Playground macro:

```swift
import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let topic = "pandas"
    let prompt = "Write a safe and respectful story about \(topic)."
    let response = try await session.respond(to: prompt)
}
```

I'm not seeing any issues on the simulator or on device. Anyone else seeing this, or have any ideas? Thanks for any help!

Version 26.0 beta 5 (17A5295f)
macOS 26.0 Beta (25A5316i)
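To see which safety check is firing, it may help to catch the typed error; a small sketch assuming LanguageModelSession.GenerationError and its guardrailViolation case:

```swift
import FoundationModels

func generate(_ session: LanguageModelSession, prompt: String) async {
    do {
        let response = try await session.respond(to: prompt)
        print(response.content)
    } catch let error as LanguageModelSession.GenerationError {
        // e.g. .guardrailViolation for content flagged as unsafe
        print("Generation error: \(error)")
    } catch {
        print("Other error: \(error)")
    }
}
```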
4 replies · 0 boosts · 153 views · Aug ’25