Explore the power of machine learning and Apple Intelligence within apps. Discuss integrating features, share best practices, and explore the possibilities for your app here.

Posts under Machine Learning & AI topic

RDMA API Documentation
With the release of the newest version of macOS Tahoe and MLX supporting RDMA, is there a documentation link for how to use the libdrma dylib, and which functions are available? I am currently assuming it mostly follows the standard Linux InfiniBand library, but I would like the Apple-specific details.
Replies: 0 · Boosts: 1 · Views: 292 · Activity: Dec ’25
Overly strict foundation model rate limit when used in app extension
I am calling into an app extension from a Safari Web Extension (sendNativeMessage, which in turn results in a call to NSExtensionRequestHandling’s beginRequest). My Safari extension aims to make use of the new foundation models for some of the features it provides. In my testing, I hit the rate limit by sending 4 requests, waiting 30 seconds between each. This makes the FoundationModels framework (which would otherwise serve my use case perfectly well) unusable in this context, because the model is called in response to user input, and this rate of user input is perfectly plausible in a real-world scenario. The error thrown as a result of the rate limit is "Safety guardrail was triggered after consecutive failures during streaming.", but looking at the system logs in Console.app shows the rate limit as the real culprit. My suggestions:
- Please introduce sensible rate limits for app extensions, through an entitlement if need be. A limit of 1 request every couple of seconds would already fix the issue for me.
- Please document the rate limit.
- Please make the thrown error reflect that it is the result of a rate limit and not a generic guardrail violation.
- IMPORTANT: please indicate in the thrown error when it is safe to try again.
Filed a feedback here: FB18332004
Replies: 3 · Boosts: 1 · Views: 239 · Activity: Jun ’25
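Editor's note: until the limit is documented, one stopgap is to funnel every extension-side request through a single throttling helper. This is a minimal sketch, not an official workaround; the shared-session design and the spacing interval are assumptions, and the real rate limit is unknown.

import Foundation
import FoundationModels

// Serialize extension-side model calls and enforce a minimum spacing between them.
actor ThrottledModelClient {
    private let session = LanguageModelSession()
    private var lastRequest = Date.distantPast
    private let minimumInterval: TimeInterval = 5   // guessed spacing; the real limit is undocumented

    func respond(to prompt: String) async throws -> String {
        let elapsed = Date().timeIntervalSince(lastRequest)
        if elapsed < minimumInterval {
            try await Task.sleep(for: .seconds(minimumInterval - elapsed))
        }
        lastRequest = Date()
        return try await session.respond(to: prompt).content
    }
}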
Foundation Models flags 'Six Flags Great America' as unsafe
I'm working on a to-do list app that uses SpeechTranscriber and the Foundation Models framework to transcribe a user's voice into text and create to-do items based on it. After about 30 minutes looking at my code, I couldn't figure out why I was failing to generate a to-do for "I need to go to Six Flags Great America tomorrow at 3pm." It turns out I was consistently triggering the Foundation Models safety filter for unsafe content ("May contain unsafe content"). Lesson learned: consider comprehensively logging Foundation Models error states to quickly identify when safety filters are unexpectedly triggered.
Replies: 3 · Boosts: 1 · Views: 514 · Activity: Jul ’25
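Editor's note: a minimal sketch of the logging the post recommends, assuming the failure surfaces as a LanguageModelSession.GenerationError; the subsystem, category, and function name are hypothetical.

import FoundationModels
import os

let modelLog = Logger(subsystem: "com.example.todos", category: "FoundationModels")

func makeToDoItem(from transcript: String, session: LanguageModelSession) async -> String? {
    do {
        return try await session.respond(to: "Create a to-do item from: \(transcript)").content
    } catch let error as LanguageModelSession.GenerationError {
        // Surface the exact generation error so unexpected guardrail hits are easy to spot.
        modelLog.error("Generation failed: \(String(describing: error), privacy: .public)")
        return nil
    } catch {
        modelLog.error("Unexpected error: \(String(describing: error), privacy: .public)")
        return nil
    }
}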
FoundationModels coding
I am writing an app that parses text and conducts some actions. I don't want to give too much away ;) However, I am having a huge problem with token sizes. LanguageModelSession will of course give me the on-device model's 4096-token context window, but when you go over 4096, my code doesn't seem to fall back to Private Cloud Compute, or even the system-configured ChatGPT. Can anyone assist me with this? Even after reading the docs, it's very unclear how the transition between the three takes place.
Replies: 3 · Boosts: 0 · Views: 838 · Activity: Jan ’26
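Editor's note: the FoundationModels framework exposes the on-device model only, so it does not escalate to Private Cloud Compute or ChatGPT on its own; oversized prompts have to be handled in app code. A minimal sketch under that assumption, with a crude character budget standing in for real token counting:

import FoundationModels

func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession()
    do {
        return try await session.respond(to: "Summarize: \(text)").content
    } catch is LanguageModelSession.GenerationError {
        // Recover in app code: trim the input (or hand the request to your own backend);
        // the framework itself never retries against a larger model.
        let trimmed = String(text.prefix(8_000))   // rough character budget, not a token count
        return try await LanguageModelSession().respond(to: "Summarize: \(trimmed)").content
    }
}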
Is Image Playground on-device + Private Cloud?
Apple's Image Playground primarily performs image generation on-device, but can use secure Private Cloud Compute for more complex requests that require larger models.
Private Cloud Compute (PCC): for more complex tasks that require greater computational power than the device can provide, Image Playground leverages Apple's Private Cloud Compute. This system extends the privacy and security of the device to the cloud:
- Secure Environment: PCC runs on Apple silicon servers and uses a secure enclave to protect data, ensuring requests are processed in a verified, secure environment.
- No Data Storage: Data is never stored or made accessible to Apple when using PCC; it is used only to fulfill the specific request.
- Independent Verification: Independent experts are able to inspect the code running on these servers to verify Apple's privacy promises.
Replies: 3 · Boosts: 0 · Views: 1.1k · Activity: Dec ’25
Two errors in debug: com.apple.modelcatalog.catalog sync and nw_protocol_instance_set_output_handler
We get two error messages in the Xcode debug console. The com.apple.modelcatalog one appears once at startup, and the nw_protocol_instance_set_output_handler "Not calling remove_input_handler on 0x152ac3c00:udp" one appears at startup and sometimes while the app is running. I have tried cutting off repos, WS, etc., but nothing helps; that is for the nw_protocol error. We have a Foundation Model in a repo, but we check whether it is available, and if it is not we do not touch it. Please help.

The errors:

nw_protocol_instance_set_output_handler Not calling remove_input_handler on 0x152ac3c00:udp

com.apple.modelcatalog.catalog sync: connection error during call: Error Domain=NSCocoaErrorDomain Code=4099 "The connection to service named com.apple.modelcatalog.catalog was invalidated: Connection init failed at lookup with error 159 - Sandbox restriction." UserInfo={NSDebugDescription=The connection to service named com.apple.modelcatalog.catalog was invalidated: Connection init failed at lookup with error 159 - Sandbox restriction.} reached max num connection attempts: 1

The function we have in the repo is this:

public actor FoundationRepo: JobDescriptionChecker, SubskillSuggester {
    private var session: LanguageModelSession?
    private let isEnabled: Bool
    private let shouldUseLocalFoundation: Bool
    private let baseURLString = "https://xx.xx.xxx/xx"
    private let http: HTTPPac

    public init(http: HTTPPac, isEnabled: Bool = true) {
        self.http = http
        self.isEnabled = isEnabled
        self.session = nil

        guard isEnabled else {
            self.shouldUseLocalFoundation = false
            return
        }

        let model = SystemLanguageModel.default
        guard model.supportsLocale() else {
            self.shouldUseLocalFoundation = false
            return
        }

        switch model.availability {
        case .available:
            self.shouldUseLocalFoundation = true
        case .unavailable(.deviceNotEligible),
             .unavailable(.appleIntelligenceNotEnabled),
             .unavailable(.modelNotReady):
            self.shouldUseLocalFoundation = false
        @unknown default:
            self.shouldUseLocalFoundation = false
        }
    }
}

So here we decide whether we are going to use the iPhone's on-device ML or my remote backend?
Replies: 2 · Boosts: 0 · Views: 419 · Activity: 3w
Crashed: AXSpeech EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x000056f023efbeb0
The application is crashing with: AXSpeech EXC_BAD_ACCESS KERN_INVALID_ADDRESS 0x000056f023efbeb0

Crashed: AXSpeech
0  libobjc.A.dylib          0x4820   objc_msgSend + 32
1  libsystem_trace.dylib    0x6c34   _os_log_fmt_flatten_object + 116
2  libsystem_trace.dylib    0x5344   _os_log_impl_flatten_and_send + 1884
3  libsystem_trace.dylib    0x4bd0   _os_log + 152
4  libsystem_trace.dylib    0x9c48   _os_log_error_impl + 24
5  TextToSpeech             0xd0a8c  _pcre2_xclass_8
6  TextToSpeech             0x3bc04  TTSSpeechUnitTestingMode
7  TextToSpeech             0x3f128  TTSSpeechUnitTestingMode
8  AXCoreUtilities          0xad38   -[NSArray(AXExtras) ax_flatMappedArrayUsingBlock:] + 204
9  TextToSpeech             0x3eb18  TTSSpeechUnitTestingMode
10 TextToSpeech             0x3c948  TTSSpeechUnitTestingMode
11 TextToSpeech             0x48824  AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
12 TextToSpeech             0x49804  AXAVSpeechSynthesisVoiceFromTTSSpeechVoice
13 Foundation               0xf6064  __NSThreadPerformPerform + 264
14 CoreFoundation           0x37acc  CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION + 28
15 CoreFoundation           0x36d48  __CFRunLoopDoSource0 + 176
16 CoreFoundation           0x354fc  __CFRunLoopDoSources0 + 244
17 CoreFoundation           0x34238  __CFRunLoopRun + 828
18 CoreFoundation           0x33e18  CFRunLoopRunSpecific + 608
19 Foundation               0x2d4cc  -[NSRunLoop(NSRunLoop) runMode:beforeDate:] + 212
20 TextToSpeech             0x24b88  TTSCFAttributedStringCreateStringByBracketingAttributeWithString
21 Foundation               0xb3154  NSThread__start + 732
22 libsystem_pthread.dylib  0x24d4   _pthread_start + 136
23 libsystem_pthread.dylib  0x1a10   thread_start + 8

(Attached: com.livingMedia.AajTakiPhone_issue_3ceba855a8ad2d1af83655803dc13f70_crash_session_9081fa41ced440ae9a57c22cb432f312_DNE_0_v2_stacktrace.txt)
Replies: 4 · Boosts: 1 · Views: 1.5k · Activity: 1w
Download toolkit link failing for Foundation Models adapter training
Attempted to download the Adapter Toolkit linked to from https://developer.apple.com/apple-intelligence/foundation-models-adapter/. Failed on all attempts, with a "403 Forbidden" error. I had accepted the agreement on the first attempt. How would we get access please?
Replies: 3 · Boosts: 1 · Views: 288 · Activity: Jun ’25
CoreML SIP error should be more explicit
When trying to open an encrypted CoreML model file on a system with SIP disabled, the error message is "Failed to generate key request for <...> with error: -42187". This should state that SIP is disabled and needs to be enabled.
Replies: 1 · Boosts: 1 · Views: 973 · Activity: Jan ’26
Computer Vision and Foundation Models
Is Foundation Models mature enough to take input from the Apple Vision framework and generate responses? Something similar to what Google's Gemini does, although on a much smaller scale and for a very specific niche.
Replies: 1 · Boosts: 0 · Views: 775 · Activity: Nov ’25
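Editor's note: the on-device foundation model takes text prompts, so the usual bridge is to run Vision first and prompt with its output. A minimal sketch under that assumption; the use case and function name are made up for illustration.

import UIKit
import Vision
import FoundationModels

// Run Vision text recognition on an image, then summarize the result with the on-device model.
func summarizeText(in image: UIImage) async throws -> String {
    guard let cgImage = image.cgImage else { return "" }

    let request = VNRecognizeTextRequest()
    try VNImageRequestHandler(cgImage: cgImage, options: [:]).perform([request])
    let lines = (request.results ?? []).compactMap { $0.topCandidates(1).first?.string }

    let session = LanguageModelSession()
    let prompt = "Summarize this text from a photo in one sentence:\n" + lines.joined(separator: "\n")
    return try await session.respond(to: prompt).content
}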
Foundation Models / Playgrounds Hello World - Help!
I am using Foundation Models for the first time and no response is being provided to me.

Code:

import Playgrounds
import FoundationModels

#Playground {
    let session = LanguageModelSession()
    let result = try await session.respond(to: "List all the states in the USA")
    print(result.content)
}

Canvas output: (screenshot attached)

What I did:
- New file
- Added the code above
- Canvas refreshes but nothing happens

Am I missing a step or setup here? Please help. Something so basic is not working and I do not know what to do. Running a 40-GPU, 16-CPU MacBook Pro on macOS Tahoe with iOS 26 and Xcode beta 2; 8 CPUs and 48 GB of memory are allocated to the Parallels VM. (Screenshot of the Playgrounds settings in Xcode attached.)

Thank you for your help in advance.
Replies: 5 · Boosts: 1 · Views: 420 · Activity: Jul ’25
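Editor's note: since the canvas stays silent, a first diagnostic step is to check model availability before prompting; Apple Intelligence may simply not be available inside a Parallels VM. A minimal sketch of that check (the function name is illustrative):

import FoundationModels

// Report why the model is unavailable instead of failing silently.
func checkModelAndRespond() async {
    switch SystemLanguageModel.default.availability {
    case .available:
        do {
            let session = LanguageModelSession()
            let result = try await session.respond(to: "List all the states in the USA")
            print(result.content)
        } catch {
            print("Generation failed: \(error)")
        }
    case .unavailable(let reason):
        print("Model unavailable: \(reason)")
    @unknown default:
        print("Model availability unknown")
    }
}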
Can not use Language Model in Xcode-beta
I've downloaded the Xcode beta and run the sample project "FoundationModelsTripPlanner", but I got this error when trying to generate the response:

InferenceError::inferenceFailed::Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.modelcatalog}

Device: M1 Pro
Question: Is it because the M1 does not support this feature?
Replies: 1 · Boosts: 1 · Views: 317 · Activity: Jun ’25
iPhone 16 stuck on 'Downloading Support for Image Playground'
It's been 4-5 days and my Image Playground has been showing "Downloading Support for Image Playground".
Replies: 2 · Boosts: 1 · Views: 1.4k · Activity: Dec ’25
Defining instructions employing Content Tagging Model
Hello. It seems the Content Tagging model doesn't obey when I define the type of tag I want in the instructions parameter; the output is always the main topics. The only way to get other types of tags, such as emotions, is to use Generable + Guided types. The documentation says using instructions is recommended but not mandatory. Maybe I'm setting the instructions incorrectly; take a look at the attached snapshot. I copied the definition for tagging emotions from the official documentation. The upper example uses Generable and it works, but in the example at the bottom I set the same description of emotions as an instruction and it doesn't work. I tried other statements, more and less verbose, and it never outputs emotions. Could you provide an example using instructions where it works? Is the current version of the model not working with instructions?
Replies: 1 · Boosts: 0 · Views: 408 · Activity: Oct ’25
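Editor's note: for comparison, here is the Generable-based pattern the post says does work, as a minimal sketch; the property name and guide description are illustrative and may differ from the official sample.

import FoundationModels

@Generable
struct EmotionTags {
    @Guide(description: "The most important emotions expressed in the input text")
    let emotions: [String]
}

func tagEmotions(in text: String) async throws -> [String] {
    // Use the content-tagging variant of the system model rather than the default model.
    let model = SystemLanguageModel(useCase: .contentTagging)
    let session = LanguageModelSession(model: model)
    let response = try await session.respond(to: text, generating: EmotionTags.self)
    return response.content.emotions
}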
iOS 26 beta breaking my model
I just recently updated to iOS 26 beta (23A5336a) to test an app I am developing that runs an MLModel loaded from a .mlmodelc file. On the current iOS version, 18.6.2, the model runs as expected with no issues. However, on iOS 26 I now get an error when trying to perform an inference with the model, where I pass a camera frame into it. Below is the error I see when I attempt to run an inference. At the bottom it says "Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model". Does this indicate I need to convert my model or something? I don't understand, since it runs as normal on iOS 18. Any help getting this to run again would be greatly appreciated. Thank you.

processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: Could not process request ret=0x1d lModel=_ANEModel: { modelURL=file:///var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/ : sourceURL=(null) : UUID=46228BFC-19B0-45BF-B18D-4A2942EEC144 : key={"isegment":0,"inputs":{"input":{"shape":[512,512,1,3,1]}},"outputs":{"var_633":{"shape":[512,512,1,19,1]},"94_argmax_out_value":{"shape":[512,512,1,1,1]},"argmax_out":{"shape":[512,512,1,1,1]},"var_637":{"shape":[512,512,1,19,1]}}} : identifierSource=1 : cacheURLIdentifier=01EF2D3DDB9BA8FD1FDE18C7CCDABA1D78C6BD02DC421D37D4E4A9D34B9F8181_93D03B87030C23427646D13E326EC55368695C3F61B2D32264CFC33E02FFD9FF : string_id=0x00000000 : program=_ANEProgramForEvaluation: { programHandle=259022032430 : intermediateBufferHandle=13949 : queueDepth=127 } : state=3 :

[Espresso::ANERuntimeEngine::__forward_segment 0] evaluate[RealTime]WithModel returned 0; code=8 err=Error Domain=com.apple.appleneuralengine Code=8 "processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error" UserInfo={NSLocalizedDescription=processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error}

[Espresso::handle_ex_plan] exception=Espresso exception: "Generic error": ANEF error: /private/var/containers/Bundle/Application/04F01BF5-D48B-44EC-A5F6-3C7389CF4856/RizzCanvas.app/faceParsing.mlmodelc/model.espresso.net, processRequest:model:qos:qIndex:modelStringID:options:returnValue:error:: ANEProgramProcessRequestDirect() Failed with status=0x1d : statusType=0x9: Program Inference error status=-1 Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).

Error Domain=com.apple.Vision Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x114d92940 {Error Domain=com.apple.CoreML Code=0 "Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1)." UserInfo={NSLocalizedDescription=Unable to compute the prediction using a neural network model. It can be an invalid input data or broken/unsupported model (error code: -1).}}}
Replies: 1 · Boosts: 0 · Views: 1.2k · Activity: Sep ’25
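Editor's note: since the failure originates in the Neural Engine path, one diagnostic (an assumed approach, not an Apple-recommended fix) is to force the model off the ANE and see whether inference succeeds on CPU/GPU under iOS 26. The function name is illustrative.

import CoreML
import Vision

// Load the model with the ANE excluded, for comparison against the default .all compute units.
func loadFaceParsingModel(at url: URL) throws -> VNCoreMLModel {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // skip the Neural Engine as a diagnostic
    let model = try MLModel(contentsOf: url, configuration: config)
    return try VNCoreMLModel(for: model)
}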
Is there an API for the 3D effect from flat photos?
Introduced in the Keynote were the 3D Lock Screen images with the kangaroo: https://9to5mac.com/wp-content/uploads/sites/6/2025/06/3d-lock-screen-2.gif I can't see any mention of whether this effect is available to developers through an API that converts flat 2D photos into the same 3D-feeling image. Does anyone know if there is an API?
Replies: 1 · Boosts: 1 · Views: 106 · Activity: Jun ’25
Training adapter, it won't call my tool
Hi all. My adapter model just won't invoke my tool. The problem I am having is covered in an older post: https://developer.apple.com/forums/thread/794839?answerId=852262022#852262022 Sadly the thread dies there and no resolution is seen in that thread.

It's worth noting that I have developed an AI chatbot built around LanguageModelSession to which I feed the exact same system prompt that I feed to my training set (pasted further in this post). The AI chatbot works perfectly, the tool is invoked when needed. I am training the adapter model because the base model whilst capable doesn't produce the quality I'm looking for.

So here's the template of an item in my training set:

[
    { 'role': 'system', 'content': systemPrompt, 'tools': [TOOL_DEFINITION] },
    { 'role': 'user', 'content': entry['prompt'] },
    { 'role': 'assistant', 'content': entry['code'] }
]

where TOOL_DEFINITION =

{
    'type': 'function',
    'function': {
        'name': 'WriteUbersichtWidgetToFileSystem',
        'description': 'Writes an Übersicht Widget to the file system. Call this tool as the last step in processing a prompt that generates a widget.',
        'parameters': {
            'type': 'object',
            'properties': {
                'jsxContent': {
                    'type': 'string',
                    'description': 'Complete JSX code for an Übersicht widget. This should include all required exports: command, refreshFrequency, render, and className. The JSX should be a complete, valid Übersicht widget file.'
                }
            },
            'required': ['jsxContent']
        }
    }
}

... and systemPrompt =

A conversation between a user and a helpful assistant. You are an Übersicht widget designer. Create Übersicht widgets when requested by the user. IMPORTANT: You have access to a tool called WriteUbersichtWidgetToFileSystem. When asked to create a widget, you MUST call this tool.

### Tool Usage:
Call WriteUbersichtWidgetToFileSystem with complete JSX code that implements the Übersicht Widget API. Generate custom JSX based on the user's specific request - do not copy the example below.

### Übersicht Widget API (REQUIRED):
Every Übersicht widget MUST export these 4 items:
- export const command: The bash command to execute (string)
- export const refreshFrequency: Refresh rate in milliseconds (number)
- export const render: React component function that receives {output} prop (function)
- export const className: CSS positioning for absolute placement (string)

Example format (customize for each request):
WriteUbersichtWidgetToFileSystem({jsxContent: `export const command = "echo hello"; export const refreshFrequency = 1000; export const render = ({output}) => { return <div>{output}</div>; }; export const className = "top: 20px; left: 20px;"`})

### Rules:
- The terms "ubersicht widget", "widget", "a widget", "the widget" must all be interpreted as "Übersicht widget"
- Generate complete, valid JSX code that follows the Übersicht widget API
- When you generate a widget, don't just show JSON or code - you MUST call the WriteUbersichtWidgetToFileSystem tool
- Report the results to the user after calling the tool

### Examples:
- "Generate a Übersicht widget" → Use WriteUbersichtWidgetToFileSystem tool
- "Can you add a widget that shows the time" → Use WriteUbersichtWidgetToFileSystem tool
- "Create a widget with a button" → Use WriteUbersichtWidgetToFileSystem tool

When the script that I use to compose the full training set is executed, entry['prompt'] and entry['code'] contain the prompt and the resulting JSX code for one of the examples I'm feeding to the training session. This is repeated for about 60 such examples that I have in my sample data collection.
Thanks for any help. Michael
Replies: 6 · Boosts: 0 · Views: 1.1k · Activity: Nov ’25
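Editor's note: for comparison, here is roughly what the runtime-side tool in the working chatbot looks like with the FoundationModels Tool protocol. This is a sketch only: the protocol requirements and the ToolOutput return type have shifted between OS releases, the file path is hypothetical, and only the tool name and jsxContent argument are taken from the post.

import Foundation
import FoundationModels

struct WriteUbersichtWidgetToFileSystem: Tool {
    let name = "WriteUbersichtWidgetToFileSystem"
    let description = "Writes an Übersicht Widget to the file system."

    @Generable
    struct Arguments {
        @Guide(description: "Complete JSX code for an Übersicht widget.")
        let jsxContent: String
    }

    func call(arguments: Arguments) async throws -> ToolOutput {
        // Hypothetical destination, purely for illustration.
        try arguments.jsxContent.write(
            to: URL(fileURLWithPath: "/tmp/widget.jsx"),
            atomically: true,
            encoding: .utf8
        )
        return ToolOutput("Widget written to disk.")
    }
}

// Attach the tool when creating the session, e.g.:
// let session = LanguageModelSession(tools: [WriteUbersichtWidgetToFileSystem()], instructions: systemPrompt)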
Core ML model decryption on Intel chips
About the Core ML model encryption mentioned in https://developer.apple.com/documentation/coreml/encrypting-a-model-in-your-app: when I encrypt the model, it loads perfectly on an M-series machine. On the other hand, when I test the executable on an Intel MacBook, I get this error:

Error Domain=com.apple.CoreML Code=9 "Operation not supported on this platform." UserInfo={NSLocalizedDescription=Operation not supported on this platform.}

The Intel test machine is a 2019 MacBook Air with an Intel i5-8210Y CPU, OS 14.7.6 (23H626), and an Apple T2 Security Chip. The encrypted model does load on M2 and M4 MacBook Airs. If the model is NOT encrypted, it also loads on the Intel test machine. I did not find anything in the Core ML documentation that says whether encryption/decryption supports Intel chips. Can you confirm whether decryption indeed does NOT support Intel chips?
Replies: 2 · Boosts: 1 · Views: 419 · Activity: Jan ’26
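Editor's note: encrypted models are meant to be loaded with the asynchronous Core ML API so the decryption key can be fetched. A minimal sketch of loading with a fallback while this is unresolved; the fallback-to-an-unencrypted-model strategy and the function name are assumptions, not a documented recommendation.

import CoreML

func loadModel(encryptedURL: URL, fallbackURL: URL) async throws -> MLModel {
    do {
        return try await MLModel.load(contentsOf: encryptedURL, configuration: MLModelConfiguration())
    } catch {
        // e.g. CoreML Code=9 "Operation not supported on this platform." on the Intel machine
        print("Encrypted model failed to load (\(error)); falling back to the unencrypted model.")
        return try await MLModel.load(contentsOf: fallbackURL, configuration: MLModelConfiguration())
    }
}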
Is Private Cloud Compute allowed for Swift Student Challenge submissions?
Is Private Cloud Compute allowed? Or are only on-device Foundation Models allowed? Thanks!
Replies: 1 · Boosts: 0 · Views: 332 · Activity: Jan ’26
no tensorflow-metal past tf 2.18?
Hi. We're on TensorFlow 2.20, which now (finally!) supports Python 3.13. tensorflow-metal still only supports 2.18, which is over a year old. When can we expect to see support in tensorflow-metal for TF 2.20 (or later)? I bought a Mac thinking I would be able to get great performance from the M-series processors, but here I am using my CPU for my ML projects. If it's taking so long to release it, why not open-source it so the community can keep it more up to date? Cheers, Matt
Replies: 1 · Boosts: 1 · Views: 438 · Activity: Nov ’25