Explore the power of machine learning and Apple Intelligence within apps: discuss integrating features, share best practices, and discover what's possible for your app here.

Posts under the Machine Learning & AI topic. Each entry below shows the post title and an excerpt, followed by its reply count, boost count, view count, and most recent activity.

Model Guardrails Too Restrictive?
I'm experimenting with the Foundation Models framework to do news summarization in an RSS app, but I'm finding that a lot of articles get kicked back with a vague message about guardrails. This seems really common with political news, and we're talking mainstream outlets, e.g. Politico. If the models are this restrictive, this will be tough to use. Is this intended? FB17904424
Replies: 7 · Boosts: 4 · Views: 260 · Activity: Jul ’25
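A hedged sketch for handling this from the app side, assuming guardrail refusals surface as the guardrailViolation case of LanguageModelSession.GenerationError; the summarize function and articleText parameter are hypothetical stand-ins for the RSS app's own code:

    import FoundationModels

    // Sketch: distinguish a guardrail refusal from other generation failures so
    // the app can fall back to showing the raw article instead of failing.
    func summarize(_ articleText: String) async -> String? {
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: "Summarize this news article:\n\(articleText)")
            return response.content
        } catch let error as LanguageModelSession.GenerationError {
            if case .guardrailViolation = error {
                // Safety guardrails refused the content; show the original text instead.
                return nil
            }
            return nil
        } catch {
            return nil
        }
    }

This doesn't loosen the guardrails, but it lets the app degrade gracefully per article rather than failing the whole feed.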
Foundation Models Error: Local Sanitizer Asset
Hi, I just upgraded to macOS Tahoe Beta 2 and now I'm getting this error when I try to initialize my Foundation Models session: "Resource (Local Sanitizer Asset) unavailable error".

    import FoundationModels

    #Playground {
        let session = LanguageModelSession()
        do {
            let result = try await session.respond(to: "Tell me 3 colors")
            print(result.content)
        } catch {
            print("Error", error)
        }
    }

I couldn't find any resource guiding me on how to solve this. Any help/workaround? Thank you!
Replies: 1 · Boosts: 4 · Views: 384 · Activity: Jun ’25
SFSpeechRecognitionResult discards previous transcripts with on-device option set to true
Hi everyone, I might need some help with on-device recognition. It seems that the speech recognition task will discard whatever it has transcribed once a new sentence starts (or once it believes a new sentence has started) during a single audio session, when requiresOnDeviceRecognition is set to true. This doesn't happen with requiresOnDeviceRecognition set to false. System environment: macOS 14 with Xcode 15, deploying to iOS 17. Thank you all!
Replies: 13 · Boosts: 4 · Views: 2.3k · Activity: Oct ’24
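A workaround sketch for the discarding behavior described above: bank each finalized segment yourself rather than relying on the cumulative transcript. It assumes a finalized segment is signaled by a non-nil speechRecognitionMetadata on the result, which is the observable pattern with on-device recognition; verify against your OS versions.

    import Speech

    // Accumulates on-device transcripts across sentence boundaries, since the
    // recognizer appears to restart its transcription after each detected utterance.
    final class TranscriptAccumulator {
        private var finalized = ""
        private var volatile = ""

        var fullTranscript: String { finalized + volatile }

        func handle(_ result: SFSpeechRecognitionResult) {
            if result.speechRecognitionMetadata != nil {
                // Metadata arrives when a segment is finalized: bank the finished
                // sentence before the recognizer starts over on the next one.
                finalized += result.bestTranscription.formattedString + " "
                volatile = ""
            } else {
                volatile = result.bestTranscription.formattedString
            }
        }
    }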
Foundation Models not working in Simulator?
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)

Error:

    Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000
    "There are no underlying assets (neither atomic instance nor asset roots) for consistency token
    for asset set com.apple.MobileAsset.UAF.FM.Overrides"
    UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor
    asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}
    in response to ExecuteRequest

Playground to reproduce:

    #Playground {
        let session = LanguageModelSession()
        do {
            let response = try await session.respond(to: "What's happening?")
        } catch {
            let error = error
        }
    }
Replies: 14 · Boosts: 3 · Views: 1.7k · Activity: Jul ’25
Is it possible to set writingToolsBehavior globally?
Hello, we're investigating an option to disable writing tools for some customers in our app. I'm aware of the writingToolsBehavior property for UITextView etc., but we would like a way to set this globally without having to update all UITextView instances (or future instances). Is there any API to do this?

We tried using UITextView.appearance().writingToolsBehavior = .none and it seemed promising on the 18.2 beta, however it introduced crashes on devices running 18.1. The crashes look like:

    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason:
    'Have you sent -setWritingToolsBehavior: to <UITextView: 0x14462c000; frame = (0 0; 0 0);
    text = ''; userInteractionEnabled = NO; gestureRecognizers = <NSArray: 0x30067cb40>;
    backgroundColor = UIExtendedGrayColorSpace 0 0; layer = <CALayer: 0x3009b1ba0>;
    contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}>
    off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…",
    and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit,
    and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'

Similarly, even on the 18.2 beta, if we used UITextField.appearance().writingToolsBehavior = .none we would see crashes for any search fields like:

    *** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason:
    'Have you sent -setWritingToolsBehavior: to <UISearchBarTextField: 0x141c04a00; frame = (0 0; 0 0);
    text = ''; opaque = NO; gestureRecognizers = <NSArray: 0x301fe15c0>; placeholder = Search Leads;
    borderStyle = RoundedRect; background = <_UITextFieldSystemBackgroundProvider: 0x3015de960:
    backgroundView=<_UISearchBarSearchFieldBackgroundView: 0x141c60200; frame = (0 0; 0 0);
    opaque = NO; autoresize = W+H; userInteractionEnabled = NO; layer = <CALayer: 0x3015de8e0>>,
    fillColor=(null), textfield=<UISearchBarTextField: 0x141c04a00>>; layer = <CALayer: 0x3015de240>>
    off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…",
    and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit,
    and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'

Is it possible to set this globally?
Replies: 0 · Boosts: 3 · Views: 429 · Activity: Nov ’24
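One interim approach, sketched under the assumption that setting the property per-instance is safe where the appearance proxy was not: route text views through a shared subclass that opts out at init. This is a workaround sketch, not a global API; NoWritingToolsTextView is a hypothetical name.

    import UIKit

    // Workaround sketch: every text view created from this subclass opts out of
    // Writing Tools individually, avoiding the appearance-proxy path that crashed.
    class NoWritingToolsTextView: UITextView {
        override init(frame: CGRect, textContainer: NSTextContainer?) {
            super.init(frame: frame, textContainer: textContainer)
            optOutOfWritingTools()
        }

        required init?(coder: NSCoder) {
            super.init(coder: coder)
            optOutOfWritingTools()
        }

        private func optOutOfWritingTools() {
            if #available(iOS 18.0, *) {
                writingToolsBehavior = .none
            }
        }
    }

It still means touching instantiation sites, so it only partially answers the "globally" part of the question.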
Block Apple Intelligence
Hi everyone, Could someone confirm if it's currently possible, or if there are any plans, to restrict users from enabling Apple Intelligence altogether? I understand that we can block individual features using MDM, but I'm interested in knowing if we can prevent users from toggling Apple Intelligence on and off in System Settings entirely. Thanks! Kind Regards, Filipe Nogueira
Replies: 0 · Boosts: 3 · Views: 535 · Activity: Nov ’24
Broken compatibility in tensorflow-metal with tensorflow 2.18
Issue type: Bug
tensorflow-metal version: 1.1.1
TensorFlow version: 2.18
OS platform and distribution: macOS 15.2
Python version: 3.11.11
GPU model and memory: Apple M2 Max GPU, 38 cores

Standalone code to reproduce the issue:

    import tensorflow as tf

    if __name__ == '__main__':
        gpus = tf.config.experimental.list_physical_devices('GPU')
        print(gpus)

Current behavior: an Apple silicon GPU with tensorflow-metal==1.1.0 and Python 3.11 works fine with tensorflow==2.17.0. This is the normal output:

    /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/bin/python /Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py
    [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
    Process finished with exit code 0

But if I upgrade tensorflow to 2.18 I get this error:

    Traceback (most recent call last):
      File "/Users/mspanchenko/VSCode/cryptoNN/ml/core_second_window/test_tensorflow_gpus.py", line 1, in <module>
        import tensorflow as tf
      File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/__init__.py", line 437, in <module>
        _ll.load_library(_plugin_dir)
      File "/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
        py_tf.TF_LoadLibrary(lib)
    tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN3tsl8internal10LogMessageC1EPKcii
      Referenced from: <D2EF42E3-3A7F-39DD-9982-FB6BCDC2853C> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow-plugins/libmetal_plugin.dylib
      Expected in: <2814A58E-D752-317B-8040-131217E2F9AA> /Users/mspanchenko/anaconda3/envs/cryptoNN_ml_core/lib/python3.11/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so

    Process finished with exit code 1
Replies: 3 · Boosts: 3 · Views: 1.5k · Activity: Feb ’25
Crash inside of Vision framework during VNImageRequestHandler use
Hello, I've been dealing with a puzzling issue for some time now, and I'm hoping someone here might have insights or suggestions.

The problem: we're observing an occasional crash in our app that seems to originate from the Vision framework.

Frequency: it happens randomly, after many successful executions of the same code. It's hard to tell how long the app had been running, but in some cases it could run for a month without any issues.
Devices: the issue doesn't seem device-dependent (we've seen it on various iPad models).
OS versions: the crashes started occurring with iOS 18.0.1 and are still present in 18.1 and 18.1.1.

What I suspect: the crash logs point to a potential data race within the Vision framework. The relevant section of the code where the crash happens:

    guard let cgImage = image.cgImage else { throw ... }
    let request = VNCoreMLRequest(model: visionModel)
    try VNImageRequestHandler(cgImage: cgImage).perform([request]) // <- the line causing the crash

Since the code is rather simple, I'm not sure what else could be missing here. The images sent here are uniform (fixed size). The model is loaded and working; the crash occurs randomly after a period of time, and the call has worked correctly many times before. Also, the model variable is not an optional.

Here is the crash log:

    libobjc.A       objc_exception_throw
    CoreFoundation  -[NSMutableArray removeObjectsAtIndexes:]
    Vision          -[VNWeakTypeWrapperCollection _enumerateObjectsDroppingWeakZeroedObjects:usingBlock:]
    Vision          -[VNWeakTypeWrapperCollection addObject:droppingWeakZeroedObjects:]
    Vision          -[VNSession initWithCachingBehavior:]
    Vision          -[VNCoreMLTransformer initWithOptions:model:error:]
    Vision          -[VNCoreMLRequest internalPerformRevision:inContext:error:]
    Vision          -[VNRequest performInContext:error:]
    Vision          -[VNRequestPerformer _performOrderedRequests:inContext:error:]
    Vision          -[VNRequestPerformer _performRequests:onBehalfOfRequest:inContext:error:]
    Vision          -[VNImageRequestHandler performRequests:gatheredForensics:error:]
    OurApp          ModelWrapper.perform

And I'm a bit lost at this point; I've tried everything I could imagine so far. I tried putting a symbolic breakpoint in removeObjectsAtIndexes: to check whether some library we use (e.g. a crash reporter) did an implementation swap. There was none, and if anything did some method swizzling, I'd expect it to show in the stack trace before the original code was called. I did peek into the preceding functions and noticed a lock used in one of the Vision methods, so in my understanding a data race in this code shouldn't be possible at all. I've also put breakpoints in the NSLock variants to check for swizzling or overrides with a category that might break the locking; again, nothing was there.

There is also another model running on a separate queue, but after seeing the line with the locking in the debugger, it doesn't seem like this could cause a problem, at least not in this specific spot.

Is there something I'm missing here, or something I'm doing wrong? Thanks in advance for your help!
Replies: 8 · Boosts: 3 · Views: 624 · Activity: Jul ’25
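Given the second model running on a separate queue, one experiment worth trying is serializing all Vision request creation and execution through a single queue; this is a hypothetical mitigation for the suspected race, not a confirmed fix, and ModelWrapper here is a stand-in for the app's own wrapper class:

    import Vision

    // Hypothetical mitigation: funnel every Vision request through one shared
    // serial queue so two models never build or perform requests concurrently.
    final class ModelWrapper {
        private static let visionQueue = DispatchQueue(label: "com.example.vision.serial")
        private let model: VNCoreMLModel

        init(model: VNCoreMLModel) {
            self.model = model
        }

        func perform(on cgImage: CGImage) throws {
            try ModelWrapper.visionQueue.sync {
                let request = VNCoreMLRequest(model: model)
                try VNImageRequestHandler(cgImage: cgImage).perform([request])
            }
        }
    }

If the crash disappears under serialization, that would at least support the data-race theory for a bug report.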
Foundation Model Framework
Greetings! I was trying to get a response from the LanguageModelSession, but I just keep getting the following:

    Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000
    "There are no underlying assets (neither atomic instance nor asset roots) for consistency token
    for asset set com.apple.MobileAsset.UAF.FM.Overrides"
    UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor
    asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and also on macOS 26 with the Xcode beta. Both simulators are iPhone 16 Pros. I was wondering if anyone had any advice?
Replies: 15 · Boosts: 3 · Views: 1.1k · Activity: Jun ’25
Tensor Flow Metal 1.2.0 on M2 Fails to converge on common toy models
I've been trying to get some basic models to work on an M2 with tensorflow-metal 1.2.0 and Keras 2.15 and 2.18, and they all fail to work as expected. I'm running models copy/pasted from common tutorials, like the Jason Brownlee Machine Learning Mastery object classification tutorial using CIFAR-10. When run on the GPU I can't get any reasonable results. Under Keras 2.15 the best validation accuracy ends up around 10-15%. Under Keras 2.18, the validation goes off the rails around epoch 5, with wildly low accuracy and loss values reported as "nan":

    Epoch 4/25
    782/782: 19s 24ms/step - accuracy: 0.3450 - loss: 2.8925 - val_accuracy: 0.2992 - val_loss: 1.9869
    Epoch 5/25
    782/782: 19s 24ms/step - accuracy: 0.2553 - loss: nan - val_accuracy: 0.0000e+00 - val_loss: nan

Running the same code on the CPU under Keras 2.15, using tf.config.experimental.set_visible_devices([], 'GPU'), yields a reasonable result with validation accuracy around 75%, as expected. Running the same code with Keras 2.15 on a Linux instance with just the CPU provides similar results.

The tutorial can be found here: https://machinelearningmastery.com/object-recognition-convolutional-neural-networks-keras-deep-learning-library/

The only place I've deviated from the provided tutorial is using

    sgd = tf.keras.optimizers.legacy.SGD(learning_rate=lrate, momentum=0.9, nesterov=False)

I did this on the advice of the warning:

    WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.SGD` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.SGD`.

Is there something special that I need to do to make this work? I've followed the instructions here: https://developer.apple.com/metal/tensorflow-plugin/ and I've purged the venv a few times and started from scratch, but all with similarly terrible results.

Here are my platform details:

    Chip: Apple M2
    Memory: 16 GB
    macOS: Sequoia 15.2
    Python venv: 3.11
    Jupyter Lab version: 4.3.3
    TensorFlow versions: 2.15, 2.18
    tensorflow-metal: 1.2.0

Thanks for any assistance or advice.
Replies: 8 · Boosts: 3 · Views: 811 · Activity: Mar ’25
Foundation Models not working: "Model is unavailable" error on iPad Pro M4
I am excited to try Foundation Models during WWDC, but it doesn't work at all for me. When running on my iPad Pro M4 with iPadOS 26 seed 1, I get the following error even when running the simplest query:

    let prompt = "How are you?"
    let stream = session.streamResponse(to: prompt)
    for try await partial in stream {
        self.answer = partial
        self.resultString = partial
    }

In the Xcode console, I see the following error:

    assetsUnavailable(FoundationModels.LanguageModelSession.GenerationError.Context(debugDescription: "Model is unavailable", underlyingErrors: []))

I have verified that Apple Intelligence is enabled on my iPad. Any tips on how I can get it working? I have also submitted this feedback: FB17896752
Replies: 3 · Boosts: 3 · Views: 544 · Activity: Jun ’25
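The assetsUnavailable error here (and the Model Catalog errors in the threads above) can often be narrowed down by asking the system model why it is unavailable before opening a session. A minimal diagnostic sketch using the documented SystemLanguageModel availability check:

    import FoundationModels

    // Diagnostic sketch: report why the on-device model is unavailable before
    // creating a LanguageModelSession.
    func checkModelAvailability() {
        switch SystemLanguageModel.default.availability {
        case .available:
            print("Model ready; safe to start a session.")
        case .unavailable(.deviceNotEligible):
            print("This device doesn't support Apple Intelligence.")
        case .unavailable(.appleIntelligenceNotEnabled):
            print("Apple Intelligence is turned off in Settings.")
        case .unavailable(.modelNotReady):
            print("Model assets aren't downloaded yet; try again later.")
        case .unavailable(let reason):
            print("Unavailable for another reason:", reason)
        }
    }

The modelNotReady case in particular covers the window where assets are still downloading after Apple Intelligence is first enabled.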
iOS 18 @AssistantIntent's marked with @available crashing on older OS's
Is anyone else seeing their apps crash on iOS/macOS 17.4/14.4 and newer when building a project that simply includes the iOS 18 @AssistantIntent macro? The beta 4 releases still have this problem, and there are no notes about it that I have seen in the beta release notes.

Crash message shown in the console when trying to run on 17.4, 17.5, 17.5.1, etc.:

    dyld[21935]: Symbol not found: _$s10AppIntents15AssistantSchemaV06IntentD0VAC0E0AAWP
      Referenced from: <F7A1FEF0-F3B0-379C-A914-D1FB0BA7C693> /Users/jonathan/Library/Developer/CoreSimulator/Devices/CA308F47-BCA8-4429-8599-1BB1CCEAB5B6/data/Containers/Bundle/Application/D7DC8E16-90DB-406A-A521-20F18326E4A7/IntentDemo.app/IntentDemo.debug.dylib
      Expected in: <88E18E38-24EC-364E-94A1-E7922AD247AF> /Library/Developer/CoreSimulator/Volumes/iOS_21F79/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS 17.5.simruntime/Contents/Resources/RuntimeRoot/System/Library/Frameworks/AppIntents.framework/AppIntents

Obviously, the new Apple Intelligence AssistantIntents only work on the 2024 OS releases. However, even when these new App Intents are marked with @available(iOS 18, macOS 15, *), the app crashes on any earlier OS version, while running just fine on iOS 18 and macOS 15. I would love for this to be something I did wrong, but I don't think it is.

Here is the sample project: https://github.com/JTostitos/FB14323923

Maybe it's a compiler issue that fails to strip out the macro when building for older OSes, or an Xcode issue; I have no idea. I just would like to know why it's not working and how to resolve it. Thanks in advance for anyone's help.
Replies: 4 · Boosts: 3 · Views: 1.3k · Activity: Oct ’24
iOS 18 App Intents while supporting iOS 17
Hello, I have an existing app that supports iOS 17. I already have three App Intents, but I would like to add some of the new iOS 18 App Intents, like ShowInAppSearchResultsIntent. However, I am having a hard time using #available or @available to limit ShowInAppSearchResultsIntent to iOS 18 only while still supporting iOS 17.

ShowInAppSearchResultsIntent needs to use @AssistantIntent, which is iOS 18 only, so I mark that struct as @available(iOS 18, *). That works as expected. It is when I need to add this "SearchSnippetIntent" intent to the AppShortcutsProvider that I begin to have trouble. See the code below:

    struct SnippetsShortcutsAppShortcutsProvider: AppShortcutsProvider {
        @AppShortcutsBuilder
        static var appShortcuts: [AppShortcut] {
            // iOS 17+
            AppShortcut(intent: SnippetsNewSnippetShortcutsAppIntent(),
                        phrases: ["Create a New Snippet in \(.applicationName) Studio"],
                        shortTitle: "New Snippet",
                        systemImageName: "rectangle.fill.on.rectangle.angled.fill")
            AppShortcut(intent: SnippetsNewLanguageShortcutsAppIntent(),
                        phrases: ["Create a New Language in \(.applicationName) Studio"],
                        shortTitle: "New Language",
                        systemImageName: "curlybraces")
            AppShortcut(intent: SnippetsNewTagShortcutsAppIntent(),
                        phrases: ["Create a New Tag in \(.applicationName) Studio"],
                        shortTitle: "New Tag",
                        systemImageName: "tag.fill")
            // iOS 18 only
            AppShortcut(intent: SearchSnippetIntent(),
                        phrases: ["Search \(.applicationName) Studio",
                                  "Search \(.applicationName)"],
                        shortTitle: "Search",
                        systemImageName: "magnifyingglass")
        }

        let shortcutTileColor: ShortcutTileColor = .blue
    }

The iOS 18-only AppShortcut shows the following error, but none of the suggested fixes seem to work. Maybe I am going about it the wrong way:

    'SearchSnippetIntent' is only available in iOS 18 or newer
    - Add 'if #available' version check
    - Add @available attribute to enclosing static property
    - Add @available attribute to enclosing struct

Thanks in advance for your help.
Replies: 4 · Boosts: 3 · Views: 2k · Activity: Jan ’25
Core Spotlight Semantic Search - still non-functional for 1+ year after WWDC24?
After more than a year since the announcement, I'm still unable to get this feature working properly, and I'm wondering if there are known issues or missing implementation details.

Current setup:

    Device: iPhone 16 Pro Max
    iOS: 26 beta 3
    Development: tested on both Xcode 16 and Xcode 26
    Implementation: following the official documentation examples

The problem: semantic search simply doesn't work. Lexical search functions normally, but enabling semantic search produces results identical to having it disabled. It's as if the feature isn't actually processing.

Error output (Xcode 26):

    [QPNLU][qid=5] Error Domain=com.apple.SpotlightEmbedding.EmbeddingModelError Code=-8007 "Text embedding generation timeout (timeout=100ms)"
    [CSUserQuery][qid=5] got a nil / empty embedding data dictionary
    [CSUserQuery][qid=5] semanticQuery failed to generate, using "(false)"

In Xcode 16, there are no error messages at all; the semantic search just silently fails.

Missing resources: the sample application mentioned during the WWDC24 presentation doesn't appear to have been released, which makes it difficult to verify whether my implementation is correct.

Would really appreciate any guidance or clarification on the current status of this feature. Has anyone in the community successfully implemented this?
Replies: 0 · Boosts: 3 · Views: 416 · Activity: Jul ’25
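For comparison, a minimal sketch of the setup as described in the WWDC24 session "Support semantic search with Core Spotlight": warm the models with CSUserQuery.prepare(), then opt the query context into ranked results. The property and method names here are recalled from that session and should be verified against the current SDK:

    import CoreSpotlight

    // Sketch of the WWDC24-described flow: prepare Spotlight's embedding models
    // once (e.g. at launch), then run a ranked (semantic) user query.
    func runSemanticSearch(for text: String) async throws {
        CSUserQuery.prepare()

        let context = CSUserQueryContext()
        context.enableRankedResults = true

        let query = CSUserQuery(userQueryString: text, userQueryContext: context)
        for try await element in query.responses {
            // Elements include matched items and suggestions.
            print("Response element:", element)
        }
    }

If even this minimal path reproduces the embedding timeout above, that would point at the OS-side embedding models rather than app code.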