Apple Intelligence

Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence subtopic

Post | Replies | Boosts | Views | Activity

tensorflow-metal broken with TensorFlow 2.20
Hi, testing the latest tensorflow-metal plugin with TensorFlow 2.20 doesn't work, using Python 3.12.11 (main, Jun 3 2025, 15:41:47) [Clang 17.0.0 (clang-1700.0.13.3)] on darwin. A simple test shows the error:

import tensorflow as tf
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Library not loaded: @rpath/_pywrap_tensorflow_internal.so
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Reason: tried: '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so' (no such file), '/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file), '/System/Volumes/Preboot/Cryptexes/OS/opt/homebrew/lib/_pywrap_tensorflow_internal.so' (no such file)

tf.config.experimental.list_physical_devices('GPU')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'tf' is not defined

I worked around this error by copying _pywrap_tensorflow_internal.so to where it's searched:

1) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64
2) mkdir /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/
3) cp /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/_pywrap_tensorflow_internal.so /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/../_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/

It then fails with a missing symbol: Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E in libmetal_plugin.dylib. Full log with import tensorflow as tf:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/__init__.py", line 438, in <module>
    _ll.load_library(_plugin_dir)
  File "/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow/python/framework/load_library.py", line 151, in load_library
    py_tf.TF_LoadLibrary(lib)
tensorflow.python.framework.errors_impl.NotFoundError: dlopen(/Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib, 0x0006): Symbol not found: __ZN10tensorflow28_AttrValue_default_instance_E
  Referenced from: <8B62586B-B082-3113-93AB-FD766A9960AE> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/tensorflow-plugins/libmetal_plugin.dylib
  Expected in: <2FF91C8B-0CB6-3E66-96B7-092FDF36772E> /Users/obg/npu/venv-tf/lib/python3.12/site-packages/_solib_darwin_arm64/_U@local_Uconfig_Utf_S_S_C_Upywrap_Utensorflow_Uinternal___Uexternal_Slocal_Uconfig_Utf/_pywrap_tensorflow_internal.so
1
0
164
17h
Python 3.13
Hello, are there any plans to compile a Python 3.13 version of tensorflow-metal? I just got my new Mac mini, and the version of Python installed automatically by brew is 3.13. If I were in a hurry I could get Python 3.12 installed and use the corresponding tensorflow-metal version, but I'm not in a hurry. Many thanks, Alan
3
2
875
17h
Apple Intelligence language
I found what might be a bug with enabling Apple Intelligence when switching languages. When my iPhone's language is set to Catalan, Apple Intelligence is disabled because it is not available for that language. Switching to Spanish doesn't activate it, and it still shows the same message of being unavailable, this time saying it is not available in Spanish (which is not true). However, it is enabled when the phone is rebooted. At this point, the bug becomes even weirder. With the iPhone language set to Spanish and Apple Intelligence on, I switch the language to Catalan, and the feature remains enabled. After I ask a query in Catalan, it surprisingly understands it and works, but then it gets disabled. Apart from that, as user feedback, I would love to be able to activate Apple Intelligence in an available language other than my device's language. That's how I have always used Siri (iPhone in Catalan, Siri in Spanish). Thanks!
2
1
897
5d
Visual Intelligence API SemanticContentDescriptor labels are empty
I'm trying to use Apple's new Visual Intelligence API to recommend content through screenshot image search. The problem I've encountered is that the SemanticContentDescriptor labels are either completely empty or very misleading, making it impossible to query for similar content in my app. Even the closest matching example was inaccurate, returning a single label ["cardigan"] for a Supreme T-shirt. I see other apps using this API, Etsy for example, and I'm wondering whether they're using the input pixel buffer to query for similar content rather than the labels. If anyone has had a similar experience, or knows something that wasn't called out in the documentation, please let me know! Thanks.
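For what it's worth, here is a minimal sketch of a value query that falls back to the descriptor's pixel buffer when the labels come back empty. The entity type, the query names, and the searchCatalog helper are hypothetical placeholders for illustration, not part of the Visual Intelligence API, and the exact protocol requirements may differ from the shipping SDK.

import AppIntents
import VisualIntelligence

// Hypothetical entity; a real app would use its own AppEntity type here.
struct ProductEntity: AppEntity {
    static let typeDisplayRepresentation = TypeDisplayRepresentation(name: "Product")
    static let defaultQuery = ProductQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct ProductQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ProductEntity] { [] }
}

// Value query invoked by visual search: prefer the labels when they exist,
// otherwise fall back to the captured pixel buffer.
struct ProductValueQuery: IntentValueQuery {
    func values(for descriptor: SemanticContentDescriptor) async throws -> [ProductEntity] {
        if !descriptor.labels.isEmpty {
            // Hypothetical text search over the app's own catalog.
            return try await searchCatalog(matchingLabels: descriptor.labels)
        }
        guard let pixelBuffer = descriptor.pixelBuffer else { return [] }
        // Feed pixelBuffer into an image-similarity search here (not shown).
        _ = pixelBuffer
        return []
    }

    // Stub so the sketch stands alone; replace with a real catalog search.
    private func searchCatalog(matchingLabels labels: [String]) async throws -> [ProductEntity] { [] }
}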
0
0
155
1w
Visual Intelligence -- Make OpenIntent show a sheet rather than open my App
The developer tutorial for Visual Intelligence indicates that the way to detect and handle taps on a displayed entity from the Search section is via an "OpenIntent" associated with your entity. However, running this intent executes code from within my app, and if I have the perform() method display UI, it always displays UI from within my app. I noticed that the Google app's Visual Intelligence integration has a different behavior: tapping on an entity does not take you to the Google app; instead, a web view is presented sheet-style WITHIN the Visual Intelligence environment (see below). How is that accomplished?
0
0
491
1w
ImagePlayground: Programmatic Creation Error
Hardware: MacBook Pro M4 (Nov 2024). Software: macOS Tahoe 26.0 and Xcode 26.0. Apple Intelligence is activated and the Image Playground macOS app works. Running the following in Xcode throws ImagePlayground.ImageCreator.Error.creationFailed. Any suggestions on how to make this work?

import Foundation
import ImagePlayground

Task {
    let creator = try await ImageCreator()
    guard let style = creator.availableStyles.first else {
        print("No styles available")
        exit(1)
    }
    let images = creator.images(
        for: [.text("A cat wearing mittens.")],
        style: style,
        limit: 1)
    for try await image in images {
        print("Generated image: \(image)")
    }
    exit(0)
}
RunLoop.main.run()
0
0
231
2w
Image Playground Error: Unable to Generate Images Using externalProvider Style
I'm working on generating images using Image Playground. The code works fine for other styles but fails when using the external provider. I don't see any other requirements mentioned in the documentation for the externalProvider style: https://developer.apple.com/documentation/imageplayground/imageplaygroundstyle/externalprovider?changes=_2 Has anyone else encountered a similar issue? The error message is also not very helpful; it simply states that the creation failed. Note: I have ChatGPT Plus enabled, and image generation using ChatGPT styles works fine in the Playground app. Here's the relevant code snippet:

do {
    let creator = try await ImageCreator()
    let concept = ImagePlaygroundConcept.text("Love")
    let images = creator.images(for: [concept], style: .externalProvider, limit: 1)
    for try await image in images {
        // Handle image
        break
    }
} catch {
    // Handle error
}

I'm using the iOS 26 RC, and when I print creator.availableStyles, it doesn't include the external provider:

[ImagePlayground.ImagePlaygroundStyle(id: "animation", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "emoji", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "illustration", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "sketch", _representationInfo: nil),
 ImagePlayground.ImagePlaygroundStyle(id: "messages-background", _representationInfo: nil)]
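As a defensive sketch (not a fix), the request can be gated on whether the creator actually reports the external provider style; the comparison by id mirrors the styles printed above, and why the style is missing from availableStyles on the iOS 26 RC is exactly the open question.

import CoreGraphics
import ImagePlayground

// Sketch: only request the externalProvider style when the creator reports it.
// If it never appears in availableStyles, that absence is what needs to be
// investigated (or filed as feedback), not the generation call itself.
func makeExternalProviderImage() async throws -> CGImage? {
    let creator = try await ImageCreator()
    let external = ImagePlaygroundStyle.externalProvider

    guard creator.availableStyles.contains(where: { $0.id == external.id }) else {
        print("externalProvider not reported; styles: \(creator.availableStyles.map(\.id))")
        return nil
    }

    let images = creator.images(for: [.text("Love")], style: external, limit: 1)
    for try await created in images {
        return created.cgImage
    }
    return nil
}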
1
0
767
2w
How to test for VisualIntelligence available on device?
I'm adding Visual Intelligence support to my app, and now want to add a Tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my Tip so it is only shown on devices where Visual Intelligence is supported (e.g., not on an iPhone 14 Pro Max). What is the best way to determine availability for this TipKit rule? Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
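One possible shape for the TipKit side, assuming the app determines support itself and feeds the result into a tip parameter; the support check (the actual open question here) is left as a placeholder, and the tip name and copy are made up for illustration.

import SwiftUI
import TipKit

// Tip that only becomes eligible once the app has decided Visual Intelligence
// is supported on this device. How that decision is made is the open question;
// this only shows the TipKit parameter/rule plumbing.
struct VisualIntelligenceTip: Tip {
    // Rules are re-evaluated whenever this parameter changes.
    @Parameter
    static var isVisualIntelligenceSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Point the camera at something to find it in the app.") }

    var rules: [Rule] {
        #Rule(Self.$isVisualIntelligenceSupported) { $0 == true }
    }
}

// Early in app launch, after whatever availability check the app settles on:
// VisualIntelligenceTip.isVisualIntelligenceSupported = supported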
0
0
558
4w
UI Guidelines for Apple Intelligence?
Are there any guidelines for using Foundation Models to generate text for users in response to some canned queries? Should we use a special icon or text to let the user know that Apple Intelligence is generating the text? Should there be a disclaimer such as "Apple Intelligence can make mistakes, please check for accuracy"?
1
0
600
4w
AppShortcuts.xcstrings does not translate each invocation phrase option separately, just the first
Due to our minimum iOS version, this is my first time using .xcstrings instead of .strings for AppShortcuts. When using the "migrate .strings to .xcstrings" Xcode context menu option, an .xcstrings catalog is produced that, as expected, has each invocation phrase as a separate string key. However, after compilation, the catalog changes to group all invocation phrases under the first phrase listed for each intent (see attached screenshot). It is possible to hover in blank space on the right and add more translations, but there is no 1:1 key matching requirement to the phrases on the left, nor a requirement that there are the same number of keys in one language vs. another. (The lines just happen to align due to my window size.) What does that mean, practically?

Do all sub-phrases in each language in AppShortcuts.xcstrings get processed during compilation, even if there isn't an equivalent phrase key declared in the AppShortcut (e.g., the ja translation has more phrases than the English)? (That makes some logical sense, as these phrases need not be 1:1 across languages.) In the AppShortcut declaration, if I delete all but the top invocation phrase, does nothing change with Siri? Is there something I'm doing incorrectly?

struct WatchShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: QuickAddWaterIntent(),
            phrases: [
                "\(.applicationName) log water",
                "\(.applicationName) log my water",
                "Log water in \(.applicationName)",
                "Log my water in \(.applicationName)",
                "Log a bottle of water in \(.applicationName)",
            ],
            shortTitle: "Log Water",
            systemImageName: "drop.fill"
        )
    }
}
0
0
233
Aug ’25
OpenIntent not executed with Visual Intelligence
I'm building a new feature with the Visual Intelligence framework. My implementations of IndexedEntity and IntentValueQuery work as expected, and I can see a list of objects in the visual search result. However, my OpenIntent doesn't work. When I tap on an object, I get a message on screen, "Sorry, something went wrong...", and the breakpoint in perform() is never triggered. Things I've tried:

- I added @MainActor before perform(); this didn't change anything.
- I set static let openAppWhenRun: Bool = true and static var supportedModes: IntentModes = [.foreground(.immediate)]; still nothing.
- I created a different intent for the "see more" button at the end of the feed. This AppIntent with schema: .visualIntelligence.semanticContentSearch worked, and perform() is executed.
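For comparison, a bare-bones shape of an open intent whose perform() should fire when the entity is tapped; the entity type and titles below are placeholders, and this only shows the structure as I understand the App Intents documentation, not a fix for the error above.

import AppIntents

// Placeholder entity standing in for the app's real indexed entity.
struct ItemEntity: AppEntity {
    static let typeDisplayRepresentation = TypeDisplayRepresentation(name: "Item")
    static let defaultQuery = ItemQuery()

    var id: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(id)")
    }
}

struct ItemQuery: EntityQuery {
    func entities(for identifiers: [String]) async throws -> [ItemEntity] {
        identifiers.map { ItemEntity(id: $0) }
    }
}

// Minimal OpenIntent: `target` is the tapped entity and perform() is where
// navigation into the app would happen.
struct OpenItemIntent: OpenIntent {
    static let title: LocalizedStringResource = "Open Item"

    @Parameter(title: "Item")
    var target: ItemEntity

    @MainActor
    func perform() async throws -> some IntentResult {
        // Navigate to the tapped item inside the app here.
        return .result()
    }
}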
10
0
294
Aug ’25
ImageCreator fails with GenerationError Code=11 on Apple Intelligence-enabled device
When I ran the following code on a physical iPhone device that supports Apple Intelligence, I encountered the following error log. What does this internal error code mean?

Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 “(null)”, returning a generic error instead

let imageCreator = try await ImageCreator()
let style = imageCreator.availableStyles.first ?? .animation
let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
for try await result in stream {
    // error: ImagePlayground.ImageCreator.Error.creationFailed
    _ = result.cgImage
}
0
1
222
Jul ’25
Foundation Models not working in Simulator?
I'm attempting to run a basic Foundation Model prototype in Xcode 26, but I'm getting the error below, using the iPhone 16 simulator with iOS 26. Should these models be working yet? Do I need to be running macOS 26 for these to work? (I hope that's not it)

Error: Passing along Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides} in response to ExecuteRequest

Playground to reproduce:

#Playground {
    let session = LanguageModelSession()
    do {
        let response = try await session.respond(to: "What's happening?")
    } catch {
        let error = error
    }
}
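Not an answer to the simulator question, but a small availability check can at least distinguish missing model assets from an ineligible device or Apple Intelligence being switched off; this assumes the SystemLanguageModel availability API in FoundationModels and may need adjusting against the shipping SDK.

import FoundationModels

// Check whether the on-device model is usable before opening a session.
// The unavailable reason helps tell missing assets apart from an
// unsupported device or Apple Intelligence being disabled.
func reportModelAvailability() {
    let model = SystemLanguageModel.default
    switch model.availability {
    case .available:
        print("Foundation model is available")
    case .unavailable(let reason):
        print("Foundation model unavailable: \(reason)")
    }
}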
14
3
1.8k
Jul ’25
Does ImageRequestHandler(data:) include depth data from AVCapturePhoto?
Hi all, I'm capturing a photo using AVCapturePhotoOutput, and I've set:

let photoSettings = AVCapturePhotoSettings()
photoSettings.isDepthDataDeliveryEnabled = true

Then I create the handler like this:

let data = photo.fileDataRepresentation()
let handler = try ImageRequestHandler(data: data, orientation: .right)

Now I'm wondering: if depth data delivery is enabled, is it actually included and used when I pass the Data to ImageRequestHandler? Or do I need to explicitly pass the depth data using the other initializer?

let handler = try ImageRequestHandler(
    cvPixelBuffer: photo.pixelBuffer!,
    depthData: photo.depthData,
    orientation: .right
)

In short: does ImageRequestHandler(data:) make use of embedded depth info from AVCapturePhoto.fileDataRepresentation(), or is the pixel buffer + explicit depth data required? Thanks for any clarification!
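Not a definitive answer on what ImageRequestHandler(data:) does internally, but one way to at least verify whether fileDataRepresentation() actually embeds depth is to read the auxiliary data back out of the encoded image with ImageIO; this is a sketch using standard ImageIO/AVFoundation calls, independent of Vision.

import AVFoundation
import ImageIO

// Inspect encoded photo data for embedded depth/disparity auxiliary data.
// Useful to confirm whether fileDataRepresentation() carried depth at all,
// separately from how ImageRequestHandler(data:) treats it.
func embeddedDepthData(in photoData: Data) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithData(photoData as CFData, nil) else {
        return nil
    }
    // Depth from the capture pipeline is often stored as disparity;
    // check both auxiliary data types.
    let auxTypes = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for auxType in auxTypes {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any],
           let depth = try? AVDepthData(fromDictionaryRepresentation: info) {
            return depth
        }
    }
    return nil
}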
1
0
229
Jul ’25
Core Spotlight Semantic Search - still non-functional for 1+ year after WWDC24?
After more than a year since the announcement, I'm still unable to get this feature working properly, and I'm wondering if there are known issues or missing implementation details.

Current setup:
Device: iPhone 16 Pro Max
iOS: 26 beta 3
Development: tested on both Xcode 16 and Xcode 26
Implementation: following the official documentation examples

The problem: semantic search simply doesn't work. Lexical search functions normally, but enabling semantic search produces identical results to having it disabled. It's as if the feature isn't actually processing.

Error output (Xcode 26):
[QPNLU][qid=5] Error Domain=com.apple.SpotlightEmbedding.EmbeddingModelError Code=-8007 "Text embedding generation timeout (timeout=100ms)"
[CSUserQuery][qid=5] got a nil / empty embedding data dictionary
[CSUserQuery][qid=5] semanticQuery failed to generate, using "(false)"

In Xcode 16, there are no error messages at all; the semantic search just silently fails.

Missing resources: the sample application mentioned during the WWDC24 presentation doesn't appear to have been released, which makes it difficult to verify whether my implementation is correct.

Would really appreciate any guidance or clarification on the current status of this feature. Has anyone in the community successfully implemented this?
0
3
421
Jul ’25