Apple Intelligence


Apple Intelligence is the personal intelligence system that puts powerful generative models right at the core of your iPhone, iPad, and Mac and powers incredible new features to help users communicate, work, and express themselves.

Posts under Apple Intelligence subtopic


Playground
Woke up to a notification saying Playground, Genmoji, etc. were ready, but every time I try to use them it says early access was requested. Has anyone else had this issue? If so, how did you fix it?
0 replies · 0 boosts · 325 views · Nov ’24
Is it possible to set writingToolsBehavior globally?
Hello, we're investigating an option to disable Writing Tools for some customers in our app. I'm aware of the writingToolsBehavior property on UITextView etc., but we would like a way to set this globally without having to update all UITextView instances (or future instances). Is there any API to do this?

We tried using UITextView.appearance().writingToolsBehavior = .none and it seemed promising on the 18.2 beta; however, it introduced crashes on devices running 18.1. The crashes look like:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UITextView: 0x14462c000; frame = (0 0; 0 0); text = ''; userInteractionEnabled = NO; gestureRecognizers = <NSArray: 0x30067cb40>; backgroundColor = UIExtendedGrayColorSpace 0 0; layer = <CALayer: 0x3009b1ba0>; contentOffset: {0, 0}; contentSize: {0, 0}; adjustedContentInset: {0, 0, 0, 0}> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'
```

Similarly, even on the 18.2 beta, if we used UITextField.appearance().writingToolsBehavior = .none we would see crashes for any search fields, like:

```
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Have you sent -setWritingToolsBehavior: to <UISearchBarTextField: 0x141c04a00; frame = (0 0; 0 0); text = ''; opaque = NO; gestureRecognizers = <NSArray: 0x301fe15c0>; placeholder = Search Leads; borderStyle = RoundedRect; background = <_UITextFieldSystemBackgroundProvider: 0x3015de960: backgroundView=<_UISearchBarSearchFieldBackgroundView: 0x141c60200; frame = (0 0; 0 0); opaque = NO; autoresize = W+H; userInteractionEnabled = NO; layer = <CALayer: 0x3015de8e0>>, fillColor=(null), textfield=<UISearchBarTextField: 0x141c04a00>>; layer = <CALayer: 0x3015de240>> off the main thread? To verify, look for a complaint in the logs: "Unsupported use of UIKit…", and fix the problem if you find it. If your use is main-thread only please file a radar on UIKit, and attach this log. exercisedImplementations = { "setWritingToolsBehavior:" = ( ); }'
```

Is it possible to set this globally?
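A minimal sketch of a per-instance alternative, for reference: a UITextView subclass that opts out of Writing Tools only on OS versions that have the property, so the selector is never sent to older UIKit the way the appearance proxy does. This is not a global switch, and it is untested against the 18.1 crash quoted above.

```swift
import UIKit

// Sketch: opt a text view out of Writing Tools per instance, gated by
// availability so -setWritingToolsBehavior: is never sent below iOS 18.
final class NoWritingToolsTextView: UITextView {
    override init(frame: CGRect, textContainer: NSTextContainer?) {
        super.init(frame: frame, textContainer: textContainer)
        disableWritingTools()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        disableWritingTools()
    }

    private func disableWritingTools() {
        if #available(iOS 18.0, *) {
            // Per the exception text above, this must happen on the main thread.
            writingToolsBehavior = .none
        }
    }
}
```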
0 replies · 3 boosts · 431 views · Nov ’24
Block Apple Intelligence
Hi everyone,

Could someone confirm if it's currently possible, or if there are any plans, to restrict users from enabling Apple Intelligence altogether? I understand that we can block individual features using MDM, but I'm interested in knowing if we can prevent users from toggling Apple Intelligence on and off in System Settings entirely.

Thanks!

Kind regards,
Filipe Nogueira
0 replies · 3 boosts · 540 views · Nov ’24
App Intents not working with placeholders and without app name
I tried this:

```swift
struct CarShortcutsProvider: AppShortcutsProvider {
    @AppShortcutsBuilder static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: LockCarIntent(),
            phrases: ["Lock my car with \(.applicationName)", "Lock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Lock Car"),
            systemImageName: "lock.fill"
        )
        AppShortcut(
            intent: UnlockCarIntent(),
            phrases: ["Unlock my car with \(.applicationName)", "Unlock my \(\.$car) with \(.applicationName)"],
            shortTitle: LocalizedStringResource("Unlock Car"),
            systemImageName: "lock.open.fill"
        )
    }
}
```

but Siri only understands "unlock my car", not the phrase with the placeholder. Siri then asks me for the car, and it understands it, but not in one sentence. Is there something wrong with my code? Also, I tried it without applicationName first, and then it didn't work at all with Siri. Is this a general limitation of App Intents? I thought the goal was to reduce friction; if the user has to mention the app name all the time, it adds friction.
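For what it's worth, parameterized phrases like \(\.$car) are matched against entity values Siri already knows about, so the backing AppEntity's query generally needs to surface them ahead of time via suggestedEntities(), and updateAppShortcutParameters() must run when the values change. A minimal sketch under that assumption; CarEntity, CarQuery, and allCars are hypothetical stand-ins, not the poster's actual types:

```swift
import AppIntents

// Hypothetical entity behind \(\.$car); the names are stand-ins.
struct CarEntity: AppEntity {
    static var typeDisplayRepresentation = TypeDisplayRepresentation(name: "Car")
    static var defaultQuery = CarQuery()

    var id: String
    var name: String

    var displayRepresentation: DisplayRepresentation {
        DisplayRepresentation(title: "\(name)")
    }
}

struct CarQuery: EntityQuery {
    // Placeholder data source for the sketch.
    static let allCars = [CarEntity(id: "1", name: "Roadster")]

    func entities(for identifiers: [CarEntity.ID]) async throws -> [CarEntity] {
        Self.allCars.filter { identifiers.contains($0.id) }
    }

    // Parameterized phrases are built from values known in advance;
    // returning them here is what lets Siri match "Unlock my Roadster"
    // in a single utterance.
    func suggestedEntities() async throws -> [CarEntity] {
        Self.allCars
    }
}

// Re-donate whenever the set of cars changes:
// CarShortcutsProvider.updateAppShortcutParameters()
```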
0 replies · 2 boosts · 435 views · Nov ’24
Get NFC data from an identity card
Hello, I have to create a Swift app that scans an NFC identity card, extracts its data, and converts it to human-readable form. I do it with the code below:

```swift
import CoreNFC

class NFCIdentityCardReader: NSObject, NFCTagReaderSessionDelegate {
    var session: NFCTagReaderSession?

    func tagReaderSessionDidBecomeActive(_ session: NFCTagReaderSession) {
        print("\(session.description)")
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didInvalidateWithError error: any Error) {
        print("NFC Error: \(error.localizedDescription)")
    }

    func beginScanning() {
        guard NFCTagReaderSession.readingAvailable else {
            print("NFC is not supported on this device")
            return
        }
        session = NFCTagReaderSession(pollingOption: .iso14443, delegate: self, queue: nil)
        session?.alertMessage = "Hold your NFC identity card near the device."
        session?.begin()
    }

    func tagReaderSession(_ session: NFCTagReaderSession, didDetect tags: [NFCTag]) {
        guard let tag = tags.first else {
            session.invalidate(errorMessage: "No tag detected")
            return
        }
        session.connect(to: tag) { (error) in
            if let error = error {
                session.invalidate(errorMessage: "Connection error: \(error.localizedDescription)")
                return
            }
            switch tag {
            case .miFare(let miFareTag):
                self.readMiFareTag(miFareTag, session: session)
            case .iso7816(let iso7816Tag):
                self.readISO7816Tag(iso7816Tag, session: session)
            case .iso15693, .feliCa:
                session.invalidate(errorMessage: "Unsupported tag type")
            @unknown default:
                session.invalidate(errorMessage: "Unknown tag type")
            }
        }
    }

    private func readMiFareTag(_ tag: NFCMiFareTag, session: NFCTagReaderSession) {
        // Read from MiFare card, assuming it's formatted as an identity card
        let command: [UInt8] = [0x30, 0x04] // Example: Read command for block 4
        let requestData = Data(command)
        tag.sendMiFareCommand(commandPacket: requestData) { (response, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading MiFare: \(error.localizedDescription)")
                return
            }
            let readableData = String(data: response, encoding: .utf8) ?? response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }

    private func readISO7816Tag(_ tag: NFCISO7816Tag, session: NFCTagReaderSession) {
        let selectAppCommand = NFCISO7816APDU(
            instructionClass: 0x00,
            instructionCode: 0xA4,
            p1Parameter: 0x04,
            p2Parameter: 0x00,
            data: Data([0xA0, 0x00, 0x00, 0x02, 0x47, 0x10, 0x01]),
            expectedResponseLength: -1
        )
        tag.sendCommand(apdu: selectAppCommand) { (response, sw1, sw2, error) in
            if let error = error {
                session.invalidate(errorMessage: "Error reading ISO7816: \(error.localizedDescription)")
                return
            }
            let readableData = response.map { String(format: "%02X", $0) }.joined()
            session.alertMessage = "ID Card Data: \(readableData)"
            session.invalidate()
        }
    }
}
```

But I get null. I think the data is encrypted. How can I convert it to readable data without the MRZ; is that possible? I need to get personal information from an identity card via Core NFC. Thanks in advance. Best regards
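As a diagnostic aside, ISO 7816 responses carry two status bytes that the code above ignores. A minimal sketch of checking them (standard ISO 7816-4 status words, nothing specific to Core NFC or any national card): an eMRTD-style document typically answers 0x6982 ("security status not satisfied") until the reader authenticates via BAC/PACE with keys derived from the MRZ or CAN, which would explain the empty result described above.

```swift
// Interpret the sw1/sw2 status words returned alongside an APDU response.
func apduSucceeded(sw1: UInt8, sw2: UInt8) -> Bool {
    switch (sw1, sw2) {
    case (0x90, 0x00):
        return true // command completed normally
    case (0x69, 0x82):
        // Typical for protected files on an eMRTD before BAC/PACE.
        print("Security status not satisfied: the card requires authentication first")
        return false
    default:
        print(String(format: "Unexpected status words: %02X %02X", sw1, sw2))
        return false
    }
}
```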
0 replies · 0 boosts · 126 views · Mar ’25
Writing Tools options
Hi team,

We have implemented Writing Tools inside a WebView that allows users to type content in a textarea. When the "Show Writing Tools" button is clicked, the AI-powered editor opens, and after clicking the "Rewrite" button the AI modifies the text. However, when clicking the "Replace" button, the rewritten text does not update the original textarea. Kindly check and help me.

```swift
showButton.addTarget(self, action: #selector(showWritingTools(_:)), for: .touchUpInside)

@available(iOS 18.2, *)
optional func showWritingTools(_ sender: Any)
```

Note: the same case works in a TextView.
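If the textarea is hosted in a WKWebView, a minimal sketch of the configuration knob involved, assuming iOS 18.2's Writing Tools support for web content; whether .complete changes the Replace behavior for a textarea is exactly the open question here.

```swift
import WebKit

// Sketch: opt web content into the full Writing Tools experience.
// .complete allows rewrites to be applied back to editable content;
// the default behavior may be more limited.
let configuration = WKWebViewConfiguration()
if #available(iOS 18.2, *) {
    configuration.writingToolsBehavior = .complete
}
let webView = WKWebView(frame: .zero, configuration: configuration)
```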
0 replies · 0 boosts · 141 views · Mar ’25
Siri 2.0 (suggestions and future updates)
Hey dear developers! This post is meant as a place for future Siri updates and improvements, and also for wishes, so that everyone in this forum can share their opinions and ideas. Please stay friendly, and have fun! I had already thought about developing a demo app to demonstrate my idea for a better Siri. One of my many wished-for changes: significantly enhanced language recognition. Instead of manually setting the language, Siri would automatically recognize the language you intend to use, making language switching much more efficient. Simply speak the language you want to communicate in, and Siri recognizes it and responds accordingly. Whether you speak English, German, or Japanese, Siri responds in the language you chose.
0 replies · 1 boost · 143 views · Mar ’25
Threading issues when using debugger
Hi, I am modifying the sample camera app that is here: https://developer.apple.com/tutorials/sample-apps/capturingphotos-camerapreview ... In processPreviewImages, I am using the Vision APIs to generate a segmentation mask for a person/object, then compositing that person onto a different background (with some other filtering). The filtering and compositing are done via Core Image. At the end, I convert the CIImage to a CGImage, then to a SwiftUI Image. When I run it on my iPhone, it works fine and has not crashed. When I run it on the iPhone with the debugger, it crashes within a few seconds with:

```
EXC_BAD_ACCESS in libRPAC.dylib`std::__1::__hash_table<std::__1::__hash_value_type<long, qos_info_t>, std::__1::__unordered_map_hasher<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::hash, std::__1::equal_to, true>, std::__1::__unordered_map_equal<long, std::__1::__hash_value_type<long, qos_info_t>, std::__1::equal_to, std::__1::hash, true>, std::__1::allocator<std::__1::__hash_value_type<long, qos_info_t>>>::__emplace_unique_key_args<long, std::__1::piecewise_construct_t const&, std::__1::tuple<long const&>, std::__1::tuple<>>
```

It had previously been working fine with the debugger, so I'm not sure what has changed. Is there a difference in how the Vision APIs are executed if the debugger is attached vs. not?
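For reference, a minimal sketch of the mask-then-composite pipeline described above, using VNGeneratePersonSegmentationRequest and the CIBlendWithMask filter; compositePerson is a hypothetical helper, not the sample app's actual code.

```swift
import Vision
import CoreImage

// Sketch: generate a person mask with Vision, then composite the person
// over a replacement background with Core Image.
func compositePerson(in image: CIImage, over background: CIImage) throws -> CIImage? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced

    let handler = VNImageRequestHandler(ciImage: image, options: [:])
    try handler.perform([request])
    guard let maskBuffer = request.results?.first?.pixelBuffer else { return nil }

    // Scale the lower-resolution mask up to the source image's extent.
    var mask = CIImage(cvPixelBuffer: maskBuffer)
    mask = mask.transformed(by: CGAffineTransform(
        scaleX: image.extent.width / mask.extent.width,
        y: image.extent.height / mask.extent.height))

    let blend = CIFilter(name: "CIBlendWithMask")!
    blend.setValue(image, forKey: kCIInputImageKey)
    blend.setValue(background, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    return blend.outputImage
}
```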
0 replies · 0 boosts · 62 views · Apr ’25
Is it possible to let Siri monitor phone calls and notify me when a certain trigger happens?
The specific context is that I would like to build an agent that monitors my phone call (with customer support, for example), simply identifies whether or not I'm still on hold, and notifies me when I'm not. Currently, after reading the docs, I don't think it's possible yet, but I'm so annoyed by customer-support calls that I'm willing to go the distance and see if there's any way.
0 replies · 0 boosts · 108 views · Jun ’25
Various On-Device Framework APIs & ChatGPT
Posting a follow-up question after the WWDC 2025 Machine Learning & AI Frameworks group lab on June 12. Regarding the on-device APIs of any of the AI frameworks (Foundation Models, the Vision framework, etc.): is there a response condition or path where the API outsources its input to ChatGPT if the user has allowed this, the way Siri does? Ignore this if the answer is no; otherwise, is this handled behind the scenes or by the developer?
0 replies · 0 boosts · 225 views · Jun ’25
SwiftUI App Intent throws error when using requestDisambiguation with @Parameter property wrapper
I'm implementing an App Intent for my iOS app that helps users plan trip activities. It only works when run as a shortcut, but not when using voice through Siri. There are two issues:

1. The ShortcutsTripEntity will only accept a voice input for one specific trip, but not others.
2. I'm stuck with a thrown error when trying to use requestDisambiguation() on the activity-day @Parameter property.

How do I rectify these issues? This is blocking me from completing a critical feature that lets users quickly plan activities through Siri and Shortcuts.

Expected behavior for trip input: the intent should make Siri accept the spoken trip input from any of the options.
Actual behavior for trip input: Siri only accepts the same trip when spoken, but accepts any when selected by click/touch.
Expected behavior for day input: Siri should accept the spoken selected option.
Actual behavior for day input: Siri only accepts an input by click/touch, yet throws an error at runtime.

I'm happy to provide more code, but here's the relevant part:

```swift
struct PlanActivityTestIntent: AppIntent {
    @Parameter(
        title: "Trip",
        description: "The trip to plan an activity for",
        default: ShortcutsTripEntity(id: UUID().uuidString, title: "Untitled trip"),
        requestValueDialog: "Which trip would you like to add an activity to?"
    )
    var tripEntity: ShortcutsTripEntity

    @Parameter(title: "Activity Title", description: "The title of the activity", requestValueDialog: "What do you want to do or see?")
    var title: String

    @Parameter(title: "Activity Day", description: "Activity Day", default: ShortcutsItineraryDayEntity(itineraryDay: .init(itineraryId: UUID(), date: .now), timeZoneIdentifier: "UTC"))
    var activityDay: ShortcutsItineraryDayEntity

    func perform() async throws -> some ProvidesDialog {
        // ...other code...
        let tripsStore = TripsStore()
        // Load trips and map them to entities.
        try? await tripsStore.getTrips()
        let tripsAsEntities = tripsStore.trips.map { trip in
            let id = trip.id ?? UUID()
            let title = trip.title
            return ShortcutsTripEntity(id: id.uuidString, title: title, trip: trip)
        }

        // Ask the user to select a trip. This line doesn't accept a voice
        // answer. Why?
        let selectedTrip = try await $tripEntity.requestDisambiguation(
            among: tripsAsEntities,
            dialog: .init(
                full: "Which of the \(tripsAsEntities.count) trips would you like to add an activity to?",
                supporting: "Select a trip",
                systemImageName: "safari.fill"
            )
        )

        // This line throws an error.
        let selectedDay = try await $activityDay.requestDisambiguation(
            among: daysAsEntities,
            dialog: "Which day would you like to plan an activity for?"
        )
        // ...other code...
    }
}
```

Here are some related images that might help:
0 replies · 0 boosts · 129 views · Jul ’25
Core Spotlight Semantic Search - still non-functional for 1+ year after WWDC24?
After more than a year since the announcement, I'm still unable to get this feature working properly, and I'm wondering if there are known issues or missing implementation details.

Current setup:
- Device: iPhone 16 Pro Max
- iOS: 26 beta 3
- Development: tested on both Xcode 16 and Xcode 26
- Implementation: following the official documentation examples

The problem: semantic search simply doesn't work. Lexical search functions normally, but enabling semantic search produces results identical to having it disabled. It's as if the feature isn't actually processing.

Error output (Xcode 26):

```
[QPNLU][qid=5] Error Domain=com.apple.SpotlightEmbedding.EmbeddingModelError Code=-8007 "Text embedding generation timeout (timeout=100ms)"
[CSUserQuery][qid=5] got a nil / empty embedding data dictionary
[CSUserQuery][qid=5] semanticQuery failed to generate, using "(false)"
```

In Xcode 16, there are no error messages at all; the semantic search just silently fails.

Missing resources: the sample application mentioned during the WWDC24 presentation doesn't appear to have been released, which makes it difficult to verify whether my implementation is correct.

Would really appreciate any guidance or clarification on the current status of this feature. Has anyone in the community successfully implemented this?
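For context, a minimal sketch of the ranked/semantic query path along the lines of the WWDC24 session (CSUserQuery.prepare() plus enableRankedResults); treat the exact calls as assumptions recalled from that session rather than a verified recipe.

```swift
import CoreSpotlight

// Sketch: ranked user query. The semantic layer is supposed to engage
// automatically once embeddings are available, which the -8007 timeout
// quoted above suggests is not happening.
func runQuery(_ queryString: String) async throws -> Int {
    CSUserQuery.prepare() // warm up query resources, per the WWDC24 session

    let context = CSUserQueryContext()
    context.enableRankedResults = true
    context.fetchAttributes = ["title", "contentDescription"]

    let query = CSUserQuery(userQueryString: queryString, userQueryContext: context)
    var count = 0
    for try await _ in query.responses {
        count += 1 // items and suggestions both arrive on this stream
    }
    return count
}
```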
0 replies · 3 boosts · 421 views · Jul ’25
ImageCreator fails with GenerationError Code=11 on Apple Intelligence-enabled device
When I ran the following code on a physical iPhone that supports Apple Intelligence, I encountered the error log below. What does this internal error code mean?

```
Image generation failed with NSError in a different domain: Error Domain=ImagePlaygroundInternal.ImageGeneration.GenerationError Code=11 "(null)", returning a generic error instead
```

```swift
import ImagePlayground

let imageCreator = try await ImageCreator()
let style = imageCreator.availableStyles.first ?? .animation
let stream = imageCreator.images(for: [.text("cat")], style: style, limit: 1)
for try await result in stream {
    // error: ImagePlayground.ImageCreator.Error.creationFailed
    _ = result.cgImage
}
```
0 replies · 1 boost · 222 views · Jul ’25
AppShortcuts.xcstrings does not translate each invocation phrase option separately, just the first
Due to our minimum iOS version, this is my first time using .xcstrings instead of .strings for AppShortcuts. When using the "migrate .strings to .xcstrings" Xcode context-menu option, an .xcstrings catalog is produced that, as expected, has each invocation phrase as a separate string key. However, after compilation, the catalog changes to group all invocation phrases under the first phrase listed for each intent (see attached screenshot). It is possible to hover in blank space on the right and add more translations, but there is no 1:1 key-matching requirement to the phrases on the left, nor a requirement that there are the same number of keys in one language vs. another. (The lines just happen to align due to my window size.)

What does that mean, practically?
- Do all sub-phrases in each language in AppShortcuts.xcstrings get processed during compilation, even if there isn't an equivalent phrase key declared in the AppShortcut (e.g., the ja translation has more phrases than the English)? (That makes some logical sense, as these phrases need not be 1:1 across languages.)
- In the AppShortcut declaration, if I delete all but the top invocation phrase, does nothing change with Siri?
- Is there something I'm doing incorrectly?

```swift
struct WatchShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: QuickAddWaterIntent(),
            phrases: [
                "\(.applicationName) log water",
                "\(.applicationName) log my water",
                "Log water in \(.applicationName)",
                "Log my water in \(.applicationName)",
                "Log a bottle of water in \(.applicationName)",
            ],
            shortTitle: "Log Water",
            systemImageName: "drop.fill"
        )
    }
}
```
0 replies · 0 boosts · 233 views · Aug ’25
How to test whether Visual Intelligence is available on a device?
I'm adding Visual Intelligence support to my app, and now want to add a tip using TipKit to guide users to this feature from within my app. I want to add a Rule to my tip that will only show it on devices where Visual Intelligence is supported (e.g., not an iPhone 14 Pro Max). What is the best way to determine availability in order to set this TipKit rule? Here's the documentation I'm following for Visual Intelligence: https://developer.apple.com/documentation/visualintelligence/integrating-your-app-with-visual-intelligence
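A minimal sketch of the TipKit side, assuming you can compute support yourself: a parameter-based rule keeps the tip hidden until the flag flips. isVisualIntelligenceSupported() is a hypothetical check; the availability API to back it is exactly what's being asked here.

```swift
import SwiftUI
import TipKit

// Tip gated on a parameter we set ourselves after probing support.
struct VisualIntelligenceTip: Tip {
    // Flipped at launch from your own capability check (hypothetical).
    @Parameter
    static var isSupported: Bool = false

    var title: Text { Text("Try Visual Intelligence") }
    var message: Text? { Text("Search what you see, right from this app.") }

    var rules: [Rule] {
        #Rule(Self.$isSupported) { $0 == true }
    }
}

// At launch, once availability has been determined somehow:
// VisualIntelligenceTip.isSupported = isVisualIntelligenceSupported()
```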
0 replies · 0 boosts · 558 views · 4w