iOS 18.3.1, iPhone 16 Pro.
I pick photos from the user's photo library with a connected physical keyboard, using:
.photosPicker(isPresented: $viewModel.isImagePickerPresented, selection: $viewModel.selectedImageItem, matching: .images)
When the picker appears, accessibility focus moves to the Dynamic Island instead of the Cancel button. There is no way to navigate the photos picker by keyboard without tapping the view to move focus into it manually. I noticed the same behavior in the Notes app.
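For reference, the picker is presented roughly like this (a minimal repro sketch; the state names are illustrative, not our real view model):

import SwiftUI
import PhotosUI

struct PickerScreen: View {
    // Illustrative state; the real app drives these from a view model.
    @State private var isImagePickerPresented = false
    @State private var selectedImageItem: PhotosPickerItem?

    var body: some View {
        Button("Choose Photo") { isImagePickerPresented = true }
            .photosPicker(isPresented: $isImagePickerPresented,
                          selection: $selectedImageItem,
                          matching: .images)
    }
}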
Hello,
I'm observing a persistent and frustrating issue with an accessibility feature called Guided Access that seems to affect many users across different devices and iOS versions.
Problem
The triple-click gesture (side or home button) to activate Guided Access intermittently stops working after the device has been in normal use for a few days (typically 2-7 days) without a restart.
I have done some debugging for Apple in FB16094026 but received no updates after 6 months. So I'm posting here in the hope that this will be solved sooner. A core accessibility feature shouldn't require daily device restarts to function reliably.
Details:
Guided Access is correctly enabled in Settings > Accessibility.
Initially, the triple-click works perfectly.
After a period of normal device use (2-7 days), the triple-click no longer triggers Guided Access in any app.
Restarting the device temporarily resolves the issue, and Guided Access triple-click works again immediately after a reboot. However, the problem recurs after continued use.
Simply toggling the Guided Access setting on/off does NOT fix it.
Additional observation: Even trying to select Guided Access manually via the Accessibility Shortcut menu (if multiple shortcuts are enabled) sometimes fails to launch the feature when in this state.
Affected:
iPhones and iPads
Observed on iOS/iPadOS 16, 17, and now 18, indicating it's a long-standing bug.
Impact:
Guided Access is a crucial accessibility feature for many users (for focus, special needs, parental controls, etc.). Its unreliable activation significantly disrupts the daily workflows of users who rely on it. This issue appears to be widespread, with many reports across forums like Apple Support Communities and Reddit.
For example, this post received over 1k upvotes.
To see more examples please refer to FB16094026.
Could Apple please investigate this bug urgently? Thanks.
I remember that Vision Pro's Dwell Control could previously be set to 0.1 seconds, but now it can't. Is there a way to adjust it?
Hi everyone,
I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?").
Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), both of which are critical in Sign Language grammar.
I'd like to propose/discuss an architecture leveraging the current LiDAR + Neural Engine capabilities found in iPhone devices to solve this.
The Concept: Skeleton-based Normalization
Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's Body Tracking to abstract the input.
Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
Data Normalization: Extract only the vector coordinates (X, Y, Z of joints). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
Comparison: Feed these vectors into a CoreML model trained on "Reference Skeletons" (recorded by native signers).
Feedback Loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific correction (e.g., "Raise your elbow 10 degrees").
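A rough Swift sketch of the capture and comparison steps (assuming ARBodyTrackingConfiguration; note that ARKit's body skeleton has only coarse hand joints, so finger-level feedback would need an additional hand-pose source):

import ARKit
import simd

// Extract joint positions from an ARBodyAnchor. jointModelTransforms are
// relative to the body anchor, which already normalizes away where the
// user stands in the room.
func poseVector(from anchor: ARBodyAnchor) -> [simd_float3] {
    anchor.skeleton.jointModelTransforms.map { simd_make_float3($0.columns.3) }
}

// Mean per-joint Euclidean distance as a simple correctness score against
// a reference pose recorded from a native signer.
func poseDistance(_ user: [simd_float3], _ reference: [simd_float3]) -> Float {
    precondition(user.count == reference.count)
    return zip(user, reference)
        .map { simd_distance($0, $1) }
        .reduce(0, +) / Float(user.count)
}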
Why this approach?
Solves Occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
Privacy: We are processing coordinates, not video streams.
Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.
Has anyone experimented with using ARKit Body Anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "Skeleton First" approach is the key to scalable Sign Language education apps.
Looking forward to hearing your thoughts.
In our application we use a popover view with VoiceOver enabled. When the user navigates inside the popover and reaches the last element, a further right swipe should dismiss the popover.
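One workaround sketch (not an official pattern): append a sentinel accessibility element after the last real element and dismiss when it gains focus. The names below are illustrative:

import UIKit

// Hypothetical sentinel: placed last in the container's
// accessibilityElements array, so a right swipe past the final real
// element lands here and triggers the dismissal.
final class DismissSentinel: UIAccessibilityElement {
    var onFocus: (() -> Void)?

    override func accessibilityElementDidBecomeFocused() {
        onFocus?() // e.g. { popoverController.dismiss(animated: true) }
    }
}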
Context:
We are using UIKit to provide accessibility in our app for our iOS users. Our app mainly contains documents/books that users can read.
Issue: VoiceOver is skipping lines when they contain leading spaces. We have observed this issue in different languages. It only happens with line granularity; other granularities seem to work as expected.
Implementation:
We are using the APIs below to provide line content to VoiceOver.
UIAccessibilityReadingContent
- accessibilityPageContent
- accessibilityFrameForLineNumber
- accessibilityContentForLineNumber
We are creating UIAccessibilityElement objects to pass to VoiceOver and each UIAccessibilityElement implements UIAccessibilityReadingContent to provide readable content.
We also use the APIs below to cross element boundaries for all granular navigation.
accessibilityNextTextNavigationElement
accessibilityPreviousTextNavigationElement
We want to know whether skipping lines that have leading spaces is expected behavior or a bug in UIKit.
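For reference, a stripped-down version of one of our reading elements (the line storage here is illustrative):

import UIKit

final class PageElement: UIAccessibilityElement, UIAccessibilityReadingContent {
    var lines: [String] = []       // may contain leading spaces
    var lineFrames: [CGRect] = []  // one frame per line, in screen coordinates

    func accessibilityLineNumber(for point: CGPoint) -> Int {
        lineFrames.firstIndex { $0.contains(point) } ?? NSNotFound
    }

    func accessibilityContent(forLineNumber lineNumber: Int) -> String? {
        lines.indices.contains(lineNumber) ? lines[lineNumber] : nil
    }

    func accessibilityFrame(forLineNumber lineNumber: Int) -> CGRect {
        lineFrames.indices.contains(lineNumber) ? lineFrames[lineNumber] : .zero
    }

    func accessibilityPageContent() -> String? {
        lines.joined(separator: "\n")
    }
}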
I'm developing a macOS app using NSView and trying to make my content navigable via VoiceOver. I'm expecting the built-in rotor category "Content Chooser" (accessed via VO + U) to list my accessible elements — just like how it shows message items in the Mail app. However, in my app, this rotor appears empty, even though:
My views return proper accessibilityChildren() or accessibilityContents() with valid NSAccessibilityElements
Each child has correct AXRole, AXLabel, etc.
The window is key and visible
VoiceOver navigation works for the elements
I've also tried:
Using both accessibilityChildren() and accessibilityContents() in container views
Setting roles like .group, .staticText, .button, etc.
Avoiding hidden elements
Ensuring all elements are visible and labeled
Still, the "Content Chooser" rotor is empty.
What exact conditions must be met for an element to appear in the "Content Chooser" rotor in a macOS app?
Any Apple-specific guidance, hidden requirements, or sample code would be appreciated.
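For reference, a simplified version of the container setup (names illustrative):

import AppKit

final class MessageListView: NSView {
    // Backing accessibility elements, created and framed elsewhere.
    private var messageElements: [NSAccessibilityElement] = []

    override func isAccessibilityElement() -> Bool { true }
    override func accessibilityRole() -> NSAccessibility.Role? { .group }
    override func accessibilityChildren() -> [Any]? { messageElements }
}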
Hello!
I was doing some accessibility testing for my app and found out that when the user switches the text size, all of the data in the text fields is reset, which causes major disruption.
I've tried looking for documentation, but all I've found is information on how to dynamically scale the UI for different text sizes, which I've already implemented.
My guess is that every time Dynamic Type registers a change, it redraws my UI instead of just updating it.
How can I make sure the data is not reset when the text size changes?
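For example, here is the behavior I'd expect from a minimal UIKit sketch: with adjustsFontForContentSizeCategory, the font should update in place on a Dynamic Type change without the field (or its text) being recreated:

import UIKit

final class FormViewController: UIViewController {
    private let nameField = UITextField()

    override func viewDidLoad() {
        super.viewDidLoad()
        // Font tracks Dynamic Type in place; the view is not rebuilt,
        // so the field's text should survive a size change.
        nameField.font = .preferredFont(forTextStyle: .body)
        nameField.adjustsFontForContentSizeCategory = true
        view.addSubview(nameField)
    }
}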
I'm facing a bizarre issue with Apple's Accessibility APIs. I am registering an AXObserver that listens for, among other things, the kAXSelectedTextChangedNotification. For many new users, the kAXSelectedTextChangedNotification is not triggered, even though they have enabled Accessibility permission for the app. Other notifications are getting through (kAXWindowMovedNotification, kAXWindowResizedNotification, kAXValueChangedNotification etc - full list here), just not the kAXSelectedTextChangedNotification!
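For context, the registration looks roughly like this (simplified; the callback body is illustrative):

import ApplicationServices

func makeObserver(for pid: pid_t) -> AXObserver? {
    var observer: AXObserver?
    let callback: AXObserverCallback = { _, _, notification, _ in
        print("AX notification:", notification)
    }
    guard AXObserverCreate(pid, callback, &observer) == .success,
          let observer else { return nil }

    // Subscribe on the application element, per Apple's documentation.
    let app = AXUIElementCreateApplication(pid)
    AXObserverAddNotification(observer, app,
                              kAXSelectedTextChangedNotification as CFString, nil)
    CFRunLoopAddSource(CFRunLoopGetCurrent(),
                       AXObserverGetRunLoopSource(observer),
                       .defaultMode)
    return observer
}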
We've found that we can reproduce the error by removing accessibility permission for the app and rebooting our computers. After restarting and reenabling accessibility permissions, the kAXSelectedTextChangedNotification was not received, even though other notifications were fine.
Strangely, the issue can be resolved by launching Apple's Accessibility Inspector app on an impacted computer. Once the Accessibility Inspector is loaded, the kAXSelectedTextChangedNotifications start coming through as expected. This implies to me that either:
We are missing some needed setup when starting the observers. Accessibility Inspector gets it right, thus ‘starting’ the system properly.
Accessibility Inspector is using some Apple private APIs that we don’t have access to.
Things I’ve tried:
I've tried subscribing the AXSelectedTextChangedNotification to different AXUIElements, including the SystemWide element, the Application element, and children elements from the AXApplication. None of these received the kAXSelectedTextChangedNotification, until Accessibility Inspector is booted up. No surprises here, as Apple's documentation confirms that you should add the notification to the root Application AXUIElement if you want to receive notifications for all its children.
I had a theory that the issue might be due to my code calling AXUIElementCreateApplication multiple times, possibly creating multiple "Applications" in Apple's Accessibility implementation. If that’s the case, the notifications might be sent to the wrong application AXUIElement. However, refactoring my code to only call AXUIElementCreateApplication once didn't resolve the issue.
I thought the issue may be caused by subscribing the AXSelectedTextChangedNotification on the high-level application element (at odds with Apple's documentation). I've tried traversing the child AXUIElements until we find one with the kAXSelectedTextAttribute and then subscribing to that. This did not resolve the issue. I don’t think it's the correct path to continue exploring, given that the notifications are received correctly after AccessibilityInspector is launched.
There is one exception to the above: if I add the kAXSelectedTextChangedNotification listener to a specific text field AXUIElement, I do receive the notification on that text field. However, this is not practical; I need a solution that will work for all text fields within an app. The Accessibility Inspector appears to be doing something that causes the selected-text-changed notifications to be correctly passed up to the high-level application AXUIElement.
Another thought is that I could traverse the entire Accessibility hierarchy and add listeners to every subview that has the kAXSelectedTextAttribute. However, I don’t like this long-term solution. It will be slow and incomplete: new elements get added and removed frequently. I just want the kAXSelectedTextChangedNotification to be received by the high-level Application AXUIElement, which the documentation suggests it should be. I also have evidence that this can work, since notifications start coming through after Accessibility Inspector is launched. It’s just a matter of discovering how to replicate whatever Accessibility Inspector is doing.
An interesting wrinkle: I implemented the 'traverse' strategy above, but was surprised by how few elements were in the hierarchy. Most apps only go down ~2-3 levels, which didn't seem right to me. Perhaps the Accessibility tree isn't fully initialized? I tried adding a 5-second delay to allow more initialization time, but it didn't change anything.
Does anyone have any ideas? Here's our file.
Hello community,
We're designing an app that can optionally be controlled by a stylus with a mesh tip. In this case, the mesh tip we're using is 5 mm in diameter. Mesh-tip contact detection seems unstable at this size, although it works better with a larger diameter.
Is it possible to access a setting in iOS that lets you define the minimum contact area needed to detect a contact on the screen? This would enable us to use this 5 mm stylus.
Best regards,
Edwin
When triple tap is set to take a screenshot, the "triple tap detected" notification itself becomes part of the screenshot and obscures its top portion.
Thanks
Hi,
Our app has a section where we show users how to activate "Silence Unknown Callers", because it is a crucial feature for our app. However, we see that 30% of users drop out at this step, because we can't open that settings screen directly in the Phone app.
We are using this url scheme to open phone settings in iOS 18:
if let url = URL(string: "App-prefs:com.apple.mobilephone") {
    UIApplication.shared.open(url)
}
However, we don't see any way to open the "Silence" path directly, as was possible in iOS 17 with this URL scheme: prefs:root=Phone&path=SILENCE_CALLS
So, do you know if it is possible to open that option directly? We want to improve our app's accessibility.
Thank you!
I am invoking a UIImagePickerController of type UIImagePickerControllerSourceTypePhotoLibrary from my view controller. I want to shift the keyboard focus to the Cancel button, which is the first interactive element in the gallery picker. When a user has Full Keyboard Access turned on, they should be able to press Tab and interact with the gallery picker modal. How do I achieve this?
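A sketch of the presentation, with one hedged idea: posting a screenChanged notification pointed at the picker so the accessibility system (and, hopefully, Full Keyboard Access) moves focus into it. Whether FKA honors this for the photo library view is unverified:

import UIKit

extension UIViewController {
    func presentGalleryPicker() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        present(picker, animated: true) {
            // Ask the accessibility system to move focus into the picker.
            UIAccessibility.post(notification: .screenChanged,
                                 argument: picker.view)
        }
    }
}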
Why did the screen recording button disappear? It cannot be found anywhere.
When using iOS VoiceOver to navigate a webpage, selecting a <button> element correctly activates its :focus-visible state. However, when VoiceOver moves to a non-button element (such as a link or static text), the previously focused button retains its :focus-visible state. The focus indicator only updates when VoiceOver moves to another <button>.
This behavior can be confusing for screen reader users, as it creates the appearance of multiple elements being focused simultaneously. It also differs from expected keyboard navigation behavior, where focus styles typically update as soon as the user moves to a new interactive element.
Is this an intentional VoiceOver behavior, or could this be a bug? If intentional, is there a recommended workaround to ensure correct focus indication when moving between different types of elements?
Steps to Reproduce:
Enable VoiceOver on an iOS device.
Navigate using swipe gestures or explore-by-touch to focus on a <button>.
Observe that the button correctly receives the :focus-visible styling.
Move to a non-button element (e.g., an element with tabindex="0" or a link).
Notice that the button still retains its :focus-visible state, even though VoiceOver has moved to a new element.
Expected Behavior:
The previously focused <button> should lose its :focus-visible state when VoiceOver moves to a different interactive element, just as it does when using keyboard navigation.
Actual Behavior:
The :focus-visible state remains on the previously focused button unless VoiceOver moves to another <button>. This can create confusion by displaying multiple focus indicators at once.
Tested On:
iOS 17.7, 18.3.1
iOS Safari
iPhone 11 Pro, iPhone 14 Pro Max
AR Quick Look is suddenly grayed out on my iPhone 15 Pro. I bought the phone new recently and it was working great. Two days ago I updated to iOS 18.4.1; AR mode kept opening, but I started getting a "move iPhone over surface" message and objects wouldn't detect surfaces correctly. After updating to iOS 18.5, AR is completely grayed out when I open Quick Look models.
Can someone help me fix or diagnose the issue?
Thank you.
Hello!
I've run into unexpected hardware keyboard focus behavior in UITests.
A clear description of the problem
When running UITests on the iOS Simulator with both "Full Keyboard Access" and "Connect Hardware Keyboard" enabled, there is a noticeable delay between focus-managing keyboard actions (like pressing Tab or arrow keys). The delay seems to increase with repeated input, suggesting that events are being queued instead of processed immediately.
I will describe why I have such an assumption later.
A step-by-step set of instructions to reproduce the problem
Launch the iOS Simulator.
Enable both "Full Keyboard Access" and "Connect Hardware Keyboard" in the Simulator settings.
Run a UITest on a target application (ideally an endless or long-running test).
Once the app is launched, press the Tab key several times.
Observe the delay in focus movement.
Optionally, press the Tab or arrow keys rapidly, then stop the UITest.
After stopping, you’ll see a burst of rapid focus changes.
What results you expected
We expected keyboard actions (like Tab) to be handled immediately and the UI focus to update smoothly during UITests.
What results you saw
There was a 4–10 second (or more) delay between pressing a key and seeing a response. All queued keyboard events (used for managing focus) are then performed at once after the UITest stops.
The version of Xcode you are using
Xcode: Version 16.3 (16E140)
Simulator: iPhone 16 Pro (iOS 18.4 and 18.1)
Simulator: iPad Pro 11-inch (M4) (iPadOS 17.5)
Hi everyone — I’m implementing the new Hearing Device Support API described here:
https://developer.apple.com/documentation/accessibility/hearing-device-support
I have MFi hearing aids paired and visible under Settings → Accessibility → Hearing Devices, and I've added the com.apple.developer.hearing.aid.app entitlement (and also tested with Wireless Accessory Configuration: https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.external-accessory.wireless-configuration).
My entitlements file contains:
com.apple.developer.hearing.aid.app
xxxxx
but the app won't even compile with this entitlement.
Problem
My observer for pairedUUIDsDidChangeNotification, registered via NotificationCenter.default.addObserver(...), never fires: not on app launch, not after pairing/unpairing, and not after reconnecting the hearing aids.
Because the notification never triggers, calls like:
HearingDeviceSession.shared.pairedDevices
always return an empty list.
What I expected
According to the docs, the notification should be posted whenever paired device UUIDs change, and the session should expose those devices — but nothing happens.
Questions
Does the hearing.aid.app entitlement require special approval from Apple beyond adding it to the entitlements file?
Is there a way to verify that iOS is actually honoring this entitlement?
Has anyone successfully received this notification on a real device?
Any help or confirmation would be greatly appreciated.
Is there any way to get history?
VoiceOver reads out all visible content on the screen, which is essential for visually challenged users. However, this raises a privacy concern—what if a user accidentally focuses on sensitive information, like a bank account password, and it gets read aloud?
How can developers prevent VoiceOver from exposing confidential data while still maintaining accessibility? Are there best practices or recommended approaches to handle such scenarios effectively?
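Two common mitigations, as a hedged UIKit sketch (names illustrative): secure text entry is masked for VoiceOver automatically, and a custom accessibilityValue can stand in for other sensitive content:

import UIKit

let passwordField = UITextField()
passwordField.isSecureTextEntry = true  // VoiceOver speaks bullets, not the characters

let balanceLabel = UILabel()
balanceLabel.text = "$12,345.67"
balanceLabel.accessibilityValue = "Balance hidden"  // spoken instead of the visible text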