Apple has many problems. Please contact me for more feedback.
General
Explore best practices for creating inclusive apps that cater to users with diverse abilities.
On macOS, system-level autocomplete suggestions appear in <textarea> elements even when all relevant HTML attributes intended to disable autocomplete and text assistance are explicitly set.
Is this behavior intentional, or is there any supported way for developers to control or disable this functionality?
Steps to Reproduce
Send yourself an email containing a verification code (for example) using the native macOS Mail app.
Open any web application that contains an HTML <textarea> element.
Focus the textarea.
Expected Result
The autocomplete popup should be controllable from the code, and it should be possible to fully disable it using standard HTML attributes or browser APIs.
Actual Result
The system autocomplete popup appears in all cases and cannot be controlled or disabled by the code, even when all known attributes (autocomplete="off", autocorrect="off", autocapitalize="off", spellcheck="false") are set.
Topic:
Accessibility & Inclusion
SubTopic:
General
I have a problem with my Apple Watch Series 10. After updating to iOS 26, the watch stopped loading apps: just a black screen with website code at the top. I have already restarted and reset it, and it is connected to the internet.
Topic:
Accessibility & Inclusion
SubTopic:
General
I have reported this bug on multiple macOS versions and it never gets fixed, so I am posting it here in hopes someone at Apple will see this and fix the issue. I use Voice Control extensively; in fact, it is THE reason I use Macs. I am an amputee, and Voice Control makes life much easier, and there is nothing even close on Windows (or Linux, but Linux is barely usable even for people with two arms).
Voice Control has a setting to overlay numbers, names, or a grid on the screen to help indicate which item you're referring to. There is an option to have no overlay. However, even when the overlay is set to None, the numbers overlay still appears on screen, even when I haven't triggered anything by voice. If I right-click on the desktop, for example, the numbers appear on the menu.
This bug has been in macOS for as long as I can remember. I really hope someone at Apple can fix this. There are quite a few other bugs I've reported with Feedback Assistant over the years that go unfixed; this is one of the more annoying ones.
Topic:
Accessibility & Inclusion
SubTopic:
General
I am currently trying to get my app ready for full external keyboard support, and while testing I found an issue with the native DatePicker.
Whenever I enter the DatePicker with an external keyboard, it only jumps to the time picker and I am not able to move away from it. Arrow keys don't work, and Tab and Control + Tab only move me to the toolbar and back.
This is what the pickers look like:
private var datePicker: some View {
    DatePicker(
        "",
        selection: date,
        in: minDate...,
        displayedComponents: [.date]
    )
    .fixedSize()
    .accessibilityIdentifier("\(datePickerLabel).DatePicker")
}

private var timePicker: some View {
    DatePicker(
        "",
        selection: date,
        in: minDate...,
        displayedComponents: [.hourAndMinute]
    )
    .fixedSize()
    .accessibilityIdentifier("\(datePickerLabel).TimePicker")
}

private var datePickerLabelView: some View {
    Text(datePickerLabel.localizedString)
        .accessibilityIdentifier(datePickerLabel)
}
And we implement it like this in the view:
HStack {
    datePickerLabelView
    Spacer()
    datePicker
    timePicker
}
Does anyone know how to fix this behavior? Is it our fault or the system's? The issue comes up on both iOS 18 and iOS 26.
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
External Accessory
iOS
Accessibility
SwiftUI
I had a VoiceOver user point out an issue with my app that I’ve definitely known about but have never been able to fix. I thought that I had filed feedback for it, but it looks like I didn’t.
Before I do, I’m hoping someone has some insight. With Swift Charts, when I tap part of a chart it summarizes the three hours, and then you can swipe vertically to hear it read out details of each hour. For example, the Y axis is the amount of precipitation for the hour and the X axis is the hours of the day. The units aren't being read in the summary, but they are for individual hours when you swipe vertically.
The summary says something such as "varies between 0.012 and 0.082". In the AXChartDescriptor I’ve tried everything I can think of, including adding a label to the Y axis in the DataPoint, but nothing seems to work in getting that summary to include units. With a vertical swipe it seems to just be using my accessibility label and value (as I would expect).
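For reference, this is roughly the shape of the Y-axis descriptor I mean (a simplified sketch with hypothetical titles and values, not my actual chart code):

import Accessibility

// Simplified sketch: the per-value descriptions include units, but the
// automatic "varies between ..." summary still reads the bare numbers.
let yAxis = AXNumericDataAxisDescriptor(
    title: "Precipitation",
    range: 0.0...0.1,
    gridlinePositions: [],
    valueDescriptionProvider: { value in
        "\(value) inches"
    }
)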
After the 26.3 beta update, my mouse has been having major problems with transparency. I have to keep going into Display settings to reset the colors, but it doesn't hold. Anyone else?
Topic:
Accessibility & Inclusion
SubTopic:
General
Hello everyone,
I am currently evaluating my app's accessibility features to accurately display the "Accessibility" information on the App Store. I have encountered two specific issues regarding Voice Control testing and would appreciate any guidance.
Voice Command for "Stop Recording"
According to the evaluation criteria, if an app supports audio recording or dictation, users must be able to start and stop recording using only their voice.
Behavior: I can successfully trigger the recording using the command "Start Recording". However, I cannot find a command to stop it. Commands like "Stop Recording" or "Stop" are not recognized by the system.
Question: Is there a specific standard voice command intended for stopping a recording?
Item Number Overlays on Non-Interactive Web Elements (WKWebView)
I noticed an inconsistency between native views and web content regarding Voice Control item numbering.
Behavior: When testing web content within the app (WKWebView) or in Safari, Voice Control displays item number overlays on non-interactive text elements (such as standard text tags). In native views, static labels do not receive item numbers.
Question: Is this expected behavior for web content? Since these elements are not interactive, I am unsure if this should be considered a bug (fail) or an acceptable exception for the accessibility evaluation.
Has anyone experienced similar issues or know the correct criteria for these cases?
Thank you.
Since the last beta upgrade of my iPad to 26.3, labels have disappeared.
Going into Settings > Accessibility, the toggle setting for labels makes no difference whether it is on or off.
Labels are permanently missing.
Topic:
Accessibility & Inclusion
SubTopic:
General
I am getting this issue when trying to accept an invite to a new test version of our app.
Unable to Accept invite
This invitation cannot be accepted because your Apple Account, xxxxxxxx.me.com, has already been associated to this app.
Can you help please?
Topic:
Accessibility & Inclusion
SubTopic:
General
Hi Community,
I'm excited to share R Helper, a speech practice app I built with accessibility as the core focus from day one.
App Store: https://apps.apple.com/app/speak-r-clearly/id6751442522
WHY I BUILT THIS
I personally struggled with R sound pronunciation growing up. It affected my confidence in school and job interviews. That experience taught me how important accessible practice tools are.
R Helper helps children and adults practice R sounds with full accessibility support.
ACCESSIBILITY FEATURES IMPLEMENTED
VoiceOver - complete navigation and feedback
Voice Control - hands-free operation
Dynamic Type - scales to large accessibility sizes
Reduce Motion - respects user preference
Dark Mode - user controllable
High Contrast compatibility
Differentiate Without Color
THE CHALLENGE
Most speech practice apps ignore accessibility. I wanted to change that and prove that specialized educational apps can be fully accessible.
KEY FEATURES
Works 100% offline, no internet needed
Zero data collection, privacy first
Generous free tier with all accessibility features included
10 story missions with gamification
7 languages supported including RTL for Arabic
LESSONS LEARNED
Accessibility is not hard when you prioritize it from the start. VoiceOver labels and hints make a huge difference. Testing with accessibility features enabled is essential. Standard SwiftUI components handle most accessibility automatically. Reducing motion significantly helps users with vestibular issues.
TECHNICAL DETAILS
Built with SwiftUI, targets iOS 17 and up. Universal app for iPhone and iPad. Fully offline using CoreData and local storage. No third party analytics, privacy focused.
QUESTIONS FOR THE COMMUNITY
What accessibility features do you find users request most? How do you test accessibility features efficiently?
WHAT'S NEXT
I'm currently working on expanding the word library, adding more story content, and improving haptic feedback.
Thanks for reading.
Nour
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
Education and Kids
Education
Machine Learning
Apple Intelligence
I have an app that needs Input Monitoring permission to get keyboard access in the background. I've attempted to use both IOHIDCheckAccess(kIOHIDRequestTypeListenEvent) and IOHIDRequestAccess(kIOHIDRequestTypeListenEvent), but they always return denied, even though I have granted the app Input Monitoring permission in Settings.
Is there something I need to put in my Info.plist to enable this permission to work?
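For reference, the check/request flow I'm using looks roughly like this (a simplified sketch; the constants are written exactly as above, assuming they bridge into Swift that way):

import IOKit.hid

// Simplified sketch of the flow described above.
let status = IOHIDCheckAccess(kIOHIDRequestTypeListenEvent)
if status != kIOHIDAccessTypeGranted {
    // Shows the system prompt the first time and adds the app to the
    // Input Monitoring list; returns true only if access is granted.
    let granted = IOHIDRequestAccess(kIOHIDRequestTypeListenEvent)
    print("Input Monitoring granted after request: \(granted)")
}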
Topic:
Accessibility & Inclusion
SubTopic:
General
My team has a robust digital accessibility program and processes for WCAG conformance in our apps. Because of this, there are definitely accessibility defects that get caught and addressed in order of impact and business priority like any other bug. Obviously we want to aim for 100% accessibility for our users, but it's a continual work in progress as new enhancements or changes are released.
I'm stuck on the appropriate measurement to indicate support. If we have 50 common tasks and the 10 most central tasks are solid, but some supporting (yet also common) tasks have a contrast failure or a missing accessibility label, does that mean the whole app does not support the feature? If "completing the task" is the rubric, there is a whole range of interpretations of that.
In a complex app, I anticipate that a group like ours will have strong support for many of the Accessibility Nutrition Labels accessibility features across tasks and devices, but realistically never be 100% free of defects for a given Apple Accessibility feature, even among core tasks.
As I consider the next steps for Nutrition Labels, I do not see anything in the documentation that gives a sort of baseline or measurement for inclusion. We plan to test all steps to complete a task, and log defects accordingly with an assigned timeline for fixing them (as would be true for functional defects).
We have a password entry field with a "show password" button. The button effectively turns the "secure text entry" textfield into a non-secure text entry field allowing the user to view what they typed in.
When VoiceOver is enabled, I am not including that button in the UI; it doesn't seem to make sense to me for the following reasons.
If you properly test with the screen curtain, the functionality is useless: you don't see anything. I've tried to explain this to my accessibility team. It also seems quite ridiculous to offer to show a blind user their password; I'm sure they'd love to see it, but they just can't. This would almost seem insulting as well.
And by toggling that button and turning a secure text entry into a non-secure text entry, the app is now literally speaking their password aloud. This seems like a security vulnerability to me. What if someone else overhears the password spoken aloud?
The accessibility team is insisting that I need to include the "show password" button when VoiceOver is enabled. This is the response I received.
"functionality should be the same for VI users as for sighted users. It may happen that a VI user wants to check what is typed into password field in order to correct mistakes".
Again, I don't agree with that because functionality should not be the same. Functionality should be changed and altered as necessary to make the user experience as accessible as possible.
And in this scenario, to me the functionality doesn't make sense at all in a VoiceOver setting.
Any thoughts on this? Am I incorrect here? Are there benefits of including a "show password" button to a user utilizing VoiceOver? What should then the functionality be? Speak the password aloud?
Thanks.
We have a requirement to manage the shortcuts and hotkeys in our application, and to make them intuitive with full multilingual support. Our current understanding is that most universal shortcuts and hotkeys on macOS/iOS are expressed using English/Latin characters, and when a purely foreign-language physical or virtual keyboard is the input device, we are unclear how the user would invoke such a hotkey.
Now, considering cases where other-language keyboards have no Latin characters at all, managing shortcuts and hotkeys in these environments becomes a rather difficult task. Taking a very simple example, the shortcut for printing a page is Command/Control + 'P'. This can be an issue on non-English keyboards such as Arabic, where not only is there no letter P, there is also no equivalent phonetic character, since the language itself does not have that sound.
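For context, this is the kind of declaration we mean (a minimal SwiftUI sketch with a hypothetical menu command, not our actual code); the key equivalent is a Latin character, which is exactly where the difficulty arises on keyboards that do not have it:

import SwiftUI

// Hypothetical "Print Page" command; the key equivalent "p" is a Latin character.
struct PrintCommands: Commands {
    var body: some Commands {
        CommandMenu("Actions") {
            Button("Print Page") {
                // hypothetical print action
            }
            .keyboardShortcut("p", modifiers: .command)
        }
    }
}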
Also, when we want to allow the user to customize a hotkey, how would the user express which key combination they want to use for a given action?
So, given these conditions, and in order to provide the most comprehensive and optimal experience for users in their own language, what does Apple recommend we do for hotkey/shortcut support on keyboards with no Latin characters?
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
InputMethodKit
Internationalization
Shortcuts
Localization
Hey,
We've run into an issue where WKWebView contents are not always available for VoiceOver users. It seems to occur when WKWebView contents are loaded asynchronously.
I have a sample project where this can be reproduced and a video showing the issue. See FB21257352
The only solution we currently see is forcing an update continuously using UIAccessibility.post(notification: .layoutChanged, argument: nil), but this is of course a last resort, as it may have other unintended side effects.
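For anyone else hitting this, the shape of that workaround looks roughly like this (simplified here to a single post when navigation finishes; in practice we repeat it):

import UIKit
import WebKit

// Simplified sketch of the workaround: nudge VoiceOver to re-read the layout
// once the web view reports that navigation has finished.
final class WebViewAccessibilityNudger: NSObject, WKNavigationDelegate {
    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        UIAccessibility.post(notification: .layoutChanged, argument: webView)
    }
}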
Hi everyone — I’m implementing the new Hearing Device Support API described here:
https://developer.apple.com/documentation/accessibility/hearing-device-support
I have MFi hearing aids paired and visible under Settings → Accessibility → Hearing Devices, and I’ve added the com.apple.developer.hearing.aid.app entitlement (and also tested with the Wireless Accessory Configuration entitlement: https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.external-accessory.wireless-configuration).
My entitlements file contains:
com.apple.developer.hearing.aid.app
xxxxx
However, the app won't even compile with this entitlement.
Problem
NotificationCenter.default.addObserver(...) for pairedUUIDsDidChangeNotification never fires — not on app launch, not after pairing/unpairing, and not after reconnecting the hearing aids.
Because the notification never triggers, calls like HearingDeviceSession.shared.pairedDevices always return an empty list.
What I expected
According to the docs, the notification should be posted whenever paired device UUIDs change, and the session should expose those devices — but nothing happens.
Questions
Does the hearing.aid.app entitlement require special approval from Apple beyond adding it to the entitlements file?
Is there a way to verify that iOS is actually honoring this entitlement?
Has anyone successfully received this notification on a real device?
Any help or confirmation would be greatly appreciated.
Is the voice command recording accessibility feature available on the Apple Vision Pro? It does not start on my device.
The Apple Vision Pro is on 26.1.
Regular single voice commands work on the Apple Vision Pro.
Recording commands worked on other devices. (iPad and iPhone)
Hello!
I was doing some accessibility testing for my app and found out that when the user switches the text size, all of the data in the text fields is reset, which causes major disruption.
I've tried looking for documentation, but all I've found is information on how to dynamically scale the UI for different text sizes, which I've already implemented.
My guess is that every time Dynamic Type registers a change, it redraws my UI instead of just updating it.
How can I make sure the data is not reset when the text size changes?
Hi everyone,
I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?").
Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), which are critical in Sign Language grammar.
I'd like to propose/discuss an architecture leveraging the current LiDAR + Neural Engine capabilities found in iPhone devices to solve this.
The Concept: Skeleton-based Normalization
Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's Body Tracking to abstract the input.
Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
Data Normalization: Extract only the vector coordinates (X, Y, Z of joints). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
Comparison: Feed these vectors into a CoreML model trained on "Reference Skeletons" (recorded by native signers).
Feedback Loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific correction (e.g., "Raise your elbow 10 degrees").
Why this approach?
Solves Occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
Privacy: We are processing coordinates, not video streams.
Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.
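To make the Comparison step concrete, here is a minimal sketch (my own illustration; referenceJointTransforms is a hypothetical pre-recorded array of joint transforms captured from a native signer) of a simple per-joint distance score using ARKit's body skeleton:

import ARKit
import simd

// Simplified sketch: average positional distance between the live pose and
// a stored reference pose, comparing only joint translations.
func poseDistance(live: ARBodyAnchor,
                  referenceJointTransforms: [simd_float4x4]) -> Float {
    let liveTransforms = live.skeleton.jointModelTransforms
    let count = min(liveTransforms.count, referenceJointTransforms.count)
    guard count > 0 else { return 0 }
    var total: Float = 0
    for i in 0..<count {
        let livePosition = liveTransforms[i].columns.3
        let referencePosition = referenceJointTransforms[i].columns.3
        total += simd_distance(
            SIMD3(livePosition.x, livePosition.y, livePosition.z),
            SIMD3(referencePosition.x, referencePosition.y, referencePosition.z)
        )
    }
    return total / Float(count)
}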
Has anyone experimented with using ARKit Body Anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "Skeleton First" approach is the key to scalable Sign Language education apps.
Looking forward to hearing your thoughts.