I have a TextField containing, for example, "sg?!". I applied the speechAlwaysIncludesPunctuation() modifier to the TextField, but when I activate VoiceOver, it reads the field's content without the special characters; the punctuation isn't spoken.
How can I fix this?
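For reference, here's a minimal snippet that reproduces what I'm seeing (the view name is just for this example):

import SwiftUI

struct PunctuationFieldView: View {
    @State private var text = "sg?!"

    var body: some View {
        // Expectation: with this modifier, VoiceOver should speak the
        // punctuation ("question mark", "exclamation mark"), but it doesn't.
        TextField("Input", text: $text)
            .speechAlwaysIncludesPunctuation()
    }
}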
I’m trying to understand the best practice for assigning accessibilityTraits to a UITableViewCell that users can select from a list of options.
In Apple’s first-party apps like Settings, I’ve noticed an inconsistent approach—some cells use the Button trait, while others simply announce the label along with the Selected trait when applicable, without any additional role like Button or Adjustable.
So my question is:
What is the most appropriate accessibility trait to use for a selectable table view cell that updates a selection (like a settings option)?
Is using .button the right approach, or should we rely solely on .selected?
Is there any user experience guideline from Apple that recommends one over the other?
Would love to hear how others handle this for clarity and consistency in VoiceOver behavior.
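For concreteness, here's a minimal sketch of the .selected-only approach I'm weighing (class name and data are placeholders):

import UIKit

final class OptionsViewController: UITableViewController {
    private let options = ["Small", "Medium", "Large"]
    private var selectedIndex = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        tableView.register(UITableViewCell.self, forCellReuseIdentifier: "OptionCell")
    }

    override func tableView(_ tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        options.count
    }

    override func tableView(_ tableView: UITableView, cellForRowAt indexPath: IndexPath) -> UITableViewCell {
        let cell = tableView.dequeueReusableCell(withIdentifier: "OptionCell", for: indexPath)
        var content = cell.defaultContentConfiguration()
        content.text = options[indexPath.row]
        cell.contentConfiguration = content

        let isChosen = indexPath.row == selectedIndex
        cell.accessoryType = isChosen ? .checkmark : .none
        // Only the selection state is exposed; no .button trait, mirroring
        // the checklist pattern seen in parts of Settings.
        cell.accessibilityTraits = isChosen ? .selected : .none
        return cell
    }

    override func tableView(_ tableView: UITableView, didSelectRowAt indexPath: IndexPath) {
        selectedIndex = indexPath.row
        tableView.reloadData()
    }
}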
Please excuse me if this is obvious. I'm new to Apple development.
Is there a SwiftUI Accessibility Inspector? I run the standard one, in Xcode 26b3, and it shows me warnings for things that I didn't create in SwiftUI. I presume that "SwiftUI" is primarily implemented using macros and that these things are either generated or boilerplate lower-level things. But if so, then why would they trip Accessibility Inspector warnings? Is there something I can do from SwiftUI to clear them?
Or... is there a demangler somewhere that will translate from these names into something this human might recognize?
I'm targeting macOS, btw, if that makes any difference.
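(I do know about the toolchain's symbol demangler, e.g.:

xcrun swift-demangle '$s7SwiftUI4TextV'
$s7SwiftUI4TextV ---> SwiftUI.Text

but I'm not sure whether it applies to the names the Accessibility Inspector shows.)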
Topic: Accessibility & Inclusion
SubTopic: General
Request: Name Recognition → Shortcut for SOS Flashlight + Vibration
Right now, iOS Name Recognition works, but all I can do is flash the tiny notification light. It would be much more useful if Name Recognition could trigger a Shortcut. That way, I could set it to flash the flashlight in an SOS pattern and vibrate, making the alert impossible to miss.
I tried using Custom Alarm, but it won’t let me record my spoken name, so it doesn’t really solve the problem. If Apple allowed Name Recognition to trigger Shortcuts — or expanded “Custom” to support names/words — this would open up far more practical, real-world alerts.
Topic: Accessibility & Inclusion
SubTopic: General
VoiceOver reads out all visible content on the screen, which is essential for visually challenged users. However, this raises a privacy concern—what if a user accidentally focuses on sensitive information, like a bank account password, and it gets read aloud?
How can developers prevent VoiceOver from exposing confidential data while still maintaining accessibility? Are there best practices or recommended approaches to handle such scenarios effectively?
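For example, a mitigation along these lines (a sketch, assuming SwiftUI; names are made up) keeps the secret out of the spoken output while the element stays accessible:

import SwiftUI

struct AccountView: View {
    @State private var password = ""
    let accountNumber = "1234567890" // placeholder sample data

    var body: some View {
        Form {
            // SecureField masks its contents, so VoiceOver does not
            // announce the actual characters.
            SecureField("Password", text: $password)

            // For visible-but-sensitive values, a custom label replaces
            // the spoken text so the value itself is never read aloud.
            Text(accountNumber)
                .accessibilityLabel("Account number, hidden")
        }
    }
}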
Hello!
I was doing some accessibility testing for my app and found out that when the user switches the text size, all of the data in the text fields is reset, which causes major disruption.
I've tried looking for documentation, but all I've found is information on how to dynamically scale the UI for different text sizes, which I've already implemented.
My guess is that every time Dynamic Type registers a change, it redraws my UI instead of just updating it.
How can I make sure the data is not reset when the text size changes?
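One pattern I'm considering (a sketch, assuming SwiftUI; names are hypothetical) is to keep the field contents in a model object that outlives any view rebuild, so a Dynamic Type change can't wipe the text:

import SwiftUI

// The text lives in an ObservableObject; even if the view tree is torn
// down and redrawn on a text-size change, the data survives.
final class FormModel: ObservableObject {
    @Published var name = ""
}

struct FormView: View {
    @StateObject private var model = FormModel()

    var body: some View {
        TextField("Name", text: $model.name)
            .font(.body) // scales with Dynamic Type in place
    }
}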
I’m experiencing an issue where Siri incorrectly announces currency values in notifications. Instead of reading the local currency correctly, it always reads amounts as US dollars.
Issue details:
My iPhone is set to Region: Chile and Language: Spanish (Chile).
In Chile, the currency symbol $ represents Chilean Pesos (CLP), not US dollars.
A notification with the text:
let content = UNMutableNotificationContent()
content.body = "¡Has recibido un pago por $5.000!"
is read aloud by Siri as:
"¡Has recibido un pago por 5.000 dólares!"
(English: “You have received a payment of five thousand dollars!”)
instead of the correct:
"¡Has recibido un pago por 5.000 pesos!"
(English: “You have received a payment of five thousand pesos!”)
Another developer already reported the same issue back in 2023, and it remains unresolved: https://developer.apple.com/forums/thread/723177
This incorrect behavior is not limited to iOS notifications; it also occurs in other Apple services:
watchOS, iPadOS, and macOS (Siri misreads currency values in various system interactions).
Siri’s currency conversion feature misinterprets $ as USD even when the device is set to a region where $ represents a different currency.
Announce Notifications on AirPods also exhibits this issue, making it confusing when Siri announces transaction amounts incorrectly.
Apple Intelligence interactions are also affected—for example, asking Siri to “read my latest emails” when one of them contains a monetary value results in Siri misreading the currency.
I have submitted a bug report via Feedback Assistant, and the Feedback ID is FB16561348.
This issue significantly impacts accessibility and localization for users in regions where the currency symbol $ is not associated with US dollars.
Has anyone found a workaround, or is there any update from Apple on this?
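The only mitigation I can think of (a sketch; it sidesteps the announcement rather than fixing the underlying parsing) is to avoid the bare $ symbol in notification text and spell the currency out:

import Foundation
import UserNotifications

let formatter = NumberFormatter()
formatter.numberStyle = .decimal
formatter.locale = Locale(identifier: "es_CL") // 5000 -> "5.000"
let amount = formatter.string(from: NSNumber(value: 5000)) ?? "5.000"

let content = UNMutableNotificationContent()
// "pesos" written out cannot be misread as dollars.
content.body = "¡Has recibido un pago por \(amount) pesos!"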
Topic: Accessibility & Inclusion
SubTopic: General
Tags: Siri and Voice, User Notifications, Localization, Apple Intelligence
I am testing the accessibility feature available in the Settings app called "Speak Screen". The help text in the Setting app states that swiping down with two fingers will cause the screen content to be spoken. However, I've been unable to get this feature to work. Every time I try the double finger swipe down, it behaves the same as the single finger swipe down gesture. Usually this manifests as making scroll views bounce.
I've tried toggling the feature on and off, turning off Reachability, and rebooting my phone, but I can't get the Speak Screen gesture to work. If I access the Speak Screen feature from the Speech Controller button, the screen's content is spoken as expected, so I know the feature is enabled. It's just the gesture that doesn't work.
Is there something else I need to do to get this gesture to work? I don't want to tell my users to turn this feature on if I can't verify that the gesture will work with my app.
I have reported this bug on multiple macOS versions and it never gets fixed, so I am posting it here in hopes someone at Apple will see this and fix the issue. I use Voice Control extensively; in fact, it is THE reason I use Macs. I am an amputee, and Voice Control makes life much easier, and there is nothing even close on Windows (or Linux, but Linux is barely usable even for people with two arms).
Voice Control has a setting to overlay numbers, names, or a grid on the screen, to help indicate which item you're referring to. There is an option to have no overlay. However, even when the overlay is set to None, the numbers overlay still appears on screen, even when I haven't triggered anything by voice. If I right-click on the desktop, for example, the numbers appear on the menu.
This bug has been in macOS for as long as I can remember. I really hope someone at Apple can fix this. There are quite a few other bugs I've reported with Feedback Assistant over the years that go unfixed; this is one of the more annoying ones.
Topic: Accessibility & Inclusion
SubTopic: General
I have been working to remediate PDFs for a client. The documents/forms have many tables. When I correctly tag a table, using Foxit Editor Pro, it works beautifully on a PC reading it with NVDA. On Mac using VoiceOver the table isn't accessible. It doesn't matter if I try to read it in Adobe Acrobat, Foxit, or Preview. The reader often says the document is empty, omits column headers, and/or associates the wrong header with the column data.
The documents have essentially the same coding behind them as for the web. Why is it they perform so well on a PC with NVDA, but so poorly with Mac VoiceOver? I am a Quality Assurance Specialist: I review websites, apps, and documents for accessibility. Why can't I do my job using only my Mac system?
As a Mac user, it frustrates me that I can't use my preferred system for checking documents to see if they are accessible because VoiceOver doesn't work well. I actually have to recommend to my clients and their customers that they need to use a PC with NVDA or Jaws for these documents to be able to get all the information. Unfortunately, most people aren't able to have, or maintain, both systems. Overall, Mac products are very high quality. This, and other issues with VoiceOver, seems to be a large gap in Apple's offerings and functionality.
I would appreciate a human response to the original email I sent about this on 7/30/2025.
Topic:
Accessibility & Inclusion
SubTopic:
General
I'm encountering an issue related to BLE device discovery on iOS.
I have a BLE peripheral device that I initially connected to using an iOS device. After this connection, the BLE device's advertised name was programmatically changed by the peripheral. Now, when I try to scan for this device using other iOS devices, it does not appear in the scan results in most apps — including nRF Connect and our own custom BLE app that uses CoreBluetooth.
A few observations:
The device is definitely powered on and advertising (confirmed via Android).
The name change is reflected correctly on Android and on the iOS device that originally connected to it.
Other iOS devices no longer see the device in their scan list.
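For debugging, a sketch like the following (class name is hypothetical) scans with duplicates allowed and logs both the cached name and the advertised local name, which should show whether the peripheral is discovered at all and what name it is actually broadcasting:

import CoreBluetooth

final class ScanLogger: NSObject, CBCentralManagerDelegate {
    private var central: CBCentralManager!

    override init() {
        super.init()
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Report every advertisement, not just the first per peripheral.
        central.scanForPeripherals(withServices: nil,
                                   options: [CBCentralManagerScanOptionAllowDuplicatesKey: true])
    }

    func centralManager(_ central: CBCentralManager, didDiscover peripheral: CBPeripheral,
                        advertisementData: [String: Any], rssi RSSI: NSNumber) {
        // peripheral.name can come from iOS's cache; the local name in the
        // advertisement payload is what the device broadcasts right now.
        let advertised = advertisementData[CBAdvertisementDataLocalNameKey] as? String
        print(peripheral.identifier, "cached:", peripheral.name ?? "-", "advertised:", advertised ?? "-")
    }
}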
I have implemented a SwiftUI view containing a grid of TextField elements, where focus moves automatically to the next field upon input. This behavior works well on iOS 16 and 17, maintaining proper focus highlighting when keyboard full access is enabled.
However, in iOS 18 and above, the keyboard full access focus behaves differently. It always stays behind the actual focus state, causing a mismatch between the visually highlighted field and the active text input. This leads to usability issues, especially for users navigating with an external keyboard.
Below is the SwiftUI code for reference:
struct AutoFocusGridTextFieldsView: View {
    private let fieldCount: Int
    private let columns: Int

    @State private var textFields: [String]
    @FocusState private var focusedField: Int?

    init(fieldCount: Int = 17, columns: Int = 5) {
        self.fieldCount = fieldCount
        self.columns = columns
        _textFields = State(initialValue: Array(repeating: "", count: fieldCount))
    }

    var body: some View {
        let rows = (fieldCount / columns) + (fieldCount % columns == 0 ? 0 : 1)

        VStack(spacing: 10) {
            ForEach(0..<rows, id: \.self) { row in
                HStack(spacing: 10) {
                    ForEach(0..<columns, id: \.self) { col in
                        let index = row * columns + col
                        if index < fieldCount {
                            TextField("", text: $textFields[index])
                                .frame(width: 40, height: 40)
                                .multilineTextAlignment(.center)
                                .textFieldStyle(RoundedBorderTextFieldStyle())
                                .focused($focusedField, equals: index)
                                .onChange(of: textFields[index]) { newValue in
                                    // Limit each field to a single character.
                                    if newValue.count > 1 {
                                        textFields[index] = String(newValue.prefix(1))
                                    }
                                    // Auto-advance once a character is entered.
                                    if !textFields[index].isEmpty {
                                        moveToNextField(from: index)
                                    }
                                }
                        }
                    }
                }
            }
        }
        .padding()
        .onAppear {
            focusedField = 0
        }
    }

    private func moveToNextField(from index: Int) {
        if index + 1 < fieldCount {
            focusedField = index + 1
        }
    }
}

struct AutoFocusGridTextFieldsView_Previews: PreviewProvider {
    static var previews: some View {
        AutoFocusGridTextFieldsView(fieldCount: 10, columns: 5)
    }
}
Has anyone else encountered this issue with FocusState in iOS 18?
I really do believe this is a bug specifically connected to keyboard navigation, since I experienced a similar problem with the UIKit equivalent of this view.
Any insights or suggestions would be greatly appreciated!
Many of us Bangladeshi iPhone users were upset when Apple changed the Bangla font in the most recent iOS version (18.4.1). We prefer the old Bangla typeface and want it to return. Please consider this.
Hello,
I'm observing a persistent and frustrating issue with an accessibility feature called Guided Access that seems to affect many users across different devices and iOS versions.
Problem
The triple-click gesture (side or home button) to activate Guided Access intermittently stops working after the device has been in normal use for a few days (typically 2-7 days) without a restart.
I have done some debugging for Apple in FB16094026 but received no updates after 6 months. So I'm posting here in the hope that this will be solved sooner. A core accessibility feature shouldn't require daily device restarts to function reliably.
Details:
Guided Access is correctly enabled in Settings > Accessibility.
Initially, the triple-click works perfectly.
After a period of normal device use (2-7 days), the triple-click no longer triggers Guided Access in any app.
Restarting the device temporarily resolves the issue, and Guided Access triple-click works again immediately after a reboot. However, the problem recurs after continued use.
Simply toggling the Guided Access setting on/off does NOT fix it.
Additional observation: Even trying to select Guided Access manually via the Accessibility Shortcut menu (if multiple shortcuts are enabled) sometimes fails to launch the feature when in this state.
Affected:
iPhones and iPads
Observed on iOS/iPadOS 16, 17, and now 18, indicating it's a long-standing bug.
Impact:
Guided Access is a crucial accessibility feature for many users (for focus, special needs, parental controls, etc.). Its unreliable activation significantly disrupts daily workflows and reliance on this function. This issue appears to be widespread, with many reports across forums like Apple Support Communities and Reddit.
For example, this post received over 1k upvotes.
To see more examples please refer to FB16094026.
Could Apple please investigate this bug urgently? Thanks.
Topic: Accessibility & Inclusion
SubTopic: General
I remember that Vision Pro's dwell control could previously be set to 0.1 seconds, but now it can't. Is there a way to adjust it?
I’m trying to add the .header accessibility trait to a UISegmentedControl so that VoiceOver recognizes it accordingly. However, setting the trait using the following code doesn’t seem to have any effect:
segmentControl.accessibilityTraits = segmentControl.accessibilityTraits.union(.header)
Even after applying this, VoiceOver doesn’t announce it as a header. Is there any workaround or recommended approach to achieve this?
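One workaround I'm considering (a sketch; the view controller and labels are hypothetical) is to put the heading semantics on a separate label above the control instead of on the control itself:

import UIKit

final class FilterViewController: UIViewController {
    private let headerLabel = UILabel()
    private let segmentControl = UISegmentedControl(items: ["All", "Unread"])

    override func viewDidLoad() {
        super.viewDidLoad()
        headerLabel.text = "Filter"
        // VoiceOver's headings rotor lands on the label; users then swipe
        // into the segmented control as usual.
        headerLabel.accessibilityTraits.insert(.header)

        let stack = UIStackView(arrangedSubviews: [headerLabel, segmentControl])
        stack.axis = .vertical
        stack.spacing = 8
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)
        NSLayoutConstraint.activate([
            stack.topAnchor.constraint(equalTo: view.safeAreaLayoutGuide.topAnchor, constant: 16),
            stack.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 16),
            stack.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -16)
        ])
    }
}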
Why did the screen recorder button disappear? It cannot be found anywhere.
Topic: Accessibility & Inclusion
SubTopic: General
When using iOS VoiceOver to navigate a webpage, selecting a <button> element correctly activates the :focus-visible state. However, when VoiceOver moves to a non-button element (such as a <div> or an <a>), the previously focused button retains its :focus-visible state. The focus indicator only updates when VoiceOver moves to another <button>.
This behavior can be confusing for screen reader users, as it creates the appearance of multiple elements being focused simultaneously. It also differs from expected keyboard navigation behavior, where focus styles typically update as soon as the user moves to a new interactive element.
Is this an intentional VoiceOver behavior, or could this be a bug? If intentional, is there a recommended workaround to ensure correct focus indication when moving between different types of elements?
Steps to Reproduce:
Enable VoiceOver on an iOS device.
Navigate using swipe gestures or explore-by-touch to focus on a <button>.
Observe that the button correctly receives the :focus-visible styling.
Move to a non-button element (e.g., a <div> with tabindex="0" or an <a>).
Notice that the button still retains its :focus-visible state, even though VoiceOver has moved to a new element.
Expected Behavior:
The previously focused <button> should lose its :focus-visible state when VoiceOver moves to a different interactive element, just as it does when using keyboard navigation.
Actual Behavior:
The :focus-visible state remains on the previously focused button unless VoiceOver moves to another <button>. This can create confusion by displaying multiple focus indicators at once.
Tested On:
iOS 17.7, 18.3.1
iOS Safari
iPhone 11 Pro, iPhone 14 Pro Max
In SwiftUI, the date picker component fails colour contrast accessibility checks. The code below was used to create the date picker:
struct ContentView: View {
    @State private var date = Date()
    @State private var selectedDate: Date = .init()

    var body: some View {
        let min = Calendar.current.date(byAdding: .day, value: 14, to: Date()) ?? Date()
        let max = Calendar.current.date(byAdding: .year, value: 4, to: Date()) ?? Date()

        DatePicker(
            "Start Date",
            selection: $date,
            in: min ... max,
            displayedComponents: [.date]
        )
        .datePickerStyle(.graphical)
        .frame(alignment: .topLeading)
        .onAppear {
            selectedDate = Calendar.current.date(byAdding: .day, value: 14, to: Date()) ?? Date()
        }
    }
}

#Preview {
    ContentView()
}
Attaching the screenshot of the accessibility failure.
I’m trying to customize the keyboard focus appearance in SwiftUI.
In UIKit (see WWDC 2021 session Focus on iPad keyboard navigation), it’s possible to remove the default UIFocusHaloEffect and change a view’s appearance depending on whether it has focus or not.
In SwiftUI I’ve tried the following:
.focusable() // .focusable(true, interactions: .activate)
.focusEffectDisabled()
.focused($isFocused)
However, I’m running into several issues:
.focusable(true, interactions: .activate) causes an infinite loop, so keyboard navigation stops responding
.focusEffectDisabled() doesn’t seem to remove the default focus effect on iOS
Using @FocusState prevents Space from triggering the action when the view has keyboard focus
My main questions:
How can I reliably detect whether a SwiftUI view has keyboard focus? (Is there an alternative to FocusState that integrates better with keyboard navigation on iOS?)
What’s the recommended way in SwiftUI to disable the default focus effect (the blue overlay) and replace it with a custom border?
Any guidance or best practices would be greatly appreciated!
Here's my sample code:
import SwiftUI

struct KeyboardFocusExample: View {
    var body: some View {
        // The ScrollView is required, otherwise the custom focus value resets
        // to false after a few seconds. I also need it for my actual use case.
        ScrollView {
            VStack {
                Text("First button")
                    .keyboardFocus()
                    .button {
                        print("First button tapped")
                    }

                Text("Second button")
                    .keyboardFocus()
                    .button {
                        print("Second button tapped")
                    }
            }
        }
    }
}

// MARK: - Focus Modifier

struct KeyboardFocusModifier: ViewModifier {
    @FocusState private var isFocused: Bool

    func body(content: Content) -> some View {
        content
            .focusable() // ⚠️ Must come before .focused(), otherwise the FocusState won't be recognized
            // .focusable(true, interactions: .activate) // ⚠️ This causes an infinite loop, so keyboard navigation no longer responds
            .focusEffectDisabled() // ⚠️ Has no effect on iOS
            .focused($isFocused)
            // Custom halo effect
            .padding(4)
            .overlay(
                RoundedRectangle(cornerRadius: 18)
                    .strokeBorder(
                        isFocused ? .red : .clear,
                        lineWidth: 2
                    )
            )
            .padding(-4)
    }
}

extension View {
    public func keyboardFocus() -> some View {
        modifier(KeyboardFocusModifier())
    }
}

// MARK: - Button Modifier

/// ⚠️ Using a Button view makes no difference
struct ButtonModifier: ViewModifier {
    let action: () -> Void

    func body(content: Content) -> some View {
        content
            .contentShape(Rectangle())
            .onTapGesture {
                action()
            }
            .accessibilityAction {
                action()
            }
            .accessibilityAddTraits(.isButton)
            .accessibilityElement(children: .combine)
            .accessibilityRespondsToUserInteraction()
    }
}

extension View {
    public func button(action: @escaping () -> Void) -> some View {
        modifier(ButtonModifier(action: action))
    }
}
}