In SwiftUI, iOS 18.1.1, Xcode 16.1, the following control:
Text(12345678, format: .byteCount(style: .binary))
displays text with an MB (megabytes) unit, but German VoiceOver reads "MB" as "millibars".
I tried explicitly specifying the units with:
Text(12345678, format: .byteCount(style: .memory, allowedUnits: .mb))
but the result is the same (German VoiceOver still says "millibars").
Aside from creating my own accessibility label, is there any way to work around this?
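For reference, the workaround I'd like to avoid looks roughly like this (the view and property names are illustrative); spelling the unit out keeps VoiceOver from ever seeing the ambiguous "MB" abbreviation:

import SwiftUI

// Illustrative workaround: an explicit accessibility label that spells the unit out.
struct FileSizeText: View {
    let byteCount: Int64 = 12_345_678

    var body: some View {
        Text(byteCount, format: .byteCount(style: .binary))
            .accessibilityLabel(
                Text(byteCount, format: .byteCount(style: .binary, spellsOutUnit: true))
            )
    }
}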
Hi, I'm planning to build a macOS app and distribute it through the Mac App Store.
My question is whether I should force users to update when an update is needed.
The reason I want this feature is that I don't want users running a previous version of the app.
My plan is as follows:
When the app needs an update, send the user to a dedicated page that explains why the update is needed, with a button to download the new version of the app.
The download should happen automatically in the background, without the user needing to visit the App Store.
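For context, here is a rough sketch of the version-check part of this plan (the bundle ID and lookup URL are placeholders, and the automatic background download is exactly the piece I can't find a supported way to do; this sketch just opens the App Store product page instead):

import Foundation
import AppKit

// Illustrative only: compare the running version against the latest version
// reported by the iTunes Search API, then send the user to the store page.
func checkForRequiredUpdate(bundleID: String = "com.example.myapp") async throws -> Bool {
    struct Lookup: Decodable {
        struct Result: Decodable {
            let version: String
            let trackViewUrl: String
        }
        let results: [Result]
    }

    let url = URL(string: "https://itunes.apple.com/lookup?bundleId=\(bundleID)")!
    let (data, _) = try await URLSession.shared.data(from: url)
    guard let latest = try JSONDecoder().decode(Lookup.self, from: data).results.first else {
        return false
    }

    let current = Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String ?? "0"
    let updateNeeded = current.compare(latest.version, options: .numeric) == .orderedAscending

    if updateNeeded, let storeURL = URL(string: latest.trackViewUrl) {
        // The actual download still happens in the App Store app.
        NSWorkspace.shared.open(storeURL)
    }
    return updateNeeded
}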
I've searched several forums and asked GPT, but there were no positive replies about this,
so I'm finally posting here to find out whether there is really no way to do it.
Thank you!
Topic: Accessibility & Inclusion
SubTopic: General
I’m currently exploring VoiceOver accessibility in iOS and looking for the best way to reduce the number of swipes required to navigate a UITableView. I’ve come across a couple of potential solutions but am unsure which is preferred.
Solution 1: Grouping Subviews in Each Cell
Combine all subviews inside a UITableViewCell into a single accessibility element.
Provide a concise and meaningful accessibilityLabel.
Use custom actions (UIAccessibilityCustomAction) or accessibilityActivationPoint to handle interactions on specific elements within the cell (a sketch of this follows below).
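A minimal sketch of Solution 1, assuming a hypothetical cell with two labels and a delete button (subview layout omitted):

import UIKit

// Hypothetical cell: all subviews collapse into one VoiceOver element,
// and the button is exposed as a custom action instead of a separate swipe stop.
final class OrderCell: UITableViewCell {
    let titleLabel = UILabel()
    let subtitleLabel = UILabel()
    let deleteButton = UIButton(type: .system)

    func configure(title: String, subtitle: String) {
        titleLabel.text = title
        subtitleLabel.text = subtitle

        // One concise element per cell.
        isAccessibilityElement = true
        accessibilityLabel = "\(title), \(subtitle)"

        // The button stays reachable through the actions rotor.
        accessibilityCustomActions = [
            UIAccessibilityCustomAction(name: "Delete") { [weak self] _ in
                self?.deleteButton.sendActions(for: .touchUpInside)
                return true
            }
        ]
    }
}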
Solution 2: Using UIAccessibilityContainerDataTableCell & UIAccessibilityContainerDataTable
Implement UIAccessibilityContainerDataTable for structured table navigation.
Make each cell conform to UIAccessibilityContainerDataTableCell, defining its row and column positions.
However, I'm finding this approach a bit complex, and I need guidance on properly implementing these protocols; a rough attempt is sketched below.
Additionally, in my case, VoiceOver is not navigating to Section 2—I’m not sure why.
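For reference, my rough understanding of Solution 2 looks like this (the names are hypothetical, and the data is treated as a single column):

import UIKit

// Each cell reports its position in the logical table.
final class DataCell: UITableViewCell, UIAccessibilityContainerDataTableCell {
    var row = 0   // set by the data source when the cell is configured

    func accessibilityRowRange() -> NSRange { NSRange(location: row, length: 1) }
    func accessibilityColumnRange() -> NSRange { NSRange(location: 0, length: 1) }
}

// The table view vends cells by row/column and reports its dimensions.
final class DataTableView: UITableView, UIAccessibilityContainerDataTable {
    override func didMoveToWindow() {
        super.didMoveToWindow()
        accessibilityContainerType = .dataTable   // opt into data-table semantics
    }

    func accessibilityDataTableCellElement(forRow row: Int, column: Int) -> UIAccessibilityContainerDataTableCell? {
        cellForRow(at: IndexPath(row: row, section: 0)) as? DataCell
    }

    func accessibilityRowCount() -> Int { numberOfRows(inSection: 0) }
    func accessibilityColumnCount() -> Int { 1 }
}

Note that this sketch only maps rows in section 0; rows in later sections would need to be folded into a single flat row index, which may be related to the Section 2 issue.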
Questions:
Which of these approaches is generally preferred for better VoiceOver navigation?
How do I properly implement UIAccessibilityContainerDataTable so that all sections and rows are navigable?
Any best practices or alternative recommendations?
Would really appreciate any insights or guidance!
I learned that I need to create an iTunes Connect account to publish my book translations on Apple Books, following the instructions on the Apple Support page. I was then directed to the iTunes Connect website. Despite trying multiple Apple accounts with different credit cards on my Mac, iPad, and iPhone, I kept getting the error “This Apple Account does not have a valid credit card on file.” In the end, I began to wonder if iTunes Connect is unavailable in Turkey. What do I need to do to publish content on Apple Books from Turkey? Do I need to obtain a developer account, or is this service not available in Turkey?
The Apple customer service representative I contacted in Turkey said they didn’t have information on the matter and directed me here.
Topic: Accessibility & Inclusion
SubTopic: General
iOS 18.3.1, iPhone 16 Pro.
I pick photos from the user's photo library, using a connected physical keyboard, with:
.photosPicker(isPresented: $viewModel.isImagePickerPresented, selection: $viewModel.selectedImageItem, matching: .images)
When the picker appears, accessibility focus moves to the Dynamic Island instead of the Cancel button. There is no way to navigate the photos picker view by keyboard without tapping on the view to move focus to it manually. I noticed the same behavior in the Notes app.
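For reference, a minimal reproduction sketch of the setup (the view model is flattened into local state here):

import SwiftUI
import PhotosUI

struct PickerRepro: View {
    @State private var isImagePickerPresented = false
    @State private var selectedImageItem: PhotosPickerItem?

    var body: some View {
        Button("Pick photo") { isImagePickerPresented = true }
            .photosPicker(isPresented: $isImagePickerPresented,
                          selection: $selectedImageItem,
                          matching: .images)
    }
}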
I’ve tried implementing the accessibilityPerformMagicTap() method in a specific UIViewController, its view, and even in AppDelegate, but I am not receiving any callbacks.
I directly overrode this method in the mentioned areas, but it never gets triggered when performing a magic tap.
How can I properly observe and handle the accessibilityPerformMagicTap() action?
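For reference, this is roughly what the override looks like in one of the view controllers (the class name is a placeholder):

import UIKit

// Placeholder view controller: VoiceOver should deliver the magic tap
// (two-finger double-tap) to the focused element first and then walk up
// the hierarchy; returning true marks the gesture as handled.
final class PlaybackViewController: UIViewController {

    override func accessibilityPerformMagicTap() -> Bool {
        togglePlayback()
        return true
    }

    private func togglePlayback() {
        // Toggle the app's primary action here.
    }
}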
Hey folks,
I want VoiceOver to speak punctuation in certain cases. On iOS, there seems to be the UIAccessibilitySpeechAttributePunctuation attributed string key to achieve that, but I can't find an alternative for macOS.
What is the recommended approach for achieving the same result?
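For reference, the iOS approach I mentioned looks roughly like this (the label and string are just placeholders):

import UIKit

// Placeholder label whose accessibility label opts into spoken punctuation.
let codeLabel = UILabel()
codeLabel.text = "let x = y; // trailing comment"
codeLabel.accessibilityAttributedLabel = NSAttributedString(
    string: codeLabel.text!,
    attributes: [.accessibilitySpeechPunctuation: true]
)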
Hi Apple, I've been having a problem with my iPad Pro (5th generation).
Since updating my iPad, it has been acting weird lately. To be specific, it keeps closing twice at random, the widgets turn white, and my screen keeps going black.
When I go into apps, it keeps exiting out of the app.
Also, the new Siri is very slow and won't answer when I say "Hey Siri", except on random occasions.
Please help me fix this problem, because I need my iPad for studying.
Thank you.
Topic: Accessibility & Inclusion
SubTopic: General
Hello Apple Community,
I'm experiencing an issue with "Vocal Shortcuts" on iOS. I created a trigger in Vocal Shortcuts to run a specific shortcut, and it works perfectly on the first day. However, by the next day, the voice command stops functioning entirely. To make it work again, I have to disable and then re-enable Vocal Shortcuts in the settings.
I've tested this on multiple devices (iPhone 11, iPhone 13, and iPhone X), all running the latest iOS version, and the same problem occurs on each one. Is there any additional configuration needed, or could this be a bug?
Any advice or insights would be greatly appreciated!
Thank you in advance,
In some places of our app we make use of NSAccessibilityElement subclasses to vend some extra items to accessibility clients.
We need to know which item has the VoiceOver focus so we can keep track of it.
setAccessibilityFocused: does not get called when accessibility clients focus NSAccessibilityElements. This method is only called when accessibility clients focus view-based accessibility elements (i.e. when a NSView subclass gets focused).
At the same time we need to programmatically move VoiceOver focus to those items when something happens. Those accessibility elements inherit from NSObject so we can't make them first responder.
Is this the expected behavior? What are our options in terms of reacting to VoiceOver cursor moving around? What are our options in terms of programmatically moving the VoiceOver cursor to a different element?
Here's a sample project that demonstrates the first part of the issue: https://github.com/vendruscolo/apple-rdars/tree/master/DTS12368714%20-%20NSAccessibilityElement%20focus%20tracking
If you run the app, a window will show up. It contains a button and a red square. If you enable VoiceOver, you'll be able to move the cursor over the red square, and a message will be logged. You'll also notice there's an extra element after the red square. That element is available to VoiceOver; however, when it gets focused, no message gets logged.
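For reference, this is roughly the pattern involved, plus the layout-changed notification we've been experimenting with for programmatic focus moves (the element names are placeholders):

import AppKit

// Hypothetical vended element; we'd expect this override to fire when
// VoiceOver focuses the element, but it doesn't for non-view elements.
final class ExtraElement: NSAccessibilityElement {
    override func setAccessibilityFocused(_ accessibilityFocused: Bool) {
        super.setAccessibilityFocused(accessibilityFocused)
        NSLog("Extra element focused: \(accessibilityFocused)")
    }
}

// One way we've tried to move the VoiceOver cursor programmatically:
// post a layout-changed notification naming the element to focus.
func moveVoiceOverCursor(to element: ExtraElement) {
    NSAccessibility.post(
        element: element,
        notification: .layoutChanged,
        userInfo: [.uiElements: [element]]
    )
}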
In our application we are using UITableView for data population, and each table view cell contains a button. When Full Keyboard Access is enabled, only the table view cell receives focus, not the button. We need the cell and the button to be focusable separately.
In our application we are using a search bar in a popover view, with Accessibility Full Keyboard Access enabled and an external keyboard connected. Currently, when focus is on the search bar, pressing the Tab key dismisses the search bar; instead, focus should shift to the next UI element.
In our application we use OTP login. With Full Keyboard Access enabled, when we enter a digit in an OTP field, on iOS 17 focus moves to the next text field as expected, but on iOS 18 focus stays in the first OTP field and does not move to the next one.
In our application we are using UIAlertController, presented as a popover. With Full Keyboard Access enabled, we cannot dismiss the alert with the Esc key on an external keyboard. We need to dismiss the UIAlertController with an Esc key press from the external keyboard.
Hello,
AVSpeechSynthesisVoice has an audioFileSettings attribute:
let utterance = AVSpeechUtterance(string: text)
utterance.voice = AVSpeechSynthesisVoice(identifier: voiceSelected!)
print("- voice \(utterance.voice!.audioFileSettings)")
["AVLinearPCMIsBigEndianKey": 0, "AVLinearPCMIsFloatKey": 1, "AVLinearPCMIsNonInterleaved": 1, "AVNumberOfChannelsKey": 1, "AVSampleRateKey": 22050, "AVFormatIDKey": 1819304813, "AVLinearPCMBitDepthKey": 32]
This is declared in
AVSpeechSynthesisVoice {
    ...
    @available(iOS 13.0, *)
    open var audioFileSettings: [String : Any] { get }
    @available(iOS 17.0, *)
    open var voiceTraits: AVSpeechSynthesisVoice.Traits { get }
}
How can we specify the audioFileSettings attributes in an AVSpeechSynthesisProviderVoice?
Because in AVSpeechSynthesisProviderVoice there is no such field:
AVSpeechSynthesisProviderVoice {
    open var name: String { get }
    open var identifier: String { get }
    open var primaryLanguages: [String] { get }
    open var supportedLanguages: [String] { get }
    open var voiceSize: Int64
    open var version: String
    open var gender: AVSpeechSynthesisVoiceGender
    open var age: Int
}
Regards
Topic: Accessibility & Inclusion
SubTopic: General
My team is designing an app for retail associates who need to share managed iPads. We keep the device in Guided Access mode, locked to our login app, until an auth token is obtained; then the iPad is opened for general use. On sign-out we need to re-enter Guided Access mode, which is easy to do for a manual sign-out. But for an idle sign-out, i.e. after 60 minutes of inactivity, we need to be able to make a call from the background (even in a locked state) to sign out the user, clear the passcode, and enter Single App Mode before restarting, so that once the device restarts, the app is locked again until the next user provides credentials that can obtain a new auth token.
We are struggling to see whether this is even possible. Our bosses will be displeased if we tell them it isn't, so any tips would be very much appreciated.
Hello!
I'm trying to improve the accessibility of a UIKit login form in our iOS app. If an error occurs, an error message is shown in a label that is hidden by default. For our VoiceOver users, I want to move the focus to the error message label so that VoiceOver reads out the error message.
I'm trying to achieve this using UIAccessibility.post, but try as I might, it does not work. To better understand the problem, I created a very simple App which shows a button and a label (always visible), and on pressing the button, I post an accessibility notification:
UIAccessibility.post(notification: .layoutChanged, argument: label)
What I expect to happen is for the focus to move from the button to the label. What happens instead is the focus stays with the button and VoiceOver reads out the button's label again. So it seems to process the notification, but ignore the argument.
Am I misunderstanding how accessibility notifications work, or is this simply broken at the moment? I am testing this with my iPhone on the current iOS version, 18.2.1.
By the way, using the more modern variant leads to the same result:
AccessibilityNotification.LayoutChanged(label).post()
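For reference, a minimal sketch of the test setup described above (names are placeholders):

import UIKit

final class FocusTestViewController: UIViewController {
    private let label = UILabel()
    private let button = UIButton(type: .system)

    override func viewDidLoad() {
        super.viewDidLoad()
        view.backgroundColor = .systemBackground

        label.text = "Error: something went wrong"
        button.setTitle("Trigger error", for: .normal)
        button.addTarget(self, action: #selector(buttonTapped), for: .touchUpInside)

        let stack = UIStackView(arrangedSubviews: [button, label])
        stack.axis = .vertical
        stack.spacing = 16
        stack.translatesAutoresizingMaskIntoConstraints = false
        view.addSubview(stack)
        NSLayoutConstraint.activate([
            stack.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            stack.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }

    @objc private func buttonTapped() {
        // Expectation: VoiceOver focus moves to the label and its text is read out,
        // but in practice focus stays on the button.
        UIAccessibility.post(notification: .layoutChanged, argument: label)
    }
}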
SwiftUI provides the accessibilityCustomContent(_:_:) modifier to add additional accessibility information for an element. However, I couldn’t find a similar approach in UIKit.
Is there a way to achieve this in UIKit?
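One candidate that looks like the UIKit counterpart is AXCustomContentProvider from the Accessibility framework (iOS 14 and later); a rough sketch, with hypothetical cell content:

import UIKit
import Accessibility

// Vend extra, lower-priority details via AXCustomContent.
final class RecipeCell: UITableViewCell, AXCustomContentProvider {
    var rating = "4.5 stars"

    var accessibilityCustomContent: [AXCustomContent]! {
        get {
            let content = AXCustomContent(label: "Rating", value: rating)
            content.importance = .default   // .high would make VoiceOver always speak it
            return [content]
        }
        set { }
    }
}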
We currently have an odd issue with VoiceOver spelling a word letter by letter while the same word is spoken as a whole for other items.
The app is in German.
I have a View in SwiftUI whose button traits are removed, and a label "Start Tab 1 von 5" is added. "Tab" is spoken as a whole word here; all fine.
If I change the label to "Tab-Schaltfläche" or, for example, "SimplyGo Tab 3 von 5", then "Tab" is spoken as "T A B", letter by letter. Is there a way to force VoiceOver to speak it as a whole word?
Topic: Accessibility & Inclusion
SubTopic: General
We are unable to programmatically enable AppleScript automation for VoiceOver on macOS 15 (Sequoia)
In macOS 15, Apple moved the VoiceOver configuration from:
~/Library/Preferences/com.apple.VoiceOver4/default.plist
to a sandboxed path:
~/Library/Group Containers/group.com.apple.VoiceOver/Library/Preferences/com.apple.VoiceOver4/default.plist
Steps to Reproduce:
1. Use a macOS 15 (ARM64) machine (or a GitHub Actions runner image with macOS 15 ARM).
2. Open VoiceOver:
   open /System/Library/CoreServices/VoiceOver.app
3. Set the SCREnableAppleScript flag to true in the new sandboxed .plist:
   plutil -replace SCREnableAppleScript -bool true ~/Library/Group\ Containers/group.com.apple.VoiceOver/Library/Preferences/com.apple.VoiceOver4/default.plist
4. Confirm that csrutil status is either disabled or not enforced.
5. Attempt to control VoiceOver via AppleScript (e.g., using osascript voiceOverPerform.applescript).
6. Observe that the AppleScript command fails with no useful output (exit code 1), and VoiceOver does not respond to automation.
Topic: Accessibility & Inclusion
SubTopic: General
Tags: macOS, Accessibility, App Sandbox, AppleScript