My app does not automatically switch languages (voices) in VoiceOver when VoiceOver is on and the screen includes both English and Spanish content. Instead of switching to the correctly accented voice for each language, all content is announced with whatever voice is set manually in the Voices rotor. I can manually switch the voice in the rotor to make words sound intelligible, but my main concern is that language changes are not auto-detected even though that feature is on in my Settings.
VoiceOver does detect language changes in other apps, so I think there must be either misplaced or missing accessibilityLanguage strings somewhere in my app. Or is there more to it in terms of localization considerations?
I reached out to the Apple Accessibility team and was directed to open a ticket here, as my question is about the underlying code.
I am a novice developer and primarily an accessibility SME; I expect that when "Detect Languages" is on in the user's VoiceOver settings, the voice for screen reader speech output will automatically switch to the correct language/accent. I recognize there is a problem but am not sure where the breakdown is. I would like guidance on how to fix it to relay to my teams.
https://developer.apple.com/documentation/objectivec/nsobject/1615192-accessibilitylanguage
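For reference, here's roughly how I understand per-element language tagging is supposed to work (a sketch; the labels and strings are illustrative, not my app's actual code):

import UIKit

// Sketch: tagging each element's language so VoiceOver can pick the
// matching voice. Labels and strings here are just examples.
let englishLabel = UILabel()
englishLabel.text = "Welcome"
englishLabel.accessibilityLanguage = "en"  // BCP 47 language code

let spanishLabel = UILabel()
spanishLabel.text = "Bienvenidos"
spanishLabel.accessibilityLanguage = "es"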
As of the iOS 18.1 release, our users are experiencing issues with our app, which relies on strobing the device torch.
We have narrowed this down to being caused on devices with adaptive true-tone flash and have submitted a radar: FB15787160.
The issue seems to be caused by ambient light levels. In a dark room, the torch strobes exactly as effectively as in previous iOS versions. In a bright room, outdoors, or near a window, the strobe runs for ~1 s, then the torch gets stuck on for half a second or so (less frequently it gets stuck off), then strobes again for ~1 s, and this behaviour repeats indefinitely.
If we go to a darker environment, and background and then foreground the app (this is required) the issue is resolved, until moving to an area with higher ambient light levels again. We have done a lot of debugging, and also discovered that turning off "Auto-Brightness" from Settings -> Accessibility -> Display & Text Size resolves the issue.
We have also viewed logs from Console.app at the time of the issue occurring and it seems to be that there are quite sporadic ambient light level readings at the time at which the issue occurs. The light readings transition from ~100 Lux to ~8000 Lux at the point that the issue starts occurring (seemingly caused by the rear sensor being affected by the torch). With "Auto-Brightness" turned off, it seems these readings stay at lower levels.
This is rendering the primary use case of our app essentially useless, so it would be great to get to the bottom of it! We can't even really detect it in-app, as I believe SensorKit is restricted to research applications and requires a review process with Apple before access is granted.
Edit: It's worth noting this is also affecting other apps with strobe functionality in the exact same way
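For anyone wanting to reproduce, a strobe loop along these lines shows the behaviour (a sketch, not our exact code; the interval and torch level are illustrative):

import AVFoundation

// Simplified strobe loop. Assumes a device with a torch.
func strobe(times: Int, interval: TimeInterval) {
    guard let device = AVCaptureDevice.default(for: .video), device.hasTorch else { return }
    for _ in 0..<times {
        do {
            try device.lockForConfiguration()
            try device.setTorchModeOn(level: AVCaptureDevice.maxAvailableTorchLevel)
            device.unlockForConfiguration()
            Thread.sleep(forTimeInterval: interval)

            try device.lockForConfiguration()
            device.torchMode = .off
            device.unlockForConfiguration()
            Thread.sleep(forTimeInterval: interval)
        } catch {
            return
        }
    }
}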
I have a stack view which has 2 labels:
class TextView: UIView {
    @IBOutlet private weak var stackView: UIStackView! {
        didSet {
            stackView.isAccessibilityElement = true
            // Note: outlet order isn't guaranteed, so label1/label2 may
            // still be nil when this runs.
            stackView.accessibilityLabel = [label1?.text, label2?.text]
                .compactMap { $0 }
                .joined(separator: " ")
        }
    }
    @IBOutlet private weak var label1: UILabel! {
        didSet {
            label1.accessibilityIdentifier = "label1"
        }
    }
    @IBOutlet private weak var label2: UILabel! {
        didSet {
            label2.accessibilityIdentifier = "label2"
        }
    }
}
My goal here is to have a combined accessibility label for the stack view and yet be able to access the accessibilityIdentifier of the child elements for automation.
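One variation that might sidestep the didSet ordering problem is computing the combined label on demand instead of caching it (a sketch under the same assumptions; I haven't verified it against automation tooling):

// Sketch: a stack view subclass that is its own accessibility element
// and builds the combined label lazily from the labels' current text.
final class CombinedLabelStackView: UIStackView {
    weak var label1: UILabel?
    weak var label2: UILabel?

    override var isAccessibilityElement: Bool {
        get { true }
        set { }
    }

    override var accessibilityLabel: String? {
        get { [label1?.text, label2?.text].compactMap { $0 }.joined(separator: ", ") }
        set { }
    }
}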
VoiceOver does not support the plist property CFBundleSpokenName. This is wrong and should be fixed.
Ultimately the issue I am dealing with is that our app name is UWCU, and instead of VoiceOver pronouncing each letter, it tries to read this as a word and horribly butchers our organization's/app's name.
Alternatives such as using U.W.C.U. and U W C U are not acceptable.
@Apple, I know your first response is going to be "no, it is working perfectly," but quite frankly you are wrong. I know you feel strongly about this, given your response in posts like this:
https://forums.developer.apple.com/forums/thread/734545?answerId=760084022
HOWEVER, with iOS 18, your argument that "VoiceOver should read what's on the screen" doesn't hold water anymore. With iOS 18, you (Apple) have added a new feature that lets users customize their home screens and completely remove the names of apps. Here's your own guide:
https://support.apple.com/guide/iphone/customize-apps-and-widgets-on-the-home-screen-iph385473442/ios
Quoted from your guide:
Make the icons bigger: Tap Large. (In large size, the names of the apps disappear.)
With large icons + VoiceOver turned on, VoiceOver still reads the app name even though it has disappeared from the screen. So, your own argument "VoiceOver should read the text as it appears on the screen" is invalid, because there is NO text on the screen.
If you can't tell, I'm pretty peeved about all this. There's a reason why screen readers support aria attributes to help deliver the right accessible experience. It's a simple ask for VoiceOver to do the same thing.
In iOS 17, the call to "UIApplication.shared.open("App-prefs:ACCESSIBILITY&path=HEARING_AID_TITLE")" was opening the device Settings and going to Accessibility and then Hearing Device which is very helpful.
Now in iOS 18, this call only opens the device Settings at the root.
I would like to know how to replace the URL so that it works like before.
canOpenURL does return true, so I'm wondering if something is broken, or if canOpenURL is kind of lying a bit.
I also tried other paths to go to other screens and they don't work either.
So, I'm trying to create my own text-to-speech setup. The problem I'm having is that whenever I do a test run, the speech gets a bit choppy at the start, kind of skipping over a word or a few characters.
A few details:
I've essentially built a separate class for handling the speech events.
AVSpeechSynthesizer is set up as a private variable for the class so I don't expect deallocation to be the issue. Especially since it's a problem at the start.
I've got a queue set up for what it's worth so that shouldn't be a problem.
I'd appreciate any advice.
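One thing I've considered is whether the audio session starting up lazily could be clipping the first words; here's the kind of warm-up I've been experimenting with (a sketch; the class and method names are illustrative):

import AVFoundation

// Sketch: activate the audio session before the first utterance so
// startup doesn't clip the beginning of the speech.
final class Speaker {
    private let synthesizer = AVSpeechSynthesizer()

    init() {
        try? AVAudioSession.sharedInstance().setCategory(.playback, mode: .spokenAudio)
        try? AVAudioSession.sharedInstance().setActive(true)
    }

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.preUtteranceDelay = 0.1  // a small delay can also mask startup clipping
        synthesizer.speak(utterance)
    }
}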
Hi, is there any way to interact with the eye tracking accessibility feature in iOS and iPadOS? I want to be able to trigger an eye tracker recalibration programmatically. Also if possible, I would like to be able to retrieve gaze location data.
These are not intended to be features of an app published to the App Store, but rather part of a custom-made accessibility app.
I have a record button that either starts or stops a recording using the default action. When the user is recording, I want to add a custom action to discard the recording instead of saving it. That all works fine with the following code:
if isRecording {
    recordButton.accessibilityCustomActions = [
        .init(name: String(localized: "discard recording"), actionHandler: { [weak delegate] _ in
            delegate?.discardRecording()
            return true
        })
    ]
    recordButton.accessibilityLabel = String(localized: "stop recording", comment: "accessibility label")
} else {
    recordButton.accessibilityCustomActions = []
    recordButton.accessibilityLabel = String(localized: "start recording", comment: "accessibility label")
}
The problem I have is that when a user chooses "discard recording", it remains the selected action the next time the user records, and instead of stopping and saving the recording, the user might accidentally discard the next one as well.
How can I programmatically reset the selected action on this recordButton to the default action?
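The only candidate I've found so far is re-notifying VoiceOver after the state change, though I'm not sure it actually resets the remembered action:

// Speculative: tell VoiceOver the element changed so it may re-evaluate
// its selected custom action. Unverified that this resets the selection.
UIAccessibility.post(notification: .layoutChanged, argument: recordButton)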
I want to create a utility to import a list of words to the Voice Control user custom vocabulary. Is there an API to do this? I noticed if you use the built-in export vocabulary functionality (Settings > Accessibility > Voice Control > ...) the file that gets exported is a plist document type. If there is no API to add words programmatically should I just create a utility that generates a plist file and import it using the built-in import vocabulary functionality (Settings > Accessibility > Voice Control > ...)?
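If generating the file is the way to go, a minimal sketch of writing a plist from a word list might look like this (the entry structure below is a placeholder; the real schema should be copied from a file exported via Settings):

import Foundation

// Sketch: serialize a word list to a plist. The "text" key is a placeholder;
// inspect an exported vocabulary file for the actual schema.
let words = ["anodized", "kerning", "viewport"]
let entries = words.map { ["text": $0] }
do {
    let data = try PropertyListSerialization.data(fromPropertyList: entries, format: .xml, options: 0)
    try data.write(to: URL(fileURLWithPath: "CustomVocabulary.plist"))
} catch {
    print("Failed to write plist:", error)
}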
When the native info panel (which displays the title, subtitle, description, and custom buttons) opens, the focus immediately shifts to the first button. As a result, VoiceOver skips the description, which is crucial for users relying on accessibility features.
I haven’t found a way to detect when it opens. Knowing this would allow me to trigger custom VoiceOver announcements or adjust the focus order dynamically.
Is anyone else experiencing this issue, and how do we solve it?
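If a reliable trigger existed, the announcement itself would be the easy part (a sketch; descriptionText is a placeholder for the panel's description):

// Sketch: manually announce the description once the panel is known to be open.
UIAccessibility.post(notification: .announcement, argument: descriptionText)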
Last November 13 I came up with a phone keyboard layout (strategy) that can make key size bigger hence less mistyping.
The typical phone keyboard looks like this:
My proposed keyboard looks like this:
Essentially, it's a split keyboard with the left-hand part stacked above/below the right-hand part. Key size/width/height and the vertical distance between the left-hand part and right-hand part may be adjustable to suit different phone widths and user hand sizes.
You can display the proposed keyboard's image on your phone, fitted to your phone's width, to actually simulate typing on it and see how it feels. On my phone, the letter keys are a little too big for my thumbs to reach the farthest keys, but as I said, key size should be adjustable to suit different phone widths and user hand sizes.
I frequently use my iPad to develop remotely in VSCode or prototype designs in Figma. This is currently all done in the browser. Given that many of these experiences rely heavily on the keyboard I was hoping there would be a solution to make a keyboard persistent on the screen, or at least a few hot keys.
Is it possible for me to develop an accessibility tool that could stay persistent on the screen? Perhaps something that would talk with AssistiveTouch? Or is that in Apple's no-no square?
My app uses PDFKit, and I don't know how to solve this bug at all. On the same iOS version and device model, some users' devices crash while our own devices work normally.
The following is the stack trace from the crash:
0 libsystem_platform.dylib _os_unfair_lock_recursive_abort + 36
1 libsystem_platform.dylib _os_unfair_lock_lock_slow + 308
2 CoreGraphics CGPDFPageCopyRootTaggedNode + 56
3 PDFKit -[PDFPageViewAccessibility accessibilityElements] + 76
4 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityElements] + 56
5 UIAccessibility -[NSObjectAccessibility accessibilityElementCount] + 68
6 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityHasOrderedChildren] + 44
7 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityFrameForSorting] + 216
8 UIAccessibility -[NSObject _accessibilityCompareGeometry:] + 116
9 UIAccessibility -[NSObject(AXPrivCategory) accessibilityCompareGeometry:] + 52
10 CoreFoundation __CFSimpleMergeSort + 100
11 CoreFoundation __CFSimpleMergeSort + 248
12 CoreFoundation CFSortIndexes + 260
13 CoreFoundation -[NSArray sortedArrayFromRange:options:usingComparator:] + 732
14 CoreFoundation -[NSMutableArray sortedArrayFromRange:options:usingComparator:] + 60
15 CoreFoundation -[NSArray sortedArrayUsingSelector:] + 168
16 UIAccessibility __57-[NSObject(AXPrivCategory) _accessibilityFindDescendant:]_block_invoke + 268
17 UIAccessibility __96-[NSObject(AXPrivCategory) _accessibilityEnumerateAXDescendants:passingTest:byYieldingElements:]_block_invoke + 140
18 UIAccessibility -[NSObject _accessibilityEnumerateAXDescendants:passingTest:byYieldingElements:] + 244
19 UIAccessibility -[NSObject _accessibilityFindFirstAXDescendantPassingTest:byYieldingElements:] + 272
20 UIAccessibility -[NSObject(AXPrivCategory) _accessibilityFindDescendant:] + 100
21 UIAccessibility _axuiElementForNotificationData + 276
22 UIAccessibility _massageAssociatedElementBeforePost + 36
23 UIAccessibility _UIAXBroadcastMainThread + 292
24 libdispatch.dylib _dispatch_call_block_and_release + 32
25 libdispatch.dylib _dispatch_client_callout + 20
26 libdispatch.dylib _dispatch_main_queue_drain + 980
27 libdispatch.dylib _dispatch_main_queue_callback_4CF + 44
28 CoreFoundation __CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__ + 16
29 CoreFoundation __CFRunLoopRun + 1996
30 CoreFoundation CFRunLoopRunSpecific + 572
31 GraphicsServices GSEventRunModal + 164
32 UIKitCore -[UIApplication _run] + 816
33 UIKitCore UIApplicationMain + 340
34 SwiftUI closure #1 (Swift.UnsafeMutablePointer<Swift.UnsafeMutablePointer<Swift.Int8>?>) -> Swift.Never in SwiftUI.(KitRendererCommon in _ACC2C5639A7D76F611E170E831FCA491)(Swift.AnyObject.Type) -> Swift.Never + 168
35 SwiftUI SwiftUI.runApp(A) -> Swift.Never + 100
36 SwiftUI static (extension in SwiftUI):SwiftUI.App.main() -> () + 180
I've got a UIControl, and I want to make it behave like a custom UIAccessibilityElement.
UIControl *control = [[UIControl alloc] init];
control.isAccessibilityElement = NO;
CustomAccessibilityElement *elem = [[CustomAccessibilityElement alloc] initWithAccessibilityContainer:control];
elem.isAccessibilityElement = YES;
// some custom setting here
control.accessibilityElements = @[elem];
It worked well on an iPhone 13 (iOS 15.5.1), but crashed on an iPhone SE (iOS 15.4.1) with this message:
"-[UIAccessibilityElement _addAccessibilityElementsAndOrderedContainersWithOptions:toCollection:]: unrecognized selector sent to instance 0x283b7c680"
Can you tell me the reason?
Thanks a lot.
accessibilityUserInputLabels works fine on any view I've tried it on, meaning the control can be toggled with the provided alternative names when using Voice Control.
When this property is set on a UIBarButtonItem, though, Voice Control seems to ignore the alternative names provided via accessibilityUserInputLabels. For comparison, accessibilityLabel works perfectly when set on a UIBarButtonItem.
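A minimal repro of what I'm setting (a sketch; the titles and action are placeholders):

// Sketch: the same property works on plain views but seems ignored here.
let item = UIBarButtonItem(title: "Send", style: .plain, target: self, action: #selector(send))
item.accessibilityLabel = "Send message"                      // honored by Voice Control
item.accessibilityUserInputLabels = ["Send", "Submit", "Go"]  // seemingly ignored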
Is anyone facing the same issue?
Using Xcode 16.0 (16A242) on iOS 18
I have an issue when unlocking the app from Single App mode: the device Dock pops up on top of my app, disturbing the app's experience.
I have already done the MDM configuration, and the intended functionality works fine with the code below; the app successfully switches into and out of Single App mode.
The sample code below reproduces the issue.
Tap on Lock, the completion blocks returns true and the app is successfully switched to single app mode.
Tap on Unlock, the completion block returns true and the app is successfully switched from the single app mode.
Now the device Dock pops up on top of the app.
struct LockApp: App {
    var body: some Scene {
        WindowGroup {
            VStack {
                Button("Lock", systemImage: "lock") {
                    UIAccessibility.requestGuidedAccessSession(enabled: true) { success in
                        print("App has been locked", success)
                    }
                }
                .buttonStyle(.borderedProminent)
                Button("UnLock", systemImage: "lock.open") {
                    UIAccessibility.requestGuidedAccessSession(enabled: false) { success in
                        print("App has been unlocked", success)
                    }
                }
                .buttonStyle(.borderedProminent)
            }
        }
    }
}
Xcode Version: 16.0
Device: iPad Pro 12.9 inch 6th gen. OS: 18.1
Is this intended behaviour? Has anyone come across this issue?
I'm facing an accessibility issue: when I call UIViewController.addChild(_:) and pass in another UIViewController instance, the VoiceOver focus jumps to the "Back" button in the navigation bar. How might one avoid this behaviour and keep the accessibility/VoiceOver focus where it was at the time of adding the child?
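The closest workaround I've found is explicitly handing focus back after adding the child, though I'd prefer the focus simply not move (a sketch; childVC and focusTarget are placeholders for the child controller and the view that should keep focus):

// Sketch: re-anchor VoiceOver focus after the child is added.
addChild(childVC)
view.addSubview(childVC.view)
childVC.didMove(toParent: self)
UIAccessibility.post(notification: .screenChanged, argument: focusTarget)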
I am trying to get a notification when Guided Access is enabled or disabled on the Vision Pro.
For doing so you would normally just call:
NotificationCenter.default.addObserver(forName: UIAccessibility.guidedAccessStatusDidChangeNotification, object: nil, queue: .main) { notification in
    print("guided access did change")
}
and this works fine on iOS devices.
But running the exact same code on visionOS results in no notification at all, even though Guided Access gets enabled or disabled.
For testing, I ran a simple default app that works perfectly on both OS types.
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .onAppear {
            print("is appearing")
            NotificationCenter.default.addObserver(forName: UIAccessibility.guidedAccessStatusDidChangeNotification, object: nil, queue: .main) { _ in
                print("guided access did change")
            }
        }
        .padding()
    }
}
But as said, it prints "guided access did change" on iOS but not on the Vision Pro.
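As a stopgap I've considered polling the status flag instead of relying on the notification (a sketch; the one-second interval is arbitrary):

// Sketch: poll Guided Access state until the notification works on visionOS.
Timer.scheduledTimer(withTimeInterval: 1.0, repeats: true) { _ in
    print("Guided Access enabled:", UIAccessibility.isGuidedAccessEnabled)
}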
I'm trying to include Apple's Personal Voice feature in an app I'm working on, but I want to use a button or toggle to request access, rather than firing the request on first launch. The problem is that if AVSpeechSynthesizer is used during the same session before Personal Voice is authorized, the app has to be restarted to use the feature.
Here is a basic example that demonstrates the issue on my iPhone (running 18.1 beta, but the issue was present at least in 18.0, maybe before):
import AVFoundation
import SwiftUI

struct TestView: View {
    let synthesizer = AVSpeechSynthesizer()
    @State private var personalVoices: [AVSpeechSynthesisVoice] = []

    var body: some View {
        VStack(spacing: 100) {
            Text("Personal Voices Available: \(personalVoices.count)")
            Button {
                speakUtterance(string: "Hello, world!")
            } label: {
                Image(systemName: "hand.wave.fill")
                    .font(.system(size: 100))
            }
            Button("Fetch Personal Voices") {
                Task { await fetchPersonalVoices() }
            }
        }
    }

    func fetchPersonalVoices() async {
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization() { status in
            if status == .authorized {
                personalVoices = AVSpeechSynthesisVoice.speechVoices().filter { $0.voiceTraits.contains(.isPersonalVoice) }
            }
        }
    }

    func speakUtterance(string: String) {
        let utterance = AVSpeechUtterance(string: string)
        if let voice = personalVoices.first {
            utterance.voice = voice
        } else {
            utterance.voice = AVSpeechSynthesisVoice(language: Locale.preferredLanguages[0])
        }
        synthesizer.speak(utterance)
    }
}
If you tap the hand symbol first (before authorizing Personal Voice), you'll probably notice that the Personal Voices Available number never increases. If you authorize Personal Voice before tapping the hand symbol, it should speak using your Personal Voice as expected.
The example code is mostly taken directly from this WWDC23 video (Personal Voice info begins around the 10-minute mark).
Does anyone have any idea what could be causing this?
Note: Personal Voice can't be tested in the Simulator. To test, the code needs to run on a physical device that has Personal Voice set up.
I can't figure out if I've found a VoiceOver problem with Swift Charts or if I'm doing something incorrectly.
I have a loop within a loop showing 2 sets of data in the same chart.
If I touch a month, then VO correctly says there are two data series. But if I keep swiping down, only data from the first series is read.
ChatGPT said to try referencing the outer loop, and sure enough that worked, provided it's done in BOTH the label and the value.
It sounds really awkward though. For example, "High 89 degrees F High October".
Below the "bad" chart only says something such as "92 degrees F October" when swiping down. The "good" chart will read the high and low temperature data.
VStack {
    headerText("BAD")
    Chart {
        ForEach(processedMonthlyInput) { oneMonth in
            ForEach(oneMonth.temperatures, id: \.month) { element in
                LineMark(
                    x: .value("Month", element.month, unit: .month),
                    y: .value("Temperature", element.tempVal.converted(to: .fahrenheit).value)
                )
                .accessibilityLabel("\(element.month.formatted(.dateTime.month(.wide)))")
                .accessibilityValue(Text("\(element.tempVal.converted(to: tempUnit).formatted(.measurement(width: .abbreviated, numberFormatStyle: .number.precision(.fractionLength(0)))))"))
            }
            .symbol(by: .value("Type", oneMonth.theType))
            .foregroundStyle(by: .value("Type", oneMonth.theType))
            .interpolationMethod(.catmullRom)
        }
    }
    .frame(maxHeight: paddingAmount)
    .padding(.horizontal)

    headerText("GOOD")
    Chart {
        ForEach(processedMonthlyInput) { oneMonth in
            ForEach(oneMonth.temperatures, id: \.month) { element in
                LineMark(
                    x: .value("Month", element.month, unit: .month),
                    y: .value("Temperature", element.tempVal.converted(to: .fahrenheit).value)
                )
                .accessibilityLabel("\(oneMonth.theType) \(element.month.formatted(.dateTime.month(.wide)))")
                .accessibilityValue(Text("\(oneMonth.theType) \(element.tempVal.converted(to: tempUnit).formatted(.measurement(width: .abbreviated, numberFormatStyle: .number.precision(.fractionLength(0)))))"))
            }
            .symbol(by: .value("Type", oneMonth.theType))
            .foregroundStyle(by: .value("Type", oneMonth.theType))
            .interpolationMethod(.catmullRom)
        }
    }
    .frame(maxHeight: paddingAmount)
    .padding(.horizontal)
}