Is the accessibility feature for recording custom voice commands available on the Apple Vision Pro? Recording does not start on my device.
The Apple Vision Pro is on visionOS 26.1.
Regular single voice commands work on the Apple Vision Pro.
Recording commands worked on my other devices (iPad and iPhone).
I am testing the accessibility feature available in the Settings app called "Speak Screen". The help text in the Settings app states that swiping down with two fingers will cause the screen content to be spoken. However, I've been unable to get this feature to work. Every time I try the two-finger swipe down, it behaves the same as a single-finger swipe down; usually this manifests as making scroll views bounce.
I've tried toggling the feature on and off, turning off Reachability, and rebooting my phone, but I can't get the Speak Screen gesture to work. If I access the Speak Screen feature from the Speech Controller button, the screen's content is spoken as expected, so I know the feature is enabled. It's just the gesture that doesn't work.
Is there something else I need to do to get this gesture to work? I don't want to tell my users to turn this feature on if I can't verify that the gesture will work with my app.
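For what it's worth, a minimal sketch of checking from an app whether Speak Screen is enabled, so any in-app guidance about the gesture is only shown when it applies (the log wording is illustrative):

import UIKit

// Checks the Speak Screen setting at runtime; an app can also observe
// UIAccessibility.speakScreenStatusDidChangeNotification for changes.
func logSpeakScreenStatus() {
    if UIAccessibility.isSpeakScreenEnabled {
        print("Speak Screen is on; the two-finger swipe down should read the screen.")
    } else {
        print("Speak Screen is off; it can be enabled in Settings > Accessibility > Spoken Content.")
    }
}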
Currently I am not finding any API to read the status of a Message Filter Extension from Settings. Is support for this planned in a future release?
A Summary of the WWDC25 Group Lab - Accessibility
At WWDC25 we launched a new type of Lab event for the developer community - Group Labs. A Group Lab is a panel Q&A designed for a large audience of developers. Group Labs are a unique opportunity for the community to submit questions directly to a panel of Apple engineers and designers. Here are the highlights from the WWDC25 Group Lab for Accessibility.
Accessibility Nutrition Labels are a really big step forward for the experience people have on the App Store to find apps that will work for them. How should developers get started with Accessibility Nutrition Labels?
A good starting point is to review the Accessibility Nutrition Label evaluation criteria on App Store Connect Help. It's a concise document, roughly 10 pages, and you can approach it section by section after the introduction. Even with prior experience using accessibility features like VoiceOver, the criteria offer valuable insights that might not be immediately apparent. For those newer to accessibility, a good entry point might be one of the visual feature labels, such as Dark Interface, which is a popular and frequently used feature.
Which accessibility features can I indicate support for in Accessibility Nutrition Labels?
The accessibility features covered include support for assistive technologies like VoiceOver and Voice Control, media enhancements such as captions and audio descriptions, and display accommodations. These display accommodations cover options like larger text, dark interface, differentiating without color alone, sufficient contrast, and reduced motion.
With the new Accessibility Nutrition Labels, will app store reviewers validate what we select?
The Accessibility Nutrition Label can be edited at any time without requiring a new app submission. However, if an app inaccurately claims feature support, App Review may contact the developer and request an update to the label or the app.
Are there any updates to tools for analyzing the accessibility of our apps?
Although there aren't new updates this year, continued support for Accessibility Audits is available through Xcode's built-in Accessibility Inspector. XCTest also supports accessibility audits, enabling developers to test app accessibility with every build. These audits analyze aspects like contrast, dynamic type, text clipping, element labels, and more within each view. For a deeper dive, the "Perform accessibility audits for your app" session from WWDC 2023 is a valuable resource.
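As a minimal sketch of the XCTest support mentioned above, assuming a UI testing target and Xcode 15 or later:

import XCTest

// Runs the built-in accessibility audit against whatever is on screen,
// flagging issues such as contrast, Dynamic Type, clipped text, and
// missing element labels.
final class AccessibilityAuditTests: XCTestCase {
    func testAccessibilityAudit() throws {
        let app = XCUIApplication()
        app.launch()
        try app.performAccessibilityAudit()
    }
}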
What are accessibility features you wish more people integrated?
Accessibility features that could be adopted more widely include user input labels optimized for Voice Control, keyboard navigation and shortcuts, and Dynamic Type support.
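A minimal sketch of two of these, assuming a UIKit button and label (the input labels shown are illustrative):

import UIKit

// Short alternatives people can say with Voice Control, plus Dynamic Type
// support on a label.
func configureAccessibilityExtras(button: UIButton, label: UILabel) {
    button.accessibilityUserInputLabels = ["Buy", "Purchase"]
    label.font = UIFont.preferredFont(forTextStyle: .body)
    label.adjustsFontForContentSizeCategory = true
}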
What were some of the biggest accessibility challenges your team encountered while developing Liquid Glass?
Apple is known for its innovation and strives to deliver a high-quality experience for everyone. Accessibility is considered a core component of visual design from the outset. For example, the Liquid Glass design inherently supports reduced transparency and increased contrast. As design continues to evolve, user feedback submitted through Feedback Assistant is invaluable.
How does Liquid Glass respond to contrast? Especially for text and low contrast environments.
Content legibility is a crucial aspect of the Liquid Glass design. It inherently supports accessibility features like reduced transparency and increased contrast. Your feedback during the beta period and beyond is essential to ensuring Liquid Glass provides a great experience within your apps.
What are some Apple apps that stand out for their accessibility?
Apps like Keynote in the iWork suite offer groundbreaking VoiceOver features to enhance creative productivity for all users. Assistive Access makes core apps such as Messages, Photos, Camera, Phone, and Music more accessible. Podcasts provides transcripts to broaden its reach, and frameworks like SwiftUI ensure that apps built with the latest UI frameworks have excellent built-in accessibility.
After updating to the iOS 26 Beta version, the screenshot option within the AssistiveTouch menu has stopped working. Tapping on the "Screenshot" icon does not perform any action.
I am trying to build an app that will, on certain occasions, use the latest iPhone's gyroscope and accelerometer.
Am I able to use them if I can provide a valid justification for their use?
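For context, a minimal sketch of reading both sensors through Core Motion, which is the usual route for this (the update interval is an illustrative choice):

import CoreMotion

// Device motion combines the sensors: rotationRate comes from the
// gyroscope, userAcceleration from the accelerometer.
let motionManager = CMMotionManager()

func startMotionUpdates() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    motionManager.startDeviceMotionUpdates(to: .main) { motion, _ in
        guard let motion = motion else { return }
        print(motion.rotationRate, motion.userAcceleration)
    }
}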
Description
When calling AppStore.showManageSubscriptions(in:), the system modal for managing subscriptions appears visually. However, it is not automatically focused by VoiceOver, and in some cases, VoiceOver still allows interaction with elements in the underlying view controller, such as buttons and labels. This creates confusion and violates accessibility expectations.
Steps to Reproduce
1. In a UIKit app, present the system subscription sheet via AppStore.showManageSubscriptions(in:).
2. Ensure VoiceOver is enabled on the device.
3. Observe the focus behavior when the modal appears.
4. Try swiping right/left — VoiceOver continues to announce items in the presenting view controller.
Expected Result
The modal should automatically take VoiceOver focus, and all elements behind it should be non-accessible until dismissed.
Actual Result
VoiceOver continues to focus and interact with elements behind the presented modal.
Notes
• Tested on iOS 18.5
• Reproducible on device
• Using Swift/UIKit (not SwiftUI)
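A possible mitigation, while the underlying issue stands, is to hide the presenting controller's elements from assistive technologies for the duration of the sheet. A minimal sketch, assuming the call does not return until the sheet is dismissed (if it returns earlier on a given OS version, the un-hiding would need to be tied to dismissal differently):

import StoreKit
import UIKit

// Hides the presenting view from VoiceOver while the system sheet is up,
// then restores it and asks VoiceOver to re-evaluate focus.
@MainActor
func presentManageSubscriptions(from presenter: UIViewController) async {
    guard let scene = presenter.view.window?.windowScene else { return }
    presenter.view.accessibilityElementsHidden = true
    defer {
        presenter.view.accessibilityElementsHidden = false
        UIAccessibility.post(notification: .screenChanged, argument: nil)
    }
    do {
        try await AppStore.showManageSubscriptions(in: scene)
    } catch {
        print("showManageSubscriptions failed: \(error)")
    }
}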
I have a UIImageView as the background of a custom UIView subclass. The image itself does not contain any text. On top of this image view, I have added two UILabels.
To improve accessibility, I converted the entire view into a single accessibility element and set a proper accessibilityLabel. Additionally, I disabled accessibility for the UIImageView and the labels by setting isAccessibilityElement = false.
However, when VoiceOver Recognition's Text Recognition feature is enabled, VoiceOver still detects the text inside the UILabels and announces it after reading my custom accessibility properties. This text should not be announced.
It seems that VoiceOver treats the UILabel content as part of the UIImageView. Additionally, when using the Explore Image rotor action, the entire subview is recognized as a single image.
Is this the expected behavior? If so, is there a way to disable VoiceOver’s text recognition for this view while keeping custom accessibility intact?
import UIKit

class BackgroundLabelView: UIView {
    private let backgroundImageView = UIImageView()
    private let backgroundImageView2 = UIImageView()
    private let titleLabel = UILabel()
    private let subtitleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        setupView()
        configureAccessibility()
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        setupView()
        configureAccessibility()
    }

    private func configureAccessibility() {
        // Hide the subviews from assistive technologies and expose the
        // container as a single button element.
        backgroundImageView.isAccessibilityElement = false
        backgroundImageView2.isAccessibilityElement = false
        titleLabel.isAccessibilityElement = false
        subtitleLabel.isAccessibilityElement = false
        isAccessibilityElement = true
        accessibilityTraits = .button
    }

    func configure(backgroundImage: UIImage?, title: String, subtitle: String) {
        backgroundImageView.image = backgroundImage
        titleLabel.text = title
        subtitleLabel.text = subtitle
        accessibilityLabel = "Holiday Offer ," + title + "," + subtitle
    }

    private func setupView() {
        backgroundImageView2.contentMode = .scaleAspectFill
        backgroundImageView2.clipsToBounds = true
        backgroundImageView2.translatesAutoresizingMaskIntoConstraints = false
        backgroundImageView2.image = UIImage(resource: .bannerfestival)
        addSubview(backgroundImageView2)

        backgroundImageView.contentMode = .scaleAspectFit
        backgroundImageView.clipsToBounds = true
        backgroundImageView.translatesAutoresizingMaskIntoConstraints = false
        addSubview(backgroundImageView)

        titleLabel.font = UIFont.systemFont(ofSize: 18, weight: .bold)
        titleLabel.textColor = .white
        titleLabel.translatesAutoresizingMaskIntoConstraints = false
        titleLabel.numberOfLines = 0
        addSubview(titleLabel)

        subtitleLabel.font = UIFont.systemFont(ofSize: 14, weight: .regular)
        subtitleLabel.textColor = .white.withAlphaComponent(0.8)
        subtitleLabel.translatesAutoresizingMaskIntoConstraints = false
        subtitleLabel.numberOfLines = 0
        addSubview(subtitleLabel)

        NSLayoutConstraint.activate([
            backgroundImageView2.leadingAnchor.constraint(equalTo: leadingAnchor),
            backgroundImageView2.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView2.heightAnchor.constraint(equalToConstant: 200),

            backgroundImageView.centerYAnchor.constraint(equalTo: centerYAnchor),
            backgroundImageView.topAnchor.constraint(equalTo: topAnchor),
            backgroundImageView.leadingAnchor.constraint(greaterThanOrEqualTo: leadingAnchor),
            backgroundImageView.trailingAnchor.constraint(equalTo: trailingAnchor),
            backgroundImageView.bottomAnchor.constraint(equalTo: bottomAnchor),

            titleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            titleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            titleLabel.bottomAnchor.constraint(equalTo: centerYAnchor, constant: -4),

            subtitleLabel.leadingAnchor.constraint(equalTo: leadingAnchor, constant: 16),
            subtitleLabel.trailingAnchor.constraint(lessThanOrEqualTo: centerXAnchor),
            subtitleLabel.topAnchor.constraint(equalTo: centerYAnchor, constant: 4)
        ])
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        backgroundImageView.layer.cornerRadius = layer.cornerRadius
    }
}
Hi everyone,
I’m developing a React Native iOS app that includes a custom keyboard extension for sending stickers across apps. The project builds successfully, and the main app installs fine on my test device.
However, I’m not seeing the keyboard extension appear under Settings → General → Keyboard → Keyboards → Add New Keyboard, which means I can’t activate it or grant access. At this point, I’m not even sure if the extension is actually being installed on the device along with the main app.
Here’s what I’ve done so far. I created a Keyboard Extension target in Xcode, set the correct bundle identifiers and provisioning profiles, and enabled “Requests Open Access” in the extension’s Info.plist. I built and installed the app on a physical device rather than the simulator to ensure proper testing.
My main questions are: how can I confirm that the extension is being installed on the device, and if it isn’t, what might prevent it from installing even though the build completes successfully?
Any insights, troubleshooting steps, or guidance would be greatly appreciated.
Hi everybody,
I'm trying to build a QR code scanner and generator app for iOS.
Whenever I try to implement the camera, the app crashes with this message:
This app has crashed because it attempted to access privacy-sensitive data without a usage description. The app's Info.plist must contain an NSCameraUsageDescription key with a string value explaining to the user how the app uses this data.
I tried reducing the app to a minimum with nothing but the camera, with the same result.
Any ideas?
Thank you and
best regards,
Horst Schippers
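The crash message itself names the missing piece: the app target's Info.plist needs an NSCameraUsageDescription entry whose string explains why the camera is used. With that key in place, a minimal sketch of requesting camera access looks like this (the function name is illustrative):

import AVFoundation

// Assumes NSCameraUsageDescription is present in the target's Info.plist;
// without it, any camera access terminates the app as described above.
func requestCameraAccess(completion: @escaping (Bool) -> Void) {
    switch AVCaptureDevice.authorizationStatus(for: .video) {
    case .authorized:
        completion(true)
    case .notDetermined:
        // Shows the system prompt, which displays the usage description.
        AVCaptureDevice.requestAccess(for: .video, completionHandler: completion)
    default:
        completion(false)
    }
}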
Amongst the two languages my app would have, let's say the split is 10% and 90%. I am launching an app with an unlisted primary language. I consider whatever is inside the app to be the primary language, and that won't be English. The secondary language
I have some doubts about how VoiceOver handles focus when the screen updates.
When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus on? Is there a way to control or customize this behavior?
In a UISplitViewController, when an item is selected in the primary view controller, the focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver correctly moves focus to the right element in the secondary panel?
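For reference, a minimal sketch of the usual way to influence this, assuming a pushed or secondary detail controller whose heading should receive focus (the class and property names are illustrative):

import UIKit

// Posting .screenChanged tells VoiceOver the screen content was replaced;
// passing an element asks it to move focus there instead of letting it
// pick the default (typically the first element in reading order).
final class DetailViewController: UIViewController {
    let headingLabel = UILabel()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        UIAccessibility.post(notification: .screenChanged, argument: headingLabel)
    }
}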
I'm using a 9th gen iPad and updated to iPadOS 26 a few weeks ago, but I can't use the 3D background and can't find any way to enable it. Is it a system issue or is it my iPad's issue?
Hi everyone,
My team and I are developing an accessibility-focused visionOS app (MindTap) as part of a university project, aiming to support individuals with Locked-In Syndrome using Brain-Computer Interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.
Problem 1: Simulating Eye Tracking in Simulator
We are testing onHover with Send pointer to the device under I/O > Input in the simulator, and while it mostly works (a bit laggy), we found that onHover won't function on the actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing this requires the physical device. Is there any workaround or official Apple-recommended way to simulate Focus-based gaze detection without a real Vision Pro?
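A minimal sketch of the FocusState direction mentioned above, assuming a SwiftUI view in the visionOS app (the view and log text are illustrative, and real gaze-driven focus can only be verified on hardware):

import SwiftUI

// A focusable target whose focus state the app can observe; the in-app
// "click" from the BCI signal would then activate the focused target.
struct GazeTargetView: View {
    @FocusState private var isFocused: Bool

    var body: some View {
        Button("Tap target") {
            print("Target activated")
        }
        .focused($isFocused)
        .onChange(of: isFocused) { _, focused in
            print("Focused: \(focused)")
        }
    }
}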
Problem 2: WebSocket-triggered "Click" doesn't work outside the app
We successfully use WebSocket to send a custom signal (a "1" from the brain signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works.
We suspect this is due to sandbox restrictions or lack of system-level permissions.
Is it possible in any way to:
• Trigger interaction events outside the app using custom input (like BCI via WebSocket)?
• Access system-wide click/tap simulation APIs from within visionOS apps?
• Integrate this with accessibility services (like Voice Control or AssistiveTouch)?
We'd appreciate any official guidance or tips from others building similar accessibility apps with alternative input methods in visionOS.
Thanks in advance for any insight you can provide!
Is there any way to get history?
I'm developing an ARKit application where I aim to attach procedurally generated audio to detected planes in the environment. While using a static audio file with SCNAudioSource and SCNAudioPlayer works as expected, integrating procedural audio via AVAudioSourceNode does not produce any sound, nor does it generate any error messages (see the related Stack Overflow post).
Working Implementation with Static Audio File:
let audioPlayer = SCNAudioPlayer(source: audioSource)
node.addAudioPlayer(audioPlayer)
Attempted Implementation with Procedural Audio:
let audioNode = AVAudioSourceNode { _, _, frameCount, audioBufferList -> OSStatus in
    // Audio generation code
    return noErr
}
let audioPlayer = SCNAudioPlayer(avAudioNode: audioNode)
node.addAudioPlayer(audioPlayer)
In this setup, the AVAudioSourceNode successfully generates audio when connected directly to an AVAudioEngine. However, when used with SCNAudioPlayer and attached to an SCNNode, it fails to produce sound. What doesn’t work is creating some procedural audio with an AVAudioNode, as documented here:
Apple docs
Additionally, I explored the WWDC18 AR game project, SwiftShot, which utilizes SCNAudioPlayer(avAudioNode:). After updating it for the latest Xcode, the graphics function correctly, but the audio does not play. I also noted that the Apple documentation mentions an audioPlayerWithAVAudioNode: method, stating:
Using this initializer is typically not necessary. Instead, call the audioPlayerWithAVAudioNode: method, which returns a cached audio player object if one for the specified AVAudioNode object has already been created and is available for use.
However, this method does not appear to be available in Swift. Any insights or guidance on this matter would be greatly appreciated.
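For comparison, a minimal sketch of the direct-connection path that does produce sound, here using the SceneKit renderer's own engine; this bypasses SCNAudioPlayer and therefore loses positional audio, which is exactly the part in question (the silence-filling render block is a placeholder):

import SceneKit
import AVFoundation

// Attaches a procedural source directly to the renderer's AVAudioEngine.
func attachProceduralAudio(to renderer: SCNSceneRenderer) {
    let engine = renderer.audioEngine
    let sourceNode = AVAudioSourceNode { _, _, _, audioBufferList -> OSStatus in
        // Placeholder: fill the buffers with silence instead of real synthesis.
        for buffer in UnsafeMutableAudioBufferListPointer(audioBufferList) {
            memset(buffer.mData, 0, Int(buffer.mDataByteSize))
        }
        return noErr
    }
    engine.attach(sourceNode)
    engine.connect(sourceNode, to: engine.mainMixerNode, format: nil)
}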
I am trying to implement VoiceOver in my game, and have encountered an issue where a static text element takes focus away from all other interactions. I have a tutorial scene with one short "static text" accessibility node, but the rest of the gameplay has none. This static text field occupies a small part of the screen and works fine, but I am not able to click on anything else, including any of my gameplay elements; wherever on the screen I click, it just re-highlights that static text.
Is there a requirement for all elements to use accessibility nodes, or can I have a mixed setup where some do not have them?
How can I get around it?
Question number 2: What decides which accessibility node gets selected when entering a new UI screen? I have multiple buttons and am observing rather random behaviour; every time, a different button is highlighted first.
Question number 3: The plugin documentation mentions runtime support in Play mode. Are there any specific steps needed for this to work, as I can't seem to get it going? I have VoiceOver enabled on macOS, and Unity is on the macOS (also tried iOS) platform, but it doesn't seem to do anything. Note that my button and label accessibility nodes work correctly in the iOS build.
Thanks in advance for any help
My app uses Core Data (on iOS 13.0 and later) combined with iCloud to store data; this setup automatically manages synchronization between Core Data and iCloud. Some users have reported that after going abroad, their original data disappeared, and when they returned to China, the data was displayed normally again. I'm located in Mainland China, and I understand that iCloud data in China is stored in Guizhou-Cloud Big Data (the data center in Guizhou). Could this problem be caused by display abnormalities resulting from switching between the iCloud data storage centers accessed in different regions?
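For context, a minimal sketch of the kind of Core Data + iCloud setup described, assuming NSPersistentCloudKitContainer and a model named "Model" (both are assumptions, not details from the post):

import CoreData

// NSPersistentCloudKitContainer keeps the local store and the user's
// iCloud (CloudKit) data in sync automatically.
let container: NSPersistentCloudKitContainer = {
    let container = NSPersistentCloudKitContainer(name: "Model")
    container.loadPersistentStores { _, error in
        if let error = error {
            fatalError("Failed to load persistent store: \(error)")
        }
    }
    return container
}()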
Say I have a UI element that moves on the screen. Is it possible to update its accessibility frame as it moves while VoiceOver is focused on it? From my tests, VoiceOver ignores UIAccessibilityLayoutChangedNotification if it's sent repeatedly in a short period of time on iOS, while sending NSAccessibilityLayoutChangedNotification on macOS triggers VoiceOver to reannounce the focused element repeatedly.
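For reference, a minimal sketch of the kind of update being described, assuming a UIKit view that follows an animation (the class name, and calling this from the animation callback, are illustrative):

import UIKit

// Keeps the accessibility frame in sync with the view's on-screen position;
// accessibilityFrame is expressed in screen coordinates.
final class MovingBadgeView: UIView {
    func syncAccessibilityFrame() {
        accessibilityFrame = UIAccessibility.convertToScreenCoordinates(bounds, in: self)
        // Repeated .layoutChanged posts may be coalesced or ignored,
        // which matches the behavior described above.
        UIAccessibility.post(notification: .layoutChanged, argument: self)
    }
}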
Hello, my submission is based on haptics. Without it, the app doesn't make sense, and only a real iPhone can provide it. But it says that Xcode playgrounds will be tested in the Simulator.
Is it indeed like this? What can I do?
Thank you in advance!
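One common mitigation, sketched below, is to check for haptics support at runtime and fall back to a visual or audio cue when it is missing, for example in the Simulator (the function name is illustrative):

import CoreHaptics

// Returns a haptic engine only when the current hardware supports haptics,
// so the experience can degrade gracefully elsewhere.
func makeHapticEngineIfSupported() -> CHHapticEngine? {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else {
        return nil
    }
    return try? CHHapticEngine()
}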