Accessibility


Make your apps function for a broad range of users using Accessibility APIs across all Apple platforms.

Posts under Accessibility tag

104 Posts


Using WebSocket for BCI Click Input in visionOS - FocusState vs. System-Level Limitations
Hi everyone,

My team and I are developing an accessibility-focused visionOS app (MindTap) as part of a university project. It aims to support individuals with locked-in syndrome by using brain-computer interface (BCI) signals to trigger interactions (e.g., tapping) within the Apple Vision Pro environment.

Problem 1: Simulating eye tracking in the simulator
We are testing onHover with "Send pointer to the device" under I/O > Input in the simulator. While it mostly works (a bit laggy), we found that onHover won't function on actual Vision Pro hardware. From what I understand, we should be using FocusState for proper gaze interaction, but testing that requires the physical device. Is there any workaround, or an official Apple-recommended way, to simulate focus-based gaze detection without a real Vision Pro?

Problem 2: WebSocket-triggered "click" doesn't work outside the app
We successfully use a WebSocket to send a custom signal (a "1" from the brain-signal device) to trigger an action inside our app. However, when the user opens a third-party app like Apple News, the WebSocket-triggered "click" no longer works. We suspect this is due to sandbox restrictions or a lack of system-level permissions. Is it possible in any way to:
- Trigger interaction events outside the app using custom input (like BCI via WebSocket)?
- Access system-wide click/tap simulation APIs from within visionOS apps?
- Integrate this with accessibility services (like Voice Control or AssistiveTouch)?

We'd appreciate any official guidance, or tips from others building similar accessibility apps with alternative input methods on visionOS. Thanks in advance for any insight you can provide!
Replies: 0 · Boosts: 0 · Views: 232 · Activity: Apr ’25
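[Editor's note] A minimal sketch of the FocusState pattern the poster mentions, under stated assumptions: BCIClickRouter is a hypothetical helper, and whether system focus is a usable proxy for gaze on visionOS is exactly the open question in the post.

import SwiftUI

struct GazeTargetButton: View {
    // Reflects whether this control currently holds system focus.
    @FocusState private var isFocused: Bool

    var body: some View {
        Button("Select") {
            handleSelect()  // normal tap path (pinch / pointer)
        }
        .focused($isFocused)
        .onChange(of: isFocused) { _, focused in
            // Hypothetical routing: remember the focused control so a
            // WebSocket-delivered "1" can invoke its action in-app.
            if focused { BCIClickRouter.shared.target = handleSelect }
        }
    }

    private func handleSelect() {
        print("selected")
    }
}

// Hypothetical helper that receives the BCI signal over WebSocket
// and fires whatever action is currently focused.
final class BCIClickRouter {
    static let shared = BCIClickRouter()
    var target: (() -> Void)?
    func receivedClickSignal() { target?() }
}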
On iOS 16, VoiceOver will not speak "¥1,230" in Japanese.
In iOS 16, VoiceOver no longer speaks numbers such as "¥1,230" in Japanese, which worked correctly in iOS 15 and earlier. This is a serious bug in accessibility support.

Steps to reproduce:
1. Create an iOS app project in Xcode (I used Xcode 14).
2. In the project settings, set the development localization to "Japanese" (so that VoiceOver speaks Japanese).
3. Write, for example, the following code.
4. Build and run the app on an actual device, not the simulator.

Devices running iOS 15 or earlier read everything out correctly with VoiceOver, while iOS 16 has the reading problems noted in the comments below.

import SwiftUI

@main
struct VoiceOverProblemApp: App {
  var body: some Scene {
    WindowGroup {
      // In the project settings, the development localization is set to Japanese.
      VStack {
        // ✅ spoken: "Hello, world!"
        Text("Hello, world!")

        // ✅ spoken: "残高 (ざんだか, zan-daka)"
        Text("残高")

        // ❌ speaks nothing (iOS 15 or earlier spoke "千二百三十円 (せんにひゃくさんじゅうえん, sen-nihyaku-san-ju-yen)")
        Text("¥1,230")

        // ❌ spoken incorrectly
        //   correct (iOS 15 or earlier): "残高は千二百三十円です (zan-daka wa sen-nihyaku-san-ju-yen desu)"
        //   incorrect (iOS 16):          "残高はです (zan-daka wa desu)"
        Text("残高は ¥1,230 です")
      }
    }
  }
}

The sample code above uses SwiftUI's Text, but the same problem occurs with UIKit's UILabel.
Replies: 1 · Boosts: 0 · Views: 1.3k · Activity: Apr ’25
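[Editor's note] One possible workaround, not from the original post and untested against the iOS 16 behavior: keep the visible "¥" price but hand VoiceOver an explicit label it is known to read, for example the amount followed by 円.

import SwiftUI

struct BalanceRow: View {
    var body: some View {
        // Sighted users still see "残高は ¥1,230 です"; VoiceOver reads
        // the explicit label instead of dropping the "¥" amount.
        Text("残高は ¥1,230 です")
            .accessibilityLabel("残高は 1230円 です")
    }
}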
Handling VoiceOver Focus When Screen Changes (Push, Present, and SplitViewController)
I have some questions about how VoiceOver handles focus when the screen updates.

1. When a new UIViewController is pushed onto a UINavigationController or presented modally, how does VoiceOver decide which element to focus? Is there a way to control or customize this behavior?
2. In a UISplitViewController, when an item is selected in the primary view controller, focus should shift to the relevant content in the secondary view controller. How can we ensure that VoiceOver moves focus to the right element in the secondary panel?
Replies: 0 · Boosts: 0 · Views: 186 · Activity: Apr ’25
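[Editor's note] For reference, the usual lever here is UIAccessibility.post: a .screenChanged notification after a push or present, or .layoutChanged when content updates in place (as in a split view's secondary pane), passing the element that should receive focus. A minimal sketch; headerLabel is a hypothetical stand-in for the element you want focused.

import UIKit

final class DetailViewController: UIViewController {
    let headerLabel = UILabel()

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)
        // Without an argument, VoiceOver picks its default element
        // (often the back button). Passing an element moves focus there.
        UIAccessibility.post(notification: .screenChanged,
                             argument: headerLabel)
    }

    func splitViewSelectionChanged() {
        // When content changes without a full screen transition,
        // .layoutChanged is the lighter-weight notification.
        UIAccessibility.post(notification: .layoutChanged,
                             argument: headerLabel)
    }
}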
visionOS - Gamepad steals focus
I am developing a visionOS app for controlling an underwater ROV. I have ornaments with telemetry and buttons around a central video feed view, and custom button mappings, such as "A" to lock the depth of the drone. However, when I look at buttons or certain ornaments, my custom gamepad logic is prevented from running: when a SwiftUI Button gains focus on visionOS, pressing the controller's A button triggers the system's default "click" on that Button rather than my custom buttonA handler. Essentially, the system's focus interception is stealing my A-press events. Is there a way to disable the built-in gamepad interaction and allow only my custom gamepad mappings?
Replies: 1 · Boosts: 0 · Views: 291 · Activity: Apr ’25
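[Editor's note] For context, a typical custom mapping of the kind the poster describes is wired up through GameController as sketched below (lockDepth is a hypothetical stand-in for the real handler). The open question in the post is why the focused SwiftUI Button consumes the press before this handler runs.

import GameController

final class GamepadInput {
    func start() {
        NotificationCenter.default.addObserver(
            forName: .GCControllerDidConnect,
            object: nil,
            queue: .main
        ) { [weak self] note in
            guard let pad = (note.object as? GCController)?.extendedGamepad else { return }
            // Custom mapping: A locks the ROV's depth.
            pad.buttonA.pressedChangedHandler = { _, _, pressed in
                if pressed { self?.lockDepth() }
            }
        }
    }

    private func lockDepth() {
        print("depth locked")  // stand-in for the real telemetry command
    }
}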