Explore best practices for creating inclusive apps that cater to users with diverse abilities

AVSpeechSynthesizer getting long in the tooth
I'm wondering if we can expect an enhancement to the text-to-speech functionality anytime soon. AVSpeechSynthesizer is a little outdated compared to many modern speech synthesizers, which sound more natural and can distinguish between acronyms and words. It would be an awesome upgrade, and it seems like something Apple could do very well. Thanks.
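
For reference, the current API in its simplest form is just a few lines; this is a minimal sketch, with the sample sentence and voice choice purely illustrative. Whether an acronym such as "NASA" ends up read as a word or spelled out may depend on the voice.

import AVFoundation

// Minimal sketch: speak a string containing an acronym with the built-in synthesizer.
// Keep a strong reference to the synthesizer for as long as speech is in progress.
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "NASA released an update to the HTTP documentation.")
utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
synthesizer.speak(utterance)
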
0 replies · 0 boosts · 456 views · Jun ’24
Eye Tracking Availability and App Requirements in iPadOS 18
Very excited about the new eye tracking in iPadOS and iOS 18. Some general eye tracking questions:

Does the initial iPadOS 18 beta include eye tracking? If not, in which beta will it be included?
Do developers need to do anything in their apps for users to control them using eye tracking?
Will all standard UIKit and SwiftUI views and controls work with eye tracking without code changes?
Will custom subclasses of UIControl work with eye tracking without code changes?

Looking forward to testing eye tracking.
4 replies · 4 boosts · 1.3k views · Jun ’24
Triple Click Accessibility Shortcut triggers the Shut Down Screen
The triple-click accessibility shortcut is assigned to trigger the AssistiveTouch icon. This works as intended only very rarely and instead brings up the shut down screen. I have tried assigning different shortcuts, but the shut down screen is always triggered. After restarting the phone, it works as intended a few more times, but then reverts to the buggy state. I created a feedback report (FB13459492) for this in December 2023 but haven't received any reply. I updated it recently with a screen recording.
1 reply · 0 boosts · 638 views · Jun ’24
iOS App with custom HID over USB
Hello, I have to develop an application for a customer running on iOS (iPhone). The app needs to read data from and send data to a custom HID device connected via USB-C. From reading the docs, I thought HIDDriverKit (https://developer.apple.com/documentation/hiddriverkit) would be perfect for that, but it is only available for macOS. Is there anything similar for iOS? Or can I go with DriverKit (https://developer.apple.com/documentation/driverkit)? Are there any code examples? Has anybody done something like this before? Is it possible to access a custom HID device on iOS at all? Any help appreciated! Immanuel
1 reply · 0 boosts · 771 views · Jun ’24
VoiceOver Headings Accessibility Rotor with SwiftUI
Hi, I'd like to mark views that are inside a LazyVStack as headers for VoiceOver (make them appear in the headings rotor). In a VStack, you just have to add .accessibilityAddTraits(.isHeader) to your header view. However, if your view is in a LazyVStack, that won't work when the view is not visible. As its name implies, LazyVStack is lazy, so that makes sense. There is very little information online about system rotors, but it seems you are supposed to use .accessibilityRotor() with the headings system rotor (.accessibilityRotor(.headings)) outside of the LazyVStack. Something like the following:

.accessibilityRotor(.headings) {
    ForEach(entries) { entry in
        // entry.id must be the same as the id of the SwiftUI view it is about
        AccessibilityRotorEntry(entry.name, id: entry.id)
    }
}

It kind of works, but only kind of. When using .accessibilityAddTraits(.isHeader) in a VStack, the view is in the headings rotor as soon as you change screens. However, when using .accessibilityRotor(.headings), the headers (headings?) are not in the headings rotor at the time the screen appears. You have to move the accessibility focus inside the screen before your headers show up. I'm a beginner in regards to VoiceOver, so I don't know how a blind user used to VoiceOver would perceive this, but it feels to me that having to move the focus before the headers are in the headings rotor means some users would miss them. So my question is: is there a way to have headers inside a LazyVStack (that are not necessarily visible at first) appear in the headings rotor as soon as the screen appears? (be it using .accessibilityRotor(.headings) or anything else) The "SwiftUI Accessibility: Beyond the basics" talk from WWDC 2021 mentions custom rotors, not system rotors, but that should be close enough. It mentions that for accessibilityRotor to work properly it has to be applied on an accessibility container, so just in case I tried to move my .accessibilityRotor(.headings) to multiple places, with and without the accessibilityElement(children: .contain) modifier, but that did not seem to change the behavior (and I could not understand why accessibilityRotor could not automatically make the view it is applied on an accessibility container if needed). Also, a related question: when using .accessibilityRotor(.headings) on a screen, is it fine to mix uses of .accessibilityAddTraits(.isHeader) and .accessibilityRotor(.headings)? In a screen with multiple types of content (something like ScrollView { VStack { MyHeader(); LazyVStack { /* some content */ }; LazyVStack { /* something else */ } } }), having to declare all headers in one place would make code reusability harder. Thanks
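
For contrast with the rotor snippet above, the trait-based approach that works in an eagerly built container is a single modifier; the view name and title here are purely illustrative.

import SwiftUI

struct SectionHeader: View {
    var body: some View {
        // In a non-lazy container, this is all VoiceOver needs to list the
        // view in the headings rotor once it exists in the view hierarchy.
        Text("Section title")
            .accessibilityAddTraits(.isHeader)
    }
}
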
1 reply · 1 boost · 567 views · Jun ’24
AX Elements in some apps only exposed when using VoiceOver or Accessibility Inspector
I build apps that act as screen readers to 1) add Vim motions everywhere on macOS, 2) click (and more) AX elements through the keyboard, and 3) scroll through the keyboard. It works extremely well with native apps. With non-native apps, I need to blast them with some extra AX attributes (AXManualAccessibility, AXEnhancedUserInterface) to get them to expose their AX elements. But there are a couple of apps which I can't get to expose their AX elements programmatically. Now the weird thing is, as soon as I start VoiceOver, those apps open up. Or for some, if I use the Accessibility Inspector to go through their AX elements, then they start opening up. So I'm wondering: is there a public, known way that I'm missing to open up those apps, or is Apple using private APIs? Is there any way I could make my apps behave like VoiceOver or the Accessibility Inspector to force those recalcitrant apps to open up? Thanks in advance.
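
The attribute-setting approach described above usually looks something like this minimal macOS sketch. It assumes the Accessibility permission has already been granted and that the target app's process ID is known; both attribute names are conventions honored by some non-native frameworks (e.g. Electron/Chromium) rather than formal API.

import ApplicationServices

func enableEnhancedAccessibility(forPID pid: pid_t) {
    let app = AXUIElementCreateApplication(pid)
    // Hint non-native apps to build and expose their accessibility tree.
    _ = AXUIElementSetAttributeValue(app, "AXManualAccessibility" as CFString, kCFBooleanTrue)
    _ = AXUIElementSetAttributeValue(app, "AXEnhancedUserInterface" as CFString, kCFBooleanTrue)
}
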
1 reply · 1 boost · 481 views · Jun ’24
VoiceOver cursor in TabView jumps to previous tab
Hello, when I run the following view on my iPhone with VoiceOver enabled, I encounter an issue with the VoiceOver cursor when I perform the following steps:

1. Move the VoiceOver cursor to the TabView dots for paging.
2. Swipe up with one finger to go to the next tab.
--> The TabView moves to the next tab.
--> The VoiceOver cursor jumps up to the tab. But instead of the actual tab, the previous tab is shown in the TabView again. You can also see that the border of the VoiceOver cursor extends into the previous tab.

I suspect it has to do with the fact that despite the clipped modifier, the size of the image remains the same and extends into the previous tab.

struct ContentView: View {
    var body: some View {
        VStack {
            TabView {
                ForEach(1..<6) { index in
                    Button(
                        action: { },
                        label: {
                            ZStack {
                                GeometryReader { geo in
                                    Image(systemName: "\(index).circle")
                                        .resizable()
                                        .scaledToFill()
                                        .frame(width: geo.size.width)
                                        .clipped()
                                }
                            }
                        }
                    )
                    .buttonStyle(PlainButtonStyle())
                    .padding([.bottom], 50)
                }
            }
            .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
            .indexViewStyle(PageIndexViewStyle(backgroundDisplayMode: .always))
        }
        .padding()
    }
}

How can I fix this?

Best regards
Pawel
0 replies · 0 boosts · 341 views · Jun ’24
Disabling New Hand Gesture Features in Vision Pro App on visionOS 2
Hi everyone, I'm developing a Vision Pro app using the latest visionOS 2, and I've encountered some issues with the new hand gestures introduced in this update. My app is designed to display a UI element when a user's palm is detected. However, the new hand gestures for navigating key functions like Home View, Control Center, and adjusting the volume are interfering with my app's functionality.

What I'm trying to achieve:
- Detect when a user's palm is open and display a UI element.
- Ensure that my app's custom hand gestures are not disturbed by the new default gestures in visionOS 2.

Problem: The new hand gestures in visionOS 2 (such as those for Home View, Control Center, and volume adjustment) are activated while my app is open, disrupting my app's functionality. I want to disable these system-level gestures when my app is running.
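
As far as I know, visionOS 2 offers no public API for suppressing the system hand gestures, so the palm-detection side of this is usually built on ARKit hand tracking. A minimal sketch follows, assuming an ImmersiveSpace is open and the hand-tracking usage description is set; isPalmOpen(_:) is a hypothetical placeholder heuristic, not a real API.

import ARKit
import SwiftUI

@MainActor
final class PalmDetector: ObservableObject {
    @Published var palmDetected = false

    private let session = ARKitSession()
    private let handTracking = HandTrackingProvider()

    func start() async {
        do {
            try await session.run([handTracking])
            // Anchor updates arrive continuously while hand tracking is authorized.
            for await update in handTracking.anchorUpdates {
                guard update.anchor.isTracked else { continue }
                palmDetected = isPalmOpen(update.anchor)
            }
        } catch {
            print("Hand tracking unavailable: \(error)")
        }
    }

    // Hypothetical heuristic: a real check would inspect handSkeleton joint transforms.
    private func isPalmOpen(_ anchor: HandAnchor) -> Bool {
        anchor.handSkeleton != nil
    }
}
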
3 replies · 2 boosts · 1.3k views · Jun ’24
Eye Tracking as a General API Feature for iPadOS 18+?
I'd like to use the eye tracking feature in the latest iPadOS 18 update as more than an accessibility feature, i.e. as another input modality that can be detected by event + enum checks, similar to how we can detect and distinguish between touch and Apple Pencil inputs. This could make it a lot easier to control and interact with iPad-based AR experiences that involve walking around, regardless of whether eye tracking is enabled for accessibility. When walking, it's challenging to hold the device and interact with the screen with touch or pencil at all. Eye tracking + speech as input modalities could assist here. It would also help us create non-immersive AR experiences that parallel visionOS experiences that use eye tracking.

I propose an API option for enabling eye tracking (and an optional calibration dialog within the app), as well as a specific UIControl class that simply detects when the eye looks at the control, using the standard (begin/changed/end) events. My specific use case is that I'd like to treat eye-tracking-enabled UI elements or game objects differently depending on whether something is looked at with the eyes. For example, to select game objects while using speech recognition, suppose we have 4 buttons with the same name in the 4 corners of the screen. Call them "corner" buttons. If I have my proposed invisible UI element for gaze detection, I can create 4 large rectangular regions on the screen. Then if the user says "select the corner", the system could parse this command and disambiguate between the 4 corners by checking which of the rectangular regions I'm currently looking at. (Note: the idea would be to make the gaze regions rather large to compensate for error.) The above is just a simple example, but the advantage over other methods like dwell is that it could be a lot faster.

Another simple example: using the same rectangular regions, instead of speech input, I could hold a button placed in just one spot on the screen and look around the screen with my gaze to produce a laser beam for some kind of game, or draw curves (which I might smooth out to reduce inaccuracy). It could also serve someone who does not have their hands available. This would require the ability to get the coordinates of the eye gaze, but otherwise the other approach of just opting to trigger UIControl elements might work for coarse selection.

Would other developers find this useful as well? I'd like to propose this feature in Feedback Assistant, but I'm also opening up a little discussion here in case someone sees this. In short, I propose (sketched below):
- a formal eye-tracking API for iPadOS 18+ that allows turning tracking on/off within the app, with the necessary user permissions
- begin/changed/ended events similar to the existing events in UIKit, including screen coordinates, with a way to identify that an event came from eye tracking
- alternatively, at minimum, an invisible UIControl subclass that can detect when the eyes enter/leave its region
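
To make the proposal concrete, the control-level shape might look something like the sketch below. Everything here is hypothetical: no such gaze events exist in UIKit today, so the sample only defines the surface that a future system eye-tracking stream could feed.

import UIKit

// Hypothetical gaze-region control: the began/changed/ended callbacks mirror the
// touch-style event phases proposed above. Nothing in UIKit currently drives them.
final class GazeRegionControl: UIControl {
    var onGazeBegan: ((CGPoint) -> Void)?
    var onGazeChanged: ((CGPoint) -> Void)?
    var onGazeEnded: ((CGPoint) -> Void)?

    // A future system API (or a test harness) would feed gaze samples here.
    func handleGazeSample(at point: CGPoint, phase: UITouch.Phase) {
        switch phase {
        case .began: onGazeBegan?(point)
        case .moved: onGazeChanged?(point)
        case .ended, .cancelled: onGazeEnded?(point)
        default: break
        }
    }
}
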
0 replies · 6 boosts · 1.2k views · Jun ’24
Seeking Best Practices for Enhancing Accessibility in iOS Applications
Hello everyone, I'm currently working on an iOS application and I want to ensure it is as accessible as possible for all users, including those with disabilities. While I have a basic understanding of iOS accessibility features, I'm looking to deepen my knowledge and apply best practices comprehensively. Specifically, I would like to know:

VoiceOver Optimization: What are the best practices for ensuring all UI elements are properly labeled and navigable with VoiceOver?
Color Contrast and Visual Design: How can I effectively check and adjust color contrast ratios to accommodate users with visual impairments?
Dynamic Type and Font Sizes: What are the recommended approaches for supporting Dynamic Type, and how can I ensure a consistent and readable layout across different text sizes?
Accessibility in Custom UI Elements: How can I make custom controls (e.g., custom buttons, sliders) accessible, especially when they do not follow standard UIKit components?
Testing Tools and Techniques: Which tools and techniques are most effective for QA testing of accessibility on iOS?

Thank you in advance for your insights and advice.

Best regards,
Ale
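
A few of these points can be illustrated with a short UIKit sketch; the control, its labels, and the rating logic are hypothetical examples rather than a prescription.

import UIKit

// Custom control exposed to VoiceOver as a single, labeled, adjustable element.
final class RatingControl: UIControl {
    private var rating = 3 { didSet { accessibilityValue = "\(rating) of 5" } }

    override init(frame: CGRect) {
        super.init(frame: frame)
        isAccessibilityElement = true
        accessibilityLabel = "Rating"
        accessibilityTraits = .adjustable
        accessibilityValue = "\(rating) of 5"
    }
    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    // VoiceOver users swipe up/down on an .adjustable element to change its value.
    override func accessibilityIncrement() { rating = min(rating + 1, 5) }
    override func accessibilityDecrement() { rating = max(rating - 1, 1) }
}

// Dynamic Type: scale a custom font with the user's text size and keep it updating.
let label = UILabel()
label.font = UIFontMetrics(forTextStyle: .body).scaledFont(for: .systemFont(ofSize: 17))
label.adjustsFontForContentSizeCategory = true
label.numberOfLines = 0
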
1 reply · 1 boost · 637 views · Jun ’24
(After upgrading to iOS 17.5.1) MFi hearing devices appeared to be paired, but app is unable to resolve the connection to the peripheral
Hey Apple, we (our customer support teams) have received feedback from customers complaining that their hearing devices (hearing aids) appear to be connected via MFi and OS controls are working, but the audio stream isn't working, and the app is unable to resolve a connection to the device via CBCentralManager.retrieveConnectedPeripherals(withServices:). The issue appears to disappear once the user unpairs and re-pairs the hearing devices in the Accessibility > Hearing Devices options (they might also need to uninstall and reinstall the app, as it gets stuck in an invalid state), but we are unable to replicate this issue in our environments given that it is intermittent, and once we have upgraded a device to iOS 17.5.1 we don't have a mechanism to revert it to an earlier version. So far, we have received about 30 reports over the past 2 weeks. Most of our customers complain about the app not connecting to the devices, but the fact that the audio stream is not happening could hint at a deeper problem than our app. Are you aware of a problem affecting MFi hearing devices after the OS upgrade process?
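
For context, the retrieval call in question looks roughly like this sketch; the service UUID is a hypothetical placeholder for the accessory's actual service, and the class name is illustrative.

import CoreBluetooth

final class HearingDeviceFinder: NSObject, CBCentralManagerDelegate {
    private lazy var central = CBCentralManager(delegate: self, queue: nil)
    private let hearingServiceUUID = CBUUID(string: "FDF0") // placeholder UUID

    func start() { _ = central } // creating the manager kicks off state updates

    func centralManagerDidUpdateState(_ central: CBCentralManager) {
        guard central.state == .poweredOn else { return }
        // Peripherals the system already considers connected for these services.
        let connected = central.retrieveConnectedPeripherals(withServices: [hearingServiceUUID])
        print("System-connected peripherals: \(connected.map(\.name))")
    }
}
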
2 replies · 1 boost · 869 views · Jun ’24
Caller does not have kTCCServiceVoiceBanking access to personal voices. No speech will be generated
Hello, I'm trying to leverage Personal Voice to read a phrase in my iOS application. My implementation works correctly on an iPhone 15, but does not work when I run the iOS application on an M2 MacBook Air. Here are some snippets from my implementation.

// This is how I request Personal Voice
AVSpeechSynthesizer.requestPersonalVoiceAuthorization() { status in
    if status == .authorized {
        var personalVoices = AVSpeechSynthesisVoice.speechVoices().filter { $0.voiceTraits.contains(.isPersonalVoice) }
    }
}

// This is how I'm attempting to read
let utterance = AVSpeechUtterance(string: textToRead)
if let voice = personalVoices.first {
    utterance.voice = voice
}
var synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)

I get the following error messages when I try this:

Cannot use AVSpeechSynthesizerBufferCallback with Personal Voices, defaulting to output channel.
Caller does not have kTCCServiceVoiceBanking access to personal voices. No speech will be generated
Voice not allowed to render speech! Will not set up synthesizer. Bailing now

Any suggestions on how to mitigate this issue?
0 replies · 0 boosts · 671 views · Jun ’24
iOS 18 beta bug (Dictation)
About half the time or more when dictating text, if dictation mode is manually deactivated immediately after finishing speaking, the last word is duplicated. For example, if you dictate a text message (without using Siri) using the microphone button on the keyboard, saying ‘I’m on my way, be there soon.’, and hit send or stop the dictation as soon as you are done talking, the dictated text will read ‘I’m on my way, Be there soon soon.’ Currently running iOS 18 beta 1, and I’ve experienced this multiple times.
3 replies · 1 boost · 844 views · Jun ’24