Explore best practices for creating inclusive apps that cater to users with diverse abilities

Posts under General subtopic

Critical Bug: Children Can Disable Screen Time Apps Like Choreio Without Parental Approval
Dear Apple Support,

I am reporting a critical issue affecting parental control apps like my app, Choreio, which is live on the App Store. When Screen Time settings are configured to require a parent’s password for changes, parents must log in on their child’s device to make any adjustments. This restriction is expected to extend to apps using the Screen Time API, such as Choreio. However, I’ve discovered a significant bug: children can bypass this restriction by simply toggling off Choreio in the Screen Time settings, without needing the parent’s password. This effectively disables the app and defeats its purpose as a parental control tool.

Please address this issue as soon as possible to ensure the intended functionality of parental controls. Let me know if you need any additional information to assist with resolving this. Thank you for your attention to this matter.

Best regards,
Jeff Houston

STEPS TO REPRODUCE
1. Install Choreio from the App Store on the child’s phone.
2. Enable parental controls in Screen Time and set it to require the parent’s password for any changes to Screen Time settings.
3. Go to the Screen Time settings on the child’s phone.
4. Observe that the child can simply toggle off Choreio, effectively deactivating the app, without needing the parent’s password.

Expected behavior: Toggling off Choreio should require the parent’s password, just like it does for other Screen Time settings.

Let me know if additional details are needed!
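For context, this is roughly how a parental-control app opts into Screen Time restrictions; a minimal sketch using the FamilyControls framework (the function name and print statements are illustrative, not Choreio’s actual code):

```swift
import FamilyControls

// Ask the system to authorize Screen Time restrictions on the child's device.
// The OS, not the app, is responsible for gating later changes behind the
// parent's approval, which is the guarantee the report above says is broken.
@MainActor
func requestScreenTimeAuthorization() async {
    do {
        // .child requests authorization for a child account; the system
        // flow is expected to involve the parent or guardian.
        try await AuthorizationCenter.shared.requestAuthorization(for: .child)
        print("Screen Time authorization granted")
    } catch {
        print("Screen Time authorization failed: \(error)")
    }
}
```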
Replies: 4 · Boosts: 0 · Views: 379 · Activity: Feb ’25
Proposal: Using ARKit Body Tracking & LiDAR for Sign Language Education (Real-time Feedback)
Hi everyone,

I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?"). Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), both of which are critical in Sign Language grammar. I'd like to propose/discuss an architecture leveraging the LiDAR and Neural Engine capabilities found in current iPhone devices to solve this.

The Concept: Skeleton-Based Normalization

Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's body tracking to abstract the input:

1. Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
2. Data normalization: Extract only the vector coordinates (X, Y, Z of each joint). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
3. Comparison: Feed these vectors into a Core ML model trained on "reference skeletons" (recorded by native signers).
4. Feedback loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific corrections (e.g., "Raise your elbow 10 degrees"). A sketch of this step follows below.

Why this approach?

- Solves occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
- Privacy: We are processing coordinates, not video streams.
- Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.

Has anyone experimented with using ARKit body anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "skeleton first" approach is the key to scalable Sign Language education apps. Looking forward to hearing your thoughts.
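A minimal sketch of the feedback-loop comparison, assuming ARKit body tracking supplies an ARBodyAnchor and that reference poses are stored as root-relative joint positions; the joint subset and the 5 cm tolerance are illustrative choices, not a tested calibration:

```swift
import ARKit
import simd

// Illustrative subset of joints to compare; ARSkeleton.JointName is the
// real API, but which joints matter for a given sign is an assumption here.
let trackedJoints: [ARSkeleton.JointName] = [
    .head, .leftShoulder, .rightShoulder, .leftHand, .rightHand
]

/// Mean per-joint distance (in meters) between the captured skeleton and a
/// stored reference pose. Returns nil if no tracked joints could be compared.
func poseDistance(from anchor: ARBodyAnchor,
                  to reference: [ARSkeleton.JointName: SIMD3<Float>]) -> Float? {
    let skeleton = anchor.skeleton
    var total: Float = 0
    var count = 0
    for joint in trackedJoints {
        guard let transform = skeleton.modelTransform(for: joint),
              let refPosition = reference[joint] else { continue }
        // modelTransform(for:) is relative to the body anchor's root joint,
        // so the translation column alone gives a normalized joint position.
        let position = SIMD3<Float>(transform.columns.3.x,
                                    transform.columns.3.y,
                                    transform.columns.3.z)
        total += simd_distance(position, refPosition)
        count += 1
    }
    guard count > 0 else { return nil }
    return total / Float(count)
}

// Usage, with an assumed 5 cm tolerance:
// if let d = poseDistance(from: bodyAnchor, to: referencePose), d < 0.05 { ... }
```

Because modelTransform(for:) is expressed relative to the body anchor's root, the comparison is already invariant to where the signer stands in the room, which is exactly the normalization the proposal relies on.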
Replies: 1 · Boosts: 0 · Views: 591 · Activity: Dec ’25
iOS VoiceOver Does Not Remove :focus-visible from Button When Moving to Non-Button Elements
When using iOS VoiceOver to navigate a webpage, selecting a button element correctly activates the :focus-visible state. However, when VoiceOver moves to a non-button element, the previously focused button retains its :focus-visible state. The focus indicator only updates when VoiceOver moves to another button. This behavior can be confusing for screen reader users, as it creates the appearance of multiple elements being focused simultaneously. It also differs from expected keyboard navigation behavior, where focus styles update as soon as the user moves to a new interactive element.

Is this an intentional VoiceOver behavior, or could this be a bug? If intentional, is there a recommended workaround to ensure correct focus indication when moving between different types of elements?

Steps to Reproduce:
1. Enable VoiceOver on an iOS device.
2. Navigate using swipe gestures or explore-by-touch to focus on a button.
3. Observe that the button correctly receives the :focus-visible styling.
4. Move to a non-button element (e.g., an element with tabindex="0" or an image).
5. Notice that the button still retains its :focus-visible state, even though VoiceOver has moved to a new element.

Expected Behavior: The previously focused button should lose its :focus-visible state when VoiceOver moves to a different interactive element, just as it does when using keyboard navigation.

Actual Behavior: The :focus-visible state remains on the previously focused button until VoiceOver moves to another button, which can display multiple focus indicators at once.

Tested On: iOS 17.7 and 18.3.1; iOS Safari; iPhone 11 Pro and iPhone 14 Pro Max.
Replies: 1 · Boosts: 0 · Views: 691 · Activity: Feb ’25
Japanese “Hattori” TTS voice missing from Settings > General > Read & Speak > Voices > Japanese on iOS 26
Steps: Open the path above. “Hattori” is not listed and cannot be downloaded.
Expected: Hattori is available to download and select.
Actual: Hattori is absent from the catalog.
Regression: The voice was available on iOS 18.x on the same device.
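Separately from the Settings path, the installed voice catalog can be confirmed programmatically; a minimal sketch that enumerates Japanese voices via AVFoundation (the filtering and printout are illustrative):

```swift
import AVFoundation

// List every available Japanese TTS voice, so a missing voice such as
// "Hattori" can be confirmed in code rather than only in Settings.
let japaneseVoices = AVSpeechSynthesisVoice.speechVoices()
    .filter { $0.language.hasPrefix("ja") }

for voice in japaneseVoices {
    print(voice.name, voice.identifier, voice.quality.rawValue)
}
```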
Replies: 0 · Boosts: 0 · Views: 414 · Activity: Sep ’25
SwiftUI List Accessibility VoiceOver
I have been working on a feature where I have a List in SwiftUI with previous- and next-page data loading: the user can scroll up and down to load the previous or next page of data. Recently, I hit an accessibility issue while testing with VoiceOver. When the user lands on the listing screen, swipes across the screen from the navigation, and focus reaches the list, it should highlight the first visible item. But when the user swipes back:

- Should it load the previous data and announce the previous item, or should it go back to the navigation items?
- If it loads the previous item, what if the user wants to get back to the navigation to switch to other actions, and vice versa?

Did anyone come across this kind of issue? What would be the standard expected behavior in this case if the list scrolls in both directions? I tried the different gestures described at https://support.apple.com/en-in/guide/iphone/iph3e2e2281/ios, but they aren't working.
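One pattern worth trying is to pin VoiceOver focus to the row the user was reading when the previous page is prepended, so focus neither jumps back to the navigation nor announces an unexpected row; a minimal sketch using @AccessibilityFocusState (the string items and single-shot paging are stand-ins for a real data source):

```swift
import SwiftUI

struct PagedListView: View {
    // Hypothetical data source: items 20..<40 are the "current" page.
    @State private var items: [String] = (20..<40).map { "Item \($0)" }
    @State private var hasPreviousPage = true
    @AccessibilityFocusState private var focusedItem: String?

    var body: some View {
        List(items, id: \.self) { item in
            Text(item)
                .accessibilityFocused($focusedItem, equals: item)
                .onAppear {
                    guard hasPreviousPage, item == items.first else { return }
                    // Hypothetical fetch of the previous page.
                    let previousPage = (10..<20).map { "Item \($0)" }
                    items.insert(contentsOf: previousPage, at: 0)
                    hasPreviousPage = false
                    // Keep VoiceOver anchored on the row the user was reading
                    // instead of letting focus fall back to the navigation.
                    focusedItem = item
                }
        }
    }
}
```

This keeps the user's reading position stable; they can still reach the navigation explicitly with the standard VoiceOver container gestures rather than having focus moved there for them.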
Replies: 2 · Boosts: 0 · Views: 124 · Activity: Apr ’25
How to disable the default focus effect and detect keyboard focus in SwiftUI?
I’m trying to customize the keyboard focus appearance in SwiftUI. In UIKit (see the WWDC 2021 session "Focus on iPad keyboard navigation"), it’s possible to remove the default UIFocusHaloEffect and change a view’s appearance depending on whether it has focus. In SwiftUI I’ve tried the following:

    .focusable() // .focusable(true, interactions: .activate)
    .focusEffectDisabled()
    .focused($isFocused)

However, I’m running into several issues:

- .focusable(true, interactions: .activate) causes an infinite loop, so keyboard navigation stops responding.
- .focusEffectDisabled() doesn’t seem to remove the default focus effect on iOS.
- Using @FocusState prevents Space from triggering the action when the view has keyboard focus.

My main questions:

- How can I reliably detect whether a SwiftUI view has keyboard focus? (Is there an alternative to FocusState that integrates better with keyboard navigation on iOS?)
- What’s the recommended way in SwiftUI to disable the default focus effect (the blue overlay) and replace it with a custom border?

Any guidance or best practices would be greatly appreciated! Here's my sample code:

    import SwiftUI

    struct KeyboardFocusExample: View {
        var body: some View {
            // The ScrollView is required, otherwise the custom focus value resets
            // to false after a few seconds. I also need it for my actual use case.
            ScrollView {
                VStack {
                    Text("First button")
                        .keyboardFocus()
                        .button { print("First button tapped") }

                    Text("Second button")
                        .keyboardFocus()
                        .button { print("Second button tapped") }
                }
            }
        }
    }

    // MARK: - Focus Modifier

    struct KeyboardFocusModifier: ViewModifier {
        @FocusState private var isFocused: Bool

        func body(content: Content) -> some View {
            content
                .focusable() // ⚠️ Must come before .focused(), otherwise the FocusState won’t be recognized
                // .focusable(true, interactions: .activate) // ⚠️ Causes an infinite loop, so keyboard navigation no longer responds
                .focusEffectDisabled() // ⚠️ Has no effect on iOS
                .focused($isFocused)
                // Custom halo effect
                .padding(4)
                .overlay(
                    RoundedRectangle(cornerRadius: 18)
                        .strokeBorder(isFocused ? .red : .clear, lineWidth: 2)
                )
                .padding(-4)
        }
    }

    extension View {
        public func keyboardFocus() -> some View {
            modifier(KeyboardFocusModifier())
        }
    }

    // MARK: - Button Modifier

    /// ⚠️ Using a Button view makes no difference
    struct ButtonModifier: ViewModifier {
        let action: () -> Void

        func body(content: Content) -> some View {
            content
                .contentShape(Rectangle())
                .onTapGesture { action() }
                .accessibilityAction { action() }
                .accessibilityAddTraits(.isButton)
                .accessibilityElement(children: .combine)
                .accessibilityRespondsToUserInteraction()
        }
    }

    extension View {
        public func button(action: @escaping () -> Void) -> some View {
            modifier(ButtonModifier(action: action))
        }
    }
Replies: 1 · Boosts: 0 · Views: 477 · Activity: Sep ’25
"illegal character encoding in string literal" warnings in Xcode
Good day! I have a long-term project ported all the way up from old Think C through many versions of Xcode. Its source files are encoded in "Western (Mac OS Roman)". Some of my error messages contain characters outside the plain ASCII set (e.g., "å"). The editor displays these correctly, but I get plenty of illegal-character warnings, and the messages do not display properly.

I imagine there's a way to keep separate files of localized text for internationalized applications, but I am the only end user of this application, and it used to just plain work in earlier Xcode versions. Furthermore, there must be developers throughout Europe who use such characters in string literals, just typing in their native languages straight off their keyboards. I was thinking there must be a Clang setting or something, but I have been unable to find it, and an internet search turns up no solution except to cumbersomely escape each individual character. I can't imagine that a French programmer does that every time they want to type "è", "é", or "à"! Any help?

(Disclaimer: I'm an English speaker and only use such characters whimsically, but want to keep them for legacy's sake.) Thanks....

P.S. Using Xcode 15.3; under Settings > Text Editing > Editing, "Western (Mac OS Roman)" is already selected as the default text encoding, with "Convert existing files on save" checked.
Replies: 3 · Boosts: 0 · Views: 205 · Activity: Jun ’25
App Store Connect – “Unable to Handle This Request” Error
Hello,

I'm currently unable to access App Store Connect. When I try to open https://appstoreconnect.apple.com, I receive the following error message:

"appstoreconnect.apple.com is currently unable to handle this request."

I've tried the following steps, but the issue persists:

- Cleared the browser cache and cookies
- Tried different browsers (Safari, Chrome)
- Attempted from multiple devices and networks

Is this a known issue, or is there a workaround available? I would appreciate any help or an update on the current status.

Thank you,
Replies: 0 · Boosts: 0 · Views: 151 · Activity: Jun ’25
Is it possible to animate the accessibility frame on iOS and macOS?
Say I have a UI element that moves on the screen. Is it possible to update its accessibility frame as it moves while VoiceOver is focused on it? From my tests, VoiceOver ignores UIAccessibilityLayoutChangedNotification if it's sent repeatedly in a short period of time on iOS, while sending NSAccessibilityLayoutChangedNotification on macOS triggers VoiceOver to reannounce the focused element repeatedly.
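On the iOS side, one workaround to experiment with is overriding accessibilityFrame so it is computed from the layer's presentation layer, meaning VoiceOver receives the mid-animation position whenever it happens to query the element; a minimal sketch (the view class is hypothetical, and how often VoiceOver re-queries the frame is exactly the open question here):

```swift
import UIKit

// A view that reports its animated, on-screen position to VoiceOver.
// The presentation layer reflects the in-flight animation state, whereas
// the model layer's frame jumps straight to the animation's end value.
final class MovingTileView: UIView {
    override var accessibilityFrame: CGRect {
        get {
            guard let superview else { return super.accessibilityFrame }
            // Frame in the superview's coordinate space, mid-animation if possible.
            let current = layer.presentation()?.frame ?? frame
            return UIAccessibility.convertToScreenCoordinates(current, in: superview)
        }
        set { super.accessibilityFrame = newValue }
    }
}
```

Treat this as a starting point rather than a confirmed fix; it avoids spamming layout-changed notifications, but it only helps to the extent that VoiceOver polls the focused element's frame during the move.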
Replies: 2 · Boosts: 0 · Views: 256 · Activity: Jul ’25
AXChildren does not get all children
I'd like to add borders to all buttons in the iOS Simulator from my Mac app. First I get the simulator window. Then I access the children of all AXGroups, and if a child is a button or a static text, I add a border. But for some buttons this does not work: in the example image, the navigation bar buttons are not found. I guess the problem is that for some AXGroups the children array accessed via AXChildren is empty. Here is the relevant code:

    - (NSArray<DDHOverlayElement *> *)overlayChildrenOfUIElement:(AXUIElementRef)element index:(NSInteger)index {
        NSMutableArray<DDHOverlayElement *> *tempOverlayElements = [[NSMutableArray alloc] init];

        NSLog(@">>> -----------------------------------------------------");
        NSString *role = [UIElementUtilities roleOfUIElement:element];
        NSRect frame = [UIElementUtilities frameOfUIElement:element];
        NSLog(@"%@, role: %@, %@", element, role, [NSValue valueWithRect:frame]);
        NSArray *lineage = [UIElementUtilities lineageOfUIElement:element];
        NSLog(@"lineage: %@", lineage);

        NSArray<NSValue *> *children = [UIElementUtilities childrenOfUIElement:element];
        if (children.count < 1) {
            NSLog(@"NO CHILDREN");
        }
        for (NSInteger i = 0; i < [children count]; i++) {
            NSValue *child = children[i];
            AXUIElementRef uiElement = (__bridge AXUIElementRef)child;
            NSString *role = [UIElementUtilities roleOfUIElement:uiElement];
            NSRect frame = [UIElementUtilities frameOfUIElement:uiElement];
            NSLog(@"----%@, role: %@, %@", child, role, [NSValue valueWithRect:frame]);
        }
        NSLog(@"<<< -----------------------------------------------------");

        for (NSInteger i = 0; i < [children count]; i++) {
            NSValue *child = children[i];
            AXUIElementRef uiElement = (__bridge AXUIElementRef)child;
            NSString *role = [UIElementUtilities roleOfUIElement:uiElement];
            NSRect frame = [UIElementUtilities frameOfUIElement:uiElement];
            NSLog(@"%@, role: %@, %@", child, role, [NSValue valueWithRect:frame]);
            if ([role isEqualToString:@"AXButton"] || [role isEqualToString:@"AXTextField"] || [role isEqualToString:@"AXStaticText"]) {
                NSString *tag = [NSString stringWithFormat:@"%ld%ld", (long)index, (long)i];
                NSLog(@"tag: %@", tag);
                DDHOverlayElement *overlayElement = [[DDHOverlayElement alloc] initWithUIElementValue:child tag:tag];
                [tempOverlayElements addObject:overlayElement];
            } else if ([role isEqualToString:@"AXGroup"] || [role isEqualToString:@"AXToolbar"]) {
                [tempOverlayElements addObjectsFromArray:[self overlayChildrenOfUIElement:uiElement index:++index]];
            } else if ([role isEqualToString:@"AXWindow"]) {
                [self.overlayWindowController setFrame:[UIElementUtilities frameOfUIElement:uiElement]];
                [tempOverlayElements addObjectsFromArray:[self overlayChildrenOfUIElement:uiElement index:index]];
            }
        }
        return [tempOverlayElements copy];
    }

For some AXGroups the children are found; for some they are empty. I cannot figure out why. Does anyone have an idea what I'm doing wrong?
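One way to narrow down whether those AXGroups truly have no children or the query itself is failing is to copy kAXChildrenAttribute directly and inspect the AXError; a small Swift sketch of that diagnostic (the interpretation in the comments reflects common causes, not documented guarantees):

```swift
import ApplicationServices

/// Copies an element's AXChildren and reports why the array might be empty.
func inspectChildren(of element: AXUIElement) {
    var value: CFTypeRef?
    let error = AXUIElementCopyAttributeValue(element,
                                              kAXChildrenAttribute as CFString,
                                              &value)
    switch error {
    case .success:
        let children = (value as? [AXUIElement]) ?? []
        print("children: \(children.count)")
    case .noValue, .attributeUnsupported:
        print("element genuinely reports no children")
    case .cannotComplete, .notImplemented:
        // Often means the target app hasn't exposed this subtree yet;
        // retrying after a short delay sometimes helps.
        print("children unavailable right now: \(error)")
    default:
        print("AXError: \(error.rawValue)")
    }
}
```

If the error is .success with an empty array, the group really exposes nothing and the issue is on the Simulator side; any of the failure codes point at timing or accessibility-permission problems in the inspecting app instead.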
Replies: 2 · Boosts: 0 · Views: 154 · Activity: May ’25
Making PhotoLibrary UIImagePickerController a11y compliant
I am invoking a UIImagePickerController with source type UIImagePickerControllerSourceTypePhotoLibrary from my view controller. I want to shift the keyboard focus to the Cancel button, which is the first interactive element on the gallery picker. When a user has Full Keyboard Access turned on, they should be able to press Tab and interact with the gallery picker modal. How do I achieve this?
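One starting point, sketched below, is to post a screenChanged notification once the picker is presented so assistive technologies treat the modal as the new context; whether Full Keyboard Access then lands on the Cancel button is unverified, so treat this as an experiment rather than a confirmed fix:

```swift
import UIKit

extension UIViewController {
    /// Presents the photo-library picker and nudges assistive technologies
    /// toward the newly presented modal.
    func presentPhotoLibraryPicker() {
        let picker = UIImagePickerController()
        picker.sourceType = .photoLibrary
        present(picker, animated: true) {
            // screenChanged tells accessibility clients a new screen appeared;
            // passing the picker's view suggests where focus should move.
            UIAccessibility.post(notification: .screenChanged,
                                 argument: picker.view)
        }
    }
}
```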
Replies: 1 · Boosts: 0 · Views: 155 · Activity: May ’25
Speak Screen gesture not working
I am testing the accessibility feature in the Settings app called "Speak Screen". The help text in the Settings app states that swiping down with two fingers will cause the screen content to be spoken. However, I've been unable to get this feature to work. Every time I try the two-finger swipe down, it behaves the same as the single-finger swipe-down gesture; usually this manifests as making scroll views bounce. I've tried toggling the feature on and off, turning off Reachability, and rebooting my phone, but I can't get the Speak Screen gesture to work. If I trigger Speak Screen from the "Speech Controller" button, the screen's content is spoken as expected, so I know the feature is enabled; it's just the gesture that doesn't work. Is there something else I need to do to get this gesture to work? I don't want to tell my users to turn this feature on if I can't verify that the gesture will work with my app.
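On the verification point, an app can at least confirm in code that Speak Screen is switched on before pointing users at the gesture; a minimal sketch using UIAccessibility.isSpeakScreenEnabled (note this reports the setting, not whether the gesture itself is being recognized):

```swift
import UIKit

// True when Settings > Accessibility > Spoken Content > Speak Screen is on.
if UIAccessibility.isSpeakScreenEnabled {
    print("Speak Screen is enabled; the two-finger swipe-down should be available.")
} else {
    print("Speak Screen is off; direct users to enable it in Settings.")
}

// Optionally observe changes to the setting while the app is running.
NotificationCenter.default.addObserver(
    forName: UIAccessibility.speakScreenStatusDidChangeNotification,
    object: nil, queue: .main) { _ in
    print("Speak Screen setting changed: \(UIAccessibility.isSpeakScreenEnabled)")
}
```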
Replies: 1 · Boosts: 0 · Views: 209 · Activity: Jul ’25