Explore best practices for creating inclusive apps that cater to users with diverse abilities


Posts under General subtopic

Post · Replies · Boosts · Views · Created

pairedUUIDsDidChangeNotification never fires, even with MFi hearing aids paired
Hi everyone — I’m implementing the new Hearing Device Support API described here: https://developer.apple.com/documentation/accessibility/hearing-device-support

I have MFi hearing aids paired and visible under Settings → Accessibility → Hearing Devices, and I’ve added the com.apple.developer.hearing.aid.app entitlement (and also tested with Wireless Accessory Configuration: https://developer.apple.com/documentation/bundleresources/entitlements/com.apple.external-accessory.wireless-configuration ). With com.apple.developer.hearing.aid.app in the entitlements file, the app won't even compile.

Problem: NotificationCenter.default.addObserver(...) for pairedUUIDsDidChangeNotification never fires — not on app launch, not after pairing/unpairing, and not after reconnecting the hearing aids. Because the notification never triggers, calls like HearingDeviceSession.shared.pairedDevices always return an empty list.

What I expected: According to the docs, the notification should be posted whenever the paired device UUIDs change, and the session should expose those devices — but nothing happens.

Questions:
Does the hearing.aid.app entitlement require special approval from Apple beyond adding it to the entitlements file?
Is there a way to verify that iOS is actually honoring this entitlement?
Has anyone successfully received this notification on a real device?

Any help or confirmation would be greatly appreciated.
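For completeness, a trimmed-down sketch of my observer setup. I'm using the names from the linked docs (HearingDeviceSession, pairedUUIDsDidChangeNotification); the exact namespace of the notification and the framework import are my assumptions, so treat this as illustrative only:

    import UIKit
    // Assumption: HearingDeviceSession and its notification come from the Hearing Device
    // Support framework linked above; only the NotificationCenter usage is standard Foundation.

    final class HearingDeviceObserver {
        private var token: NSObjectProtocol?

        func startObserving() {
            token = NotificationCenter.default.addObserver(
                forName: HearingDeviceSession.pairedUUIDsDidChangeNotification, // never posted in my testing
                object: nil,
                queue: .main
            ) { _ in
                // Expected after pairing/unpairing or reconnecting the hearing aids.
                let devices = HearingDeviceSession.shared.pairedDevices
                print("Paired hearing devices:", devices) // always empty
            }
        }

        deinit {
            if let token { NotificationCenter.default.removeObserver(token) }
        }
    }
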
Replies: 1 · Boosts: 0 · Views: 150 · Created: 1d
How to Implement Dynamic Type for UITextFields Without Resetting Data
Hello! I was doing some accessibility testing for my app and found out that when the user switches the text size, all of the data in the text fields is reset, which causes major disruption. I've tried looking for documentation, but all I've found is information on how to dynamically scale the UI for different text sizes, which I've already implemented. My guess is that every time Dynamic Type registers a change, it redraws my UI instead of just updating it. How can I make sure the data is not reset when the text size changes?
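For anyone who lands here later, this is the direction I'm experimenting with: keep the same UITextField instances alive and let them rescale in place instead of rebuilding the form when Dynamic Type changes. A simplified sketch (FormViewController and nameField are made-up names):

    import UIKit

    final class FormViewController: UIViewController {
        private let nameField = UITextField()
        private var sizeObserver: NSObjectProtocol?

        override func viewDidLoad() {
            super.viewDidLoad()
            // Scaled font that tracks the user's Dynamic Type setting.
            nameField.font = UIFontMetrics(forTextStyle: .body)
                .scaledFont(for: .systemFont(ofSize: 17))
            // Let UIKit update the font in place when the size category changes,
            // instead of tearing the field down and rebuilding it (which loses .text).
            nameField.adjustsFontForContentSizeCategory = true
            view.addSubview(nameField)

            // If extra work is needed on a size change, re-layout only; don't reload
            // data sources or recreate fields here, or the user's typed text is discarded.
            sizeObserver = NotificationCenter.default.addObserver(
                forName: UIContentSizeCategory.didChangeNotification,
                object: nil,
                queue: .main
            ) { [weak self] _ in
                self?.view.setNeedsLayout()
            }
        }
    }
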
Replies: 1 · Boosts: 1 · Views: 457 · Created: 4d
iOS 26 regression: `DeviceActivityEvent`: `eventDidReachThreshold` called immediately (instead of waiting till threshold is reached)
Hello Albert! I am experiencing some strange bugs around DeviceActivityEvents (part of the DeviceActivity framework) on iOS 26 / iOS 26.1 / iOS 26.2 beta.

When creating a DeviceActivityEvent, we can assign a threshold and applicationTokens. The idea is that after the user has spent the threshold amount of time in those apps, eventDidReachThreshold() is called. The property includesPastActivity is set to false.

On iOS 26, however, it quite often happens (fairly reliably after updating to a new beta seed) that eventDidReachThreshold() is called immediately (after a couple of seconds) instead of waiting for the threshold to be met.

Is anyone else seeing similar issues on iOS 26 / iOS 26.1 / iOS 26.2 beta? The only workaround I have found is to ask users to revoke and re-grant Screen Time permissions. This only holds for about two weeks, or at most until the next iOS 26 beta update is installed, so it is unfortunately not a permanent solution.

Feedback (incl. sysdiagnoses and a sample project) is filed under: FB18061981, FB18927456. One of our users has filed their own feedback request as well: FB20817853.

Thanks a lot for any help on this!
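A trimmed-down sketch of the setup, for context. Parameter labels are reproduced from memory, so take them as approximate; the activity and event names are placeholders:

    import DeviceActivity
    import FamilyControls
    import ManagedSettings

    // App side: the event should fire only after 30 minutes of usage of the selected apps.
    func startMonitoring(selection: FamilyActivitySelection) throws {
        let schedule = DeviceActivitySchedule(
            intervalStart: DateComponents(hour: 0, minute: 0),
            intervalEnd: DateComponents(hour: 23, minute: 59),
            repeats: true
        )
        let event = DeviceActivityEvent(
            applications: selection.applicationTokens,
            threshold: DateComponents(minute: 30),
            includesPastActivity: false  // iOS 17.4+: usage before monitoring starts should not count
        )
        try DeviceActivityCenter().startMonitoring(
            DeviceActivityName("daily"),
            during: schedule,
            events: [DeviceActivityEvent.Name("thirtyMinutes"): event]
        )
    }

    // Monitor extension: expected to run only once the threshold is actually reached.
    final class Monitor: DeviceActivityMonitor {
        override func eventDidReachThreshold(_ event: DeviceActivityEvent.Name,
                                             activity: DeviceActivityName) {
            super.eventDidReachThreshold(event, activity: activity)
            // On the iOS 26 betas this fires within seconds of startMonitoring.
        }
    }
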
Replies: 0 · Boosts: 0 · Views: 270 · Created: 1w
Proposal: Using ARKit Body Tracking & LiDAR for Sign Language Education (Real-time Feedback)
Hi everyone, I’ve been analyzing the current state of Sign Language accessibility tools, and I noticed a significant gap in learning tools: we lack real-time feedback for students (e.g., "Is my hand position correct?"). Most current solutions rely on 2D video processing, which struggles with depth perception and occlusion (hand-over-hand or hand-over-face gestures), which are critical in Sign Language grammar. I'd like to propose/discuss an architecture leveraging the current LiDAR + Neural Engine capabilities found in iPhone devices to solve this.

The Concept: Skeleton-based Normalization
Instead of training ML models on raw video frames (which introduces noise from lighting, skin tone, and clothing), we could use ARKit's Body Tracking to abstract the input.
Capture: Use ARKit/LiDAR to track the user's upper body and hand joints in 3D space.
Data Normalization: Extract only the vector coordinates (X, Y, Z of joints). This creates a "clean" dataset, effectively normalizing the user regardless of physical appearance.
Comparison: Feed these vectors into a CoreML model trained on "Reference Skeletons" (recorded by native signers).
Feedback Loop: The app calculates the geometric distance between the user's pose and the reference pose to provide specific correction (e.g., "Raise your elbow 10 degrees").

Why this approach?
Solves Occlusion: LiDAR handles depth much better than standard RGB cameras when hands cross the body.
Privacy: We are processing coordinates, not video streams.
Efficiency: Comparing vector sequences is computationally cheaper than video analysis, preserving battery life.

Has anyone experimented with using ARKit Body Anchors specifically for comparing complex gesture sequences against a stored "correct" database? I believe this "Skeleton First" approach is the key to scalable Sign Language education apps. Looking forward to hearing your thoughts.
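To make the comparison step concrete, a rough sketch under stated assumptions: the joint set, the root-relative normalization, and the plain mean-distance metric are placeholders, and a real system would also need temporal alignment (e.g., DTW) across frames:

    import ARKit
    import simd

    // Joints compared for a single pose (subset of ARSkeleton.JointName; placeholder choice).
    let comparedJoints: [ARSkeleton.JointName] = [.head, .leftShoulder, .rightShoulder, .leftHand, .rightHand]

    // One captured frame of a gesture: positions of comparedJoints, in order, relative to the body root.
    typealias Pose = [SIMD3<Float>?]

    // Extract root-relative joint positions from a body anchor (normalizes away world placement).
    func pose(from anchor: ARBodyAnchor) -> Pose {
        comparedJoints.map { joint in
            anchor.skeleton.modelTransform(for: joint).map {
                // modelTransform(for:) is already expressed relative to the root joint.
                SIMD3($0.columns.3.x, $0.columns.3.y, $0.columns.3.z)
            }
        }
    }

    // Mean joint-to-joint distance (meters) between the learner's pose and a reference pose.
    func deviation(of user: Pose, from reference: Pose) -> Float {
        var distances: [Float] = []
        for (u, r) in zip(user, reference) {
            if let u = u, let r = r {
                distances.append(simd_distance(u, r))  // per-joint error in meters
            }
        }
        return distances.isEmpty ? .infinity : distances.reduce(0, +) / Float(distances.count)
    }
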
Replies: 1 · Boosts: 0 · Views: 506 · Created: 1w
VoiceOver accessibility issue in UIKit for line granularity
Context: We are using UIKit to provide accessibility in our app for our iOS users. Our app mostly contains documents/books that the user can read.

Issue: VoiceOver is skipping lines when they contain leading spaces. We have observed this issue in different languages. It only happens for line granularity; other granularities seem to work as expected.

Implementation: We are using the APIs below to provide line content to VoiceOver:
UIAccessibilityReadingContent
- accessibilityPageContent
- accessibilityFrameForLineNumber
- accessibilityContentForLineNumber

We create UIAccessibilityElement objects to pass to VoiceOver, and each UIAccessibilityElement implements UIAccessibilityReadingContent to provide readable content. We also use the APIs below to cross element boundaries for all granular navigations:
accessibilityNextTextNavigationElement
accessibilityPreviousTextNavigationElement

We want to know whether skipping a line that has leading spaces is expected behavior or a bug in UIKit.
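A reduced version of one of our page elements, plus the workaround we are evaluating (trimming the leading spaces before handing the line to VoiceOver). The class and property names are simplified stand-ins for our real implementation:

    import UIKit

    final class PageElement: UIAccessibilityElement, UIAccessibilityReadingContent {
        var lines: [String] = []          // one entry per rendered line; may start with spaces
        var lineFrames: [CGRect] = []     // screen-coordinate frame for each line

        func accessibilityLineNumber(for point: CGPoint) -> Int {
            lineFrames.firstIndex { $0.contains(point) } ?? NSNotFound
        }

        func accessibilityContent(forLineNumber lineNumber: Int) -> String? {
            guard lines.indices.contains(lineNumber) else { return nil }
            // Workaround under test: VoiceOver appears to skip lines whose content starts
            // with whitespace at line granularity, so strip the leading spaces here only.
            return String(lines[lineNumber].drop(while: { $0 == " " }))
        }

        func accessibilityFrame(forLineNumber lineNumber: Int) -> CGRect {
            lineFrames.indices.contains(lineNumber) ? lineFrames[lineNumber] : .null
        }

        func accessibilityPageContent() -> String? {
            lines.joined(separator: "\n")
        }
    }
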
Replies: 1 · Boosts: 0 · Views: 260 · Created: 2w
Double Payment Issue — Apple Developer Account Still Not Activated
Hi everyone, I completed my Apple Developer registration on November 8, 2025, and paid the fee. Afterwards, I was charged another €99, so I basically paid twice. But nothing has changed on my account so far — it doesn’t even show as “pending” or “waiting” anymore. I also can’t reach Apple Support, since I’m unable to get a call or send an email. Has anyone else experienced something like this or knows what I can do?
Replies: 1 · Boosts: 1 · Views: 233 · Created: 3w
Developer Program enrollment still pending after payment
Hi everyone, I subscribed to the Apple Developer Program on Tuesday evening, November 4th, 2025. The payment has already been charged to my bank account, but my account still shows the status “Pending” with the message “Subscribe your membership”. It’s now been several days, and I haven’t received any confirmation email or any request for additional information. I already contacted Apple Support by email, but I’d like to know if other developers have experienced the same situation and how long it took before their account was activated. Thanks in advance for your help and feedback! — Martin
Replies: 16 · Boosts: 7 · Views: 1.1k · Created: 3w
Download Voices screen
System Settings → Accessibility → System Voice → the little (i) beside the pulldown → Voices: this screen lets you download Premium voices. Is there a way to trigger this screen programmatically? Or at least a link to get my users there without having to dig through that swamp of screens?
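I haven't found a documented URL or API that deep-links to that screen, so the closest I can get in code is detecting whether a higher-quality voice is installed and, if not, walking the user through the path above. A sketch (the language code and the premium check are arbitrary choices):

    import AVFoundation

    // Best installed voice for a language, ranked by quality (.default < .enhanced < .premium).
    func bestInstalledVoice(for language: String) -> AVSpeechSynthesisVoice? {
        AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.language == language }
            .max { $0.quality.rawValue < $1.quality.rawValue }
    }

    if let voice = bestInstalledVoice(for: "en-US"), voice.quality == .premium {
        print("Premium voice already installed:", voice.name)
    } else {
        // No public deep link to the download screen that I know of, so show the user
        // step-by-step instructions for the Settings path described above instead.
        print("Ask the user to download a Premium voice manually.")
    }
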
Replies: 0 · Boosts: 1 · Views: 377 · Created: 3w
Custom Keyboard Extension Not Showing in Settings for Activation
Hi everyone, I’m developing a React Native iOS app that includes a custom keyboard extension for sending stickers across apps. The project builds successfully, and the main app installs fine on my test device. However, I’m not seeing the keyboard extension appear under Settings → General → Keyboard → Keyboards → Add New Keyboard, which means I can’t activate it or grant access. At this point, I’m not even sure if the extension is actually being installed on the device along with the main app. Here’s what I’ve done so far. I created a Keyboard Extension target in Xcode, set the correct bundle identifiers and provisioning profiles, and enabled “Requests Open Access” in the extension’s Info.plist. I built and installed the app on a physical device rather than the simulator to ensure proper testing. My main questions are: how can I confirm that the extension is being installed on the device, and if it isn’t, what might prevent it from installing even though the build completes successfully? Any insights, troubleshooting steps, or guidance would be greatly appreciated.
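For context, the extension's principal class is about as minimal as it gets right now: a plain UIInputViewController subclass, with the sticker UI stripped out (the class name must match NSExtensionPrincipalClass in the extension's Info.plist):

    import UIKit

    final class KeyboardViewController: UIInputViewController {

        override func viewDidLoad() {
            super.viewDidLoad()

            // Placeholder next-keyboard button; the sticker picker replaces this later.
            let nextKeyboardButton = UIButton(type: .system)
            nextKeyboardButton.setTitle("Next Keyboard", for: .normal)
            nextKeyboardButton.addTarget(self,
                                         action: #selector(handleInputModeList(from:with:)),
                                         for: .allTouchEvents)
            nextKeyboardButton.translatesAutoresizingMaskIntoConstraints = false
            view.addSubview(nextKeyboardButton)
            NSLayoutConstraint.activate([
                nextKeyboardButton.centerXAnchor.constraint(equalTo: view.centerXAnchor),
                nextKeyboardButton.centerYAnchor.constraint(equalTo: view.centerYAnchor)
            ])
        }
    }
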
Replies: 0 · Boosts: 0 · Views: 656 · Created: Nov ’25
iOS 26 Full Keyboard Access (navigation) and WKWebView
We use an embedded WKWebView for several screens in our app. Recently, we have been testing keyboard navigation via Full Keyboard Access in our apps. On iOS 18, everything works pretty much as expected. On iOS 26, it does not: you can "tab" away from the webview and then never tab back to it for keyboard navigation. Is this a known issue? Are there workarounds for this issue that anyone is aware of?
Replies: 2 · Boosts: 0 · Views: 334 · Created: Nov ’25
Wi-Fi Aware demo pairing issue
I am developing a program on my chip and attempting to establish a connection with the Wi-Fi Aware demo app launched by iOS 26. Currently, I am encountering an issue during the pairing phase.

If I am the subscriber of the service and successfully complete the follow-up frame exchange of pairing bootstrapping, I see the PIN code displayed by iOS.
Question 1: How should I use this PIN code?
Question 2: Subsequently, I need to negotiate keys with iOS through PASN. What should I use as the password for the PASN SAE process?

If I am the subscriber of the service and successfully complete the follow-up frame exchange of pairing bootstrapping, I should display the PIN code.
Question 3: How do I generate this PIN code?
Question 4: Subsequently, I need to negotiate keys with iOS through PASN. What should I use as the password for the PASN SAE process?
Replies: 1 · Boosts: 0 · Views: 557 · Created: Nov ’25
AirPods Pro 3 HRV Data Access Through HealthKit?
Hey everyone, I'm working on a health app that's heavily focused on HRV tracking and analysis, and I'm trying to figure out what's actually possible with AirPods Pro 3 from a developer standpoint. The hardware clearly has a much better heart rate sensor than the previous generation, but I'm hitting some walls when it comes to actually accessing the data I need.

So here's the situation I'm dealing with: When I query HealthKit for HRV samples, I'm not seeing anything coming from AirPods Pro 3. The device is obviously capable of tracking heart rate continuously during workouts and listening sessions, and from what I've read about the hardware, it should theoretically be able to capture the inter-beat intervals needed for HRV calculation. But either that data isn't being processed on-device, or it's just not being made available through the standard HealthKit data types that third-party apps can access.

What I'm really after is either direct HRV metrics (like SDNN, which Apple Watch already provides through HKQuantityTypeIdentifierHeartRateVariabilitySDNN) or, even better, access to the raw R-R interval data. With R-R intervals, I could calculate RMSSD, pNN50, and other time-domain and frequency-domain HRV metrics that are super valuable for tracking recovery, autonomic nervous system balance, and stress levels. This would be especially useful since a lot of users wear AirPods during activities when they're not wearing their Apple Watch.

Has anyone managed to find a way to pull this data from AirPods Pro 3? Are there any private frameworks or entitlements I should be looking into? Or is this just fundamentally not exposed to developers at the OS level right now? I've gone through the HealthKit documentation pretty thoroughly and haven't found anything that specifically addresses this, but I'm wondering if I'm missing something or if there are any known workarounds.

I'm also curious if anyone has heard anything from Apple about future plans to expose this data. It seems like a missed opportunity given how capable the hardware is and how much value developers could provide with access to this physiological data. Would love to hear if anyone else is working on similar features or has insights into the technical limitations here.
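For reference, this is roughly how I'm checking which sources actually contribute SDNN samples; a trimmed sketch with error handling omitted:

    import HealthKit

    let healthStore = HKHealthStore()
    let hrvType = HKQuantityType.quantityType(forIdentifier: .heartRateVariabilitySDNN)!

    // Request read access, then log which device/source each recent SDNN sample came from.
    func logRecentHRVSources() {
        healthStore.requestAuthorization(toShare: nil, read: [hrvType]) { granted, _ in
            guard granted else { return }
            let newestFirst = NSSortDescriptor(key: HKSampleSortIdentifierStartDate, ascending: false)
            let query = HKSampleQuery(sampleType: hrvType,
                                      predicate: nil,
                                      limit: 50,
                                      sortDescriptors: [newestFirst]) { _, samples, _ in
                for case let sample as HKQuantitySample in samples ?? [] {
                    let sdnn = sample.quantity.doubleValue(for: .secondUnit(with: .milli))
                    // In my testing the source is always Apple Watch, never AirPods Pro 3.
                    print(sample.sourceRevision.source.name,
                          sample.device?.model ?? "unknown device",
                          String(format: "SDNN %.1f ms", sdnn))
                }
            }
            healthStore.execute(query)
        }
    }
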
Replies: 1 · Boosts: 0 · Views: 603 · Created: Oct ’25
Wi-Fi connectivity issue: captive.apple.com returns “application/octet-stream” instead of “text/html”
In our system, when a user enables a mobile hotspot and the system connects to it, the system attempts to verify Wi-Fi availability by sending an HTTP GET request to http://captive.apple.com. Normally, the server returns:
HTTP Status: 200 (OK)
Content-Type: text/html
This has always been used as a sign of normal connectivity.

Issue: Since last Friday, the server sometimes responds with:
Content-Type: application/octet-stream
When this occurs, our system determines that the network is unavailable and displays a connection warning (a “!” icon).

Question: Has Apple recently made any backend or CDN configuration changes to captive.apple.com that could affect the response type? Any advice on how we can solve this problem? Thanks!
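A sketch of the direction we're considering for the probe: key the check off the status code and the well-known "Success" body instead of the Content-Type header. Shown in Swift for illustration only; our system isn't an iOS app, and on iOS a plain-HTTP request would also need an ATS exception:

    import Foundation

    // Connectivity probe that ignores the Content-Type header and checks the body instead.
    func internetLooksAvailable() async -> Bool {
        guard let url = URL(string: "http://captive.apple.com") else { return false }
        do {
            let (data, response) = try await URLSession.shared.data(from: url)
            guard let http = response as? HTTPURLResponse, http.statusCode == 200 else { return false }
            // The probe page has historically returned an HTML body containing "Success"
            // when no captive portal is in the way, regardless of the advertised Content-Type.
            let body = String(data: data, encoding: .utf8) ?? ""
            return body.contains("Success")
        } catch {
            return false
        }
    }
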
Replies: 2 · Boosts: 1 · Views: 930 · Created: Oct ’25
AVSpeechSynthesisVoice ignores user-selected voices in iOS 26 (Regression)
We've identified a regression in iOS 26.0 and 26.1 Beta 4 where AVSpeechSynthesisVoice(language:) no longer respects user-selected voices from Accessibility settings.

Issue: When users select a specific voice in Settings → Accessibility → Spoken Content → Voices, calling AVSpeechSynthesisVoice(language:) returns the system default voice instead of the user's selection. This worked correctly in iOS 18.6.2.

Particularly affects:
Third-party speech synthesis voices (CereProc, Grammatek, etc.)
Apps relying on automatic voice selection based on user preferences

Example:
    // User selected CereProc Heather for en-GB in Accessibility settings
    let voice = AVSpeechSynthesisVoice(language: "en-GB")
    print(voice?.name) // iOS 18.6.2: "HEATHER", iOS 26: "Daniel" (system default)

Interesting observation: The new Accessibility Reader feature in iOS 26 correctly uses the user-selected voice, but Tap to Speak and the API both ignore the setting.

Tested methods:
AVSpeechSynthesisVoice(language:)
AVSpeechUtterance auto-selection
Reflection for new APIs
All return the system default voice, not the user's preference.

Filed: FB20271264

Has anyone else encountered this? Any known workarounds to programmatically access the user's preferred voice selection?
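The closest thing to a workaround we've found is to stop relying on AVSpeechSynthesisVoice(language:) and surface the installed voices in-app, letting the user pick there. A sketch (the stored identifier and the en-GB fallback are app-specific assumptions):

    import AVFoundation

    private let synthesizer = AVSpeechSynthesizer()

    // Installed voices for a language, best quality first (premium > enhanced > default).
    func installedVoices(for language: String) -> [AVSpeechSynthesisVoice] {
        AVSpeechSynthesisVoice.speechVoices()
            .filter { $0.language == language }
            .sorted { $0.quality.rawValue > $1.quality.rawValue }
    }

    func speak(_ text: String, preferredVoiceIdentifier: String?) {
        let utterance = AVSpeechUtterance(string: text)
        if let id = preferredVoiceIdentifier,
           let voice = AVSpeechSynthesisVoice(identifier: id) {
            utterance.voice = voice                                // the voice picked inside our app
        } else {
            utterance.voice = installedVoices(for: "en-GB").first  // best installed voice as fallback
        }
        synthesizer.speak(utterance)
    }
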
Replies: 4 · Boosts: 1 · Views: 321 · Created: Oct ’25