
Issue with multiple touches when "Defer System Gestures" is enabled on iOS
I'm developing a rhythm game for iOS which has four buttons spanning the width of the screen in portrait. I noticed that my testers were having some missed inputs on the buttons at the left and right edges because iOS, by default, tries to ignore accidental touches near the edges of the screen. So I enabled "Defer System Gestures" on the left and right edges, but then quickly started to notice a new, very specific issue.

Description of the issue

If finger #1 is touching and holding anywhere in the middle of the screen, and finger #2 touches the far right or left edge of the screen just below the vertical position of finger #1, those edge touches are inconsistently not recognized. If finger #1 is not present, the issue does not occur. If finger #2 is above or well below finger #1, the issue also does not occur. In effect, a dead zone is created on the right and left edges of the screen just below the vertical position of the first touch. Here is a rough representative example of where touches #1 and #2 need to be for the issue to manifest, in case the text above is not clear:

|            |
|            |
|            |
|            |
|    1       |
|           2|
|            |

This issue is causing major usability problems with my game, because it results in what the user perceives as sporadic and inconsistent response whenever the game calls for two notes to be played at the same time.

Steps to recreate the issue

Here are the steps if you want to recreate the problem yourself using the "Create New Gesture" pane in AssistiveTouch. (The problem is not specific to the Settings app; it is a system-wide issue. However, this pane defers system gestures and shows where touches are being read, so it is a great place to demonstrate it.)

1. Go to Settings > Accessibility > Touch > AssistiveTouch > Create New Gesture...
2. With one finger, touch the middle of the screen and hold it through step 3.
3. With a second finger, tap 4 times along the right (or left) edge of the screen in the following places: (a) well above the vertical position of the first touch, (b) just above the vertical position of the first touch, (c) just below the vertical position of the first touch, and (d) well below the vertical position of the first touch.
4. Notice how, more than half the time, touch (c) does not register.

I have found that the problem is more replicable when the first touch is on the lower half of the screen, but I have been able to replicate it when the finger is higher as well, just not as consistently. Here are the four positions described in the steps above:

Position a: both touches register
|            |
|            |
|            |
|           2|
|    1       |
|            |
|            |

Position b: both touches usually register
|            |
|            |
|            |
|            |
|    1      2|
|            |
|            |

Position c: only touch 1 registers
|            |
|            |
|            |
|            |
|    1       |
|           2|
|            |

Position d: both touches register
|            |
|            |
|            |
|            |
|    1       |
|            |
|           2|

Is there anything I can do to resolve this behavior? My app requires gesture deferral to be on for the experience users expect, and this bug is causing other issues for my testers that really need to be resolved before I can confidently release the game.
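For reference, here is a minimal sketch of the deferral setup described above, assuming a plain UIKit view controller (game engines usually expose an equivalent setting on their root view controller); the class name is just illustrative:

```swift
import UIKit

class GameViewController: UIViewController {

    // Defer system gestures on the left and right screen edges so edge touches
    // are delivered to the game instead of being swallowed as system gestures.
    override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
        [.left, .right]
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Ask UIKit to re-query preferredScreenEdgesDeferringSystemGestures.
        setNeedsUpdateOfScreenEdgesDeferringSystemGestures()
    }
}
```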
Replies: 2 · Boosts: 2 · Views: 1.3k · Last activity: 3w ago
New Virtualization features in macOS Tahoe
I'm pleased to share some significant updates that have recently been released for our Hypervisor and Virtualization frameworks. We've focused on enhancing efficiency, expanding capabilities, and addressing common developer needs. I believe these will be valuable for many of you. Here's a look at what's new:

Hypervisor Updates

We've introduced support for configuring the intermediate physical address (IPA) memory granularity of a VM. This allows for more granular memory mappings, enabling granularity sizes down to 4KB. This is particularly useful for certain specialized device drivers requiring finer memory control.

Virtualization Framework Updates

More Efficient VM Image Storage with ASIF: We've integrated support for the Apple Sparse Image Format (ASIF). This results in a smaller disk footprint and optimized transfer for VM disk images when using VZDiskImageStorageDeviceAttachment, improving storage efficiency.

Custom Network Topologies with vmnet: We've added support for vmnet custom network topologies. This enables more flexible VM-to-VM communication based on logical networks with customized configurations, useful for complex testing or development environments. See VZVmnetNetworkDeviceAttachment to get started.

Simplified VM Queue Discovery: It's now easier to discover the queue a VM runs on, thanks to a new property on VZVirtualMachine. This should aid in development and debugging when interacting directly with the VM's queue.

These are some of the key highlights of the first beta, and I'm looking forward to seeing how these improvements will be utilized. I encourage you to explore the documentation for full details on these features.
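To make the ASIF storage point concrete, here is a minimal sketch of attaching a disk image to a virtio block device with the existing Virtualization API; the diskURL value is assumed to point at an image you have already created (ASIF or raw):

```swift
import Virtualization

// Build a block device configuration backed by a disk image on disk.
// With ASIF support, the image at diskURL can be an Apple Sparse Image Format file.
func makeBlockDevice(diskURL: URL) throws -> VZVirtioBlockDeviceConfiguration {
    let attachment = try VZDiskImageStorageDeviceAttachment(url: diskURL, readOnly: false)
    return VZVirtioBlockDeviceConfiguration(attachment: attachment)
}
```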
Replies: 1 · Boosts: 3 · Views: 441 · Last activity: 2w ago
Foundation Model Framework
Greetings! I was trying to get a response from LanguageModelSession, but I just keep getting the following:

Error getting response: Model Catalog error: Error Domain=com.apple.UnifiedAssetFramework Code=5000 "There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides" UserInfo={NSLocalizedFailureReason=There are no underlying assets (neither atomic instance nor asset roots) for consistency token for asset set com.apple.MobileAsset.UAF.FM.Overrides}

This occurs both on macOS 15.5 running the new Xcode beta with an iOS 26 simulator, and also on macOS 26 with the Xcode beta. The simulators are both iPhone 16 Pros. I was wondering if anyone had any advice?
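One thing worth ruling out before digging into asset errors is whether the on-device model reports itself as available at all. A minimal pre-flight sketch, assuming the FoundationModels availability API as it appears in the iOS 26 SDK:

```swift
import FoundationModels

// Check the system language model before creating a LanguageModelSession.
// If the model reports unavailable (device not eligible, Apple Intelligence off,
// assets still downloading), session requests are expected to fail.
func preflightModel() {
    let model = SystemLanguageModel.default
    if case .available = model.availability {
        print("Model available, proceeding to create a LanguageModelSession.")
    } else {
        print("Model not available yet: \(model.availability)")
    }
}
```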
Replies: 16 · Boosts: 3 · Views: 1.5k · Last activity: 6d ago
My app doesn't respond on iPhone Air iOS 26.1.
My app doesn't respond on iPhone Air iOS 26.1. After startup, my app shows the main view with a tab bar controller containing 4 navigation controllers. However, when a second-level view controller is pushed onto any navigation controller, the UI freezes and becomes unresponsive. The iPhone simulator running iOS 26.1 exhibits the same problem. The debug profile shows CPU usage at 100%. However, other devices and simulators do not have this problem.
Replies: 6 · Boosts: 3 · Views: 350 · Last activity: 3w ago
something that's probably easy to fix but it's driving me crazy
Hi, Apple developer community! I have a problem that's driving me crazy:

Multiple commands produce '/Users/giulia/Library/Developer/Xcode/DerivedData/ScheduledStudy-dcmmhcdncitvfkdditbjoipfalqg/Build/Products/Debug-iphonesimulator/singintestwidgetExtension.appex/Info.plist'.

Can you help me fix it? I know it's probably a silly mistake, but I've been trying to fix it for days. There aren't two Info.plist files, which is why I don't quite understand it. If you could help me, you'd be doing me a favor. Thanks for your time. P.S. I'm sorry, but I'm just starting out with programming and I'm also quite young.
Replies: 6 · Boosts: 0 · Views: 140 · Last activity: 18h ago
Question: How to support landscape-only on iPad app after 'Support for all orientations will soon be required' warning
Dear Apple Customer Support, I’m developing a new Swift iPadOS app and I want the app to run in landscape only (portrait disabled). In Xcode, under Target > General > Deployment Info > Device Orientation, if I select only Landscape Left and Landscape Right, the app builds successfully, but during upload/validation I receive this message and the upload is blocked: “Update the Info.plist: Support for all orientations will soon be required.” Could you please advise what the correct/recommended way is to keep an iPad app locked to landscape only while complying with the current App Store upload requirements? Is there a specific Info.plist configuration (e.g., UISupportedInterfaceOrientations~ipad) or another setting that should be used? Thank you,
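For what it's worth, here is a sketch of the runtime side of a landscape-only lock in Swift. It doesn't by itself answer the new upload requirement (which refers to the orientations declared in the Info.plist), and the class names are illustrative:

```swift
import UIKit

// One common way to keep the UI landscape-only at runtime, independent of the
// orientations declared in the Info.plist: constrain orientations per window
// from the app delegate, and/or per view controller.
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication,
                     supportedInterfaceOrientationsFor window: UIWindow?) -> UIInterfaceOrientationMask {
        return .landscape
    }
}

class LandscapeOnlyViewController: UIViewController {
    override var supportedInterfaceOrientations: UIInterfaceOrientationMask {
        return .landscape
    }
}
```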
Replies: 3 · Boosts: 2 · Views: 214 · Last activity: 22h ago
Playgrounds app with the latest SDK (OS 26)
I am preparing for the Swift Student Challenge, but it seems like the iPad Playgrounds app still does not support the latest SDK: I can't use frameworks like FoundationModels, etc., directly in the Playgrounds app. My playground for this year requires the iPad environment, since it uses PencilKit, ARKit, etc., and I also want to use the latest tech plus the Liquid Glass UI. Right now I'm developing the project as a normal Xcode project, and I am wondering how I should submit it: as an Xcode playground, it must run on macOS; as a Swift Playgrounds project, the iPad version of Playgrounds does not support the latest APIs and can't compile it.
Replies: 0 · Boosts: 2 · Views: 43 · Last activity: 22h ago
PSVR2 controller button quirks
I have an open Feedback conversation with Apple on this topic, but I am curious if others have run into this, or want to try out my sample code in their setup.

There are two APIs for reading controller buttons, axes, and D-pads: GCPhysicalInputProfile and GCControllerLiveInput. There are inconsistencies in behaviour between the two of them. Apple recommends we use GCControllerLiveInput; however, there are some capabilities of these controllers that are only accessible through GCPhysicalInputProfile, as I'll discuss below.

- The PSVR2 R2/L2 buttons, a.k.a. triggers, have analogue force input values. These can only be accessed through GCPhysicalInputProfile.
- PSVR2 thumbstick direction values are read through "axes" on GCPhysicalInputProfile, but only through "dpads" on GCControllerLiveInput.
- On both GCPhysicalInputProfile and GCControllerLiveInput, all pressed events of all buttons are fired properly using the generic aliases (Trigger, Grip, Menu, Right Thumbstick, Left Thumbstick, Right Button A & B (Circle & Cross), Left Button A & B (Triangle and Square)). Apple reserves the system button as the equivalent of a home button for the OS.
- On GCPhysicalInputProfile, touch events are fired when the button is also pressed, but not for touches alone.
- On GCControllerLiveInput, touch events only work for the following buttons: Left Thumbstick, Right Thumbstick, Right Button A (Circle), and Right Button B (Cross). But the Right Button B touch event isn't labelled correctly; it fires as the Right Button A event.

I observed this inside ALVR, which uses a polling-based approach to event processing: https://github.com/alvr-org/alvr-visionos/blob/17b5968f9d894944b53e97134b39dfce0993302a/ALVRClient/WorldTracker.swift#L301

To see this in a very simple app, I used the Apple example TrackingAccessories application: https://developer.apple.com/documentation/ARKit/tracking-accessories-in-volumetric-windows

I've attached the code that replaces the AccessoryTrackingModel class. I added code that prints out what is touched/pressed; see the trackAllConnectedSpatialControllers method: https://github.com/svrc/TrackingAccessories
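As a minimal illustration of the profile-side reads mentioned above, here is a hedged polling sketch using GCPhysicalInputProfile only (the element name is a standard GameController constant; which elements are populated depends on the connected controller):

```swift
import GameController

// Poll a connected controller's physical input profile once.
// Per the observations above, the PSVR2 analogue trigger force and thumbstick
// axes only surface through GCPhysicalInputProfile.
func pollPhysicalProfile(of controller: GCController) {
    let profile = controller.physicalInputProfile

    // Analogue trigger force (R2).
    if let r2 = profile.buttons[GCInputRightTrigger] {
        print("R2 pressed: \(r2.isPressed), analogue value: \(r2.value)")
    }

    // Dump every axis the profile exposes (thumbstick directions show up here).
    for (name, axis) in profile.axes {
        print("axis \(name): \(axis.value)")
    }
}
```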
Replies: 6 · Boosts: 0 · Views: 788 · Last activity: 3w ago
How to regenerate passkey for appstoreconnect.apple.com
When logging into App Store Connect I get the option to use a password or a passkey. Password works fine. However, the passkey option always results in an error. This is probably because I don't actually have any passkeys. So I think what is happening is that Apple thinks I have a passkey, which is why it's offering that as an option. I can't see any option to create a passkey, or to regenerate one.

I'm not sure how I've ended up in this state. I have a vague memory that sometime years ago, when passkeys were first introduced on App Store Connect, I tried to create one, but because it wouldn't let me store it in BitWarden, I backed out.

It's not a major problem, because I can still log in using the password. But I'm worried in case passkeys become the only option at some point in the future. Does anyone know of any way I can reset or regenerate my passkey for App Store Connect?
Replies: 2 · Boosts: 1 · Views: 493 · Last activity: 2w ago
How to properly use PermissionKit to ask permission
For testing permission responses we can use the sandbox account. However, when testing permission requests using the AskCenter API, none of the ask APIs work for me in Xcode 26.2 RC and iOS 26.2 RC. For SignificantAppUpdateTopic, I get errors like "The user is in a region that does not support this type of ask" in the console log, but I've already set my billing address to Texas. For CommunicationTopic, the console shows several XPC-related errors, and I'm not sure which of them are relevant. Both of them show an alert saying "Can't ask. An unknown error occurred." Can someone help guide us on how to test the request flow? Thanks
Replies: 8 · Boosts: 2 · Views: 625 · Last activity: 1d ago
DeviceActivity Report Extension cannot pass App Store Connect validation without becoming un-installable on device
I'm running into a contradictory requirement involving the DeviceActivity Report extension (com.apple.deviceactivityui.report-extension) that makes it impossible to both: upload the app to App Store Connect, and install the app on a physical device. This creates a complete catch-22.

📌 Overview

My extension:
Path: Runner.app/PlugIns/LoADeviceActivityReport.appex
Extension point: com.apple.deviceactivityui.report-extension

Implementation (SwiftUI):

import SwiftUI
import DeviceActivity

@main
struct LoADeviceActivityReport: DeviceActivityReportExtension {
    var body: some DeviceActivityReportScene {
        // ...
    }
}

This is the standard SwiftUI @main DeviceActivityReportExtension template.

🟥 Side A — iOS runtime behavior (device installer)

If I add either of these keys to the extension's Info.plist:
NSExtensionPrincipalClass
NSExtensionMainStoryboard

then the app cannot be installed on a real iPhone/iPad. The device installer fails with:

Error 3002 AppexBundleContainsClassOrStoryboard
NSExtensionPrincipalClass and NSExtensionMainStoryboard are not allowed for extension point com.apple.deviceactivityui.report-extension.

To make the app install and run, I must remove both keys completely. This leaves the extension Info.plist like:

NSExtension
    NSExtensionPointIdentifier
        com.apple.deviceactivityui.report-extension

With this, the app installs and runs correctly.

🟥 Side B — App Store Connect upload validator

However, when I upload the IPA with the runtime-correct Info.plist, App Store Connect rejects it:

State: STATE_ERROR.VALIDATION_ERROR (HTTP 409)
Missing Info.plist values. No values for NSExtensionMainStoryboard or NSExtensionPrincipalClass found in extension Info.plist for Runner.app/PlugIns/LoADeviceActivityReport.appex.

So ASC requires that at least one of those keys be present.

💥 The catch-22

If I add PrincipalClass / MainStoryboard:
✔ App Store Connect validation passes
❌ But the app can NOT be installed on any device (Error 3002)

If I remove PrincipalClass / MainStoryboard:
✔ The app installs and runs correctly
❌ ASC rejects the upload with "Missing Info.plist values"

There is currently NO Info.plist configuration that satisfies both:
Runtime: "NSExtensionPrincipalClass and NSExtensionMainStoryboard are not allowed."
App Store Connect: "You must include NSExtensionPrincipalClass or NSExtensionMainStoryboard."

📌 Expected behavior

For a SwiftUI @main DeviceActivityReportExtension, the documentation and examples suggest the correct configuration is:

NSExtensionPointIdentifier
    com.apple.deviceactivityui.report-extension

with no principal class or storyboard at all. If that is correct for runtime, ASC seems to need updated validation rules for this extension type.

❓ My Questions

What is the officially correct Info.plist configuration for a SwiftUI DeviceActivityReportExtension? Should a principal class / storyboard not be required for this extension type?
Is this a known issue with App Store Connect validation?
Is there currently a workaround that allows installation on device and successful App Store Connect upload, without violating runtime restrictions?
Replies: 3 · Boosts: 2 · Views: 187 · Last activity: 2w ago
Matter commissioning issue with Matter support extension
My team has developed an app with a Matter commissioner feature (for our own ecosystem) using the Matter framework together with the MatterSupport extension. Recently, we've noticed that commissioning Matter devices with the MatterSupport extension has become very unstable. Occasionally, HomeUIService stops the flow after commissioning to the first fabric successfully, displaying the error: "Failed to perform Matter device setup: Error Domain=HMErrorDomain Code=2." (Normally, it should send an open-commissioning-window command to the device and then add the device to the second fabric.)

The issue was never seen until the last few weeks, and there have been no code changes in the app. We suspect that some data fails to download from the iCloud or Apple account, causing this problem. For evaluation, we tried removing the MatterSupport extension and running the Matter framework directly in developer mode; with that, the issue disappears and commissioning works without any problems.
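For context, here is a minimal sketch of the MatterSupport entry point involved, assuming the MatterAddDeviceRequest API; the ecosystem and home names are placeholders, and this is only the app-side trigger for the flow that is failing after the first fabric:

```swift
import Foundation
import MatterSupport

// Kick off commissioning through the MatterSupport flow for our own ecosystem.
// The system handles first-fabric commissioning, then the request extension
// is expected to open the commissioning window and add the device to our fabric.
func commissionDevice() async {
    let home = MatterAddDeviceRequest.Home(uuid: UUID(), name: "Default Home")
    let topology = MatterAddDeviceRequest.Topology(ecosystemName: "MyEcosystem", homes: [home])
    let request = MatterAddDeviceRequest(topology: topology)
    do {
        try await request.perform()
        print("MatterSupport flow completed")
    } catch {
        print("MatterSupport flow failed: \(error)")
    }
}
```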
Replies: 18 · Boosts: 0 · Views: 726 · Last activity: 2w ago
Metal debug log in Swift Package
My goal is to print a debug message from a shader. I'm following the guide, which says to set the -fmetal-enable-logging Metal compiler flag and the following environment variables:

MTL_LOG_LEVEL=MTLLogLevelDebug
MTL_LOG_BUFFER_SIZE=2048
MTL_LOG_TO_STDERR=1

However, there's an issue with the guide: it only covers Xcode project setup, while I'm working on a Swift package. It has a Metal-only target that's included in the main target like this:

targets: [
    // A separate target for shaders.
    .target(
        name: "MetalShaders",
        resources: [
            .process("Metal")
        ],
        plugins: [
            // https://github.com/schwa/MetalCompilerPlugin
            .plugin(name: "MetalCompilerPlugin", package: "MetalCompilerPlugin")
        ]
    ),
    // Main target
    .target(
        name: "MegApp",
        dependencies: ["MetalShaders"]
    ),
    .testTarget(
        name: "MegAppTests",
        dependencies: [
            "MegApp",
            "MetalShaders",
        ]
    )
]

To apply the compiler flag I use MetalCompilerPlugin, which emits debug.metallib; it also allows defining a DEBUG macro for the shaders. This code compiles:

#ifdef DEBUG
logger.log_error("Hello There!");
os_log_default.log_debug("Hello thread: %d", gid); // this proves that the code executes
result.flag = true;
#endif

The environment is set via the .xctestplan and validated with ProcessInfo. However, nothing is printed to the Xcode console or to the Console app.

In an attempt to fix it, I'm trying to set up an MTLLogState; however, makeLogState(descriptor:) fails with an error:

if #available(iOS 18.0, *) {
    let logDescriptor = MTLLogStateDescriptor()
    logDescriptor.level = .debug
    logDescriptor.bufferSize = 2048
    // Error Domain=MTLLogStateErrorDomain Code=2 "Cannot create residency set for MTLLogState: (null)"
    // UserInfo={NSLocalizedDescription=Cannot create residency set for MTLLogState: (null)}
    let logState = try! device.makeLogState(descriptor: logDescriptor)
    commandBufferDescriptor.logState = logState
}

Some LLMs suggested that this is connected with the Simulator, and indeed, I run the tests on the Simulator. However, the tests don't want to run on my iPhone... I found a workaround by running them on My Mac (Mac Catalyst). Surprisingly, the log descriptor works there, even without MTLLogState. But the Simulator behaviour seems like a bug...
Replies: 1 · Boosts: 1 · Views: 364 · Last activity: 2d ago
Extend Measurement to support inverse units
There are many units that are the inverse of standard units, e.g. wave period vs. Hz, pace vs. speed, siemens vs. ohms, ... Dimension can be subclassed to create the custom units, but how do I extend Measurement.converted(to:) to handle them? I was looking at subclassing UnitConverter into something like UnitConverterInverse. Thoughts?
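Here is one way that seems to work, sketched under the assumption that an inverse relationship (base = coefficient / value) is what's wanted; UnitConverterInverse and UnitWaveFrequency are made-up names for illustration:

```swift
import Foundation

// An "inverse" converter: value_in_base_units = coefficient / value (and back again).
final class UnitConverterInverse: UnitConverter {
    let coefficient: Double

    init(coefficient: Double) {
        self.coefficient = coefficient
        super.init()
    }

    override func baseUnitValue(fromValue value: Double) -> Double {
        coefficient / value
    }

    override func value(fromBaseUnitValue baseUnitValue: Double) -> Double {
        coefficient / baseUnitValue
    }
}

// One dimension covering both frequency and wave period, with hertz as the base unit.
final class UnitWaveFrequency: Dimension {
    static let hertz = UnitWaveFrequency(symbol: "Hz", converter: UnitConverterLinear(coefficient: 1))
    static let periodSeconds = UnitWaveFrequency(symbol: "s", converter: UnitConverterInverse(coefficient: 1))

    override class func baseUnit() -> UnitWaveFrequency { hertz }
}

// Measurement.converted(to:) then goes through the base unit automatically:
let swellPeriod = Measurement(value: 0.5, unit: UnitWaveFrequency.periodSeconds)
print(swellPeriod.converted(to: .hertz)) // 2.0 Hz
```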
Replies: 1 · Boosts: 1 · Views: 153 · Last activity: 2d ago
SpriteKit framerate drop on iOS 26.0
Hello, I have noticed a performance drop in SpriteKit-based projects running on iOS 26.0 (23A341). Below is a SpriteKit scene used to test framerate on different devices:

import SpriteKit
import SwiftUI

class BareboneScene: SKScene {
    override func didMove(to view: SKView) {
        size = view.bounds.size
        anchorPoint = CGPoint(x: 0.5, y: 0.5)
        backgroundColor = .darkGray

        let roundedSquare = SKShapeNode(rectOf: CGSize(width: 150, height: 75), cornerRadius: 12)
        roundedSquare.fillColor = .systemRed
        roundedSquare.strokeColor = .black
        roundedSquare.lineWidth = 3
        addChild(roundedSquare)

        let action = SKAction.rotate(byAngle: .pi, duration: 1)
        roundedSquare.run(.repeatForever(action))
    }
}

struct BareboneSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}

#Preview {
    BareboneSceneView()
}

The scene is very simple, yet the framerate drops to ~40 fps as shown by the Metal HUD. Tested on:

iPhone 13, iOS 26.0: framerate drops to 40 fps. Sometimes it runs at near 60 fps, but if the screen is touched repeatedly, the framerate drops to 40-50 fps again.
iPhone 11 Pro, iOS 26.0: ~40 fps.
iPad 9th Gen, iOS 18.6.2: 60 fps, no issues.

See screenshots attached. These numbers were observed by me and members of our beloved SpriteKit Discord server. Thank you for your attention.
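As a small diagnostic variation (not a confirmed fix for the iOS 26 regression), SpriteView also accepts a preferredFramesPerSecond argument; pinning it explicitly can help rule out frame-pacing defaults as a variable:

```swift
import SpriteKit
import SwiftUI

// Same scene as above, but with the frame rate requested explicitly.
struct PinnedFramerateSceneView: View {
    var body: some View {
        SpriteView(
            scene: BareboneScene(),
            preferredFramesPerSecond: 60,
            debugOptions: [.showsFPS]
        )
        .ignoresSafeArea()
    }
}
```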
Replies: 12 · Boosts: 6 · Views: 2.4k · Last activity: 1w ago
Behavior of BGContinuedProcessingTask on Failure
Hi there,

First, thanks for all the work on BGContinuedProcessingTask! It looks really promising. I have a question / issue around the behavior when a BGContinuedProcessingTask expires.

Here is my setup:
- I have an app that's responsible for uploading large files in the field (i.e. Wi-Fi is not expected).
- A given file can easily fail to upload due to network conditions.
- I'm using multipart upload, though, so I can retry a file and pick up where it left off.
- I use one taskIdentifier per file, and when the file fails, I can retry the task and have it continue where it left off. (I am reusing the taskIdentifier here for retries; let me know if I shouldn't be doing that.)

Here is the behavior I am seeing:
- I start an upload; it seems to be uploading normally.
- I turn on airplane mode to simulate expiration of the task.
- The task fails as expected after ~30 seconds, and I see the failure on my home screen. I have callbacks in the task to put my app in the proper state on expiration / failure.
- I turn airplane mode back off and retry the task. The way I do this is I do NOT re-register; I simply re-submit the task with the same taskIdentifier.

What I would have expected is that the failed task is REPLACED with the new task and new progress. Instead, what I see is TWO continued background processing tasks: one in the failure state and one in progress.

My question is: how can I make retries reuse the same task notification item? Or, if that's not possible, how do I programmatically clear the task failure? I've tried cancelTask but that doesn't seem to clear it.
Replies: 5 · Boosts: 1 · Views: 295 · Last activity: 4h ago
[CRITICAL] Metal RHI Memory Leak - Resource exhaustion vulnerability (CWE-400) - Bug Report
[CRITICAL] Metal API Memory Leak - Heap Memory Never Released to OS (CWE-400)

Security Classification

This issue constitutes a resource exhaustion vulnerability (CWE-400):

Type:         Uncontrolled Resource Consumption
CWE:          CWE-400
Vector:       Local (any Metal application)
Impact:       System instability, denial of service
User Control: None - no mitigation available
Recovery:     Requires application restart

Summary

Metal heap allocations are never released back to macOS, even when the memory is entirely unused. This causes continuous, unbounded memory growth until system instability or crash. The issue affects any application using Metal API heap allocation.

This was discovered in Unreal Engine 5, but it reproduces in a completely blank UE5 project with zero application code - confirming this is Metal framework behavior, not application-level.

Environment

OS: macOS Tahoe 26.2
Hardware: Apple Silicon M4 Max (also reproduced on M1, M2, M3)
API: Metal

Reproduction Steps

1. Run any Metal application that allocates and deallocates GPU buffers via Metal heaps
2. Open Activity Monitor and observe the application's memory usage
3. Let the application run idle (no user interaction required)
4. Observe memory growing continuously at ~1-2 MB per second
5. Memory never plateaus or stabilizes
6. Eventually the system becomes unstable

For testing: Any Unreal Engine 5.4+ project on macOS will reproduce this. Even a blank project with no gameplay code exhibits the leak. (Tested on UE 5.7.1)

Observed Behavior

Memory Analysis

Using Unreal's memreport -full command, two reports taken 86 seconds apart:

Metric              Report 1 (183s)   Report 2 (269s)   Delta
Process Physical    4373.64 MB        4463.39 MB        +89.75 MB
Metal Heap Buffer   7168 MB           8192 MB           +1024 MB
Unused Heap         3453 MB           4477 MB           +1024 MB
Object Count        73,840            73,840            0 (no change)

Key Finding

The Metal heap grew by exactly 1 GB while "Unused Heap" also grew by 1 GB. This demonstrates:
- Metal is allocating new heap blocks in ~1 GB increments
- Previously allocated heap memory becomes "unused" but is never released
- The unused memory accumulates indefinitely
- No application-level objects are leaking (the count remains constant)

Memory Growth Pattern

- Continuous growth while idle (no user interaction)
- Growth rate: approximately 1-2 MB per second
- No plateau or stabilization occurs
- Metal allocates new 1 GB heap blocks rather than reusing freed space
- Eventually leads to system instability and crash

What is NOT Causing This

We verified the following are NOT the source:
- Application objects - object count remains constant
- Application code - a blank project with no code reproduces the issue
- Texture streaming - disabling texture streaming had no effect
- CPU garbage collection - running GC has no effect (this is GPU memory)

Mitigations Attempted (None Worked)

setPurgeableState
Setting resources to purgeable state before release:
[buffer setPurgeableState:MTLPurgeableStateEmpty];
Result: Metal ignores this hint and does not reclaim heap memory.

Avoiding Heap Pooling
Forcing individual buffer allocations instead of heap-based pooling.
Result: Leak persists - Metal still manages the underlying allocations.

Aggressive Buffer Compaction
Attempting to compact/defragment buffers within heaps every frame.
Result: Only moves data between existing heaps. Does NOT release heaps back to the OS.

Reducing Pool Sizes
Minimizing all buffer pool sizes to force more frequent reuse.
Result: Slightly slows the leak rate but does not stop it.

Root Cause Analysis

How Metal Heap Allocation Appears to Work
1. Metal allocates GPU heap blocks in large chunks (~1 GB observed)
2. The application requests buffers from these heaps
3. When the application releases buffers, the memory becomes "unused" within the heap
4. Metal does NOT release heap blocks back to macOS, even when entirely unused
5. When fragmentation prevents reuse, Metal allocates new heap blocks
6. Result: continuous memory growth with no upper bound

The Core Problem

There appears to be no Metal API to force heap memory release. The only way to reclaim this memory is to destroy the Metal device entirely, which requires restarting the application.

Expected Behavior

Metal should:
- Release unused heaps - when a heap block is entirely unused, release it back to macOS
- Respect purgeable hints - honor setPurgeableState calls from applications
- Compact allocations - defragment heap allocations to reduce fragmentation
- Provide control APIs - allow applications to request heap compaction or release
- Enforce limits - have a configurable maximum heap memory consumption

Security Implications

- Local denial of service - any Metal application can exhaust system memory, causing instability affecting all running applications
- Memory pressure attack - forces other applications to swap to disk, degrading system-wide performance
- No upper bound - memory consumption continues until system failure
- Unmitigable - end users have no way to prevent or limit the leak
- Affects all Metal apps - any application using Metal heaps is potentially affected

Impact

- Applications become unstable after extended use
- System-wide performance degrades as memory pressure increases
- Users must periodically restart applications
- Developers cannot work around this at the application level
- Long-running applications (games, creative tools, servers) are particularly affected

Request

- Investigate Metal heap memory management behavior
- Implement heap release when blocks become entirely unused
- Honor setPurgeableState hints from applications
- Consider providing an API for applications to request heap compaction
- Document any intended behavior or workarounds

Additional Notes

This issue has been observed across multiple Unreal Engine versions (5.4, 5.7) and multiple Apple Silicon generations (M1 through M4). The behavior is consistent and reproducible. The Unreal Engine team has implemented various CVars to attempt mitigation (rhi.Metal.HeapBufferBytesToCompact, rhi.Metal.ResourcePurgeInPool, etc.) but none successfully address the issue, because the root cause is at the Metal framework level.

Tested: January 2026
Platform: macOS Tahoe 26.2, Apple Silicon (M1/M2/M3/M4)
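For anyone who wants to watch the growth outside of Unreal's memreport, here is a small hedged diagnostic sketch in Swift that samples MTLDevice.currentAllocatedSize over time; it only observes the footprint, it does not work around the leak:

```swift
import Foundation
import Metal

// Periodically log Metal's allocation footprint for the default device.
// Call this from an app with a running main run loop; useful for confirming
// whether the allocated size keeps climbing while the app sits idle.
func monitorMetalFootprint(every interval: TimeInterval = 10) {
    guard let device = MTLCreateSystemDefaultDevice() else {
        print("No Metal device available")
        return
    }
    Timer.scheduledTimer(withTimeInterval: interval, repeats: true) { _ in
        let megabytes = Double(device.currentAllocatedSize) / 1_048_576
        print(String(format: "Metal allocated: %.1f MB", megabytes))
    }
}
```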
Replies: 0 · Boosts: 2 · Views: 336 · Last activity: 1d ago
How to test SignificantChange Permission Ask
I'm developing for compliance with Texas law in the United States. Currently, I'm trying to test the flow where, when the app undergoes significant changes, a supervised user initiates a request. However, during actual testing, the app pops up an error message: "Can't Ask. An unknown error occurred." Additionally, I see the following error message in the Xcode console:

"Error Domain=AskToCore.ATMessageComposeValidationError Code=4 'The user is in a region that does not support this type of ask.' UserInfo={NSLocalizedFailureReason=The user must be in a supported region to use this feature., NSLocalizedRecoverySuggestion=Please ensure the user is in an eligible region., NSLocalizedDescription=The user is in a region that does not support this type of ask.}"

I am indeed not in the Texas region. I want to conduct full-process testing before the feature is released to the App Store. What should I do? Apart from sandbox testing (which, in fact, doesn't show any pop-ups either), how can I test under real-world conditions?
Replies: 2 · Boosts: 2 · Views: 224 · Last activity: 3w ago
Why can't my App Clip Card be opened?
Hello, I have an App Clip technical issue I'd like to ask about:

Our Xiaohongshu app (version 9.14, build 9140845, App ID 741292507) was submitted to the App Store for review on December 18th and approved on December 19th, starting a phased rollout. This version was the first to support App Clip functionality, and as of today the app has been fully rolled out, but I still cannot activate the Xiaohongshu App Clip Card using the NFC tag (built-in URL: https://xhslink.com/n/22UG9jlaOfv?sourceid=911).

I checked all the configurations of our project's App Clip target (Associated Domains: appclips:xhslink.com), the default experience in the App Store Connect backend (URL: https://appclip.apple.com/id?p=com.xingin.discover.Clip), and the advanced experience (URL: https://xhslink.com/n/Arxpo4IVY9X), and none showed any abnormalities. Furthermore, the default experience URL, after being written to an NFC tag, successfully launches the Xiaohongshu App Clip Card when the tag is tapped, and the Card can also be opened normally in Safari on the phone. However, our advanced experience URL, after being written to an NFC tag, does not launch the Card when tapped.

Therefore, my question is: could someone please help me check why the Xiaohongshu App Clip Card cannot be launched normally using the advanced experience URL? Thank you very much!

My expectation: the advanced experience URL configured on the App Store Connect platform, such as https://xhslink.com/n/Arxpo4IVY9X, should be recognized and validated by the iOS system when the phone is near the NFC tag and reads https://xhslink.com/n/22UG9jlaOfv?sourceid=911, ultimately triggering the App Clip Card configured in Xiaohongshu's advanced experience settings.

Additionally: we have already deployed a large number of NFC tags and other materials in many stores with the URL (https://xhslink.com/n/Arxpo4IVY9X) built in, and our 9.14 version is fully rolled out. However, the newly developed App Clip feature is not working, which is very urgent, and I hope Apple can help me find the cause of the problem and a way to fix it as soon as possible. Thank you very much!
Replies: 1 · Boosts: 1 · Views: 102 · Last activity: 2w ago
iOS 26 NavigationStack Title Rendering Issue
Is anyone else experiencing NavigationStack titles disappearing in iOS 26? Hey everyone, I just updated to iOS 26 and I'm running into a really frustrating issue with my app. Wondering if anyone else is seeing this or if I'm missing something obvious.

What's happening: My navigation titles are completely blank when the app first loads, but then magically appear when I scroll down. It's super weird and makes my app look broken at first glance.

My setup: I'm using NavigationStack with some pretty standard stuff:
- Navigation title with .navigationTitle()
- A toolbar with a settings button
- ScrollView content with a gradient background

Here's basically what I have:

NavigationStack {
    ScrollView {
        VStack(spacing: 24) {
            // My app content here - cards, etc.
            ForEach(myItems) { item in
                // Content cards
            }
        }
        .padding()
    }
    .background(
        LinearGradient(
            gradient: Gradient(colors: [
                Color.surfacePrimary,
                Color.surfacePrimary.opacity(0.95),
                Color.surfaceSecondary.opacity(0.3)
            ]),
            startPoint: .top,
            endPoint: .bottom
        )
    )
    .navigationTitle("My Title") // ← This doesn't show up initially!
    .toolbar {
        ToolbarItem(placement: .navigationBarTrailing) {
            Button("Settings") {
                // Settings action
            }
        }
    }
}

The weird part:
- Works perfectly fine on my iOS 18.6.2 device
- Completely broken on iOS 26 - blank navigation area
- As soon as I scroll, the title appears and stays there
- Happens on both the simulator and a physical device

What I've tried:
- Double-checking my code (it's identical to what worked before)
- Testing on multiple devices
- Different navigation title strings
- Removing and re-adding the toolbar

Has anyone else run into this? I'm thinking it might be an iOS 26 bug with NavigationStack, but I wanted to check if others are seeing it before filing a radar. It's affecting my main landing screens (Tennis, NFL, and Prediction Markets tabs) and honestly looks pretty bad when users first open the app and see blank headers.

Temporary fix I found: If anyone else hits this, I discovered that forcing inline titles works as a workaround:

.navigationTitle("My Title")
.navigationBarTitleDisplayMode(.inline) // This fixes it but kills large titles

Obviously not ideal since we lose the nice large title behavior, but at least the titles show up.

Questions:
- Is this happening to anyone else's iOS 26 apps?
- Any better workarounds that preserve large titles?
- Should I file this as a radar, or is Apple already aware?

Really hoping this gets fixed soon - having to choose between broken navigation and losing large titles is pretty frustrating. Thanks for any insights! Anyone else dealing with this NavigationStack nightmare in iOS 26? 😅
Replies: 4 · Boosts: 2 · Views: 286 · Last activity: 3d ago