I tested this on the app I work on, as well as others I use, and the notification message does not appear when Sleep mode is active.
iPhone: 13 mini
iOS: 18.4.1
The bundle identifier (com.utel.prod) is not listed in the app creation form in App Store Connect. I have attached a screenshot.
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Tags:
App Store
iOS
App Store Connect
Hello,
We are facing an issue on iOS 18.4.1. Recently, some of our end users who updated their devices to iOS 18.4.1 have experienced random 403 errors at runtime. Per our analysis, these errors are associated with a "CSRF token mismatch".
After a successful login, the user's CSRF token changes at runtime, which causes a cookie mismatch; the user then receives 403 errors and the session suddenly becomes invalid.
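For debugging on the client, this is a minimal sketch of the kind of cookie check we can add, assuming the requests go through URLSession's shared cookie storage (the URL and the CSRF cookie-name match are placeholders):

import Foundation

// Dump the cookies URLSession would attach for a given URL, so we can see
// when the CSRF cookie value changes between requests.
func logCSRFCookies(for urlString: String) {
    guard let url = URL(string: urlString) else { return }
    let cookies = HTTPCookieStorage.shared.cookies(for: url) ?? []
    for cookie in cookies where cookie.name.uppercased().contains("CSRF") {
        print("cookie \(cookie.name) = \(cookie.value), expires: \(String(describing: cookie.expiresDate))")
    }
}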
Please let me know if anyone else is facing the same issue on iOS 18.4.1, and whether there is any workaround.
Thanks.
Since last week, while testing our in-app purchases, I've had a lot of trouble with the sandbox account and StoreKit. I often had to wait a minute or longer for a response from the App Store, or the request timed out before that. This occurs within the app, but also when managing subscriptions under Developer options -> Sandbox account -> Subscriptions.
Here is one of the error responses:
<SKPaymentQueue: 0x106ba06f0>: Payment completed with error: Error Domain=ASDErrorDomain Code=530 "(null)" UserInfo={client-environment-type=Sandbox, storefront-country-code=CHE, NSUnderlyingError=0x106922010 {Error Domain=AMSErrorDomain Code=100 "Authentication Failed The authentication failed."
The task I'm trying to accomplish is changing our third-party verification and subscription management server. Because of this, I sadly cannot use StoreKit 2 and the transaction manager within Xcode, which I enjoyed a lot in the past. Working with sandbox accounts like this is very frustrating, and I'm not confident about releasing our app with the new subscription management until I can test it properly.
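For context, the error above surfaces in the standard StoreKit 1 transaction observer path; a simplified sketch of that shape (not our full implementation; receipt handling for the third-party verification server is omitted):

import StoreKit

// Standard StoreKit 1 observer; this is roughly where the ASDErrorDomain
// errors above surface (simplified sketch).
final class PaymentObserver: NSObject, SKPaymentTransactionObserver {
    func paymentQueue(_ queue: SKPaymentQueue, updatedTransactions transactions: [SKPaymentTransaction]) {
        for transaction in transactions {
            switch transaction.transactionState {
            case .purchased, .restored:
                // Hand the receipt to the third-party verification server here.
                queue.finishTransaction(transaction)
            case .failed:
                if let error = transaction.error as NSError? {
                    print("Payment failed: \(error.domain) code \(error.code) \(error.userInfo)")
                }
                queue.finishTransaction(transaction)
            default:
                break
            }
        }
    }
}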
I've seen similar topics in this forum, so I want to ask: what might be the cause of these problems, and when can we expect a stable version again?
Hello,
I submitted a question earlier to clarify which components have accessibility APIs that trigger haptics for VoiceOver users: https://developer.apple.com/forums/thread/773182.
The question stems from a more direct question about specific components: do tab lists and disclosures natively include haptics, screen reader hints, or other states or properties that indicate to screen reader users where the component begins or ends?
In some web experiences there is screen reader hint text stating "end of..." or "entering" as a way to define the boundaries of these inline dialogs.
I asked about haptics in the prior thread because I don't recall a natively implemented version of this, apart from some haptic cues that I haven't experienced consistently, so I'm not sure whether that is an intended native Swift implementation or perhaps something custom.
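For reference, the kind of custom boundary cue I'm comparing against would be manual accessibility hints, roughly like this SwiftUI sketch (the wording and component choice are just examples):

import SwiftUI

// Manually marking the boundaries of a disclosure with accessibility hints,
// as a stand-in for whatever native cue (haptic or spoken) the system may provide.
struct DisclosureExample: View {
    @State private var isExpanded = false

    var body: some View {
        DisclosureGroup("Details", isExpanded: $isExpanded) {
            Text("Expanded content")
                .accessibilityHint("End of Details section")
        }
        .accessibilityHint(isExpanded ? "Entering Details section" : "Collapsed")
    }
}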
Topic:
Accessibility & Inclusion
SubTopic:
General
Tags:
iOS
Accessibility
Sound and Haptics
Core Haptics
The TabView page control element has a bug on iOS 16 when the TabView is configured as RTL with PageTabViewStyle.
Found iOS 16 Issues:
Page indicators display dots in reverse order (appears to treat layout as LTR while showing RTL)
Index selection is reversed - tapping indicators selects wrong pages
Using the page control directly to navigate eventually breaks the index binding
The underlying index counting logic conflicts with the visual presentation
iOS 18 Behavior:
Works as expected with correct dot order and index selection.
Xcode version:
Version 16.3 (16E140)
Conclusion:
Confirmed broken on iOS 16
Confirmed working on iOS 18
iOS 17 and earlier versions not yet tested
I opened a Feedback Assistant ticket quite a while ago, but there has been no answer. There's a code example and a video attached to it.
Anyone else had experience with this particular bug?
Here's the code:
import SwiftUI

public struct PagingView<Content: View>: View {

    // MARK: - Public Properties
    let pages: (Int) -> Content
    let numberOfPages: Int
    let pageMargin: CGFloat
    @Binding var currentPage: Int

    // MARK: - Object's Lifecycle
    public init(currentPage: Binding<Int>,
                pageMargin: CGFloat = 20,
                numberOfPages: Int,
                @ViewBuilder pages: @escaping (Int) -> Content) {
        self.pages = pages
        self.numberOfPages = numberOfPages
        self.pageMargin = pageMargin
        _currentPage = currentPage
    }

    // MARK: - View's Layout
    public var body: some View {
        TabView(selection: $currentPage) {
            ForEach(0..<numberOfPages, id: \.self) { index in
                pages(index)
                    .padding(.horizontal, pageMargin)
            }
        }
        .tabViewStyle(PageTabViewStyle(indexDisplayMode: .always))
        .ignoresSafeArea()
    }
}

// MARK: - Previews
struct ContentView: View {
    @State var currentIndex: Int = 0

    var body: some View {
        ZStack {
            Rectangle()
                .frame(height: 300)
                .foregroundStyle(Color.gray.opacity(0.2))
            PagingView(
                currentPage: $currentIndex.onChange({ index in
                    print("currentIndex: ", index)
                }),
                pageMargin: 20,
                numberOfPages: 10) { index in
                    ZStack {
                        Rectangle()
                            .frame(width: 200, height: 200)
                            .foregroundStyle(Color.gray.opacity(0.2))
                        Text("\(index)")
                            .foregroundStyle(.brown)
                            .background(Color.yellow)
                    }
                }
                .frame(height: 200)
        }
    }
}

#Preview("ContentView") {
    ContentView()
}

extension Binding {
    @MainActor
    func onChange(_ handler: @escaping (Value) -> Void) -> Binding<Value> {
        Binding(
            get: { self.wrappedValue },
            set: { newValue in
                self.wrappedValue = newValue
                handler(newValue)
            }
        )
    }
}
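A possible mitigation, assuming the problem is purely the reversed indexing on iOS 16, would be to mirror the selection binding when the layout direction is right-to-left. This is an untested sketch, not a confirmed fix:

import SwiftUI

// Untested sketch: mirror the page index when the layout is right-to-left,
// on the assumption that iOS 16 simply reverses the dots and the selection.
struct RTLMirroredPagingView<Content: View>: View {
    @Environment(\.layoutDirection) private var layoutDirection
    @Binding var currentPage: Int
    let numberOfPages: Int
    @ViewBuilder let pages: (Int) -> Content

    private var mirroredSelection: Binding<Int> {
        Binding(
            get: { layoutDirection == .rightToLeft ? numberOfPages - 1 - currentPage : currentPage },
            set: { currentPage = layoutDirection == .rightToLeft ? numberOfPages - 1 - $0 : $0 }
        )
    }

    var body: some View {
        TabView(selection: mirroredSelection) {
            ForEach(0..<numberOfPages, id: \.self) { index in
                pages(index)
            }
        }
        .tabViewStyle(.page(indexDisplayMode: .always))
    }
}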
Hello,
I have a question regarding the new battery usage interface in iOS 26.
As shown in the attached screenshot, the system displays battery usage per app as a percentage (%).
I’m curious about how this percentage is calculated.
From what I can tell, it doesn't seem to reflect the actual battery consumption per process, excluding the device's base standby power.
Rather, it appears to be calculated as a relative percentage of total battery drain, possibly including system idle power.
Is there any way to understand or estimate the actual battery usage per app, excluding the device’s inherent standby power consumption?
Thank you.
Our app connects to the headend to get an IdP login URL for each connection session, for example: "https://myvpn.ocwa.com/+CSCOE+/saml/sp/login?ctx=3627097090&acsamlcap=v2", and then opens an embedded web view to load the page. (Note: the value of ctx is a session token that changes every time.) Quite often the web view shows a blank white screen. After the user cancels the connection and reconnects, the web view loads the content successfully the second time.
The working case logs shows:
didReceiveAuthenticationChallenge is called
decidePolicyForNavigationAction is called twice
didReceiveAuthenticationChallenge is called
decidePolicyForNavigationResponse is called
didReceiveAuthenticationChallenge is called
But the failure case shows:
Filed to terminate process: Error Domain=com.apple.extensionKit.errorDomain Code=18 "(null)" UserInfo={NSUnderlyingError=0x11461c240 {Error Domain=RBSRequestErrorDomain Code=3 "No such process found" UserInfo={NSLocalizedFailureReason=No such process found}}}
didReceiveAuthenticationChallenge is called
decidePolicyForNavigationAction is called
decidePolicyForNavigationResponse is called
If we stop calling the evaluateJavaScript code to get the userAgent, the blank page happens less frequently. Below is the code we put in makeUIView():
func makeUIView(context: Context) -> WKWebView {
    if let url = URL(string: self.myUrl) {
        let request = URLRequest(url: url)
        webview.evaluateJavaScript("navigator.userAgent") { result, error in
            if let error = error {
                NSLog("evaluateJavaScript Error: \(error)")
            } else {
                let agent = result as! String + " " + self.myUserAgent
                webview.customUserAgent = agent
                webview.load(request)
            }
        }
    }
    return self.webview
}
I found some posts saying to call evaluateJavaScript only after the WKWebView has finished loading its content. However, that would prevent us from sending the userAgent info with the HTTP request, and I don't think it is the root cause, since the problem still occurs, just less frequently.
There is no problem loading the same web page on Windows desktops and Android devices. The problem only occurs on iOS and macOS, which both use the WKWebView APIs.
Is there a bug in WKWebView?
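For reference, one way to avoid the async evaluateJavaScript call entirely would be WKWebViewConfiguration's applicationNameForUserAgent, which sets the application-name portion of WebKit's default user agent before the first load. This is an untested sketch; myUrl and myUserAgent mirror the snippet above:

import WebKit

// Untested sketch: let WebKit compose the user agent itself, with our custom
// application-name string, so the load can start immediately.
func makeUIView(context: Context) -> WKWebView {
    let configuration = WKWebViewConfiguration()
    configuration.applicationNameForUserAgent = self.myUserAgent

    let webview = WKWebView(frame: .zero, configuration: configuration)
    if let url = URL(string: self.myUrl) {
        webview.load(URLRequest(url: url))
    }
    return webview
}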
Thanks,
Ying
I've discovered a bug in the Phone app on iOS related to how long verdicts are displayed.
When a call is identified by a third-party Caller ID app, long verdicts display correctly during the call (they auto-scroll) and in the call log (with an ellipsis at the end). However, on the call details screen, the text is strangely truncated - showing only the beginning of the string and the last word.
For testing, I used this verdict: "Musclemen grow on trees. They can tense their muscles and look good in a mirror. So what? I'm interested in practical strength that's going to help me run, jump, twist, punch."
I'll attach screenshots demonstrating the problem:
Hi,
Upon reviewing our app, we got feedback that our app icon within the Wallet app is not behaving as expected when the Home Screen is set to "light mode" only.
In that case, on the Home Screen, the app icon keeps its default color (e.g., red) regardless of the device's appearance setting (light or dark), which is expected.
However, in Apple Wallet, e.g., under the "From Apps from your device" section, app icons change their color (e.g., red in light mode, black in dark mode) when the iOS appearance is changed, which was reported as an app issue.
I've noticed that all apps in that section change color, not just ours, so it seems to me like either a bug in iOS or behavior that is not clearly defined in the App Store guidelines.
If there is an API we must use to cover that case, which one would that be?
Is this a bug that Apple should resolve, or is this the intended behaviour?
Title: Frequent SIGSEGV crashes in QuartzCore's copy_image (iOS 18.4)
We're experiencing numerous crashes with the following signature:
Exception Codes: fault addr: 0x00000000000000e0
Crashed Thread: 0
Thread 0
0 QuartzCore CA::Render::copy_image(CGImage*, CGColorSpace*, unsigned int, double, double) + 1972
1 QuartzCore CA::Render::copy_image(CGImage*, CGColorSpace*, unsigned int, double, double) + 1260
2 QuartzCore CA::Render::prepare_image(CGImage*, CGColorSpace*, unsigned int, double) + 24
3 QuartzCore CA::Layer::prepare_contents(CALayer*, CA::Transaction*) + 220
4 QuartzCore CA::Layer::prepare_commit(CA::Transaction*) + 284
5 QuartzCore CA::Context::commit_transaction(CA::Transaction*, double, double*) + 488
6 QuartzCore CA::Transaction::commit() + 644
7 UIKitCore ___34-[UIApplication _firstCommitBlock]_block_invoke_2 + 36
8 CoreFoundation ___CFRUNLOOP_IS_CALLING_OUT_TO_A_BLOCK__ + 28
9 CoreFoundation ___CFRunLoopDoBlocks + 352
10 CoreFoundation ___CFRunLoopRun + 868
11 CoreFoundation _CFRunLoopRunSpecific + 572
12 GraphicsServices _GSEventRunModal + 168
13 UIKitCore -[UIApplication _run] + 816
14 UIKitCore _UIApplicationMain + 336
15 kugou _main + 132
16 dyld __dyld_process_info_create + 33284
Observations:
1. Crashes consistently occur in Core Animation's image processing pipeline
2. 100% of occurrences are on iOS 18.4 devices
3. The crash signature suggests a memory access violation during image copy operations
4. Not tied to any specific device model
Questions for Apple:
1. Is this crash pattern recognized as a known issue in iOS 18.4?
2. Are there specific conditions that could trigger SEGV_ACCERR in CA::Render::copy_image?
3. Could this be related to changes in color space handling or image format requirements?
4. Any recommended workarounds while waiting for a system update?
I'm currently finding it impossible to get a text filtering extension to be invoked when there's an incoming text message.
There isn't a problem with the app/extension because this is the same app and code that is already developed, tested, and unchanged since I last observed it working.
I know that if there's any history of the incoming number being "known", the extension won't get invoked, and previously I found this no hindrance to testing, provided that:
the incoming number isn't in contacts
there are no outgoing messages to that number
there are no outgoing phone calls to the number.
This always used to work in the past, but not anymore.
However, I've ensured the incoming text's number isn't in Contacts; in fact, I've deleted all the contacts.
I've deleted the entire phone history, incoming and outgoing, and I've also searched Messages and made sure there are no interactions with that number.
There's logging in the extension, so I can see it's being invoked when it's turned on from the Settings app, but it's not getting invoked when there's a message.
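For reference, the extension's entry point is just the standard query-handling method with a log line; a minimal sketch of that shape:

import IdentityLookup
import os

// Minimal message filter handler: logs the sender and defers the decision.
final class MessageFilterExtension: ILMessageFilterExtension, ILMessageFilterQueryHandling {
    func handle(_ queryRequest: ILMessageFilterQueryRequest,
                context: ILMessageFilterExtensionContext,
                completion: @escaping (ILMessageFilterQueryResponse) -> Void) {
        os_log("Message filter invoked, sender: %{public}@", queryRequest.sender ?? "unknown")
        let response = ILMessageFilterQueryResponse()
        response.action = .none // defer to normal handling while testing
        completion(response)
    }
}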
The one difference between now and when I used to have no problem with this - the phone now has iOS 18.5 on it.
It's as if, in iOS 18.5, once there has been any past association with a text number, it's not possible to remove that association.
Has there been some known change in 18.5 that would affect this filtering behavior, so that it's no longer possible to make the phone stop treating the incoming message's number as "known"?
Update
I completely reset the phone, and then I was able to see the message filter extension being invoked. That's not an ideal situation though.
What else needs to be done, beyond what I mentioned above, to get a phone to forget a message's number and thus get a message filtering extension invoked when there's a message from that number?
I’m updating my app’s alternate icons using UIApplication.shared.setAlternateIconName, and I noticed that in iOS 26, Xcode now supports the new .icon file format for App Icons.
Is it possible to reference .icon files directly for alternate icons?
Or does setAlternateIconName still only support traditional .png assets inside the AppIcon set?
I couldn’t find any documentation confirming this either way, and I want to ensure compatibility with the new format while supporting alternate icons.
Any clarification or Apple documentation link would be greatly appreciated!
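For reference, the traditional approach this question is about looks roughly like the following; "DarkIcon" is a placeholder for an alternate icon declared via CFBundleAlternateIcons or an alternate app icon set:

import UIKit

// Switching to an alternate icon declared the traditional way.
// "DarkIcon" is a placeholder name.
func switchToAlternateIcon() {
    guard UIApplication.shared.supportsAlternateIcons else { return }
    UIApplication.shared.setAlternateIconName("DarkIcon") { error in
        if let error {
            print("Failed to switch icon: \(error)")
        }
    }
}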
Hi everyone. I want to run a Live Activity from the terminated state. I have implemented the push notification flow, which works fine in the foreground and background states. In the terminated state, however, the push-to-start token is not being generated. How can I make this work?
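For anyone reproducing, the token observation is the standard ActivityKit pattern; a minimal sketch (the attributes type is a placeholder, and push-to-start requires iOS 17.2+):

import ActivityKit

// Placeholder attributes type for illustration.
struct DeliveryAttributes: ActivityAttributes {
    struct ContentState: Codable, Hashable {
        var progress: Double
    }
}

// Observe push-to-start tokens so the server can start a Live Activity
// even when the app is not running.
func observePushToStartToken() {
    Task {
        for await tokenData in Activity<DeliveryAttributes>.pushToStartTokenUpdates {
            let token = tokenData.map { String(format: "%02x", $0) }.joined()
            print("push-to-start token: \(token)")
            // Upload `token` to the push server here.
        }
    }
}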
I have a visionOS app to which I'm adding iOS support, and I would like to keep using RealityView.
I know there are the following modifiers to add camera navigation:
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve it?
Thanks,
Guillermo
I'm trying to add a TextField to the toolbar using .principal placement, and I want it to either fill the screen width or expand based on the surrounding content. However, it's not resizing as expected: the TextField only resizes correctly when I provide a hard-coded width value. This behavior worked fine in previous versions of Xcode but seems to be broken in Xcode 26. I'm not sure if this is an intentional change or a bug. I am using the iOS 26 beta and Xcode 26 beta.
import SwiftUI

struct ContentView: View {
    var body: some View {
        VStack {
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundStyle(.tint)
            Text("Hello, world!")
        }
        .padding()
        .toolbar {
            ToolbarItem(placement: .principal) {
                HStack {
                    TextField("Search", text: .constant(""))
                        .textFieldStyle(.roundedBorder)
                        .frame(maxWidth: .infinity)
                    // .frame(width: 300)
                    Button("cancel") {
                    }
                }
                .frame(maxWidth: .infinity)
            }
        }
    }
}

#Preview {
    NavigationView {
        ContentView()
    }
}
On the iPhone XR model, the mic is not working after the 18.4 beta update. Please check.
iPhone:
iPhone 12 Pro Max, iOS: 18.3.1
Steps:
Connect CarPlay to my car; after the connection succeeds, the CarPlay screen is black.
This issue only happens on the first connection.
I have asked the OEM factory; they told me it is because my iOS version is higher than 17.5, and versions above that have this issue.
So I need your help to figure out the problem and fix it in a new version.
Hello,
I’d like to clarify the technical limitations around app updates in an Apple School Manager (ASM) + MDM environment.
Environment
• iOS/iPadOS devices supervised and managed via Apple School Manager
• Apps are distributed via ASM (VPP / Custom App) and managed by MDM
• Apps are App Store–signed (not Enterprise/In-House)
• Some apps include NetworkExtension (VPN) functionality
• Automatic app updates are enabled in MDM
Question
From a technical and platform-design perspective, is it possible to:
Deploy app updates for ASM/MDM-distributed App Store apps via a separate/custom update server, and trigger updates simultaneously across all managed devices, bypassing or supplementing the App Store update mechanism?
In other words:
• Can an organization operate its own update server to push a new app version to all devices at once?
• Or is App Store + iOS always the sole execution path for installing updated app binaries?
⸻
My current understanding (please correct if wrong)
Based on Apple documentation, it seems that:
1. App Store–distributed apps cannot self-update
• Apps cannot download and install new binaries or replace themselves.
• All executable code must be Apple-signed and installed by the system.
2. MDM can manage distribution and enable auto-update, but:
• MDM cannot reliably trigger an immediate update for App Store apps.
• Actual download/install timing is decided by iOS (device locked, charging, Wi-Fi, etc.).
3. Custom update servers
• May be used for policy decisions (minimum allowed version, feature blocking),
• But cannot be used to distribute or install updated app binaries on iOS.
4. For ASM-managed devices:
• The only supported update execution path is:
App Store → iOS → Managed App Update
• Any “forced update” behavior must be implemented at the app logic level, not the installation level.
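For completeness, the app-side logic I'm referring to in point 4 would be something like this minimal sketch (the policy endpoint and JSON shape are hypothetical):

import Foundation

// Hypothetical response from an organization-run policy server.
struct UpdatePolicy: Decodable {
    let minimumVersion: String   // e.g. "3.2.0"
}

// Compare the installed version against the policy server's minimum version.
// The URL and JSON shape are illustrative only.
func isUpdateRequired() async throws -> Bool {
    let url = URL(string: "https://policy.example.org/ios/update-policy")!
    let (data, _) = try await URLSession.shared.data(from: url)
    let policy = try JSONDecoder().decode(UpdatePolicy.self, from: data)

    let installed = Bundle.main.infoDictionary?["CFBundleShortVersionString"] as? String ?? "0"
    return installed.compare(policy.minimumVersion, options: .numeric) == .orderedAscending
}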
⸻
What I’m trying to confirm
• Is there any supported MDM command, API, or mechanism that allows:
• Centralized, immediate, one-shot updates of App Store apps across all ASM-managed devices?
• Or is the above limitation fundamental by design, meaning:
• Organizations must rely on iOS’s periodic auto-update behavior
• And enforce version compliance only via app-side logic?
⸻
Why this matters
In large school deployments, delayed updates (due to device conditions or OS scheduling) can cause:
• Version fragmentation
• Inconsistent behavior across classrooms
• Operational issues for VPN / security-related apps
Understanding whether this limitation is absolute or if there is a recommended Apple-supported workaround would be extremely helpful.
Thanks in advance for any clarification
Problem Summary
After upgrading to iOS 26.1 and 26.2, I'm experiencing a particle positioning bug in RealityKit where ParticleEmitterComponent particles render at an incorrect offset relative to their parent entity. This behavior does not occur on iOS 18.6.2 or earlier versions, suggesting a regression introduced in the newer OS builds.
Environment Details
Operating System: iOS 26.1 & iOS 26.2
Framework: RealityKit
Xcode Version: 16.2 (16C5032a)
Expected vs. Actual Behavior
Expected: Particles should render at the position of the entity to which the ParticleEmitterComponent is attached, matching the behavior on iOS 18.6.2 and earlier.
Actual: Particles appear away from their parent entity, creating a visual misalignment that breaks the intended AR experience.
Steps to Reproduce
Create or open an AR application with RealityKit that uses particle components
Attach a ParticleEmitterComponent to an entity via a custom system
Run the application on iOS 26.1 or iOS 26.2
Observe that particles render at an offset position away from the entity
Minimal Code Example
Here's the setup from my test case:
Custom Component & System:
import RealityKit

struct SparkleComponent4: Component {}

class SparkleSystem4: System {
    static let query = EntityQuery(where: .has(SparkleComponent4.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            // Only add once
            if entity.components.has(ParticleEmitterComponent.self) { continue }
            var newEmitter = ParticleEmitterComponent()
            newEmitter.mainEmitter.color = .constant(.single(.red))
            entity.components.set(newEmitter)
        }
    }
}
AR Setup:
let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
let model = Entity()
model.components.set(ModelComponent(mesh: boxMesh, materials: [material]))
model.components.set(SparkleComponent4())
model.position = [0, 0.05, 0]
model.name = "MyCube"
let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: [0.2, 0.2]))
anchor.addChild(model)
arView.scene.addAnchor(anchor)
Questions for the Community
Has anyone else encountered this particle positioning issue after updating to iOS 26.1/26.2?
Are there known workarounds or configuration changes to ParticleEmitterComponent that restore correct positioning?
Is this a confirmed bug, or could there be a change in coordinate system handling or transform inheritance that I'm missing?
Additional Information
I've already submitted this issue via Feedback Assistant (FB21346746).