The problem is the same in all of my applications. To reproduce it on iOS 26: in Settings > Display & Brightness, enable Dark Mode; then in Settings > Accessibility > Display & Text Size, enable Increase Contrast and Bold Text. With these settings, all controls are surrounded by a thin white line. When the app displays a keyboard, the thin white line does not appear correctly around the keyboard, as in the attached screenshot: it is present on top and partially on the bottom, but not on the sides.
Hi All,
I recently upgraded my Apple silicon Mac laptop from macOS 15.3 (Sequoia) to macOS 26 (Tahoe). Since my existing Xcode 16.2 is not compatible with macOS 26, I installed Xcode 26.0.1, but after a successful installation the application does not work and quits after the error dialog "Loading a Plugin Failed." (screenshot attached). I tried restarting and reinstalling Xcode 26.0.1, but no luck.
To narrow down the issue, I installed the same Xcode 26.0.1 on another Apple silicon Mac laptop running macOS 26 (Tahoe) that had no previous Xcode installed, and there it installed and works as expected.
My assumption is that Xcode 26.0.1 breaks when the device is upgraded from Sequoia to Tahoe, due to some conflicting old Xcode settings or files that are not compatible with Xcode 26 or Tahoe, but I am not sure. I followed some older forum threads (below), and nothing has worked for me so far.
https://developer.apple.com/forums/thread/660860
https://developer.apple.com/forums/thread/719810
https://developer.apple.com/forums/thread/759396
Can someone shed some light on how to resolve this issue?
Thanks.
I am using the current RC version of iOS on both my iPhone and iPad. I am developing an iCloud-based app, and it works correctly on iOS 18. When I upgraded to iOS 26, the iCloud functions still work correctly, but push notifications do not.
The issue appears to be in creating subscriptions. The following code should create a subscription and does not report an error, but it fails to actually create a subscription under iOS 26.
import CloudKit

func subscribeToNotifications(recordType: String,
                              subscriptionID: String,
                              notification: CKSubscription.NotificationInfo) {
    let subscriptionIDForType = "\(subscriptionID)-\(recordType)"
    let predicate = NSPredicate(value: true)
    let subscription = CKQuerySubscription(recordType: recordType,
                                           predicate: predicate,
                                           subscriptionID: subscriptionIDForType,
                                           options: [.firesOnRecordCreation, .firesOnRecordUpdate, .firesOnRecordDeletion])
    // Attach the notification info passed in by the caller.
    subscription.notificationInfo = notification
    CKContainer.default().publicCloudDatabase.save(subscription) { returnedSubscription, error in
        if let error = error {
            print("Error saving subscription: \(error)")
        } else {
            print("Successfully saved subscription: recordType: \(recordType) subscriptionID: \(subscriptionIDForType)")
        }
    }
}
Print results:
Successfully saved subscription: recordType: folder subscriptionID: folderName-folder
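For debugging, it may help to read back what the server actually stored, since the save completion alone doesn't prove the subscription survived. A minimal sketch, assuming the same default container as the code above:

// Sketch: fetch all subscriptions back to verify what actually reached the server.
CKContainer.default().publicCloudDatabase.fetchAllSubscriptions { subscriptions, error in
    if let error = error {
        print("Fetching subscriptions failed: \(error)")
    } else {
        for subscription in subscriptions ?? [] {
            print("Server has subscription: \(subscription.subscriptionID)")
        }
    }
}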
Topic:
App & System Services
SubTopic:
Notifications
Tags:
CloudKit
User Notifications
iPad and iOS apps on visionOS
UIKit
I submitted my application for Apple Developer Program enrollment, and it's been almost a week now; the status still shows as "Pending," and I haven't received any updates from Apple yet. Has anyone else experienced this delay, or can anyone share their experience with the timeline?
Topic:
Developer Tools & Services
SubTopic:
Apple Developer Program
The mobile applications we published on App Store Connect have been blocked. The subscription has been paid; however, it is not possible to perform the “Renew” action in the Apple Developer app. An error appears stating: “Already Subscribed. An Apple Developer Program membership subscription is already associated with your iTunes & App Store account.”
Could you please advise what needs to be done to restore access to the applications?
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Environment
Device: iPhone 16e
iOS Version: 18.4.1 - 18.7.1
Framework: AVFoundation (AVAudioEngine)
Problem Summary
On iPhone 16e (iOS 18.4.1-18.7.1), the installTap callback stops being invoked after resuming from a phone call interruption. This issue is specific to phone call interruptions and does not occur on iPhone 14, iPhone SE 3, or earlier devices.
Expected Behavior
After a phone call interruption ends and audioEngine.start() is called, the previously installed tap should continue receiving audio buffers.
Actual Behavior
After resuming from phone call interruption:
Tap callback is no longer invoked
No audio data is captured
No errors are thrown
Engine appears to be running normally
Note: Normal pause/resume (without phone call interruption) works correctly.
Steps to Reproduce
Start audio recording on iPhone 16e
Receive or make a phone call (triggers AVAudioSession interruption)
End the phone call
Resume recording with audioEngine.start()
Result: Tap callback is not invoked
Tested devices:
iPhone 16e (iOS 18.4.1-18.7.1): Issue reproduces ✗
iPhone 14 (iOS 18.x): Works correctly ✓
iPhone SE 3 (iOS 18.x): Works correctly ✓
Code
Initial Setup (Works)
// Install the tap before starting the engine; format nil uses the bus's own format.
let inputNode = audioEngine.inputNode
inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
    self.processAudioBuffer(buffer, at: time)
}
audioEngine.prepare()
try audioEngine.start()
Interruption Handling
NotificationCenter.default.addObserver(
    forName: AVAudioSession.interruptionNotification,
    object: AVAudioSession.sharedInstance(),
    queue: nil
) { notification in
    guard let userInfo = notification.userInfo,
          let typeValue = userInfo[AVAudioSessionInterruptionTypeKey] as? UInt,
          let type = AVAudioSession.InterruptionType(rawValue: typeValue) else {
        return
    }
    if type == .began {
        self.audioEngine.pause()
    } else if type == .ended {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
        // Tap callback doesn't fire after this on iPhone 16e
    }
}
Workaround
Full engine restart is required on iPhone 16e:
func resumeAfterInterruption() throws {
    // Tear down and rebuild the tap; a plain start() is not enough on iPhone 16e.
    audioEngine.stop()
    inputNode.removeTap(onBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, time in
        self.processAudioBuffer(buffer, at: time)
    }
    audioEngine.prepare()
    try audioSession.setActive(true)
    try audioEngine.start()
}
This works but adds latency and complexity compared to simple resume.
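One variable worth eliminating, though it may not explain the 16e-only behavior: the .ended branch above never reads the interruption options, and resuming is generally only supposed to happen when .shouldResume is set. A sketch of the amended branch, under the same assumptions as the handler above:

if type == .ended {
    // Resume only when the system indicates it's appropriate.
    let optionsValue = (userInfo[AVAudioSessionInterruptionOptionKey] as? UInt) ?? 0
    let options = AVAudioSession.InterruptionOptions(rawValue: optionsValue)
    if options.contains(.shouldResume) {
        try? self.audioSession.setActive(true)
        try? self.audioEngine.start()
    }
}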
Questions
Is this expected behavior on iPhone 16e?
What is the recommended way to handle phone call interruptions?
Why does this only affect iPhone 16e and not iPhone 14 or SE 3?
Any guidance would be appreciated!
My app's IPA shows get-task-allow=true even though my provisioning profile and Info.plist set it to false. The issue started after transferring the app to a new developer account.
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
We are currently facing an issue when attempting to deploy or update the iOS application provided by Salesforce.
When we sign the .ipa file shared periodically by Salesforce with our company's Apple certificates and attempt to upload it via Apple's native Transporter application, we receive the following error message:
ITMS-90034: Missing or invalid signature – The bundle 'com.mysalesforce.mycommunity.[CODE]' at bundle path 'Payload/CommunitiesApp.app' is not signed using an Apple submission certificate.
We have strictly followed the steps described in the Salesforce document: https://quip.com/yBxiAS29ZlvI
We have reviewed the Apple certificates. They are the same ones currently used for our company's other active applications: valid, unchanged, and not expired.
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Hi everyone,
I’ve been trying to enroll in the Apple Developer Program, but I keep getting this message:
Your enrollment could not be completed.
Your enrollment in the Apple Developer Program could not be completed at this time.
I’ve already verified my Apple ID and updated my personal information (including my full legal name to match my ID documents), but the issue still persists.
The problem is that I also can’t reach Apple Support — every time I try to contact them through the enrollment support form, it redirects me back to the same page with no response.
Has anyone else experienced this recently?
Is there any direct contact method (email or phone number) that actually works for resolving this issue?
Any help would be greatly appreciated — I’m stuck at this step and can’t move forward with publishing my apps.
Thank you!
— Bruno
We want to use different company entities for publishing the app and receiving the payments.
For example: our app is owned by Company A, and we want to receive the payments through Company B, which is a subsidiary of A.
How can we do this?
Topic:
App Store Distribution & Marketing
SubTopic:
App Store Connect
Hi,
I’m an iOS developer building an app with a use case that needs advanced playback on Apple Music subscription streams, specifically:
• Real-time tempo change (BPM) during playback — i.e., time-stretch with key-lock, not just crossfade.
• Beat-matched transitions between tracks.
From what I can tell, this capability seems to exist only for approved partners and isn’t available through public MusicKit.
Question: What’s the official request path to be evaluated for that restricted partner entitlement (application form, questionnaire, NDA, or internal team/BD contact)? If the entitlement identifier is internal, how can I get my account routed to the right Apple Music team?
For reference, publicly announced partners include Algoriddim djay, Serato DJ Pro, rekordbox (AlphaTheta), and Engine DJ—all of which appear to implement mixing features that imply advanced playback (tempo/beat-matching) on Apple Music content. I’d prefer not to share product details publicly for the moment and can provide specifics privately if needed.
Thanks in advance!
Topic:
Media Technologies
SubTopic:
Audio
Tags:
Apple Music API
FairPlay Streaming
MusicKit
AVFoundation
This is not a question but rather a small bit of documentation on how AccessorySetupKit (ASK) actually works. I spent a couple of days figuring this out, so I thought I'd share my findings. The example app is very light and the documentation definitely has room for improvement, so here are a few important notes.
Findings:
If you're running iOS 18 or later and add any AccessorySetupKit property to your Info.plist, you're no longer able to scan for devices using CBCentralManager.scanForPeripherals; it will no longer return discoverable devices. Below iOS 18 these Info.plist properties are ignored by the OS, and you can safely use the "legacy" method of connecting to Bluetooth devices.
On iOS 26 and later, removeAccessory shows a prompt to the user. Below iOS 26 you can silently remove the accessory and start each session with a clean state.
If you create the CBCentralManager before you start the ASK session, you'll never get state == .poweredOn.
If you have 0 accessories authorized for your application, CBCentralManager will never enter state == .poweredOn when you create it. Pre-ASK, creating the manager was the trigger for iOS to ask the user for permission; with ASK this is no longer necessary.
If you have 1 or more accessories authorized for your app, they will be returned in session.accessories after the session has started. This is an important indicator for determining app behavior.
If you have 1 or more accessories, CBCentralManager.scanForPeripherals will ONLY return previously authorized AND discoverable devices. Use this when you want to connect to a previously authorized device.
If you have 1 or more accessories and CBCentralManager.scanForPeripherals returns nothing, you can (safely) assume the user is attempting to onboard a new device.
So for my application I take the following steps (a code sketch follows the list):
Check the iOS version; on iOS 18 or later, start the ASK session.
Are there previously authorized devices?
-- yes: run CBCentralManager.scanForPeripherals
-- no: show the picker
Did the scan return any devices?
-- yes: show UI to select a device, or connect to the first available device in the list
-- no: show the picker
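A minimal sketch of that flow, under my assumptions (the class name, the scanForAuthorizedDevices hook, and the service UUID are placeholders, not part of ASK):

import AccessorySetupKit
import CoreBluetooth
import UIKit

final class AccessoryOnboarder {
    private let session = ASAccessorySession()

    func start() {
        // Activate ASK first; create the CBCentralManager only afterwards,
        // or it may never reach .poweredOn (see findings above).
        session.activate(on: .main) { [weak self] event in
            guard let self else { return }
            switch event.eventType {
            case .activated:
                if self.session.accessories.isEmpty {
                    self.showPicker()                 // no authorized devices: onboard a new one
                } else {
                    self.scanForAuthorizedDevices()   // reconnect a known device
                }
            case .accessoryAdded:
                print("Authorized: \(event.accessory?.displayName ?? "unknown")")
            default:
                break
            }
        }
    }

    private func showPicker() {
        let descriptor = ASDiscoveryDescriptor()
        descriptor.bluetoothServiceUUID = CBUUID(string: "180D") // placeholder UUID
        let item = ASPickerDisplayItem(name: "My Device",
                                       productImage: UIImage(),
                                       descriptor: descriptor)
        session.showPicker(for: [item]) { error in
            if let error { print("Picker failed: \(error)") }
        }
    }

    private func scanForAuthorizedDevices() {
        // Create the CBCentralManager here and call scanForPeripherals; it will
        // only surface previously authorized, discoverable devices. If nothing
        // comes back, fall back to showPicker().
    }
}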
Feel free to add any of your findings and @Apple please update the documentation!
Hi everyone,
I’m building a full-screen Map (MapKit + SwiftUI) with persistent top/bottom chrome (menu buttons on top, session stats + map controls on bottom). I have three working implementations and I’d like guidance on which pattern Apple recommends long-term (gesture correctness, safe areas, Dynamic Island/home indicator, and future compatibility).
Version 1 — overlay(alignment:) on Map
Idea: Draw chrome using .overlay(alignment:) directly on the map and manage padding manually.
Map(position: $viewModel.previewMapCameraPosition, scope: mapScope) {
    UserAnnotation {
        UserLocationCourseMarkerView(angle: viewModel.userCourse - mapHeading)
    }
}
.mapStyle(viewModel.mapType.mapStyle)
.mapControls {
    MapUserLocationButton().mapControlVisibility(.hidden)
    MapCompass().mapControlVisibility(.hidden)
    MapPitchToggle().mapControlVisibility(.hidden)
    MapScaleView().mapControlVisibility(.hidden)
}
.overlay(alignment: .top) { mapMenu }          // manual padding inside
.overlay(alignment: .bottom) { bottomChrome }  // manual padding inside
Version 2 — ZStack + .safeAreaPadding
Idea: Place the map at the back, then lay out top/bottom chrome in a VStack inside a ZStack, and use .safeAreaPadding(.all) so content respects safe areas.
ZStack(alignment: .top) {
    Map(...).ignoresSafeArea()
    VStack {
        mapMenu
        Spacer()
        bottomChrome
    }
    .safeAreaPadding(.all)
}
Version 3 — .safeAreaInset on the Map
Idea: Make the map full-bleed and then reserve top/bottom space with safeAreaInset, letting SwiftUI manage insets.
Map(...).ignoresSafeArea()
    .mapStyle(viewModel.mapType.mapStyle)
    .mapControls {
        MapUserLocationButton().mapControlVisibility(.hidden)
        MapCompass().mapControlVisibility(.hidden)
        MapPitchToggle().mapControlVisibility(.hidden)
        MapScaleView().mapControlVisibility(.hidden)
    }
    .safeAreaInset(edge: .top) { mapMenu }          // manual padding inside
    .safeAreaInset(edge: .bottom) { bottomChrome }  // manual padding inside
Question
I noticed:
Safe-area / padding behavior
– Version 2 requires the least extra padding and seems to apply partial safe-area spacing automatically.
– Version 3 still needs roughly the same manual padding as Version 1, even though it uses safeAreaInset. Why doesn’t safeAreaInset fully handle that spacing?
Rotation crash (Metal)
When using Version 3 (safeAreaInset + ignoresSafeArea), rotating the device portrait↔landscape several times triggers a Metal crash:
failed assertion 'The following Metal object is being destroyed while still required… CAMetalLayer Display Drawable'
The same crash can happen with Version 1, though less often. I haven’t tested it much with Version 2.
Is this a known issue or race condition between Map’s internal Metal rendering and view layout changes?
Expected behavior
What’s the intended or supported interaction between safeAreaInset, safeAreaPadding, and overlay when embedding persistent chrome inside a SwiftUI Map?
Should safeAreaInset normally remove the need for manual padding, or is that by design?
I've built a model using Create ML, but I can't, for the love of God, make it updatable. I can't find any checkbox or anything related. It's an Activity Classifier, if that matters.
I want to continue training it on-device using MLUpdateTask, but the model, as exported from Create ML, fails with error: Domain=com.apple.CoreML Code=6 "Failed to unarchive update parameters. Model should be re-compiled." UserInfo={NSLocalizedDescription=Failed to unarchive update parameters. Model should be re-compiled.}
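For context, this is the call path where the error surfaces; a minimal sketch of how I'm invoking MLUpdateTask (trainingBatch and the file names are placeholders):

import CoreML

// Sketch: the on-device update call that triggers the Code=6 error above.
// trainingBatch stands in for an MLBatchProvider of labeled examples.
do {
    let compiledURL = Bundle.main.url(forResource: "ActivityClassifier", withExtension: "mlmodelc")!
    let task = try MLUpdateTask(forModelAt: compiledURL,
                                trainingData: trainingBatch,
                                configuration: MLModelConfiguration()) { context in
        // Persist the updated model so later launches can load it.
        let updatedURL = FileManager.default.temporaryDirectory.appendingPathComponent("Updated.mlmodelc")
        try? context.model.write(to: updatedURL)
    }
    task.resume()
} catch {
    print("Update task failed to start: \(error)") // "Failed to unarchive update parameters"
}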
After a few attempts I end up at the same spot, as summarized here by ChatGPT:
I get why it feels that way, Rob — honestly, you’ve walked right into the worst possible moment in Apple’s Core ML toolchain evolution. What you’re running into isn’t your fault; Apple really did rip out the ONNX front-end without offering a clean replacement yet. So every tutorial, every script, and even coremltools itself contradicts what’s actually possible right now.
Let’s be brutally clear about the state of things (late 2025):
Goal: Convert ONNX → Core ML using coremltools 9
Supported by Apple today: No. The ONNX importer was removed.
Practical workaround: Use the legacy coremltools 6.x + onnx-coreml stack in a separate Python 3.9/3.10 env.
Goal: Convert PyTorch → Core ML directly
Supported by Apple today: Yes, but only for simple or traced models. T5 still fails because of dynamic control flow.
Practical workaround: None reliable without rewriting the model.
Goal: Run ONNX models directly on iOS
Supported by Apple today: Partially (via MLC LLM / Metal Performance Shaders).
Practical workaround: Works, but that's an entirely different runtime.
So: you’re not crazy, and this isn’t wasted effort—you’ve actually proven that your model exports cleanly and that the only missing piece is Apple’s converter. The ONNX file you built is solid. You just need something that can translate it to Core ML.
If your goal is simply “get it running on-device”, you still have two viable moves that aren’t rabbit holes:
1. Use MLC LLM / Metal runtime
Skip Core ML altogether. Apple’s mlc-llm (and the “MLCChat” pipeline) can execute your ONNX or quantized model directly on the GPU. You keep full fidelity and avoid all these broken converters.
2. Use the legacy Core ML 6.x toolchain in a sandboxed Python 3.10 VM
It actually works. You don’t have to downgrade your whole system—just run that specific environment once to emit the .mlpackage, then never touch it again.
Nothing else will silently fix itself until Apple re-adds ONNX ingestion in Core ML 10 or a new converter.
Topic:
Machine Learning & AI
SubTopic:
Core ML
I want to add my app to Controls: just a button that opens the app. The end goal is to put it on the Lock Screen. I can get it working in the iPhone Simulator, but I am unable to get it onto my actual device. There are no warnings, failed builds, or signing errors. I feel like I have changed and tinkered with everything, and my code seems correct, so maybe it's a signing issue? I am lost. I'm building the Control Center control with WidgetKit.
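For reference, this is the shape of what I have (kind string and names are placeholders). As I understand the WidgetKit Controls API, an open-the-app button is just a ControlWidgetButton driven by an AppIntent with openAppWhenRun:

import AppIntents
import SwiftUI
import WidgetKit

// Sketch: a minimal "open the app" control for Control Center / Lock Screen.
struct OpenAppControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.myapp.open") {
            ControlWidgetButton(action: OpenMyAppIntent()) {
                Label("Open My App", systemImage: "arrow.up.forward.app")
            }
        }
    }
}

struct OpenMyAppIntent: AppIntent {
    static let title: LocalizedStringResource = "Open My App"
    static let openAppWhenRun: Bool = true  // launches the app instead of running in the background

    func perform() async throws -> some IntentResult {
        .result()
    }
}

If something like this works in the Simulator but not on device, the usual suspects are the widget extension not being embedded in the app target or the extension being signed with a different team/profile than the app.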
Hi everyone,
We just completed an App Store Connect app transfer between two developer teams and ran into what seems like an inconsistency with TN3159 (Migrating Sign in with Apple users for an app transfer).
According to the technote, both the source and destination teams should be able to call /auth/usermigrationinfo for 60 days after the transfer, even if the migration wasn’t run beforehand. However, right after the transfer completed, the source team (Team A) started receiving:
{"error":"invalid_client"}
on all /auth/usermigrationinfo requests, even though /auth/token with scope=user.migration still works fine.
What we verified before transfer:
Team A’s Sign in with Apple key (ES256) was linked to the app and Services ID.
OAuth flow for com.org.appname.web returned valid tokens, and the decoded ID token showed aud=com.org.appname.web with a valid private relay email, confirming the key was trusted.
What happens after transfer:
The key now shows “Enabled Services: —” and the App/Services IDs are no longer selectable in the Developer portal.
/auth/usermigrationinfo immediately returns invalid_client for Team A, even within the same day of the transfer.
This effectively makes Team A unable to generate transfer_sub values, blocking the migration flow TN3159 describes.
Questions:
Is Team A supposed to retain authorization to call /auth/usermigrationinfo for 60 days post-transfer?
If yes, is there any known workaround to re-authorize the key or temporarily re-bind it to the transferred identifiers?
If not, does this mean transfer_sub must be generated before transfer acceptance, contrary to how TN3159 reads?
Would really appreciate any confirmation or guidance from Apple or anyone who’s gone through this recently.
Thanks,
Topic:
Privacy & Security
SubTopic:
Sign in with Apple
Tags:
Sign in with Apple REST API
Sign in with Apple
I have a visionOS app using Apple's WebView and WebPage to display web content. When I viewed a live YouTube stream last night, YouTube put up this warning in the area that would normally show the chat window:
Oh no!
It looks like you're using an older version of your browser. Please update it to use live chat.
Anyone know if YouTube is generating this from the server based on the WebPage's user agent string, from Javascript running in the browser engine, or something else?
Anyone know if and how it is possible to resolve this?
(See right side of YouTube web page from a screen grab):
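In case it is user-agent sniffing, one experiment is to spoof the UA and see whether the warning disappears. A hedged sketch: this uses the classic WKWebView API, since I'm not certain the new WebPage type exposes an equivalent knob, and the UA string is illustrative only:

import WebKit

// Sketch: override the reported user agent to test YouTube's browser check.
let configuration = WKWebViewConfiguration()
let webView = WKWebView(frame: .zero, configuration: configuration)
webView.customUserAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.4 Safari/605.1.15"

If the chat loads with a spoofed UA, the check is server-side or UA-driven; if it still fails, YouTube is likely feature-detecting in JavaScript.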
Hi, I'm currently implementing 180° / 360° playback for immersive video in my app.
I was able to implement 360° easily by applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck on 180°. I want to implement it by setting the VideoMaterial on a hemisphere mesh, but RealityKit doesn't provide a built-in function like MeshResource.generateHemisphere yet. As a fallback, I tried to make the front half of the sphere show the VideoMaterial and leave the back half transparent, thinking that would make the sphere look like a hemisphere.
But I can't find a way to implement that either. I would appreciate any advice, ideas, or information that might help.
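Since MeshResource.generateHemisphere doesn't exist, one route is to build the half-sphere yourself with MeshDescriptor. A sketch under my assumptions (inward-facing, 180° in front of the viewer along -Z, half-equirectangular UVs; the winding may need flipping for your setup):

import Foundation
import RealityKit
import simd

// Sketch: an inward-facing hemisphere covering 180 degrees in front of the
// viewer, with UVs mapping a half-equirectangular video frame.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)
        let phi = (v - 0.5) * .pi                 // latitude: -90°...+90°
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)
            let theta = (u - 0.5) * .pi           // longitude: -90°...+90°
            positions.append(SIMD3(radius * cos(phi) * sin(theta),
                                   radius * sin(phi),
                                   -radius * cos(phi) * cos(theta)))  // -Z is "forward"
            uvs.append(SIMD2(u, v))
        }
    }
    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            // Wound to face inward; swap the order if the material shows on the outside.
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.textureCoordinates = MeshBuffers.TextureCoordinates(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

Applying the video would then be the usual ModelEntity(mesh: try generateHemisphere(radius: 10), materials: [videoMaterial]).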
Context
I’m deploying large language models on iPhone using llama.cpp. A new iPhone Air (12 GB RAM) reports a Metal MTLDevice.recommendedMaxWorkingSetSize of 8,192 MB, and my attempt to load Llama-2-13B Q4_K (~7.32 GB weights) fails during model initialization.
Environment
Device: iPhone Air (12 GB RAM)
iOS: 26
Xcode: 26.0.1
Build: Metal backend enabled llama.cpp
App runs on device (not Simulator)
What I’m seeing
MTLCreateSystemDefaultDevice().recommendedMaxWorkingSetSize == 8192 MiB
Loading Llama-2-13B Q4_K (7.32 GB) fails to complete. Logs indicate memory pressure / allocation issues consistent with the 8 GB working-set guidance.
Smaller models (e.g., 7B/8B with similar quantization) load and run (8B Q4_K provides around 9 tokens/second decoding speed).
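For anyone comparing devices, this is the check I'm using (a minimal sketch; the numbers on your hardware may differ):

import Metal

// Sketch: log the working-set guidance Metal reports on this device.
if let device = MTLCreateSystemDefaultDevice() {
    let mib: (UInt64) -> UInt64 = { $0 / (1024 * 1024) }
    print("recommendedMaxWorkingSetSize: \(mib(device.recommendedMaxWorkingSetSize)) MiB")
    print("currentAllocatedSize: \(mib(UInt64(device.currentAllocatedSize))) MiB")
    print("hasUnifiedMemory: \(device.hasUnifiedMemory)")
}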
Questions
Is 8,192 MB an expected recommendedMaxWorkingSetSize on a 12 GB iPhone?
What values should I expect on other 2025 devices, including iPhone 17 (8 GB RAM) and iPhone 17 Pro (12 GB RAM)?
Is the limit strictly enforced on Metal allocations (heaps/buffers), or is it advisory guidance for best performance/eviction behavior?
Can a process practically exceed this for long-lived buffers without immediate Jetsam risk?
Any guidance for LLM scenarios near the limit?