I exported an Apple SF Symbol as a custom symbol for testing.
The code is simple:
var body: some ControlWidgetConfiguration {
    StaticControlConfiguration(
        kind: "ControlWidgetConfiguration"
    ) {
        ControlWidgetButton(action: DynamicWidgetIntent()) {
            Text("test")
            Image("custom_like")
        }
    }
    .displayName("test")
}
As you can see, the image doesn't show in the preview, but it does show in the control in Control Center.
Am I doing something wrong?
On iOS 18 and earlier versions, my application supports jumping directly to [System Settings - Personal Hotspot]. But on iOS 26, my application is redirected to [System Settings - Apps].
Does iOS 26 disable the behavior of jumping directly to the system hotspot page? If it is still supported, could you share the API for iOS 26?
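For reference, the only approach I know to be documented is opening the app's own settings pane; pane-specific deep links (like Personal Hotspot) have relied on undocumented URL schemes, which may explain the redirect. A minimal sketch of the documented approach:
import UIKit

// Opens the app's own page in the Settings app. This is the only
// documented Settings destination; pane-specific deep links (such as
// Personal Hotspot) are undocumented and may change between iOS releases.
if let settingsURL = URL(string: UIApplication.openSettingsURLString) {
    UIApplication.shared.open(settingsURL)
}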
Project Background:
I am developing a third-party custom keyboard for iOS whose primary feature is real-time voice input.
In my current design, responsibilities are split as follows:
1. The container (main) app is responsible for:
• Audio recording
• Speech recognition (ASR)
2. The keyboard extension is responsible for:
• Providing the keyboard UI
• Initiating the voice input workflow
• Receiving transcription results via an App Group
• Inserting recognized text into the active text field using textDocumentProxy.insertText(_:) (see the sketch after this list)
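To make the intended extension-side behavior concrete, here is a minimal sketch. The App Group ID "group.example.voicekeyboard" and the "transcript" key are placeholders, and Full Access (RequestsOpenAccess) is assumed so the extension can read the shared container:
import UIKit

// Minimal sketch: the keyboard extension polls the shared App Group
// container for transcription text written by the container app and
// inserts only the new portion at the cursor.
class VoiceKeyboardViewController: UIInputViewController {
    private let shared = UserDefaults(suiteName: "group.example.voicekeyboard") // hypothetical group ID
    private var inserted = ""
    private var timer: Timer?

    override func viewDidLoad() {
        super.viewDidLoad()
        timer = Timer.scheduledTimer(withTimeInterval: 0.2, repeats: true) { [weak self] _ in
            self?.insertLatestTranscript()
        }
    }

    private func insertLatestTranscript() {
        guard let transcript = shared?.string(forKey: "transcript"), // hypothetical key
              transcript.hasPrefix(inserted) else { return }
        // Insert only the delta since the last poll.
        let delta = String(transcript.dropFirst(inserted.count))
        if !delta.isEmpty {
            textDocumentProxy.insertText(delta)
            inserted = transcript
        }
    }
}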
Intended User Flow
The intended workflow is:
The user is typing in a third-party app (for example, WeChat) using my custom keyboard.
The user taps a “Voice Input” button in the keyboard extension.
The keyboard extension activates the container app so that audio recording and ASR can begin.
After recording has started, control returns to the original app where the user was typing.
The container app continues running in the background, maintaining active audio recording and ASR.
Recognized text is continuously streamed back to the keyboard extension and inserted into the current cursor position in real time.
Observed Industry Behavior
Some popular third-party keyboards on iOS, such as WeChat Keyboard and Doubao Keyboard, appear to provide a similar user experience in which:
Voice input can be initiated directly from the keyboard while typing in another app.
The user remains (or returns) in the original typing context after voice input starts.
Speech recognition continues and text is streamed into the active text field without interrupting the typing experience.
I would like to better understand how this type of workflow aligns with iOS platform capabilities and supported APIs.
My Questions
Is it supported by iOS public APIs for a custom keyboard extension to activate its container app to start audio recording and ASR, then return to the original host app while the container app continues recording and performing ASR in the background?
If this workflow is not supported, are there any Apple-recommended or supported alternative architectures for achieving a similar user experience, especially when audio recording and ASR logic are currently implemented in the container app rather than in the keyboard extension?
Goal
My goal is to design a solution that is fully compliant with iOS public APIs and platform constraints, while providing a real-time voice input experience comparable to existing third-party keyboards on the platform.
Any guidance on supported APIs, recommended architectures, or relevant documentation would be greatly appreciated.
Text filtering: behavior of current message is affected by behavior of past message from same origin
Consider this situation:
1. A text message is sent from a sender and is classified as junk by a text filtering extension, with the result that it is sent to the spam folder as expected.
2. A text message with different content is sent from the same sender and is classified as allowed; however, it is also sent to the spam folder.
If the above is repeated, but after step 1 the message is deleted, then in step 2 the message is not sent to the spam folder.
So the presence of the step 1 message in the spam folder affects the behavior of step 2.
Is this expected behaviour (if so, why?), or a defect?
The clock on the lock screen is too big.
This is very noticeable with the serif font: at the maximum size the clock extends beyond the frame and rests against the edge of the phone's display. (Screenshot 1 & Screenshot 3)
It is especially evident with the enlarged interface (using the Larger Text feature): here the time extends completely out of the frame and collides with the edge of the phone screen. (Screenshot 2 & Screenshot 4)
Has anyone else run across this after manually renewing an expired developer account? It happens when I go to the "Certificates, Identifiers & Profiles" page and click on anything. I'm also unable to download any of the iOS SDKs. Any ideas?
Hello,
Our iOS app (Flutter + Swift) was rejected under Guideline 2.5.1 with the following message:
The app uses or references the following non-public or deprecated APIs:
Runner
Classes: __SwiftValue
From our investigation, __SwiftValue appears to be an internal Swift runtime class automatically generated by the Swift compiler for Swift–Objective-C bridging.
It is not imported, referenced, or used directly in our source code.
We verified that:
The symbol exists only in the compiled Runner binary
It is not referenced by any third-party framework explicitly
It appears in standard Swift runtime behavior
We previously removed a legitimate private API (PGHostedWindow) from a dependency and resubmitted, after which this new rejection appeared.
Questions:
Is __SwiftValue considered a private API usage by App Review, or is this a false positive?
Are there recommended build settings or mitigations to prevent this symbol from being flagged?
Should this be escalated for manual review?
Any guidance from Apple engineers or developers who encountered similar rejections would be greatly appreciated.
Thank you.
When a UIVisualEffectView configured with a glass effect is added with alpha 0, it remains hidden as expected. But changing the alpha back to 1 should make it visible again; currently it stays hidden forever. The bug is only reproducible on iOS 26.1 and iOS 26.2; it does not happen on iOS 26.0. The issue is also not reproducible with UIBlurEffect. It only happens with the glass effect.
Here is the repro link
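For readers without the link, a minimal sketch of the setup (assuming the iOS 26 UIGlassEffect API):
import UIKit

// Reported setup: a glass-effect view added with alpha 0, then faded in.
let glassView = UIVisualEffectView(effect: UIGlassEffect())
glassView.frame = CGRect(x: 0, y: 0, width: 200, height: 100)
glassView.alpha = 0 // hidden, as expected

// On iOS 26.1/26.2 the view reportedly never becomes visible again:
UIView.animate(withDuration: 0.3) {
    glassView.alpha = 1 // expected: visible; observed: stays hidden
}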
Apple recently rolled out a web version of the App Store, and I'm curious when exactly the Today tab refreshes.
It doesn't seem to update at midnight as it does on the iPhone.
https://apps.apple.com/iphone/today
There are multiple reports of crashes in URLConnectionLoader::loadWithWhatToDo. The crashed thread in the stack traces points to calls inside CFNetwork, which appears to be an internal iOS library.
The crash has been happening for quite a while (we cannot determine when it started) and has impacted multiple iOS versions, from iOS 15.4 to 18.4.1, as recorded in the Xcode crash report organizer so far.
Unfortunately, we have no idea how to reproduce it yet, but the crash count keeps increasing and affects more iOS 18 users (which makes sense, as many people have updated to the newer version). We haven't found any clue in the crash reports about what actually happened or how to fix it. What we understand is that it seems to come from a network request, but we need more information on what condition actually causes it and how to solve it.
Hereby, I attach sample crash report for both iOS 15 and 18.
I also have submitted a report (that include more crash reports) with number: FB17775979.
We would appreciate any insight into this issue and any resolution we can apply to avoid it.
iOS 15.crash
iOS 18.crash
We have an app in Swift that uses push notifications. It has a deployment target of iOS 15.0
I originally audited our app for iOS 26 by building it with Xcode 26 beta 3. At that point, all was well. Our implementation of application:didRegisterForRemoteNotificationsWithDeviceToken was called.
But when rebuilding the app with beta 4, 5 and now 6, that function is no longer being called.
I created a simple test case by creating a default iOS app project, then performing these additional steps:
Set bundle ID to our app's ID
Add the Push Notifications capability
Add application:didRegisterForRemoteNotificationsWithDeviceToken: with a print("HERE"), just to have somewhere to set a breakpoint.
Add the following code inside application:didFinishLaunchingWithOptions:, along with a breakpoint on the registerForRemoteNotifications line:
UNUserNotificationCenter.current().requestAuthorization(options: [.badge, .alert, .sound]) { granted, _ in
    DispatchQueue.main.async {
        UIApplication.shared.registerForRemoteNotifications()
    }
}
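For clarity, the delegate method from step 3 is just the standard UIKit callback:
func application(_ application: UIApplication,
                 didRegisterForRemoteNotificationsWithDeviceToken deviceToken: Data) {
    print("HERE") // breakpoint here; never hit on the iOS 26 simulator
}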
Building and running with Xcode 26 beta 6 (17A5305f) generates these two different outcomes based upon the OS running in the Simulator:
iPhone 16 Pro simulator running iOS 18.4 - both breakpoints are reached
iPhone 16 Pro simulator running iOS 26 - only the breakpoint on UIApplication.shared.registerForRemoteNotifications is reached.
I'm assuming this is a bug in iOS 26, but is there something additional we now need to do to get push notifications working?
In my application, I use CallKit and have supportsHolding = true set. During my phone call, another call comes in (e.g., GSM). I accept the incoming call and put the current call on hold.
If I end the active call myself, everything is fine, and CallKit calls the
method provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession).
However, if the other party ends the call, the second call remains on hold. In the application, the user clicks on unhold, and I notify CallKit that the hold has ended.
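For context, here is roughly how I issue the unhold request (a minimal sketch; callUUID is the identifier of the held call):
import CallKit

let callController = CXCallController()

// Ask CallKit to take the call off hold. After the system performs the
// action, provider(_:didActivate:) is expected on the CXProvider delegate.
func requestUnhold(callUUID: UUID) {
    let action = CXSetHeldCallAction(call: callUUID, onHold: false)
    callController.request(CXTransaction(action: action)) { error in
        if let error = error {
            print("Unhold request failed: \(error)")
        }
    }
}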
But in this case, the didActivate method is not called at all. If I try to activate the audio session myself after the unhold, I receive the error:
Error Domain=NSOSStatusErrorDomain Code=561017449 "Session activation failed" UserInfo={NSLocalizedDescription=Session activation failed}
Code 561017449 corresponds to AVAudioSessionErrorInsufficientPriority.
What needs to be done for CallKit to activate my audio?
Hello,
I’d like to clarify the technical limitations around app updates in an Apple School Manager (ASM) + MDM environment.
Environment
• iOS/iPadOS devices supervised and managed via Apple School Manager
• Apps are distributed via ASM (VPP / Custom App) and managed by MDM
• Apps are App Store–signed (not Enterprise/In-House)
• Some apps include NetworkExtension (VPN) functionality
• Automatic app updates are enabled in MDM
Question
From a technical and platform-design perspective, is it possible to:
Deploy app updates for ASM/MDM-distributed App Store apps via a separate/custom update server, and trigger updates simultaneously across all managed devices, bypassing or supplementing the App Store update mechanism?
In other words:
• Can an organization operate its own update server to push a new app version to all devices at once?
• Or is App Store + iOS always the sole execution path for installing updated app binaries?
⸻
My current understanding (please correct if wrong)
Based on Apple documentation, it seems that:
1. App Store–distributed apps cannot self-update
• Apps cannot download and install new binaries or replace themselves.
• All executable code must be Apple-signed and installed by the system.
2. MDM can manage distribution and enable auto-update, but:
• MDM cannot reliably trigger an immediate update for App Store apps.
• Actual download/install timing is decided by iOS (device locked, charging, Wi-Fi, etc.).
3. Custom update servers
• May be used for policy decisions (minimum allowed version, feature blocking),
• But cannot be used to distribute or install updated app binaries on iOS.
4. For ASM-managed devices:
• The only supported update execution path is:
App Store → iOS → Managed App Update
• Any “forced update” behavior must be implemented at the app logic level, not the installation level (see the sketch after this list).
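For completeness, my understanding of "app logic level" enforcement is something like this minimal sketch (the policy endpoint URL and JSON shape are hypothetical):
import Foundation

// Fetches a minimum-version policy from the organization's server and
// compares it against the installed version. Only the check is possible
// app-side; the actual update still has to come through the App Store.
struct VersionPolicy: Decodable { let minimumVersion: String }

func meetsMinimumVersion() async throws -> Bool {
    let url = URL(string: "https://mdm.example.org/policy.json")! // hypothetical endpoint
    let (data, _) = try await URLSession.shared.data(from: url)
    let policy = try JSONDecoder().decode(VersionPolicy.self, from: data)
    let installed = Bundle.main.object(forInfoDictionaryKey: "CFBundleShortVersionString") as? String ?? "0"
    return installed.compare(policy.minimumVersion, options: .numeric) != .orderedAscending
}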
⸻
What I’m trying to confirm
• Is there any supported MDM command, API, or mechanism that allows:
• Centralized, immediate, one-shot updates of App Store apps across all ASM-managed devices?
• Or is the above limitation fundamental by design, meaning:
• Organizations must rely on iOS’s periodic auto-update behavior
• And enforce version compliance only via app-side logic?
⸻
Why this matters
In large school deployments, delayed updates (due to device conditions or OS scheduling) can cause:
• Version fragmentation
• Inconsistent behavior across classrooms
• Operational issues for VPN / security-related apps
Understanding whether this limitation is absolute or if there is a recommended Apple-supported workaround would be extremely helpful.
Thanks in advance for any clarification.
The universal links for my apps stopped working.
The server where the AASA files are hosted previously worked on IPv4 exclusively; a few days ago I changed the configuration to IPv6 only. I've created new IPv6 entries, renewed all certificates, and deleted all IPv4 entries for the domains.
All seemed fine, but on Saturday I realized that my universal links had stopped working for new users.
What I've done to find the issue:
Example domain that was used for debugging: "https://developffw.burns.fun"
I've verified that the AASA file is hosted properly by using different browsers and Postman to retrieve it. The file can be accessed and the certificates look fine.
Output of curl -v https://developffw.burns.fun/.well-known/apple-app-site-association
* Host developffw.burns.fun:443 was resolved.
* IPv6: 2a01:4f8:13b:340a::2
* IPv4: (none)
* Trying [2a01:4f8:13b:340a::2]:443...
* schannel: disabled automatic use of client certificate
* ALPN: curl offers http/1.1
* ALPN: server accepted http/1.1
* Established connection to developffw.burns.fun (2a01:4f8:13b:340a::2 port 443) from 2a00:79c0:65c:8b00:80ee:175b:3e2a:1e7d port 61014
* using HTTP/1.x
> GET /.well-known/apple-app-site-association HTTP/1.1
> Host: developffw.burns.fun
> User-Agent: curl/8.16.0
> Accept: */*
>
* Request completely sent off
< HTTP/1.1 200 OK
< Server: nginx/1.22.1
< Date: Mon, 15 Dec 2025 11:34:22 GMT
< Content-Type: application/octet-stream
< Content-Length: 329
< Last-Modified: Sat, 21 Dec 2024 08:53:11 GMT
< Connection: keep-alive
< ETag: "676681f7-149"
< Accept-Ranges: bytes
<
{
  "applinks": {
    "details": [
      {
        "appIDs": [ "6LN7G8JEA5.burns.FFW-Manager-SwiftUI.Debug" ],
        "components": [
          {
            "/": "/onboard",
            "?": { "id": "*" },
            "?": { "name": "*" },
            "?": { "token": "*" }
          }
        ]
      }
    ]
  }
}
* Connection #0 to host developffw.burns.fun:443 left intact
I took a look at the headers from the Apple CDN network response. These indicate some sort of connection error.
The response code is 404
Response headers:
Apple-Failure-Details: {"cause":"dial tcp [2a01:4f8:13b:340a::2]:443: connect: network is unreachable"}
Apple-Failure-Reason: SWCERR00305 Network error
Apple-From: https://betaffw.burns.fun/.well-known/apple-app-site-association
Apple-Try-Direct: false
Via: https/1.1 defra2-vp-vst-003.ts.apple.com (acdn/268.16305), https/1.1 defra2-vp-vfe-004.ts.apple.com (acdn/268.16305), http/1.1 defra2-xdc-mx-028.ts.apple.com (acdn/3.16363), https/1.1 defra1-edge-fx-021.ts.apple.com (acdn/3.16363)
X-Cache: hit-stale, miss, hit-fresh, miss
CDNUUID: 4321f35e-b73b-4031-a054-7c63af69e126-712221049
I took a look at the log files of the server.
I can't find any entry from the Apple servers in either the default logs or the error logs.
The curl attempts are logged with response code 200.
I've tried sudo swcutil dl -d https://developffw.burns.fun/onboard in the Terminal on my Mac.
Output:
The operation couldn't be completed. (SWCErrorDomain error 8.)
This indicates to me that there is an issue with the Apple servers accessing my server, but I don't know what the reason could be. There is no firewall configuration that could block the requests, and there has been no change at all besides the IPv4 / IPv6 protocol change.
The issue is the same for all the domains listed on this server.
I've even created a new app for this purpose, with a new AASA entry and associated link. Same issue.
I'm pretty much lost here. Everything looks fine from my side. Google's assetlinks.json seems to work fine.
I would really appreciate some help on how to solve this; I'm at my wits' end.
When a user enables an SMS filtering extension via iOS Settings → Messages → Text Message Filtering and selects an app, the following prompt appears:
"The developer of [App Name] will receive the text, attachments and sender information in text messages from senders not in your Contacts. Messages may include personal or sensitive information like bank verification codes."
This message cannot be modified by developers, and we're receiving complaints and negative reviews from users who are alarmed by it, even though our app does not collect any data.
iOS should allow developers to customize this message, or Apple should reword it so that it doesn't make false claims about data collection.
We've opened a feedback assistant report here: FB21445903
I have recently been having trouble with my iOS 18.2 beta update. It has been two weeks since I updated to the iOS 18.2 beta and joined the Genmoji and Image Playground waitlist, and I am wondering how much longer I have to wait until my request is approved.
Hi there,
Does anyone know how to modify this Image compressor Shortcut https://www.icloud.com/shortcuts/e13d8013598f4f33830386a956a163dd so that the image it creates has the original file name + “-pressed”?
E.g. “Image_123” becomes “Image_123-pressed”.
I know of the ‘Rename File’ action but can't make it work. The shortcut batch-processes images, if that makes any difference.
Any help much appreciated :)
Hi Apple Developer Forums,
I’m developing an iOS camera app that processes RAW captures using Core Image. I’m seeing a large “first use” performance penalty specifically when creating the CIImage from CIRAWFilter.outputImage.
What’s slow (important detail)
I’m measuring the time for:
let rawFilter = CIRAWFilter(imageData: rawData, identifierHint: hint)
let ciImage = rawFilter.outputImage
This is not CIContext.render(...) / createCGImage(...). It’s just the time to access outputImage (i.e., building the Core Image graph / RAW pipeline setup).
Observed behavior
First time accessing CIRAWFilter.outputImage: ~3 seconds
Second time (same app session, similar RAW): ~3 milliseconds
So something heavy is happening only on first use (decoder initialization, pipeline setup, shader/library compilation, caching, etc.).
Using Metal System Trace, I also noticed that during the slow first call there are many “Create MTLLibrary” events, while the second call doesn’t show this pattern.
Warm-up attempts using bundled DNG
I tried to “warm up” the pipeline early (e.g., on camera screen entry) by loading a bundled DNG and accessing CIRAWFilter.outputImage before the user takes a photo:
Warm-up with a ~247 KB DNG → first real RAW outputImage cost drops to ~1.42s
Warm-up with a ~25 MB DNG → first real RAW outputImage cost drops to ~843ms
This helps, but it’s still far from the steady-state ~3ms.
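For reference, my bundled-DNG warm-up is essentially this sketch (assuming a small "warmup.dng" in the app bundle):
import CoreImage

// Kick off RAW pipeline initialization early, off the main thread.
// Accessing outputImage forces graph construction and, per the Metal
// System Trace, MTLLibrary compilation, which is where the cost lives.
func warmUpRAWPipeline() {
    DispatchQueue.global(qos: .utility).async {
        guard let url = Bundle.main.url(forResource: "warmup", withExtension: "dng"),
              let data = try? Data(contentsOf: url),
              let rawFilter = CIRAWFilter(imageData: data, identifierHint: nil) else { return }
        _ = rawFilter.outputImage
    }
}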
Warm-up by capturing a real RAW (works, but concerns)
The only method that fully eliminates the delay is to trigger a real RAW capture programmatically before the user’s first photo, then use that captured rawData to warm up the CIRAWFilter.outputImage path. This brings the first user-facing capture close to the steady-state timing.
However:
In some regions, the camera shutter sound cannot be suppressed, so “hidden warm-up capture” is unacceptable UX.
I’m also unsure whether triggering a real capture without an explicit user action could raise compliance/privacy concerns, even if the image is immediately discarded and never saved/uploaded.
Questions
Is the large first-time cost of CIRAWFilter.outputImage expected (RAW pipeline initialization / shader compilation)?
Is there an Apple-recommended way to pre-initialize the Core Image RAW pipeline / Metal resources so the first outputImage is fast, without taking a real photo?
Are there any best practices (e.g. CIContext creation timing, prepareRender(...), specific options) that reliably reduce this first-use overhead for CIRAWFilter?
Attachments
Figure 1: First RAW capture with no warm-up (~3s outputImage time)
Figure 2: First RAW capture after warm-up with bundled DNG (improved but still hundreds of ms)
Thanks for any guidance or experience sharing!