During Apple Pay in-app provisioning (EV_ECC_v2), our iOS app successfully obtains the issuer provisioning certificates and generates cryptographic material. The flow fails when Apple posts the card blob to Apple’s broker (card creation step), returning HTTP 500 from .../broker/v4/devices/{SEID}/cards.
Steps:
1. Call issuerProvisioningCertificates?encryptionVersion=EV_ECC_v2
   → 200 OK; returns ECC leaf + Apple Root CA chain; nonce=2a831be4.
2. Build {encryptedCardData, activationData, ephemeralPublicKey}
3. POST /broker/v4/devices/{SEID}/cards
Expected: 200 OK on /broker/v4/devices/{SEID}/cards, or 5xx with a descriptive error if payload/cryptography is invalid.
Observed: 500 Internal Server Error from Apple broker on /cards (labeled “eligibility” in PassKit logs), causing a terminal failure in Wallet UI.
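For context, here is a minimal sketch of the client side of step 2, i.e. the PassKit delegate callback that hands the issuer-encrypted payload back to Wallet before it POSTs to /broker/v4/devices/{SEID}/cards. IssuerServer and EncryptedPayload are hypothetical stand-ins for our backend, not real APIs:

import Foundation
import PassKit

// Hypothetical payload returned by the issuer backend after EV_ECC_v2 encryption.
struct EncryptedPayload {
    let encryptedCardData: Data
    let activationData: Data
    let ephemeralPublicKey: Data
}

// Hypothetical abstraction over the issuer backend call.
protocol IssuerServer {
    func encrypt(certificates: [Data], nonce: Data, nonceSignature: Data,
                 completion: @escaping (EncryptedPayload) -> Void)
}

final class ProvisioningDelegate: NSObject, PKAddPaymentPassViewControllerDelegate {
    let issuerServer: IssuerServer
    init(issuerServer: IssuerServer) { self.issuerServer = issuerServer }

    func addPaymentPassViewController(
        _ controller: PKAddPaymentPassViewController,
        generateRequestWithCertificateChain certificates: [Data],
        nonce: Data,
        nonceSignature: Data,
        completionHandler handler: @escaping (PKAddPaymentPassRequest) -> Void
    ) {
        // Backend performs the EV_ECC_v2 encryption using the leaf cert, nonce, and signature.
        issuerServer.encrypt(certificates: certificates, nonce: nonce,
                             nonceSignature: nonceSignature) { payload in
            let request = PKAddPaymentPassRequest()
            request.encryptedPassData = payload.encryptedCardData
            request.activationData = payload.activationData
            request.ephemeralPublicKey = payload.ephemeralPublicKey
            handler(request)  // PassKit then POSTs to /broker/v4/devices/{SEID}/cards
        }
    }

    func addPaymentPassViewController(
        _ controller: PKAddPaymentPassViewController,
        didFinishAdding pass: PKPaymentPass?,
        error: Error?
    ) {
        // The broker 500 surfaces here as a terminal error in the Wallet UI.
    }
}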
On iOS 26.0.1 & 26.2, the system keyboard shows uneven horizontal padding:
Leftmost key column touches the screen edge
Right side has visible padding
This behavior is reproducible:
In existing apps
In a brand-new demo app
In both UIKit and SwiftUI
Xcode: 26.0.1 & 26.2
iOS: 26.0.1 & 26.2 Simulator
Devices tested: iPhone 15 / iPhone 15 Pro
Keyboard types: .numberPad, .decimalPad
Frameworks: UIKit, SwiftUI
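Minimal repro sketch (SwiftUI; any single number-pad or decimal-pad field shows it):

import SwiftUI

// A single decimal-pad field in an otherwise empty app; on iOS 26.0.1/26.2
// the leftmost key column touches the screen edge while the right side is padded.
struct ContentView: View {
    @State private var amount = ""
    var body: some View {
        TextField("Amount", text: $amount)
            .keyboardType(.decimalPad)
            .textFieldStyle(.roundedBorder)
            .padding()
    }
}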
I downloaded the package and I can see it in the Package Dependencies tab, [swift-collections | https://github.com/apple/swift-collections.git | Up to Next Major Version] — looks OK.
But (quite strange):
import Collections //🛑 No such module 'Collections'
typealias Objects = OrderedSet<AnyHashable> // No error
I tried both typing Collections and picking it from the completion list — no difference.
If I remove import Collections, OrderedSet is gone (of course).
Can't compile. Cleaned build folder, restarted Xcode...
What could be wrong with it? Any ideas?
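For comparison, this is the shape of a manifest where the Collections product is attached to the target itself (names other than the swift-collections ones are placeholders); in an Xcode app project the analogous check is that "Collections" appears under the target's Frameworks, Libraries, and Embedded Content:

// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "MyPackage",  // placeholder
    dependencies: [
        .package(url: "https://github.com/apple/swift-collections.git", from: "1.0.0"),
    ],
    targets: [
        .target(
            name: "MyTarget",  // placeholder
            dependencies: [
                // The package alone is not enough; the product must be a
                // dependency of the target that does `import Collections`.
                .product(name: "Collections", package: "swift-collections"),
            ]),
    ]
)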
Hey there, I’m wondering if there is a CRUD API for Apple Notes?
I've seen that it's possible through AppleScript on the Mac, but the problem for me is that the app should also run on iOS.
Does someone have an idea how to approach this?
I'm a font developer. In the development process, I will revise a font and overwrite the OTF file that is currently enabled (registered) with macOS.
If I then launch an app, it will immediately use the revised version of the font; while apps that are already loaded will continue to use the old version.
This suggests that each app is loading new and separate font data, rather than getting it from some existing cache in memory. Yet macOS does have a "font cache" of some sort.
Some apps, like TextEdit, seem to only load the fonts that they need to use. However, other apps, like Pages, load every enabled (registered) font on the OS! (According to the Open Files list in Activity Monitor.)
Given that /System/Library/Fonts/ is 625 MB, and we can't disable any of it, isn't that a lot of data to be repeating? How many fonts is too many fonts?
I can't find much documentation about the process.
Hello,
I want to use the Speech framework in my app. However, I found that for it to work offline, the speech model must be downloaded separately on the device. Do I understand correctly that it is not allowed in a Swift Student Challenge submission if English (as the speech language) must first be downloaded by the tester onto their device over the internet?
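For reference, a minimal sketch (assuming en-US, with audioURL as a hypothetical input file) of how on-device availability can be detected at run time:

import Speech

// Returns true only if recognition can run fully on device for en-US.
func onDeviceRecognitionPossible(audioURL: URL) -> Bool {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return false }
    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    request.requiresOnDeviceRecognition = true  // fail rather than go online
    // recognizer.recognitionTask(with: request) { result, error in ... }
    return true
}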
Is there any information for developers about the notification forwarding feature introduced in iOS 26.3? How is it used?
Why is the WiFiAware framework not importable in Mac Catalyst? Are there any plans to support Wi-Fi Aware technology on Mac Catalyst in the future?
I have a question regarding the behavior of AVAudioSession.sharedInstance().outputVolume.
Observed behavior:
When the app is in the foreground, I read audioSession.outputVolume (for example, 0.1).
The app is then moved to the background.
While the app is in the background, the user changes the system volume using the hardware buttons (for example, to 0.5).
When the app returns to the foreground, audioSession.outputVolume still reports the previous value (0.1).
From my testing, outputVolume only seems to update when the system volume is changed while the app is in the foreground. Volume changes made while the app is in the background are not reflected when the app returns to the foreground.
According to Apple’s documentation for AVAudioSession.outputVolume:
“The systemwide output volume set by the user.”
https://developer.apple.com/documentation/avfaudio/avaudiosession/outputvolume
However, based on our testing on iOS 18.6.2 and iOS 18.1, the observed behavior seems to differ from this description.
Questions:
The documentation states that outputVolume represents the system-wide volume set by the user. In our testing, the value does not reflect volume changes made while the app is in the background and only updates when the app is in the foreground. Is this the expected behavior of AVAudioSession.outputVolume?
Is there any other recommended way in Swift to retrieve the current system volume that reflects user changes made both while the app is in the foreground and while it is in the background?
Any clarification on the intended behavior or recommended handling would be greatly appreciated.
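For reference, this is a sketch of what we are trying now, assuming (based only on our own testing, not documentation) that the session must be active for outputVolume to track the hardware buttons, and that re-activating it on foreground refreshes the stale value:

import AVFoundation
import UIKit

final class VolumeWatcher {
    private let session = AVAudioSession.sharedInstance()
    private var observation: NSKeyValueObservation?
    private var foregroundObserver: NSObjectProtocol?

    func start() throws {
        try session.setActive(true)
        // outputVolume is documented as KVO-observable.
        observation = session.observe(\.outputVolume, options: [.initial, .new]) { _, change in
            print("outputVolume:", change.newValue ?? -1)
        }
        foregroundObserver = NotificationCenter.default.addObserver(
            forName: UIApplication.willEnterForegroundNotification,
            object: nil, queue: .main) { [weak self] _ in
            guard let self else { return }
            try? self.session.setActive(true)  // nudge the session (assumption)
            print("on foreground:", self.session.outputVolume)
        }
    }
}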
Hello everyone.
I use the Translation framework in my application. During development everything was fine and the Translation framework worked well, but after two or three days of using the production version (published in the App Store and available to others!), my application stopped working. The Translation framework now reports errors:
Error sending 1 paragraphs Error Domain=TranslationErrorDomain Code=16 "Translation failed" UserInfo={NSLocalizedDescription=Translation failed, NSLocalizedFailureReason=Offline models not available for language pair}
Failed to translate input 0; returning error: Error Domain=TranslationErrorDomain Code=16 "Translation failed" UserInfo={NSLocalizedDescription=Translation failed, NSLocalizedFailureReason=Offline models not available for language pair}
Received unbridged NSError to API, converting to .internalError: Error Domain=TranslationErrorDomain Code=16 "Translation failed" UserInfo={NSLocalizedDescription=Translation failed, NSLocalizedFailureReason=Offline models not available for language pair}
Once again: it worked while I was developing it, it was released on the App Store, and then it suddenly stopped working!
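For anyone else hitting this, a sketch of the pre-flight check I am experimenting with, assuming the iOS 18 LanguageAvailability API behaves as documented (the en→de pair is just an example):

import Foundation
import Translation

// Check the language-pair status before translating, so "offline models not
// available" can become a download prompt instead of a hard Code=16 failure.
func checkPair() async {
    let status = await LanguageAvailability().status(
        from: Locale.Language(identifier: "en"),
        to: Locale.Language(identifier: "de"))
    switch status {
    case .installed:   print("offline models present")
    case .supported:   print("supported, but models must be downloaded")
    case .unsupported: print("pair not supported")
    @unknown default:  break
    }
}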
I’m building an iOS SDK that is distributed as a binary XCFramework and consumed via Swift Package Manager using a binaryTarget.
What I’ve done so far:
Built the SDK using xcodebuild archive for device and simulator
Created the XCFramework using xcodebuild -create-xcframework
The SDK contains a resource bundle with JSON/config files
The XCFramework is wrapped using SPM (code-only wrapper target)
Currently, the resource bundle exists outside the XCFramework, and the host app needs to add it manually during integration. I want to avoid this and make the SDK completely self-contained.
What I’m trying to achieve:
Embed the resource bundle inside the SDK framework so that each XCFramework slice contains it
Ensure the SDK can load its assets internally at runtime without any host app changes
Questions:
What is the correct way to embed a .bundle inside a framework so it gets packaged into each XCFramework slice during archiving?
Which Xcode build phases or build settings are required for this (e.g., Copy Bundle Resources, SKIP_INSTALL, etc.)?
At runtime, what is the recommended approach for locating and loading this embedded bundle from within the SDK code?
Any guidance or best practices for achieving this would be helpful.
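To make the question concrete, this is the kind of runtime lookup I have in mind (SDKResources is a placeholder bundle name):

import Foundation

// Resolve a resource bundle that was copied into the framework
// (e.g. via a Copy Bundle Resources phase) at archive time.
final class SDKBundleLocator {
    static let resources: Bundle = {
        let framework = Bundle(for: SDKBundleLocator.self)  // the SDK framework bundle
        guard let url = framework.url(forResource: "SDKResources", withExtension: "bundle"),
              let bundle = Bundle(url: url) else {
            fatalError("SDKResources.bundle not found inside the framework")
        }
        return bundle
    }()
}

// Usage inside the SDK:
// let configURL = SDKBundleLocator.resources.url(forResource: "config", withExtension: "json")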
I'm working on building a Text Messages Filter Extension app and am currently hitting the following error:
"Cannot find 'ILMessageFilterExtensionConfigurationManager' in scope".
As required, the app includes a MessageFilterStatusManager.swift file, with the host app designated as its Target Membership.
App minimum deployment is set to iOS 17.6 and I'm working with Xcode Version 26.2.
If anyone has encountered this error I'd appreciate your feedback.
The documentation specifies that when the Contacts framework returns unified contacts, each fetched unified contact object (CNContact) has its own unique identifier that differs from the identifier of any individual contact in the set of linked contacts, and that this identifier should be used when refetching a unified contact.
There is also an analogous identifier within the list of contactRelations, but these don't seem to correspond to the unified contacts. For example, a new contact (Sheryl Zakroff) is created in the simulator Contacts and her spouse is set to Hank Zakroff. However, the GUID created for the contactRelations identifier does not correlate to the original Hank Zakroff GUID and cannot be searched.
Is this a bug, or what is the intent of the contactRelations identifier?
Here's a debug output of walking the unifiedContacts:
Name: Hank Zakroff
2E73EE73-C03F-4D5F-B1E8-44E85A70F170
- Other : (555) 766-4823
- Other : (707) 555-1854
Name: David Taylor
E94CD15C-7964-4A9B-8AC4-10D7CFB791FD
- Other : 555-610-6679
Name: Sheryl Zakroff
DE783BC8-7917-4138-93F6-3AF0FD4CE083
- Other : (707) 555-1854
- Spouse: <CNContactRelation: 0x60000000dd60: name=Hank M. Zakroff>
- 534B467D-CA00-46D3-897C-16EEA782C9CF
- Looking for ["534B467D-CA00-46D3-897C-16EEA782C9CF"]
[]
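For completeness, a sketch of the walk that produced output like the above (the "Zakroff" predicate is just for the example):

import Contacts

func dumpUnifiedContacts() throws {
    let store = CNContactStore()
    let keys = [CNContactGivenNameKey, CNContactFamilyNameKey,
                CNContactPhoneNumbersKey, CNContactRelationsKey] as [CNKeyDescriptor]
    let contacts = try store.unifiedContacts(
        matching: CNContact.predicateForContacts(matchingName: "Zakroff"),
        keysToFetch: keys)
    for contact in contacts {
        print("Name: \(contact.givenName) \(contact.familyName)")
        print(contact.identifier)
        for relation in contact.contactRelations {
            // relation.identifier is the CNLabeledValue GUID that does not
            // appear to match any unified contact's identifier.
            print("- \(relation.label ?? "?"): \(relation.value.name) [\(relation.identifier)]")
        }
    }
}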
Hi all,
I have a working macOS (Intel) system extension app that currently uses only a Content Filter (NEFilterDataProvider). I need to capture/log HTTP and HTTPS traffic in plain text, and I understand NETransparentProxyProvider is the right extension type for that.
For HTTPS I will need TLS inspection / a MITM proxy — I’m new to that and unsure how complex it will be.
For DNS data (in plain text), can I use the same extension, or do I need a separate extension type such as NEPacketTunnelProvider, NEFilterPacketProvider, or NEDNSProxyProvider?
Current architecture:
Two Xcode targets: MainApp and a SystemExtension target.
The SystemExtension target contains multiple network extension types.
MainApp ↔ SystemExtension communicate via a bidirectional NSXPC connection.
I can already enable two extensions (Content Filter and TransparentProxy). With the NETransparentProxy, I still need to implement HTTPS capture.
Questions I’d appreciate help with:
Can NETransparentProxy capture the DNS fields I need (dns_hostname, dns_query_type, dns_response_code, dns_answer_number, etc.), or do I need an additional extension type to capture DNS in plain text? (See the flow-handling sketch after this list.)
If a separate extension is required, is it possible or problematic to include that extension type (Packet Tunnel / DNS Proxy / etc.) in the same SystemExtension Xcode target as the TransparentProxy?
Any recommended resources or guidance on TLS inspection / MITM proxy setup for capturing HTTPS logs?
There are multiple DNS transport types — am I correct that capturing DNS over UDP (port 53) is not necessarily sufficient? Which DNS types should I plan to handle?
I’ve read that TransparentProxy and other extension types (e.g., Packet Tunnel) cannot coexist in the same Xcode target. Is that true?
Best approach for delivering logs from multiple extensions to the main app (is it feasible)? Or what’s the best way to capture logs so an external/independent process (or C/C++ daemon) can consume them?
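To make question 1 concrete, my current understanding of flow handling in a NETransparentProxyProvider — treat this as a sketch, not a confirmed answer:

import NetworkExtension

// Return true to claim a flow (then relay and log it yourself) and
// false to let it pass untouched. Port matching via NWHostEndpoint is
// one simple approach, not the only one.
class ProxyProvider: NETransparentProxyProvider {
    override func handleNewFlow(_ flow: NEAppProxyFlow) -> Bool {
        if let tcp = flow as? NEAppProxyTCPFlow,
           let endpoint = tcp.remoteEndpoint as? NWHostEndpoint {
            if endpoint.port == "80" || endpoint.port == "443" {
                // Open your own connection to endpoint and relay bytes both
                // ways; HTTPS payloads stay encrypted unless TLS inspection
                // is added on top.
                return true
            }
            if endpoint.port == "53" {
                // Classic DNS over TCP; UDP 53 arrives as NEAppProxyUDPFlow.
                // DoT (853) and DoH (inside 443) need separate handling.
                return true
            }
        }
        return false
    }
}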
Required data to capture (not limited to):
All HTTP/HTTPS (request, body, URL, response, etc.)
DNS fields: dns_hostname, dns_query_type, dns_response_code, dns_answer_number, and other DNS data — all in plain text.
I’ve read various resources but remain unclear which extension(s) to use and whether multiple extension types can be combined in one Xcode target. Please ask if you need more details.
Thank you.
Topic: App & System Services › Networking
Tags: Swift, Frameworks, Network Extension, System Extensions
Is there any reason why SwiftUI's Image doesn't have a direct init(data: Data) like UIKit's UIImage?
In my opinion it's very unintuitive and wasteful to first create a UIImage only to then create a SwiftUI Image from it with Image(uiImage:).
In addition to that, this causes unnecessary UIKit imports.
In my opinion this is a very obvious gap in the API, so are there any reasons why it is the way it is?
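For what it's worth, the workaround I'd expect people to use today is an app-side convenience initializer (my own sketch, not SwiftUI API):

import SwiftUI
import UIKit

extension Image {
    // Failable: returns nil when the data is not a decodable image.
    init?(data: Data) {
        guard let uiImage = UIImage(data: data) else { return nil }
        self.init(uiImage: uiImage)
    }
}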
Best regards
Hi everyone,
I believe I’ve encountered a potential bug or a hardware alignment limitation in the Core ML Framework / ANE Runtime specifically affecting the new Stateful API (introduced in iOS 18/macOS 15).
The Issue:
A Stateful mlprogram fails to run on the Apple Neural Engine (ANE) if the state tensor dimensions (specifically the width) are not a multiple of 32. The model works perfectly on CPU and GPU, but fails on ANE both during runtime and when generating a Performance Report in Xcode.
Error Message in Xcode UI:
"There was an error creating the performance report Unable to compute the prediction using ML Program. It can be an invalid input data or broken/unsupported model."
Observations:
Case A (Fails): State shape = (1, 3, 480, 270). Prediction fails on ANE.
Case B (Success): State shape = (1, 3, 480, 256). Prediction succeeds on ANE.
This suggests an internal memory alignment or tiling issue within the ANE driver when handling Stateful buffers that don't meet the 32-pixel/element alignment.
Reproduction Code (PyTorch + coremltools):
import torch
import torch.nn as nn
import coremltools as ct
import numpy as np

class RNN_Stateful(nn.Module):
    def __init__(self, hidden_shape):
        super(RNN_Stateful, self).__init__()
        # Simple conv to update state
        self.conv1 = nn.Conv2d(3 + hidden_shape[1], hidden_shape[1], kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(hidden_shape[1], 3, kernel_size=3, padding=1)
        self.register_buffer("hidden_state", torch.ones(hidden_shape, dtype=torch.float16))

    def forward(self, imgs):
        self.hidden_state = self.conv1(torch.cat((imgs, self.hidden_state), dim=1))
        return self.conv2(self.hidden_state)

# h=480, w=255 causes ANE failure. w=256 works.
b, ch, h, w = 1, 3, 480, 255
model = RNN_Stateful((b, ch, h, w)).eval()
traced_model = torch.jit.trace(model, torch.randn(b, 3, h, w))

mlmodel = ct.convert(
    traced_model,
    inputs=[ct.TensorType(name="input_image", shape=(b, 3, h, w), dtype=np.float16)],
    outputs=[ct.TensorType(name="output", dtype=np.float16)],
    states=[ct.StateType(wrapped_type=ct.TensorType(shape=(b, ch, h, w), dtype=np.float16), name="hidden_state")],
    minimum_deployment_target=ct.target.iOS18,
    convert_to="mlprogram"
)
mlmodel.save("rnn_stateful.mlpackage")
Steps to see the error:
Open the generated .mlpackage in Xcode 16.0+.
Go to the Performance tab and run a test on a device with ANE (e.g., iPhone 15/16 or M-series Mac).
The report will fail to generate with the error mentioned above.
Environment:
OS: macOS 15.2
Xcode: 16.3
Hardware: M4
Has anyone else encountered this 32-pixel alignment requirement for StateType tensors on ANE? Is this a known hardware constraint or a bug in the Core ML runtime?
Any insights or workarounds (other than manual padding) would be appreciated.
On iOS, when WKWebView loads a local HTML file via the file:// protocol, data stored in localStorage is occasionally lost after the app is backgrounded or its process restarts; the same code has no such problem on Android/HarmonyOS.
The current documentation:
only states the basic rule that the default data store (defaultDataStore) persists website data to disk while non-persistent storage (nonPersistent) lives in memory only;
does not mention the key limitation that file:// content can be treated as transient in-memory storage even when the default persistent store is used;
only implies, in the WKURLSchemeHandler documentation, that a custom URL scheme can handle URL schemes WebKit does not natively support, without directly connecting this to the file:// storage problem.
I cannot find official documentation on how to handle this; there are only third-party blog posts saying the problem disappears once the content is loaded over http/https.
Please point me to official documentation or an official response on how to handle localStorage loss when loading HTML via file://.
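For reference, a sketch of the custom-scheme workaround the blog posts describe, assuming a hypothetical "app" scheme, bundle-relative paths, and a hardcoded HTML MIME type; I would still like official confirmation:

import WebKit

// Serve local HTML through a custom scheme instead of file:// so WebKit
// treats the origin as persistable.
final class LocalSchemeHandler: NSObject, WKURLSchemeHandler {
    func webView(_ webView: WKWebView, start task: WKURLSchemeTask) {
        guard let url = task.request.url,
              let fileURL = Bundle.main.url(forResource: url.lastPathComponent, withExtension: nil),
              let data = try? Data(contentsOf: fileURL) else {
            task.didFailWithError(URLError(.fileDoesNotExist))
            return
        }
        let response = URLResponse(url: url, mimeType: "text/html",  // simplified
                                   expectedContentLength: data.count, textEncodingName: "utf-8")
        task.didReceive(response)
        task.didReceive(data)
        task.didFinish()
    }
    func webView(_ webView: WKWebView, stop task: WKURLSchemeTask) {}
}

// let config = WKWebViewConfiguration()
// config.setURLSchemeHandler(LocalSchemeHandler(), forURLScheme: "app")
// webView.load(URLRequest(url: URL(string: "app://local/index.html")!))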
Currently, if as a library author you are shipping dependencies as source code, you can use the #if DEBUG compilation condition to execute logic based on whether the app is being built for Debug or Release.
My concern is more about the approach that should be taken when distributing frameworks/XCFrameworks. One approach I am considering is checking for the presence of {CFBundleName}.debug.dylib in the main bundle. Is this approach reliable? Do you suggest any other approach?
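To make that concrete, a sketch of the check I have in mind; whether the .debug.dylib is present in every Debug configuration is exactly what I'm unsure about:

import Foundation

// Look for the "{CFBundleName}.debug.dylib" that Xcode's debug-dylib
// mechanism places next to the host executable in Debug builds.
func hostAppLooksLikeDebugBuild() -> Bool {
    guard let name = Bundle.main.object(forInfoDictionaryKey: "CFBundleName") as? String,
          let executableURL = Bundle.main.executableURL else { return false }
    let dylibURL = executableURL
        .deletingLastPathComponent()
        .appendingPathComponent("\(name).debug.dylib")
    return FileManager.default.fileExists(atPath: dylibURL.path)
}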
Topic: Developer Tools & Services › General
Tags: Swift Packages, Frameworks, Debugging, App Binary
What is the recommended approach for distributing an XCFramework that uses common third-party dependencies (like Google Maps) when client apps may also use the same dependencies, resulting in duplicate symbol conflicts?
I'm developing a closed-source SDK distributed as an XCFramework. My SDK internally uses Google Maps for mapping functionality. However, when clients integrate my XCFramework into their apps that also use Google Maps, we encounter duplicate symbol errors.
What I've Tried:
Static vs Dynamic Linking: Both approaches result in conflicts
Static linking: Google Maps symbols compiled into my binary
Dynamic linking: GoogleMaps.framework bundled with my XCFramework
Build Configuration:
Set "Build Libraries for Distribution" = YES
Tried various linking strategies
Architecture Changes:
Used @_implementationOnly import
Wrapped code with #if canImport(GoogleMaps)
However, the dependencies still get linked at build time
Hi, I have an iOS project with the following app targets:
main iOS application
a notification service extension
5 static libs containing some swift files with public methods
1 dynamic framework with above static libs as dependencies.
The framework only contains 2 files: a default .h file and 1 .exp file. This exports file contains the mangled names of all the public methods exposed by the 5 static libs that are the framework's dependencies. I obtained these symbols using the nm command for each static lib.
The main iOS app target has 2 dependencies - the framework and the notification extension.
The notification extension only depends on the framework.
This setup works perfectly fine. I wanted to understand:
If using the exports file is the only way to make this setup work?
If not, what else can I do?
What way does Apple recommend?
According to my requirements, I only need at most 2-3 functions exposed by the framework, so needing an exports file just for that bugs me.
Thank you.