I'm developing an iPadOS 18+ application that uses a UITabBarController, styled as a sidebar, to serve as the primary navigation interface. This setup includes 20 different tabs, each representing a distinct section of the app.
For the user experience, each tab needs to present a master-detail interface, implemented using a UISplitViewController. The goal is to allow users to navigate between tabs via the sidebar, and within each tab, access related content through the split view's list-detail pattern.
The Problem:
Currently, my implementation involves instantiating a separate UISplitViewController for each tab, resulting in 20 unique split view instances embedded inside the UITabBarController. While this works functionally, it leads to significant memory usage, especially after the user opens each tab at least once. The accumulation of all these instantiated view controllers in memory eventually causes performance degradation or even memory warnings/crashes on lower-end iPads.
The Question:
What is the best approach to implement this type of architecture without running into memory management issues?
Specifically:
Is there a way to reuse or lazily load the UISplitViewController instances only when needed?
Can we unload or release split view controllers that haven't been used for a while to reduce memory pressure?
Would a custom container controller be more appropriate than using UITabBarController in this case?
Are there iPadOS 18+ best practices or newer APIs that support this kind of complex multi-tab, multi-split-view structure efficiently?
Any advice on how to optimize memory usage while preserving the sidebar navigation and split view layout would be highly appreciated.
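To make the lazy-load idea concrete, here is a minimal sketch of the direction I have in mind (an assumption about the architecture, not what I currently ship; LazySplitContainerController and the per-tab factory closure are illustrative names): each tab's root is a thin container that builds its UISplitViewController on first appearance and tears it down again under memory pressure while the tab is offscreen.

import UIKit

// Sketch: a thin per-tab container. The expensive split view hierarchy is
// built only when the tab is first shown, and released again on a memory
// warning while the tab is not visible.
final class LazySplitContainerController: UIViewController {
    private let makeSplit: () -> UISplitViewController
    private var split: UISplitViewController?

    init(makeSplit: @escaping () -> UISplitViewController) {
        self.makeSplit = makeSplit
        super.init(nibName: nil, bundle: nil)
        NotificationCenter.default.addObserver(
            self, selector: #selector(handleMemoryWarning),
            name: UIApplication.didReceiveMemoryWarningNotification, object: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        loadSplitIfNeeded()
    }

    private func loadSplitIfNeeded() {
        guard split == nil else { return }
        let child = makeSplit()
        addChild(child)
        child.view.frame = view.bounds
        child.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.addSubview(child.view)
        child.didMove(toParent: self)
        split = child
    }

    @objc private func handleMemoryWarning() {
        // Drop the hierarchy only while this tab is offscreen; the next
        // viewWillAppear rebuilds it (restoring selection state is on you).
        guard viewIfLoaded?.window == nil, let child = split else { return }
        child.willMove(toParent: nil)
        child.view.removeFromSuperview()
        child.removeFromParent()
        split = nil
    }
}

With something like this, the UITabBarController holds 20 cheap containers instead of 20 live split view hierarchies, and only the visited, on-screen tabs carry real weight.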
Our app needs to read server settings that are configured in the app's settings. In iPadOS 17.7.7 specifically (iPadOS 17.7.6, iPadOS 18.5, and other versions work fine), one can't retrieve any setting from the settings bundle using:
if ([[NSUserDefaults standardUserDefaults] objectForKey:@"setting_hostname"] != nil)
serverHostname = [[NSUserDefaults standardUserDefaults] objectForKey:@"setting_hostname"];
Also, when writing a custom value in NSUserDefaults like:
[[NSUserDefaults standardUserDefaults] setObject:@"Test" forKey:@"test"];
[[NSUserDefaults standardUserDefaults] synchronize];
NSString* test = [[NSUserDefaults standardUserDefaults] objectForKey:@"test"];
NSLog(@"%@", test);
an error is shown in the console:
Couldn't write values for keys ( test ) in CFPrefsPlistSource<0x3017ecc60> (Domain: <redacted_bundle_id>, User: kCFPreferencesCurrentUser, ByHost: No, Container: (null), Contents Need Refresh: No): setting these preferences requires user-preference-write or file-write-data sandbox access
After closing and reopening the app, reading the value with [[NSUserDefaults standardUserDefaults] objectForKey:@"test"]; returns null.
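As a partial mitigation we are considering (our own assumption, not a confirmed workaround; it does nothing for the failing writes), registering the DefaultValue entries from the Settings bundle at launch at least makes reads return the declared defaults while the on-disk domain is unreadable. A Swift sketch:

import Foundation

// Sketch: register Settings.bundle defaults so reads of unset keys fall
// back to the DefaultValue entries declared in Root.plist.
func registerSettingsBundleDefaults() {
    guard let url = Bundle.main.url(forResource: "Root", withExtension: "plist",
                                    subdirectory: "Settings.bundle"),
          let root = NSDictionary(contentsOf: url),
          let specifiers = root["PreferenceSpecifiers"] as? [[String: Any]] else { return }
    var defaults: [String: Any] = [:]
    for specifier in specifiers {
        if let key = specifier["Key"] as? String, let value = specifier["DefaultValue"] {
            defaults[key] = value
        }
    }
    UserDefaults.standard.register(defaults: defaults)
}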
I have a SPFx React application where I am printing the HTML page content using JavaScript's default window.print() functionality. Once I save the page as a PDF from the print preview window and open it using Adobe Acrobat, the links (e.g., Google) within the content are not clickable and appear as plain text.
I have tried printing random pages after searching for various keywords in Google and saving them as PDFs, but unfortunately the links are not clickable there either.
To check whether it is an Adobe Acrobat issue, I performed the same print operation on Android devices and shared the PDF files to the iOS devices; in that case, when opened using Adobe Acrobat, the links are clickable.
I am wondering whether it is something related to how the default print functionality works on iPadOS and iOS devices. Any insights on this would be really helpful. Thanks!!!
Note: The links are clickable on macOS as well as on Windows.
#ios #ipados #javascript #spfx #react
DESCRIPTION OF PROBLEM
When using SwiftUI’s TextField or TextEditor on iPadOS, a persistent gray or default-system-material bar appears at the bottom of the screen. This gray bar is not present in Apple’s native apps (such as Notes, Mail, and Messages) or in third-party apps like ChatGPT and Beeper.
STEPS TO REPRODUCE
Create a TextField or TextEditor, run the app, and tap the field with the software keyboard disabled (for example, with a hardware keyboard attached).
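A minimal reproduction sketch (nothing beyond a bare text field is assumed):

import SwiftUI

// Minimal repro: run on iPadOS with the software keyboard hidden (e.g., a
// hardware keyboard attached) and tap the field; the gray system bar
// appears at the bottom of the screen.
struct ReproView: View {
    @State private var text = ""
    var body: some View {
        VStack {
            TextField("Tap here", text: $text)
                .textFieldStyle(.roundedBorder)
                .padding()
            Spacer()
        }
    }
}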
Our app is an enterprise app via MDM.
We are experiencing an issue in iPadOS 18.4 when loading an internal HTTPS server via WKWebView in a hybrid iOS app.
Our server uses a self-signed certificate that lacks the digitalSignature usage in its Key Usage extension. (Currently we have no way to change the server's certificate.)
We override webView:didReceiveAuthenticationChallenge:completionHandler: to trust the certificate:
completionHandler(NSURLSessionAuthChallengeUseCredential, credential);
This "completionHandler" works in previous 18.3.2 , but not work in 18.4. May I know is there any changes in 18.4 for the https certification? Why this delegate not work? What we can do to ignore this ssl error and get connection?
Thanks in advance, look forward for your reply.
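For reference, a Swift sketch of the trust override we use (TrustAllNavigationDelegate is an illustrative name; this blindly trusts whatever certificate the server presents, which is only defensible for a known internal host):

import WebKit

// Equivalent of our Objective-C challenge handler: accept the server's
// certificate for server-trust challenges, default handling otherwise.
final class TrustAllNavigationDelegate: NSObject, WKNavigationDelegate {
    func webView(_ webView: WKWebView,
                 didReceive challenge: URLAuthenticationChallenge,
                 completionHandler: @escaping (URLSession.AuthChallengeDisposition, URLCredential?) -> Void) {
        guard challenge.protectionSpace.authenticationMethod == NSURLAuthenticationMethodServerTrust,
              let serverTrust = challenge.protectionSpace.serverTrust else {
            completionHandler(.performDefaultHandling, nil)
            return
        }
        // Build a credential from the presented trust and accept it.
        completionHandler(.useCredential, URLCredential(trust: serverTrust))
    }
}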
In an iPadOS SwiftUI app supporting multiple scenes, each Scene responds to a particular way in which the app was launched. If app was launched by tapping an associated file or a deep link (custom URL), then, the URLHandlerScene is invoked. If app was launched by QuickAction (long tap on the app icon), then another Scene is invoked etc. Each Scene has a purpose and responds to a particular launch.
But after adding the handlesExternalEvents(matching:) scene modifier, the scene is not launched when the user taps an associated file or invokes one of the app's deep links.
@main
struct IOSSwiftUIScenesApp: App {
    var body: some Scene {
        DefaultScene()
        URLHandlerScene()
            .handlesExternalEvents(matching: ["file://"]) // Launched by an associated file
            .handlesExternalEvents(matching: ["Companion://"]) // Launched by Deeplink.
        // Other scenes
    }
}

struct URLHandlerScene: Scene {
    @State private var inputURL: URL // Store the incoming URL

    init() {
        self.inputURL = URL(string: "Temp://")!
    }

    var body: some Scene {
        WindowGroup {
            URLhandlerView(inputURL: $inputURL)
                .onOpenURL(perform: { (fileURL: URL) in
                    log(String(format: "URLhandlerView().onOpenURL | Thread.current = %@", String(describing: Thread.current)))
                    log("fileURL = " + String(describing: fileURL))
                    inputURL = fileURL
                })
        }
    }
}
As shown above, I've attached the handlesExternalEvents(matching:) modifier with "file://" for the associated file; "Companion" is my custom URL scheme. As per the scene-matching rules documented here, my URLHandlerScene should get launched, but every time I launch the app via an associated file or by opening a deep link, the DefaultScene is launched instead.
What is missing here? Can someone please help?
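One variant worth trying (my assumption; the documentation does not spell out how chained calls compose) is a single modifier carrying both patterns, in case a second handlesExternalEvents call replaces rather than extends the first set of conditions:

URLHandlerScene()
    .handlesExternalEvents(matching: ["file://", "Companion://"])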
I’d love to see Apple implement a Bionic Reading feature as a system-wide accessibility option. This type of reading aid highlights the first part of each word in bold to help guide the eyes and improve comprehension.
It’s been shown to be especially helpful for people with ADHD, dyslexia, and other neurodivergent needs. Having a toggle in Settings > Accessibility would be life-changing.
Ideally, it could be:
• Enabled system-wide or per app
• Customizable in how much of each word is bolded
• Available in Safari, Messages, Books, News, etc.
When presenting a SwiftUI sheet containing ObservableObject's injected using environmentObject(_) modifier, the objects are unexpectedly retained after the sheet is dismissed if a TextField within the sheet gains focus or is edited.
This issue occurs on iOS and iPadOS (on macOS the objects are always released), observable both in the simulator and on physical devices, and happens even when the view does not explicitly reference these environment objects, and the TextField's content isn't bound to them.
Expected Results:
When the sheet is dismissed, all environment objects passed to the sheet’s content view should be released (deinitialized), regardless of whether the TextField was focused or edited.
Actual Results:
If the TextField was focused or edited, environment objects (ObservableA and ObservableB) are retained after the sheet is dismissed. They are not deinitialized as expected, leading to unintended retention.
Interestingly, previously retained copies of these environment objects, if any, are released precisely at the moment the TextField becomes focused on subsequent presentations, indicating an inconsistent lifecycle behavior.
I have filed an issue FB17226970
Sample Code
Below is a sample code that consistently shows the issue on iOS 18.3+.
Steps to Reproduce:
Run the attached SwiftUI sample.
Tap the button labeled “Show Sheet” to present a sheet.
Tap on the TextField to focus or begin editing.
Dismiss the sheet by dragging it down or by other dismissal methods (e.g., tapping outside on iPadOS).
import SwiftUI

struct ContentView: View {
    @State private var showSheet: Bool = false

    var body: some View {
        VStack {
            Button("Show Sheet") {
                showSheet = true
            }
        }
        .sheet(isPresented: $showSheet) {
            SheetContentView()
                .environmentObject(ObservableA())
                .environmentObject(ObservableB())
        }
    }
}

struct SheetContentView: View {
    @State private var text: String = ""

    var body: some View {
        TextField("Select to retain observable objects", text: $text)
            .textFieldStyle(.roundedBorder)
    }
}

final class ObservableA: ObservableObject {
    init() {
        print(type(of: self), #function)
    }
    deinit {
        print(type(of: self), #function)
    }
}

final class ObservableB: ObservableObject {
    init() {
        print(type(of: self), #function)
    }
    deinit {
        print(type(of: self), #function)
    }
}

#Preview {
    ContentView()
}
I have written an App which extracts data, over WiFi, from an instrument that creates its own WiFi Hotspot.
The instrument provides no internet connection. The iPad version of this app connects fine and is assigned an IP address by the DHCP server running on a Microchip RN171 WiFi module.
On the iPhone, iOS assigns an obscure IP address on a completely different subnet. I understand this is iOS's way of "complaining" that it wasn't assigned an IP address.
Consequently, on the iPhone I am forced to manually assign an IP address, the mask, and the gateway. Only then is the connection successful.
Does anyone know why the iPhone won't talk DHCP to a WiFi module not connected to the internet? Are there perhaps some parameters that I need to adjust on either the iPhone or the WiFi module?
We are experiencing a lot of problems deploying an enterprise app for in-house use since late January. All our iPads are managed by an MDM solution. It can take 10 or more attempts to successfully deploy the app. The deployment usually fails with the message "ASDErrorDomain error 854" among other messages. The company providing the MDM solution has no idea what causes this message or what it means. I suspect the error message is not generated by the MDM solution but rather gets passed through from iOS. After many attempts the installation may suddenly succeed, and the app works as expected, but this can take weeks.
I have not made any changes to my development system. I am running Xcode 15.3 with SDK version 17.4; the iPads are on iOS 18.3.
I'm using an AVCaptureSession to send video and audio samples to an AVAssetWriter. When I play back the resultant video, sometimes there is a significant lag between the audio compared with the video, so they're just not in sync. But sometimes they are, with the same code.
If I look at the very first presentation time stamps of the buffers being sent to the delegate, via
func captureOutput(_: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection)
I see something like this:
Adding audio samples for pts time 227711.0855328798,
Adding video samples for pts time 227710.778785374
That is, the audio and video clocks don't line up: the first audio sample I receive is at 11.08 something, while the first video sample is earlier in time, at 10.778 something. The times are the presentation time stamps of the buffers, and the outputPresentationTimeStamp is the exact same number.
It feels like the "video" and "audio" clocks are simply mismatched.
This doesn't always happen: sometimes they're synced. Sometimes they're not.
Any ideas? The device I'm recording from is a webcam connected to the iPad via the USB-C port.
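One mitigation sketch (assuming an AVAssetWriter property named assetWriter, configured elsewhere; not a confirmed fix for the clock mismatch): start the writer session at the PTS of the first buffer that actually arrives, whichever output delivers it, so both tracks share the same timeline origin.

import AVFoundation

// Sketch: anchor the writer session to the first presentation timestamp
// seen from either the audio or the video output.
final class Recorder: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate,
                      AVCaptureAudioDataOutputSampleBufferDelegate {
    let assetWriter: AVAssetWriter // assumed to be fully configured elsewhere

    init(assetWriter: AVAssetWriter) { self.assetWriter = assetWriter }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if assetWriter.status == .unknown {
            guard assetWriter.startWriting() else { return }
            assetWriter.startSession(atSourceTime: pts)
        }
        // ... append the buffer to the matching AVAssetWriterInput ...
    }
}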
I am making a USB-attached IoT device that follows the Matter approach to connectivity (IP/mDNS/DHCP). I am having conflicts with it because it appears to macOS as an Ethernet adapter, which causes it to be assigned as a "default" route and interferes with routing when my Mac is connected to NAT-based WiFi.
I'd like to be able to hint to macOS and iPadOS that this is not a routable private network: the subnet should be respected and a default route should not be assigned to it. Otherwise the order of device connection drives the IP routing tables, and I am concerned my non-routable private network will initialize before WiFi and block NAT-based internet connectivity.
How can I hint to macOS/iPadOS, "this is not a routable private network, this is not a NAT, do not assign me a default route beyond the subnet I have provided you"?
Please update the Accessibility settings for VoiceOver on iOS and iPadOS to include frames on the Rotor, and make web navigation and component gestures easier to find and assign. Please also add content to the iPhone and iPad User Guides on using VoiceOver for web navigation with touch gestures.
Specifically... iframes.
There is no clear guidance in Apple documentation for VoiceOver users on iPhone or iPad on accessing iframes with touch gestures. A common belief, as written on AppleVis, other blogs, and internet searches, is that iframes in Safari or a web view in an app are only reachable with explore by touch.
If explore by touch is the only option for some interactions, that needs to be stated in the Apple User Guides. If not, the touch-gesture equivalents of the VoiceOver keyboard interactions on Mac need to be made clear to users.
VoiceOver for Mac includes a default keyboard interaction of VO-Command-F in its extensive User Guide (https://support.apple.com/guide/voiceover/by-images-or-frames-mchlp2740/mac). A user can include a rotor option for web navigation for iframes.
VoiceOver for iPhone and iPad does not include a default swipe gesture assigned to frames. An option is not available for the Rotor.
While the iPhone User Guide does note that gestures can be customized (https://support.apple.com/guide/iphone/customize-gestures-and-keyboard-shortcuts-iph59a8e6fd2/18.0/ios/18.0), it is not clear that the command to assign, "Move to the next frame," is tucked into the advanced navigation commands in VoiceOver's Accessibility Settings. At least on my phone, the word "frame" was not searchable despite the All Commands screen having a search bar.
Problem description:
Environment:
Xcode version: 16.3
macOS version: 15.4
Project type: SwiftUI App with SwiftData
Summary:
When a specific iCloud Container is associated with an app that has the iCloud capability enabled under "Signing & Capabilities" in the Xcode project, SwiftData's ModelContainer fails immediately during app-launch initialization and crashes, even though the code never explicitly enables CloudKit sync (for example, cloudKitDatabase is not set in the ModelConfiguration). If the iCloud capability is enabled but no Container is selected, the app launches normally. The issue reproduces reliably even when testing with an empty Schema([]).
Steps to reproduce (using a minimal reproducible example - MRE):
Create a new SwiftUI App project with Xcode 16.3 (named CloudKitCrashTest, for example).
Do not check "Host in CloudKit" when creating it; or, if you did, make sure the following steps override the relevant settings. The issue reproduces with Storage set to either None or SwiftData.
Modify the CloudKitCrashTestApp.swift file to import SwiftData and add basic ModelContainer initialization logic. The key code is as follows:
import SwiftUI
import SwiftData

@main
struct CloudKitCrashTestApp: App {
    let sharedModelContainer: ModelContainer

    init() {
        // Use an empty Schema for diagnosis
        let schema = Schema([])
        // Do *not* specify cloudKitDatabase
        let modelConfiguration = ModelConfiguration(schema: schema, isStoredInMemoryOnly: false)
        do {
            sharedModelContainer = try ModelContainer(for: schema, configurations: [modelConfiguration])
            print("ModelContainer created successfully.")
        } catch {
            // After a Container is associated, the crash happens here
            fatalError("Could not create ModelContainer: \(error)")
        }
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
                .modelContainer(sharedModelContainer)
        }
    }
}

// ... ContentView definition ...
Open the project's Signing & Capabilities tab.
Click + Capability and add iCloud.
Under the iCloud services, check CloudKit.
At this point, do not select any container in the Containers section.
Run the app: it should launch successfully and print "ModelContainer created successfully." to the console.
Stop the app.
Go back to Signing & Capabilities -> iCloud -> Containers.
Click + to add a new custom container, or select the container Xcode suggests by default. Make sure one container ID is explicitly selected.
Run the app again: it now crashes immediately at launch, hitting the fatalError.
Expected results:
Even with an iCloud Container associated in Signing & Capabilities, the app should be able to initialize the ModelContainer successfully (especially when the code does not use cloudKitDatabase, or uses an empty Schema) and should not crash.
Actual results:
The app crashes during ModelContainer initialization with a fatalError and this message:
Could not create ModelContainer: SwiftDataError(_error: SwiftData.SwiftDataError._Error.loadIssueModelContainer, _explanation: nil)
Additional notes:
The issue reproduces reliably in the iPhone simulator.
Cleaning the build folder and deleting Derived Data did not resolve the problem.
Creating the project from the Xcode template (with "Host in CloudKit" checked) and then manually deselecting the Container lets it run, but it crashes again as soon as a Container is re-selected; the behavior is identical.
This suggests the problem is an internal error in the ModelContainer initialization process when it detects that the project is associated with a specific iCloud Container identifier (possibly while reading the .entitlements file or preparing the related environment), rather than anything caused by actual CloudKit sync code or the model definitions.
Impact:
This bug blocks developing and testing any SwiftData app that needs an associated iCloud Container in the current Xcode and macOS environment.
I'm trying to determine if it’s possible to detect when a user interacts with a Slide Over window while my app is running in the background on iPadOS. I've explored lifecycle methods such as scenePhase and various UIApplication notifications (e.g., willResignActiveNotification) to detect focus loss, but these approaches don't seem to capture the event reliably. Has anyone found an alternative solution or workaround for detecting this specific state change? Any insights or recommended practices would be greatly appreciated.
My app is a SwiftUI document-based app using DocumentGroupLaunchScene. In iOS/iPadOS 18.4, when it launches it has duplicate toolbar items, and when I close the current document and open other documents it adds more duplicates. It also shows the wrong document name: it keeps showing the name of the first opened document. This issue can be reproduced with the sample code (Building a document-based app with SwiftUI).
I have submitted Feedback (FB17025216), but not sure if this is a known bug or if I'm missing anything.
I have created a custom class:
class TarifsHeaderFooterView: UITableViewHeaderFooterView { …}
With its init:
override init(reuseIdentifier: String?) {
    super.init(reuseIdentifier: reuseIdentifier)
    configureContents()
}
I register the custom header view in viewDidLoad of the class that uses the tableView. The table's delegate and data source are defined in the storyboard.
tarifsTableView.register(TarifsHeaderFooterView.self, forHeaderFooterViewReuseIdentifier: headerTarifsIdentifier)
And call it:
func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? {
    let view = tableView.dequeueReusableHeaderFooterView(withIdentifier: headerTarifsIdentifier) as! TarifsHeaderFooterView
That works on iPhone (simulators and devices).
But it crashes on any iPad simulator, because tableView.dequeueReusableHeaderFooterView(withIdentifier: headerTarifsIdentifier) returns nil and the forced cast traps.
What difference between iPhone and iPad am I missing that explains this crash?
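In the meantime, a defensive variant (a sketch, not a diagnosis of the iPad behavior) avoids the force-cast so a nil dequeue fails visibly instead of trapping:

func tableView(_ tableView: UITableView, viewForHeaderInSection section: Int) -> UIView? {
    guard let view = tableView.dequeueReusableHeaderFooterView(
            withIdentifier: headerTarifsIdentifier) as? TarifsHeaderFooterView else {
        // Surfaces the registration problem without crashing release builds.
        assertionFailure("TarifsHeaderFooterView is not registered on this table view")
        return nil
    }
    return view
}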
PS: sorry for the messy message. It looks like the forum's new "feature" of putting a Copy code button on code parts causes the code sections to be placed at the very end of the post, not where they belong.
Case-ID: 12591306
Use Xcode 16.x to compile an iPhone demo app and run it on an iPad (iPadOS 18.x) device or simulator.
Launch the iPhone app and activate Picture-in-Picture mode.
Attempt to input text; the system keyboard does not appear.
Compare the output of [[UIScreen mainScreen] bounds] before and after enabling Picture-in-Picture mode, notice the values change unexpectedly after enabling PiP.
This issue can be consistently reproduced on iPadOS 18.x when running an app built with Xcode 16.0 to Xcode 16.3RC(16E137).
Beginning April 24, 2025, apps uploaded to App Store Connect must be built with Xcode 16 or later using an SDK for iOS 18, iPadOS 18, tvOS 18, visionOS 2, or watchOS 11. We therefore urgently hope to receive a solution from Apple.
When an external keyboard is connected to the iPad, the software keyboard often becomes hidden and cannot be made to show again programmatically.
Help! Is there a way to programmatically get the software keyboard to re-appear when an external keyboard is connected without clicking 'Show Keyboard' from the Keyboard Shortcuts menu-item button?
It's corner-casey for sure: with our own keyboard extension, we have a subview that is added to the software keyboard view under a certain use case. When the user has an external keyboard connected, everything is fine until the app is backgrounded. When the app is re-foregrounded, the software keyboard often becomes hidden (what the user now sees is a keyboard-shortcut menu-item group with a keyboard button and a mic button). This makes it impossible for the user to interact with the subview we added (because it is hidden), unless the user manually shows the software keyboard again by tapping "Show Keyboard" from the keyboard shortcut menu.
The software keyboard view is still the first responder (accepting keystrokes, etc.); we just cannot find a way to programmatically make it visible again. Sigh.
Hello Apple Forum,
We were testing the following entitlements: 'Increased Memory Limit', 'Extended Virtual Addressing', and 'Extended Virtual Addressing (Debug)' when we realized the maximum amount of memory we could allocate had dropped from 32GB in our previous tests to 16GB.
We were testing these on the following devices:
iPad Pro 12.9(6th Gen) 18.4(Beta, 22E4232a)
iPad Pro 11 M4 18.3.2
Each device has 16GB of physical RAM, and because we are able to reach 16GB of allocations before the app crashes, we believe the entitlements are applied correctly. Each test device was tried both charging and on battery, at 60, 80, and 100% charge.
We understand that allocating memory is complex and that the OS is optimized for battery efficiency, hence possibly limiting maximum memory usage.
However, with the same testing method we now use on iPadOS 18.3 and 18.4, we were able to allocate 31~32GB of RAM in testing done in January of this year (on iPadOS 18.0, or maybe 18.1?), which makes us wonder: has there been a change in how the core OS handles memory allocation since 18.0, and what could be the cause of this drop?
The code we ran for our memory testing is as follows:
private var allocatedMemory: [UnsafeMutableRawPointer] = []

public func allocate1GBMemory() {
    let sizeInBytes = 1024 * 1024 * 1024
    if let pointer = malloc(sizeInBytes) {
        // Touch every byte so the pages are actually committed.
        for i in 0..<sizeInBytes {
            pointer.advanced(by: i).storeBytes(of: UInt8(i % 256), as: UInt8.self)
        }
        allocatedMemory.append(pointer)
        logger.print("Allocated 1GB of memory. Total allocated: \(allocatedMemory.count)GB")
    } else {
        logger.print("Failed to allocate memory")
    }
}
Each function call increases memory usage by allocating 1GB of space; we called this function repeatedly until usage reached 16GB, at which point the app crashed.
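A companion check we may add (a sketch; we assume os_proc_available_memory() from os/proc.h is the right way to read the remaining footprint): log the available memory before and after each allocation so the per-OS-version ceiling is visible without having to crash the app.

import os

// Reports the remaining allocatable footprint for this process, in MB.
func logAvailableMemory() {
    let remainingMB = os_proc_available_memory() / (1024 * 1024)
    print("Available memory: \(remainingMB) MB")
}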