I'm facing an issue when using SwiftUI's WebView on iOS 26. On many websites, the top/bottom content is inaccessible because it sits under the app's toolbars. It feels like the WebView doesn't really understand the safe areas where it's shown: the content should start right below the navigation bar, and only when the user scrolls down should the content move under the bar (but it should always be reachable if the user scrolls back up).
Here's a demo of the issue:
Here's a 'fix' that ensures the content of the WebView never leaves its bounds. But as you can see, it feels out of place on iOS 26 (it would be fine on previous OS versions if you had a fully opaque toolbar):
Code:
struct ContentView: View {
    var body: some View {
        NavigationStack {
            WebView(url: URL(string: "https://apple.com"))
                .toolbar {
                    ToolbarItem(placement: .primaryAction) {
                        Button("Top content covered, unaccessible.") {}
                    }
                }
        }
    }
}
Does anyone know if there's a way to fix it using some combination of view modifiers, or is it just broken as-is?
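For reference, the clipped-bounds 'fix' I mean is roughly the following sketch: a plain WKWebView wrapper (my own illustrative naming) whose scroll view insets its content for the bars, trading away the edge-to-edge look:

import SwiftUI
import WebKit

// Sketch of the clipped-bounds workaround, assuming a plain WKWebView
// wrapper (illustrative naming). contentInsetAdjustmentBehavior asks
// UIKit to inset the scrollable content for bars and safe areas.
struct SafeAreaWebView: UIViewRepresentable {
    let url: URL

    func makeUIView(context: Context) -> WKWebView {
        let webView = WKWebView()
        webView.scrollView.contentInsetAdjustmentBehavior = .always
        webView.load(URLRequest(url: url))
        return webView
    }

    func updateUIView(_ uiView: WKWebView, context: Context) {}
}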
I've been playing around with the long press gesture in a scroll view and noticed that gesture(LongPressGesture()) doesn't seem to play well with the scroll view's scrolling, which doesn't seem like the intended behavior to me.
Take the following example: the blue rectangle is modified with onLongPressGesture and the red rectangle is modified with LongPressGesture (_EndedGesture<LongPressGesture>, to be specific).
ScrollView {
    Rectangle()
        .fill(.blue)
        .frame(width: 200, height: 200)
        .onLongPressGesture {
            print("onLongPressGesture performed")
        } onPressingChanged: { _ in
            print("onLongPressGesture changed")
        }
        .overlay {
            Text("onLongPressGesture")
        }
    Rectangle()
        .fill(.red)
        .frame(width: 200, height: 200)
        .gesture(LongPressGesture()
            .onEnded { _ in
                print("gesture ended")
            })
        .overlay {
            Text("gesture(LongPressGesture)")
        }
}
If you start scrolling from either of the rectangles (that is, start scrolling with your finger on either of the rectangles), the ScrollView will scroll.
However, if the LongPressGesture has either .onChanged or .updating attached, the ScrollView won't scroll when the drag starts from the red rectangle. Even setting maximumDistance to 0 doesn't help. Its counterpart onLongPressGesture behaves differently: even with onPressingChanged attached, scrolling still works when started from the onLongPressGesture-modified view.
ScrollView {
    Rectangle()
        .fill(.blue)
        .frame(width: 200, height: 200)
        .onLongPressGesture {
            print("onLongPressGesture performed")
        } onPressingChanged: { _ in
            print("onLongPressGesture changed")
        }
        .overlay {
            Text("onLongPressGesture")
        }
    Rectangle()
        .fill(.red)
        .frame(width: 200, height: 200)
        .gesture(LongPressGesture(maximumDistance: 0)
            // Scrolling from the red rectangle breaks if I add either `updating` or `onChanged`; both are here just to demonstrate.
            // You will need `@GestureState private var isPressing = false` in your view.
            .updating($isPressing) { value, state, transaction in
                state = value
                print("gesture updating")
            }
            .onChanged { value in
                print("gesture changed")
            }
            .onEnded { _ in
                print("gesture ended")
            })
        .overlay {
            Text("gesture(LongPressGesture)")
        }
}
This doesn't seem right to me. I would expect that a view modified with LongPressGesture(), whether or not the gesture has .onChanged or .updating, should still allow a scroll to start in a scroll view, just like onLongPressGesture does.
I observed this behavior on a physical device running iOS 26.1; I don't know the behavior on other versions.
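For anyone else hitting this, a workaround sketch (not a confirmed fix, and unverified against iOS 26.1): attach the long press with .simultaneousGesture instead of .gesture so it no longer blocks the scroll view's drag:

import SwiftUI

// Workaround sketch: .simultaneousGesture lets the ScrollView's pan run
// alongside the long press's updates, instead of competing with it.
struct SimultaneousLongPressDemo: View {
    @GestureState private var isPressing = false

    var body: some View {
        ScrollView {
            Rectangle()
                .fill(.red)
                .frame(width: 200, height: 200)
                .simultaneousGesture(
                    LongPressGesture(maximumDistance: 0)
                        .updating($isPressing) { value, state, _ in
                            state = value
                        }
                        .onEnded { _ in print("gesture ended") }
                )
        }
    }
}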
Such a simple piece of code:
import SwiftUI
import WebKit
struct ContentView: View {
    var body: some View {
        WebView(url: URL(string: "https://www.apple.com"))
    }
}
When I run this, the web content shows under the top notch’s safe area, and buttons inside that region aren’t tappable. I tried a bunch of things and the only “fix” that seems to work is .padding(.top, 1), but that leaves a noticeable white strip in non-portrait orientations.
What’s the proper way to solve this? Safari handles the safe area correctly and doesn’t render content there.
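For reference, a sketch of one alternative to the .padding(.top, 1) hack (an assumption on my part, not a confirmed fix): read the real top inset from a GeometryReader and pad by that amount, so landscape doesn't get the stray strip:

import SwiftUI
import WebKit

// Sketch (assumption, not a confirmed fix): pad the WebView by the actual
// top safe-area inset instead of a hardcoded 1pt, so every orientation
// gets the right spacing.
struct InsetWebView: View {
    var body: some View {
        GeometryReader { proxy in
            WebView(url: URL(string: "https://www.apple.com"))
                .padding(.top, proxy.safeAreaInsets.top)
        }
        .ignoresSafeArea()
    }
}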
Hi there! I'm having an issue with my main window: there's a big empty space at the top without any logical explanation (at least to my limited knowledge).
With the code below I'm getting this window layout:
Does anybody have any guidance on how to get rid of that extra space at the top?
Thanks a lot!
import SwiftUI
import SwiftData
#if os(macOS)
import AppKit
#endif

// Helper to access and control NSWindow for size/position persistence
#if os(macOS)
struct WindowAccessor: NSViewRepresentable {
    let onWindow: (NSWindow) -> Void

    func makeNSView(context: Context) -> NSView {
        let view = NSView()
        DispatchQueue.main.async {
            if let window = view.window {
                onWindow(window)
            }
        }
        return view
    }

    func updateNSView(_ nsView: NSView, context: Context) {
        DispatchQueue.main.async {
            if let window = nsView.window {
                onWindow(window)
            }
        }
    }
}
#endif

@main
struct KaraoPartyApp: App {
    @StateObject private var songsModel = SongsModel()
    @Environment(\.openWindow) private var openWindow

    var body: some Scene {
        Group {
            WindowGroup {
                #if os(macOS)
                WindowAccessor { window in
                    window.minSize = NSSize(width: 900, height: 700)
                    // Configure window to eliminate title bar space
                    window.titleVisibility = .hidden
                    window.titlebarAppearsTransparent = true
                    window.styleMask.insert(.fullSizeContentView)
                }
                #endif
                ContentView()
                    .environmentObject(songsModel)
            }
            .windowToolbarStyle(.unifiedCompact)
            .windowResizability(.contentSize)
            .defaultSize(width: 1200, height: 900)
            .windowStyle(.titleBar)
            #if os(macOS)
            .windowToolbarStyle(.unified)
            #endif

            WindowGroup("CDG Viewer", id: "cdg-viewer", for: CDGWindowParams.self) { $params in
                if let params = params {
                    ZStack {
                        #if os(macOS)
                        WindowAccessor { window in
                            window.minSize = NSSize(width: 600, height: 400)
                            // Restore window frame if available
                            let key = "cdgWindowFrame"
                            let defaults = UserDefaults.standard
                            if let frameString = defaults.string(forKey: key) {
                                let frame = NSRectFromString(frameString)
                                if window.frame != frame {
                                    window.setFrame(frame, display: true)
                                }
                            } else {
                                // Open CDG window offset from main window
                                if let mainWindow = NSApp.windows.first {
                                    let mainFrame = mainWindow.frame
                                    let offsetFrame = NSRect(x: mainFrame.origin.x + 60, y: mainFrame.origin.y - 60, width: 800, height: 600)
                                    window.setFrame(offsetFrame, display: true)
                                }
                            }
                            // Observe frame changes and save
                            NotificationCenter.default.addObserver(forName: NSWindow.didMoveNotification, object: window, queue: .main) { _ in
                                let frameStr = NSStringFromRect(window.frame)
                                defaults.set(frameStr, forKey: key)
                            }
                            NotificationCenter.default.addObserver(forName: NSWindow.didEndLiveResizeNotification, object: window, queue: .main) { _ in
                                let frameStr = NSStringFromRect(window.frame)
                                defaults.set(frameStr, forKey: key)
                            }
                        }
                        #endif
                        CDGView(
                            cancion: Cancion(
                                title: params.title ?? "",
                                artist: params.artist ?? "",
                                album: "",
                                genre: "",
                                year: "",
                                bpm: "",
                                playCount: 0,
                                folderPath: params.cdgURL.deletingLastPathComponent().path,
                                trackName: params.cdgURL.deletingPathExtension().lastPathComponent + ".mp3"
                            ),
                            backgroundType: params.backgroundType,
                            videoURL: params.videoURL,
                            cdfContent: params.cdfContent.flatMap { String(data: $0, encoding: .utf8) },
                            artist: params.artist,
                            title: params.title
                        )
                    }
                } else {
                    Text("No se pudo abrir el archivo CDG.")
                }
            }
            .windowResizability(.contentSize)
            .defaultSize(width: 800, height: 600)

            WindowGroup("Metadata Editor", id: "metadata-editor") {
                MetadataEditorView()
                    .environmentObject(songsModel)
            }
            .windowResizability(.contentSize)
            .defaultSize(width: 400, height: 400)

            WindowGroup("Canciones DB", id: "canciones-db") {
                CancionesDBView()
            }
            .windowResizability(.contentSize)
            .defaultSize(width: 800, height: 500)

            WindowGroup("Importar canciones desde carpeta", id: "folder-song-importer") {
                FolderSongImporterView()
            }
            .windowResizability(.contentSize)
            .defaultSize(width: 500, height: 350)
        }
        .modelContainer(for: Cancion.self)
        // Add menu command under Edit
        .commands {
            CommandGroup(replacing: .pasteboard) { }
            CommandMenu("Edit") {
                Button("Actualizar Metadatos") {
                    openWindow(id: "metadata-editor")
                }
                .keyboardShortcut(",", modifiers: [.command, .shift])
            }
            CommandMenu("Base de Datos") {
                Button("Ver Base de Datos de Canciones") {
                    openWindow(id: "canciones-db")
                }
                .keyboardShortcut("D", modifiers: [.command, .shift])
            }
        }
    }

    init() {
        print("\n==============================")
        print("[KaraoParty] Nueva ejecución iniciada: \(Date())")
        print("==============================\n")
    }
}
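One thing I'd try (a sketch based on my assumption that the gap is title-bar space configured too late by WindowAccessor): hide the title bar at the scene level, so the window is configured before it's ever shown:

import SwiftUI

// Sketch, assuming the extra space is the title-bar area:
// .windowStyle(.hiddenTitleBar) configures the window up front,
// with no NSWindow mutation needed. ContentView is the app's own view.
@main
struct HiddenTitleBarApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .windowStyle(.hiddenTitleBar)
    }
}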
Apparently, with iOS 26.1, if you have .tabViewBottomAccessory { } you now get a pill-shaped floater all the time. It wasn't like that in 26.0.
Hi everyone,
I’m building a full-screen Map (MapKit + SwiftUI) with persistent top/bottom chrome (menu buttons on top, session stats + map controls on bottom). I have three working implementations and I’d like guidance on which pattern Apple recommends long-term (gesture correctness, safe areas, Dynamic Island/home indicator, and future compatibility).
Version 1 — overlay(alignment:) on Map
Idea: Draw chrome using .overlay(alignment:) directly on the map and manage padding manually.
Map(position: $viewModel.previewMapCameraPosition, scope: mapScope) {
    UserAnnotation {
        UserLocationCourseMarkerView(angle: viewModel.userCourse - mapHeading)
    }
}
.mapStyle(viewModel.mapType.mapStyle)
.mapControls {
    MapUserLocationButton().mapControlVisibility(.hidden)
    MapCompass().mapControlVisibility(.hidden)
    MapPitchToggle().mapControlVisibility(.hidden)
    MapScaleView().mapControlVisibility(.hidden)
}
.overlay(alignment: .top) { mapMenu }         // manual padding inside
.overlay(alignment: .bottom) { bottomChrome } // manual padding inside
Version 2 — ZStack + .safeAreaPadding
Idea: Place the map at the back, then lay out top/bottom chrome in a VStack inside a ZStack, and use .safeAreaPadding(.all) so content respects safe areas.
ZStack(alignment: .top) {
    Map(...).ignoresSafeArea()
    VStack {
        mapMenu
        Spacer()
        bottomChrome
    }
    .safeAreaPadding(.all)
}
Version 3 — .safeAreaInset on the Map
Idea: Make the map full-bleed and then reserve top/bottom space with safeAreaInset, letting SwiftUI manage the insets.
Map(...).ignoresSafeArea()
    .mapStyle(viewModel.mapType.mapStyle)
    .mapControls {
        MapUserLocationButton().mapControlVisibility(.hidden)
        MapCompass().mapControlVisibility(.hidden)
        MapPitchToggle().mapControlVisibility(.hidden)
        MapScaleView().mapControlVisibility(.hidden)
    }
    .safeAreaInset(edge: .top) { mapMenu }         // manual padding inside
    .safeAreaInset(edge: .bottom) { bottomChrome } // manual padding inside
Question
I noticed:
Safe-area / padding behavior
– Version 2 requires the least extra padding and seems to create a small but partial safe-area spacing automatically.
– Version 3 still needs roughly the same manual padding as Version 1, even though it uses safeAreaInset. Why doesn’t safeAreaInset fully handle that spacing?
Rotation crash (Metal)
When using Version 3 (safeAreaInset + ignoresSafeArea), rotating the device portrait↔landscape several times triggers a Metal crash:
failed assertion 'The following Metal object is being destroyed while still required… CAMetalLayer Display Drawable'
The same crash can happen with Version 1, though less often. I haven’t tested it much with Version 2.
Is this a known issue or race condition between Map’s internal Metal rendering and view layout changes?
Expected behavior
What’s the intended or supported interaction between safeAreaInset, safeAreaPadding, and overlay when embedding persistent chrome inside a SwiftUI Map?
Should safeAreaInset normally remove the need for manual padding, or is that by design?
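For concreteness, here's a minimal, self-contained sketch of the Version 3 pattern (the chrome and camera state are illustrative; my assumption is that the inset content should carry its own background rather than relying on manual padding):

import SwiftUI
import MapKit

// Minimal sketch of the safeAreaInset pattern (illustrative chrome, not a
// confirmed recommendation). The inset views reduce the map's safe area,
// while the map itself still renders edge to edge behind them.
struct ChromedMapDemo: View {
    @State private var camera: MapCameraPosition = .automatic

    var body: some View {
        Map(position: $camera)
            .safeAreaInset(edge: .top) {
                Text("Menu")
                    .padding()
                    .frame(maxWidth: .infinity)
                    .background(.ultraThinMaterial)
            }
            .safeAreaInset(edge: .bottom) {
                Text("Stats")
                    .padding()
                    .frame(maxWidth: .infinity)
                    .background(.ultraThinMaterial)
            }
    }
}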
I have a working XIB app that runs a Linux VM with a graphical interface. I am trying to rewrite it in SwiftUI but run into all sorts of problems when using a combination of a Representable for the VZVirtualMachineView, an associated Coordinator, and @StateObjects.
a) The VM display is not updated while running, but is displayed if I close the window and reopen it. Because the underlying VZVirtualMachineView is created/dismantled many times, there are warnings about negative scanouts that end up crashing the app.
b) Keyboard focus is not really working.
https://developer.apple.com/forums/thread/766014 reports that there is probably a solution in making an NSViewControllerRepresentable rather than a VZVirtualMachineView representable.
I think I am fighting against the proper SwiftUI lifecycle and would love a hint at what the right organization of the model and SwiftUI constructs should be.
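In case it clarifies the direction I'm considering, here's a minimal sketch (my assumption about the fix, with illustrative naming): keep a single VZVirtualMachineView alive in the Coordinator so SwiftUI updates don't tear it down and recreate it:

import SwiftUI
import Virtualization

// Sketch under the assumption that the scanout warnings come from the view
// being recreated: the Coordinator owns one VZVirtualMachineView for the
// lifetime of the representable. Naming is illustrative.
struct VMDisplayView: NSViewRepresentable {
    let virtualMachine: VZVirtualMachine

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeNSView(context: Context) -> VZVirtualMachineView {
        let view = context.coordinator.vmView
        view.virtualMachine = virtualMachine
        return view
    }

    func updateNSView(_ nsView: VZVirtualMachineView, context: Context) {
        // Reassign only if the VM actually changed, to avoid scanout churn.
        if nsView.virtualMachine !== virtualMachine {
            nsView.virtualMachine = virtualMachine
        }
    }

    final class Coordinator {
        let vmView = VZVirtualMachineView()
    }
}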
I am trying to run SwiftUI instrumentation on my iOS app. Whenever I hit the record button, the app launches on the target device and closes with the error:
Failed to start the recording: Failed starting ktrace session.
How do I resolve this, please?
The top toolbar looks fine, but in the bottom toolbar, one of the layers is stretched into a capsule shape instead of an ellipse.
Is this intended?
Any view that is content for the tabViewBottomAccessory API fails to retain its state as of the last couple of 26.1 betas (and RC). The loss of state happens (at least) when the currently selected tab is switched (filed as FB20901325).
Here's code to reproduce the issue:
struct ContentView: View {
    @State private var selectedTab = TabSelection.one

    enum TabSelection: Hashable {
        case one, two
    }

    var body: some View {
        TabView(selection: $selectedTab) {
            Tab("One", systemImage: "1.circle", value: .one) {
                BugExplanationView()
            }
            Tab("Two", systemImage: "2.circle", value: .two) {
                BugExplanationView()
            }
        }
        .tabViewBottomAccessory {
            AccessoryView()
        }
    }
}

struct AccessoryView: View {
    @State private var counter = 0 // This guy's state gets lost (as of iOS 26.1)

    var body: some View {
        Stepper("Counter: \(counter)", value: $counter)
            .padding(.horizontal)
    }
}

struct BugExplanationView: View {
    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 16) {
                Text("(1) Manipulate the counter state")
                Text("(2) Then switch tabs")
                Text("BUG: The counter state gets unexpectedly reset!")
            }
            .multilineTextAlignment(.leading)
        }
    }
}
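Until this is fixed, a workaround sketch (names are illustrative, and this is an assumption rather than a confirmed fix): hoist the counter into a model owned above the TabView, so it survives the accessory view being rebuilt:

import SwiftUI

// Workaround sketch (illustrative naming): state held in a model owned by
// the parent survives the accessory view being recreated on tab switches.
@Observable
final class AccessoryModel {
    var counter = 0
}

struct HoistedAccessoryView: View {
    @Bindable var model: AccessoryModel

    var body: some View {
        Stepper("Counter: \(model.counter)", value: $model.counter)
            .padding(.horizontal)
    }
}

// In ContentView: @State private var accessoryModel = AccessoryModel()
// ... .tabViewBottomAccessory { HoistedAccessoryView(model: accessoryModel) }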
Hi,
How to enable multitouch on ARView?
Touch functions (touchesBegan, touchesMoved, ...) seem to only handle one touch at a time. In order to handle multiple touches at a time with ARView, I have to either:
Use SwiftUI .simultaneousGesture on top of an ARView representable
Position a UIView on top of ARView to capture touches and do hit testing by passing a reference to ARView
Expected behavior:
ARView should capture all touches via touchesBegan/Moved/Ended/Cancelled.
Here is what I tried, on iOS 26.1 and macOS 26.1:
ARView Multitouch
The setup below is a minimal ARView presented by SwiftUI, with touch events handled inside ARView. Multitouch doesn't work with this setup.
Note that multitouch wouldn't work either if the ARView is presented with a UIViewController instead of SwiftUI.
import RealityKit
import SwiftUI

struct ARViewMultiTouchView: View {
    var body: some View {
        ZStack {
            ARViewMultiTouchRepresentable()
                .ignoresSafeArea()
        }
    }
}

#Preview {
    ARViewMultiTouchView()
}

// MARK: Representable ARView
struct ARViewMultiTouchRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARViewMultiTouch(frame: .zero)
        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)
        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) { }
}

// MARK: ARView
class ARViewMultiTouch: ARView {
    required init(frame: CGRect) {
        super.init(frame: frame)
        /// Enable multi-touch
        isMultipleTouchEnabled = true
        cameraMode = .nonAR
        automaticallyConfigureSession = false
        environment.background = .color(.gray)
        /// Disable gesture recognizers so they don't conflict with touch events.
        /// But it doesn't fix the issue.
        gestureRecognizers?.forEach { $0.isEnabled = false }
    }

    required dynamic init?(coder decoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            /// # Problem
            /// This should print for every new touch, up to 5 simultaneously on an iPhone (multi-touch),
            /// but it only fires for one touch at a time (single-touch).
            print("Touch began at: \(touch.location(in: self))")
        }
    }
}
Multitouch with an Overlay
This setup works, but it doesn't seem right. There must be a solution to make ARView handle multi touch directly, right?
import SwiftUI
import RealityKit

struct MultiTouchOverlayView: View {
    var body: some View {
        ZStack {
            MultiTouchOverlayRepresentable()
                .ignoresSafeArea()
            Text("Multi touch with overlay view")
                .font(.system(size: 24, weight: .medium))
                .foregroundStyle(.white)
                .offset(CGSize(width: 0, height: -150))
        }
    }
}

#Preview {
    MultiTouchOverlayView()
}

// MARK: Representable Container
struct MultiTouchOverlayRepresentable: UIViewRepresentable {
    func makeUIView(context: Context) -> UIView {
        /// The view that SwiftUI will present
        let container = UIView()

        /// ARView
        let arView = ARView(frame: container.bounds)
        arView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        arView.cameraMode = .nonAR
        arView.automaticallyConfigureSession = false
        arView.environment.background = .color(.gray)
        let anchor = AnchorEntity()
        arView.scene.addAnchor(anchor)
        let boxWidth: Float = 0.4
        let boxMaterial = SimpleMaterial(color: .red, isMetallic: false)
        let box = ModelEntity(mesh: .generateBox(size: boxWidth), materials: [boxMaterial])
        box.name = "Box"
        box.components.set(CollisionComponent(shapes: [.generateBox(width: boxWidth, height: boxWidth, depth: boxWidth)]))
        anchor.addChild(box)

        /// The view that will capture touches
        let touchOverlay = TouchOverlayView(frame: container.bounds)
        touchOverlay.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        touchOverlay.backgroundColor = .clear
        /// Pass an ARView reference to the overlay for hit testing
        touchOverlay.arView = arView

        /// Add views to the container.
        /// ARView goes in first, at the bottom.
        container.addSubview(arView)
        /// TouchOverlay goes in last, on top.
        container.addSubview(touchOverlay)
        return container
    }

    func updateUIView(_ uiView: UIView, context: Context) { }
}

// MARK: Touch Overlay View
/// A UIView to handle multi-touch on top of ARView
class TouchOverlayView: UIView {
    weak var arView: ARView?

    override init(frame: CGRect) {
        super.init(frame: frame)
        isMultipleTouchEnabled = true
        isUserInteractionEnabled = true
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Began --- (New: \(touches.count), Total: \(totalTouches))")
        for touch in touches {
            let location = touch.location(in: self)
            /// Hit testing.
            /// ARView and the touch view must be the same size.
            if let arView = arView {
                let entity = arView.entity(at: location)
                if let entity = entity {
                    print("Touched entity: \(entity.name)")
                } else {
                    print("Touched: none")
                }
            }
        }
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        let totalTouches = event?.allTouches?.count ?? touches.count
        print("--- Touches Cancelled --- (Cancelled: \(touches.count), Total: \(totalTouches))")
    }
}
What is the correct way to implement scrolling in a looong list that uses ScrollView and LazyVStack?
Imagine I have some API that returns a long list of comments with replies.
The basic use case is to scroll to the bottom (to the last comment). Most of the time this works fine.
But imagine some of the comments have many replies, like 35 or more (or even 300).
The user expands the replies for the first post, then presses scroll-to-bottom.
The scrollbar reaches the bottom and I see a blank screen.
Sometimes the scrollbar jumps around for a while until the LazyVStack finishes loading, or until I manually scroll up a bit or all the way up and down.
What should I do in this case? Is this a SwiftUI performance problem that has no cure?
Abstract example:
ScrollViewReader { proxy in
    ScrollView {
        LazyVStack {
            ForEach(comments) { comment in
                CommentView(comment: comment)
                    .id("comment-\(comment.id)")
            }
        }
    }
}

struct CommentView: View {
    let comment: Comment
    @State var isExpanded = false

    var body: some View {
        VStack {
            Text(comment.text)
            if isExpanded {
                RepliesView(replies: comment.replies) // 35-300+ replies
            }
        }
    }
}

// ... and to scroll:
proxy.scrollTo("comment-\(lastComment.id)", anchor: .bottom)
The document-based SwiftUI example app (https://developer.apple.com/documentation/swiftui/building-a-document-based-app-with-swiftui) doesn't specify a launch image.
It would seem per the HIG that the "pinkJungle" background in the app would be a decent candidate for a launch image, since it will be in the background when the document browser comes up.
However, when specifying it as the UIImageName, it is not aligned the same as the background image. I'm having trouble figuring out how it should be aligned to match. The launch image seems to be scaled up a bit beyond scaledToFill.
I suppose a launch storyboard might make this more explicit, but I should still be able to do it without one.
This is the image when displayed as the launch image:
and this is how it's rendered in the background right before the document browser comes up:
List {
    Text("ITEM 1")
        .onHover(perform: { hovering in
            debugPrint("hovering: ", hovering)
        })
        .help("ITEM 1")
    Text("ITEM 2")
        .onHover(perform: { hovering in
            debugPrint("hovering: ", hovering)
        })
        .help("ITEM 2")
    Text("ITEM 3")
        .onHover(perform: { hovering in
            debugPrint("hovering: ", hovering)
        })
        .help("ITEM 3")
}
.fixedSize(horizontal: false, vertical: true)
.frame(maxHeight: 200)
Hello everyone!!!
Considering the snippet above, it seems like the onHover action, as well as the help modifier, doesn't work for the elements of a List on macOS Tahoe.
The situation changes when using a ScrollView embedding a LazyVStack, or when disabling Liquid Glass from the Info.plist, so my guess is that the new Liquid Glass style has something to do with this issue, though I didn't find any clue about it.
Does anyone have any idea?
Maybe there's a layer above that prevents the onHover modifier from triggering?
Thanks in advance for your help!
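For reference, here's the ScrollView + LazyVStack variant I mentioned, which does receive hover events (a minimal sketch of the workaround):

import SwiftUI

// Minimal sketch of the workaround mentioned above: the same rows inside
// a ScrollView + LazyVStack do trigger onHover on macOS Tahoe.
struct HoverListWorkaround: View {
    var body: some View {
        ScrollView {
            LazyVStack(alignment: .leading) {
                ForEach(["ITEM 1", "ITEM 2", "ITEM 3"], id: \.self) { item in
                    Text(item)
                        .onHover { hovering in
                            debugPrint("hovering \(item): ", hovering)
                        }
                        .help(item)
                }
            }
        }
        .frame(maxHeight: 200)
    }
}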
For information, I stumbled upon a regression with the SwiftUI Slider on iOS 26. Its onEditingChanged closure might be called twice when an interaction ends, with an incorrect final Boolean value of true provided to the closure.
As a result, apps cannot reliably rely on this closure to detect when an interaction with the slider starts or ends.
I filed a feedback under FB20283439 (iOS 26.0 regression: Slider onEditingChanged closure is unreliable).
I'm using the SwiftUI WebView, and this error happens when the app becomes inactive: the WebView goes blank and stays that way, even if I open a new WebView. When I switch back to WKWebView, everything works fine.
Environment: Xcode 26.1 (17B55) on macOS 15.7.1
Error acquiring assertion: <Error Domain=RBSServiceErrorDomain Code=1 "((target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.rendering AND target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.networking AND target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.webcontent))" UserInfo={NSLocalizedFailureReason=((target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.rendering AND target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.networking AND target is not running or doesn't have entitlement com.apple.developer.web-browser-engine.webcontent))}>
This is the code, pretty simple; in the load() function I just call page.load().
WebView(vm.page)
    .onAppear {
        Task {
            await vm.load()
        }
    }
I’m seeing a strange visual bug in iOS 26 when building chat-style UIs that use an inverted ScrollView or List (via .rotationEffect(.radians(.pi)) and .scaleEffect(x: -1, y: 1)) to anchor messages at the bottom.
When I add .ignoresSafeArea() to let the chat bleed behind the navigation bar - the new navigation bar fade (that subtle top-to-bottom gradient Apple added in iOS 26) behaves incorrectly. Instead of fading from the top of the screen toward the nav bar, it fades upward from the bottom of the view, effectively covering the entire screen with the gradient.
This only happens when the view is inverted.
If I remove .ignoresSafeArea(), the fade looks correct — but then my chat no longer extends behind the nav bar.
It looks like the fade effect is being applied in the transformed coordinate space of the inverted scroll view rather than in visual screen space. I haven’t found a reliable workaround besides disabling the fade (which isn’t really possible).
Has anyone found a proper solution or a modifier that prevents the fade inversion when using flipped ScrollViews?
Would love to know if Apple is aware of this or if there’s a hidden API for disabling that fade effect.
I have made a report about this in Feedback Assistant: FB20540755
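While waiting on a fix, here's a sketch of the alternative I'm experimenting with (assuming iOS 17+ is acceptable): a bottom-anchored, non-inverted scroll view, which keeps the nav bar fade in normal coordinate space:

import SwiftUI

// Sketch, assuming iOS 17+: defaultScrollAnchor(.bottom) gives a
// bottom-anchored chat without the rotation/scale inversion, so the
// nav bar fade is applied in ordinary screen space.
struct ChatScrollDemo: View {
    let messages: [String] = (1...50).map { "Message \($0)" }

    var body: some View {
        NavigationStack {
            ScrollView {
                LazyVStack(alignment: .leading, spacing: 8) {
                    ForEach(messages, id: \.self) { Text($0) }
                }
                .padding(.horizontal)
            }
            .defaultScrollAnchor(.bottom)
            .navigationTitle("Chat")
        }
    }
}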
I'm encountering unexpected behavior with @Binding in a UIViewRepresentable's Coordinator. The same Coordinator instance returns different values depending on how the property is accessed.
Environment:
iOS 17+ / Xcode 15+
SwiftUI with UIViewRepresentable
Issue:
When I access @Binding var test inside the Coordinator:
✅ Via self.test in Coordinator methods: Returns the updated value
❌ Via context.coordinator.test in updateUIView: Returns the stale/initial value
Both access the same Coordinator instance (verified by memory address), yet return different values.
Minimal Reproducible Example:
struct ContentView: View {
    @State private var test: [Int] = [1, 2, 3, 4, 5]

    var body: some View {
        VStack {
            TestRepresentable(test: $test)
            Text("State: \(test.description)")
        }
    }
}

struct TestRepresentable: UIViewRepresentable {
    @Binding var test: [Int]

    func makeUIView(context: Context) -> UIButton {
        let button = UIButton(type: .system)
        button.setTitle("Toggle", for: .normal)
        button.addTarget(
            context.coordinator,
            action: #selector(Coordinator.buttonTapped),
            for: .touchUpInside
        )
        return button
    }

    func updateUIView(_ uiView: UIButton, context: Context) {
        // Log coordinator instance address
        let coordAddr = String(describing: Unmanaged.passUnretained(context.coordinator).toOpaque())
        print("[updateUIView] Coordinator address: \(coordAddr)")
        // Log values
        print("[updateUIView] self.test: \(self.test)")
        print("[updateUIView] context.coordinator.test: \(context.coordinator.test)")
        // These should be the same but they're not!
        // self.test shows the updated value
        // context.coordinator.test shows the stale value
    }

    func makeCoordinator() -> Coordinator {
        Coordinator(test: $test)
    }

    class Coordinator: NSObject {
        @Binding var test: [Int]
        var idx: Int = 0

        init(test: Binding<[Int]>) {
            _test = test
        }

        @objc func buttonTapped() {
            idx += 1
            if idx < test.count {
                test[idx] += 5
            }
            // Log coordinator instance address
            let selfAddr = String(describing: Unmanaged.passUnretained(self).toOpaque())
            print("[buttonTapped] Coordinator address: \(selfAddr)")
            // Log value - this shows the UPDATED value
            print("[buttonTapped] self.test: \(test)")
        }
    }
}
Actual Output:
[Initial]
[updateUIView] Coordinator address: 0x600001234567
[updateUIView] self.test: [1, 2, 3, 4, 5]
[updateUIView] context.coordinator.test: [1, 2, 3, 4, 5]
[After first tap]
[buttonTapped] Coordinator address: 0x600001234567
[buttonTapped] self.test: [1, 7, 3, 4, 5] ✅ Updated!
[updateUIView] Coordinator address: 0x600001234567 ← Same instance
[updateUIView] self.test: [1, 7, 3, 4, 5] ✅ Updated!
[updateUIView] context.coordinator.test: [1, 2, 3, 4, 5] ❌ Stale!
[After second tap]
[buttonTapped] Coordinator address: 0x600001234567
[buttonTapped] self.test: [1, 7, 8, 4, 5] ✅ Updated!
[updateUIView] Coordinator address: 0x600001234567 ← Same instance
[updateUIView] self.test: [1, 7, 8, 4, 5] ✅ Updated!
[updateUIView] context.coordinator.test: [1, 2, 3, 4, 5] ❌ Still stale!
Questions:
Why does context.coordinator.test return a stale value when it's the same Coordinator instance?
Is this intended behavior or a bug?
What's the correct pattern to access Coordinator's @Binding properties in updateUIView?
Workaround Found:
Using self.test instead of context.coordinator.test in updateUIView works, but I'd like to understand why accessing the same property through context yields different results.
Related:
I've seen suggestions to update coordinator.parent = self in updateUIView, but this doesn't explain why the same object's property returns different values.
Binding documentation states it's "a pair of get and set closures," but there's no explanation of how Context affects closure behavior.
Any insights would be greatly appreciated!
P.S. - https://stackoverflow.com/questions/69552418/why-does-a-binding-in-uiviewrepresentables-coordinator-have-a-constant-read-valu
This is a link I found online describing the same problem, but it hasn't been answered well.
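For completeness, here's the "refresh parent" pattern I've seen suggested (a sketch; it works around the staleness but, as noted above, doesn't explain it):

import SwiftUI
import UIKit

// Sketch of the suggested pattern: the coordinator holds the latest copy of
// the representable, refreshed in updateUIView, and reads state through it
// instead of a Binding captured at creation time.
struct RefreshedParentRepresentable: UIViewRepresentable {
    @Binding var test: [Int]

    func makeCoordinator() -> Coordinator { Coordinator(parent: self) }

    func makeUIView(context: Context) -> UIButton {
        let button = UIButton(type: .system)
        button.setTitle("Toggle", for: .normal)
        button.addTarget(context.coordinator,
                         action: #selector(Coordinator.buttonTapped),
                         for: .touchUpInside)
        return button
    }

    func updateUIView(_ uiView: UIButton, context: Context) {
        context.coordinator.parent = self // keep the coordinator current
        print("[updateUIView] parent.test: \(context.coordinator.parent.test)")
    }

    class Coordinator: NSObject {
        var parent: RefreshedParentRepresentable

        init(parent: RefreshedParentRepresentable) { self.parent = parent }

        @objc func buttonTapped() {
            parent.test[0] += 5 // goes through the freshest binding
        }
    }
}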
Hello, I have a question about performance when trying to render lots of items coming from SwiftData via a @Query in a SwiftUI List. Here's my setup:
// Item.swift:
@Model final class Item: Identifiable {
    var timestamp: Date
    var isOptionA: Bool

    init() {
        self.timestamp = Date()
        self.isOptionA = Bool.random()
    }
}

// Menu.swift
enum Menu: String, CaseIterable, Hashable, Identifiable {
    var id: String { rawValue }

    case optionA
    case optionB
    case all

    var predicate: Predicate<Item> {
        switch self {
        case .optionA: return #Predicate { $0.isOptionA }
        case .optionB: return #Predicate { !$0.isOptionA }
        case .all: return #Predicate { _ in true }
        }
    }
}

// SlowData.swift
@main
struct SlowDataApp: App {
    var sharedModelContainer: ModelContainer = {
        let schema = Schema([Item.self])
        let modelConfiguration = ModelConfiguration(schema: schema, isStoredInMemoryOnly: false)
        return try! ModelContainer(for: schema, configurations: [modelConfiguration])
    }()

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .modelContainer(sharedModelContainer)
    }
}

// ContentView.swift
struct ContentView: View {
    @Environment(\.modelContext) private var modelContext
    @State var selection: Menu? = .optionA

    var body: some View {
        NavigationSplitView {
            List(Menu.allCases, selection: $selection) { menu in
                Text(menu.rawValue).tag(menu)
            }
        } detail: {
            DemoListView(selectedMenu: $selection)
        }
        .onAppear {
            // Do this just once
            // (0..<15_000).forEach { index in
            //     let item = Item()
            //     modelContext.insert(item)
            // }
        }
    }
}

// DemoListView.swift
struct DemoListView: View {
    @Binding var selectedMenu: Menu?
    @Query private var items: [Item]

    init(selectedMenu: Binding<Menu?>) {
        self._selectedMenu = selectedMenu
        self._items = Query(filter: selectedMenu.wrappedValue?.predicate,
                            sort: \.timestamp)
    }

    var body: some View {
        // Option 1: touching `items` = slow!
        List(items) { item in
            Text(item.timestamp.description)
        }
        // Option 2: Not touching `items` = fast!
        // List {
        //     Text("Not accessing `items` here")
        // }
        .navigationTitle(selectedMenu?.rawValue ?? "N/A")
    }
}
When I use Option 1 on DemoListView, there's a noticeable delay on the navigation. If I use Option 2, there's none. This happens both on Debug builds and Release builds, just FYI because on Xcode 16 Debug builds seem to be slower than expected: https://indieweb.social/@curtclifton/113273571392595819
I've profiled it, and the SwiftData fetches seem blazing fast; the hang occurs when accessing the items property from the List. Is there anything I'm overlooking, or is it just as fast as it can be right now?
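For what it's worth, here's a paging sketch I've been considering (an assumption, not a confirmed fix): materialize only a slice of the items per fetch with a FetchDescriptor, rather than binding all 15,000 to the List at once:

import SwiftData

// Paging sketch (assumption, not a confirmed fix), using this post's
// Item and Menu types: fetchOffset/fetchLimit bound how many models
// the context materializes per page.
func fetchPage(context: ModelContext, menu: Menu, offset: Int, limit: Int = 100) throws -> [Item] {
    var descriptor = FetchDescriptor<Item>(
        predicate: menu.predicate,
        sortBy: [SortDescriptor(\.timestamp)]
    )
    descriptor.fetchOffset = offset
    descriptor.fetchLimit = limit
    return try context.fetch(descriptor)
}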
Colombia is not yet listed as a Tap to Pay country, but it is in the process of becoming one.
We are a group of developers at a company in Colombia working on a project to integrate Tap to Pay into our application. After reviewing Apple's documentation, we found that our company is not certified to meet Apple's security requirements, PCI standards, or licensing requirements. However, the payment service provider we have contracted is in the process of obtaining the certifications, authorizations, and licenses that Apple specifies. The team members and managers overseeing this Tap to Pay project have told us that we, as iOS developers, should integrate and use the Proximity Reader API, but we know that we, as developers, are not authorized by Apple to do so.
Is the payment service provider the only one who can make this possible, enabling its use with NFC and the Proximity Reader API?
I would like to know whether the service provider will provide us with an SDK containing the Proximity Reader API for integration into the project, or whether our company will have to implement the Proximity Reader API ourselves.