So I noticed this:
A sheet window is presented.
The sheet window has some UI that makes it expandable, say a little expand-arrow button.
Click the little expand button. The sheet's window controller now calls -setFrame:display:animate: on its window to resize it.
The parent window flies across the screen to the lower left corner.
I'm on Tahoe 26.1. It seems to be related to NSSheetMoveHelper; I'm not sure how long this bug has been around. The workaround is to call -setFrame:display:animate: and pass NO for the animate flag. The sheet window then resizes (not animated, which doesn't look as good as the old behavior, but better than suddenly disappearing).
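In Swift, the workaround is just this (a minimal sketch; sheetWindow and newFrame are placeholders for the actual sheet and its expanded frame):
// Sketch: resize the sheet without animation to avoid the misplacement.
sheetWindow.setFrame(newFrame, display: true, animate: false)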
I think Apple may already know about this bug, because in an Apple app on Tahoe I can see a sheet being resized with no animation...
I've been thinking of bringing some older games back to the modern Mac.
Rewriting old titles in Swift, but using the original data files, which assume windows with non-rounded corners.
Many of these games require the full window area of a 90-degree-cornered window.
Can anyone point me at some useful workarounds, or is Apple simply deaf to the needs of this type of product?
Hi all,
when I launch my macOS app from Xcode 16 on arm64, AppKit logs this error to the debug console:
It's not legal to call -layoutSubtreeIfNeeded on a view which is already being laid out. If you are implementing the view's -layout method, you can call -[super layout] instead. Break on _NSDetectedLayoutRecursion(void) to debug. This will be logged only once. This may break in the future.
Breaking on _NSDetectedLayoutRecursion doesn't help a lot; it just shows assembly, reached from a call to a subclassed window method that looks like this:
-(void) setFrame:(NSRect)frameRect display:(BOOL)flag {
    if (!_frameLocked) [super setFrame:frameRect display:flag];
}
I have no direct call to -layoutSubtreeIfNeeded from a -layout implementation in my code. I do have a few calls to this method from update methods; however, even if I comment all of them out, the error is still logged...
Finally, apart from that log, I cannot observe any layout error when running the program. So I wonder if this error can be safely ignored?
Thanks!
I'm building a macOS app using SwiftUI, and I want to create a draggable floating webcam preview window.
Right now, I have something like this:
import SwiftUI
import AVFoundation
struct WebcamPreviewView: View {
    let captureSession: AVCaptureSession?

    var body: some View {
        ZStack {
            if let session = captureSession {
                CameraPreviewLayer(session: session)
                    .clipShape(RoundedRectangle(cornerRadius: 50))
                    .overlay(
                        RoundedRectangle(cornerRadius: 50)
                            .strokeBorder(Color.white.opacity(0.2), lineWidth: 2)
                    )
            } else {
                VStack(spacing: 8) {
                    Image(systemName: "video.slash.fill")
                        .font(.system(size: 40))
                        .foregroundColor(.white.opacity(0.6))
                    Text("No Camera")
                        .font(.caption)
                        .foregroundColor(.white.opacity(0.6))
                }
            }
        }
        .shadow(color: .black.opacity(0.3), radius: 10, x: 0, y: 5)
    }
}
struct CameraPreviewLayer: NSViewRepresentable {
    let session: AVCaptureSession

    func makeNSView(context: Context) -> NSView {
        let view = NSView()
        view.wantsLayer = true
        let previewLayer = AVCaptureVideoPreviewLayer(session: session)
        previewLayer.videoGravity = .resizeAspectFill
        previewLayer.frame = view.bounds
        view.layer = previewLayer
        return view
    }

    func updateNSView(_ nsView: NSView, context: Context) {
        if let previewLayer = nsView.layer as? AVCaptureVideoPreviewLayer {
            previewLayer.frame = nsView.bounds
        }
    }
}
This is my SwiftUI-side code to show the webcam. I am trying to create it as a floating window that appears on top of all other apps' windows; however, even when the webcam preview is clicked, it should not steal focus from other apps, and the other apps should keep functioning as they already are.
import Cocoa
import SwiftUI
class WebcamPreviewWindow: NSPanel {
    private static let defaultSize = CGSize(width: 200, height: 200)
    private var initialClickLocation: NSPoint = .zero

    init() {
        let screenFrame = NSScreen.main?.visibleFrame ?? .zero
        let origin = CGPoint(
            x: screenFrame.maxX - Self.defaultSize.width - 20,
            y: screenFrame.minY + 20
        )
        super.init(
            contentRect: CGRect(origin: origin, size: Self.defaultSize),
            styleMask: [.borderless],
            backing: .buffered,
            defer: false
        )
        isOpaque = false
        backgroundColor = .clear
        hasShadow = false
        level = .screenSaver
        collectionBehavior = [
            .canJoinAllSpaces,
            .fullScreenAuxiliary,
            .stationary,
            .ignoresCycle
        ]
        ignoresMouseEvents = false
        acceptsMouseMovedEvents = true
        hidesOnDeactivate = false
        becomesKeyOnlyIfNeeded = false
    }

    // MARK: - Focus Prevention
    override var canBecomeKey: Bool { false }
    override var canBecomeMain: Bool { false }
    override var acceptsFirstResponder: Bool { false }

    override func makeKey() {}

    override func mouseDown(with event: NSEvent) {
        initialClickLocation = event.locationInWindow
    }

    override func mouseDragged(with event: NSEvent) {
        let current = event.locationInWindow
        let dx = current.x - initialClickLocation.x
        let dy = current.y - initialClickLocation.y
        let newOrigin = CGPoint(
            x: frame.origin.x + dx,
            y: frame.origin.y + dy
        )
        setFrameOrigin(newOrigin)
    }

    func show<Content: View>(with view: Content) {
        let host = NSHostingView(rootView: view)
        host.autoresizingMask = [.width, .height]
        host.frame = contentLayoutRect
        contentView = host
        orderFrontRegardless()
    }

    func hide() {
        orderOut(nil)
        contentView = nil
    }
}
This is my AppKit-side code to make a floating window; however, when the webcam preview is clicked, it makes my app the focused app, and I have to click somewhere else to lose focus before I can use the rest of the windows again.
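One variant that might be worth trying (a sketch, untested against the setup above): NSPanel supports the .nonactivatingPanel style mask, which is documented to let a panel receive clicks without activating the owning app.
import AppKit

// Sketch: a non-activating borderless panel. The .nonactivatingPanel style
// mask keeps clicks on the panel from activating the app.
final class NonActivatingWebcamPanel: NSPanel {
    init(contentRect: NSRect) {
        super.init(
            contentRect: contentRect,
            styleMask: [.borderless, .nonactivatingPanel],
            backing: .buffered,
            defer: false
        )
        isFloatingPanel = true
        becomesKeyOnlyIfNeeded = true
    }

    override var canBecomeKey: Bool { false }
    override var canBecomeMain: Bool { false }
}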
In macOS 13.2, the NSTrackingEnabledDuringMouseDrag flag for NSTrackingArea is broken, i.e., mouse-entered and mouse-exited events are not sent during a drag. This problem did not exist in macOS 12. I've filed a bug report, FB11973492, just wondering if anyone knows a workaround.
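For anyone trying to reproduce, the setup involved is roughly this (a minimal sketch of a view opting into the option in question):
import AppKit

// Sketch: a view whose tracking area opts into .enabledDuringMouseDrag,
// the option whose mouse-entered/exited events stop arriving on 13.2.
final class DragTrackingView: NSView {
    override func updateTrackingAreas() {
        super.updateTrackingAreas()
        trackingAreas.forEach(removeTrackingArea)
        let area = NSTrackingArea(
            rect: bounds,
            options: [.mouseEnteredAndExited, .enabledDuringMouseDrag, .activeInKeyWindow],
            owner: self,
            userInfo: nil
        )
        addTrackingArea(area)
    }

    override func mouseEntered(with event: NSEvent) { NSLog("entered") }
    override func mouseExited(with event: NSEvent) { NSLog("exited") }
}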
Hello.
In my app, I use RegisterEventHotKey to implement global keyboard shortcuts that trigger actions.
Up until macOS Sequoia, I was able to use a keyboard shortcut with option and shift as the modifiers, like option shift 2 (⌥ ⇧ 2).
Now, on macOS Sequoia, using RegisterEventHotKey to register a hotkey with those exact modifiers (option and shift), regardless of the key, fails with the error -9868 (eventInternalErr).
Is this a documented and intended change, or is this a bug? Other modifier combinations (just command, command option, command shift, command control, control shift, etc.) all work.
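For reference, the failing registration looks roughly like this (a sketch; the signature, id, and key code are arbitrary):
import Carbon.HIToolbox

// Sketch: registering ⌥⇧2 as a global hot key. On Sequoia this reportedly
// returns -9868 (eventInternalErr); other modifier combinations still work.
var hotKeyRef: EventHotKeyRef?
let hotKeyID = EventHotKeyID(signature: OSType(0x4D594150), id: 1) // arbitrary
let status = RegisterEventHotKey(
    UInt32(kVK_ANSI_2),
    UInt32(optionKey | shiftKey),
    hotKeyID,
    GetApplicationEventTarget(),
    0,
    &hotKeyRef
)
print("RegisterEventHotKey status:", status)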
Any insight into this would be appreciated. (Feedback filed: FB15163561)
Thank you,
Matthias
Previously I was able to "snapshot" views that were not part of any window hierarchy using the following:
NSImage *buttonImage = [NSImage imageWithSystemSymbolName:@"49.circle.fill" accessibilityDescription:nil];
NSButton *aButton = [NSButton buttonWithImage:buttonImage target:nil action:nil];
[aButton sizeToFit];
NSBitmapImageRep *rep = [aButton bitmapImageRepForCachingDisplayInRect:aButton.bounds];
if (rep == nil) {
    NSLog(@"Failed to get bitmap image rep.");
    return;
}
[aButton cacheDisplayInRect:aButton.bounds toBitmapImageRep:rep];
NSData *tiffData = rep.TIFFRepresentation;
NSImage *snapShotOfImage = [[NSImage alloc]initWithData:tiffData];
Now on macOS Tahoe I get a non-nil image, but the image is blank.
However, if I add aButton to an NSWindow's view hierarchy just before the call to -cacheDisplayInRect:toBitmapImageRep:, I do get a proper image.
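In Swift, that workaround looks roughly like this (a sketch; only verified to the extent described above):
import AppKit

// Sketch of the workaround: host the view in an (unshown) window before
// calling cacheDisplay(in:to:), then detach it again.
func snapshot(of view: NSView) -> NSImage? {
    let window = NSWindow(
        contentRect: view.bounds,
        styleMask: [.borderless],
        backing: .buffered,
        defer: false
    )
    window.contentView?.addSubview(view)
    defer { view.removeFromSuperview() }

    guard let rep = view.bitmapImageRepForCachingDisplay(in: view.bounds) else { return nil }
    view.cacheDisplay(in: view.bounds, to: rep)

    let image = NSImage(size: view.bounds.size)
    image.addRepresentation(rep)
    return image
}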
Is this behavior intended, or is this considered a bug? Is it documented anywhere that a view must be in an NSWindow for -cacheDisplayInRect:toBitmapImageRep: to work? Thanks
I’m building a macOS video editor that uses AVComposition and AVVideoComposition.
Initially, my renderer creates a composition with some default video/audio tracks:
@Published var composition: AVComposition?
@Published var videoComposition: AVVideoComposition?
@Published var playerItem: AVPlayerItem?
Then I call a buildComposition() function that inserts all the default video segments.
Later in the editing workflow, the user may choose to add their own custom video clip. For this I have a function like:
private func handlePickedVideo(_ url: URL) {
    guard url.startAccessingSecurityScopedResource() else {
        print("Failed to access security-scoped resource")
        return
    }

    let asset = AVURLAsset(url: url)
    let videoTracks = asset.tracks(withMediaType: .video)

    guard let firstVideoTrack = videoTracks.first else {
        print("No video track found")
        url.stopAccessingSecurityScopedResource()
        return
    }

    renderer.insertUserVideoTrack(from: asset, track: firstVideoTrack)
    url.stopAccessingSecurityScopedResource()
}
What I want to achieve is the same behavior professional video editors provide:
after the composition has already been initialized and built, the user should be able to add a new video track and the composition should update live, meaning the preview player should immediately reflect the change without manually rebuilding everything from scratch.
How can I structure my AVComposition / AVMutableComposition and my rendering pipeline so that adding a new clip later updates the existing composition in real time (similar to Final Cut/Adobe Premiere), instead of needing to rebuild everything from zero?
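For concreteness, the kind of incremental update I have in mind looks roughly like this (a sketch; it assumes the composition is kept as an AVMutableComposition and that a fresh AVPlayerItem can simply be swapped in):
import AVFoundation
import CoreMedia

// Sketch: append a clip to an existing AVMutableComposition and refresh the
// player item so the preview reflects the change without a full rebuild.
func appendClip(_ asset: AVURLAsset,
                track assetTrack: AVAssetTrack,
                to composition: AVMutableComposition,
                videoComposition: AVVideoComposition?,
                player: AVPlayer) throws {
    let compositionTrack = composition.mutableTrack(compatibleWith: assetTrack)
        ?? composition.addMutableTrack(withMediaType: .video,
                                       preferredTrackID: kCMPersistentTrackID_Invalid)

    let range = CMTimeRange(start: .zero, duration: asset.duration)
    try compositionTrack?.insertTimeRange(range, of: assetTrack, at: composition.duration)

    // Recreate the player item from the updated composition; the existing
    // timeline is reused rather than rebuilt from scratch.
    let item = AVPlayerItem(asset: composition)
    item.videoComposition = videoComposition
    player.replaceCurrentItem(with: item)
}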
You can find a playable version of this entire setup at: https://github.com/zaidbren/SimpleEditor
In macOS Tahoe, users can tint folders or add symbols. But when trying to access that customized icon in Swift, the system always returns the default folder icon.
NSWorkspace.shared.icon(forFile: url.path)
try url.resourceValues(forKeys: [.effectiveIconKey]).effectiveIcon
try url.resourceValues(forKeys: [.customIconKey]).customIcon
All of these give back the standard folder icon without any of the user-applied customization.
So the question is: Is there any API or workaround in Swift to retrieve the actual customized folder icon (including tint and symbol) as displayed in Finder on macOS Tahoe?
I'm trying to add an NSProgressIndicator to the unlock screen (the first lock screen) in macOS (the screen with the lock icon).
I already added a label and it works fine, and on the second (authentication) page shown after entering the password I am able to add a progress indicator, but not on the first screen. Whenever I try to add a progress indicator there, the entire screen turns black and nothing is displayed.
Is NSProgressIndicator supported on the first unlock screen? Or does macOS block animated UI on this screen?
Any guidance would be helpful
Thanks
I want to check whether a sandboxed application already has access permission to a specific URL.
Based on my investigation, the following FileManager method seems to be able to determine it:
FileManager.default.isReadableFile(atPath: fileURL.path)
However, the method name and description don't explicitly mention this use case, so I'm not confident there aren't any oversights.
Also, since this method takes a String path rather than a URL, I'd like to know if there's a more modern API available.
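One URL-based candidate is URLResourceValues' isReadable value (a sketch; I'm not sure whether it behaves any differently from isReadableFile(atPath:) under the sandbox):
import Foundation

// Sketch: URL-based readability check via resource values.
func isReadable(_ fileURL: URL) -> Bool {
    (try? fileURL.resourceValues(forKeys: [.isReadableKey]).isReadable) ?? false
}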
I want to use this information to decide whether to prompt the user about the Sandbox restriction in my AppKit-based app.
My app has the following UI layout:
NSSplitViewController as the window's contentViewController
NSPageController in the content (right) split item
NSTabViewController as the root items of the NSPageController
NSViewController with a collection view in the first tab of that NSTabViewController
The collection view uses an NSCollectionViewCompositionalLayout in which the sections are set up to have a header using an NSCollectionLayoutBoundarySupplementaryItem with pinToVisibleBounds = true and alignment = .top.
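For reference, the header configuration looks roughly like this (a sketch with a placeholder height):
import AppKit

// Sketch of the section header setup described above: a boundary
// supplementary item pinned to the visible bounds and aligned to the top.
let headerSize = NSCollectionLayoutSize(
    widthDimension: .fractionalWidth(1.0),
    heightDimension: .absolute(28) // placeholder height
)
let header = NSCollectionLayoutBoundarySupplementaryItem(
    layoutSize: headerSize,
    elementKind: NSCollectionView.elementKindSectionHeader,
    alignment: .top
)
header.pinToVisibleBounds = true
// section.boundarySupplementaryItems = [header]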
With macOS 26, the pinned supplementary item automatically gets a blurred/semi-transparent background that seamlessly integrates with the toolbar. When the window's title bar has a NSTitlebarAccessoryViewController added, the said semi-transparent background gets a bottom hard edge and a hairline to provide more visual separation from the main content.
During runtime, my NSPageController transitions from the NSTabViewController to another view controller. When transitioning back, the semi-transparent blur bleeds into the entire section. This happens whether or not an NSTitlebarAccessoryViewController is added.
It doesn't happen in 100% of cases; it seems to depend on section size, header visibility, and/or scroll position. But it happens more often than not.
Most of the time, a second or so after the back transition - shortly after pageControllerDidEndLiveTransition: of the NSPageControllerDelegate is called - the view updates and the supplementary views are back to normal.
Sometimes, the issue also appears not when transitioning using NSPageController, but simply by scrolling through the collection view.
Does anyone have an idea what is happening here? Below are two screenshots of the "ok" and "not ok" states.
I'm on macOS 26.0.1 and I'm using Xcode 26.0.1.
When scrolling a basic NSScrollView there seems to be a sudden jump after each flick. Scrolling does not appear smooth and is disorientating.
A scroll jump seems to happen directly after letting go of a scroll flick using a trackpad/mouse. Right at that moment the scroll turns into a momentum scroll, slowly decreasing the speed. But the first frame after the gesture the content jumps forward, more than what is expected.
Observations:
Counterintuitively, scrolling appears to be smoother when disabling NSScrollView.isCompatibleWithResponsiveScrolling. If it is disabled via a custom NSScrollView subclass (see the sketch after these observations), the large jump no longer occurs.
Scrolling also appears to be smoother using a SwiftUI ScrollView. I assume that has the same behaviour as disabled isCompatibleWithResponsiveScrolling.
Ironically a WKWebView scrolls much smoother. No sudden jump is observable. It also seems to scroll with faster acceleration, but the individual frames do appear smoother. Why is this better than a native NSScrollView?
Elastic scrolling at the bounds of the scroll view also appears much smoother for WKWebViews. When pulling to refresh there is a jump for NSScrollView/SwiftUI, but not for WKWebView.
When using an NSScrollView with isCompatibleWithResponsiveScrolling disabled, scrolling appears just as smooth as WKWebView on macOS 13 Ventura and below. On macOS 14 Sonoma scrolling behaviour is suddenly different.
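For reference, the subclass used to disable responsive scrolling is just this (a minimal sketch):
import AppKit

// Sketch: opting an NSScrollView subclass out of responsive scrolling,
// as described in the observations above.
final class LegacyScrollView: NSScrollView {
    override class var isCompatibleWithResponsiveScrolling: Bool { false }
}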
Please see a sample project with 4 different scroll views side by side:
https://github.com/floorish/ScrollTest
Screen recordings show the sudden jumps when scrolling and when elastic scrolling.
Tested on Intel & Arm Macs, macOS 11 Big Sur through 15 Sequoia, built with Xcode 16.
Should isCompatibleWithResponsiveScrolling be disabled on Sonoma+? Are there any drawbacks?
There is also no overdraw anymore since Monterey, as described in https://developer.apple.com/library/archive/releasenotes/AppKit/RN-AppKitOlderNotes/#10_9Scrolling
Even with responsive scrolling disabled, why is WKWebView scrolling much smoother than NSScrollView?
When I compiled my legacy project with Tahoe's macOS 26 SDK, NSRulerViews are showing a very different design:
Under prior macOS versions, the horizontal and vertical rulers' backgrounds blurred the content view, which extended under the rulers and showed through their transparency.
With Tahoe, the horizontal ruler always reflects the scroll view's background color, showing the blurred content view beneath.
And the vertical ruler is always completely transparent (without any blurring), showing the content together with the ruler's markers and ticks.
It's difficult to describe, I'll try to replicate this behavior with a minimal test project, and probably file a bug report / enhancement request.
But before I take next steps, can anyone confirm this observation? Maybe it is an intentional design decision by Apple?
I just updated to macOS 26.1.
I have a pure AppKit app (I guess that's not possible anymore but as close to a pure AppKit app as you can get).
I use NSButton with the glass bezel style and SF Symbol images. It looks like the minor OS update brought layout changes, because now some of these buttons scale the symbol image much larger than they did on macOS 26.0, and the image can sometimes draw outside the glass 'bezel'. For example, using the 'info' symbol in a button results in much larger image scaling for some SF Symbols than it did on the previous Tahoe release.
With the glass bezel style and an SF Symbol image, how am I supposed to consistently make the button look good? With certain symbols I have to use imageScaling NSImageScaleProportionallyUpOrDown and on others I have to use NSImageScaleProportionallyDown. If I'm using a system image, shouldn't it just do the right thing? Is trial and error the only way to tell? That's what I was doing before, but it seems the minor 26.1 update changed things.
Additionally calling -sizeToFit on a button multiple times can cause it to shrink for example:
[button sizeToFit]; // <-- At fitting size
// Then later
[button sizeToFit]; // Now button is shrunk
But if the button is already at its fitting size an additional call later shouldn't make it shrink, it should stay the same size. FB20517174
I see I now inherit SwiftUI; not sure if that has anything to do with this, but if I wanted to opt in to fragile layout I wouldn't be using AppKit...
I have a fairly robust macOS application that has an NSScrollView containing a canvas with various subviews (including web views and text views that contain scroll views), and a couple of peer views that track items in the scroll view (e.g. screen-space controls).
Some of these views interrupt two-finger scrolling: every scroll view, and one of the peer views (essentially a stack view with buttons in it).
I have written an additional bare bones application which does roughly the same thing, and my bare bones application works perfectly: Start two-finger dragging, scroll any of these other things under the cursor, I can continue to drag (and start dragging in any of those, and they drag without interfering with the parent scroll view).
I have tried everything to recreate the interruption, including drag gestures attached to these various ancillary views, and I cannot figure out why dragging some of these views under the cursor interrupts two finger drag in our application, but not in my testbed.
Does anyone have suggestions for how to debug this? I can see that there is a gesture recognizer in the NSScrollView hierarchy, but I don't see it in any of my gesture recognizer handling. I have breakpoints on every variation of hit testing and mouse motion, and none of them are getting hit in unexpected ways.
I'm at my wit's end. Thanks.
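For what it's worth, one way to audit which recognizers are present is to walk the view hierarchy of both apps and diff the output (a debugging sketch):
import AppKit

// Debugging sketch: recursively log any gesture recognizers attached to
// views in a hierarchy, to compare the working and broken apps.
func dumpGestureRecognizers(in view: NSView, indent: String = "") {
    if !view.gestureRecognizers.isEmpty {
        print("\(indent)\(type(of: view)): \(view.gestureRecognizers)")
    }
    for subview in view.subviews {
        dumpGestureRecognizers(in: subview, indent: indent + "  ")
    }
}

// Usage: dumpGestureRecognizers(in: window.contentView!)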
I have a working XIB app that runs a Linux VM with a graphical interface. I am trying to rewrite it in SwiftUI but run into all sorts of problems when using a combination of a Representable of the VZVirtualMachineView, an associated Coordinator, and @StateObjects.
a) The VM display is not updated while running, but is displayed if I close the window and reopen it. As the underlying VZVirtualMachineView is created and dismantled many times, there are warnings about negative scanouts that end up crashing the app.
b) Keyboard focus is not really working.
https://developer.apple.com/forums/thread/766014 reports that there is probably a solution in making an NSViewControllerRepresentable rather than a VZVirtualMachineViewRepresentable.
I think I am fighting against proper SwiftUI lifecycle and would love to have a hint at what shall be the right organization of model and SwiftUI constructs.
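For reference, the bare-bones representable looks roughly like this (a sketch; on its own it does not address the scanout or keyboard-focus issues above):
import SwiftUI
import Virtualization

// Sketch: a minimal NSViewRepresentable around VZVirtualMachineView.
struct VirtualMachineDisplay: NSViewRepresentable {
    let virtualMachine: VZVirtualMachine

    func makeNSView(context: Context) -> VZVirtualMachineView {
        let view = VZVirtualMachineView()
        view.capturesSystemKeys = true
        return view
    }

    func updateNSView(_ nsView: VZVirtualMachineView, context: Context) {
        // Reassign the VM only when it actually changes, to avoid resetting
        // the display on every SwiftUI update.
        if nsView.virtualMachine !== virtualMachine {
            nsView.virtualMachine = virtualMachine
        }
    }
}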
In Xcode 26.1, for a project with "Swift Language Version" set to Swift 6, I get the following errors for this code:
Error1: Main actor-isolated initializer 'init(title:action:keyEquivalent:)' has different actor isolation from nonisolated overridden declaration
Error2: Main actor-isolated initializer 'init(coder:)' has different actor isolation from nonisolated overridden declaration
@MainActor class MenuItem: NSMenuItem { // error1
    var userInfo: [String: Any] = [:]

    init(label: String, action: Selector?, target: AnyObject?, userInfo: [String: Any]) {
        self.userInfo = userInfo
        super.init(title: label, action: action, keyEquivalent: "")
    }

    required init(coder decoder: NSCoder) { // error2
        super.init(coder: decoder)
    }
}
NSVisualEffectView in AppKit has two main properties: material and blendingMode.
Material is well supported in SwiftUI, but I can't seem to find an equivalent for blendingMode.
What is the SwiftUI equivalent of NSVisualEffectView.BlendingMode?
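A fallback would be wrapping NSVisualEffectView in an NSViewRepresentable so both properties can be set (a sketch below), but a native modifier would be preferable if one exists.
import SwiftUI
import AppKit

// Sketch: expose both material and blendingMode from SwiftUI by wrapping
// NSVisualEffectView.
struct VisualEffectBackground: NSViewRepresentable {
    var material: NSVisualEffectView.Material = .sidebar
    var blendingMode: NSVisualEffectView.BlendingMode = .behindWindow

    func makeNSView(context: Context) -> NSVisualEffectView {
        NSVisualEffectView()
    }

    func updateNSView(_ nsView: NSVisualEffectView, context: Context) {
        nsView.material = material
        nsView.blendingMode = blendingMode
        nsView.state = .active
    }
}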
My app displays some text that should appear the same regardless of the container view or window size, i.e. it should grow and shrink with the container view or window.
On iOS there is UILabel.adjustsFontSizeToFitWidth, but I couldn't find any equivalent API on macOS. On the internet some people suggest iteratively setting a smaller font size until the text fits the available space, but I thought there must be a more efficient solution. How does UILabel.adjustsFontSizeToFitWidth do it?
My expectation was that setting a font's size to a fraction of the window width or height would do the trick, but when resizing the window I can see a slightly different portion of it.
class ViewController: NSViewController {
    override func loadView() {
        view = MyView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
        NSLayoutConstraint.activate([
            view.widthAnchor.constraint(equalTo: view.heightAnchor, multiplier: 3),
            view.heightAnchor.constraint(greaterThanOrEqualToConstant: 100)
        ])
    }
}

class MyView: NSView {
    let textField = NSTextField(labelWithString: String(repeating: "a b c d e f g h i j k l m n o p q r s t u v w x y z ", count: 2))

    override init(frame frameRect: NSRect) {
        super.init(frame: frameRect)
        textField.translatesAutoresizingMaskIntoConstraints = false
        textField.setContentCompressionResistancePriority(.defaultLow, for: .horizontal)
        addSubview(textField)
        NSLayoutConstraint.activate([
            textField.topAnchor.constraint(equalTo: topAnchor),
            textField.leadingAnchor.constraint(equalTo: leadingAnchor),
            textField.trailingAnchor.constraint(equalTo: trailingAnchor)
        ])
    }

    required init?(coder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func resize(withOldSuperviewSize oldSize: NSSize) {
        // textField.font = .systemFont(ofSize: frame.width * 0.05)
        textField.font = .systemFont(ofSize: frame.height * 0.1)
    }
}
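For reference, one non-iterative approach would be to measure the string once at a reference size and scale proportionally (a sketch; it assumes a single line and uniform attributes):
import AppKit

// Sketch: scale a label's font so the measured text width matches the
// available width, based on a single measurement at a reference size.
func fittedFont(for text: String, width: CGFloat, referenceSize: CGFloat = 12) -> NSFont {
    let referenceFont = NSFont.systemFont(ofSize: referenceSize)
    let measuredWidth = (text as NSString).size(withAttributes: [.font: referenceFont]).width
    guard measuredWidth > 0 else { return referenceFont }
    return NSFont.systemFont(ofSize: referenceSize * width / measuredWidth)
}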