Hi all,
I have been trying to get AssistiveTouch's Snap to Item to work with a Unity game built using Apple's Core & Accessibility API plug-ins. Switch Control recognises the buttons; however, eye tracking will not snap to them. The case where snapping is needed is when an external eye-tracking device is connected and drives AssistiveTouch with Snap to Item enabled.
All buttons in the game have an AccessibilityNode with the 'Button' trait and an appropriate label which, according to the documentation and comments on the developer forums, should allow them to be recognised by Snap to Item.
This is not the case: devices (iPads and iPhones) do not recognise the buttons as snap targets.
Does anyone know why this is the case, and if this is a bug?
I have a very basic usdz file from this repo
I call loadTextures() after loading the USDZ via MDLAsset. Inspecting the MDLTexture object, I can see it is assigned a linear RGB color space instead of sRGB, even though the image file in the USDZ is sRGB.
This ultimately causes the textures to render oversaturated.
Later in the code I convert the MDLTexture to an MTLTexture via MTKTextureLoader, but if I set the .SRGB option it seems to be ignored.
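For reference, the loading path is roughly the following (a minimal sketch of what I described above; "model.usdz" and the material lookup are placeholders, and .SRGB is the option that appears to be ignored):

import ModelIO
import MetalKit

let device = MTLCreateSystemDefaultDevice()!
let asset = MDLAsset(url: URL(fileURLWithPath: "model.usdz"))   // placeholder path
asset.loadTextures()

let loader = MTKTextureLoader(device: device)
if let mesh = asset.childObjects(of: MDLMesh.self).first as? MDLMesh,
   let material = (mesh.submeshes?.firstObject as? MDLSubmesh)?.material,
   let property = material.property(with: .baseColor),
   let mdlTexture = property.textureSamplerValue?.texture {
    // The MDLTexture already reports a linear color space here, and setting
    // .SRGB on the loader does not seem to change the result.
    let texture = try? loader.newTexture(texture: mdlTexture, options: [.SRGB: true])
    print(texture?.pixelFormat ?? .invalid)
}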
This significantly impacts the usefulness of Model I/O if it can't load a simple usdz texture correctly. Am I missing something?
Thanks!
I'm trying to position an Entity with inverse kinematics only while dragging its handle, but use forward kinematics (posing jointTransforms) otherwise.
The System, Components, Gestures and Rig all seem to work individually.
My approach is to add the IKComponent when dragging starts on the handle and to remove the IKComponent when the handle is released.
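In code, the pattern is roughly the following (a simplified sketch; characterEntity, handleEntity, and makeIKComponent() are placeholders for the project's actual rig setup):

RealityView { content in /* ... */ }
    .gesture(
        DragGesture()
            .targetedToEntity(handleEntity)
            .onChanged { value in
                if !characterEntity.components.has(IKComponent.self) {
                    characterEntity.components.set(makeIKComponent())   // switch to IK
                }
                // ...update the IK target from the drag location (value) here...
            }
            .onEnded { _ in
                // Switching back to FK: removing the component here is what crashes.
                characterEntity.components.remove(IKComponent.self)
            }
    )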
The switch into IK works, but removing the IKComponent crashes the app:
* thread #1, queue = 'com.apple.main-thread', stop reason = EXC_BAD_ACCESS (code=1, address=0x8)
* frame #0: 0x00000001aa5bb188 CoreRE`(anonymous namespace)::IKComponentSolverWrapper::getSolver() + 60
frame #1: 0x00000001aa5bafb0 CoreRE`re::internal::ikParametersNodeCallback(re::Slice<re::StringID>, re::Slice<re::RigDataValue>, re::Slice<re::StringID>, re::MutableSlice<re::RigDataValue>, void*) + 48
frame #2: 0x00000001aa52d090 CoreRE`re::(anonymous namespace)::resolveEvaluationContextCallback(re::EvaluationContext&, void*) + 152
frame #3: 0x00000001aa68c024 CoreRE`re::(anonymous namespace)::$_76::__invoke(re::Slice<unsigned long>, re::(anonymous namespace)::RegisterTable&) + 1080
frame #4: 0x00000001aa678c94 CoreRE`re::EvaluationModelSingleThread::evaluate(re::EvaluationContextSlices&) + 1188
frame #5: 0x00000001aa866984 CoreRE`re::SkeletalPoseRuntimeData::executeEvaluationTree() + 136
frame #6: 0x00000001aadf37ec CoreRE`re::ecs2::SkeletalPoseComponent::calculateSkeletalPoseBufferWithRig(re::ecs2::MeshComponent*, re::ecs2::RigComponent*, re::ecs2::SkeletalPoseBufferComponent*) + 492
frame #7: 0x00000001aadf4a84 CoreRE`re::ecs2::SkeletalPoseComponentStateImpl::processPreparingComponents(re::ecs2::System::UpdateContext const&, re::ecs2::BasicComponentStateSceneData<re::ecs2::SkeletalPoseComponent>*, re::ecs2::ComponentBuckets<re::ecs2::SkeletalPoseComponent>::BucketIteration, void*) + 268
frame #8: 0x00000001aadf54b0 CoreRE`re::ecs2::SkeletalPoseSystem::update(re::ecs2::System::UpdateContext) const + 732
frame #9: 0x00000001aaed3e54 CoreRE`re::internal::Callable<re::ecs2::ECSManager::configurePhaseECSSystems(re::Scheduler::ScheduleDescriptor&, re::ecs2::ECSSystemGroup, unsigned long)::$_1, void (float)>::operator()(float&&) const + 168
frame #10: 0x00000001ab40eda4 CoreRE`re::Scheduler::executePhase(unsigned long) + 440
frame #11: 0x00000001aa6a3b74 CoreRE`re::Engine::executePhase(re::FramePhase) + 144
frame #12: 0x000000023173de9c RealitySystemSupport`RCPSharedSimulationExecuteUpdate + 64
frame #13: 0x00000002276c9820 MRUIKit`__65-[MRUISharedSimulation _doJoinWithConnectionConfiguration:error:]_block_invoke.35 + 168
frame #14: 0x00000002276c8530 MRUIKit`__addCAPreFenceHandler_block_invoke + 32
frame #15: 0x000000018af22058 QuartzCore`CA::Transaction::run_commit_handlers(CATransactionPhase) + 112
frame #16: 0x000000018aef2ad4 QuartzCore`CA::Context::commit_transaction(CA::Transaction*, double, double*) + 592
frame #17: 0x000000018af21898 QuartzCore`CA::Transaction::commit() + 652
frame #18: 0x000000018af22dac QuartzCore`CA::Transaction::flush_as_runloop_observer(bool) + 68
frame #19: 0x0000000185a26820 UIKitCore`_UIApplicationFlushCATransaction + 48
frame #20: 0x0000000184f97af0 UIKitCore`_UIUpdateSequenceRun + 76
frame #21: 0x0000000185954290 UIKitCore`schedulerStepScheduledMainSection + 168
frame #22: 0x00000001859536d8 UIKitCore`runloopSourceCallback + 80
frame #23: 0x00000001804157fc CoreFoundation`__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ + 24
frame #24: 0x0000000180415744 CoreFoundation`__CFRunLoopDoSource0 + 172
frame #25: 0x0000000180414eb0 CoreFoundation`__CFRunLoopDoSources0 + 232
frame #26: 0x000000018040f454 CoreFoundation`__CFRunLoopRun + 788
frame #27: 0x000000018040ecd4 CoreFoundation`CFRunLoopRunSpecific + 552
frame #28: 0x0000000190104b70 GraphicsServices`GSEventRunModal + 160
frame #29: 0x0000000185a27e30 UIKitCore`-[UIApplication _run] + 796
frame #30: 0x0000000185a2c058 UIKitCore`UIApplicationMain + 124
frame #31: 0x00000001d29558b4 SwiftUI`closure #1 (Swift.UnsafeMutablePointer<Swift.Optional<Swift.UnsafeMutablePointer<Swift.Int8>>>) -> Swift.Never in SwiftUI.KitRendererCommon(Swift.AnyObject.Type) -> Swift.Never + 164
frame #32: 0x00000001d29555dc SwiftUI`SwiftUI.runApp<τ_0_0 where τ_0_0: SwiftUI.App>(τ_0_0) -> Swift.Never + 84
frame #33: 0x00000001d265ecdc SwiftUI`static SwiftUI.App.main() -> () + 164
frame #34: 0x000000010303f1c4 Playground.debug.dylib`static PlaygroundApp.$main() at <compiler-generated>:0
frame #35: 0x000000010303f290 Playground.debug.dylib`main at PlaygroundApp.swift:7:8
frame #36: 0x0000000102f6d410 dyld_sim`start_sim + 20
frame #37: 0x000000010312e274 dyld`start + 2840
Is there a workaround or another way to switch between IK and FK?
Topic: Graphics & Games, SubTopic: RealityKit
I’ve been trying to run Jurassic World Evolution 2 using the Game Porting Toolkit on macOS, but the game doesn’t launch and crashes immediately. Based on the error and research, it seems the issue is related to missing support for D3D12_TILED_RESOURCES_TIER_2 in the Metal API.
If this is the case, does anyone know if support for tiled resources is planned for future updates of the toolkit? Or are there any potential workarounds for bypassing this limitation?
I have an M1 Pro with a 16-core GPU. When I run a shader with 8193 threads, atomic_thread_fence is violated across the boundary between thread 8191 (the last thread of the 8th threadgroup) and thread 8192 (the first thread of the 9th threadgroup).
I've attached the Metal and Swift files, but I'll repost the relevant kernel here. It's a kernel dispatched with N threads that iterates through a binary tree from the leaves upward; the first thread to reach a parent terminates, and the second one populates it with the sum of the node's two children.
#include <metal_stdlib>
using namespace metal;

// clang-format off
kernel void sum(device const int& size,
                device const int* __restrict__ in,
                device int* __restrict__ out,
                device atomic_int* visited,
                uint i [[thread_position_in_grid]]) {
  // clang-format on
  int val = in[i];
  uint cur = (size + i - 1);
  out[cur] = val;
  atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
  cur = (cur - 1) / 2;
  int proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  while (proceed == 1) {
    uint left = 2 * cur + 1;
    uint right = 2 * cur + 2;
    uint val_left = out[left];
    uint val_right = out[right];
    uint val_cur = val_left + val_right;
    out[cur] = val_cur;
    if (cur == 0) {
      break;
    }
    cur = (cur - 1) / 2;
    atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
    proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  }
}
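For anyone who wants to reproduce this, the host side is just a plain compute dispatch. A minimal sketch follows (see the attached main.swift for the exact harness; the buffer contents and the 1024-wide threadgroup here are assumptions that match the numbers described below):

import Foundation
import Metal

let n = 8193   // first grid size where the issue shows up for me on the M1 Pro

let device = MTLCreateSystemDefaultDevice()!
let library = device.makeDefaultLibrary()!   // assumes sum.metal is compiled into the default library
let pipeline = try! device.makeComputePipelineState(function: library.makeFunction(name: "sum")!)

let input = (0..<n).map { Int32($0) }        // in[i] == i, matching the values described below
let sizeBuf    = device.makeBuffer(bytes: [Int32(n)], length: MemoryLayout<Int32>.size)!
let inBuf      = device.makeBuffer(bytes: input, length: n * MemoryLayout<Int32>.size)!
let outBuf     = device.makeBuffer(length: (2 * n - 1) * MemoryLayout<Int32>.size)!
let visitedBuf = device.makeBuffer(length: (n - 1) * MemoryLayout<Int32>.size)!
memset(visitedBuf.contents(), 0, visitedBuf.length)   // the visited counters must start at zero

let queue = device.makeCommandQueue()!
let cmd = queue.makeCommandBuffer()!
let enc = cmd.makeComputeCommandEncoder()!
enc.setComputePipelineState(pipeline)
enc.setBuffer(sizeBuf, offset: 0, index: 0)
enc.setBuffer(inBuf, offset: 0, index: 1)
enc.setBuffer(outBuf, offset: 0, index: 2)
enc.setBuffer(visitedBuf, offset: 0, index: 3)
enc.dispatchThreads(MTLSize(width: n, height: 1, depth: 1),
                    threadsPerThreadgroup: MTLSize(width: 1024, height: 1, depth: 1))
enc.endEncoding()
cmd.commit()
cmd.waitUntilCompleted()

// out[0] should now hold the sum of all leaves.
print("root sum = \(outBuf.contents().load(as: Int32.self))")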
What I'm observing is that thread 8192 hits the atomic_fetch_add first and terminates, while thread 8191 hits it second (observes that thread 8192 had incremented it by 1) and proceeds into the loop. Thread 8191 reads out[16383] (which it populated with 8191) and out[16384] (which thread 8192 populated with 8192 prior to the atomic_thread_fence). Instead of reading 8192 from out[16384] though, it reads 0.
Maybe I'm missing something but this seems like a pretty clear violation of the atomic_thread_fence which (I thought) was supposed to guarantee that the write from thread 8192 to out[16384] would be visible to any thread observing the effects of the following atomic_fetch_add. Is atomic_fetch_add not a store operation? Modifying it to an atomic_store or atomic_exchange still results in the bug. Adding another atomic_thread_fence between the atomic_fetch_add and reading of out also doesn't change anything.
I only begin to observe this on grid sizes of 8193 and upwards. That's 9 threadgroups per grid, which I assume could be related to my M1 Pro GPU having 16 cores.
Running the same example on an A17 Pro GPU doesn't show any of this behavior up through a tested grid size of 4194303 (2^22-1), at which point testing larger grid sizes starts to run into other issues so I can't test anything larger.
Removing the atomic_thread_fences on both the M1 and A17 causes the test to fail at much smaller grid sizes, as expected.
sum.metal
main.swift
Hi,
When I attach a BillboardComponent to my anchor entities, I am no longer able to retrieve the tapped entity, because the entity's collision shapes are thrown off by the constant re-orienting towards the camera. The collision shapes are clearly not being updated: if I tap places that are nowhere near my model entity, I still get a hit out of nowhere.
I tried updating the collision shapes of the entity every frame:
for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}
However, nothing comes of it, and it is not a smart solution in the first place because recreating the shapes every frame is too heavy.
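One idea I have not tried yet is to keep the collision on a parent entity that never billboards and attach the BillboardComponent only to the visual child, roughly like this (just a sketch; tapRadius and anchorEntity are placeholders):

// Untested idea: collision lives on a stable parent with an orientation-independent
// shape; only the visual child billboards toward the camera.
let tapTarget = Entity()
tapTarget.components.set(CollisionComponent(shapes: [.generateSphere(radius: tapRadius)]))

let visual = existingPassport.mainEntity!   // the content that should face the camera
visual.components.set(BillboardComponent())

tapTarget.addChild(visual)
anchorEntity.addChild(tapTarget)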
I am using my usual ARView view controller, which works just fine when I comment out the BillboardComponent line:
private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18?
Thanks!
Is there a working example of imageblock_slice with implicit layout somewhere?
I get a compilation error when I write this:
imageblock_slice color_slice = img_blk.slice(frag->color);
Error:
No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'
It seems the syntax has changed since the Imageblocks presentation https://developer.apple.com/videos/play/tech-talks/603/
I tried supplying the struct type of the image block between <> but it still does not work.
Hello!
Brand new to the Apple developer community, so, hello everyone! I'm a game developer; we just launched our first game on PC, and I'm looking to port it to iOS. Time is something I'm kind of short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Any good sites you'd recommend for finding programmers to port your game? It's fairly simple, just a visual novel. Any and all suggestions welcome!
All the best!
Elijah
Topic: Graphics & Games, SubTopic: General
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene.
Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
Hi experts,
When I open a USDZ file that contains perspective cameras in the Files app on iOS 18.2/iPadOS 18.2, I can't see anything. When I open the same USDZ file on iOS 18.1/iPadOS 18.1, it works well.
On the other hand, when I open a USDZ file that contains orthographic cameras on iOS 18.1 or iOS 18.2, the scene is stuck.
Could you help to solve these issues please?
Thanks.
Hello,
I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
// Prepare the asset
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
    throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]

// Check whether this is an HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
    readerSettings[AVVideoAllowWideColorKey as String] = true
    readerSettings[kCVPixelBufferPixelFormatTypeKey as String] =
        kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
    isHDRDetected = true
}

// Add the output to the assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else {
    throw SomeErrorCode.error
}
assetReader.add(output)
guard assetReader.startReading() else {
    throw SomeErrorCode.error
}

// Writer output settings
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
]
let finalPath = URL(fileURLWithPath: "//some URL path")
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video)
else {
    throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
        ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
    kCVPixelBufferWidthKey as String: 1920,
    kCVPixelBufferHeightKey as String: 1080,
]

// Create the asset adaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else {
    throw SomeErrorCode.error
}
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else {
    throw SomeErrorCode.error
}
assetWriter.startSession(atSourceTime: CMTime.zero)

// Prepare the transfer session
var session: VTPixelTransferSession? = nil
guard
    VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session)
        == noErr, let session
else {
    throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else {
    throw SomeErrorCode.error
}
// Read through the frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
    autoreleasepool {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else {
            return
        }

        // This part is copied from https://developer.apple.com/videos/play/wwdc2023/10181 at the 23:58 timestamp
        let attachment = [
            kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
            kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
            kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
        ]
        CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)

        // Now convert to CIImage with HDR data
        let image = CIImage(cvPixelBuffer: imageBuffer)
        let cropped = image // here perform some actions like cropping, flipping, etc., and preserve
                            // these changes by converting the extent to a CGImage first:

        // This part is copied from https://developer.apple.com/videos/play/wwdc2023/10181 at the 24:30 timestamp
        guard
            let cgImage = context.createCGImage(
                cropped, from: cropped.extent, format: .RGBA16,
                colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
        else {
            return
        }

        // Finally convert it back to CIImage
        let newScaledImage = CIImage(cgImage: cgImage)

        // Now write it to a new pixel buffer
        let pixelBufferAttributes: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
        ]
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(
            kCFAllocatorDefault, Int(newScaledImage.extent.width), Int(newScaledImage.extent.height),
            kCVPixelFormatType_420YpCbCr10BiPlanarFullRange, pixelBufferAttributes as CFDictionary,
            &pixelBuffer)
        guard let pixelBuffer else {
            return
        }
        context.render(newScaledImage, to: pixelBuffer) // context is a CIContext reference

        var pixelTransferBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
        guard let pixelTransferBuffer else {
            return
        }

        // Transfer the image to the pixel buffer.
        guard
            VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer)
                == noErr
        else {
            return
        }

        // Finally append to taggedBuffer
    }
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
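For completeness, context above is a plain CIContext created elsewhere. One variation I have been meaning to try, purely an assumption on my part and not something the WWDC session shows, is creating it with an extended-range working space:

// Untested assumption: an HDR-aware CIContext configuration.
let context = CIContext(options: [
    .workingColorSpace: CGColorSpace(name: CGColorSpace.extendedLinearITUR_2020)!,
    .workingFormat: CIFormat.RGBAh
])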
The resulting video does not have the correct color compared to the original; it turns out too bright. If I play around with the attachment values, it can come out either too dim or too bright, but never exactly matching the original. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 produces proper video output, but then I can't convert it to a CIImage, so I can't perform the CIImage operations that I need, mainly cropping and resizing.
What evidence exists that it's safe to call nextDrawable() on CAMetalLayer off the main thread? I have seen developers claiming that it's OK, but the official docs are silent on the topic. Attempting to do so with Strict Concurrency Checking set to Complete complains that CAMetalLayer is not @Sendable.
I want to call it off the main thread since there doesn't seem to be any way to prevent it from blocking the UI for up to a second. I have read hints and allegations that this won't happen if you avoid asking for too many drawables, but that doesn't seem to be true 100% of the time in my experience.
Supposing it is allowed, I wonder how races are handled such as when the layer's size is changed on the main thread, or if the layer is removed from the layer hierarchy.
Topic: Graphics & Games, SubTopic: Metal
I am trying to use the SVGF denoiser to denoise my ray traced shadows (and also other textures later). I do get a smoothed image, but with wonky denoising.
I need the depth-normal textures and motion textures for the SVGF and assume that these are badly filled in my case. However, neither the above-linked documentation nor the WWDC19 video tells me how they should be defined. I am looking for answers to the following:
Is depth in red or alpha channel for the depth-normal texture?
Are the normals in screen space?
Is depth linear?
Is it distance or z coordinate in view space? Or even logarithmically scaled or something else?
Are the motion vectors supposed to be in pixels per frame?
What is the orientation of the axis? Is y up or down?
Are there other restrictions on the formats?
Also, the linked code did not help me (I have not found any SVGF usage in it so far; the code is also all in Objective-C++, not Swift, but that's a different topic).
So how should I fill these textures?
Can someone point me to the documentation where these kinds of questions are answered?
My MacBook Air M1 is running macOS Sonoma 14.3.1, and I tried to install the Game Porting Toolkit tonight. At the step that requires entering the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for minutes, but at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build.
I don't know anything about coding and software. Could someone please tell me what causes this error and how to fix it? I would appreciate your help!
Hi, I updated my system to 15.2. It was set to Turkish and I changed it to English for Apple Intelligence, but Image Playground is not working.
My Image Playground app has been stuck at "Downloading support for Image Playground" for about 4-5 days now.
Topic: Graphics & Games, SubTopic: General
I am trying to model something similar to an odometer in RealityKit, where 3D numbers scroll up or down, as they increase or decrease, within a container entity. Is there a way for an Entity to clip its children so that anything that extends beyond its dimensions is not rendered?
Hello,
I want to use a flattenedClone() node for the trees in my landscape to reduce draw calls. If my texture uses a JPG file, the result is good. If I use a PNG with transparency, there is no draw-call reduction.
Can I use PNGs with transparency with flattenedClone()?
Below is the structure I'm using:
for child in scene_temp.rootNode.childNodes {
    let newtree = tree.clone()
    sum_tree.addChildNode(newtree)
}
let sum_tree_flat = sum_tree.flattenedClone()
scene.rootNode.addChildNode(sum_tree_flat)
If the tree texture is a JPG (diffuse), I get the expected draw-call reduction (< 200 for my complete map), but no transparency on my leaves. If I use a PNG, my draw calls are not reduced (> 2000), but my trees do have transparent leaves.
I've tried replacing tree.clone() with tree.flattenedClone() with no results.
Thanks for your help.
Everything works fine, except that when tapping the navigation Back link and returning to the previous view, the AR session inside the RealityView does not terminate. The green camera-indicator dot stays on, the session is still scanning the environment, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel.
I have no issues terminating QuickLook or ARSCNView.
I have a simple NavigationLink opening the RealityView...
NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")
            // Add model to anchor
            anchor.addChild(scene!)
            content.add(anchor)
            // View settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            //print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}
I have experimented with several ways of trying to close it, and I can't figure it out. I have tried State variables and custom Back buttons. I was also trying to work with pause(), but I can't get that to function either.
Anyone else have this issue or know of a solution? What am I missing?
Topic: Graphics & Games, SubTopic: RealityKit
I'm trying to pass a buffer of float2 items from CPU to GPU.
In the kernel, I can provide a parameter for the buffer:
device const float2* values
for example.
How do I specify float2 as the type for the MTL::Buffer?
I managed to get the code to work by "cheating": defining a simple class that has the same data members as a float2, but there is probably a better way.
class Coord_f { public: float x{0.0f}; float y{0.0f}; };
then using code to allocate like this:
NS::TransferPtr(device->newBuffer(n_elements * sizeof(Coord_f), MTL::ResourceStorageModeManaged))
The headers for metal-cpp do not appear to define vector objects like float2, but I'm doubtless missing something.
Thanks.
I have a Mac Studio 2023 M2 Max
Running Sonoma 14.6.1
Developing in Xcode 16.1
It seems that the NSScreen frame values may be incorrect. The frames received from NSScreen.screens don't seem to match the display arrangement in System Settings.
Apologies in advance for this long post!
for screen in NSScreen.screens {
    let name = screen.localizedName
    Globals.logger.debug("Globals initializeScreens - screen \(i) '\(name, privacy: .public)'")
    Globals.logger.debug("Globals initializeScreens - '\(screen.debugDescription, privacy: .public)'")
}
This is what I receive in the log:
Globals initializeScreens - '<NSScreen: 0x600000ef4240;
name="PHL 346E2C";
backingScaleFactor=1.000000;
frame={{0, 0}, {3440, 1440}};
visibleFrame={{0, 0}, {3440, 1415}}>'
Globals initializeScreens - screen 2 'Blackmagic (1)'
Globals initializeScreens - '<NSScreen: 0x600000ef42a0;
name="Blackmagic (1)";
backingScaleFactor=1.000000;
frame={{-3840, 0}, {1920, 1080}};
visibleFrame={{-3840, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 3 'Blackmagic (4)'
Globals initializeScreens - '<NSScreen: 0x600000ef4360;
name="Blackmagic (4)";
backingScaleFactor=1.000000;
frame={{-1920, 0}, {1920, 1080}};
visibleFrame={{-1920, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 4 'Blackmagic (2)'
Globals initializeScreens - '<NSScreen: 0x600000ef43c0;
name="Blackmagic (2)";
backingScaleFactor=1.000000;
frame={{5360, 0}, {1920, 1080}};
visibleFrame={{5360, 0}, {1920, 1055}}>'
Globals initializeScreens - screen 5 'Blackmagic (3)'
Globals initializeScreens - '<NSScreen: 0x600000ef4420;
name="Blackmagic (3)";
backingScaleFactor=1.000000;
frame={{3440, 0}, {1920, 1080}};
visibleFrame={{3440, 0}, {1920, 1055}}>'
It looks like the frame settings for Blackmagic (2) and Blackmagic (4) are switched.
The setup has five monitors. Four are using USB-C Digital AV Multiport Adapters. The output from these is streamed into a rack of A/V equipment using Blackmagic Design mini converters and monitors.
My Swift application allows users to open four movies, one for each of the AV Adapters. The movies can then be played back in sync for later processing by the A/V equipment.
Here are some screen captures that show my display settings.
Blackmagic (1) and Blackmagic (2) are to the left of the main screen.
Blackmagic (3) and Blackmagic(4) are to the right of the main screen.
The desktop is hard to see but is correct.
The wallpaper settings are all correct.
The wallpaper is correctly ordered when displayed on the monitors.
After opening the movies and using the NSScreen frame settings, the displays are incorrectly ordered. Test B and Test D are switched, which is what I would expect given the NSScreen frame values.
Any ideas? I've tried re-arranging the desktops, rebooting, etc. but no luck.
The code that changes the screen location is similar to this post on Stack Overflow
public func setDisplay(screen: NSScreen) {
    Globals.logger.log("MovieWindowController - setDisplay = \(screen.localizedName, privacy: .public)")
    Globals.logger.debug("MovieWindowController - setDisplay - '\(screen.debugDescription, privacy: .public)'")

    let dx = CGFloat(Constants.midX)
    let dy = CGFloat(Constants.midY)
    var pos = NSPoint()
    pos.x = screen.visibleFrame.midX - dx
    pos.y = screen.visibleFrame.midY - dy
    Globals.logger.debug("MovieWindowController - setDisplay - x = '\(pos.x, privacy: .public)', y = '\(pos.y, privacy: .public)'")
    window?.setFrameOrigin(pos)
}
The log shows just what I would expect given the incorrect frame values.
MovieWindowController - setDisplay = Blackmagic (1)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018e8420; name="Blackmagic (1)"; backingScaleFactor=1.000000; frame={{-3840, 0}, {1920, 1080}}; visibleFrame={{-3840, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-3840.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (2)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018a10e0; name="Blackmagic (2)"; backingScaleFactor=1.000000; frame={{5360, 0}, {1920, 1080}}; visibleFrame={{5360, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '5360.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (3)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018cc8a0; name="Blackmagic (3)"; backingScaleFactor=1.000000; frame={{3440, 0}, {1920, 1080}}; visibleFrame={{3440, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '3440.000000', y = '-12.500000'
MovieWindowController - setDisplay = Blackmagic (4)
MovieWindowController - setDisplay - '<NSScreen: 0x6000018c9ce0; name="Blackmagic (4)"; backingScaleFactor=1.000000; frame={{-1920, 0}, {1920, 1080}}; visibleFrame={{-1920, 0}, {1920, 1055}}>'
MovieWindowController - setDisplay - x = '-1920.000000', y = '-12.500000'
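One thing I'm planning to try next is keying the window placement off each screen's CGDirectDisplayID instead of its name, roughly like this (untested sketch):

import AppKit

// Untested sketch: identify each NSScreen by its CGDirectDisplayID and compare
// the AppKit frame with the CoreGraphics bounds for the same display.
for screen in NSScreen.screens {
    if let screenNumber = screen.deviceDescription[NSDeviceDescriptionKey("NSScreenNumber")] as? NSNumber {
        let displayID = CGDirectDisplayID(screenNumber.uint32Value)
        let cgBounds = CGDisplayBounds(displayID)   // global bounds, top-left origin
        print("\(screen.localizedName): id=\(displayID) NSScreen.frame=\(screen.frame) CGDisplayBounds=\(cgBounds)")
    }
}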
Am I correct? This is driving me crazy!
Thanks in advance!
Edit: The mouse behavior is correct in moving across the displays!
Topic: Graphics & Games, SubTopic: General