Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under Graphics & Games topic

Post

Replies

Boosts

Views

Activity

macOS SwiftUI app with external 4K camera & sensors for Hospital Avatar: ARKit, MLX, and Thermal feasibility?
We are developing a standalone AI avatar application for hospital reception kiosks using a Mac mini (M2/M4). The app runs on SwiftUI + RealityKit, displays on a 75-inch monitor, and uses a USB-connected 4K camera and external sensors (LiDAR/mmWave). We have several technical concerns regarding the transition from iPadOS to macOS. Could you please provide insights on the following?

1. ARKit/Vision framework on macOS with an external camera. On iPadOS, ARKit provides robust face tracking. On macOS with an external USB 4K camera: Can we achieve real-time face tracking (expression/gaze/depth) with the Vision framework or ARKit comparable to iPadOS performance? Are there any specific limitations for accessing the Neural Engine via the Vision framework for real-time 4K video analysis on macOS?

2. Accessing external hardware (LiDAR/sensors) in the App Sandbox. We plan to connect external LiDAR and mmWave sensors (e.g., Akara) via USB/Bluetooth. Is it feasible to communicate with these custom drivers/devices within the App Sandbox environment? Would DriverKit be required, or can we use standard serial communication APIs?

3. On-device LLM (MLX) and thermals. We intend to run a local LLM (e.g., Llama 3 via the MLX framework) for offline conversation, alongside 3D rendering. With the M2/M4 Mac mini fan design, is there a risk of thermal throttling during 10+ hours of continuous operation (simultaneous Core ML + 3D rendering)? Is the Mac Studio recommended over the Mac mini for this thermal profile?

4. Long-running speech APIs. Are there any known issues (memory leaks, API limits) when using the Speech framework and AVSpeechSynthesizer continuously for over 10 hours daily?

5. 3D display output. Are there any macOS constraints on rendering a SwiftUI window in a specific 3D format (e.g., side-by-side) and outputting it via HDMI to a 3D digital signage display (fixed refresh rate/resolution)?

Thank you for your assistance.
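For context on question 1, here is a minimal sketch of the pipeline we have in mind on macOS: an AVCaptureSession reading the external USB camera and a Vision face-landmarks request run per frame. The FaceTracker class, the capture wiring, and the .external device type are my own illustration and assumptions, not a confirmed answer.

import AVFoundation
import Vision

// Sketch (assumption): feed frames from an external USB camera into Vision on macOS.
final class FaceTracker: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    private let output = AVCaptureVideoDataOutput()
    private let queue = DispatchQueue(label: "face.tracking")

    func start() throws {
        // .external requires macOS 14+; earlier systems used .externalUnknown.
        let discovery = AVCaptureDevice.DiscoverySession(deviceTypes: [.external],
                                                         mediaType: .video,
                                                         position: .unspecified)
        guard let camera = discovery.devices.first else { return }
        session.beginConfiguration()
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(output) {
            output.setSampleBufferDelegate(self, queue: queue)
            session.addOutput(output)
        }
        session.commitConfiguration()
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        let request = VNDetectFaceLandmarksRequest { request, _ in
            // Landmarks give 2D expression/gaze cues; there is no ARKit-style depth here.
            let faces = request.results as? [VNFaceObservation] ?? []
            print("faces:", faces.count)
        }
        try? VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:]).perform([request])
    }
}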
0
0
244
2w
SCNTechnique clearColor Always Shows sceneBackground When Passes Share Depth Buffer
Problem Description: I'm encountering an issue with SCNTechnique where the clearColor setting is being ignored when multiple passes share the same depth buffer. The clear color always appears as the scene background, regardless of what value I set. The minimal project for reproducing the issue: https://www.dropbox.com/scl/fi/30mx06xunh75wgl3t4sbd/SCNTechniqueCustomSymbols.zip?rlkey=yuehjtk7xh2pmdbetv2r8t2lx&st=b9uobpkp&dl=0

Problem Details: In my SCNTechnique configuration, I have two passes that need to share the same depth buffer for proper occlusion handling:

"passes": [
    "box1_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 1,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Expecting transparent black
        ],
        "depthStates": [
            "clear": true,
            "enableWrite": true
        ],
        "outputs": [
            "depth": "box1_depth",
            "color": "box1_color"
        ],
    ],
    "box2_pass": [
        "draw": "DRAW_SCENE",
        "includeCategoryMask": 2,
        "colorStates": [
            "clear": true,
            "clearColor": "0 0 0 0" // Also expecting transparent black
        ],
        "depthStates": [
            "clear": false,
            "enableWrite": false
        ],
        "outputs": [
            "depth": "box1_depth", // Sharing the same depth buffer
            "color": "box2_color",
        ],
    ],
    "final_quad": [
        "draw": "DRAW_QUAD",
        "metalVertexShader": "myVertexShader",
        "metalFragmentShader": "myFragmentShader",
        "inputs": [
            "box1_color": "box1_color",
            "box2_color": "box2_color",
        ],
        "outputs": [
            "color": "COLOR"
        ]
    ]
]

And the Metal shader used to display box1_color and box2_color split across the screen:

fragment half4 myFragmentShader(VertexOut in [[stage_in]],
                                texture2d<half, access::sample> box1_color [[texture(0)]],
                                texture2d<half, access::sample> box2_color [[texture(1)]]) {
    half4 color1 = box1_color.sample(s, in.texcoord);
    half4 color2 = box2_color.sample(s, in.texcoord);
    if (in.texcoord.x < 0.5) {
        return color1;
    }
    return color2;
};

Expected Behavior: Both passes should clear their color targets to transparent black (0, 0, 0, 0). The depth buffer should be shared between passes for proper occlusion.

Actual Behavior: Both box1_color and box2_color targets contain the scene background instead of being cleared to transparent (see attached image). This happens even when I explicitly set clearColor: "0 0 0 0" for both passes. Setting scene.background.contents = UIColor.clear makes the clearColor work as expected, but I need to keep the scene background for other purposes.

What I've Tried: Setting different clearColor values - all are ignored when sharing the depth buffer. Using DRAW_NODE instead of DRAW_SCENE - didn't solve the issue. Creating a separate pass to capture the background - the background still appears in the other passes. Various combinations of clear flags and render orders.

Environment: iOS/macOS, running with "My Mac (Designed for iPad)", Xcode 16.2.

Question: Is this a known limitation of SceneKit when passes share a depth buffer? Is there a workaround to achieve truly transparent clear colors while maintaining a shared depth buffer for occlusion testing? The core issue seems to be that SceneKit automatically renders the scene background in every DRAW_SCENE pass when a shared depth buffer is detected, overriding any clearColor settings. Any insights or workarounds would be greatly appreciated. Thank you!
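Not part of the question itself, but for readers following along, this is roughly how a technique dictionary like the one above is attached to the view; the plist name is a placeholder, not something from the project.

import SceneKit

// Sketch: load a technique definition (e.g., from a plist in the bundle) and attach it.
// "SCNTechnique.plist" is a placeholder name.
func installTechnique(on scnView: SCNView) {
    guard let url = Bundle.main.url(forResource: "SCNTechnique", withExtension: "plist"),
          let dictionary = NSDictionary(contentsOf: url) as? [String: Any],
          let technique = SCNTechnique(dictionary: dictionary) else { return }
    scnView.technique = technique
}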
0
0
266
Jun ’25
ScreenCaptureKit
I have code that captures a window and displays a cropped image. The problem is two-fold. ScreenCaptureKit doesn't seem to let me stop and reconfigure the capture in window mode so that it captures only a portion of the screen, so I have to crop each frame myself and display the cropped image via a published variable. This all works fine, but it seems to stop after some time. I'm on an M1 with 16 GB of RAM; the program uses less than 100 MB of memory at roughly 40-70% CPU. In debug mode I print a success message for each captured frame, and sometimes the frame isn't valid, so I guard against that. Any ideas on how to improve my strategy?
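A sketch of one approach I'm considering, assuming SCStreamConfiguration.sourceRect and SCStream.updateConfiguration behave as documented: update the running stream's configuration instead of cropping each frame on the CPU. The function name and parameters here are placeholders.

import CoreMedia
import ScreenCaptureKit

// Sketch: re-crop an existing capture by updating the stream configuration,
// rather than cropping every delivered frame. Assumes a running SCStream.
func recrop(stream: SCStream, to rect: CGRect, scale: CGFloat) async throws {
    let config = SCStreamConfiguration()
    config.sourceRect = rect                       // portion of the source to capture, in points
    config.width = Int(rect.width * scale)         // output size in pixels
    config.height = Int(rect.height * scale)
    config.minimumFrameInterval = CMTime(value: 1, timescale: 30)
    try await stream.updateConfiguration(config)
}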
0
0
136
Jun ’25
ARView [.showStatistics] doesn't work on Xcode Canvas
Hi, I can't see RealityKit statistics on Xcode Canvas using: arView.debugOptions = [.showStatistics] The statistics only show on a physical device, not Xcode live canvas with #Preview. Testing in Xcode 26.0.1 (17A400) on Tahoe 26.0.1 (25A362). Use case: I'm using RealityKit as a non-AR 3D engine. Xcode Canvas is useful for live iterations. Is this expected behavior? How can I see FPS on Xcode canvas? SKView for example shows all debug options on both Xcode Canvas and physical devices.
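As a stopgap for seeing FPS in the canvas, here is a rough sketch (assuming an iOS target) that estimates a frame rate from RealityKit's update events instead of relying on .showStatistics; the wrapper type and label overlay are my own additions, not an official API.

import SwiftUI
import RealityKit
import Combine

// Sketch: estimate FPS from SceneEvents.Update instead of .showStatistics.
struct FPSOverlayARView: UIViewRepresentable {
    final class Coordinator {
        var subscription: Cancellable?
        let label = UILabel()
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero, cameraMode: .nonAR, automaticallyConfigureSession: false)
        let label = context.coordinator.label
        label.textColor = .green
        label.frame = CGRect(x: 8, y: 8, width: 120, height: 20)
        arView.addSubview(label)

        context.coordinator.subscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
            // deltaTime is the time since the previous update; invert it for an instantaneous FPS.
            let fps = event.deltaTime > 0 ? 1.0 / event.deltaTime : 0
            label.text = String(format: "%.0f FPS", fps)
        }
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#Preview {
    FPSOverlayARView()
}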
0
0
444
Oct ’25
Optimizing HZB Mip-Chain Generation and Bindless Argument Tables in a Custom Metal Engine
Hi everyone, I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture. While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.

Current Core Implementations:
- GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via compute shaders.
- Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
- Temporal Stability: Custom TAA with history rejection and motion blur resolve.
- Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.

The Architectural Challenge: I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.

    && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }
    const uint32_t hzbWidth = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads = 8;
    MTL::Size tgSize = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads, 1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads, 1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}

My Questions:
1. Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder using MTLBarrier between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth-downsampling on TBDR?
2. visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
3. Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?

I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
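Regarding question 1, here is a small sketch of the single-encoder variant I would compare against: one compute encoder for the whole mip chain with a texture-scope memory barrier between dispatches. It is written in Swift rather than metal-cpp, and the pipeline/texture names are placeholders standing in for the engine's own objects.

import Metal

// Sketch: build the HZB mip chain inside a single compute encoder, using
// memoryBarrier(scope: .textures) between dispatches instead of separate encoders.
// `downsamplePipeline` and `mipViews` are placeholders, not the engine's real names.
func encodeHZBDownsample(commandBuffer: MTLCommandBuffer,
                         downsamplePipeline: MTLComputePipelineState,
                         mipViews: [MTLTexture]) {
    guard mipViews.count > 1,
          let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    let tg = MTLSize(width: 8, height: 8, depth: 1)

    for mip in 1..<mipViews.count {
        encoder.setTexture(mipViews[mip - 1], index: 0)   // source level
        encoder.setTexture(mipViews[mip], index: 1)       // destination level
        let grid = MTLSize(width: mipViews[mip].width, height: mipViews[mip].height, depth: 1)
        encoder.dispatchThreads(grid, threadsPerThreadgroup: tg)
        // Make the writes to this mip visible to the next dispatch's reads.
        encoder.memoryBarrier(scope: .textures)
    }
    encoder.endEncoding()
}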
0
0
326
1w
Error: "CoreImage Metal library does not contain function"
Hey, I'm using the CIDepthBlurEffect Core Image filter in my app. It seems to work OK, but I get these errors in the console when calling the class:

CoreImage Metal library does not contain function for name: sparserendering_xhlrb_scan
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_diffuse
CoreImage Metal library does not contain function for name: sparserendering_xhlrb_copy_back
CoreImage Metal library does not contain function for name: plain_or_sRGB_copy

Am I missing some sort of import to gain access to these Metal functions? I am using my own custom shaders, but I assume you'd be able to use them alongside the built-in ones.
0
0
516
Dec ’25
How to apply the same SystemImage to both mainEmitter and spawnedEmitter without clipping in ParticleEmitterComponent?
Hi everyone, I’m currently learning about ParticleEmitterComponent and exploring the sample app provided in the "Simulating particles in your visionOS app" documentation. In the sample app, when I set the EmitterPreset to fireworks from the settings panel on the left side of the window and choose SystemImage, I noticed two issues: the image applied to mainEmitter appears clipped or cropped, and the image on spawnedEmitter does not update to the selected SystemImage. What I want to achieve: apply the same SystemImage to both mainEmitter and spawnedEmitter so that it displays correctly without clipping, and remove the animation that changes the size of spawnedEmitter over time, keeping it at a constant size. Could someone explain which properties should be adjusted to achieve this behavior? Any guidance or examples would be greatly appreciated. Thanks in advance!
0
0
476
Sep ’25
RealityView iOS Navigation
I have a visionOS app that I'm adding iOS support to, and I would like to keep using RealityView. I know there are the following modifiers to add some navigation: .realityViewCameraControls(.orbit), .realityViewCameraControls(.dolly), .realityViewCameraControls(.pan). But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that? Thanks, Guillermo
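In case it helps frame the question, here is a rough sketch of the fallback I would consider if the built-in controls can't be combined: drive a PerspectiveCamera entity from SwiftUI gestures. All names and the gesture-to-motion mapping are my own assumptions, not an official combined-controls API, and the two-finger pan is omitted for brevity.

import SwiftUI
import RealityKit

// Sketch: manual camera control in RealityView when the built-in
// .realityViewCameraControls options can't be combined. Assumes iOS 18+.
struct OrbitDollyCameraView: View {
    @State private var yaw: Float = 0
    @State private var pitch: Float = 0
    @State private var distance: Float = 2

    var body: some View {
        RealityView { content in
            let camera = PerspectiveCamera()
            camera.name = "camera"
            content.add(camera)
            content.add(ModelEntity(mesh: .generateBox(size: 0.3),
                                    materials: [SimpleMaterial(color: .blue, isMetallic: false)]))
        } update: { content in
            guard let camera = content.entities.first(where: { $0.name == "camera" }) else { return }
            // Orbit around the origin at the current distance.
            let rotation = simd_quatf(angle: yaw, axis: [0, 1, 0]) * simd_quatf(angle: pitch, axis: [1, 0, 0])
            camera.look(at: .zero, from: rotation.act([0, 0, distance]), relativeTo: nil)
        }
        .gesture(DragGesture().onChanged { value in
            // One-finger drag orbits the camera.
            yaw -= Float(value.translation.width) * 0.005
            pitch -= Float(value.translation.height) * 0.005
        })
        .gesture(MagnifyGesture().onChanged { value in
            // Pinch dollies in and out (relative to a base distance of 2, kept simple here).
            distance = max(0.5, min(10, 2 / Float(value.magnification)))
        })
    }
}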
0
1
464
Feb ’25
Improving person segmentation and occlusion quality in RealityKit
I’m building an app that uses RealityKit and specifically ARConfiguration.FrameSemantics.personSegmentationWithDepth. The goal is to insert an AR object into the scene behind a person, and an additional AR object in front of the person, while being as photorealistic as possible. Through testing, I’ve noticed that many times the edges of the person segmentation mask are not well matched to the actual person, and parts of the person are transparent, with the AR object bleeding through. It’s sort of like a “bad green screen” effect, which I’d expect to see a little bit, but not to this extent. I’ve been testing on iPhone 16, iPhone 14 Pro, iPad Pro 12.9-inch (6th generation), and iPhone 12 Pro, with similar results across all devices. I’m wondering what else I can do to improve this... either code changes, platform (like different iPhone models), or environment (like lighting, distance, etc). Attaching some example screen grabs and a minimum reproducible code sample. Appreciate any insights!

import ARKit
import SwiftUI
import RealityKit

struct RealityViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.environment.sceneUnderstanding.options.insert(.occlusion)
        arView.renderOptions.insert(.disableMotionBlur)
        arView.renderOptions.insert(.disableDepthOfField)

        let configuration = ARWorldTrackingConfiguration()
        configuration.planeDetection = [.horizontal]
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
            configuration.frameSemantics.insert(.personSegmentationWithDepth)
        }
        arView.session.run(configuration)

        arView.session.delegate = context.coordinator
        context.coordinator.arView = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    func makeCoordinator() -> Coordinator {
        Coordinator(self)
    }

    class Coordinator: NSObject, ARSessionDelegate {
        var parent: RealityViewContainer
        weak var arView: ARView?
        var floorAnchor: ARPlaneAnchor?

        init(_ parent: RealityViewContainer) {
            self.parent = parent
        }

        func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
            if let arView, floorAnchor == nil {
                for anchor in anchors {
                    if let horizontalPlaneAnchor = anchor as? ARPlaneAnchor,
                       horizontalPlaneAnchor.alignment == .horizontal,
                       horizontalPlaneAnchor.transform.columns.3.y < arView.cameraTransform.translation.y { // filter out ceiling
                        floorAnchor = horizontalPlaneAnchor
                        // BackgroundEntity and ForegroundEntity are custom entities defined elsewhere in the project.
                        let backgroundEntity = BackgroundEntity()
                        let anchorEntity = AnchorEntity(anchor: horizontalPlaneAnchor)
                        anchorEntity.addChild(backgroundEntity)
                        let foregroundEntity = ForegroundEntity()
                        backgroundEntity.addChild(foregroundEntity)
                        arView.scene.addAnchor(anchorEntity)
                        arView.installGestures([.rotation, .translation], for: backgroundEntity)
                        break // Stop after adding the first horizontal plane (floor)
                    }
                }
            }
        }
    }
}
1
0
112
May ’25
3D Skeletal animation in metal-cpp?
Hey all! I got my hands on a refurbished Mac mini M1 and am already diving into Metal. At the moment I'm studying graphics programming with OpenGL and have gotten to the point where I can almost create a 3D cube. However, I noticed there aren't many tutorials for metal-cpp, but rather demos. One thing I love about graphics programming is skinning/skeletal animation. At the moment, I can't find any sources or tutorials on how to load skeletal animations into metal-cpp. So, if I create my character in Blender with all its animations, export it to .FBX or maybe .DAE, and load it into the Metal API with metal-cpp, how does that work?
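Not a metal-cpp answer, but since the question is partly about how skinning works: here is a minimal CPU-side sketch of linear blend skinning (the same math you would eventually move into a vertex shader). The types and the four-influences-per-vertex layout are assumptions for illustration.

import simd

// Sketch: linear blend skinning on the CPU. Each vertex is influenced by up to
// four joints; the skinned position is the weighted sum of each joint's
// (jointMatrix * inverseBindMatrix) applied to the bind-pose position.
struct SkinnedVertex {
    var position: SIMD3<Float>
    var jointIndices: SIMD4<UInt16>
    var jointWeights: SIMD4<Float>   // should sum to 1
}

func skin(vertex: SkinnedVertex,
          jointMatrices: [simd_float4x4],        // animated joint transforms (model space)
          inverseBindMatrices: [simd_float4x4]) -> SIMD3<Float> {
    var skinned = SIMD4<Float>(repeating: 0)
    let bindPosition = SIMD4<Float>(vertex.position, 1)
    for i in 0..<4 {
        let joint = Int(vertex.jointIndices[i])
        let skinMatrix = jointMatrices[joint] * inverseBindMatrices[joint]
        skinned += vertex.jointWeights[i] * (skinMatrix * bindPosition)
    }
    return SIMD3<Float>(skinned.x, skinned.y, skinned.z)
}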
1
0
501
Mar ’25
Sparse Texture Writes
Hey, I've been struggling with this for some days now. I am trying to write to a sparse texture in a compute shader. I'm performing the following steps:
1. Set up a sparse heap and create a texture from it
2. Map the whole area of the sparse texture using updateTextureMapping(..)
3. Overwrite every value with the value "4" in a compute shader
4. Blit the texture to a shared buffer
5. Assert that the values in the buffer are "4"
I have a minimal example (which is still pretty long unfortunately). It works perfectly when removing the line heapDesc.type = .sparse. What am I missing? I could not find any information that writes to sparse textures are unsupported. Any help would be greatly appreciated.

import Metal

func sparseTexture64x64Demo() throws {
    // ── Metal objects
    guard let device = MTLCreateSystemDefaultDevice() else {
        throw NSError(domain: "SparseNotSupported", code: -1)
    }
    let queue = device.makeCommandQueue()!
    let lib = device.makeDefaultLibrary()!
    let pipeline = try device.makeComputePipelineState(function: lib.makeFunction(name: "addOne")!)

    // ── Texture descriptor
    let width = 64, height = 64
    let format: MTLPixelFormat = .r32Uint // 4 B per texel
    let desc = MTLTextureDescriptor()
    desc.textureType = .type2D
    desc.pixelFormat = format
    desc.width = width
    desc.height = height
    desc.storageMode = .private
    desc.usage = [.shaderWrite, .shaderRead]

    // ── Sparse heap
    let bytesPerTile = device.sparseTileSizeInBytes
    let meta = device.heapTextureSizeAndAlign(descriptor: desc)
    let heapBytes = ((bytesPerTile + meta.size + bytesPerTile - 1) / bytesPerTile) * bytesPerTile
    let heapDesc = MTLHeapDescriptor()
    heapDesc.type = .sparse
    heapDesc.storageMode = .private
    heapDesc.size = heapBytes
    let heap = device.makeHeap(descriptor: heapDesc)!
    let tex = heap.makeTexture(descriptor: desc)!

    // ── CPU buffers
    let bytesPerPixel = MemoryLayout<UInt32>.stride
    let rowStride = width * bytesPerPixel
    let totalBytes = rowStride * height
    let dstBuf = device.makeBuffer(length: totalBytes, options: .storageModeShared)!

    let cb = queue.makeCommandBuffer()!
    let fence = device.makeFence()!

    // 2. Map the sparse tile, then signal the fence
    let rse = cb.makeResourceStateCommandEncoder()!
    rse.updateTextureMapping(tex, mode: .map,
                             region: MTLRegionMake2D(0, 0, width, height),
                             mipLevel: 0, slice: 0)
    rse.update(fence) // ← capture all work so far
    rse.endEncoding()

    let ce = cb.makeComputeCommandEncoder()!
    ce.waitForFence(fence)
    ce.setComputePipelineState(pipeline)
    ce.setTexture(tex, index: 0)
    let threadsPerTG = MTLSize(width: 8, height: 8, depth: 1)
    let tgCount = MTLSize(width: (width + 7) / 8, height: (height + 7) / 8, depth: 1)
    ce.dispatchThreadgroups(tgCount, threadsPerThreadgroup: threadsPerTG)
    ce.updateFence(fence)
    ce.endEncoding()

    // Blit texture into shared buffer
    let blit = cb.makeBlitCommandEncoder()!
    blit.waitForFence(fence)
    blit.copy(from: tex,
              sourceSlice: 0,
              sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: width, height: height, depth: 1),
              to: dstBuf,
              destinationOffset: 0,
              destinationBytesPerRow: rowStride,
              destinationBytesPerImage: totalBytes)
    blit.endEncoding()

    cb.commit()
    cb.waitUntilCompleted()
    assert(cb.error == nil, "GPU error: \(String(describing: cb.error))")

    // ── Verify a few texels
    let out = dstBuf.contents().bindMemory(to: UInt32.self, capacity: width * height)
    print("first three texels:", out[0], out[1], out[width]) // 0 1 64
    assert(out[0] == 4 && out[1] == 4 && out[width] == 4)
}

Metal shader:

#include <metal_stdlib>
using namespace metal;

kernel void addOne(texture2d<uint, access::write> tex [[texture(0)]],
                   uint2 gid [[thread_position_in_grid]]) {
    tex.write(4, gid);
}
1
0
130
May ’25
SceneKit - different behavior when debugging
Hello, I'm currently working on my first SceneKit game and have encountered an issue related to moving an SCNNode using a UIPanGestureRecognizer. When I deploy the game to my iPhone via Xcode in debug mode, all interactions are smooth. However, when I stop the debugging session and run the game directly from the device (outside of Xcode), the SCNNode movement behaves inconsistently: it sometimes works smoothly and sometimes becomes choppy. The SCNNode movement is controlled using a UIPanGestureRecognizer. Do you have any ideas what might be causing the issue?
1
0
265
May ’25
PhotogrammetrySession fails with internal errors 4011 / 4012 when using iOS Object Capture (Area Mode) images
Hi all, I’m running into an issue when trying to reconstruct a 3D model using PhotogrammetrySession on macOS from a set of images captured via the iOS Object Capture sample app, specifically in Area mode. When I attempt to create the model from these images (using the raw Images/ folder exported directly from the capture session), I get the following errors:

ERROR cv3dapi.pg: Internal error codes (2): 4011 4012
WARN cv3dapi.pg: Internal warning codes (1): 4507
Output error with code = -15
requestError: CoreOC.PhotogrammetrySession.Error.processError

I use the "Images" directory exported directly from Object Capture with my iPhone 12 Pro Max (which has LiDAR), with the app set to Area mode. Here is example metadata from one HEIC image in the sequence:

heif-info Images/00044.869568833.HEIC
MIME type: image/heic
main brand: heic
compatible brands: mif1, MiHE, MiPr, miaf, MiHB, heic
image: 3024x4032 (id=49), primary
  tiles: 6x8, tile size: 512x512
  colorspace: YCbCr, 4:2:0
  bit depth: 8
  thumbnail: 240x320
  color profile: nclx
  alpha channel: no
  depth channel: yes
    size: 192x256
    bits per pixel: 8
    z-near: 1.173828
    z-far: 2.552734
    d-min: undefined
    d-max: undefined
    representation: uniform Z
metadata:
  Exif: 960 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraTrackingState: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#CameraCalibrationData: 1015 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#ObjectBoundingBox: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#RawFeaturePoints: 832 bytes
  uri /tag:apple.com,2023:ObjectCapture#PointCloudData: 23984 bytes
  uri /tag:apple.com,2023:ObjectCapture#BundleVersion: 5 bytes
  uri /tag:apple.com,2023:ObjectCapture#SegmentID: 4 bytes
  uri /tag:apple.com,2024:ObjectCapture#SessionUUID: 36 bytes
  uri /tag:apple.com,2024:ObjectCapture#CaptureMode: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#Feedback: 4 bytes
  uri /tag:apple.com,2023:ObjectCapture#WideToDepthCameraTransform: 48 bytes
  uri /tag:apple.com,2023:ObjectCapture#TemporalDepthPointClouds: 864026 bytes
transformations:
  angle (ccw): 270
region annotations: none
properties:
  camera intrinsic matrix:
    focal length: 2813.695557; 2813.695557
    principal point: 1522.338502; 2002.843018
    skew: 0.000000
  camera extrinsic matrix:
    rotation matrix:
      -0.695  0.344 -0.632
       0.007 -0.875 -0.483
      -0.719 -0.340  0.606

Questions:
• What do internal error codes 4011 and 4012 refer to?
• Is there something specific about Area mode captures that requires preprocessing before they’re compatible with PhotogrammetrySession?
• Has anyone successfully reconstructed a model from an Area mode session using the stock Apple tools?

NOTE: I can provide the folder of images for debugging if that would help!
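For reference, a sketch of a typical macOS-side invocation (the paths, detail level, and configuration values are placeholders, not taken from the post), in case the failure depends on how the session is driven rather than on the capture itself.

import RealityKit

// Sketch: reconstruct a model from an exported Images/ folder with PhotogrammetrySession.
// Input/output paths are placeholders.
func reconstruct() async throws {
    let input = URL(fileURLWithPath: "/path/to/Images", isDirectory: true)
    let output = URL(fileURLWithPath: "/path/to/model.usdz")

    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal
    configuration.sampleOrdering = .sequential   // Object Capture frames are captured in order

    let session = try PhotogrammetrySession(input: input, configuration: configuration)
    try session.process(requests: [.modelFile(url: output, detail: .medium)])

    for try await message in session.outputs {
        switch message {
        case .processingComplete:
            print("done")
        case .requestError(let request, let error):
            print("request \(request) failed: \(error)")
        case .requestProgress(_, let fraction):
            print("progress: \(fraction)")
        default:
            break
        }
    }
}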
1
2
972
Jul ’25
Losing display when zooming a UIView on Mac (Designed for iPad) while the iPad version works fine at the same zoom level
I have a UIView that displays lines, and I zoom in (scaling by 2 via the zoomScale of the scroll view containing the UIView). As I zoom in, on the Mac version (Designed for iPad) I lose the graphic after a certain number of zoom steps (the scroll view's maximumZoomScale is set to 10). To ensure that the lines are rendered correctly, I modify the contentScaleFactor on the UIView; otherwise, the lines appear pixelated. On the iPad (simulator and real device) I do not lose the graphic when zooming. So the Mac port of the UIView drawing does not behave like the iPad version. Everything else in the application works fine except for this important detail. I already submitted a feedback request (#FB16829106) with images showing the problem. I need a solution to this problem. Thanks.
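A sketch of the pattern described above, with a cap added as a hypothetical mitigation (the cap value is a guess, not a known fix): raising contentScaleFactor on every zoom step produces an ever larger backing store, which may be where the Mac (Designed for iPad) build gives up.

import UIKit

// Sketch: keep zoomed line drawing sharp by raising contentScaleFactor with the zoom,
// but cap the resulting backing-store scale. The cap is a hypothetical mitigation.
final class ZoomController: NSObject, UIScrollViewDelegate {
    let contentView: UIView
    private let maxBackingScale: CGFloat = 8 // assumption: limit backing-store growth

    init(contentView: UIView) {
        self.contentView = contentView
    }

    func viewForZooming(in scrollView: UIScrollView) -> UIView? { contentView }

    func scrollViewDidZoom(_ scrollView: UIScrollView) {
        let screenScale = contentView.traitCollection.displayScale
        let desired = screenScale * scrollView.zoomScale
        contentView.contentScaleFactor = min(desired, maxBackingScale)
        contentView.setNeedsDisplay() // redraw the lines at the new scale
    }
}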
1
0
106
Mar ’25
GameKit Matchmaking
Hi all, I'm having a variety of issues with GameKit matchmaking. On the simulator the matchmaking UI pops up and I can tap Quick Match, then I immediately get "Failed to find Players"; this is the same with a real Apple ID and a sandbox account. If I use real devices, the app at least discovers a match, but then none of the delegate methods for the match ever get called, and the logs are filled with "socket not connected" and various other errors. My questions are: Should matchmaking via Quick Match work in the simulator? I have seen tutorial videos of this working, but I can't seem to get it to work. How do people debug issues with Game Center / GameKit to find out why it's not able to connect? Many thanks in advance.
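For reference, here is a minimal sketch of the flow I believe is expected, the key details being that GKLocalPlayer must already be authenticated and that the match's delegate is set in didFind before dismissing the matchmaker. The class name and player counts are placeholders.

import GameKit
import UIKit

// Sketch: minimal Quick Match flow. Assumes GKLocalPlayer.local is already authenticated.
final class MatchmakingController: NSObject, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
    private(set) var match: GKMatch?

    func presentMatchmaker(from presenter: UIViewController) {
        let request = GKMatchRequest()
        request.minPlayers = 2
        request.maxPlayers = 2
        guard let vc = GKMatchmakerViewController(matchRequest: request) else { return }
        vc.matchmakerDelegate = self
        presenter.present(vc, animated: true)
    }

    // MARK: GKMatchmakerViewControllerDelegate
    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFind match: GKMatch) {
        viewController.dismiss(animated: true)
        match.delegate = self        // must be set before any data/state callbacks can arrive
        self.match = match
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        viewController.dismiss(animated: true)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController, didFailWithError error: Error) {
        viewController.dismiss(animated: true)
        print("matchmaking failed:", error)
    }

    // MARK: GKMatchDelegate
    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("player \(player.displayName) state:", state.rawValue)
    }

    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        print("received \(data.count) bytes from \(player.displayName)")
    }
}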
1
0
482
Feb ’25
RealityKit VideoMaterial renders pink on iOS 18
Our app is live, and it appears that since the iOS 18 update the VideoMaterial renders a pink/purple color instead of the video (picture attached). The audio is rendered properly. We found that it occurs on older devices: iPhone 11 and iPhone SE (2020). I've found this thread by Andy Jazz on Stack Overflow. Steps to Reproduce: Create a plane for the video screen. Apply a VideoMaterial using AVPlayerItem. Anchor the model entity to an ARImageAnchor. Expected Outcome: The video should play as a material on the plane in RealityKit. Actual Outcome: On iOS 18, the plane appears pink, indicating the VideoMaterial isn't applied. What I've Tried: Verified the video URL is correct. Checked that the AVPlayerItem and VideoMaterial are initialised correctly. Ensured the AVPlayer is playing the video. I also tried different formats (mov/mp4/m4v) and verified that the video's status is readyToPlay. Any suggestions?
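A minimal sketch of the setup being described, in case it helps others reproduce; the reference-image group and name, video path, and plane size are placeholders, not values from our app.

import ARKit
import AVFoundation
import RealityKit

// Sketch: a plane with a VideoMaterial anchored to a reference image.
// "ARImages"/"poster" and the video path are placeholders.
func makeVideoAnchor() -> AnchorEntity {
    let player = AVPlayer(url: URL(fileURLWithPath: "/path/to/video.mp4"))
    let material = VideoMaterial(avPlayer: player)

    let screen = ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.2),
                             materials: [material])

    let anchor = AnchorEntity(.image(group: "ARImages", name: "poster"))
    anchor.addChild(screen)
    player.play()
    return anchor
}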
1
0
213
Jun ’25
CGContext PDF/A intents
let dic : [AnyHashable:Any] = [
    kCGPDFXRegistryName: "http://www.color.org" as CFString,
    kCGPDFXOutputConditionIdentifier: "FOGRA43" as CFString,
    kCGPDFContextOutputIntent: "GTS_PDFX" as CFString,
    kCGPDFXOutputIntentSubtype: "GTS_PDFX" as CFString,
    kCGPDFContextCreateLinearizedPDF: "" as CFString,
    kCGPDFContextCreatePDFA: "" as CFString,
    kCGPDFContextAuthor: "Placeholder" as CFString,
    kCGPDFContextCreator: "Placeholder" as CFString
]

Hello, I would now like to export my PDFs as PDF/A. As far as I can tell, Core Graphics has the right options for this. Unfortunately, the documentation does not show what value is required for 'kCGPDFContextCreatePDFA' or 'kCGPDFContextCreateLinearizedPDF'. What I have already tried: GTS_PDFA1, PDF/A-1, true as CFString. (Above is my CFDictionary; the other keys such as ...Author work perfectly.) In the Finder you can see these two options, which I would also like to implement in my app. Thank you in advance!
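To my understanding (worth verifying against the CGPDFContext documentation, so treat this as an assumption rather than a confirmed answer), both keys expect a Boolean rather than a string. A sketch of the auxiliary dictionary under that assumption:

import CoreGraphics
import Foundation

// Sketch (assumption): kCGPDFContextCreatePDFA and kCGPDFContextCreateLinearizedPDF
// appear to expect Boolean values, not strings. Paths and author strings are placeholders.
func makePDFAContext(url: URL, mediaBox: CGRect) -> CGContext? {
    var box = mediaBox
    let auxiliaryInfo: [CFString: Any] = [
        kCGPDFContextCreatePDFA: true,                // assumption: CFBoolean, not a string
        kCGPDFContextCreateLinearizedPDF: true,       // assumption: CFBoolean, not a string
        kCGPDFContextAuthor: "Placeholder",
        kCGPDFContextCreator: "Placeholder"
    ]
    return CGContext(url as CFURL, mediaBox: &box, auxiliaryInfo as CFDictionary)
}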
1
0
180
Jun ’25
SKScene editor canvas gone
I've recently run into an issue in Xcode where the sks editor's preview canvas just vanishes for every project on my computer. I don't think it is an issue with my sks files because this works as expected on another computer with the same files, and when it happens it happens for ALL sks files in all projects. There used to be menu items to toggle the canvas and its settings, but those are now gone for me in sks files (they show up for swift files that have previews, however). Any idea what is going on here? How do I get the canvas back? I literally cannot get any work done on my primary computer because of this...
1
0
546
Dec ’25
Race conditions when changing CAMetalLayer.drawableSize?
Is the pseudocode below thread-safe? Imagine that the Main thread sets the CAMetalLayer's drawableSize to a new size meanwhile the rendering thread is in the middle of rendering into an existing MTLDrawable which does still have the old size. Is the change of metalLayer.drawableSize thread-safe in the sense that I can present an old MTLDrawable which has a different resolution than the current value of metalLayer.drawableSize? I assume that setting the drawableSize property informs Metal that the next MTLDrawable offered by the CAMetalLayer should have the new size, right? Is it valid to assume that "metalLayer.drawableSize = newSize" and "metalLayer.nextDrawable()" are internally synchronized, so it cannot happen that metalLayer.nextDrawable() would produce e.g. a MTLDrawable with the old width but with the new height (or a completely invalid resolution due to potential race conditions)?

func onWindowResized(newSize: CGSize) {
    // Called on the Main thread
    metalLayer.drawableSize = newSize
}

func onVsync(drawable: MTLDrawable) {
    // Called on a background rendering thread
    renderer.renderInto(drawable: drawable)
}
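Not an answer to the thread-safety question itself, but a sketch of the defensive pattern I would reach for if the answer turns out to be "not guaranteed": funnel both the size change and nextDrawable() through one lock so they can never interleave. The DrawableSource type is a placeholder of my own.

import QuartzCore
import Metal

// Sketch: serialize drawableSize updates and drawable acquisition with one lock,
// so a resize can never interleave with nextDrawable(). Purely defensive; it does
// not assume CAMetalLayer provides this guarantee on its own.
final class DrawableSource {
    private let layer: CAMetalLayer
    private let lock = NSLock()

    init(layer: CAMetalLayer) {
        self.layer = layer
    }

    // Called from the main thread on window resize.
    func updateSize(_ newSize: CGSize) {
        lock.lock(); defer { lock.unlock() }
        layer.drawableSize = newSize
    }

    // Called from the rendering thread each frame.
    func nextDrawable() -> CAMetalDrawable? {
        lock.lock(); defer { lock.unlock() }
        return layer.nextDrawable()
    }
}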
1
1
489
Dec ’25