We’re developing an iPad application that visualizes 2D and 3D building floor plans, including a mesh network of nodes that control lighting and climate. The node count ranges from 1,000 to 15,000.
We’re using SceneKit to dynamically render the floor plan and node mesh on an iPad 10th generation running iPadOS 18.3. While the core visualization works, we are experiencing significant performance degradation as the node count increases.
Specifically:
At 750–1,000 nodes, UI responsiveness noticeably declines.
At 2,000 nodes, navigating the floor plan becomes nearly unusable.
We attempted to optimize performance with a Geometric Pool algorithm, but the impact was minimal. Strangely, the same iPad handles 30,000+ 3D objects effortlessly when using Unity or Unreal Engine, raising the question of whether SceneKit may not be optimized for this scale.
Our questions:
Is SceneKit suitable for visualizing such large node counts, or are we hitting an inherent limitation of the framework?
Are there best practices or optimization techniques for SceneKit that we might be missing?
Should we consider a hybrid approach or fully transition to a different 3D engine for this use case?
We’ve attached a code sample (ContentView.swift) below demonstrating the issue. Any insights, suggestions, or experiences would be greatly appreciated!
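For concreteness, the kind of draw-call reduction we are evaluating next looks like this; a minimal sketch, where sharedGeometry and positions are placeholders for our node-mesh data:

import SceneKit

// Sketch: collapse thousands of identical static nodes into one draw call
// per material by flattening the subtree. `sharedGeometry` and `positions`
// stand in for our real node-mesh data.
func makeFlattenedNodeMesh(sharedGeometry: SCNGeometry,
                           positions: [SCNVector3]) -> SCNNode {
    let container = SCNNode()
    for position in positions {
        let node = SCNNode(geometry: sharedGeometry) // one shared SCNGeometry
        node.position = position
        container.addChildNode(node)
    }
    // flattenedClone() merges the children into a single node whose geometry
    // SceneKit can submit in far fewer draw calls.
    return container.flattenedClone()
}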
Hi
Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity
Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So, what's the idea here? Can we activate SSAO on visionOS?
Hello there
Currently, I’m attempting to create an interactive learning application with a 3D view. I’ve discovered the SceneKit framework, but I lack the knowledge needed to load, animate, and move objects. Could someone kindly suggest some good articles or tutorials on this topic?
Description:
I'm developing an AR effect using SceneKit and applying a transparent material to a face mesh. However, I'm facing an issue where the front faces of the mesh overlap each other, causing incorrect rendering.
Problem:
The front faces of the mesh overlap with each other when transparency is applied.
This causes areas like the cheeks to be visible through the nose, even though they should be occluded.
Expected Behavior: The material should behave as if it were opaque to itself—that is, overlapping front faces should be occluded properly, while still allowing transparency for background elements.
Actual Behavior: The mesh renders its own front faces incorrectly, making parts of the face visible through others when they should be blocked.
What I Have Tried:
testMaterial.writesToDepthBuffer = true
testMaterial.readsFromDepthBuffer = true
Question:
👉 How can I prevent SceneKit's transparent material from rendering overlapping front faces?
👉 Is there a way to force SceneKit to treat its own mesh as opaque for itself while still being transparent to the background?
👉 Does SceneKit support a proper depth pre-pass or an equivalent to Unity’s ZWrite shaders to solve this issue?
Attached screenshots demonstrate the problem visually. Any help would be greatly appreciated! 🚀
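One workaround I am experimenting with, a sketch rather than a confirmed fix: a manual depth pre-pass, where a depth-only clone of the mesh renders first so the transparent material is tested against the mesh's own depth (faceNode is a placeholder for the ARKit face mesh node):

// Sketch of a manual depth pre-pass. The clone writes only depth and draws
// before the transparent pass, so overlapping front faces occlude each other.
let depthPrepassNode = faceNode.clone()
depthPrepassNode.geometry = faceNode.geometry?.copy() as? SCNGeometry

let depthOnly = SCNMaterial()
depthOnly.colorBufferWriteMask = []        // write depth, no color
depthOnly.writesToDepthBuffer = true
depthPrepassNode.geometry?.materials = [depthOnly]
depthPrepassNode.renderingOrder = -1       // draw before the transparent pass
faceNode.parent?.addChildNode(depthPrepassNode)

testMaterial.writesToDepthBuffer = false   // transparent pass only tests depth
testMaterial.readsFromDepthBuffer = true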
We're developing an iOS application that integrates RoomCaptureSession with ARSCNView for room scanning. Our implementation differs from the standard RoomCaptureView because we need custom UI guidance with 3D dots placed in the scanning environment to guide users through the capture process.
Bug Description:
The application crashes when users attempt to scan multiple rooms or apartments in sequence. The crash specifically occurs with the following pattern:
User successfully scans first room with multiple hotspots (working correctly)
User stops scanning, moves to a new room
In the new room, first 1-2 hotspots work correctly
Application crashes when attempting to scan additional hotspots
Technical Details:
Error: SLAM Anchor assertion failure in SlamAnchor.cpp:37 : HasValidPose()
Crash occurs in Thread 27 with CAPIDetectionOutputFwdNode
Error suggests invalid positioning when placing AR anchors
Steps to Reproduce:
Start room scan
Complete multiple hotspot captures in first room
Stop scanning
Start new room scan
Capture 1-2 hotspots successfully
Attempt additional hotspot captures -> crashes
Attempted Solutions:
Implemented anchor cleanup between sessions
Added position validation before anchor placement
Implemented ARSession error handling
Added proper thread management for AR operations
Environment:
Device: iPhone 14 Pro (LiDAR equipped)
iOS Version: 18.1.1 (22B91)
Testing through TestFlight
Crash Log Details:
Exception Type: EXC_CRASH (SIGABRT)
Exception Codes: 0x0000000000000000, 0x0000000000000000
Exception Note: EXC_CORPSE_NOTIFY
Triggered by Thread: 27
Thread 27 Crashed:
0 libsystem_kernel.dylib 0x00000001f0cc91d4 __pthread_kill + 8
1 libsystem_pthread.dylib 0x0000000228e12ef8 pthread_kill + 268
2 libsystem_c.dylib 0x00000001a86bbad8 abort + 128
3 AppleCV3D 0x0000000234d71a28 cv3d::vio::capi::SlamAnchor::SlamAnchor
Question:
Is there a recommended approach for handling multiple room captures with custom ARSCNView integration? The standard RoomCaptureView implementation doesn't show this behavior, but we need the custom guidance functionality that ARSCNView provides.
Crash Log
Code and full crash logs can be provided if needed.
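For reference, the defensive check we added before anchor placement looks roughly like this; a sketch, with addHotspot being our own helper name:

import ARKit

// Sketch: refuse to place a hotspot anchor unless tracking is healthy and
// the pose is finite, to avoid handing SLAM an invalid pose.
func addHotspot(at transform: simd_float4x4, in session: ARSession) {
    guard let frame = session.currentFrame,
          case .normal = frame.camera.trackingState else { return }
    let t = transform.columns.3
    guard t.x.isFinite, t.y.isFinite, t.z.isFinite else { return }
    session.add(anchor: ARAnchor(name: "hotspot", transform: transform))
}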
I have used SceneKit for several years, but recently I have a problem where a scene with fewer than 50 nodes is only partially drawn (some nodes are drawn, some aren't), while scenes with more than 50 nodes always draw correctly. This seems to have happened since Swift concurrency was introduced. (With respect to concurrency, I had been using DispatchQueue successfully before then.)
Since all nodes (few or many) are constructed and implemented by the same functions, I'm baffled.
When I print the node hierarchy, all nodes are present whether there are few or many.
SceneView() has the [.rendersContinually] option selected. Every node created (few or many) has .opacity = 1.0 and .isHidden = false.
I haven't tried reverting the compiler version, as that is not a long-term solution, and I know the same code worked fine then.
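For what it's worth, the only concurrency-related change I can think to test is forcing all scene-graph mutations back onto the main thread; a sketch of what I mean:

import SceneKit

// Sketch: funnel scene-graph mutations through the main thread inside a
// transaction, in case nodes are currently being added from a background task.
func addNodes(_ nodes: [SCNNode], to parent: SCNNode) {
    DispatchQueue.main.async {
        SCNTransaction.begin()
        for node in nodes {
            parent.addChildNode(node)
        }
        SCNTransaction.commit()
    }
}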
Subject: Combining ARKit Face Tracking with High-Resolution AVCapture and Perspective Rendering on Front Camera
Message:
Hello Apple Developer Community,
We’re developing an application using the front camera that requires both real-time ARKit face tracking/guidance and the capture of high-resolution still images via AVCaptureSession. Our goal is to leverage ARKit’s depth and face data to render a captured image from another perspective post-capture, maintaining high image quality.
Our Approach:
Real-Time ARKit Guidance:
Utilize ARKit (e.g., ARFaceTrackingConfiguration) for continuous face tracking, depth, and scene understanding to guide the user in real time.
High-Resolution Capture Transition:
At the moment of capture, we plan to pause the ARKit session and switch to an AVCaptureSession to take a high-resolution image.
We assume that for a front-facing image, the subject’s face is directly front-on, and the relative pose between the face and camera remains the same during the transition. The only variation we expect is a change in distance.
Our intention is to minimize the delay between the last ARKit frame and the high-res capture to maintain temporal consistency, assuming that aside from distance, the face-camera relative pose remains unchanged.
Post-Processing Perspective Rendering:
Using the last ARKit face data (depth, pose, and landmarks) along with the high-resolution 2D image, we aim to render the scene from another perspective.
We want to correct the perspective of the 2D image using SceneKit or RealityKit, leveraging the collected ARKit scene information to achieve a natural, high-quality rendering from a different viewpoint.
The rendering should match the quality of a normally captured high-resolution image, adjusting for the difference in distance while using the stored ARKit data to correct perspective.
Our Questions:
Session Transition Best Practices:
What are the recommended best practices to seamlessly pause ARKit and switch to a high-resolution AVCapture session on the front camera?
How can we minimize user movement or other issues during this brief transition, given our assumption that the face-camera pose remains largely consistent except for distance changes?
Data Integration for Perspective Rendering:
How can we effectively integrate stored ARKit face, depth, and pose data with the high-res image to perform accurate perspective correction or rendering from another viewpoint?
Given that we assume the relative pose is constant except for distance, are there strategies or APIs to leverage this assumption for simplifying the perspective transformation?
Perspective Correction with SceneKit/RealityKit:
What techniques or workflows using SceneKit or RealityKit are recommended for correcting the perspective of a captured 2D image based on ARKit scene data?
How can we use these frameworks to render the high-resolution image from an alternative perspective, while maintaining image quality and fidelity?
Pitfalls and Guidelines:
What common pitfalls should we be aware of when combining ARKit tracking data with high-res capture and post-processing for perspective rendering?
Are there performance considerations, recommended thresholds for acceptable temporal consistency, or validation techniques to ensure the ARKit data remains applicable at the moment of high-res capture?
We appreciate any advice, sample code references, or documentation pointers that could assist us in implementing this workflow effectively.
Thank you!
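For concreteness, the transition step we have in mind looks roughly like this; a sketch under our assumptions, with all names illustrative and error handling elided:

import ARKit
import AVFoundation

// Sketch of the ARKit-to-AVCapture handoff: remember the last face anchor,
// pause ARKit, then bring up a photo-preset session on the front TrueDepth
// camera and take one high-resolution capture.
final class CaptureCoordinator: NSObject, AVCapturePhotoCaptureDelegate {
    let arSession = ARSession()
    let captureSession = AVCaptureSession()
    let photoOutput = AVCapturePhotoOutput()
    private(set) var lastFaceAnchor: ARFaceAnchor?

    func captureHighResolutionImage() {
        // Keep the last ARKit face data for the later perspective correction.
        lastFaceAnchor = arSession.currentFrame?.anchors
            .compactMap { $0 as? ARFaceAnchor }.first
        arSession.pause()

        captureSession.beginConfiguration()
        captureSession.sessionPreset = .photo
        if let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                for: .video, position: .front),
           let input = try? AVCaptureDeviceInput(device: device),
           captureSession.canAddInput(input) {
            captureSession.addInput(input)
        }
        if captureSession.canAddOutput(photoOutput) {
            captureSession.addOutput(photoOutput)
        }
        captureSession.commitConfiguration()
        captureSession.startRunning()
        photoOutput.capturePhoto(with: AVCapturePhotoSettings(), delegate: self)
    }

    func photoOutput(_ output: AVCapturePhotoOutput,
                     didFinishProcessingPhoto photo: AVCapturePhoto,
                     error: Error?) {
        // Combine photo.fileDataRepresentation() with lastFaceAnchor here.
    }
}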
I'm looking for sample code for a 3D wireframe (with lines) and polygons, with the ability to rotate (set camera angles).
The sample code I tried seems complicated, and I'm getting a BLANK screen:
import SwiftUI
import SceneKit

struct SceneKitTest2: View {
    var body: some View {
        VStack {
            Text("SceneKitTest2")
            SceneView(scene: SCNScene(named: "Earth_1_12756.scn"),
                      options: [.autoenablesDefaultLighting, .allowsCameraControl])
                .frame(width: UIScreen.main.bounds.width,
                       height: UIScreen.main.bounds.height / 2)
            Spacer(minLength: 0)
        }
    }
}
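For the wireframe part specifically, here is a minimal sketch that builds line geometry entirely in code, so a missing .scn asset cannot cause a blank screen (note that SCNScene(named:) returns nil silently when the file isn't in the bundle):

import SceneKit

// Sketch: a square outline built from .line primitives, plus a camera.
func makeWireframeScene() -> SCNScene {
    let scene = SCNScene()
    let vertices: [SCNVector3] = [
        SCNVector3(-1, -1, 0), SCNVector3(1, -1, 0),
        SCNVector3(1, 1, 0), SCNVector3(-1, 1, 0)
    ]
    let indices: [Int32] = [0, 1, 1, 2, 2, 3, 3, 0] // pairs of line endpoints
    let source = SCNGeometrySource(vertices: vertices)
    let element = SCNGeometryElement(indices: indices, primitiveType: .line)
    let wireframe = SCNNode(geometry: SCNGeometry(sources: [source],
                                                  elements: [element]))
    scene.rootNode.addChildNode(wireframe)

    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    cameraNode.position = SCNVector3(0, 0, 5)
    scene.rootNode.addChildNode(cameraNode)
    return scene
}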
This is my code in ContentView:
import SwiftUI
import SceneKit
import PlaygroundSupport

struct ContentView: View {
    var body: some View {
        VStack {
            Text("SceneKit with SwiftUI")
                .font(.headline)
                .padding()
            SceneView(
                scene: loadScene(),
                options: [.autoenablesDefaultLighting, .allowsCameraControl]
            )
            .frame(width: 400, height: 400)
            .border(Color.gray, width: 1)
        }
    }
}

func loadScene() -> SCNScene? {
    if let fileURL = Bundle.main.url(forResource: "a", withExtension: "dae") {
        do {
            let scene = try SCNScene(url: fileURL, options: [
                SCNSceneSource.LoadingOption.checkConsistency: true
            ])
            print("Scene loaded successfully.")
            return scene
        } catch {
            print("Error loading scene: \(error.localizedDescription)")
        }
    } else {
        print("Error: Unable to locate a.dae in Resources.")
    }
    return nil
}
The a.dae file exists in the Resources section of the macOS Playground app, and a.dae can be viewed in Xcode.
Console shows: Error loading scene: The operation couldn’t be completed. (Foundation._GenericObjCError error 0.)
Any input is appreciated.
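One thing worth checking: when a .dae is part of an Xcode app target, the build step compiles it with scntool, but a Playground's Resources folder skips that step, which may explain the load failure. A hedged workaround is to convert the file once and load the resulting .scn instead; the scntool invocation below is from memory, so treat it as an assumption to verify locally:

// Convert once in Terminal (invocation is an assumption, verify locally):
//   xcrun scntool --convert a.dae --format scn --output a.scn
// Then load the compiled scene in the playground:
if let url = Bundle.main.url(forResource: "a", withExtension: "scn"),
   let scene = try? SCNScene(url: url) {
    print("Compiled scene loaded successfully.")
}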
Up to now I have created multiple new SCNNodes using an instance of SCNGeometry, and it was OK that they all had the same appearance. Now I want variety, and when I make a copy of that instance using:
let newGeo = myGeoInstance.copy() as! SCNGeometry
(the force cast is needed because copy() returns Any)
all elements are verified present. :-)
Likewise:
node.geometry?.replaceMaterial(at: index, with: myNewMaterial)
is verified to correctly change the material(s) at the correct index(es). The only problem is that the modified "teapot" is not visible, and yes, I have set node.isHidden = false.
Has anyone experienced this?
In the old days reversing the verts was a solution. In desperation I tried that. |-(
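One possibility worth checking, as a sketch rather than a confirmed fix: copy() is shallow, so the copied geometry can still share its material objects with the original. Deep-copying the materials before replacing one keeps each instance independent:

// Sketch: give the copied geometry its own material instances before
// replacing one, so edits can't interact with the shared originals.
let newGeo = myGeoInstance.copy() as! SCNGeometry
newGeo.materials = myGeoInstance.materials.map { $0.copy() as! SCNMaterial }
newGeo.replaceMaterial(at: index, with: myNewMaterial)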
The Actions Editor doesn't seem to work in Xcode 16.1.
With a node selected, when I try to drag an action from the Library into the Actions panel nothing happens, the action icon just disappears. Clicking the '+' button to create a new action doesn't work either.
Actions do work when created in code, though.
Is this a bug, or am I missing something?
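For reference, the in-code equivalent I'm using as a workaround looks like this (the fade values are just an example):

// Workaround: build the action in code instead of the Actions Editor.
let pulse = SCNAction.sequence([
    .fadeOpacity(to: 0.3, duration: 0.5),
    .fadeOpacity(to: 1.0, duration: 0.5)
])
node.runAction(.repeatForever(pulse))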
I would like to implement zoom functionality in my SceneKit game: when the user performs the pinch gesture on a point on the screen, the scene zooms in to make that point larger.
Until now I simply changed SCNCamera.focalLength, but this simply zooms in to the center of what is currently visible on screen. Is it somehow possible to implement the zoom functionality described above by perhaps interactively rotating the camera at the same time towards the pinched point? Is there a formula for this? I would like to avoid suddenly rotating the camera to face the pinched point when the pinch gesture begins and then zoom in while the pinch is in progress.
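In case it helps to make the question concrete, here is the kind of per-update step I'm imagining; a sketch, where the blend factor is a guess rather than a known formula:

import SceneKit

// Sketch: on each pinch update, narrow the field of view and rotate the
// camera a matching fraction of the way toward the pinched world point, so
// the rotation is spread over the zoom instead of snapping at the start.
func zoom(cameraNode: SCNNode, towards worldPoint: SCNVector3, pinchScale: CGFloat) {
    guard let camera = cameraNode.camera else { return }
    let oldFOV = camera.fieldOfView
    let newFOV = max(5, oldFOV / pinchScale)
    camera.fieldOfView = newFOV

    // Full rotation that would aim the camera straight at the point...
    let target = simd_normalize(SIMD3<Float>(Float(worldPoint.x),
                                             Float(worldPoint.y),
                                             Float(worldPoint.z))
                                - cameraNode.simdWorldPosition)
    let full = simd_quatf(from: cameraNode.simdWorldFront, to: target)
    // ...applied only in proportion to this update's zoom increment.
    let fraction = Float(max(0, 1 - newFOV / oldFOV))
    let step = simd_slerp(simd_quatf(angle: 0, axis: SIMD3<Float>(0, 1, 0)),
                          full, fraction)
    cameraNode.simdWorldOrientation = step * cameraNode.simdWorldOrientation
}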
I am trying to extract some built-in and custom render passes from SceneKit, so that I can pass them into a metal pipeline and do some additional work with them.
I have a metal viewport, and have instantiated a SCNRenderer so that I can render a SCNScene using SceneKit to a texture as part of my metal draw pass. This works as expected.
Now I want to output multiple textures from the SceneKit render, not just the final color. I want to extract Depth, Normal, Lighting, Colour and a custom SCNTechnique for world position.
I can easily use a SCNTechnique to render one of these to the color output, but it's not clear how I would render multiple passes in one render call.
Is there some way to pass a writeable buffer/texture to a SCNTechnique, so that I can populate it in my SCNTechnique shader at render time with the output from the pass? Similar to how one would bind a buffer for a metal shader. SCNTechnique obfuscates things, so it's not clear how to proceed.
Does anyone have any ideas?
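In case a concrete starting point helps, this is the shape of the multi-pass technique dictionary I'm experimenting with; a sketch, where the Metal shader function names are placeholders from my own bundle:

import SceneKit

// Sketch: one pass renders the scene into named color/depth targets; a
// second full-screen pass samples them. "combine_vertex"/"combine_fragment"
// are placeholder Metal functions.
let techniqueDict: [String: Any] = [
    "passes": [
        "scenePass": [
            "draw": "DRAW_SCENE",
            "outputs": ["color": "sceneColor", "depth": "sceneDepth"]
        ],
        "combinePass": [
            "draw": "DRAW_QUAD",
            "metalVertexShader": "combine_vertex",
            "metalFragmentShader": "combine_fragment",
            "inputs": ["colorSampler": "sceneColor", "depthSampler": "sceneDepth"],
            "outputs": ["color": "COLOR"]
        ]
    ],
    "targets": [
        "sceneColor": ["type": "color"],
        "sceneDepth": ["type": "depth"]
    ],
    "sequence": ["scenePass", "combinePass"]
]
let technique = SCNTechnique(dictionary: techniqueDict)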
When running the sample code below, every 3 seconds the middle sprite is replaced by a new one. When this happens, a flicker is noticeable most of the time. When recording the screen and stepping through the recording frame by frame, I noticed that the flicker is caused by a temporary reordering of the nodes. Below are two screenshots of two consecutive frames where the reordering is clearly visible.
This only happens for a SpriteKit scene used as an overlay for a SceneKit scene. Commenting out
buttons.zPosition = 1
or avoiding the fade in/out animations solves the issue.
I have created FB15945016.
import SceneKit
import SpriteKit

class GameViewController: NSViewController {
    let overlay = SKScene()
    var buttons: SKNode!
    var previousButton: SKSpriteNode!
    var nextButton: SKSpriteNode!
    var pageContainer: SKNode!
    var pageViews = [SKNode]()
    var page = 0

    override func viewDidLoad() {
        super.viewDidLoad()
        let scene = SCNScene(named: "art.scnassets/ship.scn")!
        let scnView = self.view as! SCNView
        scnView.scene = scene
        overlay.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        scnView.overlaySKScene = overlay
        buttons = SKNode()
        buttons.zPosition = 1
        overlay.addChild(buttons)
        previousButton = SKSpriteNode(systemImage: "arrow.uturn.backward.circle")
        previousButton.position = CGPoint(x: -100, y: 0)
        buttons.addChild(previousButton)
        nextButton = SKSpriteNode(systemImage: "arrow.uturn.forward.circle")
        nextButton.position = CGPoint(x: 100, y: 0)
        buttons.addChild(nextButton)
        pageContainer = SKNode()
        pageViews = [SKSpriteNode(systemImage: "square.and.arrow.up"), SKSpriteNode(systemImage: "eraser")]
        overlay.addChild(pageContainer)
        setPage(0)
        Timer.scheduledTimer(withTimeInterval: 3, repeats: true) { [self] _ in
            setPage((page + 1) % 2)
        }
    }

    func setPage(_ page: Int) {
        pageViews[self.page].run(.sequence([
            .fadeOut(withDuration: 0.2),
            .removeFromParent()
        ]), withKey: "fade")
        self.page = page
        let pageView = pageViews[page]
        pageView.alpha = 0
        pageView.run(.fadeIn(withDuration: 0.2), withKey: "fade")
        pageContainer.addChild(pageView)
    }

    override func viewDidLayout() {
        overlay.size = view.frame.size
    }
}

extension SKSpriteNode {
    public convenience init(systemImage: String) {
        self.init()
        let width = 100.0
        let image = NSImage(systemSymbolName: systemImage, accessibilityDescription: nil)!.withSymbolConfiguration(.init(hierarchicalColor: NSColor.black))!
        let scale = NSScreen.main!.backingScaleFactor
        image.size = CGSize(width: width * scale, height: width / image.size.width * image.size.height * scale)
        texture = SKTexture(image: image)
        size = CGSize(width: width, height: width / image.size.width * image.size.height)
    }
}
I am working on a SceneKit project where I use a CAShapeLayer as the content for SCNMaterial's diffuse.contents to display a progress bar. Here's my initial code:
func setupProgressWithCAShapeLayer() {
    let progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async {
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            progressLayer.strokeEnd = progress // Update progress
        }
    }
}

// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    progressBarPlane = SCNPlane(width: 0.2, height: 0.2)
    setupProgressWithCAShapeLayer()
    let planeNode = SCNNode(geometry: progressBarPlane)
    planeNode.position = SCNVector3(x: 0, y: 0.2, z: 0)
    node.addChildNode(planeNode)
}
This works fine, and the progress bar updates smoothly. However, when I change the code to use a class property (self.progressLayer) instead of a local variable, the rendering starts flickering on the screen:
func setupProgressWithCAShapeLayer() {
    self.progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async { [weak self] in
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            self?.progressLayer?.strokeEnd = progress // Update progress
        }
    }
}
After this change, the progressBarPlane in SceneKit starts flickering while being rendered on the screen.
My Question:
Why does switching from a local variable (progressLayer) to a class property (self.progressLayer) cause the flickering issue in SceneKit rendering?
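A mitigation I'm testing inside the timer callback, based on an assumption about the cause rather than a confirmed diagnosis: disabling implicit layer animations around the update, in case the property-held layer is picking up implicit CAAnimations that the local-variable version somehow avoided:

// Sketch: mutate the layer with implicit animations disabled, so SceneKit
// isn't re-compositing a mid-flight animation each frame.
CATransaction.begin()
CATransaction.setDisableActions(true)
self?.progressLayer?.strokeEnd = progress
CATransaction.commit()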
I want to apply a SCNTechnique pipeline to the camera feed. To achieve this, I want to bring the camera input into the SceneKit world.
The perfect API seems to be:
let captureDevice = …
scnScene.background.contents = captureDevice
This is demonstrated in "SceneKit: What's New" (WWDC17) (at 44m19s) and is mentioned in the documentation of SCNMaterialProperty's contents.
Instead of showing camera feed, it crashes with these messages:
*** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[AVCaptureVideoDataOutput setVideoSettings:] Unsupported pixel format type - use -availableVideoCVPixelFormatTypes'
*** First throw call stack:
(0x18993c7cc <REDACTED> 0x211e18488)
libc++abi: terminating due to uncaught exception of type NSException
Please advise.
STEPS TO REPRODUCE
Create a new Xcode project, starting from the SceneKit game template.
Add Info.plist entry for NSCameraUsageDescription.
Add a capture device property to GameViewController:
class GameViewController: UIViewController {
    let captureDevice = AVCaptureDevice.default(for: .video)
Set the background contents:
scene.background.contents = captureDevice
Run the app on device.
PLATFORM AND VERSION
iOS
Development environment: Xcode 16.1, macOS 15.0.1. Run-time configuration: iOS 18.1
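The workaround I'm falling back to while this crashes (a sketch; I'd prefer the one-liner to work): run my own video data output with a supported pixel format, wrap each frame in a Metal texture, and assign that to the scene background:

import AVFoundation
import SceneKit

// Sketch: capture BGRA frames ourselves, wrap each CVPixelBuffer in a Metal
// texture via CVMetalTextureCache, and assign it to the scene background.
final class CameraBackground: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    weak var scene: SCNScene?
    private var textureCache: CVMetalTextureCache?

    func start(device mtlDevice: MTLDevice) {
        CVMetalTextureCacheCreate(nil, nil, mtlDevice, nil, &textureCache)
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera) else { return }
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "camera.feed"))
        session.addInput(input)
        session.addOutput(output)
        session.startRunning()
    }

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        guard let cache = textureCache,
              let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        var cvTexture: CVMetalTexture?
        CVMetalTextureCacheCreateTextureFromImage(
            nil, cache, pixelBuffer, nil, .bgra8Unorm,
            CVPixelBufferGetWidth(pixelBuffer),
            CVPixelBufferGetHeight(pixelBuffer), 0, &cvTexture)
        guard let cvTexture, let texture = CVMetalTextureGetTexture(cvTexture) else { return }
        DispatchQueue.main.async {
            self.scene?.background.contents = texture
        }
    }
}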
I would like to preload and use some images for both SpriteKit and SceneKit models (my game uses SceneKit with a SpriteKit overlay), and as far as I can see the only efficient way would be to create and preload SKTexture objects which can be supplied to SKSpriteNode(texture:) and SCNMaterial.diffuse.contents.
The problem is that SKTextures are rendered too bright in SceneKit, for some unknown reason. Here is a comparison between rendering an image (from a URL) and an SKTexture:
And the code that produces it:
let url = Bundle.main.url(forResource: "art.scnassets/texture.png", withExtension: nil)!
let plane1 = SCNPlane(width: 10, height: 10)
plane1.firstMaterial!.diffuse.contents = url.path
let node1 = SCNNode(geometry: plane1)
node1.position.x = -5
scene.rootNode.addChildNode(node1)
let plane2 = SCNPlane(width: 10, height: 10)
plane2.firstMaterial!.diffuse.contents = SKTexture(image: NSImage(byReferencing: url))
let node2 = SCNNode(geometry: plane2)
node2.position.x = 5
scene.rootNode.addChildNode(node2)
This issue was already mentioned in this other post, but since I wasn't notified of the reply from Quinn asking for the feedback number I had created at the time, it never made any progress.
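The workaround I'm using in the meantime, as a sketch: preload CGImages once and share them between the two frameworks, letting SceneKit color-manage the CGImage while SpriteKit wraps the same image in an SKTexture:

// Sketch: share one CGImage between both frameworks instead of one SKTexture.
let image = NSImage(byReferencing: url)
let cgImage = image.cgImage(forProposedRect: nil, context: nil, hints: nil)!

plane2.firstMaterial!.diffuse.contents = cgImage                // SceneKit
let sprite = SKSpriteNode(texture: SKTexture(cgImage: cgImage)) // SpriteKit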
I've been running my SceneKit game for many weeks in Xcode without performance issues. The game itself is finished, so I thought I could go on with publishing it on the App Store, but when archiving it in Xcode and running the archived app, I noticed that it seriously hangs.
The hangs only seem to happen when I run the game in fullscreen mode. I tried disabling game mode, but the hangs still happen. Only when I run in windowed mode the game runs smoothly.
Instruments confirms that there are many serious hangs, but it also reports that CPU usage is quite low during those hangs, on average about 15%. From what I know, hangs happen when the main thread is busy, but how can that be when CPU usage is so low, and why does it only happen in fullscreen mode for release builds?
In the simplest case I can come up with, I create a scene (either fully or partially in code) with a single dynamic body, located slightly away from the origin.
I give the body a charge and add an electric field to the node. The body does nothing (as expected, since it's the source of the field).
However, if I replace that field with a custom field (which does nothing except report back the passed-in position value), the position reported is the location of the body in the local space of its parent (in this case, the root node) rather than in the space of the node the field is attached to (i.e. itself).
I've attached the code customising the SwiftUI app template. Hopefully someone can tell me what I'm doing wrong?
ContentView customisation…
struct ContentView: View
{
    var body: some View
    {
        SceneView(scene: ElectricScene(), options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}
And the code to create the scene…
import Foundation
import SceneKit

class ElectricScene: SCNScene
{
    override init()
    {
        super.init()
        physicsWorld.gravity = SCNVector3(0, 0, 0)

        let cameraNode = SCNNode()
        cameraNode.camera = SCNCamera()
        cameraNode.position = SCNVector3(0, 0, 10)
        rootNode.addChildNode(cameraNode)

        let ballNode = SCNNode(geometry: SCNSphere(radius: 0.5))
        ballNode.position = SCNVector3(2, 0, 0)
        ballNode.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
        ballNode.physicsBody?.charge = -1
        rootNode.addChildNode(ballNode)

//        ballNode.physicsField = SCNPhysicsField.electric()
        ballNode.physicsField = SCNPhysicsField
            .customField { position, _, _, _, _ in
                print(position)
                return SCNVector3Zero
            }
    }

    @available(*, unavailable)
    required init?(coder: NSCoder)
    {
        fatalError("init(coder:) has not been implemented")
    }
}
This (repeatedly) prints out the following…
SCNVector3(x: 2.0, y: 0.0, z: 0.0)
…which is the position of the node relative to the root node, rather than relative to the source of the field (itself).
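The workaround I'm considering (a sketch; I'd still like to know whether the reported behavior is expected) is converting the evaluator's position into the field node's own space by hand:

// Sketch: convert the passed-in position (apparently in the parent/world
// space) into the field node's own space. `presentation` is used because
// physics runs against the presentation tree; the weak capture avoids a
// retain cycle between the node and its field.
ballNode.physicsField = SCNPhysicsField
    .customField { [weak ballNode] position, _, _, _, _ in
        guard let ballNode else { return SCNVector3Zero }
        let local = ballNode.presentation.convertPosition(position, from: nil)
        print(local)
        return SCNVector3Zero
    }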
Even when the action is run on the main thread, the following code causes a crash on iOS, but not on macOS. The game launches with a simple yellow rectangle, and when it finishes fading out and should be removed from the overlay scene, the app crashes.
The code can be pasted into the file GameController.swift of Xcode's default project for Multiplatform macOS and iOS game.
import SceneKit
import SpriteKit

@MainActor
class GameController: NSObject {
    let scene: SCNScene
    let sceneRenderer: SCNSceneRenderer

    init(sceneRenderer renderer: SCNSceneRenderer) {
        sceneRenderer = renderer
        scene = SCNScene(named: "Art.scnassets/ship.scn")!
        super.init()
        sceneRenderer.scene = scene
        renderer.overlaySKScene = SKScene(size: CGSize(width: 500, height: 500))
        DispatchQueue.main.async {
            let node = SKShapeNode(rect: CGRect(x: 100, y: 100, width: 100, height: 100))
            node.fillColor = .yellow
            node.run(.sequence([
                .fadeOut(withDuration: 1),
                .removeFromParent()
            ]))
            renderer.overlaySKScene!.addChild(node)
        }
    }
}
The Xcode console shows this stacktrace:
*** Assertion failure in -[UIApplication _performAfterCATransactionCommitsWithLegacyRunloopObserverBasedTiming:block:], UIApplication.m:3246
*** Terminating app due to uncaught exception 'NSInternalInconsistencyException', reason: 'Call must be made on main thread'
*** First throw call stack:
(
0 CoreFoundation 0x00000001804ae0f8 __exceptionPreprocess + 172
1 libobjc.A.dylib 0x0000000180087db4 objc_exception_throw + 56
2 Foundation 0x0000000180d17058 _userInfoForFileAndLine + 0
3 UIKitCore 0x00000001853cf678 -[UIApplication _performAfterCATransactionCommitsWithLegacyRunloopObserverBasedTiming:block:] + 376
4 UIKitCore 0x000000018553f7a0 -[_UIFocusUpdateThrottle scheduleProgrammaticFocusUpdate] + 300
5 UIKitCore 0x0000000184e2e22c -[UIFocusSystem _requestFocusUpdate:] + 548
6 UIKitCore 0x0000000184e2dfa4 -[UIFocusSystem requestFocusUpdateToEnvironment:] + 76
7 UIKitCore 0x0000000184e2e864 -[UIFocusSystem _focusEnvironmentWillDisappear:] + 408
8 SpriteKit 0x00000001a3d472f4 _ZL12_removeChildP6SKNodeS0_P7SKScene + 240
9 SpriteKit 0x00000001a3d473b0 -[SKNode removeChild:] + 80
10 SpriteKit 0x00000001a3d466b8 -[SKNode removeFromParent] + 128
11 SpriteKit 0x00000001a3d1678c -[SKRemove updateWithTarget:forTime:] + 64
12 SpriteKit 0x00000001a3d1b740 _ZN11SKCSequence27cpp_updateWithTargetForTimeEP7SKCNoded + 84
13 SpriteKit 0x00000001a3d20e3c _ZN7SKCNode6updateEdf + 156
14 SpriteKit 0x00000001a3d20f20 _ZN7SKCNode6updateEdf + 384
15 SpriteKit 0x00000001a3d26fb8 -[SKScene _update:] + 464
16 SpriteKit 0x00000001a3cf3168 -[SKSCNRenderer _update:] + 80
17 SceneKit 0x000000019c932bf0 -[SCNMTLRenderContext renderSKSceneWithRenderer:overlay:atTime:] + 60
18 SceneKit 0x000000019c9ebd98 -[SCNRenderer _drawOverlaySceneAtTime:] + 204
19 SceneKit 0x000000019cb1a1c0 _ZN3C3D11OverlayPass7executeERKNS_10RenderArgsE + 60
20 SceneKit 0x000000019c8e05ec _ZN3C3D13__renderSliceEPNS_11RenderGraphEPNS_10RenderPassERtRKNS0_9GraphNodeERPNS0_5StageENS_10RenderArgsEbRPU27objcproto16MTLCommandBuffer11objc_object + 2660
21 SceneKit 0x000000019c8e18ac _ZN3C3D11RenderGraph7executeEv + 3808
22 SceneKit 0x000000019c9ed26c -[SCNRenderer _renderSceneWithEngineContext:sceneTime:] + 756
23 SceneKit 0x000000019c9ed544 -[SCNRenderer _drawSceneWithNewRenderer:] + 208
24 SceneKit 0x000000019c9ed9fc -[SCNRenderer _drawScene:] + 40
25 SceneKit 0x000000019c9edce4 -[SCNRenderer _drawAtTime:] + 500
26 SceneKit 0x000000019ca87950 -[SCNView _drawAtTime:] + 368
27 SceneKit 0x000000019c943b74 __83-[NSObject(SCN_DisplayLinkExtensions) SCN_setupDisplayLinkWithQueue:screen:policy:]_block_invoke + 44
28 SceneKit 0x000000019ca50600 -[SCNDisplayLink _displayLinkCallbackReturningImmediately] + 132
29 libdispatch.dylib 0x000000010239173c _dispatch_client_callout + 16
30 libdispatch.dylib 0x0000000102394c14 _dispatch_continuation_pop + 756
31 libdispatch.dylib 0x00000001023aa4e0 _dispatch_source_invoke + 1736
32 libdispatch.dylib 0x00000001023997f0 _dispatch_lane_serial_drain + 340
33 libdispatch.dylib 0x000000010239a774 _dispatch_lane_invoke + 420
34 libdispatch.dylib 0x00000001023a71a8 _dispatch_root_queue_drain_deferred_wlh + 324
35 libdispatch.dylib 0x00000001023a6604 _dispatch_workloop_worker_thread + 488
36 libsystem_pthread.dylib 0x000000010242bb74 _pthread_wqthread + 284
37 libsystem_pthread.dylib 0x000000010242a934 start_wqthread + 8
)
libc++abi: terminating due to uncaught exception of type NSException
Am I doing something wrong?
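The workaround I'm currently using (a sketch; I don't know whether relying on it is safe) hops to the main thread for the removal instead of using the .removeFromParent() action:

// Sketch: fade with an action, but perform the removal explicitly on the
// main thread, since the stack trace shows UIKit focus-system calls being
// made from SceneKit's render queue during the action-driven removal.
node.run(.fadeOut(withDuration: 1)) {
    DispatchQueue.main.async {
        node.removeFromParent()
    }
}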