SceneKit

RSS for tag

Create 3D games and add 3D content to apps using high-level scene descriptions with SceneKit.

SceneKit Documentation

Posts under SceneKit tag

120 Posts
Post not yet marked as solved
0 Replies
147 Views
If you use SceneKit with ARKit, the AR scene uses the SceneKit renderer. If you then use SCNScene.write() to create a USDZ file and open that file with AR Quick Look, AR Quick Look renders the scene with the RealityKit renderer. The in-app ARKit -> USDZ -> AR Quick Look pipeline therefore involves two different renderers, which can produce different appearances. Has anyone seen similar problems with SceneKit -> AR Quick Look rendering? I am using such a pipeline with PBR lighting and have observed that the resulting differences in material properties are large. (The geometries are fine.) I have had to compensate by recreating the SCNScene materials with modified properties. That greatly improves the agreement between the app scene and the AR Quick Look scene, but it is unfortunately still not acceptable for critical evaluation of commercial products in interior design.
Posted
by z3wind.
Last updated
.
Post not yet marked as solved
0 Replies
180 Views
Hello, I am using YOLOv3 with Vision to classify objects during my AR session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have the wrong aspect ratio. Does someone know what the issue might be? This is how I am currently transforming the bounding boxes. Assumptions: the app is in portrait mode, and the Vision request is performed with centerCrop and orientation .right.

Fix the coordinate origin of Vision:

let newY = 1 - boundingBox.origin.y
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)

Undo the center cropping of Vision:

let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0, y: 1 - (imageWidth/imageHeight + ((imageHeight-imageWidth) / (imageHeight*2))), width: 1, height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)

Bring the coordinates back to normalized form:

let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)

Transform to the scene view (I assume the error is here; I found out while debugging that the aspect ratio of the bounding box changes at this step):

let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)

Scale up to the viewport size:

let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)

Thanks in advance for any help!
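For comparison, here is a compact sketch of the path without a region of interest (i.e., if the request were run without centerCrop), since displayTransform can take care of most of the rotation and aspect-fill handling. Note that for a portrait interface the orientation argument would normally be .portrait; the code above passes .landscapeRight, which could itself distort the aspect ratio. All names here are hypothetical stand-ins:

```swift
import ARKit
import Vision

// A minimal sketch, assuming no center cropping. Vision's boundingBox is
// normalized with a bottom-left origin; ARFrame.displayTransform expects
// normalized image coordinates with a top-left origin.
func viewRect(for boundingBox: CGRect,
              frame: ARFrame,
              viewportSize: CGSize) -> CGRect {
    // Flip Vision's y-axis to a top-left origin.
    let flipped = CGRect(x: boundingBox.origin.x,
                         y: 1 - boundingBox.origin.y - boundingBox.height,
                         width: boundingBox.width,
                         height: boundingBox.height)
    // displayTransform handles rotation and aspect-fill cropping for the
    // given interface orientation and viewport.
    let toViewport = frame.displayTransform(for: .portrait,
                                            viewportSize: viewportSize)
    let normalized = flipped.applying(toViewport)
    // Scale from normalized viewport coordinates up to points.
    return normalized.applying(CGAffineTransform(scaleX: viewportSize.width,
                                                 y: viewportSize.height))
}
```

With centerCrop in play, the ROI un-cropping step from the post would still need to run before the flip.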
Posted Last updated
.
Post not yet marked as solved
16 Replies
9k Views
Unlike previous years, there are no sessions, or anything, for SceneKit. Are we supposed to go to Unity and (ugh) C#? Was there some sort of fallout with the SceneKit group? Was it written in Obj-C, so it's a forgotten stepchild? Is Apple only interested in USDZ support? Reality Composer seems like a rudimentary editor for iPad and iPhone. No new features or editor improvements. It seems like SceneKit has been dropped in the hold with OpenGL. There's been time invested. Apple, guidance please?
Posted Last updated
.
Post not yet marked as solved
3 Replies
223 Views
I'm trying to optimize the draw calls in an existing scene with flattenedClone(). From what I can tell, enumerateChildNodes would be a good way to go through the scene tree and add the nodes to be flattened to a parent node. I saw something like this in a tutorial, but Xcode is saying that 'withName' is an extra argument:

gameScene.rootNode.enumerateChildNodes(withName: "//*") { (node, stop) in
    if (node.name == "Large Tree") {
        flattenParent.addChildNode(node)
        node.removeFromParentNode()
    }
}

Any guidance on this usage? Also, would a switch statement or multiple || be optimal when searching through several different mesh nodes?
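A note on the diagnostic above: SCNNode's enumerateChildNodes(_:) takes only a closure; the withName: variant belongs to SpriteKit's SKNode, which would explain the "extra argument" error. A minimal sketch of the same traversal, assuming the gameScene, flattenParent, and "Large Tree" names from the post:

```swift
import SceneKit

let gameScene = SCNScene()      // stand-ins for the post's objects
let flattenParent = SCNNode()

// SceneKit has no name-pattern search; collect matches with a predicate
// instead. childNodes(passingTest:) traverses the whole subtree.
let treeNodes = gameScene.rootNode.childNodes { node, _ in
    node.name == "Large Tree"
}
// addChildNode(_:) reparents a node, so no explicit removeFromParentNode()
// call is needed first.
treeNodes.forEach { flattenParent.addChildNode($0) }
gameScene.rootNode.addChildNode(flattenParent.flattenedClone())
```

For matching several different mesh names, a Set lookup (`meshNames.contains(node.name ?? "")`) reads more cleanly than a long || chain and stays O(1) per node.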
Posted Last updated
.
Post not yet marked as solved
2 Replies
527 Views
I'm using this code to create a rectangle (that will eventually be a more complex shape):

let vertices = [simd_float3(x: 1, y: 1, z: 0),
                simd_float3(x: 1, y: -1, z: 0),
                simd_float3(x: -1, y: -1, z: 0),
                simd_float3(x: -1, y: 1, z: 0)]
let vertexSource = SCNGeometrySource(data: Data(bytes: vertices, count: MemoryLayout<simd_float3>.size * vertices.count),
                                     semantic: .vertex,
                                     vectorCount: vertices.count,
                                     usesFloatComponents: true,
                                     componentsPerVector: 3,
                                     bytesPerComponent: MemoryLayout<Float>.size,
                                     dataOffset: 0,
                                     dataStride: MemoryLayout<simd_float3>.stride)
let indices: [Int32] = Array(0..<Int32(vertices.count))
let element = SCNGeometryElement(data: Data(bytes: indices, count: MemoryLayout<Int32>.size * indices.count),
                                 primitiveType: .polygon,
                                 primitiveCount: 1,
                                 bytesPerIndex: MemoryLayout<Int32>.size)
let geometry = SCNGeometry(sources: [vertexSource], elements: [element])

which logs this error in the Xcode console:

[SceneKit] Error: SCNGeometryElement initialization - Invalid polygon edge count (0)

There also doesn't seem to be any documentation about how to use this .polygon mode. When using .triangleStrip with a primitiveCount of 2, no error is logged.
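For what it's worth, the .polygon primitive type appears to expect the element data to begin with a per-polygon edge count ahead of the indices themselves; the "edge count (0)" in the error hints that the first Int32 (here, index 0) is being read as that count. A sketch of the likely layout for the rectangle above:

```swift
import SceneKit

// Hypothesized layout for .polygon (not spelled out in the docs): one Int32
// per polygon giving that polygon's edge count, followed by all indices.
let vertices = [simd_float3(x: 1, y: 1, z: 0), simd_float3(x: 1, y: -1, z: 0),
                simd_float3(x: -1, y: -1, z: 0), simd_float3(x: -1, y: 1, z: 0)]
let indices: [Int32] = [Int32(vertices.count),  // edge count for polygon 0
                        0, 1, 2, 3]             // its indices, in winding order
let element = SCNGeometryElement(
    data: Data(bytes: indices, count: MemoryLayout<Int32>.size * indices.count),
    primitiveType: .polygon,
    primitiveCount: 1,
    bytesPerIndex: MemoryLayout<Int32>.size)
```

With more than one polygon, all the edge counts come first (one per polygon), then the concatenated index lists.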
Posted
by Nickkk.
Last updated
.
Post marked as solved
1 Replies
238 Views
My SceneKit game failed with a com.apple.scenekit.scnview-renderer (10): signal SIGABRT. The error was marked on the line @main. Here's the log navigator output:

2022-05-25 15:24:18.829319+0800 MyWorldiOS[9022:293392] Metal API Validation Enabled
validateRenderPassDescriptor:899: failed assertion `RenderPass Descriptor Validation
MTLRenderPassAttachmentDescriptor MTLStoreActionMultisampleResolve store action for the depth attachment is not supported by device
PixelFormat MTLPixelFormatDepth32Float cannot be a MSAA resolve target
'
dyld4 config:
DYLD_ROOT_PATH=/Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot
DYLD_LIBRARY_PATH=/Users/wangkeshijian/Library/Developer/Xcode/DerivedData/MyWorld-aayoxjgvyfzbxvgqnvylzgvlwkyr/Build/Products/Debug-iphonesimulator:/Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/system/introspection
DYLD_INSERT_LIBRARIES=/Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libBacktraceRecording.dylib:/Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/usr/lib/libMainThreadChecker.dylib:/Applications/Xcode-beta.app/Contents/Developer/Platforms/iPhoneOS.platform/Library/Developer/CoreSimulator/Profiles/Runtimes/iOS.simruntime/Contents/Resources/RuntimeRoot/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib
DYLD_FRAMEWORK_PATH=/Users/wangkeshijian/Library/Developer/Xcode/DerivedData/MyWorld-aayoxjgvyfzbxvgqnvylzgvlwkyr/Build/Products/Debug-iphonesimulator
validateRenderPassDescriptor:899: failed assertion `RenderPass Descriptor Validation MTLRenderPassAttachmentDescriptor MTLStoreActionMultisampleResolve store action for the depth attachment is not supported by device PixelFormat MTLPixelFormatDepth32Float cannot be a MSAA resolve target '
CoreSimulator 802.6 - Device: MyPhone (EBB1ECDE-8AD7-4418-84AF-0B761E0A2EA7) - Runtime: iOS 15.4 (19E5234a) - DeviceType: iPhone 12
(lldb)

I'm not sure what else I should put in here.
Posted Last updated
.
Post not yet marked as solved
0 Replies
148 Views
Hi all, I am currently making a fitness app and want to replicate the way Apple incorporates medals in its Fitness app. It has a page called Awards that looks like a collection view of different medals the user can unlock. After unlocking a medal, the user can tap on it, and it opens a full-screen interactive scene where the user can rotate the medal, etc. I have had a play around with SceneKit and managed to load a medal into a scene and display it in the app. However, my version does not look as smooth or as polished as Apple's. Does anyone have any idea how they managed to present all of the different SceneKit scenes in a collection view, for example? Or how they achieved the smooth transition between tapping on the medal in the collection view and the scene? I guess this is created using SwiftUI. Can this be replicated in Storyboards? Thanks!
Posted Last updated
.
Post not yet marked as solved
1 Replies
327 Views
I saw that it's possible to use a gobo as a light effect, like in the real world, to flag parts of a light off. Can someone help me get the gobo effect to work? In the following example you will see a cube with a plane below it and a spotlight above it. I would like to load a pattern (image) in front of the spotlight source that would flag (block) certain parts of the light. The expected behaviour is that you would see part of the pattern on the top of the cube, as well as part of the blocked light pattern on the plane. Could anyone help me solve this gobo puzzle?

view.backgroundColor = .blue
let sceneView = SCNView(frame: self.view.frame)
self.view.addSubview(sceneView)
view.bringSubviewToFront(sceneView)
sceneView.backgroundColor = .clear
sceneView.allowsCameraControl = true
let scene = SCNScene()
sceneView.scene = scene

// Add camera node
let camera = SCNCamera()
let cameraNode = SCNNode()
cameraNode.camera = camera
cameraNode.position = SCNVector3(x: 0, y: 0, z: 0)
scene.rootNode.addChildNode(cameraNode)

// Add a cube to the scene
let cubeGeometry = SCNBox(width: 0.5, height: 0.5, length: 0.5, chamferRadius: 0.0)
let cubeNode = SCNNode(geometry: cubeGeometry)
cubeNode.position = SCNVector3(x: 0.0, y: -0.5, z: -1.5)

// Make the cube white
let whiteMaterial = SCNMaterial()
whiteMaterial.diffuse.contents = UIColor.white
cubeGeometry.materials = [whiteMaterial]
scene.rootNode.addChildNode(cubeNode)

// Add a plane to the scene so we can see the lights & shadows
let planeGeometry = SCNPlane(width: 100.0, height: 100.0)
let lightGrayMaterial = SCNMaterial()
lightGrayMaterial.diffuse.contents = UIColor.lightGray
planeGeometry.materials = [lightGrayMaterial]
let planeNode = SCNNode(geometry: planeGeometry)
planeNode.eulerAngles = SCNVector3(x: GLKMathDegreesToRadians(-90), y: 0, z: 0)
planeNode.position = SCNVector3(x: 0, y: -15, z: 0)
scene.rootNode.addChildNode(planeNode)

// Create a spotlight
let light = SCNLight()
light.type = SCNLight.LightType.spot
light.castsShadow = true

// Create a gobo mask:
// if let gobo = light.gobo
// {
//   gobo.contents = UIImage(named: "gobo")
//   gobo.intensity = 0.5
//   //light.categoryBitMask = -1
// }

// Create a lightNode
let lightNode = SCNNode()
lightNode.light = light
lightNode.position = SCNVector3(x: 0, y: 10, z: -1.5)
let lightAngle = -90 * Float.pi / 180 // light facing down
lightNode.eulerAngles = SCNVector3(x: lightAngle, y: 0, z: 0)
scene.rootNode.addChildNode(lightNode)
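For reference, SCNLight.gobo is a read-only SCNMaterialProperty that is documented to apply only to spot lights, so the commented-out block above is on the right track; it mostly comes down to setting the existing property's contents. A minimal sketch, where "gobo" is the asker's asset name:

```swift
import SceneKit
import UIKit

let light = SCNLight()
light.type = .spot
light.castsShadow = true
// Widen the cone so the projected pattern can cover both the cube and the plane.
light.spotInnerAngle = 30
light.spotOuterAngle = 80
// The gobo property already exists on a spot light; just assign contents.
light.gobo?.contents = UIImage(named: "gobo")  // "gobo": assumed asset name
light.gobo?.intensity = 1.0
```

If nothing shows up, it is worth double-checking that the spotlight actually faces the geometry and that the mask image has enough contrast to be visible in the projection.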
Posted
by m@rco.
Last updated
.
Post not yet marked as solved
0 Replies
164 Views
I want to crop a USDZ model at runtime. I use ModelIO for this.
Before: https://i.stack.imgur.com/yDXXF.jpg
After: https://i.stack.imgur.com/m9ryg.jpg
First of all, get the file from the bundle:

if let file = Bundle.main.path(forResource: fileName, ofType: "usdz") {  // fileName: the model's resource name
    let url = URL(fileURLWithPath: file)
} else {
    print("Object not found in Bundle")
}

And then I need to access the asset:

let asset = MDLAsset(url: url)

What should I do after this step? How am I supposed to use the SCNGeometrySource and SCNGeometryElement or MDLVoxelArray classes?
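One possibly useful shortcut: SceneKit and ModelIO bridge directly, so the asset can become a scene (or individual meshes can become geometry) without hand-assembling SCNGeometrySource/SCNGeometryElement. A sketch, where `url` stands for the bundle URL obtained above:

```swift
import ModelIO
import SceneKit
import SceneKit.ModelIO  // enables SCNScene(mdlAsset:) and SCNGeometry(mdlMesh:)

let asset = MDLAsset(url: url)          // url: the model's file URL
let scene = SCNScene(mdlAsset: asset)   // whole asset as a SceneKit scene

// Or work mesh-by-mesh, e.g. to inspect or edit vertex data before display:
if let mesh = asset.object(at: 0) as? MDLMesh {
    let node = SCNNode(geometry: SCNGeometry(mdlMesh: mesh))
    scene.rootNode.addChildNode(node)
}
```

Whether MDLVoxelArray is the right tool depends on the kind of crop; for a simple planar cut, editing the mesh's vertex/index buffers (or masking via a shader modifier) is usually more direct than voxelizing.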
Posted
by mudur.
Last updated
.
Post not yet marked as solved
2 Replies
194 Views
I am new to Swift. I want to display the point cloud data collected by the lidar on a robot in real time on iOS. Any suggestions for me? Thanks.
Posted
by myles96.
Last updated
.
Post not yet marked as solved
0 Replies
101 Views
I am facing issues baking light probes on iOS. The same logic bakes light probes on macOS successfully. iOS throws the following exception:

[MTLDebugCommandBuffer waitUntilCompleted]:201: failed assertion `waitUntilCompleted on uncommitted command buffer'

Some specs: Runtime: iOS 15.4 - DeviceType: iPhone 13 Pro Max. I created two template apps from Xcode (one for iOS and the other for macOS). The following is the code for the light probe bake, added in viewDidLoad:

SCNScene *scene = [SCNScene scene];
SCNNode *ambientLight = [SCNNode node];
ambientLight.light = [SCNLight light];
ambientLight.light.type = SCNLightTypeAmbient;
ambientLight.light.color = [UIColor whiteColor];
ambientLight.light.intensity = 1000.0;
[scene.rootNode addChildNode:ambientLight];
scene.background.contents = [UIColor whiteColor];
scene.background.intensity = 2000.;
SCNNode *probe1 = [SCNNode node];
probe1.position = SCNVector3Make(-0.493530, 1.7285934, -0.150000);
probe1.light = [SCNLight light];
probe1.light.type = SCNLightTypeProbe;
[scene.rootNode addChildNode:probe1];
SCNRenderer *probeRenderer = [SCNRenderer rendererWithDevice:nil options:nil];
probeRenderer.scene = scene;
NSArray<SCNNode *> *probes = [NSArray arrayWithObjects:probe1, nil];
[probeRenderer updateProbes:probes atTime:1.0];

The crash occurs at updateProbes. Also, I have logged and checked the 27 floats, and they are not garbage on macOS, so essentially the bake is working as expected there. Any help would be really appreciated!
Posted
by MridulK.
Last updated
.
Post not yet marked as solved
2 Replies
340 Views
I want to remove unnecessary materials or textures in order to reduce the size of my USDZ model. How can I manipulate the model with Swift? Alternatively, I'm open to any advice on reducing the size of a USDZ model.
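One approach worth sketching: load the model into an SCNScene, blank out the texture maps you can live without, and re-export with scene.write. `modelURL` and `outputURL` are placeholders here, and which material properties actually matter depends on the model:

```swift
import SceneKit
import UIKit

// modelURL: your input .usdz; outputURL: the destination .usdz (placeholders).
if let scene = try? SCNScene(url: modelURL, options: nil) {
    scene.rootNode.enumerateHierarchy { node, _ in
        for material in node.geometry?.materials ?? [] {
            // Drop maps the model can do without.
            material.normal.contents = nil
            material.ambientOcclusion.contents = nil
            // Or swap a large diffuse texture for a flat color, if acceptable:
            // material.diffuse.contents = UIColor.gray
        }
    }
    scene.write(to: outputURL, options: nil, delegate: nil, progressHandler: nil)
}
```

Downscaling the texture images themselves (e.g. re-encoding 4K maps at 1K) before assignment typically saves far more than removing materials, since textures dominate USDZ file size.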
Posted
by mudur.
Last updated
.
Post marked as solved
1 Replies
222 Views
How can I crop a 3D model as seen in the photos? Should I use MetalKit, or can I handle it with SceneKit and ModelIO? I couldn't find any code examples on this topic. Can you share a code snippet?
Before: https://i.stack.imgur.com/yDXXF.jpg
After: https://i.stack.imgur.com/m9ryg.jpg
Posted
by mudur.
Last updated
.
Post not yet marked as solved
1 Replies
355 Views
I'm not sure which combination of iOS/Xcode/macOS is causing this issue, but all of a sudden, when I try to run our SceneKit app with the Scheme -> Diagnostics -> Metal -> API Validation setting turned off, the scene won't render and the console fills with the following errors:

Execution of the command buffer was aborted due to an error during execution. Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)
[SceneKit] Error: Main command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=9 "Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)"

If you run the app outside of Xcode, it's fine; enabling the API Validation option also stops the issue. One of my schemes has had this option disabled since the project began and never had an issue before. Just throwing this out there in case someone else has spent hours of their life trying to figure out why this is not working for them. Also, you can just create a new SceneKit project and turn that diagnostic option off, and the app won't render anything.
Posted
by markdaws.
Last updated
.
Post not yet marked as solved
1 Replies
486 Views
It appears SCNLayer is now deprecated, but I can't find the replacement for it. I'm just trying to get a SceneKit scene to render into a CALayer without an SCNView attached. I actually want 3 different cameras from the same scene to render into 3 different layers inside the same NSView (not 3 separate SCNViews). Should we use CAMetalLayer for this now? If so, how do I set it up to render the scene? I'm using macOS 10.14 (but planning on using 10.15 when it's stable) with Xcode 11 beta 3, BTW. Thank you if you can help with this.
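One view-less option to consider is SCNRenderer, which can render a scene either into a Metal command buffer or to a snapshot image that a plain CALayer can display. A sketch, where `scene`, `cameraNode`, and `layer` stand in for one of the three camera/layer pairs:

```swift
import SceneKit

// A macOS sketch: render one camera's view of the shared scene to an image
// and hand it to an ordinary CALayer. For continuous animation you would
// drive this from a CVDisplayLink, and likely render straight into a
// CAMetalLayer drawable via
// SCNRenderer.render(atTime:viewport:commandBuffer:passDescriptor:) instead.
let renderer = SCNRenderer(device: nil, options: nil)
renderer.scene = scene
renderer.pointOfView = cameraNode   // one of the three camera nodes
let image = renderer.snapshot(atTime: 0,
                              with: CGSize(width: 512, height: 512),
                              antialiasingMode: .multisampling4X)
layer.contents = image
```

Three SCNRenderer instances can share the same SCNScene, one per camera, so each layer gets its own point of view without duplicating scene content.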
Posted
by bstahl.
Last updated
.
Post not yet marked as solved
1 Replies
203 Views
I am using scene.write(to:"dirpath\name.usdz") to get USDZ export functionality into my app (universal, macOS & iOS). My problem is that it ceases to work after the first use; quitting & restarting the app is the only way to re-enable it. I have tried reusing the same scene and instantiating a new scene (both ways with the exact same node structure), with the same results every time: the first invocation writes a file of ~14 MB, and any calls after that write 1.5-2 KB of garbage. I use a unique filename for each write and check to make sure it doesn't already exist. Any ideas?
Posted Last updated
.
Post marked as solved
1 Replies
215 Views
I need some help adding an .scn file to my submission for this year. I've tried creating an art.scnassets folder from Xcode within the project, adding the .scn file, and trying to use it with SCNScene(named: "scene.scn"), but I get a blank SCNView, and when printing the value of the SCNScene, I get nil. Please help in this context ASAP.
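A common cause of SCNScene(named:) returning nil is the path: the name is resolved relative to the app bundle, so the .scnassets folder has to be part of it, and the file needs to be in the target's membership. A sketch using the folder and file names from the post:

```swift
import SceneKit

// Include the catalog folder in the path; also verify the file's
// Target Membership checkbox in Xcode's File inspector.
if let scene = SCNScene(named: "art.scnassets/scene.scn") {
    sceneView.scene = scene   // sceneView: your SCNView (assumed)
} else {
    print("scene.scn not found: check the path and target membership")
}
```

Note that .scnassets is a folder reference (a catalog), not a file, so it must be added to the project as such for the relative path to resolve.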
Posted Last updated
.
Post not yet marked as solved
1 Replies
379 Views
I need to set an image in the center of the SCNScene. In order to do this, I use this approach:

...
let image = UIImage(named: "my_image")
let scene = SCNScene()
scene.background.contents = image
...

but the image I get is stretched to fill the entire screen; what I need is just to center it. I tried different approaches, but without luck. For example, creating this extension was my last attempt:

extension SCNScene {
    func addBackground(imageName: String = "YOUR DEFAULT IMAGE NAME", contentMode: UIView.ContentMode = .scaleToFill) {
        // setup the UIImageView
        let backgroundImageView = UIImageView(frame: UIScreen.main.bounds)
        backgroundImageView.image = UIImage(named: imageName)
        backgroundImageView.contentMode = contentMode
        backgroundImageView.translatesAutoresizingMaskIntoConstraints = false
        // adding NSLayoutConstraints
        let leadingConstraint = NSLayoutConstraint(item: backgroundImageView, attribute: .leading, relatedBy: .equal, toItem: self, attribute: .leading, multiplier: 1.0, constant: 0.0)
        let trailingConstraint = NSLayoutConstraint(item: backgroundImageView, attribute: .trailing, relatedBy: .equal, toItem: self, attribute: .trailing, multiplier: 1.0, constant: 0.0)
        let topConstraint = NSLayoutConstraint(item: backgroundImageView, attribute: .top, relatedBy: .equal, toItem: self, attribute: .top, multiplier: 1.0, constant: 0.0)
        let bottomConstraint = NSLayoutConstraint(item: backgroundImageView, attribute: .bottom, relatedBy: .equal, toItem: self, attribute: .bottom, multiplier: 1.0, constant: 0.0)
        self.background.contents = backgroundImageView
        NSLayoutConstraint.activate([leadingConstraint, trailingConstraint, topConstraint, bottomConstraint])
    }
}

UPD: Actual result is [screenshot]; desired result is [screenshot].
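One thing worth noting: scene.background is an SCNMaterialProperty rather than a view, so the UIKit constraints in the extension above have nothing to act on. What can work instead is adjusting the property's contentsTransform and wrap modes so the image no longer stretches. A rough sketch (the scale factor of 2.0 is an arbitrary placeholder; an exact centered fit would derive it from the image and viewport aspect ratios, and the matrix concatenation order may need adjusting per platform):

```swift
import SceneKit
import UIKit

let scene = SCNScene()
scene.background.contents = UIImage(named: "my_image")
// Avoid tiling outside the image.
scene.background.wrapS = .clamp
scene.background.wrapT = .clamp
// Scale the texture coordinates about their center; a factor > 1 makes the
// displayed image smaller within the viewport.
var transform = SCNMatrix4MakeTranslation(0.5, 0.5, 0)
transform = SCNMatrix4Scale(transform, 2.0, 2.0, 1.0)
transform = SCNMatrix4Translate(transform, -0.5, -0.5, 0)
scene.background.contentsTransform = transform
```

An alternative that avoids the math entirely: give the SCNView a clear background and place a centered UIImageView behind it in the view hierarchy.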
Posted Last updated
.
Post not yet marked as solved
2 Replies
578 Views
I'm doing an experiment integrating SwiftUI views as materials for a SCNPlane node in a SceneKit scene. It is working perfectly on iOS using UIHostingController with the following code:

func createInfoPanel() {
    let panel = SCNPlane(width: 6.0, height: 6.0)
    let panelNode = SCNNode(geometry: panel)
    let infoPanelHost = SCNHostingController(rootView: helloWorld)
    infoPanelHost.view.isOpaque = false
    infoPanelHost.view.backgroundColor = SCNColor.clear
    infoPanelHost.view.frame = CGRect(x: 0, y: 0, width: 256, height: 256)
    panel.materials.first?.diffuse.contents = infoPanelHost.view
    panel.materials.first?.emission.contents = infoPanelHost.view
    panel.materials.first?.emission.intensity = 3.0
    [... BillBoardConstraint etc here ...]
    addNodeToScene(panelNode)
}

Yet when I tried to apply the same to macOS, I don't seem to be able to make the view created by NSHostingController transparent. Invoking infoPanelHost.view.isOpaque = false produces an error saying isOpaque is read-only and can't be set. I tried subclassing NSHostingController and overriding viewWillAppear to try to make the view transparent/non-opaque, to no avail:

override func viewWillAppear() {
    super.viewWillAppear()
    self.view.wantsLayer = true
    self.view.layer?.backgroundColor = NSColor.clear.cgColor
    self.view.layer?.isOpaque = false
    self.view.opaqueAncestor?.layer?.backgroundColor = NSColor.clear.cgColor
    self.view.opaqueAncestor?.layer?.isOpaque = false
    self.view.opaqueAncestor?.alphaValue = 0.0
    self.view.alphaValue = 0.0
    self.view.window?.isOpaque = false
    self.view.window?.backgroundColor = NSColor.clear
}

I tried setting everything I could think of to non-opaque, as you can see, and still the panels are opaque, show no info, and obscure the 3D entity they should overlay. Can someone please advise?
Posted
by TarqTeles.
Last updated
.
Post not yet marked as solved
1 Replies
197 Views
Hi, I just wanted to display a SpriteKit scene on a SCNPlane, so I set the SCNMaterial contents to my SKScene, but instead of getting the scene I'm getting a grey plane. This is my code, by the way:

var mainScene: SKScene {
    let scene = Game()
    scene.size = CGSize(width: 1024, height: 1024)
    scene.scaleMode = .resizeFill
    scene.backgroundColor = .purple
    scene.view?.backgroundColor = .purple
    scene.view?.allowsTransparency = false
    return scene
}

func initMainScene() -> SceneView {
    mainScene.view?.isPaused = false
    let scene = SCNScene()
    let mainSceneMaterial = SCNMaterial()
    mainSceneMaterial.normal.contents = mainScene
    mainSceneMaterial.isDoubleSided = true
    let planeGeometry = SCNPlane(width: 1, height: 1)
    planeGeometry.materials = [mainSceneMaterial]
    let plane: SCNNode = SCNNode(geometry: planeGeometry)
    let camera: SCNNode = SCNNode()
    camera.name = "Camera"
    camera.camera = SCNCamera()
    camera.position = SCNVector3(x: 0.0, y: 0.0, z: 4.0)
    let light: SCNNode = SCNNode()
    light.light = SCNLight()
    light.light!.type = .omni
    light.position = SCNVector3(x: 1.5, y: 1.5, z: 1.5)
    scene.rootNode.addChildNode(camera)
    scene.rootNode.addChildNode(light)
    scene.rootNode.addChildNode(plane)
    return SceneView(
        scene: scene,
        pointOfView: scene.rootNode.childNode(withName: "Camera", recursively: false),
        options: []
    )
}

Here is the screenshot: [screenshot] Also, my SpriteKit scene has touchesBegan and touchesMoved functions implemented; will those events still work if I embed the scene in the SCNMaterial? Thanks very much 🙏
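One thing worth noting about the code above: the SKScene is assigned to the material's normal slot, where SceneKit interprets content as a normal map rather than visible color, which could account for the grey plane. A sketch of the usual assignment:

```swift
import SceneKit
import SpriteKit

// An SKScene shows up as texture-like content on the diffuse slot
// (emission also works if the plane should glow regardless of lighting).
let skScene = SKScene(size: CGSize(width: 1024, height: 1024))
skScene.backgroundColor = .purple

let material = SCNMaterial()
material.diffuse.contents = skScene
material.isDoubleSided = true
```

Separately, mainScene is a computed property, so each access builds a fresh Game instance; the copy being paused/configured is not the one on the material. Storing the scene once in a stored property avoids that.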
Posted
by ItsTheGuy.
Last updated
.