SceneKit

RSS for tag

Create 3D games and add 3D content to apps using high-level scene descriptions with SceneKit.

SceneKit Documentation

Post

Replies

Boosts

Views

Activity

Did iOS 16's SceneKit change something about handling pixel format?
Hi, my app displays a video as a texture on a SceneKit SCNNode. I've just found that some videos look different in iOS 16.4 than in previous versions: they look paler than they should. I looked through some documents and set SCNDisableLinearSpaceRendering to true in the Info.plist, and those videos then look exactly as they should, but the problem is that the other videos, which already looked fine, now turn out different. In any case it seems related to linear vs. gamma color spaces, according to this answer (https://developer.apple.com/forums/thread/710643). The problematic videos definitely have some different color space setting or similar. I am not really an expert in this field; where should I start digging? Or how can I just make iOS 16.4 behave the same as previous versions? It worked well for all the videos then. What was actually updated?
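For reference, the SCNDisableLinearSpaceRendering switch mentioned above is a top-level boolean in the app's Info.plist. A minimal entry looks like the fragment below (the key name comes from the linked thread; whether it restores pre-16.4 behavior for every video is exactly what is in question here):

```xml
<key>SCNDisableLinearSpaceRendering</key>
<true/>
```

Toggling this flag while comparing a "pale" video against one that renders correctly can at least confirm whether the two groups of videos are tagged with different color spaces.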
1
0
696
May ’23
Creating a custom polygon plane SCNGeometry error
Hello! I have been using this piece of code to create a custom plane geometry using the SCNGeometryPrimitiveType option ".polygon":

extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        var indices: [Int32] = [Int32(vertices.count)]
        var index: Int32 = 0
        for _ in vertices {
            indices.append(index)
            index += 1
        }
        let vertexSource = SCNGeometrySource(vertices: vertices)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData, primitiveType: .polygon, primitiveCount: 1, bytesPerIndex: MemoryLayout<Int32>.size)
        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blue
        material.isDoubleSided = true
        geometry.firstMaterial = material
        return geometry
    }
}

This works by sending in vertex coordinates in an order that represents the outline of the desired plane. As an example, I might make an array with vertices representing a rectangle geometry as follows: [lowerLeft, upperLeft, upperRight, lowerRight].

This method seems to work well for simpler shapes, but I sometimes get an error whose cause I haven't been able to find when using more complex shapes or vertex coordinates that are randomly scattered in a plane (e.g. when the method receives an array where the vertex order does not outline a shape; in the rectangle case it could look like this: [lowerLeft, upperRight, lowerRight, upperLeft]). The error seems more likely to occur as the number of vertices increases.

I'm using this method to allow the user of my app to "paint" an outline of the desired plane, and as I can't control how the user chooses to do so, I want this method to be able to handle those cases as well.

This is the error print I receive after calling this method:

-[MTLDebugDevice validateNewBufferArgs:options:]:467: failed assertion `Cannot create buffer of zero length.'
(lldb)

And this is what appears in the debug navigator:

libsystem_kernel.dylib`__pthread_kill:
    0x219cad0c4 <+0>:  mov    x16, #0x148
    0x219cad0c8 <+4>:  svc    #0x80
->  0x219cad0cc <+8>:  b.lo   0x219cad0e4    ; <+32>
    0x219cad0d0 <+12>: stp    x29, x30, [sp, #-0x10]!
    0x219cad0d4 <+16>: mov    x29, sp
    0x219cad0d8 <+20>: bl     0x219ca25d4    ; cerror_nocancel
    0x219cad0dc <+24>: mov    sp, x29
    0x219cad0e0 <+28>: ldp    x29, x30, [sp], #0x10
    0x219cad0e4 <+32>: ret

where line 4 has the error: com.apple.scenekit.scnview-renderer (16): signal SIGABRT. Any help or explanation for this error would be greatly appreciated!
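The "Cannot create buffer of zero length" assertion suggests the renderer's tessellation of the .polygon element produced no triangles, which degenerate or self-intersecting outlines (like the [lowerLeft, upperRight, lowerRight, upperLeft] ordering above) can plausibly cause. One hedged mitigation, assuming the goal is only to reject unusable input before building the geometry element, is to validate the index construction first. `buildPolygonIndices` here is a hypothetical helper for illustration, not a SceneKit API:

```swift
import Foundation

// Hypothetical helper: builds the index array the .polygon primitive expects.
// The first entry is the polygon's vertex count, followed by one index per
// vertex. Returns nil for inputs that cannot form a polygon at all, so the
// caller can bail out before SceneKit tries to allocate a Metal buffer.
func buildPolygonIndices(vertexCount: Int) -> [Int32]? {
    guard vertexCount >= 3 else { return nil } // fewer than 3 points cannot tessellate
    var indices: [Int32] = [Int32(vertexCount)]
    indices.append(contentsOf: 0..<Int32(vertexCount))
    return indices
}

// A rectangle outline yields [4, 0, 1, 2, 3]; two points yield nil.
let rectIndices = buildPolygonIndices(vertexCount: 4)
let degenerate = buildPolygonIndices(vertexCount: 2)
```

This does not fix self-intersecting outlines (those would likely need a sorting or triangulation pass before being handed to SceneKit), but it rules out the trivially empty case.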
3
0
1.5k
Jun ’19
Xcode 15 Beta 3-6: .scn and .dae file crash
Is anyone else having issues with scene-view .dae and/or .scn files? These new beta releases seem very incompatible with files that worked perfectly in previous Xcode releases up to Xcode 14. I'm working on upgrading a simple stripped-down version of my chess game and am running into strange and bogus error messages and crashes:

/Users/helmut/Desktop/schachGame8423/schach2023/scntool:1:1 failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]

All tools (Reality Converter, exporting the .dae file to other graphic formats) work fine as in prior Xcode releases, but something is missing in the current Xcode 15 beta.
0
0
490
Aug ’23
Properly projecting points with different orientations and camera positions?
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect facial landmarks of each face in the camera feed (by way of VNDetectFaceLandmarksRequest). From there, I take the found observations and unproject each point into a SceneKit view (SCNView), then use those points as the vertices to draw a custom geometry that is textured with a material over each found face. Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions. In general, this works as expected, but only when my device is using the front camera in landscape right orientation. When I rotate my device, or switch to the rear camera, the unprojected points no longer align with the found face as they do in landscape right/front camera. Problem: When testing this code, the mesh appears properly (that is, affixed to a user's face), but again only when using the front camera in landscape right. While the code runs as expected (generating the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases. My belief is that this issue stems either from my conversion of the face's bounding box (using VNImageRectForNormalizedRect, which I calculate with the width/height of my SCNView, not my pixel buffer, which is typically much larger), though all modifications I have tried result in the same issue, or from my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether it is needed here.
Sample of Vision request setup:

// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: exifOrientation, options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]
    // Draw faces
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}

Sample of SCNView setup:

// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Setup scene
let scene = SCNScene()
scnView.scene = scene

// Setup camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)

Sample of "face processing":

func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNode = [SCNNode]()

    // The origin point
    let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)

    // Iterate through each found face
    for observation in observations {
        // Setup a SCNNode for the face
        let face = SCNNode()

        // Setup the found bounds
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox, Int(self.scnView.bounds.width), Int(self.scnView.bounds.height))

        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

            // Add all points as vertices
            var vertices = [SCNVector3]()

            // Verify we have points
            if let allPoints = landmarks.allPoints {
                // Iterate through each point
                for (index, point) in allPoints.normalizedPoints.enumerated() {
                    // Apply the transform to convert each point to the face's bounding box range
                    _ = index
                    let normalizedPoint = point.applying(affineTransform)
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = sceneView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }

            // Setup indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...

            // Setup texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...

            // Setup texture image
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                let textureCoord = CGPoint(x: x, y: y)
                return textureCoord
            }

            // Setup sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

            // Setup elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

            // Setup geometry
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage

            // Setup node
            let customFace = SCNNode(geometry: geometry)
            sceneView.scene?.rootNode.addChildNode(customFace)

            // Append the face to the face nodes array
            faceNode.append(face)
        }
    }

    // Iterate the face nodes and append to the scene
    for node in faceNode {
        sceneView.scene?.rootNode.addChildNode(node)
    }
}
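One thing worth isolating in the setup above is the coordinate convention: Vision's normalized rects have a lower-left origin, while UIKit views have an upper-left origin, so a bounding box that is only scaled (which is all VNImageRectForNormalizedRect does) can land vertically mirrored whenever the two conventions disagree. A minimal sketch of the flipped conversion, assuming the view's size is the target space (`viewRect(forNormalizedRect:viewSize:)` is a hypothetical helper, not a Vision API):

```swift
import Foundation

// Hypothetical helper: scales a Vision normalized rect (origin at lower-left)
// into view coordinates (origin at upper-left) by flipping the y axis.
func viewRect(forNormalizedRect normalized: CGRect, viewSize: CGSize) -> CGRect {
    CGRect(x: normalized.origin.x * viewSize.width,
           y: (1 - normalized.origin.y - normalized.height) * viewSize.height,
           width: normalized.width * viewSize.width,
           height: normalized.height * viewSize.height)
}

// A box in the lower-left quadrant of Vision space (origin 0,0) maps to the
// lower half of a 100x100 view, i.e. y = 50 in upper-left-origin coordinates.
let r = viewRect(forNormalizedRect: CGRect(x: 0, y: 0, width: 0.5, height: 0.5),
                 viewSize: CGSize(width: 100, height: 100))
```

Comparing this flipped rect against the unflipped one per orientation would at least separate a coordinate-convention bug from a camera/projection-matrix bug.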
3
0
1.7k
Oct ’20
SceneKit - error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument"
We are trying to save a scene to usdz using the scene?.write method, which worked as expected until iOS 17. In iOS 17 we get the error Thread 1: "*** -[NSPathStore2 stringByAppendingPathExtension:]: nil argument", which seems to be a SceneKit issue. Attaching a stack trace screenshot for reference. We have used the updated method for url in scene?.write(to: url, delegate: nil), where url has been generated using the .appending(path: String) method.
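As a hedged workaround sketch, assuming the nil argument comes from SceneKit manipulating the destination path's extension during export, it may be worth building the destination URL so the .usdz extension is explicitly part of the last path component rather than relying on the writer to derive it. `usdzDestination(in:named:)` is a hypothetical helper for illustration:

```swift
import Foundation

// Hypothetical helper: builds an export URL whose last path component
// already carries the .usdz extension, so no later path-extension
// manipulation is needed on the receiving side.
func usdzDestination(in directory: URL, named name: String) -> URL {
    directory.appendingPathComponent(name).appendingPathExtension("usdz")
}

let exportDir = URL(fileURLWithPath: "/tmp/exports", isDirectory: true)
let outputURL = usdzDestination(in: exportDir, named: "scene")
// outputURL.lastPathComponent is "scene.usdz"
```

If the crash persists with an explicit-extension URL, that would point back at SceneKit's exporter itself rather than at how the URL was built.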
2
3
835
Jun ’23
Adding an object to RoomPlan
A feature my app needs is to allow users to go through their room and mark specific positions, then navigate to these positions with a floor plan acting like a map. Think Google Maps, but for your room, showing the user's position. I know it is possible to make a floor plan with RoomPlan, which could act as a map, but would it be possible, after the plan is made, to track a user's location in the room and show it? Is this too complex for RoomPlan? And if so, how would I tackle this problem?
0
0
757
Jul ’23
Understanding and subclassing SCNCameraController
Understanding SCNCameraController

TL;DR: I'm able to create my own subclassed camera controller, but it only works for rotation, not translation. I made a demo repo here.

Background

I want to use SceneKit's camera controller to drive my scene's camera. The reason I want to subclass it is that my camera is on a rig where I apply rotation to the rig and translation to the camera. I do that because I animate the camera, and applying both translation and rotation to the camera node doesn't create the animation I want.

Setting up

Instantiate my own SCNCameraController. Set its pointOfView to my scene's pointOfView (or its parent node, I guess).

Using the camera controller

We now want the new camera controller to drive the scene. When interactions begin (e.g. mouseDown), call beginInteraction(_ location: CGPoint, withViewport viewport: CGSize). When interactions update and end, call the corresponding functions on the camera controller.

Actual behavior

It works when I begin/update/end interactions from mouse-down events. It ignores any other event types, like magnification and scrollWheel, which work in e.g. the SceneKit editor in Xcode. See MySCNView.swift in the repo for a demo. By overriding the camera controller's rotate function, I can see that it is called with deltas. This is great. But when I override translateInCameraSpaceBy, my print statements don't appear and the scene doesn't translate.

Expected behavior

I expected SCNCameraController to also apply translations and rolls to the pointOfView by inspecting the currentEvent and figuring out what to do. I'm inclined to think that I'm supposed to call translateInCameraSpaceBy myself, but that seems inconsistent with how begin/continue/end interaction seems to call rotate.

Demo repo: https://github.com/mortenjust/Camera-Control-Demo
1
0
642
Jul ’23