SceneKit

RSS for tag

Create 3D games and add 3D content to apps using high-level scene descriptions with SceneKit.

SceneKit Documentation

Post

Replies

Boosts

Views

Activity

Creating a custom polygon plane SCNGeometry error
Hello! I have been using this piece of code to create a custom plane geometry using the SCNGeometryPrimitiveType option `.polygon`:

```swift
extension SCNGeometry {
    static func polygonPlane(vertices: [SCNVector3]) -> SCNGeometry {
        var indices: [Int32] = [Int32(vertices.count)]
        var index: Int32 = 0
        for _ in vertices {
            indices.append(index)
            index += 1
        }
        let vertexSource = SCNGeometrySource(vertices: vertices)
        let indexData = Data(bytes: indices, count: indices.count * MemoryLayout<Int32>.size)
        let element = SCNGeometryElement(data: indexData,
                                         primitiveType: .polygon,
                                         primitiveCount: 1,
                                         bytesPerIndex: MemoryLayout<Int32>.size)
        let geometry = SCNGeometry(sources: [vertexSource], elements: [element])
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blue
        material.isDoubleSided = true
        geometry.firstMaterial = material
        return geometry
    }
}
```

This works by sending in vertex coordinates in an order that represents the outline of the desired plane. As an example, I might make an array with vertices representing a rectangle geometry as follows: [lowerLeft, upperLeft, upperRight, lowerRight].

This method seems to work well for simpler shapes, but I sometimes get an error which I haven't been able to find the cause of when using more complex shapes, or vertex coordinates which are randomly scattered in a plane (e.g. when the method receives an array where the vertex order does not outline a shape; in the rectangle case it could look like this: [lowerLeft, upperRight, lowerRight, upperLeft]). The error seems more likely to occur as the number of vertices increases.

I'm using this method to allow the user of my app to "paint" an outline of the desired plane, and as I can't control how the user chooses to do so, I want this method to be able to handle those cases as well.

This is the error print I receive after calling this method:

```
-[MTLDebugDevice validateNewBufferArgs:options:]:467: failed assertion `Cannot create buffer of zero length.'
(lldb)
```

And this is what appears in the debug navigator:

```
libsystem_kernel.dylib`__pthread_kill:
    0x219cad0c4 <+0>:  mov    x16, #0x148
    0x219cad0c8 <+4>:  svc    #0x80
->  0x219cad0cc <+8>:  b.lo   0x219cad0e4    ; <+32>
    0x219cad0d0 <+12>: stp    x29, x30, [sp, #-0x10]!
    0x219cad0d4 <+16>: mov    x29, sp
    0x219cad0d8 <+20>: bl     0x219ca25d4    ; cerror_nocancel
    0x219cad0dc <+24>: mov    sp, x29
    0x219cad0e0 <+28>: ldp    x29, x30, [sp], #0x10
    0x219cad0e4 <+32>: ret
```

Where line 4 has the error: com.apple.scenekit.scnview-renderer (16): signal SIGABRT

Any help or explanation for this error would be greatly appreciated!
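One mitigation sketch worth trying here (an assumption, not a confirmed fix for the Metal assertion): since `.polygon` expects the vertices to trace the outline in order, scattered input can be pre-sorted by angle around the centroid before calling `polygonPlane(vertices:)`. This only yields a valid (non-self-intersecting) outline for roughly convex point sets:

```swift
import Foundation
import SceneKit

/// Sorts outline points counter-clockwise around their centroid.
/// A heuristic sketch: it only produces a valid outline for roughly
/// convex point sets, and assumes the points lie in the XY plane.
func sortedOutline(_ vertices: [SCNVector3]) -> [SCNVector3] {
    guard vertices.count > 2 else { return vertices }
    // Compute the centroid of the points.
    let cx = vertices.map { $0.x }.reduce(0, +) / Float(vertices.count)
    let cy = vertices.map { $0.y }.reduce(0, +) / Float(vertices.count)
    // Order each point by its angle around the centroid.
    return vertices.sorted {
        atan2f($0.y - cy, $0.x - cx) < atan2f($1.y - cy, $1.x - cx)
    }
}
```

For genuinely concave, user-drawn outlines, triangulating the points yourself (e.g. ear clipping) and building the element with `.triangles` instead of `.polygon` is likely more robust.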
3
0
1.5k
Jun ’19
Properly projecting points with different orientations and camera positions?
Summary: I am using the Vision framework, in conjunction with AVFoundation, to detect facial landmarks of each face in the camera feed (by way of the VNDetectFaceLandmarksRequest). From here, I am taking the found observations and unprojecting each point to a SceneKit view (SCNView), then using those points as the vertices to draw a custom geometry that is textured with a material over each found face. Effectively, I am working to recreate how an ARFaceTrackingConfiguration functions. In general, this task is functioning as expected, but only when my device is using the front camera in landscape right orientation. When I rotate my device, or switch to the rear camera, the unprojected points do not properly align with the found face as they do in landscape right/front camera.

Problem: When testing this code, the mesh appears properly (that is, appears affixed to a user's face), but again, only when using the front camera in landscape right. While the code runs as expected (that is, generating the face mesh for each found face) in all orientations, the mesh is wildly misaligned in all other cases. My belief is that this issue stems either from how I convert the face's bounding box (using VNImageRectForNormalizedRect, which I am calculating using the width/height of my SCNView, not my pixel buffer, which is typically much larger), though all modifications I have tried result in the same issue, or from my SCNCamera, as I am a bit unsure how the transform/projection matrix works and whether that would be needed here.

Sample of Vision request setup:

```swift
// Setup Vision request options
var requestHandlerOptions: [VNImageOption: AnyObject] = [:]

// Setup camera intrinsics
let cameraIntrinsicData = CMGetAttachment(sampleBuffer,
                                          key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix,
                                          attachmentModeOut: nil)
if cameraIntrinsicData != nil {
    requestHandlerOptions[VNImageOption.cameraIntrinsics] = cameraIntrinsicData
}

// Set EXIF orientation
let exifOrientation = self.exifOrientationForCurrentDeviceOrientation()

// Setup Vision request handler
let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                    orientation: exifOrientation,
                                    options: requestHandlerOptions)

// Setup the completion handler
let completion: VNRequestCompletionHandler = { request, error in
    let observations = request.results as! [VNFaceObservation]
    // Draw faces
    DispatchQueue.main.async {
        drawFaceGeometry(observations: observations)
    }
}

// Setup the image request
let request = VNDetectFaceLandmarksRequest(completionHandler: completion)

// Handle the request
do {
    try handler.perform([request])
} catch {
    print(error)
}
```

Sample of SCNView setup:

```swift
// Setup SCNView
let scnView = SCNView()
scnView.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(scnView)
scnView.showsStatistics = true
NSLayoutConstraint.activate([
    scnView.leadingAnchor.constraint(equalTo: self.view.leadingAnchor),
    scnView.topAnchor.constraint(equalTo: self.view.topAnchor),
    scnView.bottomAnchor.constraint(equalTo: self.view.bottomAnchor),
    scnView.trailingAnchor.constraint(equalTo: self.view.trailingAnchor)
])

// Setup scene
let scene = SCNScene()
scnView.scene = scene

// Setup camera
let cameraNode = SCNNode()
let camera = SCNCamera()
cameraNode.camera = camera
scnView.scene?.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 16)

// Setup light
let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light?.type = SCNLight.LightType.ambient
ambientLightNode.light?.color = UIColor.darkGray
scnView.scene?.rootNode.addChildNode(ambientLightNode)
```

Sample of "face processing":

```swift
func drawFaceGeometry(observations: [VNFaceObservation]) {
    // An array of face nodes, one SCNNode for each detected face
    var faceNode = [SCNNode]()

    // The origin point
    let projectedOrigin = sceneView.projectPoint(SCNVector3Zero)

    // Iterate through each found face
    for observation in observations {
        // Setup a SCNNode for the face
        let face = SCNNode()

        // Setup the found bounds
        let faceBounds = VNImageRectForNormalizedRect(observation.boundingBox,
                                                      Int(self.scnView.bounds.width),
                                                      Int(self.scnView.bounds.height))

        // Verify we have landmarks
        if let landmarks = observation.landmarks {
            // Landmarks are relative to and normalized within face bounds
            let affineTransform = CGAffineTransform(translationX: faceBounds.origin.x, y: faceBounds.origin.y)
                .scaledBy(x: faceBounds.size.width, y: faceBounds.size.height)

            // Add all points as vertices
            var vertices = [SCNVector3]()

            // Verify we have points
            if let allPoints = landmarks.allPoints {
                // Iterate through each point
                for (index, point) in allPoints.normalizedPoints.enumerated() {
                    // Apply the transform to convert each point to the face's bounding box range
                    _ = index
                    let normalizedPoint = point.applying(affineTransform)
                    let projected = SCNVector3(normalizedPoint.x, normalizedPoint.y, CGFloat(projectedOrigin.z))
                    let unprojected = sceneView.unprojectPoint(projected)
                    vertices.append(unprojected)
                }
            }

            // Setup indices
            var indices = [UInt16]()
            // Add indices
            // ... Removed for brevity ...

            // Setup texture coordinates
            var coordinates = [CGPoint]()
            // Add texture coordinates
            // ... Removed for brevity ...

            // Setup texture image
            let imageWidth = 2048.0
            let normalizedCoordinates = coordinates.map { coord -> CGPoint in
                let x = coord.x / CGFloat(imageWidth)
                let y = coord.y / CGFloat(imageWidth)
                let textureCoord = CGPoint(x: x, y: y)
                return textureCoord
            }

            // Setup sources
            let sources = SCNGeometrySource(vertices: vertices)
            let textureCoordinates = SCNGeometrySource(textureCoordinates: normalizedCoordinates)

            // Setup elements
            let elements = SCNGeometryElement(indices: indices, primitiveType: .triangles)

            // Setup geometry
            let geometry = SCNGeometry(sources: [sources, textureCoordinates], elements: [elements])
            geometry.firstMaterial?.diffuse.contents = textureImage

            // Setup node
            let customFace = SCNNode(geometry: geometry)
            sceneView.scene?.rootNode.addChildNode(customFace)

            // Append the face to the face nodes array
            faceNode.append(face)
        }
    }

    // Iterate the face nodes and append to the scene
    for node in faceNode {
        sceneView.scene?.rootNode.addChildNode(node)
    }
}
```
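One detail that may matter here (hedged; the mirroring behavior is an assumption to verify per device and orientation): Vision's normalized points have their origin at the lower left of the oriented image, while UIKit views put the origin at the upper left, and the front camera delivers a mirrored feed. A hypothetical helper sketch, not code from the post:

```swift
import CoreGraphics

/// Hypothetical helper: adjusts a Vision normalized point (origin at
/// lower left) into UIKit's view space (origin at upper left),
/// optionally un-mirroring for the front camera.
func viewPoint(fromVisionPoint p: CGPoint,
               viewSize: CGSize,
               mirrored: Bool) -> CGPoint {
    // Flip Y: Vision's origin is bottom-left, UIKit's is top-left.
    var x = p.x
    let y = 1.0 - p.y
    // The front camera delivers a mirrored image; un-mirror X if needed.
    if mirrored { x = 1.0 - x }
    return CGPoint(x: x * viewSize.width, y: y * viewSize.height)
}
```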
3
0
1.8k
Oct ’20
How to play Vorbis/OGG files with swift?
Does anyone have a working example of how to play OGG files with Swift? I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift. I then used it to parse an OGG file successfully. Then I was required to use Objective-C++ to fill the PCM, because this method seems to only be available in C++, and that part hangs my app for a good 40 seconds to several minutes depending on the audio file; it then plays for about 2 seconds and crashes. I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac). I also tried using the Cricket Audio framework below. https://github.com/sjmerel/ck It has a Swift example and it can play their proprietary soundbank format, but it is also supposed to play OGG and it just doesn't do anything when trying to play OGG, as you can see in the posted issue: https://github.com/sjmerel/ck/issues/3 Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++. Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
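For the playback half of the problem, a sketch under the assumption that a libvorbis wrapper (here the hypothetical `decodeVorbis`) already yields interleaved Float32 PCM; AVAudioEngine can schedule such samples directly from Swift, which would confine Objective-C++ to the decode step:

```swift
import AVFoundation

// Hypothetical: your libvorbis wrapper returns interleaved Float32 samples.
// let (samples, sampleRate, channels) = decodeVorbis(url: oggURL)

func play(samples: [Float], sampleRate: Double, channels: AVAudioChannelCount) throws -> AVAudioEngine {
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: channels)!

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)

    let frameCount = AVAudioFrameCount(samples.count) / channels
    let buffer = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: frameCount)!
    buffer.frameLength = frameCount

    // The standard format is deinterleaved; split interleaved samples per channel.
    for ch in 0..<Int(channels) {
        let dst = buffer.floatChannelData![ch]
        for frame in 0..<Int(frameCount) {
            dst[frame] = samples[frame * Int(channels) + ch]
        }
    }

    try engine.start()
    player.scheduleBuffer(buffer, completionHandler: nil)
    player.play()
    return engine  // keep the engine alive while playing
}
```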
1
0
2.7k
Dec ’20
SceneKit log/console spamming
SceneKit has started filling my console with this log message: "Pass FloorPass is not linked to the rendering graph and will be ignored check it's input/output" Feels like I'm the only one on the planet using SceneKit, but if anyone can guess at what is happening, or the reason for this, I'd be thankful.
4
3
2k
Oct ’22
Did iOS 16's SceneKit change something about handling pixel format?
Hi, my app displays a video as a texture on a SceneKit SCNNode. Now I've just found that some videos look different in iOS 16.4 than in previous versions. The videos look paler than they should. I looked up some documents and set SCNDisableLinearSpaceRendering to true in the Info.plist; those videos then look exactly as they should, but the problem is that the other videos which already looked fine now turned different. Anyway, it seems to relate to linear vs. gamma color spaces, according to this answer (https://developer.apple.com/forums/thread/710643). The problematic videos definitely have some different color space setting or something. I am not really an expert in this field; where should I start to dig in? Or how can I just make iOS 16.4 behave the same as previous versions? It worked well for all the videos then. What was actually updated?
1
0
738
May ’23
iOS 17 SceneKit normalmap & morphtarget causes lighting/shading issue
After the iOS 17 update, objects rendered in SceneKit that have both a normal map and morph targets do not render correctly. The shading and lighting appear dark and without reflections. Using a normal map without morph targets, or having morph targets on an object without a normal map, works fine. However, the combination of the two breaks the rendering. [Screenshots in the original post: one using diffuse, normal map and a morpher; one using diffuse and normal with no morpher.]
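A speculative knob to experiment with (untested against this specific regression; `node` stands for the affected SCNNode): SCNMorpher's `unifiesNormals` controls how normals are recomputed across morph targets, which is at least in the neighborhood of the symptom:

```swift
import SceneKit

// Speculative workaround sketch (untested against this regression):
// unify normals across morph targets, in case the iOS 17 renderer
// mishandles morphed normals combined with a normal map.
if let morpher = node.morpher {
    morpher.unifiesNormals = true  // real SCNMorpher property; effect here unverified
}
```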
5
1
1.7k
Jun ’23
Xcode 15 Beta 3-6 .scn and .dae file crash
Is anyone else having issues with SceneKit .dae and/or .scn files? It seems these new beta releases are very incompatible with files that worked perfectly with previous Xcode releases, up to Xcode 14. I'm working on upgrading a simple stripped-down version of my chess game and run into strange and bogus error messages and crashes: /Users/helmut/Desktop/schachGame8423/schach2023/scntool:1:1 failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0] All tools work fine otherwise (Reality Converter exports the .dae file to other graphic formats as in prior Xcode releases), but something is missing in the current beta release of Xcode 15.
0
0
525
Aug ’23
Rotating SceneKit IBL lighting environment
I have a spherical HDR image that is being used for environment lighting in a SceneKit scene. I want to rotate the environment image. To set the environment lighting, I use the lightingEnvironment SCNMaterialProperty. This works fine, and my scene is lit using the IBL. As with all SCNMaterialProperty instances, I expect that I can use the contentsTransform property to rotate or transform the HDR. So I set it as follows:

```swift
lightingEnvironment.contentsTransform = SCNMatrix4MakeRotation((45.0).degreesAsRadians, 0.0, 1.0, 0.0)
```

My expectation is that the lighting environment would rotate 45 degrees in Y, but it doesn't change at all. Even if I throw in a completely random transform on all axes, there is no apparent change. To test whether there is a change, I added a chrome ball and a diffuse ball to my scene, and I'm comparing reflections on the chrome ball and lighting on the diffuse ball. There is no change on either. It doesn't matter where I set the contentsTransform; it doesn't work. I had intended to set it from the renderer(_:updateAtTime:) method of the SCNSceneRendererDelegate, so that I can rotate the IBL to match the point of view of the scene, but even if I transform the environment immediately after it is set, there is never a change. Is this a bug? Or am I doing something entirely wrong? Has anyone on here ever managed to get this to work?
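For reference, a minimal reproduction sketch of the setup described above; `degreesAsRadians` appears to be a custom extension in the post, so the conversion is written out here, and the asset name is assumed:

```swift
import SceneKit

let scene = SCNScene()

// Use the HDR as image-based lighting and as the visible background.
scene.lightingEnvironment.contents = "studio.hdr"  // assumed asset name
scene.background.contents = "studio.hdr"

// The transform that, per the post, appears to have no effect on the IBL:
let angle = Float(45.0 * .pi / 180.0)
scene.lightingEnvironment.contentsTransform = SCNMatrix4MakeRotation(angle, 0, 1, 0)

// By contrast, applying the same transform to the background property is one
// way to check whether contentsTransform itself is wired up in the scene.
scene.background.contentsTransform = SCNMatrix4MakeRotation(angle, 0, 1, 0)
```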
0
0
701
Nov ’23
Exporting models from Maya to SceneKit with animation
I am trying to use my animated model in Xcode with SceneKit. I exported my model from Maya with animation data in .usd format, then converted it to .usdz with Reality Converter. When I open it in the Xcode viewer it is animated and everything is fine. However, when I try to use it in my app it doesn't animate. On the other hand, when I try with the robot_walk_idle model from Apple's example models, it is animated. Maybe I am missing an option in the export settings. Thanks for any help.

```swift
import SwiftUI
import SceneKit

struct ModelView: View {
    var body: some View {
        VStack {
            SceneView(scene: SCNScene(named: "robot_walk_idle.usdz"))
        }
    }
}
```
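One thing worth checking (an assumption, not a confirmed cause): USDZ animations load as SCNAnimationPlayer objects attached to nodes, and they do not always start automatically outside the Xcode viewer. A sketch that walks the node tree and plays whatever animations it finds:

```swift
import SceneKit

/// Recursively resumes every animation attached to the scene's nodes.
func playAllAnimations(in scene: SCNScene) {
    scene.rootNode.enumerateHierarchy { node, _ in
        for key in node.animationKeys {
            if let player = node.animationPlayer(forKey: key) {
                player.play()
            }
        }
    }
}

// Usage: load the scene once, start its animations, then hand it to SceneView.
// let scene = SCNScene(named: "model.usdz")!   // hypothetical asset name
// playAllAnimations(in: scene)
```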
1
0
637
Dec ’23
SceneKit crash on iOS 17
I've found that SceneKit crashes very frequently on iOS 17, for all devices on iOS 17. Here is the crash trace:

```
Crashed: com.apple.scenekit.renderingQueue.SCNView0x15878c630
0  SceneKit  0x3eee4   C3DMatrix4x4GetAffineTransforms + 344
1  SceneKit  0x30208   C3DAdjustZRangeOfProjectionInfos + 140
2  SceneKit  0x2c0a90  C3DCullingContextSetupPointOfViewMatrices + 700
```

The attachment has the whole log (Crash Log). Does anybody know how to fix it?
1
0
433
Dec ’23
How to get SceneKit to update a node's orientation based on values that update in real time, without lagging or stuttering
Hi everyone, I'm making a small private app for one of my engineering projects. Part of this app shows a 3D model of what the project looks like in real life, based on the position value of a joint that needs to be updated in real time. I was able to import a USDZ of the exact model of the project and make the proper nodes that can rotate. However, I run into a problem where SceneKit takes forever to update the node. I'm not sure if my code just needs optimizing or if SceneKit is just not the framework to use when things in a 3D model need to be updated in real time. I've confirmed that the device receives the values in real time; it is just SceneKit that doesn't update the model in time. I'm not very good at explaining things, so I've put in as much detail as I possibly can and hope my problem is clear. I'm also pretty new to Swift and iOS development. Here is the code I'm using:

```swift
import SwiftUI
import SceneKit

struct ModelView2: UIViewRepresentable {
    @State private var eulerAngle: Float = 0.0
    @StateObject var service = BluetoothService()
    let sceneView = SCNView()

    func makeUIView(context: Context) -> SCNView {
        if let scene = SCNScene(named: "V4.usdz") {
            sceneView.scene = scene
            if let meshInstanceNode = scene.rootNode.childNode(withName: "MeshInstance", recursively: true),
               let meshInstance1Node = scene.rootNode.childNode(withName: "MeshInstance_1", recursively: true),
               let meshInstance562Node = scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true) {
                // Rotate mesh instance around its own axis
                /*
                meshInstance562Node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * service.posititonValue, z: 0)
                */
                print(meshInstance562Node.eulerAngles)
            }
        }
        sceneView.allowsCameraControl = true
        sceneView.autoenablesDefaultLighting = true
        return sceneView
    }

    func updateUIView(_ uiView: SCNView, context: Context) {
        if let scene = SCNScene(named: "V4.usdz") {
            sceneView.scene = scene
            if let meshInstanceNode = scene.rootNode.childNode(withName: "MeshInstance", recursively: true),
               let meshInstance1Node = scene.rootNode.childNode(withName: "MeshInstance_1", recursively: true),
               let meshInstance562Node = scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true) {
                let boundingBox = meshInstance562Node.boundingBox
                let pivot = SCNMatrix4MakeTranslation(
                    boundingBox.min.x + (boundingBox.max.x - boundingBox.min.x) / 2,
                    boundingBox.min.y + (boundingBox.max.y - boundingBox.min.y) / 2,
                    boundingBox.min.z + (boundingBox.max.z - boundingBox.min.z) / 2
                )
                meshInstance562Node.pivot = pivot
                meshInstance562Node.addChildNode(meshInstanceNode)
                meshInstance562Node.addChildNode(meshInstance1Node)

                var original = SCNMatrix4Identity
                original = SCNMatrix4Translate(original, 182.85785, 123.54999, 17.857864) // Translate along the Y-axis
                meshInstance562Node.transform = original

                print(service.posititonValue)
                var buffer: Float = 0.0
                if service.posititonValue != buffer {
                    meshInstance562Node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * service.posititonValue, z: 0)
                    buffer = service.posititonValue
                }
            }
        }
    }

    func rotateNodeInPlace(node: SCNNode, duration: TimeInterval, angle: Float) {
        // Create a rotation action
        let rotationAction = SCNAction.rotateBy(x: 0, y: CGFloat(angle), z: 0, duration: duration)
        // Repeat the rotation action indefinitely
        // let repeatAction = SCNAction.repeatForever(rotationAction)
        // Run the action on the node
        node.runAction(rotationAction)
        print(node.transform)
    }

    func rotate(node: SCNNode, angle: Float) {
        node.eulerAngles = SCNVector3(x: 0, y: -0.01745329 * angle, z: 0)
    }
}

#Preview {
    ModelView2()
}
```
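A likely culprit to rule out first: `updateUIView` reloads `SCNScene(named: "V4.usdz")` from disk on every update, which would dwarf the cost of the rotation itself. A sketch of the usual pattern (names such as `MeshInstance_562`, `BluetoothService` and `posititonValue` are taken from the code above), loading the scene once and mutating only the node afterwards:

```swift
import SwiftUI
import SceneKit

struct JointModelView: UIViewRepresentable {
    @ObservedObject var service: BluetoothService  // from the original post

    func makeUIView(context: Context) -> SCNView {
        let sceneView = SCNView()
        // Load the USDZ once; keep a reference to the joint node.
        if let scene = SCNScene(named: "V4.usdz") {
            sceneView.scene = scene
            context.coordinator.jointNode =
                scene.rootNode.childNode(withName: "MeshInstance_562", recursively: true)
        }
        sceneView.allowsCameraControl = true
        sceneView.autoenablesDefaultLighting = true
        return sceneView
    }

    func updateUIView(_ uiView: SCNView, context: Context) {
        // Only touch the node's transform here; no scene reloads.
        context.coordinator.jointNode?.eulerAngles =
            SCNVector3(x: 0, y: -0.01745329 * service.posititonValue, z: 0)
    }

    func makeCoordinator() -> Coordinator { Coordinator() }

    final class Coordinator {
        var jointNode: SCNNode?
    }
}
```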
1
0
600
Jan ’24
SwiftUI view not completely refreshed!
Hello fellow developers, here is something that I don't fully grasp:

1/ I have a fake SceneKit scene with two nodes, both having a light
2/ I have a small widget to explore those lights and tweak some params

In the small widget I can't update a toggle item when a new light is selected, while the other params are updated! Here is a short sample that illustrates what I am trying to resolve:

```swift
import SwiftUI
import SceneKit

class ShortScene {
    var scene = SCNScene()
    var lightNodes: [SCNNode] {
        get { scene.rootNode.childNodes(passingTest: { current, stop in current.light != nil }) }
    }

    init() {
        let light1 = SCNLight()
        light1.castsShadow = false
        light1.type = .omni
        light1.intensity = 100
        let nodelight1 = SCNNode()
        nodelight1.light = light1
        nodelight1.name = "nodeLight1"
        scene.rootNode.addChildNode(nodelight1)

        let light2 = SCNLight()
        light2.castsShadow = false
        light2.type = .ambient
        light2.intensity = 300
        let nodelight2 = SCNNode()
        nodelight2.light = light2
        nodelight2.name = "nodeLight2"
        scene.rootNode.addChildNode(nodelight2)
    }
}

extension SCNLight: ObservableObject {}
extension SCNNode: ObservableObject {}

struct LightViewEx: View {
    @ObservedObject var lightParam: SCNLight
    @ObservedObject var lightNode: SCNNode
    var bindCol: Binding<Color>
    @State var castShadows: Bool

    init(_ _lightNode: SCNNode) {
        if let _light = _lightNode.light {
            lightParam = _light
            lightNode = _lightNode
            bindCol = Binding<Color>(
                get: {
                    if let _lightcol = _lightNode.light!.color as! NSColor? {
                        return Color(_lightcol)
                    } else {
                        return Color.red
                    }
                },
                set: { newCol in _lightNode.light!.color = NSColor(newCol) }
            )
            castShadows = _lightNode.light!.castsShadow
            print("For \(lightNode.name!) : CShadows \(castShadows)")
        } else {
            fatalError("No Light attached to Node")
        }
    }

    var body: some View {
        VStack(alignment: .leading) {
            Text("Light Params")
            Picker("Type", selection: $lightParam.type) {
                Text("IES").tag(SCNLight.LightType.IES)
                Text("Ambient").tag(SCNLight.LightType.ambient)
                Text("Directionnal").tag(SCNLight.LightType.directional)
                Text("Directionnal").tag(SCNLight.LightType.directional)
                Text("Omni").tag(SCNLight.LightType.omni)
                Text("Probe").tag(SCNLight.LightType.probe)
                Text("Spot").tag(SCNLight.LightType.spot)
                Text("Area").tag(SCNLight.LightType.area)
            }
            ColorPicker("Light Color", selection: bindCol)
            Text("Intensity")
            TextField("Intensity", value: $lightParam.intensity, formatter: NumberFormatter())
            Divider()
            // Toggle("shadows", isOn: $lightParam.castsShadow ).onChange(of: lightParam.castsShadow, { lightParam.castsShadow.toggle() })
            Toggle("CastShadows", isOn: $castShadows)
                .onChange(of: castShadows) {
                    lightParam.castsShadow = castShadows
                    print("castsShadows changed to \(castShadows)")
                }
        }
    }
}

struct sceneView: View {
    @State var _lightIdx: Int = 0
    @State var shortScene = ShortScene()

    var body: some View {
        VStack(alignment: .leading) {
            if shortScene.lightNodes.isEmpty == false {
                Picker("Lights", selection: $_lightIdx) {
                    ForEach(0..<shortScene.lightNodes.count, id: \.self) { index in
                        Text(shortScene.lightNodes[index].name ?? "NoName").tag(index)
                    }
                }
                GridRow(alignment: .top) {
                    LightViewEx(shortScene.lightNodes[_lightIdx])
                }
            }
        }
    }
}

struct testUIView: View {
    var body: some View {
        sceneView()
    }
}

#Preview {
    testUIView()
}
```

Something is obviously not right! Anyone have some idea?
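The most plausible reading (hedged, not confirmed in the thread): `@State var castShadows` is seeded from the light only in `init`, and SwiftUI preserves existing `@State` storage when a view's identity is unchanged, so selecting another light re-runs `init` but keeps the old toggle state. One common fix is to tie the subview's identity to the selection so its state is rebuilt:

```swift
// Inside sceneView's body, from the sample above, with one change:
// tying the subview's identity to the selected index makes SwiftUI
// discard and recreate its @State when the selection changes.
GridRow(alignment: .top) {
    LightViewEx(shortScene.lightNodes[_lightIdx])
        .id(_lightIdx)  // new identity per selection; sketch of a fix
}
```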
1
1
484
Feb ’24
Several instances of 3D model with skinner, and duplication of weights/indices information
I have a human-like rigged 3D model in a DAE file. I want to programmatically build a scene with several instances of this model in different poses. I can extract the SCNSkinner and skeleton chain from the DAE file without problem. I have discovered that to have different poses, I need to clone the skeleton chain, and clone the SCNSkinner as well, then modify the skeleton's position. Works fine. This is done this way:

```swift
// Read the skinner from the DAE file
let skinnerNode = daeScene.rootNode.childNode(withName: "toto-base", recursively: true)! // skinner
let skeletonNode1 = skinnerNode.skinner!.skeleton!

// Adding the skinner node as a child of the skeleton node makes it easier to
// 1) clone the whole thing
// 2) add the whole thing to a scene
skeletonNode1.addChildNode(skinnerNode)

// Clone first instance to have a second instance
var skeletonNode2 = skeletonNode1.clone()

// Position and move the first instance
skeletonNode1.position.x = -3
let skeletonNode1_rightLeg = skeletonNode1.childNode(withName: "RightLeg", recursively: true)!
skeletonNode1_rightLeg.eulerAngles.x = 0.6
scene.rootNode.addChildNode(skeletonNode1)

// Position and move the second instance
skeletonNode2.position.x = 3
let skeletonNode2_leftLeg = skeletonNode2.childNode(withName: "LeftLeg", recursively: true)!
skeletonNode2_leftLeg.eulerAngles.z = 1.3
scene.rootNode.addChildNode(skeletonNode2)
```

It seems the boneWeights and boneIndices sources are duplicated for each skinner, so if I have, let's say, 100 instances, I eat a huge amount of memory for something that is constant. Is there any way to avoid the duplication of the boneWeights and boneIndices?
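One avenue to test (whether SceneKit then actually shares the underlying GPU buffers is an assumption to verify with a memory profiler): SCNSkinner has a public initializer that takes the weight and index sources explicitly, so each instance's skinner can be built by hand, passing the same SCNGeometrySource objects every time instead of cloning:

```swift
import SceneKit

// Sketch: reuse one skinner's sources across many instances.
// `templateSkinner` is the SCNSkinner extracted from the DAE file;
// `bones` are the corresponding nodes in a freshly cloned skeleton.
func makeSharedSourceSkinner(from templateSkinner: SCNSkinner,
                             bones: [SCNNode]) -> SCNSkinner {
    return SCNSkinner(
        baseGeometry: templateSkinner.baseGeometry,
        bones: bones,
        boneInverseBindTransforms: templateSkinner.boneInverseBindTransforms,
        boneWeights: templateSkinner.boneWeights,   // shared source
        boneIndices: templateSkinner.boneIndices)   // shared source
}
```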
0
1
441
Feb ’24
How can Picker or ColorPicker be used in a volumetric scenes in visionOS?
Hi, My app has a volumetric window displaying some 3D content for the user. I would like the user to be able to control the color of the material using a color picker displayed below the model in the same window, but unfortunately neither ColorPicker nor Picker is functional in volumetric scenes. Attempting to use them causes the app to crash with NSInternalInconsistencyException: Presentations are not permitted within volumetric window scenes. This seems rather limiting. Is there a way either of these components can be utilized? I could build a different "control panel" window, but it would not be attached to the model window, and it would get confusing if the user has multiple 3D windows open. Thank you
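Since the exception is specifically about presentations (ColorPicker presents a popover), one workaround sketch is a control that never presents anything: a flat row of color swatches bound to the material color. A generic SwiftUI sketch, not a confirmed visionOS recommendation:

```swift
import SwiftUI

/// A presentation-free color control: plain buttons instead of
/// ColorPicker's popover, so nothing is presented inside the volume.
struct SwatchPicker: View {
    @Binding var selection: Color
    private let swatches: [Color] = [.red, .orange, .yellow, .green, .blue, .purple, .white]

    var body: some View {
        HStack {
            ForEach(swatches, id: \.self) { color in
                Button {
                    selection = color
                } label: {
                    Circle()
                        .fill(color)
                        .frame(width: 28, height: 28)
                        .overlay(Circle().stroke(.primary, lineWidth: selection == color ? 2 : 0))
                }
                .buttonStyle(.plain)
            }
        }
    }
}
```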
3
2
695
Feb ’24
Would you recommend SceneKit or Unity to start a test 3D game on tvOS?
Hi, I'm an experienced developer on Apple platforms (having worked on iOS/tvOS projects for more than 10 years now). However, I've only worked on applications, or simple games which didn't require more than UIKit or SwiftUI. Now, I'd like to start a new project, recreating an old game on tvOS with 3D graphics. This is not a project I plan to release, only a simple personal challenge. I'm torn between starting this project with SceneKit or with Unity. On one hand, I love Apple frameworks and tools, so I guess I could progress easily with SceneKit. Also, I don't know Unity very well, and even for a simple project, I've seen that there are several restrictions on free plans (no custom splash screen, etc). On the other hand, I've read several threads (i.e. this one) making it look like SceneKit isn't going anywhere, and clearly recommending Unity because its documentation is way better and the game would be more easily portable to other platforms. Also, if I'm going to learn something new, maybe I could learn more with Unity (using a completely different platform, software and language) than I would with SceneKit and Swift stuff. What's your opinion about this? Thanks!
1
0
497
Mar ’24