Post marked as solved · 3.1k views
We have a reasonably complex mesh and need to update the vertex positions every frame using custom code running on the CPU. SceneKit does not seem set up to make this easy, since SCNGeometry is immutable. What is the easiest (yet performant) way to achieve this? So far I can see two possible approaches:
1) Create a new SCNGeometry for every frame. I suspect that this will be prohibitively expensive, but maybe not?
2) It seems that SCNProgram and its handleBinding... method would allow updating the vertex positions. But does using SCNProgram mean that we have to write all our own shaders from scratch? Or can we still use the default SceneKit vertex and fragment shaders even when using SCNProgram?
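For approach 1, a quick sanity check is worth running before reaching for shaders: rebuilding the geometry in the renderer delegate is often fast enough for moderate vertex counts, because the topology (the SCNGeometryElement) and the materials can be reused as-is. A minimal sketch, assuming iOS (SCNVector3 uses Float there; macOS uses CGFloat) and a hypothetical sine-wave displacement standing in for your own CPU vertex code:

```swift
import SceneKit

// Hypothetical stand-in for your per-frame CPU vertex update.
func displaced(_ base: [SCNVector3], at time: Float) -> [SCNVector3] {
    base.map { v in
        SCNVector3(x: v.x, y: v.y + 0.1 * sin(v.x * 4 + time), z: v.z)
    }
}

final class MeshUpdater: NSObject, SCNSceneRendererDelegate {
    let node: SCNNode
    let baseVertices: [SCNVector3]
    let element: SCNGeometryElement   // topology is fixed, so it can be reused

    init(node: SCNNode, vertices: [SCNVector3], element: SCNGeometryElement) {
        self.node = node
        self.baseVertices = vertices
        self.element = element
    }

    func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
        // Rebuild only the vertex source each frame; SceneKit copies the data.
        let source = SCNGeometrySource(vertices: displaced(baseVertices, at: Float(time)))
        let geometry = SCNGeometry(sources: [source], elements: [element])
        geometry.materials = node.geometry?.materials ?? []
        node.geometry = geometry
    }
}
```

If profiling shows this is too slow, the SCNProgram route with a vertex buffer is the next step, but it does mean supplying your own shaders.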
Post marked as unsolved · 1.1k views
Hello,
I'm trying to render a point cloud using SceneKit (ultimately for ARKit). However, I'm finding that the pointSize (and minimumPointScreenSpaceRadius) properties are ignored when I modify the SCNGeometryElement. No matter what pointSize I put in, the points are the same size. This is roughly how my code is laid out. I'm wondering if I'm doing something wrong with my SCNGeometryElement setup?
var index: Int32 = 0
for x_pos in stride(from: start , to: end, by: increment) {
for y_pos in stride(from: start , to: end, by: increment) {
for z_pos in stride(from: start , to: end, by: increment) {
let pos:SCNVector3 = SCNVector3(x: x_pos, y: y_pos, z: z_pos)
positions.append(pos)
normals.append(normal)
indices.append(index)
index = index + 1
}
}
}
let pt_positions : SCNGeometrySource = SCNGeometrySource.init(vertices: positions)
let n_positions: SCNGeometrySource = SCNGeometrySource.init(normals: normals)
let indexData = indices.withUnsafeBufferPointer { Data(buffer: $0) } // indices is [Int32]
let elements = SCNGeometryElement(data: indexData, primitiveType: .point, primitiveCount: indices.count, bytesPerIndex: MemoryLayout<Int32>.size)
elements.pointSize = 10.0 //being ignored
elements.minimumPointScreenSpaceRadius = 10 //also being ignored
let pointCloud = SCNGeometry (sources: [pt_positions, n_positions], elements: [elements])
pointCloud.firstMaterial?.lightingModel = SCNMaterial.LightingModel.constant
let ptNode = SCNNode(geometry: pointCloud)
scene.rootNode.addChildNode(ptNode)
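One thing worth checking, based on how the three point-size properties interact: the rendered size is clamped between minimumPointScreenSpaceRadius and maximumPointScreenSpaceRadius, so if the maximum is left at its default, changes to pointSize can appear to be ignored. A self-contained sketch of the element setup with all three set together (the specific values are just placeholders):

```swift
import SceneKit

let indices: [Int32] = [0, 1, 2]
let indexData = indices.withUnsafeBufferPointer { Data(buffer: $0) }
let element = SCNGeometryElement(data: indexData,
                                 primitiveType: .point,
                                 primitiveCount: indices.count,
                                 bytesPerIndex: MemoryLayout<Int32>.size)
// pointSize is in world-space units; the two radius properties clamp the
// rendered size in screen-space points. Setting only pointSize can look
// like it's ignored if the maximum clamp stays at its default.
element.pointSize = 0.05
element.minimumPointScreenSpaceRadius = 2
element.maximumPointScreenSpaceRadius = 20
```

Also note that bytesPerIndex must match the element type of the index array (Int32 → 4 bytes), otherwise the element data is misread.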
Post marked as unsolved · 3.9k views
I have an idea for a rendering method which I thought would be a good use for SceneKit and Metal. I have figured out how to do DRAW_QUAD and DRAW_SCENE shaders which work. But I am repeatedly hitting problems with a lack of documentation.
I want a scene shader technique to render out a special buffer. This shader needs some custom uniform parameters for each draw call. My technique associates this uniform with a symbol, and SCNTechnique provides a handy callback solution, handleBindingOfSymbol( ): a block is invoked at each draw call, allowing my code to supply custom parameters... except this is an OpenGL-only call. The block is never called for a Metal implementation, and there is no direct Metal equivalent. Is there a way round this? Or should I just leave Metal alone for a year?
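Under Metal, one route that may work is pushing the value into the technique itself via setObject(_:forKeyedSubscript:), updating it per frame (e.g., from the renderer delegate) rather than per draw call. That's a weaker granularity than the OpenGL callback, so it only fits if the uniform doesn't need to vary within a frame. A sketch, where the symbol name "myUniform" and the vec4 type are assumptions standing in for whatever your technique plist declares (iOS types shown):

```swift
import SceneKit

// Assumes `technique` was created from a plist whose "symbols" dictionary
// declares a vec4 symbol named "myUniform" (the name is hypothetical).
func updateTechniqueUniform(_ technique: SCNTechnique, time: TimeInterval) {
    let value = SCNVector4(x: Float(sin(time)), y: 0, z: 0, w: 1)
    technique.setObject(NSValue(scnVector4: value),
                        forKeyedSubscript: "myUniform" as NSCopying)
}
```

Call this from renderer(_:updateAtTime:) so the symbol is refreshed before each frame is drawn.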
Post marked as unsolved · 67 views
Hello,
I am using the camera's fieldOfView property to "zoom" in on a particular node in a SceneKit scene that is embedded in a UIView.
I am changing the value of sceneView.pointOfView?.camera?.fieldOfView which works fine.
However I am unable to animate that change, which results in a jarring transition.
I have tried to create a custom SCNAction attached to the pointOfView node without success.
I have also tried an SCNTransaction to animate the change of value, and it didn't work either.
Any idea how I can achieve this since xFov is deprecated?
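Two approaches that may work (both are sketches; the easing choice is mine): wrapping the property change in an SCNTransaction, or attaching an explicit CABasicAnimation to the camera node using the "camera.fieldOfView" key path:

```swift
import SceneKit

// Implicit animation: the transaction must wrap the property change itself.
func zoom(camera: SCNCamera, to fov: CGFloat, duration: CFTimeInterval) {
    SCNTransaction.begin()
    SCNTransaction.animationDuration = duration
    SCNTransaction.animationTimingFunction = CAMediaTimingFunction(name: .easeInEaseOut)
    camera.fieldOfView = fov
    SCNTransaction.commit()
}

// Explicit animation on the node's key path. Note the model value is not
// changed by the animation, so set camera.fieldOfView to the target as well
// (or keep the animation around with fillMode .forwards, as here).
func zoomExplicit(cameraNode: SCNNode, to fov: CGFloat, duration: CFTimeInterval) {
    let anim = CABasicAnimation(keyPath: "camera.fieldOfView")
    anim.toValue = fov
    anim.duration = duration
    anim.fillMode = .forwards
    anim.isRemovedOnCompletion = false
    cameraNode.addAnimation(anim, forKey: "zoom")
}
```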
Post marked as unsolved · 110 views
Hi,
Is there any way to integrate SceneKit object gizmos in Metal, or do I have to implement them from scratch? I want to rotate, scale and translate objects in a simple way with a Blender-style gizmo.
Thanks.
Post marked as unsolved · 2.3k views
The Metal compiler, when invoked by SceneKit, is generating a lot of spurious warnings in iOS / iPadOS 14 Beta 7 & 8. This seems to be causing a significant degradation in performance for our SceneKit/ARKit-based app.
Is there any way to disable this unnecessary Metal compiler logging? I tried setting an MTLCOMPILERFLAGS=-w environment variable, but it didn't seem to have any effect.
Feedback ID FB8618939.
Logging looks like this:
2020-09-09 14:23:33.700122-0700 App[4672:1638397] [Metal Compiler Warning] Warning: Compilation succeeded with:
programsource:95:26: warning: unused function 'reduceop'
static inline float4 reduceop(float4 d0, float4 d1)
^
programsource:581:26: warning: unused variable 'scnshadowsamplerordz'
static constexpr sampler scnshadowsamplerordz = sampler(coord::normalized, filter::linear, mipfilter::none, address::clamptoedge, comparefunc::greaterequal);
^
2020-09-09 14:23:33.962519-0700 App[4672:1638397] [Metal Compiler Warning] Warning: Compilation succeeded with:
programsource:95:26: warning: unused function 'reduceop'
static inline float4 reduceop(float4 d0, float4 d1)
^
programsource:581:26: warning: unused variable 'scnshadowsamplerordz'
static constexpr sampler scnshadowsamplerordz = sampler(coord::normalized, filter::linear, mipfilter::none, address::clamptoedge, comparefunc::greaterequal);
Post marked as unsolved · 2.4k views
Looks like the particle system template for SceneKit is no longer available in Xcode 11; I can only see the SpriteKit version. Is this by design or a mistake, or is there another way to create one?
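You can still build a particle system entirely in code, which sidesteps the missing template. A minimal sketch, assuming iOS and a hypothetical "spark" image in the asset catalog:

```swift
import SceneKit

let scene = SCNScene()

// Configure the system in code instead of via the .scnp editor template.
let particles = SCNParticleSystem()
particles.birthRate = 200
particles.particleLifeSpan = 2
particles.particleSize = 0.05
particles.particleVelocity = 1.5
particles.emitterShape = SCNSphere(radius: 0.2)   // emit from a sphere surface
particles.particleImage = UIImage(named: "spark") // hypothetical asset name

let emitterNode = SCNNode()
emitterNode.addParticleSystem(particles)
scene.rootNode.addChildNode(emitterNode)
```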
Post marked as unsolved · 93 views
Is it possible to set up a camera to use two-point perspective? I only found perspective and orthographic options in the SceneKit editor. Two-point perspective is standard functionality available in every 3D package and an integral feature for correctly setting up camera views.
I can't include links here, so please google '3ds Max Camera Correction Modifier' and click on the first link. It explains how two-point perspective works.
If the current SceneKit camera does not have this functionality, is Apple planning to add it, and if so, when would that happen?
Thanks,
Husam
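In case it helps in the meantime: since SCNCamera.projectionTransform is settable, one way to approximate two-point perspective is the classic shift-lens trick: keep the camera level (no pitch), so vertical lines project vertically, and re-frame the subject by offsetting the projection matrix instead of tilting the camera. This is a sketch under that assumption (iOS types shown); which matrix element to offset, and its sign, may need adjusting for your setup:

```swift
import SceneKit

// Keep the camera level, then shift the projection vertically to frame
// the subject. m32 is the vertical off-center term with SceneKit's
// row-vector matrix layout (assumption; verify against your scene).
func applyTwoPointPerspective(cameraNode: SCNNode, verticalShift: Float) {
    cameraNode.eulerAngles.x = 0   // no pitch: verticals stay vertical
    guard let camera = cameraNode.camera else { return }
    var proj = camera.projectionTransform
    proj.m32 += verticalShift
    camera.projectionTransform = proj   // switches the camera to this custom projection
}
```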
Post marked as unsolved · 170 views
I've seen two crashes in the fox sample code already for simple SCNVector functions.
The enemies also don't move - I suppose that's a GameplayKit error.
I can see glimpses of other possible problems too - and of course the intermittent display stutter is there. (You know, the one on every OS after High Sierra for any Metal-backed view.)
I tried both M1-native and under Rosetta.
Unless I have some wrong setting, SceneKit's SCNVector math seems crippled at the moment.
Already sent reports...
Can someone else confirm the crashes?
Post marked as solved · 474 views
I am trying to use the new LiDAR scanner on the iPhone 12 Pro in order to gather point clouds which will later be used as input data for neural networks.
Since I am relatively new to the field of computer vision and augmented reality, I started by looking at the official code examples (e.g., Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/environmental_analysis/visualizing_a_point_cloud_using_scene_depth) and the documentation of ARKit, SceneKit, Metal and so. However, I still do not understand how to get the LiDAR data.
I found another thread in this forum (Exporting Point Cloud as 3D PLY Model - https://developer.apple.com/forums/thread/658109) and the given solution works so far. However, I do not understand that code in detail, unfortunately. So I am not sure if this gives me really the raw LiDAR data or if some (internal) fusion with other (camera) data is happening, since I could not figure out where the data comes from exactly in the example.
Could you please give me some tips or code examples on how to work with/access the LiDAR data? It would be very much appreciated!
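The depth API that sample is built on can also be read directly. One caveat that answers part of your question: as far as I can tell, ARKit only exposes the processed scene depth (LiDAR measurements fused with camera data), not the raw LiDAR samples. A minimal sketch of reading the per-frame depth map:

```swift
import ARKit

// Enable depth first on the session configuration:
//   configuration.frameSemantics.insert(.sceneDepth)
final class DepthReceiver: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthData = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = depthData.depthMap          // 32-bit float depth, in meters
        let confidence: CVPixelBuffer? = depthData.confidenceMap  // per-pixel ARConfidenceLevel

        let width = CVPixelBufferGetWidth(depthMap)    // typically 256
        let height = CVPixelBufferGetHeight(depthMap)  // typically 192
        // To build a point cloud, unproject each depth sample through the
        // camera intrinsics (frame.camera) - this is what the "Visualizing
        // a Point Cloud" sample's unprojection shader does.
        _ = (width, height, confidence)
    }
}
```

There is also frame.smoothedSceneDepth (with .smoothedSceneDepth frame semantics) if you want temporally smoothed values.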
Post marked as solved · 142 views
I have an MTKView in which I want to first render a SceneKit scene onto a Render Texture and then pass this render texture into my own post-processing stage.
It works fine, I see my scene rendered correctly with all the objects, but no shadows are being rendered, and I've worked out that it's the SCNRenderer call that's not drawing them (the same scene in an SCNView, ARSCNView etc. shows shadows fine). I.e., the nodes are casting correctly and the light is set to cast shadows, so it's not a scene issue.
Presumably I've not set up the descriptor with extra details that it needs for shadow rendering, but I'm not certain what the SCNRenderer actually needs here? Sadly the documentation isn't revealing a great deal to me on this subject.
Any pointers would be much appreciated, no matter how general. I can't even find examples of the SCNRenderer being used in similar situations.
let descriptor = MTLRenderPassDescriptor()
descriptor.colorAttachments[0].loadAction = .clear
descriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
descriptor.colorAttachments[0].texture = metalRenderTexture
descriptor.colorAttachments[0].storeAction = .store
if let commandBuffer = commandQueue.makeCommandBuffer() {
commandBuffer.label = "3DSceneBuffer"
if isDrawingPreVizScene {
sceneKitRenderer.render(withViewport: CGRect(x: 0, y: 0, width: metalRenderTexture!.width, height: metalRenderTexture!.height),
commandBuffer: commandBuffer,
passDescriptor: descriptor)
}
commandBuffer.commit()
}
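One thing that may be worth trying (hedged: based on reports I've seen, not documentation): the time-less render(withViewport:commandBuffer:passDescriptor:) variant can skip the scene update that the shadow passes depend on. Passing an explicit frame time via render(atTime:viewport:commandBuffer:passDescriptor:) reportedly restores shadows:

```swift
import SceneKit
import Metal
import QuartzCore

// Sketch: same render-to-texture call as above, but with an explicit time
// so SceneKit runs its full update (including shadow passes) before drawing.
func renderScene(using renderer: SCNRenderer,
                 commandBuffer: MTLCommandBuffer,
                 descriptor: MTLRenderPassDescriptor,
                 texture: MTLTexture) {
    renderer.render(atTime: CACurrentMediaTime(),
                    viewport: CGRect(x: 0, y: 0,
                                     width: texture.width,
                                     height: texture.height),
                    commandBuffer: commandBuffer,
                    passDescriptor: descriptor)
}
```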
Post marked as solved · 224 views
I am porting a functioning Swift App to SwiftUI with the ambitious intention of Multiple platform coding, so the project was created as such.
I've worked out every issue except as the title suggests.
My pertinent code:
import SwiftUI
import SceneKit
struct ContentView: View {
var scene: SCNScene = { () -> SCNScene in // DO NOT REPLACE OR MODIFY THE FIRST AND LAST LINES OF THIS VAR DECLARATION, e.g., "var scene: SCNScene { ... }", else initScene(newScene) FAILS to create children!
let newScene = SCNScene()
initScene(newScene) // adds ChildNodes named: "precession" & "nuclearSpin"
let newCamera = SCNNode()
newCamera.position = SCNVector3(x: 0, y: 0 ,z: 40)
newCamera.name = "cameraNode"
newCamera.camera = SCNCamera()
newScene.rootNode.addChildNode(newCamera)
newScene.fogDensityExponent = 0.0
return newScene
}()
@EnvironmentObject var structures:Structures
@State private var Ncv = Int()
@State private var indexMomentcv = Int()
var body: some View {
ZStack{
SceneView(
scene: scene,
pointOfView: scene.rootNode.childNode(withName: "cameraNode", recursively: false),
options: [.allowsCameraControl, .autoenablesDefaultLighting, .temporalAntialiasingEnabled]
).colorInvert()
VStack(alignment: .trailing){
Group{
HStack{
VStack{ // ... and so on
I have verified the existence of all nodes created for the scene, i.e., the objects to be seen, using these useful functions:
func printChildNodeHierarchy(node: SCNNode) { node.enumerateChildNodes { (anyNode, _) in print(anyNode) } } // SUPER handy
func printChildNodeTransform(node: SCNNode) { node.enumerateChildNodes { (anyNode, _) in print(anyNode.name!, "\t", anyNode.simdTransform, "\t", anyNode.scale) } }
func printChildNodeAction(node: SCNNode) { node.enumerateChildNodes { (anyNode, _) in print(anyNode.name!, "\t", anyNode.hasActions, "\t", anyNode.actionKeys) } }
...which print in the debugging area. I have verified the materials assignments to the elements of those nodes. So, everything is in place as in the functioning Swift/macOS App but I just don't see the objects.
What I've tried:
Since I run in the DarkMode theme, I have cycled through the combinations of Dark/Light, with and without the .colorInvert() modifier, and swapping ZStack with HStack.
Some combo's do wash-out, but still no objects.
I zero'd out the fog thinking it might be too foggy. Not.
I've created a scene.scn file as a resource, applied the above attributes to it, and referenced it with scene = SCNScene(named: "scene.scn"), but get the same non-result.
I don't know if some checkbox is errant in the build/target settings; I wouldn't know if it was. I'm not a professional programmer.
Any thoughts are appreciated. I've been stuck for days.
oh yeah: as of this posting I am up to date on all Apple software, running 2018 MacMini/Intel/SSD
Post marked as unsolved · 622 views
Using a SceneKit UIViewRepresentable (code block 1 below), we could add a tap gesture recognizer and hit test as shown.
With the new SceneView for SceneKit/SwiftUI (code block 2), how do we add a tap gesture recognizer?
I found this API, but it's not clear how to use it:
https://developer.apple.com/documentation/scenekit/sceneview/3607839-ontapgesture
import SwiftUI
import SceneKit
import UIKit
import QuartzCore
struct SceneView: UIViewRepresentable {
func makeUIView(context: Context) -> SCNView {
let view = SCNView(frame: .zero)
let scene = SCNScene(named: "ship")!
view.allowsCameraControl = true
view.scene = scene
// add a tap gesture recognizer
let tapGesture = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
view.addGestureRecognizer(tapGesture)
return view
}
func handleTap(_ gestureRecognize: UIGestureRecognizer) {
// retrieve the SCNView that was tapped
let view = gestureRecognize.view as! SCNView
// check what nodes are tapped
let p = gestureRecognize.location(in: view)
let hitResults = view.hitTest(p, options: [:])
// check that we clicked on at least one object
if hitResults.count > 0 {
// retrieved the first clicked object
let result = hitResults[0]
// get material for selected geometry element
let material = result.node.geometry!.materials[(result.geometryIndex)]
// highlight it
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.5
// on completion - unhighlight
SCNTransaction.completionBlock = {
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.5
material.emission.contents = UIColor.black
SCNTransaction.commit()
}
material.emission.contents = UIColor.green
SCNTransaction.commit()
}
}
func updateUIView(_ view: SCNView, context: Context) {
}
}
import SwiftUI
import SceneKit
struct ContentView: View {
var scene = SCNScene(named: "ship.scn")
var cameraNode: SCNNode? {
scene?.rootNode.childNode(withName: "camera", recursively: false)
}
var body: some View {
SceneView(
scene: scene,
pointOfView: cameraNode,
options: []
)
.allowsHitTesting(/*@START_MENU_TOKEN@*/true/*@END_MENU_TOKEN@*/)
}
}
struct ContentView_Previews: PreviewProvider {
static var previews: some View {
ContentView()
}
}
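Until SceneView grows a hit-testing API, one workaround is to keep a UIViewRepresentable wrapper and route the gesture through a Coordinator, since a #selector target has to be an NSObject (a plain struct, as in code block 1, cannot be one). A minimal sketch (the type names are mine):

```swift
import SwiftUI
import SceneKit

struct TappableSceneView: UIViewRepresentable {
    let scene: SCNScene

    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> SCNView {
        let view = SCNView(frame: .zero)
        view.scene = scene
        view.allowsCameraControl = true
        // The coordinator, not the struct, is the gesture target.
        let tap = UITapGestureRecognizer(target: context.coordinator,
                                         action: #selector(Coordinator.handleTap(_:)))
        view.addGestureRecognizer(tap)
        return view
    }

    func updateUIView(_ view: SCNView, context: Context) {}

    final class Coordinator: NSObject {
        @objc func handleTap(_ gesture: UIGestureRecognizer) {
            guard let view = gesture.view as? SCNView else { return }
            let p = gesture.location(in: view)
            if let hit = view.hitTest(p, options: [:]).first {
                print("tapped node:", hit.node.name ?? "unnamed")
            }
        }
    }
}
```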
Post marked as unsolved · 113 views
I'm doing an experiment integrating SwiftUI views as materials for an SCNPlane node in a SceneKit scene.
It is working perfectly in iOS using UIHostingController with the following code:
Swift
func createInfoPanel() {
let panel = SCNPlane(width: 6.0, height: 6.0)
let panelNode = SCNNode(geometry: panel)
let infoPanelHost = SCNHostingController(rootView: helloWorld)
infoPanelHost.view.isOpaque = false
infoPanelHost.view.backgroundColor = SCNColor.clear
infoPanelHost.view.frame = CGRect(x: 0, y: 0, width: 256, height: 256)
panel.materials.first?.diffuse.contents = infoPanelHost.view
panel.materials.first?.emission.contents = infoPanelHost.view
panel.materials.first?.emission.intensity = 3.0
[... BillBoardConstraint etc here ...]
addNodeToScene(panelNode)
}
Yet, when I tried to apply the same to macOS, I don't seem to be able to make the view created by NSHostingController transparent.
Setting infoPanelHost.view.isOpaque = false produces an error saying isOpaque is read-only and can't be set.
I tried subclassing NSHostingController and overriding viewWillAppear to try and make the view transparent / non-opaque, to no avail.
Swift
override func viewWillAppear() {
super.viewWillAppear()
self.view.wantsLayer = true
self.view.layer?.backgroundColor = NSColor.clear.cgColor
self.view.layer?.isOpaque = false
self.view.opaqueAncestor?.layer?.backgroundColor = NSColor.clear.cgColor
self.view.opaqueAncestor?.layer?.isOpaque = false
self.view.opaqueAncestor?.alphaValue = 0.0
self.view.alphaValue = 0.0
self.view.window?.isOpaque = false
self.view.window?.backgroundColor = NSColor.clear
}
I tried setting everything I could think of to non-opaque, as you can see, and still the panels are opaque, show no info, and obscure the 3D entity they should overlay...
Can someone please advise?
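One AppKit-specific angle that may help: on macOS, NSView.isOpaque is read-only by design and is meant to be *overridden* by subclasses rather than set from outside. So instead of fighting the controller's view, you could subclass NSHostingView and hand that view to the material directly. A sketch under that assumption (the class name is mine):

```swift
import SwiftUI
import AppKit

// A hosting view that declares itself non-opaque and clears its layer.
final class TransparentHostingView<Content: View>: NSHostingView<Content> {
    override var isOpaque: Bool { false }

    required init(rootView: Content) {
        super.init(rootView: rootView)
        wantsLayer = true
        layer?.backgroundColor = NSColor.clear.cgColor
    }

    @available(*, unavailable)
    required init?(coder: NSCoder) { fatalError("init(coder:) is not supported") }
}

// Usage sketch, mirroring the iOS code above:
// let hostView = TransparentHostingView(rootView: helloWorld)
// hostView.frame = CGRect(x: 0, y: 0, width: 256, height: 256)
// panel.materials.first?.diffuse.contents = hostView
```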
Post marked as solved · 118 views
It is unclear to me how this whole thing should be implemented in SwiftUI.
Currently:
After an input (for example moving the camera) the renderer function is called correctly.
Goal:
The renderer function should also be called without needing input.
Setup:
struct ContentView: View {
@ObservedObject var gameData : GameViewModel
var body: some View {
SceneView(
scene: gameData.scene,
pointOfView: gameData.camera,
options: [
.allowsCameraControl
],
delegate: gameData
)
}
}
class GameViewModel: NSObject, ObservableObject { ... }
extension GameViewModel: SCNSceneRendererDelegate {
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) { ... }
}
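If I'm reading SceneView.Options right, it includes a rendersContinuously flag; adding it should make SceneKit redraw (and call your delegate) every frame without needing input. A sketch reusing the post's GameViewModel:

```swift
import SwiftUI
import SceneKit

struct GameView: View {
    @ObservedObject var gameData: GameViewModel
    var body: some View {
        SceneView(
            scene: gameData.scene,
            pointOfView: gameData.camera,
            options: [
                .allowsCameraControl,
                .rendersContinuously   // redraw every frame; the delegate fires without input
            ],
            delegate: gameData
        )
    }
}
```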