Post not yet marked as solved
Is it possible to use custom clipping in SceneKit? Meaning, can I somehow enable "GL_CLIP_DISTANCE0" and write to "gl_ClipDistance" in the fragment shader? Or do I have to provide my own renderer via the renderer delegate?
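There is no exposed equivalent of gl_ClipDistance in SceneKit as far as I know, but a common workaround (a sketch, untested; the clip plane y = 0 is chosen arbitrarily here) is a fragment shader modifier that discards fragments on one side of a plane:

```swift
import SceneKit

// Fragment shader modifier (Metal snippet) that emulates a clip plane by
// discarding fragments below y = 0 in view space.
// Assumption: reading _surface.position at the fragment entry point is valid.
let clipModifier = """
if (_surface.position.y < 0.0) {
    discard_fragment();
}
"""

let material = SCNMaterial()
material.shaderModifiers = [.fragment: clipModifier]
```

This keeps SceneKit's built-in renderer, so no renderer delegate is needed; the trade-off is per-fragment discard rather than true hardware clip planes.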
Post not yet marked as solved
How can you set the resolution in SceneKit? I would like to render the SceneKit view at half the resolution and then scale it up. (I'm hoping this will give a performance boost to my game at the cost of non-Retina graphics.)
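One way this is often done on iOS (a sketch, untested; the macOS branch is a no-op placeholder so the snippet compiles everywhere) is to lower the view's contentScaleFactor, so SceneKit renders fewer pixels and the system scales the result up:

```swift
import SceneKit
#if canImport(UIKit)
import UIKit
#endif

// Halve the rendering resolution of an SCNView by halving its content scale.
// contentScaleFactor is a UIKit API; there is no direct macOS equivalent.
func setHalfResolutionScale(on scnView: SCNView) -> CGFloat {
    #if canImport(UIKit)
    let target = UIScreen.main.scale * 0.5   // e.g. 1.0 on a 2x display
    scnView.contentScaleFactor = target
    return target
    #else
    return 1.0   // placeholder so the function compiles on macOS
    #endif
}
```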
Post not yet marked as solved
I've attached an SCNProgram to a SceneKit geometry, and I'm trying to pass uniforms to the fragment shader. In my simple code snippet I just pass the output color to the fragment shader as a uniform, which returns it as its output value. I've already tested the shaders and they work, in the sense that I can successfully rotate an object in the vertex shader, draw an object in a different color in the fragment shader, and so on. The problem is when I pass the uniforms. This is my fragment shader:

struct Uniforms
{
    float4 color;
};

fragment float4 myFragment(MyVertexOutput in [[ stage_in ]],
                           constant Uniforms& uniforms [[buffer(2)]])
{
    return uniforms.color;
}

And this is how I try to pass the uniforms in my SceneKit + Swift code:

SCNTransaction.begin()
cube.geometry?.setValue(NSValue(SCNVector4: SCNVector4(0.0, 1.0, 0.0, 1.0)), forKey: "uniforms.color")
SCNTransaction.commit()

But my object (a cube) is not even drawn (it's black), and I get this error:

2016-04-01 01:00:34.485 Shaded Cube[30266:12687154] SceneKit: error, missing buffer [-1/0]
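One plausible fix (a sketch; the struct layout is assumed to match the Metal side): bind the whole buffer under the key "uniforms", the Metal argument's name, as raw bytes rather than an NSValue under a "uniforms.color" key path:

```swift
import SceneKit
import simd

// Swift mirror of the Metal `Uniforms` struct (layout assumed to match).
struct Uniforms {
    var color: simd_float4
}

// Bind raw bytes under the key "uniforms" - the name of the shader's
// buffer argument - instead of a member path like "uniforms.color".
func bindColor(_ color: simd_float4, to material: SCNMaterial) {
    var uniforms = Uniforms(color: color)
    let data = Data(bytes: &uniforms, count: MemoryLayout<Uniforms>.stride)
    material.setValue(data, forKey: "uniforms")
}
```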
Post not yet marked as solved
Hi, I'm trying to understand what SceneKit does to load assets — when it loads them, where it loads them, and so on. I cannot find detailed documentation on this, but maybe I'm looking in the wrong place. The best I've found so far is the discussion sections in the API docs for SCNSceneRenderer.prepare(_:withCompletionHandler:) and SCNSceneRenderer.prepare(_:shouldAbortBlock:). Both of these methods have discussion sections that say the following:

By default, SceneKit lazily loads resources onto the GPU for rendering. This approach uses memory and GPU bandwidth efficiently, but can lead to stutters in an otherwise smooth frame rate when you add large amounts of new content to an animated scene. To avoid such issues, use this method to prepare content for drawing before adding it to the scene. You can call this method on a secondary thread to prepare content asynchronously. SceneKit prepares all content associated with the object parameter you provide. If you provide an SCNMaterial object, SceneKit loads any texture images assigned to its material properties. If you provide an SCNGeometry object, SceneKit loads all materials attached to the geometry, as well as its vertex data. If you provide an SCNNode or SCNScene object, SceneKit loads all geometries and materials associated with the node and all its child nodes, or with the entire node hierarchy of the scene. ... You can observe the progress of this operation with the Progress class. For details, see Progress.

This raises more questions for me, some of them probably pretty basic (I'm no expert in 3D graphics programming), and some maybe not so dumb:
Does "loads resources onto the GPU" mean it is loading the assets into dedicated memory separate from normal RAM? Or into normal RAM reserved for the GPU? Or into some kind of memory that's actually on the GPU? (This is the dumb question, I bet.)
How many assets can be loaded ahead of time in this way? What happens if I try to load more than can fit?
If SceneKit uses "a secondary thread to prepare content asynchronously", what happens if I try to access the content before the background loading operation completes? Does my access block or fail?
If I can observe the operation with an NSProgress object, how come these methods do not return NSProgress instances? Or how come SCNSceneRenderer does not provide an accessor that returns an NSProgress instance?

I'd appreciate any insight into these questions, or pointers to WWDC videos, documentation, or third-party books that address them in enough detail. Thanks!
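For reference, the prepare API itself is straightforward to call; a minimal sketch (the view and node names are invented):

```swift
import SceneKit

// Build some content that is not yet part of the scene.
let view = SCNView(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
view.scene = SCNScene()
let heavyNode = SCNNode(geometry: SCNSphere(radius: 1))

// Upload its resources up front; add it to the scene only once that is done,
// so the first frame that shows it does not stutter.
view.prepare([heavyNode]) { success in
    if success {
        view.scene?.rootNode.addChildNode(heavyNode)
    }
}
```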
Post not yet marked as solved
I know a similar question was asked here (https://forums.developer.apple.com/thread/27681), but it doesn't give an example of how to do it, which is what I'm after, as I can't work it out from the information provided. I am trying to work out how I can change the volume of an SCNAudioPlayer in real time.

Currently, I have an SCNAudioSource connected to an SCNAudioPlayer. This audio player is then assigned to an SCNNode so that my sound makes use of SceneKit's spatial audio processing. As it stands, I am able to change the volume of each SCNNode using SCNAudioSource.volume, triggered by the Boolean variable vol. An extract of my code for this is shown below:

(audioSource, audioCount) = soundFileSelect(audioFileIndex: 0)
audioSource?.isPositional = true
if vol {
    audioSource?.volume = 1.0
} else {
    audioSource?.volume = 0.0
}
let audioPlayer = SCNAudioPlayer(source: audioSource!)
geometryNode.addAudioPlayer(audioPlayer)
let play = SCNAction.playAudio(audioSource!, waitForCompletion: true)
let loopPlay = SCNAction.repeatForever(play)
geometryNode.runAction(loopPlay)

However, this only changes the default volume setting of the audio source, so it only takes effect when the node is spawned. I am trying to change the volume in real time, after a button press. I feel I am missing something very simple, and have trawled the internet for documentation and examples, but am really struggling. From Apple's API documentation I am fairly certain it can be done. As far as I can tell, you have to use the audioNode property of the SCNAudioPlayer, which adopts the AVAudioMixing protocol; using the AVAudioMixing volume property should be what I need. However, for the life of me I can't work out how to implement this in code within my current setup. I can access the main mixer's volume/outputVolume such as:

audioPlayer.audioNode?.engine?.mainMixerNode.outputVolume = 0

but this doesn't seem to have any effect. How do the AV and SCN components link together? Any help or examples of how one goes about implementing this would be hugely appreciated! Thanks in advance.
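One approach that appears consistent with the docs (a sketch, untested against spatial playback): cast each player's audioNode to AVAudioMixing and set its volume whenever the button fires, rather than touching the source or the main mixer:

```swift
import SceneKit
import AVFoundation

// Change playback volume of all players on a node in real time, by going
// through AVAudioMixing on the underlying AVAudioNode.
func setVolume(_ volume: Float, on node: SCNNode) {
    for player in node.audioPlayers {
        (player.audioNode as? AVAudioMixing)?.volume = volume
    }
}
```

The mainMixerNode.outputVolume route affects the whole engine output, while AVAudioMixing.volume on the player's own node is per-source, which is usually what a per-node mute button wants.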
Post not yet marked as solved
I want to take a SceneKit view, which is normally set up from a .xib file like so:

SCNView *sceneView = (SCNView *)self.view;

This scene view is then generally configured in the awakeFromNib method, from what I have seen. But instead of the view being created in the view controller, I want to display it in the canvas of Motion 5, which is reached via the FxPlug SDK through the renderOutput method in the FxPlug API, as seen in the signature below. The outputImage parameter carries the width and the height of the canvas to use in Motion:

- (BOOL)renderOutput:(FxImage *)outputImage
            withInfo:(FxRenderInfo)renderInfo

I want to be able to use the SceneKit SDK and the functionality it provides, but display what I create inside Motion 5 via the FxPlug framework. Is there a way to accomplish this? Is it possible to render a SceneKit scene into an FxImage so it is usable as a texture? I know this is a long shot, but any help or guidance would be greatly appreciated.
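As a starting point (a sketch only; the FxImage handling is omitted and the canvas size is a placeholder for what FxPlug reports), SceneKit can render offscreen without any view via SCNRenderer, which yields a bitmap you could then copy into the plug-in's output:

```swift
import SceneKit
import Metal

// Render a SceneKit scene offscreen into an image at an arbitrary size.
let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
renderer.scene = SCNScene()   // your real scene goes here
let canvasSize = CGSize(width: 640, height: 360)   // placeholder canvas size
let image = renderer.snapshot(atTime: 0,
                              with: canvasSize,
                              antialiasingMode: .multisampling4X)
```

For per-frame plug-in rendering, SCNRenderer can also render directly into a Metal command buffer and texture, which would avoid the image round-trip.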
Post not yet marked as solved
Hello all, I have a pretty simple scene set up at the moment, displaying a single centered molecule (molNode). I have been attempting to make use of SCNCameraController to avoid reimplementing arcball camera control. Unfortunately, I'm running into some very strange behaviors with the settings below on iOS, which do not seem to be present on macOS. I do not have a scene file; molNode is generated from a macromolecular information file and consists, roughly, of a few hundred spheres and some cylinders. I have also included a snippet in which I perform a small hack to attach a light to the "free camera" node that SceneKit manages. At the first user interaction with the scene, SceneKit hijacks the only camera in the scene and attaches it to this node, so I use this observation to keep my directional light attached to the camera. The observation fires every time one performs a double-tap to reset the viewport, as well as on the first user interaction. I doubt that it is related.

// molNode is set up...

// Camera and Lighting
let camera = SCNCamera()
camera.usesOrthographicProjection = true
camera.orthographicScale = 30
let cameraNode = SCNNode()
cameraNode.camera = camera // this camera will be removed from this node by SceneKit automatically.
scene.rootNode.addChildNode(cameraNode)
cameraNode.position = SCNVector3(x: 0, y: 0, z: 30)

let ambientLightNode = SCNNode()
ambientLightNode.light = SCNLight()
ambientLightNode.light!.type = .ambient
ambientLightNode.light!.color = UIColor(red: 0.14, green: 0.14, blue: 0.14, alpha: 1.0)
scene.rootNode.addChildNode(ambientLightNode)

let cameraLightNode = SCNNode()
cameraLightNode.light = SCNLight()
cameraLightNode.light!.type = .directional // defaults to (0, 0, -1), which matches the camera.
cameraLightNode.light!.intensity = 800
scene.rootNode.addChildNode(cameraLightNode)

// Scene View Setup
let scnView = self.view as! SCNView
scnView.preferredFramesPerSecond = 60
scnView.scene = scene
scnView.allowsCameraControl = true
scnView.defaultCameraController.interactionMode = .orbitArcball
scnView.defaultCameraController.target = molNode.position
scnView.defaultCameraController.inertiaEnabled = true
self.povObservation = scnView.defaultCameraController.observe(\.pointOfView, options: [.new]) { (cameraController, change) in
    if let pointOfView = cameraController.pointOfView {
        // Whenever SceneKit creates a new pointOfView node and attaches the camera to it,
        // recover the camera light and attach it as well.
        cameraLightNode.removeFromParentNode()
        pointOfView.addChildNode(cameraLightNode)
    }
}

1. The first problem is that y-axis camera control is inverted. Drawing an arc vertically results in the opposite of the expected rotation for the "pinch and drag the surface of the sphere" idiom of arcball control. This does not occur on macOS with mouse control; there, dragging rotates the scene as one would expect. It occurs in both arcball interaction modes. It does not occur in .orbitTurntable or .orbitAngleMapping; however, turntable is not appropriate for my use case, and angle mapping does not handle rotations for diagonal drags/swipes intuitively (it produces rotations along multiple degrees of freedom).

2. The second, even more confounding issue has to do with inertia. While drag control is inverted, inertia is not. This produces some extremely jarring results: a drag moves the model in one direction, but when released, the model continues moving inertially in the direction opposite to the drag. "Flicking" gestures therefore result in the kind of movement you would expect, while dragging does not. Even more so than #1, this simply seems wrong and entirely unintuitive.

I suspect this is a bug in SceneKit. In the WWDC 2017 video (session 604) in which SCNCameraController is introduced, the demonstration of arcball control works the way one would expect, i.e. without the inverted vertical orbiting, presumably because the demo was on macOS. I have been trying to find a way to remedy these issues (without implementing my own arcball camera control), but have so far come up short. Thank you all in advance for any help you may be able to provide.
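One workaround sketch (untested): turn off allowsCameraControl and drive defaultCameraController from your own pan gesture, negating the vertical delta before handing it over, so the drag direction matches what inertia would do:

```swift
import SceneKit

// Feed pan deltas to the camera controller with the y component flipped,
// countering the inverted vertical arcball orbit observed on iOS.
func orbit(_ controller: SCNCameraController, panDelta: CGPoint) {
    controller.interactionMode = .orbitArcball
    controller.rotateBy(x: Float(panDelta.x), y: Float(-panDelta.y))
}
```

A UIPanGestureRecognizer handler would call this with the gesture's translation each time it fires, then reset the translation to zero.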
Post not yet marked as solved
When rendering a scene using environment lighting and the physically based lighting model, I need an object to reflect another object. As I understand it, in this type of rendering, reflections are based only on the environment lighting and nothing else. As a solution, I was intending to use a light probe placed between the object to be reflected and the reflecting object. My scene is built programmatically, not from an Xcode scene file. From Apple's WWDC 2016 presentation on SceneKit, I gathered that light probes can be updated programmatically with the updateProbes method of the SCNRenderer class. I have the following code, where I am trying to initialize a light probe using updateProbes:

let sceneView = SCNView(frame: self.view.frame)
self.view.addSubview(sceneView)
let scene = SCNScene()
sceneView.scene = scene

let lightProbeNode = SCNNode()
lightProbe = SCNLight()
lightProbeNode.light = lightProbe
lightProbe.type = .probe
scene.rootNode.addChildNode(lightProbeNode)

var initLightProbe = true
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if initLightProbe {
        initLightProbe = false
        let scnRenderer = SCNRenderer(device: sceneView.device, options: nil)
        scnRenderer.scene = scene
        scnRenderer.updateProbes([lightProbeNode], atTime: time)
        print("Initializing light probe")
    }
}

I don't seem to get any light from this probe. My question is simple: can the updateProbes method be used to initialize a light probe? If not, how can you initialize a light probe programmatically?
Post not yet marked as solved
Unlike previous years, there are no sessions, or anything, for SceneKit. Are we supposed to go to Unity and (ugh) C#? Was there some sort of fallout with the SceneKit group? Was it written in Obj-C, so it's a forgotten stepchild? Is Apple only interested in USDZ support? Reality Composer seems like a rudimentary editor for iPad and iPhone. No new features or editor improvements. It seems like SceneKit has been dropped in the hold with OpenGL. There's been time invested. Apple, guidance please?
Post not yet marked as solved
It appears SCNLayer is now deprecated, but I can't find the replacement for it. I'm just trying to get a SceneKit scene to render into a CALayer, without an SCNView attached. I actually want 3 different cameras from the same scene to render into 3 different layers inside the same NSView (not 3 separate SCNViews). Should we use CAMetalLayer for this now? If so, how do I set it up to render the scene? I'm using macOS 10.14 (but planning on using 10.15 when it's stable) with Xcode 11 Beta-3 BTW. Thank you if you can help with this.
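I can't say whether it is the intended replacement, but one sketch that avoids SCNView entirely (untested; the sizes are placeholders, and you would repeat this with three renderers/cameras for three layers) renders with SCNRenderer into a CAMetalLayer drawable:

```swift
import SceneKit
import Metal
import QuartzCore

// One CAMetalLayer driven by one SCNRenderer; repeat with separate
// renderers and pointOfView cameras for three layers in one NSView.
let device = MTLCreateSystemDefaultDevice()!
let layer = CAMetalLayer()
layer.device = device
layer.pixelFormat = .bgra8Unorm
layer.drawableSize = CGSize(width: 800, height: 600)   // placeholder size

let queue = device.makeCommandQueue()!
let renderer = SCNRenderer(device: device, options: nil)
renderer.scene = SCNScene()   // your real scene goes here

func drawFrame(at time: TimeInterval) {
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer() else { return }
    let pass = MTLRenderPassDescriptor()
    pass.colorAttachments[0].texture = drawable.texture
    pass.colorAttachments[0].loadAction = .clear
    pass.colorAttachments[0].storeAction = .store
    renderer.render(atTime: time,
                    viewport: CGRect(x: 0, y: 0, width: 800, height: 600),
                    commandBuffer: commandBuffer,
                    passDescriptor: pass)
    commandBuffer.present(drawable)
    commandBuffer.commit()
}
```

A CVDisplayLink or timer would call drawFrame each frame; renderer.pointOfView selects which camera each layer shows.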
Post not yet marked as solved
Looks like the particle system for SceneKit is not available as a template anymore in Xcode 11; I can only see the SpriteKit version. Is this by design or a mistake, or is there another way to create one?
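In the meantime, a particle system can be created entirely in code (the values below are arbitrary placeholders):

```swift
import SceneKit

// Build an SCNParticleSystem without the editor template.
let particles = SCNParticleSystem()
particles.birthRate = 200
particles.particleLifeSpan = 2.5
particles.particleSize = 0.05
particles.particleVelocity = 3
particles.emitterShape = SCNSphere(radius: 0.2)   // emit from a small sphere

// Attach it to a node in the scene.
let emitterNode = SCNNode()
emitterNode.addParticleSystem(particles)
```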
I'm implementing an example app using ARKit + SceneKit. In order to hit-test SceneKit objects (nodes) in the scene, I used ARSCNView.hitTest() and it worked well. However, when I draw the same scene with SCNRenderer (I'm also writing an offscreen renderer) and try to hit-test with SCNRenderer.hitTest(), it doesn't return any results. I assumed the input points should lie inside the GPU viewport, which is SCNRenderer.currentViewport, but that doesn't work either. Any idea how to properly hit-test with SCNRenderer? Thanks in advance.
Post not yet marked as solved
I have developed an app using SceneKit since Swift came out. Now SwiftUI is out and uses structs rather than classes, while SceneKit is a cascade of classes. As a newbie, I am concerned that my code might be made obsolete through deprecations soon after publication, and I'd rather get ahead of the issues now. So, is SceneKit long-term viable? My first attempts at converting my custom classes to structs seem impossible without Apple leading the way. If I understand correctly, I can make a struct whose only member is an instance of a class, but the benefits of the struct, e.g., minimal memory, processing time, etc., are then lost.
Post not yet marked as solved
I need to run sceneView.projectPoint a few hundred times in a loop, and it freezes my app regularly, though not always. Sometimes it freezes for a second, sometimes for 10 seconds. Any ideas?
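As a possible workaround (a sketch, under the assumption that the cost comes from calling into the renderer once per point): fetch the camera matrices once per frame, then project every point with plain simd math, which can also run off the main thread:

```swift
import SceneKit
import simd

// Convert an SCNMatrix4 (e.g. camera.projectionTransform) to simd.
func toSimd(_ m: SCNMatrix4) -> simd_float4x4 {
    simd_float4x4(columns: (
        simd_float4(Float(m.m11), Float(m.m12), Float(m.m13), Float(m.m14)),
        simd_float4(Float(m.m21), Float(m.m22), Float(m.m23), Float(m.m24)),
        simd_float4(Float(m.m31), Float(m.m32), Float(m.m33), Float(m.m34)),
        simd_float4(Float(m.m41), Float(m.m42), Float(m.m43), Float(m.m44))))
}

// Project world-space points to screen coordinates without projectPoint.
// viewMatrix is the inverse of the point of view's world transform.
func screenPoints(for worldPoints: [simd_float3],
                  viewMatrix: simd_float4x4,
                  projectionMatrix: simd_float4x4,
                  viewport: CGSize) -> [CGPoint] {
    let vp = projectionMatrix * viewMatrix
    return worldPoints.map { p in
        let clip = vp * simd_float4(p.x, p.y, p.z, 1)
        let ndc = simd_float3(clip.x, clip.y, clip.z) / clip.w   // perspective divide
        return CGPoint(x: CGFloat(ndc.x + 1) * 0.5 * viewport.width,
                       y: CGFloat(1 - ndc.y) * 0.5 * viewport.height)
    }
}
```

In practice the view matrix would be simd_inverse(pointOfView.simdWorldTransform) and the projection toSimd(camera.projectionTransform), both read once before the loop.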
Post not yet marked as solved
Hey guys, my goal is to export morph targets from Maya -> DAE -> SceneKit, with morph target controllers. In Maya everything works fine; morphs with sliders work as intended. When exported to FBX/DAE and opened in Xcode, the morph targets are there, but each one moves the entire avatar instead of scaling the specific node (nose, arm, etc.). https://gyazo.com/6f36a90ce5292b85a6f7a21b9a8918f2 How can I export from Maya to DAE, keeping morphs for each node? Or is there any other Maya -> SceneKit path? Thanks!
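For reference, when per-node morph targets do survive the export, they show up as SCNMorpher targets whose weights can be driven in code; a sketch (the node name "nose" is a placeholder from the rig):

```swift
import SceneKit

// Drive one morph target's weight on a named node, if the export preserved it.
func setMorphWeight(_ weight: CGFloat, targetIndex: Int, on scene: SCNScene) {
    if let node = scene.rootNode.childNode(withName: "nose", recursively: true),
       let morpher = node.morpher, targetIndex < morpher.targets.count {
        morpher.setWeight(weight, forTargetAt: targetIndex)
    }
}
```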
Post not yet marked as solved
Using a SceneKit UIViewRepresentable (code block 1 below), we could add a tap-gesture-recognizer hit test like this. With the new SceneView for SceneKit/SwiftUI (code block 2), how do we add a tap gesture recognizer? I found this API, but it's not clear how to use it: https://developer.apple.com/documentation/scenekit/sceneview/3607839-ontapgesture
import SwiftUI
import SceneKit
import UIKit

struct SceneView: UIViewRepresentable {
    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> SCNView {
        let view = SCNView(frame: .zero)
        let scene = SCNScene(named: "ship")!
        view.allowsCameraControl = true
        view.scene = scene
        // add a tap gesture recognizer (the target must be a class instance,
        // so use the representable's coordinator rather than the struct itself)
        let tapGesture = UITapGestureRecognizer(target: context.coordinator,
                                                action: #selector(Coordinator.handleTap(_:)))
        view.addGestureRecognizer(tapGesture)
        return view
    }

    func updateUIView(_ view: SCNView, context: Context) {
    }

    class Coordinator: NSObject {
        @objc func handleTap(_ gestureRecognize: UIGestureRecognizer) {
            // retrieve the SCNView the gesture is attached to
            guard let view = gestureRecognize.view as? SCNView else { return }
            // check what nodes are tapped
            let p = gestureRecognize.location(in: view)
            let hitResults = view.hitTest(p, options: [:])
            // check that we clicked on at least one object
            if hitResults.count > 0 {
                // retrieve the first clicked object
                let result = hitResults[0]
                // get the material for the selected geometry element
                let material = result.node.geometry!.materials[result.geometryIndex]
                // highlight it
                SCNTransaction.begin()
                SCNTransaction.animationDuration = 0.5
                // on completion - unhighlight
                SCNTransaction.completionBlock = {
                    SCNTransaction.begin()
                    SCNTransaction.animationDuration = 0.5
                    material.emission.contents = UIColor.black
                    SCNTransaction.commit()
                }
                material.emission.contents = UIColor.green
                SCNTransaction.commit()
            }
        }
    }
}
import SwiftUI
import SceneKit

struct ContentView: View {
    var scene = SCNScene(named: "ship.scn")
    var cameraNode: SCNNode? {
        scene?.rootNode.childNode(withName: "camera", recursively: false)
    }
    var body: some View {
        SceneView(
            scene: scene,
            pointOfView: cameraNode,
            options: []
        )
        .allowsHitTesting(true)
    }
}

struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
Post not yet marked as solved
I'm trying to import the robot.fbx file from the Apple developer sample found here:
https://developer.apple.com/sample-code/ar/Biped-Robot.zip
If I bring the .fbx version into Reality Converter, export it to USDZ, and bring the USDZ file into Xcode 12 beta or any version below, the USDZ robot character loses its skin bindings and the mesh has no skeleton attached.
Has anyone experienced this?
Have you been able to successfully convert a rigged FBX model to a USDZ model with all bones connected?
I tried this using Reality Converter beta 3, and I did not edit the FBX robot character provided with Apple's documentation sample.
Post not yet marked as solved
I've seen that it's possible to use a gobo as a light effect, like in the real world, to flag off parts of a light.
Can someone help me get the gobo effect to work?
In the following example you will see a cube with a plane below it and a spotlight above it. I would like to load a pattern (image) in front of the spotlight source, which would flag (block) certain parts of the light.
The expected behavior is that you see part of the pattern on the top of the cube, as well as part of the blocked light pattern on the plane.
Could anyone help me solve this gobo puzzle?
view.backgroundColor = .blue
let sceneView = SCNView(frame: self.view.frame)
self.view.addSubview(sceneView)
view.bringSubviewToFront(sceneView)
sceneView.backgroundColor = .clear
sceneView.allowsCameraControl = true
let scene = SCNScene()
sceneView.scene = scene

// Add camera node
let camera = SCNCamera()
let cameraNode = SCNNode()
cameraNode.camera = camera
cameraNode.position = SCNVector3(x: 0, y: 0, z: 0)
scene.rootNode.addChildNode(cameraNode)

// Add a cube to the scene
let cubeGeometry = SCNBox(width: 0.5, height: 0.5, length: 0.5, chamferRadius: 0.0)
let cubeNode = SCNNode(geometry: cubeGeometry)
cubeNode.position = SCNVector3(x: 0.0, y: -0.5, z: -1.5)

// Make the cube white
let whiteMaterial = SCNMaterial()
whiteMaterial.diffuse.contents = UIColor.white
cubeGeometry.materials = [whiteMaterial]
scene.rootNode.addChildNode(cubeNode)

// Add a plane to the scene so we can see the lights & shadows
let planeGeometry = SCNPlane(width: 100.0, height: 100.0)
let lightGrayMaterial = SCNMaterial()
lightGrayMaterial.diffuse.contents = UIColor.lightGray
planeGeometry.materials = [lightGrayMaterial]
let planeNode = SCNNode(geometry: planeGeometry)
planeNode.eulerAngles = SCNVector3(x: GLKMathDegreesToRadians(-90), y: 0, z: 0)
planeNode.position = SCNVector3(x: 0, y: -15, z: 0)
scene.rootNode.addChildNode(planeNode)

// Create a spotlight
let light = SCNLight()
light.type = SCNLight.LightType.spot
light.castsShadow = true

// Create a gobo mask:
// if let gobo = light.gobo
// {
//     gobo.contents = UIImage(named: "gobo")
//     gobo.intensity = 0.5
//     //light.categoryBitMask = -1
// }

// Create a light node
let lightNode = SCNNode()
lightNode.light = light
lightNode.position = SCNVector3(x: 0, y: 10, z: -1.5)
let lightAngle = -90 * Float.pi / 180 // light facing down
lightNode.eulerAngles = SCNVector3(x: lightAngle, y: 0, z: 0)
scene.rootNode.addChildNode(lightNode)
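For reference, a sketch of how the commented-out part might look in practice (untested; "gobo" is assumed to be an image in the app bundle, and clamping the wrap mode is an assumption to keep the pattern from tiling):

```swift
import SceneKit

// Configure a spotlight's gobo material property to project a pattern.
let spot = SCNLight()
spot.type = .spot
spot.castsShadow = true
spot.spotOuterAngle = 60          // widen the cone so the pattern is visible
if let gobo = spot.gobo {         // gobo only applies to spot lights
    gobo.contents = "gobo"        // image name in the bundle (assumed asset)
    gobo.intensity = 1.0
    gobo.wrapS = .clamp
    gobo.wrapT = .clamp
}
```

If nothing shows up, it may help to verify the image actually loads, for instance by assigning a UIImage/NSImage instance directly instead of a name.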
Post not yet marked as solved
I've seen two crashes in the fox sample code already for simple SCNVector functions.
The enemies also don't move - I suppose that's a Gameplay Kit error.
I can see glimpses of other possible problems too, and of course the intermittent display stutter is there. (You know, the one on every OS after High Sierra for any Metal-backed view.)
I tried both M1-native and under Rosetta.
Unless I have some wrong setting, SceneKit's SCNVector math seems crippled at the moment.
Already sent reports...
Can someone else confirm the crashes?
Post not yet marked as solved
Is there a way to access the source code of the Measure app on our iPhone, iPad, or iPod touch?
Thank you!