Post not yet marked as solved
I'm experimenting with using SwiftUI views as materials on an SCNPlane node in a SceneKit scene.
It works perfectly on iOS using UIHostingController with the following code:
Swift
func createInfoPanel() {
    let panel = SCNPlane(width: 6.0, height: 6.0)
    let panelNode = SCNNode(geometry: panel)
    let infoPanelHost = UIHostingController(rootView: helloWorld)
    infoPanelHost.view.isOpaque = false
    infoPanelHost.view.backgroundColor = UIColor.clear
    infoPanelHost.view.frame = CGRect(x: 0, y: 0, width: 256, height: 256)
    panel.materials.first?.diffuse.contents = infoPanelHost.view
    panel.materials.first?.emission.contents = infoPanelHost.view
    panel.materials.first?.emission.intensity = 3.0
    // [... BillBoardConstraint etc here ...]
    addNodeToScene(panelNode)
}
Yet when I apply the same approach on macOS, I can't make the view created by NSHostingController transparent.
Setting infoPanelHost.view.isOpaque = false fails to compile: on AppKit, isOpaque is a read-only property and can't be set.
I tried subclassing NSHostingController and overriding viewWillAppear to make the view transparent / non-opaque, to no avail.
Swift
override func viewWillAppear() {
    super.viewWillAppear()
    self.view.wantsLayer = true
    self.view.layer?.backgroundColor = NSColor.clear.cgColor
    self.view.layer?.isOpaque = false
    self.view.opaqueAncestor?.layer?.backgroundColor = NSColor.clear.cgColor
    self.view.opaqueAncestor?.layer?.isOpaque = false
    self.view.opaqueAncestor?.alphaValue = 0.0
    self.view.alphaValue = 0.0
    self.view.window?.isOpaque = false
    self.view.window?.backgroundColor = NSColor.clear
}
Tried setting everything I could think of to non-opaque as you can see, and still, the panels are opaque, show no info, and obscure the 3D entity they should overlay...
Can someone please advise?
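In case it helps future readers, one hedged workaround (a sketch, not verified against this exact setup): AppKit's isOpaque is read-only but overridable, so instead of mutating the hosting controller's view you can subclass NSHostingView and force its layer to stay clear. HelloWorld here is a hypothetical stand-in for the SwiftUI root view:

```swift
import SwiftUI
import AppKit

// Hypothetical placeholder for the SwiftUI content shown on the panel.
struct HelloWorld: View {
    var body: some View {
        Text("Hello, world!").font(.largeTitle)
    }
}

// An NSHostingView whose backing layer is forced to stay transparent.
final class TransparentHostingView<Content: View>: NSHostingView<Content> {
    // isOpaque can't be assigned, but it CAN be overridden.
    override var isOpaque: Bool { false }

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        wantsLayer = true
        layer?.isOpaque = false
        layer?.backgroundColor = NSColor.clear.cgColor
    }
}
```

An instance created with TransparentHostingView(rootView: HelloWorld()) would then be assigned to the material's diffuse/emission contents in place of infoPanelHost.view. Note also that alphaValue = 0.0, as tried above, hides the view entirely rather than clearing its background.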
Post not yet marked as solved
Hi, I just wanted to display a SpriteKit scene on an SCNPlane. So I set the SCNMaterial's contents to my SKScene, but instead of the scene I'm getting a grey plane. This is my code, by the way:
var mainScene: SKScene {
    let scene = Game()
    scene.size = CGSize(width: 1024, height: 1024)
    scene.scaleMode = .resizeFill
    scene.backgroundColor = .purple
    scene.view?.backgroundColor = .purple
    scene.view?.allowsTransparency = false
    return scene
}
func initMainScene() -> SceneView {
    mainScene.view?.isPaused = false
    let scene = SCNScene()
    let mainSceneMaterial = SCNMaterial()
    mainSceneMaterial.normal.contents = mainScene
    mainSceneMaterial.isDoubleSided = true
    let planeGeometry = SCNPlane(width: 1, height: 1)
    planeGeometry.materials = [mainSceneMaterial]
    let plane = SCNNode(geometry: planeGeometry)
    let camera = SCNNode()
    camera.name = "Camera"
    camera.camera = SCNCamera()
    camera.position = SCNVector3(x: 0.0, y: 0.0, z: 4.0)
    let light = SCNNode()
    light.light = SCNLight()
    light.light!.type = .omni
    light.position = SCNVector3(x: 1.5, y: 1.5, z: 1.5)
    scene.rootNode.addChildNode(camera)
    scene.rootNode.addChildNode(light)
    scene.rootNode.addChildNode(plane)
    return SceneView(
        scene: scene,
        pointOfView: scene.rootNode.childNode(withName: "Camera", recursively: false),
        options: []
    )
}
Here is the screenshot:
Also, my SpriteKit scene implements touchesBegan and touchesMoved; will those events still work if I embed the scene in an SCNMaterial?
Thanks very much 🙏
Hi, I just wanted to add a 3D perspective to my SpriteKit scene, so I wondered whether rendering the SpriteKit scene as the material of an SCNGeometry is possible. If not, is there another way to give a SpriteKit scene a 3D perspective?
Thanks very much 🙏
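For what it's worth, the grey plane may simply be because the scene is assigned to the material's normal property, which SceneKit interprets as a normal map rather than an image to display. A minimal sketch of the likely intent (hedged; it bypasses the Game subclass and SceneView wrapper from the post) assigns the SKScene to diffuse.contents instead:

```swift
import SceneKit
import SpriteKit

// Build a SpriteKit scene and use it as the color texture of a SceneKit plane.
let skScene = SKScene(size: CGSize(width: 1024, height: 1024))
skScene.backgroundColor = .purple

let material = SCNMaterial()
material.diffuse.contents = skScene   // diffuse (color), not normal (a normal map)
material.isDoubleSided = true

let planeGeometry = SCNPlane(width: 1, height: 1)
planeGeometry.materials = [material]
let planeNode = SCNNode(geometry: planeGeometry)
```

Two caveats: scene.view is nil until an SKView presents the scene, so the scene.view?... lines in the computed property silently do nothing; and touches are not forwarded into the embedded SKScene automatically, so touchesBegan/touchesMoved would typically be driven by hit-testing the SCNView and converting the texture coordinate into the SKScene yourself.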
Post not yet marked as solved
Hi, I want to build an ARKit + SceneKit (using Storyboard) game for this year's WWDC Swift Student Challenge.
However, the format for submission must be in swiftpm, and it seems that it only supports SwiftUI.
I'm not too familiar with SwiftUI, so can I still use UIKit, ARKit, and SceneKit in swiftpm? Or must I just build something with SwiftUI?
Thanks!
Post not yet marked as solved
I'm having the most frustrating time trying to get SceneKit to render custom geometry on Monterey on an MBP M1Max. After some mucking about, I'm starting to suspect that the internal floating point representation has shifted to 64-bit?!? I'm wondering if this is also the case for element buffers, because rendering with UInt32 element arrays is FUBAR. Unfortunately, SceneKit won't allow me to create 64-bit element arrays.
Post not yet marked as solved
The requirement is Xcode 13.3 on macOS 12.3.
Is it okay to run macOS 12.3.1 instead of 12.3?
And is a playground book not allowed to be submitted?
Post not yet marked as solved
Hi,
I'm building a SceneKit app and ran into a weird situation. From time to time I can't see any virtual objects in the AR scene, although it works perfectly in most cases. Whenever no objects are visible, sceneView.pointOfView.position.x/y/z is NaN, and the rotation likewise.
guard let camera = sceneView?.pointOfView else {
    print("Get phonePosition Fail")
    return nil
}
// TODO: Don't know why sceneView?.pointOfView gets nothing (NaN). Should find out why.
if camera.position.x.isNaN {
    if let session = sceneView?.session {
        interrupted?(session)
    }
    return nil
}
Did anyone handle this case before?
Post not yet marked as solved
Could you develop a feature to pan and zoom simultaneously in an AR/VR SceneKit view?
Post not yet marked as solved
How do you set the size of an area light in SceneKit?
let areaLightNode = SCNNode()
areaLightNode.light = SCNLight()
areaLightNode.light!.type = .area
areaLightNode.position = SCNVector3(x: 0, y: 5, z: 0)
areaLightNode.light?.intensity = 1000
Nothing appears when using the above
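If I'm reading the API right, an area light's size comes from SCNLight.areaExtents (with the shape set via areaType), available on iOS 13 / macOS 10.15 and later; a sketch:

```swift
import SceneKit

let areaLightNode = SCNNode()
let light = SCNLight()
light.type = .area
light.areaType = .rectangle               // shape of the emitting surface
light.areaExtents = simd_float3(4, 2, 1)  // size of the emitter in local units
light.intensity = 1000
areaLightNode.light = light
areaLightNode.position = SCNVector3(x: 0, y: 5, z: 0)
```

As I understand it, area lights also only show their effect with physically based materials, which could be another reason nothing appears.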
Post not yet marked as solved
I'm trying to populate a scene view inside a table cell (for 3D objects) as a subview. The first view load works fine, but as new child nodes are added or updated, the view doesn't refresh; it remains static (objects are not updated on screen).
With the renderer delegate I can see the child nodes' values being updated, but the view doesn't reflect them; only after switching to a different scene and back is the view loaded properly.
How do I update or refresh the subview continuously?
Post not yet marked as solved
Hello, I am preparing for WWDC2021, but I encountered a small problem. I don't know how to use RealityKit to create face geometry the way I can with SceneKit.
SceneKit:
func renderer(_ renderer: SCNSceneRenderer, nodeFor anchor: ARAnchor) -> SCNNode? {
    let faceMesh = ARSCNFaceGeometry(device: sceneView.device!)
    let node = SCNNode(geometry: faceMesh)
    node.geometry?.firstMaterial?.fillMode = .lines
    return node
}
Post not yet marked as solved
Hello!
I've been working on an AR app where users can place nodes and (occasionally) move them around. The approach so far is to create an ARAnchor based on the hit-test location, and then save information in Core Data with the ARAnchor's identifier as reference. I have had no issues whatsoever with this part of the app.
However, I'm finding some difficulties when moving these nodes around. The approach I've tried so far (note that I don't include the animating process for simplicity):
1. Find the node's ARAnchor using self.sceneView.anchor(for: node).
2. Create a new transformation matrix using the anchor's transform after applying translations:
let transform = SCNMatrix4Mult(translationOnAxis, SCNMatrix4(anc.transform))
3. Create a new ARAnchor with the new transform above.
4. Update the Core Data record for the anchor, changing the identifier reference to that of the new ARAnchor.
5. Add the new ARAnchor to the session's ARWorldMap.
6. Remove the previous ARAnchor from the session's ARWorldMap.
7. Save the Core Data context and the session's ARWorldMap file.
After saving the records on the core data and writing the ARWorldMap file updates, I have no issue when rendering the nodes in their current positions. However, when I closed the app and reopened the AR map, the following behavior surfaces:
All the nodes in the world are rendered correctly, even those with new positions.
The session continues for several seconds (2-3 seconds) and then crashes.
Error logs that I've collected:
Thread 3 SIGABRT (I reckon this usually happens after a user consciously terminates the app? I might be wrong though)
Non-descriptive (or maybe I just don't understand it enough) dynamic-library assert error:
Assert: in line 508
dyld4 config: DYLD_LIBRARY_PATH=/usr/lib/system/introspection DYLD_INSERT_LIBRARIES=/Developer/usr/lib/libBacktraceRecording.dylib:/Developer/usr/lib/libMainThreadChecker.dylib:/Developer/Library/PrivateFrameworks/DTDDISupport.framework/libViewDebuggerSupport.dylib:/usr/lib/libMTLCapture.dylib
That's about it. No other error description that I could find. Some Stack Overflow threads mention signing issues on third-party frameworks using SPM. I do use some AWS libraries via SPM, but none of their services were used during the aforementioned process, so I don't believe that would be the cause.
Any help or questions are welcome!
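For concreteness, the anchor-replacement steps described above might look roughly like this (a sketch only: ARKit is device-only, translationOnAxis comes from the post, and the Core Data update is left out):

```swift
import ARKit
import SceneKit

// Replace `oldAnchor` with a translated copy and return the new anchor,
// so persistent records can be re-keyed to its identifier.
func moveAnchor(_ oldAnchor: ARAnchor,
                by translationOnAxis: SCNMatrix4,
                in session: ARSession) -> ARAnchor {
    // Apply the translation on top of the old anchor's transform.
    let transform = SCNMatrix4Mult(translationOnAxis, SCNMatrix4(oldAnchor.transform))
    let newAnchor = ARAnchor(transform: simd_float4x4(transform))
    // Swap the anchors in the session; an ARWorldMap saved afterwards
    // will contain the session's current anchor set.
    session.add(anchor: newAnchor)
    session.remove(anchor: oldAnchor)
    return newAnchor
}
```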
Post not yet marked as solved
Hey guys,
Goal: export morph targets from Maya -> DAE -> SceneKit, with morph target controllers.
In Maya everything works fine; morphs with sliders work as intended. When exported to FBX/DAE and opened in Xcode, the morph targets are there, but each one displaces the entire avatar instead of scaling the specific node (nose, arm, etc.).
https://gyazo.com/6f36a90ce5292b85a6f7a21b9a8918f2
How can I export from Maya to DAE, keeping morphs for each node? Or is there any other Maya -> SceneKit route?
Thanks!
Post not yet marked as solved
Is there a way to access the SDK code of the measure app in our iPhone, iPad, or iPod touch?
Thank you!
Post not yet marked as solved
Good morning,
We are creating an AR app made with Unity and AR Foundation and we would like to associate our app with an App Clip.
Is it possible to create an App Clip from a Unity app?
I understand that Unity builds may be too heavy to be used as App Clips.
Otherwise, is it possible to associate a Unity app with an App Clip created on SceneKit or RealityKit and to upload it on the App Store?
How could we achieve that? Is it possible to add a custom App Clip to an app archive made with Unity?
Thanks in advance!
Post not yet marked as solved
When rendering a scene using environment lighting and the physically based lighting model, I need an object to reflect another object. As I understand it, in this type of rendering, reflections are based only on the environment lighting and nothing else. As a solution I intended to use a light probe placed between the object to be reflected and the reflecting object. My scene is built programmatically, not through an Xcode scene file. From Apple's WWDC 2016 presentation on SceneKit I gathered that light probes can be updated programmatically through the updateProbes method of the SCNRenderer class. I have the following code, where I am trying to initialize a light probe by using the updateProbes method:
let sceneView = SCNView(frame: self.view.frame)
self.view.addSubview(sceneView)
let scene = SCNScene()
sceneView.scene = scene
let lightProbeNode = SCNNode()
let lightProbe = SCNLight()
lightProbeNode.light = lightProbe
lightProbe.type = .probe
scene.rootNode.addChildNode(lightProbeNode)

var initLightProbe = true
func renderer(_ renderer: SCNSceneRenderer, updateAtTime time: TimeInterval) {
    if initLightProbe {
        initLightProbe = false
        let scnRenderer = SCNRenderer(device: sceneView.device, options: nil)
        scnRenderer.scene = scene
        scnRenderer.updateProbes([lightProbeNode], atTime: time)
        print("Initializing light probe")
    }
}
I don't seem to get any light from this light probe. My question is simple: can the updateProbes method be used to initialize a light probe? If not, how can you initialize a light probe programmatically?
Post not yet marked as solved
This is really frustrating.
I know the SceneKit editor in Xcode has an option to select a UV channel to apply a texture to, but I can't see how to bring in a model with multiple UVs.
I'm working in Cinema 4D; I can add multiple UV tags and export as FBX, and that keeps the multiple UV channels intact. But I can't import FBX into SceneKit, and other formats like USD (ugh), Collada, etc. keep only one channel map.
Does anyone know the correct workflow to go from C4D to SceneKit with multiple UV maps per model?
Post not yet marked as solved
In my AR app, I rotate my node around some position in the real world.
In order to do that, I use the pivot or simdPivot attribute of the node to rotate around that position. Otherwise it would rotate around the node's center which is elsewhere and the whole node would move around instead of just rotating in place.
Up to here all is fine.
The problem is that in ARKit, using a pivot also moves the camera to the pivot's location.
I can compensate by changing the node's position in the opposite direction. But then, if I continue this way, the user will never be in the correct "real" position inside the node, but in some "other" location and only due to the pivot he will "see" the correct location.
This is a problem, because later on I will need to catch collisions between the user and elements in the scene. I could continue compensating for the pivot's location, but that's a real pain and not so elegant.
I thought that I could use the pivot, rotate, and then reset the pivot.
But it turns out that doing so will not keep the rotation that I set before. Rather, the rotation will behave as if the pivot was never created and will give me the exact behavior I didn't want to achieve.
So let's say I do this in my code:
let translation = simd_float4x4(SCNMatrix4MakeTranslation(0,0,-2))
node?.simdPivot = translation
let rotationMatrix = simd_float4x4(SCNMatrix4MakeRotation(.pi/4, 0, 1, 0))
node?.simdTransform = rotationMatrix
(I found that setting the simdRotation attribute didn't rotate the node so I set the simdTransform instead).
The above code will correctly rotate the node at position (0, 0, -2)
But then if I reset the pivot as such:
node?.simdPivot = matrix_identity_float4x4
Now the node will be rotated around (0, 0, 0).
Is there any other way to resolve this?
Can I move my node in any other way without having to always compensate for the pivot's location so that the user's location will always correspond to his real location inside the node?
Or is there another way to remove the pivot, but leave the rotation that I accomplished with the pivot and freeze the node in its new position/rotation?
Thanks!
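A possible answer to the last question (a sketch, based on my understanding that SceneKit effectively renders a node with transform × pivot⁻¹): bake the pivot into the transform before clearing it, so the rendered placement stays frozen and later moves need no compensation:

```swift
import SceneKit

extension SCNNode {
    /// Folds the current pivot into the node's transform and resets the pivot,
    /// so the rendered position/rotation is unchanged afterwards.
    func bakePivot() {
        simdTransform = simdTransform * simdPivot.inverse
        simdPivot = matrix_identity_float4x4
    }
}
```

After the rotation in the code above, node?.bakePivot() would then leave the node where it appears, with an identity pivot.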
Post not yet marked as solved
Hi,
I'm trying to understand what SceneKit does to load assets: when it loads them, where it loads them, and so on. I cannot find detailed documentation on this, but maybe I'm looking in the wrong place. The best I've found so far is the discussion sections in the API docs for SCNSceneRenderer.prepare(_:withCompletionHandler:) and SCNSceneRenderer.prepare(_:shouldAbortBlock:). Both of these methods have discussion sections that say the following:

By default, SceneKit lazily loads resources onto the GPU for rendering. This approach uses memory and GPU bandwidth efficiently, but can lead to stutters in an otherwise smooth frame rate when you add large amounts of new content to an animated scene. To avoid such issues, use this method to prepare content for drawing before adding it to the scene. You can call this method on a secondary thread to prepare content asynchronously.

SceneKit prepares all content associated with the object parameter you provide. If you provide an SCNMaterial object, SceneKit loads any texture images assigned to its material properties. If you provide an SCNGeometry object, SceneKit loads all materials attached to the geometry, as well as its vertex data. If you provide an SCNNode or SCNScene object, SceneKit loads all geometries and materials associated with the node and all its child nodes, or with the entire node hierarchy of the scene.

...You can observe the progress of this operation with the Progress class. For details, see Progress.

This raises more questions for me, some of them probably pretty basic (I'm no expert in 3D graphics programming), and some maybe not so dumb:
- Does "loads resources onto the GPU" mean it is loading the assets into dedicated memory separate from normal RAM? Or into normal RAM reserved for the GPU? Or into some kind of memory that's actually on the GPU? (This is the dumb question, I bet.)
- How many assets can be loaded ahead of time in this way?
- What happens if I try to load more than can fit?
- If SceneKit uses "a secondary thread to prepare content asynchronously", what happens if I try to access the content before the background loading operation completes? Does my access block or fail?
- If I can observe the operation with an NSProgress object, why don't these methods return NSProgress instances? Or why doesn't SCNSceneRenderer provide an accessor that returns an NSProgress instance?

I'd appreciate any insight into these questions, or pointers to WWDC videos, documentation, or third-party books that address them in enough detail.
Thanks!
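For whatever it's worth, the call shape for the batched preload looks like this (a sketch; myScene is a placeholder for content about to be shown, and the completion handler receives false if preparation was interrupted):

```swift
import Foundation
import SceneKit

let sceneView = SCNView(frame: .zero)
let myScene = SCNScene()

// Ask SceneKit to upload geometry and textures before the content is visible.
// The handler runs on a background queue once loading finishes.
sceneView.prepare([myScene]) { success in
    if success {
        DispatchQueue.main.async {
            sceneView.scene = myScene   // now safe to show without a stutter
        }
    }
}
```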
Post not yet marked as solved
I am trying to use the new LiDAR scanner on the iPhone 12 Pro to gather point clouds which will later be used as input data for neural networks.
Since I am relatively new to the field of computer vision and augmented reality, I started by looking at the official code examples (e.g., Visualizing a Point Cloud Using Scene Depth - https://developer.apple.com/documentation/arkit/environmental_analysis/visualizing_a_point_cloud_using_scene_depth) and the documentation for ARKit, SceneKit, Metal and so on. However, I still do not understand how to get the LiDAR data.
I found another thread in this forum (Exporting Point Cloud as 3D PLY Model - https://developer.apple.com/forums/thread/658109) and the given solution works so far. However, I do not understand that code in detail, so I am not sure whether it really gives me the raw LiDAR data or whether some (internal) fusion with other (camera) data is happening, since I could not figure out exactly where the data comes from in the example.
Could you please give me some tips or code examples on how to work with/access the LiDAR data? It would be very much appreciated!
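Not a full answer, but as a starting point: with frameSemantics set to .sceneDepth, each ARFrame carries a depthMap. As far as I know this is ARKit's processed depth (already filtered and fused with camera data), not raw sensor returns, which ARKit does not expose. A sketch (iOS device with LiDAR required):

```swift
import ARKit

// Minimal session setup that asks ARKit for LiDAR-derived depth on every frame.
final class DepthCollector: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let configuration = ARWorldTrackingConfiguration()
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            configuration.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(configuration)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depth = frame.sceneDepth else { return }
        let depthMap: CVPixelBuffer = depth.depthMap          // Float32 depth, in meters
        let confidence: CVPixelBuffer? = depth.confidenceMap  // per-pixel confidence
        // Unproject depthMap pixels using frame.camera.intrinsics to build a point cloud.
        _ = (depthMap, confidence)
    }
}
```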