SceneKit

RSS for tag

Create 3D games and add 3D content to apps with SceneKit's high-level scene descriptions.

SceneKit Documentation

Posts under SceneKit tag

120 Posts
Post not yet marked as solved
0 Replies
14 Views
Hello developers, I have an original image that is an ordinary one. I set the image in an image view like this: imageview.image = UIImage(cgImage: image). Then I set the same image in SceneKit like this: scnview.scene.background.contents = image. I use the same image in the two views, and the one in the SCNView is darker than the original, and I don't know why. I found someone on Google with the same problem, but there is no answer (https://stackoverflow.com/questions/60679819/adding-uiimageview-to-arscnview-scene-background-causes-saturation-hue-to-be-off). I checked the SCNView's pixel format and it was bgra8unorm. What's the problem?
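One possible cause (an assumption, not confirmed in the thread) is a color-space mismatch: if the CGImage is in a non-sRGB color space, an SCNView background may interpret it differently than a UIImageView does. A hedged workaround is to redraw the image into an sRGB context before assigning it:

```swift
import UIKit

// Sketch of a hypothetical helper: redraw a CGImage into sRGB so
// UIImageView and the SCNView background interpret it identically.
func sRGBImage(from cgImage: CGImage) -> CGImage? {
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
          let context = CGContext(data: nil,
                                  width: cgImage.width,
                                  height: cgImage.height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    else { return nil }
    // Drawing converts the pixels into the destination color space.
    context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                     width: cgImage.width,
                                     height: cgImage.height))
    return context.makeImage()
}

// Usage, with `scnview` and `image` from the post:
// if let converted = sRGBImage(from: image) {
//     scnview.scene?.background.contents = converted
// }
```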
Posted
by
Post not yet marked as solved
0 Replies
53 Views
Hi guys, in one of the WWDC 21 videos I saw a spectacular image of a green fluid over a sofa 🛋, and I can't find many tutorials online. How could this be done? I looked into SceneKit's particle system, but it doesn't seem to do exactly the same thing (it's more for smoke and fire); this fluid looks more 3D. What should I look at to reproduce this fluid? Thanks, looking for some help to start my research.
Posted
by
Post not yet marked as solved
2 Replies
42 Views
I am trying to learn GameplayKit and am using it with SceneKit, so I am using a GKAgent3D as the agent to set behaviors on. I have behaviors working, but I want the agent to wander around a plane that represents the ground. When the agent wanders, though, it wanders on all axes, so it just ends up flying around. Is there any way to keep the agent stuck to the ground while it is wandering?
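One common approach (a sketch, not a built-in GameplayKit feature) is to let the agent wander freely but clamp its vertical position back to the ground plane after every behavior update, via GKAgentDelegate:

```swift
import GameplayKit

// Sketch: keep a wandering GKAgent3D glued to a ground plane.
// `groundY` is an assumed constant for the plane's height.
class GroundLockedAgentDelegate: NSObject, GKAgentDelegate {
    let groundY: Float = 0

    func agentDidUpdate(_ agent: GKAgent) {
        guard let agent3D = agent as? GKAgent3D else { return }
        // Clamp the agent back onto the plane after each update,
        // before the position is copied to the SCNNode.
        agent3D.position.y = groundY
    }
}
```

An alternative worth considering is driving a GKAgent2D in the x/z plane and copying its 2D position into the SceneKit node's x and z coordinates yourself, so the vertical axis never enters the simulation at all.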
Posted
by
Post not yet marked as solved
2 Replies
131 Views
Hey guys, this seems like a simple question, but I was not able to find a clear answer. I am building a game-like app where all 3D geometry is created and modified at run time. Which framework should I use with SwiftUI: SceneKit or RealityKit? Thanks
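For the SceneKit side of that choice, SwiftUI ships a built-in SceneView wrapper (iOS 14+/macOS 11+). A minimal sketch of run-time-generated geometry shown through it (the geometry itself is an arbitrary example):

```swift
import SwiftUI
import SceneKit

// Sketch: geometry built entirely in code, displayed via SwiftUI's SceneView.
struct GameView: View {
    var scene: SCNScene {
        let scene = SCNScene()
        // Example run-time geometry: a chamfered box.
        let box = SCNNode(geometry: SCNBox(width: 1, height: 1,
                                           length: 1, chamferRadius: 0.05))
        scene.rootNode.addChildNode(box)
        return scene
    }

    var body: some View {
        SceneView(scene: scene,
                  options: [.allowsCameraControl, .autoenablesDefaultLighting])
    }
}
```

RealityKit has its own SwiftUI integration (ARView via UIViewRepresentable, and RealityView on newer OS versions), so either framework can be hosted; the trade-off is mostly about rendering features and AR integration rather than SwiftUI support.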
Posted
by
Post not yet marked as solved
1 Reply
63 Views
Hello! I'm very new to coding and am exploring a basic proof of concept which I can then build on. What I would like to do is create a grid over a tilemap that can be navigated by a player piece: a simple piece, where my input on the grid tells the piece to walk toward and stop on the clicked tile. I'm not bothered with graphics yet. I have dug into the Xcode documentation, reading about pathfinding via the GameplayKit and SceneKit structures. I believe I will need to work with GKGridGraph for an integer-based grid. I will also obviously need a player controller utilising touch controls. The first thing I am trying to work out is how to actually create and implement the GKGridGraph. If anyone could explain it like I'm an absolute dummy, or has some tutorials that would help me with this, it would be greatly appreciated.
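Creating a GKGridGraph and asking it for a path can be sketched in a few lines; the grid size and tile coordinates below are arbitrary assumptions for illustration:

```swift
import GameplayKit

// Sketch: a 10x10 integer grid graph with no diagonal movement.
let graph = GKGridGraph(fromGridStartingAt: vector_int2(0, 0),
                        width: 10, height: 10,
                        diagonalsAllowed: false)

// Look up the node for the piece's tile and the clicked tile,
// then ask the graph for the tiles to walk through.
if let start = graph.node(atGridPosition: vector_int2(0, 0)),
   let end = graph.node(atGridPosition: vector_int2(7, 4)) {
    let path = graph.findPath(from: start, to: end)
    for case let node as GKGridGraphNode in path {
        print(node.gridPosition) // each tile the piece should step onto
    }
}
```

From there, a touch handler would convert the tapped point to a grid position, run findPath, and feed the resulting tile positions to the piece as move actions. Obstacles can be modeled by removing their nodes from the graph with remove(_:).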
Posted
by
Post not yet marked as solved
2 Replies
144 Views
I select a usdz file and add a reference node with the file URL.

let scene = SCNScene()
let refNode = SCNReferenceNode(url: usdzPath)
refNode?.position = SCNVector3(0, -0.1, -0.25)
refNode?.load()
refNode?.scale = SCNVector3(0.01, 0.01, 0.01)
scene.rootNode.addChildNode(refNode!)
sceneView.scene = scene
sceneView.delegate = self
sceneView.autoenablesDefaultLighting = true
sceneView!.allowsCameraControl = true

This works for me: the 3D object is shown. But I want to change the usdz file's texture in Swift. How can I do that?
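Changing the texture at run time comes down to replacing material contents on the loaded node tree. A hedged sketch (the asset name "newTexture" is a placeholder, and this replaces the diffuse texture on every material, which may be broader than wanted):

```swift
import UIKit
import SceneKit

// Sketch: walk the reference node's hierarchy and swap the diffuse
// texture on every material it finds.
func replaceTextures(in node: SCNNode, with image: UIImage) {
    node.enumerateHierarchy { child, _ in
        for material in child.geometry?.materials ?? [] {
            material.diffuse.contents = image
        }
    }
}

// Usage, with the refNode from the post:
// if let texture = UIImage(named: "newTexture") {
//     replaceTextures(in: refNode!, with: texture)
// }
```

To target one specific part of the model, filter by node name (inspect the usdz hierarchy in Xcode's scene editor to find it) instead of replacing every material.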
Posted
by
Post not yet marked as solved
1 Reply
90 Views
Hi, I'm a fresh beginner with SceneKit, and I have a problem I can't understand or work around.

The idea: a lunar module, "Eagle", landing on the Moon. A camera and a spotlight are supposed to track it. All nodes were created in a scene with the scene editor and/or directly in code from SCN routines.

The code:

camera = sceneView.scene!.rootNode.childNodes.filter({ $0.name == "theCamera" }).first! // -> got it
theEagle = sceneView.scene!.rootNode.childNodes.filter({ $0.name == "Eagle2" }).first! // -> got it too
spotLight = sceneView.scene!.rootNode.childNodes.filter({ $0.name == "theSpotLight" }).first! // done.

Then the constraints:

let cameraLookAtTheEagle = SCNLookAtConstraint(target: theEagle)
let spotLookAtTheEagle = SCNLookAtConstraint(target: theEagle)
cameraLookAtTheEagle.isGimbalLockEnabled = true
spotLookAtTheEagle.isGimbalLockEnabled = true
camera.constraints = [cameraLookAtTheEagle]
spotLight.constraints = [spotLookAtTheEagle]

The problem: when my Eagle starts moving up and down, it is perfectly tracked by the spot but ignored by the camera, and of course it quickly leaves the field of view. I tried using a single constraint for both: same result. I tried implementing the constraints with the scene editor: same result. I tried implementing the camera and the spot directly in code: same result. I tried setting isGimbalLockEnabled = false: same result. I even tried changing the order of the actions on the spot and camera: same result. Whatever I try, the camera refuses to move an inch. So where's the problem? Why does the spot always do the job, and why doesn't the camera? Any advice or clue will be as welcome as fresh water in the desert. Thanks.
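As a sanity check, here is a minimal, self-contained sketch (all names are placeholders, not the poster's scene) in which a camera with an SCNLookAtConstraint does track a vertically moving node; comparing it against the real setup may help isolate the difference:

```swift
import SceneKit

let scene = SCNScene()

// A target node that moves up and down forever.
let target = SCNNode(geometry: SCNSphere(radius: 0.2))
scene.rootNode.addChildNode(target)
target.runAction(.repeatForever(.sequence([
    .moveBy(x: 0, y: 2, z: 0, duration: 2),
    .moveBy(x: 0, y: -2, z: 0, duration: 2)
])))

// A camera constrained to look at the target.
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(0, 1, 5)
let lookAt = SCNLookAtConstraint(target: target)
lookAt.isGimbalLockEnabled = true
cameraNode.constraints = [lookAt]
scene.rootNode.addChildNode(cameraNode)

// The view must actually render through this node:
// sceneView.pointOfView = cameraNode
```

One detail worth checking (a guess, not a diagnosis): the view's pointOfView must be the constrained camera node itself; if the view is rendering through a different camera node (for example, one SceneKit substitutes when user camera control is active), constraints on "theCamera" will have no visible effect.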
Posted
by
Post not yet marked as solved
1 Reply
84 Views
In Xcode, create a new scene file and open it in the SceneKit editor. Go to Editor > Create > Geometry > Box. Open the Attributes Inspector, set the Subdivision level to 1, and enable Tessellation (optional). I'd expect to see a chamfered cube, but instead there are gaps in the corners. Is there a way to fix this?
Posted
by
Post not yet marked as solved
0 Replies
78 Views
In an ARKit game using SceneKit, I'm detecting collisions between the real user and walls that I build out of SCNBoxes. I have a cylinder that follows the device's pointOfView to accomplish that. The node of boxes is set as dynamic and the cylinder is set as kinematic; that way, when the user moves and collides with the boxes, the boxes will move along. All of the above works exactly right. I use both the physicsWorld didBegin and didEnd contact delegate methods, because I need to know both when collisions start and when they end in order to disable and re-enable a certain button. So I have standard code like the following. In didBegin I hide the button:

func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    if contact.nodeA.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue ||
       contact.nodeB.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue {
        freezeButton.isHidden = true
    }
}

Then in didEnd I re-enable the button:

func physicsWorld(_ world: SCNPhysicsWorld, didEnd contact: SCNPhysicsContact) {
    freezeButton.isHidden = false
}

Most of the time this works fine. However, sometimes the didBegin method runs last, even though I clearly moved away and there are no more collisions, so my app still thinks we're in a collision state and my button isn't restored. I have SCNDebugOptions.showPhysicsShapes in debugOptions, so I can clearly see that the cylinder is not touching the walls at all. Yet, while debugging the code, I can see that sometimes didBegin ran last. I then need to cause a new collision and walk away again to have didEnd called and get my button back. But I cannot expect the user to perform such a workaround; it makes my game feel buggy. I'm not using the didUpdate contact call, as I don't need it. How come didBegin is called last when the collision has clearly ended? What could cause this to happen, and what could I do to fix it?
I am thinking of a workaround that checks whether some time has passed since the last didBegin call, which would indicate it is an old collision. Or is there something else I could check to see whether a collision is currently still in progress? AFAIK, SceneKit doesn't have a function similar to SpriteKit's allContactedBodies. Is there something else I could use to check whether two bodies are still in contact? Thanks!
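The time-based idea from the post can be sketched like this (the 0.25 s timeout is an assumption to tune, and this mitigates the symptom rather than fixing the out-of-order callbacks):

```swift
import Foundation

// Sketch: remember when the last contact callback fired, and treat the
// contact as over once no callback has arrived within `timeout` seconds.
class ContactWatchdog {
    private var lastContactTime: TimeInterval = 0
    let timeout: TimeInterval = 0.25 // assumed threshold, tune as needed

    // Call from didBegin (and from didUpdate, if enabled, so an ongoing
    // contact keeps refreshing the timestamp).
    func noteContact() {
        lastContactTime = Date().timeIntervalSinceReferenceDate
    }

    // Poll from the render loop or a timer; un-hide the button when false.
    var isContactActive: Bool {
        Date().timeIntervalSinceReferenceDate - lastContactTime < timeout
    }
}
```

Enabling the didUpdate delegate callback (mentioned but unused in the post) pairs naturally with this: while a contact genuinely persists, didUpdate keeps refreshing the timestamp, and a late, stale didBegin can no longer pin the button in the hidden state for long.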
Posted
by
Post not yet marked as solved
0 Replies
130 Views
As mentioned in the title, whenever an iMac19,2 owner running 10.15.7 runs my app, which features a scene with grain noise and rendering against a transparent background, it crashes like so:

Crashed: CVDisplayLink
0   libobjc.A.dylib          0x681d    objc_msgSend + 29
1   SceneKit                 0x250780  SCNMTLComputeCommandEncoder::dispatchOnTexture2DWithoutOptimizedThreadGroupPerGrid(id<MTLTexture>, id<MTLComputePipelineState>) + 104
2   SceneKit                 0x15abf6  C3D::getGrainNoise256(id<MTLCommandBuffer>, SCNMTLRenderContext*, C3D::RenderGraphResourceManager&) + 403
3   SceneKit                 0xc6a9e   C3D::CompositePass::compile() + 1410
4   SceneKit                 0x391a46  C3D::RenderGraph::allocateResources() + 2198
5   SceneKit                 0x14dc2d  C3DEngineContextRenderWithRenderGraph + 52
6   SceneKit                 0x22666b  -[SCNRenderer _renderSceneWithEngineContext:sceneTime:] + 532
7   SceneKit                 0x227222  -[SCNRenderer _drawSceneWithNewRenderer:] + 281
8   SceneKit                 0x227786  -[SCNRenderer _drawScene:] + 46
9   SceneKit                 0x227c8b  -[SCNRenderer _drawAtTime:] + 965
10  SceneKit                 0x214d29  -[SCNView _drawAtTime:WithContext:] + 542
11  SceneKit                 0x214653  -[SCNView SCN_displayLinkCallback:] + 306
12  SceneKit                 0x1af4f8  __69-[NSObject(SCN_DisplayLinkExtensions) SCN_setupDisplayLinkWithQueue:]_block_invoke + 49
13  SceneKit                 0x2a1468  __36-[SCNDisplayLink _callbackWithTime:]_block_invoke.13 + 52
14  libdispatch.dylib        0x2658    _dispatch_client_callout + 8
15  libdispatch.dylib        0xe6ec    _dispatch_lane_barrier_sync_invoke_and_complete + 60
16  SceneKit                 0x2a13c5  -[SCNDisplayLink _callbackWithTime:] + 307
17  SceneKit                 0x2a10c6  _cvDisplayLinkCallback + 261
18  CoreVideo                0x2e92    CVDisplayLink::performIO(CVTimeStamp*) + 230
19  CoreVideo                0x22c8    CVDisplayLink::runIOThread() + 626
20  libsystem_pthread.dylib  0x6109    _pthread_start + 148
21  libsystem_pthread.dylib  0x1b8b    thread_start + 15
Posted
by
Post not yet marked as solved
1 Reply
144 Views
Hi, I'm making an app to render glb models with SceneKit (using this library: https://github.com/warrenm/GLTFKit2). Everything works great, but some nodes become invisible when moving the camera (this usually happens after performing an SCNAnimation). I know that in BabylonJS there is an option to make a node always active (shortcutting the frustum-clipping phase); see this post: https://forum.babylonjs.com/t/models-become-non-visible-when-moving-around/20949. I would like to know if there is a similar option in SceneKit, or any other solution to my problem. Here is a video of the problem: https://drive.google.com/file/d/1eUsiUk5dEcV72GhB-nHk9rKTF6dQttei/view?usp=sharing Thanks!
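One SceneKit-side possibility (an assumption about the cause, not verified against this library): animated or skinned meshes can be frustum-culled against a bounding volume that no longer matches the deformed geometry. A hedged workaround, roughly equivalent to Babylon's "always active", is to inflate the node's bounding box so it is never culled:

```swift
import SceneKit

// Sketch: give a node a generously sized bounding box so frustum
// culling never hides it. The extents are arbitrary assumptions;
// pick values that safely contain the animated model.
func inflateBounds(of node: SCNNode) {
    node.boundingBox = (min: SCNVector3(-100, -100, -100),
                        max: SCNVector3(100, 100, 100))
}
```

Note that an oversized bounding box trades away culling efficiency for that node, so it is best applied only to the nodes that actually disappear.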
Posted
by
Post not yet marked as solved
0 Replies
121 Views
I'm animating body and face movements using skeleton animation, SceneKit, and a custom Metal shader. I need to deform some vertices in the vertex shader, and therefore I need skinningJointMatrices in the vertex shader. However, if I just add the line float4 skinningJointMatrices[183]; to the NodeBuffer:

struct NodeBuffer {
    float4x4 inverseModelTransform;
    float4x4 inverseModelViewTransform;
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float2x3 boundingBox;
    float4 skinningJointMatrices[765];
};

I get the following assertion:

[SceneKit] Assertion 'C3DSkinnerGetEffectiveCalculationMode(skinner, C3DNodeGetGeometry(context->_nodeUniforms.instanceNode)) == kC3DSkinnerCalculationModeGPUVertexFunction' failed. skinningJointMatrices should only be used when skinning is done in the vertex function

Is there a way to work around this assert? The code seems to be working fine despite the assertion.
Posted
by
Post marked as solved
1 Reply
242 Views
I'm trying to use data returned from the RoomCaptureSession delegate to draw four corners of a door in SceneKit. The CapturedRoom.Surface class has Dimensions and Transform members. I was told in the WWDC 2022 RoomPlan Slack lounge: "You can use transform and dimensions parameters to draw lines. The 4 corners can be inferred from those 2 parameters: the first column of the transform is the "right" vector, and second is the "up" vector. The fourth column is the position of the wall/door/opening/window/object Combining those unit vectors with the dimensions vector will give you the corners." So I think this is a basic vector question. I'm not sure what is meant by "Combining those unit vectors with the dimensions vector". I've tried a few things, but I'm not sure how to go about this.
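Following the Slack answer quoted above, "combining those unit vectors with the dimensions vector" can be read as: scale the right and up vectors by half the width and height and add/subtract them from the center. A sketch (which dimension components map to width and height is an assumption to verify against the RoomPlan documentation):

```swift
import simd

// Sketch: derive the four corners of a door/wall surface from its
// 4x4 transform and its dimensions vector.
func corners(transform: simd_float4x4, dimensions: simd_float3) -> [simd_float3] {
    // Column 0 is the "right" unit vector, column 1 is "up",
    // column 3 is the surface's center position.
    let c0 = transform.columns.0
    let c1 = transform.columns.1
    let c3 = transform.columns.3
    let right = simd_float3(c0.x, c0.y, c0.z)
    let up = simd_float3(c1.x, c1.y, c1.z)
    let center = simd_float3(c3.x, c3.y, c3.z)

    // Assumed mapping: dimensions.x = width, dimensions.y = height.
    let halfWidth = dimensions.x / 2
    let halfHeight = dimensions.y / 2

    return [
        center - right * halfWidth - up * halfHeight, // bottom-left
        center + right * halfWidth - up * halfHeight, // bottom-right
        center + right * halfWidth + up * halfHeight, // top-right
        center - right * halfWidth + up * halfHeight  // top-left
    ]
}
```

Each returned point is in the same (world) coordinate space as the transform, so they can be fed directly into SCNVector3 positions for line drawing.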
Posted
by
Post not yet marked as solved
0 Replies
153 Views
I am attempting to build an AR app using Storyboard and SceneKit. When I went to run an existing app I had already used, it ran but nothing happened. I thought this behavior was odd, so I decided to start from scratch on a new project. I started with the default AR project for Storyboard and SceneKit, and upon running it immediately fails with a nil-unwrapping error on the scene. The scene file is obviously there. I am also given four build-time warnings:

Could not find bundle inside /Library/Developer/CommandLineTools
failed to convert file with failure reason: *** -[__NSPlaceholderDictionary initWithObjects:forKeys:count:]: attempt to insert nil object from objects[0]
Conversion failed, will simply copy input to output.
Copy failed file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn -> file:///Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn error:Error Domain=NSCocoaErrorDomain Code=516 "“ship.scn” couldn’t be copied to “art.scnassets” because an item with the same name already exists." UserInfo={NSSourceFilePathErrorKey=/Users/kruegerwilliams/Library/Developer/Xcode/DerivedData/ARtest-bjuwvdjoflchdaagofedfxpravsc/Build/Products/Debug-iphoneos/ARtest.app/art.scnassets/ship.scn, NSUserStringVariant=(

I am currently unsure how to fix these errors. It appears they must come from the command line tools, because after moving the device support files back to a stable version of Xcode the same issue is present. Is anyone else having these issues?
Posted
by
Post not yet marked as solved
0 Replies
147 Views
If you use SceneKit with ARKit, the AR scene uses the SceneKit renderer. Should you use SCNScene.write() to create a USDZ file and then open the USDZ file with AR Quick Look, AR Quick Look renders the AR scene with the RealityKit renderer. The ARKit-in-app -> USDZ -> AR Quick Look renderers are not the same and could produce different appearances. Have you seen similar problems with SceneKit -> AR Quick Look rendering? I am using such a pipeline with PBR lighting and have observed that the resulting differences in material properties are large. (The geometries are fine.) I have had to compensate by recreating the SCNScene materials with modified properties. The agreement between the app scene and the AR Quick Look scene is greatly improved but unfortunately still not acceptable for critical evaluation of commercial products in interior design.
Posted
by
Post marked as solved
4 Replies
503 Views
We have an app that does something similar to RoomPlan. We use SceneKit to draw all the wall lines. We have noticed that RoomPlan has trouble detecting walls around 7 inches or shorter; our app has tools to deal with this. The difference in time to capture the walls of a room between our app and the RoomPlan demo app seems negligible, but we could save time in our app with auto-detection of all the other things, like windows, doors, openings, cabinets, etc. Are the lines you see drawn in the RoomPlan demo app SCNNodes? If so, will we ever be able to call .addNode() inside the RoomPlan framework? If not, does RoomPlan use SpriteKit to draw? We use an ARSCNView to keep track of all the lines in our app, so changing that member to an instance of RoomCaptureView seems like a non-starter: starting a new RoomCaptureSession when we're ready to scan for objects other than walls wipes all the wall lines we've previously captured. Thanks, Mike
Posted
by
Post not yet marked as solved
0 Replies
180 Views
Hello, I am using YOLOv3 with Vision to classify objects during my AR session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have a wrong aspect ratio. Does someone know what the issue might be? This is how I am currently transforming the bounding boxes. Assumptions: the app is in portrait mode, and the Vision request is performed with centerCrop and orientation .right.

Fix the coordinate origin of Vision:

let newY = 1 - boundingBox.origin.y
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)

Undo the center cropping of Vision:

let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0, y: 1 - (imageWidth/imageHeight + ((imageHeight-imageWidth) / (imageHeight*2))), width: 1, height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)

Bring the coordinates back to normalized form:

let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)

Transform to the scene view (I assume the error is here; I found out while debugging that the aspect ratio of the bounding box changes here):

let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)

Scale up to viewport size:

let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)

Thanks in advance for any help!
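For reference, the five steps above gathered into one hypothetical function (the same math as in the post, just composed so each step feeds the next instead of starting from boundingBox again):

```swift
import ARKit
import Vision

// Sketch: Vision bounding box (normalized, centerCrop, .right orientation)
// -> view coordinates, mirroring the steps described above.
func viewRect(for boundingBox: CGRect,
              frame: ARFrame,
              sceneView: ARSCNView) -> CGRect {
    // 1. Flip Vision's bottom-left origin.
    var box = CGRect(x: boundingBox.origin.x,
                     y: 1 - boundingBox.origin.y,
                     width: boundingBox.width,
                     height: boundingBox.height)

    // 2. Undo center cropping (image is rotated, so swap width/height).
    let res = frame.camera.imageResolution
    let imageWidth = res.height
    let imageHeight = res.width
    let roi = CGRect(x: 0,
                     y: 1 - (imageWidth / imageHeight + ((imageHeight - imageWidth) / (imageHeight * 2))),
                     width: 1,
                     height: imageWidth / imageHeight)
    box = VNImageRectForNormalizedRectUsingRegionOfInterest(box, Int(imageWidth), Int(imageHeight), roi)

    // 3. Back to normalized coordinates.
    box = box.applying(CGAffineTransform(scaleX: 1 / imageWidth, y: 1 / imageHeight))

    // 4. Into the view's normalized space.
    let viewport = sceneView.frame.size
    box = box.applying(frame.displayTransform(for: .landscapeRight, viewportSize: viewport))

    // 5. Scale to viewport points.
    return box.applying(CGAffineTransform(scaleX: viewport.width, y: viewport.height))
}
```

One thing worth double-checking (a guess, not a confirmed fix): the post says the app runs in portrait, while step 4 passes .landscapeRight to displayTransform(for:viewportSize:); that parameter is the interface orientation of the view, which would suggest .portrait here.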
Posted
by
Post not yet marked as solved
3 Replies
223 Views
I'm trying to optimize the draw calls in an existing scene with flattenedClone(). From what I can tell, enumerateChildNodes would be a good way to go through the scene tree and add the nodes to be flattened to a parent node. I saw something like this in a tutorial, but the compiler says that 'withName' is an extra argument:

gameScene.rootNode.enumerateChildNodes(withName: "//*") { (node, stop) in
    if node.name == "Large Tree" {
        flattenParent.addChildNode(node)
        node.removeFromParentNode()
    }
}

Any guidance on this usage? Also, would a switch statement or multiple || be optimal when searching through several different mesh nodes?
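On the error itself: enumerateChildNodes(withName:) is SpriteKit's SKNode API, which the tutorial was presumably using; SceneKit's SCNNode version takes only a closure. A sketch of the SceneKit form, reusing the names from the post ("Large Tree", flattenParent, gameScene); it collects matches first so the hierarchy isn't mutated mid-enumeration:

```swift
import SceneKit

// Sketch: SCNNode has no withName: variant, so filter in the predicate.
let flattenParent = SCNNode()
let trees = gameScene.rootNode.childNodes { node, _ in
    node.name == "Large Tree"
}
for tree in trees {
    // addChildNode reparents the node, removing it from its old parent.
    flattenParent.addChildNode(tree)
}
// Replace the originals with a single flattened draw call.
gameScene.rootNode.addChildNode(flattenParent.flattenedClone())
```

As for switch versus multiple ||: for a handful of node names, a Set lookup reads cleanest and is O(1) per node, e.g. meshNames.contains(node.name ?? "") with let meshNames: Set = ["Large Tree", "Small Tree"]; the performance difference between the three forms is negligible next to the traversal itself.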
Posted
by