SceneKit


Create 3D games and add 3D content to apps using SceneKit's high-level scene descriptions.

SceneKit Documentation

Posts under SceneKit tag

123 Posts
Post not yet marked as solved
1 Reply
228 Views
Good day, everyone. I have some questions. I get a color frame from the iPad camera and render it in an SCNView using Metal. The frame data's format is kCVPixelFormatType_420YpCbCr8BiPlanarFullRange, and I can't change the pixel format because it comes from a private SDK. I get a CMSampleBuffer every frame, and a Metal compute shader converts it to an RGBA color texture. Something is off: the Metal texture produced from YCbCr is darker or brighter than the original image. This is my Metal shader code:

// https://github.com/google/filament/blob/main/filament/backend/src/metal/MetalExternalImage.mm
#include <metal_stdlib>
#include <simd/simd.h>

using namespace metal;

kernel void ycbcrToRgb(texture2d<half, access::read>  inYTexture    [[texture(0)]],
                       texture2d<half, access::read>  inCbCrTexture [[texture(1)]],
                       texture2d<half, access::write> outTexture    [[texture(2)]],
                       uint2                          gid           [[thread_position_in_grid]]) {
    if (gid.x >= outTexture.get_width() || gid.y >= outTexture.get_height()) {
        return;
    }
    half luminance = inYTexture.read(gid).r;
    // The color plane is half the size of the luminance plane.
    half2 color = inCbCrTexture.read(gid / 2).rg;
    half4 ycbcr = half4(luminance, color, 1.0);
    const half4x4 ycbcrToRGBTransform = half4x4(
        half4(+1.0000f, +1.0000f, +1.0000f, +0.0000f),
        half4(+0.0000f, -0.3441f, +1.7720f, +0.0000f),
        half4(+1.4020f, -0.7141f, +0.0000f, +0.0000f),
        half4(-0.7010f, +0.5291f, -0.8860f, +1.0000f));
    outTexture.write(ycbcrToRGBTransform * ycbcr, gid);
}

And this is my render code:

func render(colorFrame: STColorFrame) {
    // var frame = CGImage.create(sampleBuffer: colorFrame.sampleBuffer)!
    let buffer = CMSampleBufferGetImageBuffer(colorFrame.sampleBuffer)!
    convertVideoFrameToImage1(buffer)
    scnview.scene?.background.contents = outTexture
}

private func convertVideoFrameToImage1(_ buffer: CVImageBuffer) {
    // kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))

    let commandQueue = device!.makeCommandQueue()!
    let library = device!.makeDefaultLibrary()!

    let commandBuffer = commandQueue.makeCommandBuffer()!
    let encoder = commandBuffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(
        try! device!.makeComputePipelineState(function:
            library.makeFunction(name: "ycbcrToRgb")!))

    // Input: extract the Y and CbCr textures
    // https://stackoverflow.com/questions/58175811/how-to-convert-an-rgba-texture-to-y-and-cbcr-textures-in-metal
    let imageTextureY = createTexture(fromPixelBuffer: buffer,
                                      pixelFormat: .r8Unorm, planeIndex: 0)!
    let imageTextureCbCr = createTexture(fromPixelBuffer: buffer,
                                         pixelFormat: .rg8Unorm, planeIndex: 1)!

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)

    encoder.setTexture(imageTextureY, index: 0)
    encoder.setTexture(imageTextureCbCr, index: 1)

    if outTexture == nil {
        let descriptor = MTLTextureDescriptor()
        descriptor.textureType = .type2D
        descriptor.pixelFormat = .rgba32Float
        descriptor.width = width
        descriptor.height = height
        descriptor.usage = [.shaderWrite, .shaderRead]
        outTexture = device!.makeTexture(descriptor: descriptor)
    }

    encoder.setTexture(outTexture, index: 2)

    let numThreadgroups = MTLSize(width: 32, height: 32, depth: 1)
    let threadsPerThreadgroup = MTLSize(width: width / numThreadgroups.width,
                                        height: height / numThreadgroups.height, depth: 1)
    encoder.dispatchThreadgroups(numThreadgroups,
                                 threadsPerThreadgroup: threadsPerThreadgroup)
    encoder.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()

    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
}

The original image looks like this, but it renders in the SCNView like this. They are a little bit different... I've been thinking about it for three days. Can you help me?
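One possible culprit (an assumption, not confirmed by the post): the shader's matrix uses full-range BT.601 coefficients, while iPad camera frames are usually tagged BT.709 (a full-range BT.709 conversion would use 1.5748 for Cr to R, -0.1873 and -0.4681 for Cb/Cr to G, and 1.8556 for Cb to B). A small diagnostic sketch that reads which matrix the buffer was actually encoded with:

// A hedged diagnostic sketch: read the pixel buffer's YCbCr matrix attachment.
// If it reports ITU-R 709 while the shader uses BT.601 constants, brightness and
// saturation shift slightly, which matches the "darker or brighter" symptom.
import CoreVideo

func ycbcrMatrixName(of buffer: CVPixelBuffer) -> String {
    guard let attachment = CVBufferGetAttachment(buffer, kCVImageBufferYCbCrMatrixKey, nil)?
        .takeUnretainedValue() else {
        return "no YCbCrMatrix attachment"
    }
    if CFEqual(attachment, kCVImageBufferYCbCrMatrix_ITU_R_709_2) { return "ITU-R 709" }
    if CFEqual(attachment, kCVImageBufferYCbCrMatrix_ITU_R_601_4) { return "ITU-R 601" }
    return "\(attachment)"
}

If it reports ITU-R 709, swapping the shader constants for the BT.709 values above is the obvious next experiment; the sRGB versus linear handling of the rgba32Float output texture is another place the brightness can drift.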
Posted
by wonkieun.
Last updated
.
Post not yet marked as solved
2 Replies
175 Views
I am trying to use RoomPlan to create a rendering of a room without any objects inside. To do this I'm taking the list of walls given by CapturedRoom.walls and creating a series of SCNNodes from the information given. This way I can modify the room at will in the app. However, the walls are showing up in random places, and I'm not sure where I'm going wrong:

// roomScan is a CapturedRoom object, scene is an SCNScene
for i in 0...(roomScan.walls.endIndex - 1) {
    // Generate new wall geometry
    let scannedWall = roomScan.walls[i]
    let length = scannedWall.dimensions.x
    let width = 0.2
    let height = scannedWall.dimensions.y
    let newWall = SCNBox(
        width: CGFloat(width),
        height: CGFloat(height),
        length: CGFloat(length),
        chamferRadius: 0
    )
    newWall.firstMaterial?.diffuse.contents = UIColor.white
    newWall.firstMaterial?.transparency = 0.5

    // Generate new SCNNode
    let newNode = SCNNode(geometry: newWall)
    newNode.simdTransform = scannedWall.transform
    scene.rootNode.addChildNode(newNode)
}
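A possible cause to check (an assumption, not confirmed): SCNBox maps width to the node's local x axis, height to y and length to z, while a captured wall extends along its local x axis (dimensions.x) and is thin along z. Assigning dimensions.x to the box length therefore builds each wall rotated 90 degrees around y before the captured transform is applied, which can look like random placement. A minimal sketch with the axes swapped:

// A hedged sketch: keep dimensions.x on the box width so the geometry's local
// axes line up with the wall's captured transform.
import SceneKit
import RoomPlan
import UIKit

func wallNodes(for room: CapturedRoom) -> [SCNNode] {
    room.walls.map { wall in
        let box = SCNBox(width: CGFloat(wall.dimensions.x),   // extent along local x
                         height: CGFloat(wall.dimensions.y),  // extent along local y
                         length: 0.2,                         // wall thickness along local z
                         chamferRadius: 0)
        box.firstMaterial?.diffuse.contents = UIColor.white
        box.firstMaterial?.transparency = 0.5
        let node = SCNNode(geometry: box)
        node.simdTransform = wall.transform   // transforms are in the capture's world space
        return node
    }
}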
Posted
by nnaiman.
Last updated
.
Post not yet marked as solved
0 Replies
56 Views
We are facing an issue with rendering a 3D model in a SceneView, and it is creating a problem when we play animations on the model. Is there a callback to let us know when the 3D model is fully rendered? When an animation is played right after the model is rendered, sometimes it plays and at other times it does not play as designed. We are using the method below:

runAction(action, completionHandler: {
    <#code#>
})

The completion handler does not always get invoked. Kindly help.
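One approach worth trying (the prepare API is real SceneKit, but applying it to this case is an assumption): preload the model with prepare(_:completionHandler:) and only start the action once the renderer reports the resources are ready, so the animation no longer races the first frame:

// A minimal sketch: prepare the node's resources, then run the action on the
// main queue once preparation succeeds.
import SceneKit

func run(_ action: SCNAction, on node: SCNNode, in view: SCNView) {
    view.prepare([node]) { success in
        guard success else { return }
        DispatchQueue.main.async {
            node.runAction(action) {
                // Called when the action itself finishes.
                print("animation finished")
            }
        }
    }
}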
Posted Last updated
.
Post marked as solved
5 Replies
861 Views
We have an app that does something similar to RoomPlan. We use SceneKit to draw all the wall lines. We have noticed that RoomPlan has trouble detecting walls around 7 inches or shorter; our app has tools to deal with this. The difference in time to capture the walls of a room between our app and the RoomPlan demo app seems negligible, but we could save time in our app with auto-detection of all the other things like windows, doors, openings, cabinets, etc. Are the lines you see drawn in the RoomPlan demo app SCNNodes? If so, will we ever be able to call .addNode() inside the RoomPlan framework? If not, does RoomPlan use SpriteKit to draw? We use an ARSCNView to keep track of all the lines in our app. Changing that member to an instance of RoomCaptureView seems like a non-starter, and starting a new RoomCaptureSession when we're ready to scan for objects other than walls wipes all the wall lines we've previously captured. Thanks, Mike
Posted Last updated
.
Post not yet marked as solved
1 Reply
126 Views
I made a ball-drop animation in an ARSCNView; after 1 second I delete the ball and re-add it at its original position, but this occasionally crashes. Something like this:

private func dropBall() {
    // Do the ball-dropping animation
    let randomX = Float.random(in: -2...2) * 0.001
    let randomY = Float.random(in: 3...10) * 0.001
    ballNode?.physicsBody = SCNPhysicsBody(type: .dynamic, shape: nil)
    ballNode?.physicsBody?.mass = 0.005
    let position = SCNVector3(x: 0.05, y: 0.05, z: 0.05)
    let force = SCNVector3(x: randomX, y: randomY, z: 0)
    ballNode?.physicsBody?.applyForce(force, at: position, asImpulse: true)
    let tv = SCNVector4(x: -0.2 + Float(0.4 * randomCGFloat()), y: 0, z: 0.2, w: 0)
    ballNode?.physicsBody?.applyTorque(tv, asImpulse: true)

    // Remove the ball
    DispatchQueue.main.asyncAfter(deadline: .now() + 1) {
        self.ballNode?.physicsBody = nil
        self.ballNode?.removeFromParentNode()
        self.ballNode = nil

        // Add the ball again
        let ballNode = SCNReferenceNode(named: "myscn.scn")
        self.contentNode?.addChildNode(ballNode)
        self.ballNode = ballNode
    }
}

Crash stack:

Exception Type:  EXC_BAD_ACCESS (SIGSEGV)
Exception Subtype: KERN_INVALID_ADDRESS at 0x0000000000000068
Exception Codes: 0x0000000000000001, 0x0000000000000068
VM Region Info: 0x68 is not in any region.  Bytes before following region: 68719476632
      REGION TYPE                 START - END         [ VSIZE] PRT/MAX SHRMOD  REGION DETAIL
      UNUSED SPACE AT START
--->  commpage (reserved)    1000000000-7000000000    [384.0G] ---/--- SM=NUL  ...(unallocated)
Termination Reason: SIGNAL 11 Segmentation fault: 11
Terminating Process: exc handler [30984]
Triggered by Thread: 11

Thread 11 name:
Thread 11 Crashed:
0   SceneKit                0x00000001cf3ef998 btDbvtBroadphase::setAabb(btBroadphaseProxy*, btVector3 const&, btVector3 const&, btDispatcher*) + 52
1   SceneKit                0x00000001cf65126c btCollisionWorld::updateSingleAabb(btCollisionObject*) + 284
2   SceneKit                0x00000001cf65133c btCollisionWorld::updateAabbs() + 56
3   SceneKit                0x00000001cf3ff39c btCollisionWorld::performDiscreteCollisionDetection() + 32
4   SceneKit                0x00000001cf406964 btDiscreteDynamicsWorld::internalSingleStepSimulation(float) + 120
5   SceneKit                0x00000001cf3ff574 btDiscreteDynamicsWorld::stepSimulation(float, int, float) + 276
6   SceneKit                0x00000001cf507ad8 -[SCNPhysicsWorld _step:] + 156 (SCNPhysicsWorld.mm:1423)
7   SceneKit                0x00000001cf3ae9fc -[SCNRenderer _update:] + 932 (SCNRenderer.m:3814)
8   SceneKit                0x00000001cf3ae364 -[SCNRenderer _drawSceneWithNewRenderer:] + 152 (SCNRenderer.m:5303)
9   SceneKit                0x00000001cf3ae278 -[SCNRenderer _drawScene:] + 40 (SCNRenderer.m:5537)
10  SceneKit                0x00000001cf3a78b4 -[SCNRenderer _drawAtTime:] + 500 (SCNRenderer.m:5727)
11  SceneKit                0x00000001cf3a74cc -[SCNView _drawAtTime:] + 368 (SCNView.m:1542)
12  SceneKit                0x00000001cf4ba478 __83-[NSObject(SCN_DisplayLinkExtensions) SCN_setupDisplayLinkWithQueue:screen:policy:]_block_invoke + 44 (SCNDisplayLink_ARC.m:42)
13  SceneKit                0x00000001cf595fa0 -[SCNDisplayLink _displayLinkCallbackReturningImmediately] + 148 (SCNDisplayLink.m:381)
14  libdispatch.dylib       0x0000000190a267c8 _dispatch_client_callout + 16 (object.m:560)
15  libdispatch.dylib       0x00000001909fde7c _dispatch_continuation_pop$VARIANT$armv81 + 436 (inline_internal.h:2632)
16  libdispatch.dylib       0x0000000190a0f860 _dispatch_source_invoke$VARIANT$armv81 + 1552 (source.c:596)
17  libdispatch.dylib       0x0000000190a0172c _dispatch_lane_serial_drain$VARIANT$armv81 + 308 (inline_internal.h:0)
18  libdispatch.dylib       0x0000000190a022e4 _dispatch_lane_invoke$VARIANT$armv81 + 380 (queue.c:3940)
19  libdispatch.dylib       0x0000000190a0c000 _dispatch_workloop_worker_thread + 612 (queue.c:6846)
20  libsystem_pthread.dylib 0x00000001d0f3cb50 _pthread_wqthread + 284 (pthread.c:2618)
21  libsystem_pthread.dylib 0x00000001d0f3c67c start_wqthread + 8

How can I fix this? It only crashes occasionally and is not easy to reproduce. Is there a problem with deleting the node this way?
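A hedged rearrangement to try (the diagnosis, a race between the physics step on SceneKit's render queue and the main-queue removal, is an assumption based on the backtrace): schedule the removal and re-add through an SCNAction, so the mutation runs inside SceneKit's own update cycle instead of racing it. makeBall stands in for whatever code creates the replacement node:

// A minimal sketch: the wait + run sequence executes the teardown on SceneKit's
// animation/render loop, so btDiscreteDynamicsWorld is not stepping the body while
// it is being removed from another queue.
import SceneKit

func scheduleBallReset(for ballNode: SCNNode, in contentNode: SCNNode,
                       replaceWith makeBall: @escaping () -> SCNNode) {
    let wait = SCNAction.wait(duration: 1)
    let reset = SCNAction.run { node in
        node.physicsBody = nil
        node.removeFromParentNode()
        contentNode.addChildNode(makeBall())
    }
    ballNode.runAction(.sequence([wait, reset]))
}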
Posted
by haozes.
Last updated
.
Post not yet marked as solved
1 Reply
451 Views
Hello, I am using YOLOv3 with Vision to classify objects during my AR session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have the wrong aspect ratio. Does someone know what the issue might be? This is how I am currently transforming the bounding boxes. Assumptions: the app is in portrait mode, and the Vision request is performed with centerCrop and orientation .right.

Fix the coordinate origin of Vision:

let newY = 1 - boundingBox.origin.y
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)

Undo the center cropping of Vision:

let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0,
                 y: 1 - (imageWidth/imageHeight + ((imageHeight-imageWidth) / (imageHeight*2))),
                 width: 1,
                 height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)

Bring the coordinates back to normalized form:

let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)

Transform to the scene view (I assume the error is here; I found out while debugging that the aspect ratio of the bounding box changes in this step):

let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)

Scale up to viewport size:

let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)

Thanks in advance for any help!
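A simplified pipeline to compare against (assumptions: the request is run on frame.capturedImage without centerCrop and without an orientation, so the normalized boxes live in the landscape sensor image, and the interface orientation passed to displayTransform is the real one, .portrait here rather than .landscapeRight):

// A minimal sketch, not the poster's exact pipeline: map a Vision bounding box
// (normalized, lower-left origin, in the captured image space) into view points.
import ARKit

func viewRect(for visionBox: CGRect, frame: ARFrame, viewportSize: CGSize) -> CGRect {
    // Vision uses a lower-left origin; ARKit's normalized image space uses upper-left.
    let flipped = CGRect(x: visionBox.origin.x,
                         y: 1 - visionBox.origin.y - visionBox.height,
                         width: visionBox.width,
                         height: visionBox.height)
    // Normalized image coordinates -> normalized view coordinates for this orientation.
    let toView = frame.displayTransform(for: .portrait, viewportSize: viewportSize)
    // Normalized view coordinates -> points.
    let toPoints = CGAffineTransform(scaleX: viewportSize.width, y: viewportSize.height)
    return flipped.applying(toView).applying(toPoints)
}

Passing .landscapeRight to displayTransform while the interface is actually portrait is one plausible source of the wrong aspect ratio, since that transform also accounts for the crop between the camera image and the viewport.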
Posted Last updated
.
Post not yet marked as solved
1 Reply
696 Views
Desired outcome: draw a silhouette / outline / edge of constant width around meshes of different shapes in SceneKit. I'm only going for the outer silhouette and not interested in the inside edges. Eventually, I would like to have only the outline, with the inside area of the mesh either discarded or transparent. I figured that using the stencil buffer might achieve the right look.

What I tried: I had success making an SCNTechnique where the first pass scales up the mesh and gives every fragment a constant color, then the second renders it at regular size. I then tried writing to a stencil at regular size, then reading from the stencil buffer in the second (scaled-up) pass, so that ONLY the outline would be visible. I can't seem to get this to work. I may not have the right "stencilStates" dictionary, though it is hard to know what is wrong, as I can't generate any different visual feedback or errors when trying different keys and values. I thought that clearing the stencil buffer (all 0's), replacing with 1's during the first pass, and then reading anything that is "notEqual" to 1 would give me just the outline. However, I can't seem to stencil out any part of the second pass.

Here is a playground where I've replicated what I tried: https://github.com/mackhowell/scenekit-outline-shader-scntechnique/tree/stencil-test (the linked branch "stencil-test" is where I've documented this issue).

And here is my SCNTechnique dictionary:

let stencilPass: [String: Any] = [
    "program": "outline",
    "inputs": [
        "a_vertex": "position-symbol",
        "modelViewProjection": "mvpt-symbol",
    ],
    "outputs": [
        "stencil": "COLOR"
    ],
    "draw": "DRAW_NODE",
    "stencilStates": [
        "enable": true,
        "clear": true,
        "behavior": [
            "depthFail": "keep",
            "fail": "keep",
            "pass": "replace",
            "function": "always",
            "referenceValue": 1
        ]
    ]
]

let embiggenPass: [String: Any] = [
    "program": "embiggen",
    "inputs": [
        "a_vertex": "position-symbol",
        "modelTransform": "mt-symbol",
        "viewTransform": "vt-symbol",
        "projectionTransform": "pt-symbol",
    ],
    "outputs": [
        "color": "COLOR"
    ],
    "draw": "DRAW_NODE",
    "stencilStates": [
        "behavior": [
            "depthFail": "keep",
            "fail": "keep",
            "pass": "keep",
            "function": "notEqual",
            "referenceValue": 1
        ]
    ]
]

let technique: [String: Any] = [
    "passes": [
        "embiggen": embiggenPass,
        "stencil": stencilPass
    ],
    "sequence": [
        "stencil",
        "embiggen"
    ],
    "symbols": [
        "position-symbol": ["semantic": "vertex"],
        "mvpt-symbol": ["semantic": "modelViewProjectionTransform"],
        "mt-symbol": ["semantic": "modelTransform"],
        "vt-symbol": ["semantic": "viewTransform"],
        "pt-symbol": ["semantic": "projectionTransform"],
    ]
]

Thanks!
Posted Last updated
.
Post not yet marked as solved
0 Replies
111 Views
I've got a Game @StateObject in my app that's passed to my main ContentView. I'm trying to figure out how best to create a SCNSceneRendererDelegate instance that has a reference to the Game state, and then pass that to the SceneView inside my ContentView. I'm trying to do it like this, but obviously it doesn't work because self isn't available at init time:

struct ContentView: View {
    let game: Game

    var scene = SCNScene(named: "art.scnassets/ship.scn")

    var cameraNode: SCNNode? {
        self.scene?.rootNode.childNode(withName: "camera", recursively: false)
    }

    var rendererDelegate = RendererDelegate(game: self.game) // Cannot find 'self' in scope

    var body: some View {
        SceneView(scene: self.scene,
                  pointOfView: self.cameraNode,
                  delegate: self.rendererDelegate)
    }
}

The intent is that in my renderer delegate, I'll update my game's simulation state. Because my game state is an ObservableObject, everything else (I have a bunch of SwiftUI) should update.
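A minimal sketch of one way around the self problem (Game and RendererDelegate below are stand-ins for the poster's types): build the delegate in an explicit initializer, where both game and self are available.

import SwiftUI
import SceneKit

// Stand-ins for the poster's types, just so the sketch is self-contained.
final class Game: ObservableObject {}
final class RendererDelegate: NSObject, SCNSceneRendererDelegate {
    let game: Game
    init(game: Game) { self.game = game }
}

struct ContentView: View {
    let game: Game
    let rendererDelegate: RendererDelegate
    let scene = SCNScene(named: "art.scnassets/ship.scn")

    init(game: Game) {
        self.game = game
        // Stored properties other than game are initialized here, so the delegate
        // can capture the same Game instance the view was given.
        self.rendererDelegate = RendererDelegate(game: game)
    }

    var cameraNode: SCNNode? {
        scene?.rootNode.childNode(withName: "camera", recursively: false)
    }

    var body: some View {
        SceneView(scene: scene,
                  pointOfView: cameraNode,
                  delegate: rendererDelegate)
    }
}

Holding the delegate in a @StateObject (or owning it inside the Game object itself) is an alternative if the delegate should survive view re-creation.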
Posted
by JetForMe.
Last updated
.
Post not yet marked as solved
0 Replies
152 Views
How can you make a physics object that can be pushed back and forth in one direction but not to the sides? There should be high static friction to the sides, for example for a sledge, skis, a simplified car, or a rail cart. You can achieve a somewhat similar result with SCNPhysicsVehicle, but it is only satisfying on a flat surface: on a slope the wheels always slowly slip to the sides. You can reduce this by using extreme values for friction and braking power, but that causes other issues. Also, with SCNPhysicsVehicle the automatic resting does not work.
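One workaround to consider (a sketch, not a built-in SceneKit feature; it assumes iOS, where SCNVector3 components are Float): cancel the sideways component of the body's velocity after each physics step, which behaves like very high lateral static friction.

// A minimal sketch: project the velocity onto the node's sideways axis and
// subtract that component, leaving forward/backward and vertical motion intact.
import SceneKit
import simd

func cancelLateralVelocity(of node: SCNNode) {
    guard let body = node.physicsBody else { return }
    let right = simd_normalize(node.simdWorldRight)   // the axis to "lock"
    var v = simd_float3(Float(body.velocity.x), Float(body.velocity.y), Float(body.velocity.z))
    v -= simd_dot(v, right) * right                   // strip the sideways component
    body.velocity = SCNVector3(x: v.x, y: v.y, z: v.z)
}

// Called from the renderer delegate, e.g.
// func renderer(_ renderer: SCNSceneRenderer, didSimulatePhysicsAtTime time: TimeInterval) {
//     cancelLateralVelocity(of: sledNode)
// }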
Posted
by hr_.
Last updated
.
Post not yet marked as solved
1 Reply
224 Views
Hi, in our workflow, 3D objects of type glTF are provided to the iOS app via API endpoints. Using different file types is not an option here. Inside the iOS app, the glb file is converted to a scn file using a third-party framework. The nodes in the converted scn file look as expected and similar to the original glb file. In the second step, the scn file should be converted to a USDZ file for use in RealityKit. As far as I know, there is only one way to convert scn to USDZ inside an iOS app, which is an undocumented approach: assign a usdz extension to the URL and use write(to:options:delegate:progressHandler:).

let path = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
    .appendingPathComponent("model.usdz")
sceneView.scene.write(to: path)

In the converted USDZ file, all the one-sided (transparent) materials are changed to non-transparent materials. We tested on a car object, and all the window glass became opaque after converting. On the other hand, if we use the Reality Converter app on macOS to convert the glb file directly to USDZ, every node converts as expected and it works fine. Is there any workaround to fix that issue? Or any upcoming update to let us use glb files in the app or successfully convert them to USDZ inside the iOS app? Thanks
Posted Last updated
.
Post not yet marked as solved
0 Replies
198 Views
In this sample project I'm running a texture that's 13200x10200 and it's working fine. When I run the code in my own project it errors out, saying the max width/height is 8192. I use the same simulator and can't find any difference in my scheme or project settings. What am I overlooking? I am able to build to my iPad mini 5.
Posted
by datfrog.
Last updated
.
Post not yet marked as solved
1 Reply
181 Views
I'm making an app that displays the image of the rear camera in real time (in an ARView) ✅ and, when it detects a QR code, reads the text content of the QR code. (I don't know how to do this in real time, i.e. get the data from the AR session.)

func getQRCodeContent(_ pixel: CVPixelBuffer) -> String {
    let requestHandler = VNImageRequestHandler(cvPixelBuffer: pixel, options: [:])
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.qr]
    try! requestHandler.perform([request])
    let result = request.results?.first?.payloadStringValue
    if let result = result {
        return result
    } else {
        return "non"
    }
}

The app should then do some logic with the content and display the corresponding AR model in the AR view. I know I have to feed images into the Vision framework. I started with AVFoundation, but I found that when the ARView is loaded the AVCaptureSession is paused, so I want to feed the AR session's frames into Vision instead. However, all the tutorials I can find use storyboards and UIKit to build this feature; I don't know how to do it in SwiftUI at all. I tried to extend ARView:

extension ARView: ARSessionDelegate {
    func renderer(_ renderer: SKRenderer, willRenderScene scene: SCNScene, atTime time: TimeInterval) {
        let capturedImage = session.currentFrame?.capturedImage
        print(capturedImage)
    }
}

struct ARViewCustom: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = arView
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

No error, but it doesn't work.
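A minimal sketch of one way to get frames (names are illustrative): use a Coordinator as the ARSessionDelegate and read each frame's capturedImage in session(_:didUpdate:). The renderer(_:willRenderScene:atTime:) method above is not part of ARSessionDelegate, so ARKit never calls it on the ARView.

import SwiftUI
import RealityKit
import ARKit
import Vision

struct ARViewCustom: UIViewRepresentable {
    func makeCoordinator() -> Coordinator { Coordinator() }

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.session.delegate = context.coordinator
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}

    final class Coordinator: NSObject, ARSessionDelegate {
        func session(_ session: ARSession, didUpdate frame: ARFrame) {
            // Note: this fires every frame; in a real app throttle it or move the
            // Vision work off the main thread.
            let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
            let request = VNDetectBarcodesRequest()
            request.symbologies = [.qr]
            try? handler.perform([request])
            if let payload = request.results?.first?.payloadStringValue {
                print(payload)   // react to the QR content here
            }
        }
    }
}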
Posted
by fjybiocs.
Last updated
.
Post not yet marked as solved
4 Replies
1.2k Views
Hi guys, I am creating MTLTextures from CIImages. Everything is sRGB, but when it shows up in SceneKit it looks too pale, as if the data were being read as linear RGB. There are no lights in the scene; it is purely a scene with SCNPlanes that have the textures as their diffuse material, plus a camera. Any ideas what I can do to get a normal representation? (Below, a picture of the CIImage in the small window and in the SCNView.) All the best, Christoph
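One thing worth trying (an assumption based on the washed-out symptom, not a confirmed diagnosis): if the texture bytes are sRGB-encoded but the MTLTexture uses a linear pixel format such as .rgba8Unorm, SceneKit samples the gamma-encoded values as if they were linear and the image looks pale. A sketch that declares the destination texture with an sRGB format before rendering the CIImage into it:

// A hedged sketch: an .rgba8Unorm_srgb texture tells Metal/SceneKit that the
// stored bytes are gamma-encoded, so they get linearized on read.
import CoreImage
import Metal

func makeTexture(from image: CIImage, device: MTLDevice,
                 context: CIContext, commandBuffer: MTLCommandBuffer) -> MTLTexture? {
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba8Unorm_srgb,            // sRGB view of the data
        width: Int(image.extent.width),
        height: Int(image.extent.height),
        mipmapped: false)
    descriptor.usage = [.shaderRead, .shaderWrite, .renderTarget]
    guard let texture = device.makeTexture(descriptor: descriptor) else { return nil }

    let srgb = CGColorSpace(name: CGColorSpace.sRGB) ?? CGColorSpaceCreateDeviceRGB()
    context.render(image,
                   to: texture,
                   commandBuffer: commandBuffer,
                   bounds: image.extent,
                   colorSpace: srgb)
    return texture
}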
Posted Last updated
.
Post not yet marked as solved
1 Reply
206 Views
I have been exploring different options to integrate 3D shapes and objects into my app. I came across Spline, a beginner-friendly 3D creation platform, which can generate a public URL or a .gltf file for the 3D objects. Is there any way that I can show these objects in my app using SwiftUI while keeping all of their features?
Posted
by _ismyname.
Last updated
.
Post not yet marked as solved
2 Replies
499 Views
Hi guys, in one of the WWDC21 videos I saw this spectacular image of a green fluid over a sofa 🛋. I can't find many tutorials online; how could this be done? I was looking into SceneKit's particle system, but it doesn't seem to do exactly the same thing (it's more for smoke and fire), and this fluid looks more 3D. What should I look at to reproduce this fluid? Thanks, looking for some help to start my research.
Posted
by dm1886.
Last updated
.
Post not yet marked as solved
0 Replies
140 Views
Our problem: when we import any model in the Collada (.dae) format into SceneKit and try to use the blend shapes (named geometry morphers in Xcode), the object that has those blend shapes activated changes its scale significantly for each morpher applied; it can grow to many times its size if you apply several morphers at a time. Our testing so far has included:
- export and import a body with blend shapes and a rig (recognizes the morphers but scales the model when they are used; it is bugged)
- export and import a body with blend shapes without a rig (doesn't recognize the morphers)
- export and import a basic cube, one from Maya and one from 3ds Max, with 2 blend shapes (the cube gets scaled as well when we apply the morphers)
In all instances the blend shapes scale the model, but only in SceneKit; there is no issue between the 3D packages themselves. The models work as intended in Maya, 3ds Max and Blender. Screenshots with the cube: https://drive.google.com/drive/folders/1y_iqkE0zW7GryXxVcIAf6OWXydzfH1zW?usp=sharing — you can see how the cube grows bigger with each morpher applied; the same thing is happening to our characters.
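A cheap experiment worth trying (an assumption, not a confirmed fix): SCNMorpher can interpret targets either as absolute shapes (.normalized) or as deltas added on top of the base mesh (.additive). If the DAE exporter and SceneKit disagree about which convention the targets use, every active target effectively re-adds the base geometry and the node appears to grow. A sketch that switches the mode on every morpher in a scene:

// A hedged sketch: toggle the calculation mode on all morphers and see whether
// the growth disappears. Try .additive if the default .normalized misbehaves.
import SceneKit

func setMorpherMode(_ mode: SCNMorpherCalculationMode, in scene: SCNScene) {
    scene.rootNode.enumerateHierarchy { node, _ in
        node.morpher?.calculationMode = mode
    }
}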
Posted
by saraMkt.
Last updated
.
Post not yet marked as solved
1 Reply
189 Views
Hi, I'm relatively new to Swift and to ARKit, SceneKit and the like, which you'd use for typical AR applications. TL;DR: I have an app with a reset button that pauses the sceneView, deletes all child nodes of the root node of sceneView.scene, and re-runs my overridden version of viewWillAppear. However, after this reset, my app refuses to detect my anchors (.arobject scan assets) in the scene. Any help would be much appreciated. I did read up on the documentation and relevant forum & SO posts; my code and a list of things I tried are below.

My overridden viewWillAppear:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Models occluded by people are shown correctly
    if #available(iOS 13.0, *) {
        print("Segmentation with depth working!")
        ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth)
    } else {
        // Fallback on earlier versions
    }

    guard let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "computer", bundle: nil) else {
        fatalError("Missing expected asset catalog resources.")
    }
    configuration.detectionObjects = referenceObjects
    print(configuration.detectionObjects)
    sceneView.session.run(configuration)
}

My custom reset function:

@IBAction func reset(_ sender: UIButton!) {
    // Pause the scene view
    sceneView.session.pause()

    // Remove the button that redirects to a link
    DispatchQueue.main.async {
        self.sceneView.subviews[1].isHidden = true
    }

    // Remove all child nodes
    sceneView.scene.rootNode.enumerateChildNodes { (node, stop) in
        node.removeFromParentNode()
    }

    viewWillAppear(true)
}

Things I tried:
- put everything into viewWillAppear and just trigger viewWillAppear on button press
- call the custom reset function shown above on button press
- call the overridden viewDidLoad (yes, I know this is *****, but I wanted to see if it fixed anything)
Currently, my code is using the second setup from the list above.
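A minimal sketch of an alternative reset (configuration and sceneView are the poster's existing properties; the key assumption is that re-running the session with reset options, instead of pausing and re-entering viewWillAppear, is what lets object detection start cleanly again):

// A hedged sketch: keep the configuration (detectionObjects already set in
// viewWillAppear) and restart the same session with explicit reset options.
@IBAction func reset(_ sender: UIButton!) {
    // Hide the link button, as in the original reset.
    sceneView.subviews[1].isHidden = true

    // Remove all child nodes added for previous detections.
    sceneView.scene.rootNode.enumerateChildNodes { node, _ in
        node.removeFromParentNode()
    }

    // Clears existing anchors and restarts tracking, so reference objects that
    // were already detected can be detected again.
    sceneView.session.run(configuration,
                          options: [.resetTracking, .removeExistingAnchors])
}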
Posted
by luorix.
Last updated
.
Post not yet marked as solved
1 Reply
242 Views
I have an ARKit ARSCNView providing an AR experience with the default rear-facing camera. I would like to use other cameras, such as the ultra-wide camera, and if possible provide fluent zoom. Is that possible? I did some research and found that you loop over the supported video formats of the AR configuration (e.g. ARWorldTrackingConfiguration.supportedVideoFormats). This gives an array of a few video formats with different fps values, aspect ratios, and so on, but there is always only AVCaptureDeviceTypeBuiltInWideAngleCamera; ultra-wide or telephoto does not seem to be included. How do we get the AR experience with a camera other than the default (wide) one?
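For completeness, a sketch of how format selection looks on iOS 14.5 and later, where ARConfiguration.VideoFormat exposes captureDeviceType. If no ultra-wide format appears in supportedVideoFormats on a given device and OS version, ARKit simply does not offer that camera for world tracking, which matches the observation above:

// A hedged sketch (iOS 14.5+): pick an ultra-wide video format if one is offered.
import ARKit

func selectUltraWideFormatIfAvailable(for configuration: ARWorldTrackingConfiguration) {
    if let ultraWide = ARWorldTrackingConfiguration.supportedVideoFormats.first(where: {
        $0.captureDeviceType == .builtInUltraWideCamera
    }) {
        configuration.videoFormat = ultraWide
    }
}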
Posted
by tmsta.
Last updated
.
Post not yet marked as solved
1 Reply
201 Views
I want to control a RealityKit animation with a UISlider. It works correctly with ARView's cameraMode set to .ar and automaticallyConfigureSession set to true. But when setting cameraMode to .nonAR and automaticallyConfigureSession to false, the UISlider starts to lag and flicker. In both cases the ARView is rendering at 60 fps. I don't really understand what's happening, but I think the ARView in nonAR mode is blocking the main thread. Is this a bug, or am I missing something?
Posted
by NixD.
Last updated
.