Post not yet marked as solved
When you open a USDZ file in the SceneKit editor integrated into Xcode, you can display Identity and Transforms information in the Node inspector. I want to display this same information in RealityKit or SceneKit. What should I do?
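One way to get at the same data programmatically, sketched below under the assumption that the model loads with RealityKit's Entity API (the model name and function name are illustrative, not from the original post): every Entity exposes a name and a Transform, so you can walk the hierarchy and print what Xcode's inspector shows.

```swift
import RealityKit

// Walk an entity tree and print each node's identity and transform —
// roughly the data Xcode's SceneKit editor shows in its Node inspector.
func printHierarchy(_ entity: Entity, depth: Int = 0) {
    let indent = String(repeating: "  ", count: depth)
    let t = entity.transform
    print("\(indent)name: \(entity.name)")
    print("\(indent)  position: \(t.translation)")
    print("\(indent)  rotation: \(t.rotation)")
    print("\(indent)  scale: \(t.scale)")
    for child in entity.children {
        printHierarchy(child, depth: depth + 1)
    }
}

// Hypothetical usage:
// let model = try Entity.load(named: "MyModel")
// printHierarchy(model)
```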
Post not yet marked as solved
I am trying to build a point cloud with ARKit's depth map. When I convert the depth map to a PNG file with the following code, the depth map loses its Z-axis data. Any idea how to solve this problem?
This is my code for storing the depth map:
let depthImage = frame.sceneDepth!.depthMap
let ciImageDepth = CIImage(cvPixelBuffer: depthImage)
let contextDepth = CIContext(options: nil)
// Note: this line previously called `context`, which doesn't exist in this scope.
let cgImageDepth: CGImage = contextDepth.createCGImage(ciImageDepth, from: ciImageDepth.extent)!
let width = cgImageDepth.width
let height = cgImageDepth.height
let colorSpace = CGColorSpaceCreateDeviceGray()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder16Little.rawValue)
let depthContext = CGContext(data: nil, width: width, height: height, bitsPerComponent: 16, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)!
let rect = CGRect(x: 0, y: 0, width: width, height: height)
depthContext.draw(cgImageDepth, in: rect)
let outputImage = depthContext.makeImage()!
let newImage = UIImage(cgImage: outputImage, scale: 1, orientation: .up)
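A likely cause of the lost Z data: the sceneDepth buffer holds 32-bit float depths in metres, and rendering it into a 16-bit grayscale image rescales and quantizes those values. A sketch that reads the raw metre values instead (the function name is an assumption):

```swift
import ARKit

// Read the raw Float32 depth values (metres) straight out of the pixel
// buffer, instead of round-tripping through a grayscale image.
func depthValues(from depthMap: CVPixelBuffer) -> [Float] {
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

    let width = CVPixelBufferGetWidth(depthMap)
    let height = CVPixelBufferGetHeight(depthMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let base = CVPixelBufferGetBaseAddress(depthMap)!

    var values = [Float]()
    values.reserveCapacity(width * height)
    for y in 0..<height {
        // Rows may be padded, so step by bytes-per-row, not by width.
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            values.append(row[x])  // depth in metres
        }
    }
    return values
}
```

If you need the true Z values for a point cloud, storing these floats (e.g. as binary Data) preserves them, whereas a PNG is only useful as a visualization.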
Post not yet marked as solved
Hi everyone, is it possible to use the ARKit Replay Data option for XCUITests? If not, this would be a great feature for automation.
Thanks!
Post not yet marked as solved
So I have an ARView using RealityKit. I am reusing the ARView. I have an Entity with animations (stored in a .usdz file). I play the animation with the following code:
hummingbird = try! Entity.load(named: "bird")
for animation in hummingbird.availableAnimations {
    hummingbird.playAnimation(animation.repeat(duration: 120.0))
}
However, I noticed there is a memory leak. Using Instruments, I found it was at the playAnimation line.
I have no clue how to fix this. At the end of the ARView's lifecycle I do this:
hummingbird.stopAllAnimations(recursive: true)
hummingbird = nil
I thought that would be enough, but it isn't.
In the image there are two instances; that is from running the same ARView twice. Basically my setup is:
startVC -> ARView -> backToStartVC -> backToSameARView (with new configuration), and so on.
Any ideas would be great. If you have any questions or need clarification, please ask.
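One teardown approach worth trying, sketched below (the function and property names are assumptions, not from the post): playAnimation returns an AnimationPlaybackController, so keeping those controllers lets you stop each animation explicitly and detach the entity before releasing it.

```swift
import RealityKit

// Keep the playback controllers so the animations can be stopped
// explicitly before the entity is released.
var birdControllers: [AnimationPlaybackController] = []

func startBird(_ hummingbird: Entity) {
    for animation in hummingbird.availableAnimations {
        birdControllers.append(
            hummingbird.playAnimation(animation.repeat(duration: 120.0))
        )
    }
}

func tearDownBird(_ hummingbird: Entity) {
    birdControllers.forEach { $0.stop() }
    birdControllers.removeAll()
    hummingbird.stopAllAnimations(recursive: true)
    // Detach from the scene so the ARView no longer retains the entity.
    hummingbird.removeFromParent()
}
```

This is a sketch, not a confirmed fix for the leak; if the instances still persist, Instruments' retain-count tracking on the two ARView instances would show what else is holding them.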
Post not yet marked as solved
In an ARKit game using SceneKit, I’m detecting collisions between the real user and walls that I build out of SCNBoxes. I have a cylinder that follows the device’s pointOfView to accomplish that.
The node of boxes is set as dynamic and the cylinder is set as kinematic, that way when the user moves and collides with the boxes, the boxes will move along.
All of the above works exactly right.
I use both the physicsWorld didBegin and didEnd contact delegate methods, because I need to know both when collisions start and when they end in order to disable and re-enable a certain button.
So I have standard code like the following.
In didBegin contact I hide the button:
func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    if contact.nodeA.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue ||
       contact.nodeB.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue {
        freezeButton.isHidden = true
    }
}
Then in didEnd contact I re-enable the button:
func physicsWorld(_ world: SCNPhysicsWorld, didEnd contact: SCNPhysicsContact) {
    freezeButton.isHidden = false
}
Most of the time this works fine. However, sometimes the didBegin method runs last, even though I have clearly moved away and there are no more collisions, so my app still thinks we're in a collision state and my button isn't restored.
I have SCNDebugOptions.showPhysicsShapes as debugOptions so I can clearly see that the cylinder is not touching the walls at all.
Yet, through debugging the code I can see that sometimes didBegin was run last.
I then need to cause a new collision and walk away again to have the didEnd function called and get my button back. But I can't expect the user to perform such a workaround; it makes my game feel buggy.
I'm not using the didUpdate contact call as I don't need it.
How come didBegin is called last when the collision has clearly ended? What could cause this to happen, and what could I do to fix it?
I am thinking of a workaround that checks whether some time has passed since the last didBegin call, which would indicate it is a stale collision.
Or is there something else I could check to see whether a collision is still in progress? AFAIK, SceneKit doesn't have anything like SpriteKit's allContactedBodies.
Is there something else I could use to check whether two bodies are still in contact?
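One common pattern for this, sketched below using the post's own freezeButton and BodyType names (the contactingWalls set is an assumption): track the set of wall nodes currently in contact yourself, and drive the button from whether that set is empty. This tolerates out-of-order begin/end pairs for different nodes, though it still assumes begin and end for the *same* node arrive in order.

```swift
import SceneKit

// Set of wall nodes currently in contact with the player's cylinder.
var contactingWalls = Set<SCNNode>()

func physicsWorld(_ world: SCNPhysicsWorld, didBegin contact: SCNPhysicsContact) {
    for node in [contact.nodeA, contact.nodeB]
    where node.physicsBody?.categoryBitMask == BodyType.wallsCategory.rawValue {
        contactingWalls.insert(node)
    }
    // Button is hidden whenever at least one wall is in contact.
    freezeButton.isHidden = !contactingWalls.isEmpty
}

func physicsWorld(_ world: SCNPhysicsWorld, didEnd contact: SCNPhysicsContact) {
    for node in [contact.nodeA, contact.nodeB] {
        contactingWalls.remove(node)
    }
    freezeButton.isHidden = !contactingWalls.isEmpty
}
```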
Thanks!
Post not yet marked as solved
Hey guys and gals,
we're developing an AR app (Unity 3D, ARKit) for a business trade show. We have a 2x2 m low-poly world physically built on a table, surrounded by three 2021 iPad Pros. The table is scanned as an object-tracking reference. We project the same model onto it in AR and display additional information.
Now to the problem:
We installed the table at the trade show, and the 3D model has an offset on the horizontal position axis. It's not affecting just a single iPad Pro; all three of them are off.
Additional information:
We only had the table top scanned as a reference, without the stand; the stand was sent directly to the trade show. To begin with, we only used a 25x25 cm piece from an early sample of the table as the scanned reference.
My colleague thinks both trackers, the 2x2 m and the 25x25 cm one, could still be active and countering each other, resulting in the offset.
When we used the 25x25 cm sample only, we had the problem that the whole thing sometimes rotated 90 degrees to stand vertically in front of us.
Does anybody have experience using object tracking and tackling offsets in position and rotation?
Images: the 25x25 cm reference; how we scanned it; the final form.
Please let us know if you can think of a probable solution and if you would like us to provide additional specific information.
Post not yet marked as solved
Hi,
I looked through the whole internet searching for a way to compress USDZ files for AR (Quick Look). I see there is an old blog post about Draco compression for USD: https://opensource.googleblog.com/2019/11/google-and-pixar-add-draco-compression.html
Does anyone know whether Quick Look supports USDZ files with Draco compression? I haven't found any mention of it in the official documentation. Or maybe there is some other way?
Post not yet marked as solved
I want to add a small banner to my AR model when it is viewed in Quick Look in my app. I haven't been able to find much information on this; I did find one resource that explains how to add a banner when the model is hosted on a website. My model is stored locally on the device, so how would I add it?
Post not yet marked as solved
I'm trying to build an ARKit-based app that requires detecting roads and placing virtual content 30 feet away from the camera. However, horizontal plane detection stops adding anchors after about 10 feet. Is there a workaround for this problem?
Post not yet marked as solved
I've been programming a long time but am still getting used to Swift.
What I need to do is add a has_been_updated flag to all my ARAnchors that can be set by the didUpdate delegate and later cleared by something else.
I'm not sure of the correct way to do this, whether it can be done with extensions, and how to apply it to all ARAnchors.
The application: I am sending the ARKit anchor data over UDP to an external (non-Apple) device on another computer. Because ARKit updates anchor data very frequently, I want to mark the data as "has been updated" when I get a didUpdate event, and then every few seconds iterate over all the anchors and only send the ones that have been updated. This reduces the data a lot, because in the course of a second something like a mesh anchor might update 10 times.
What would be the appropriate way to add this flag to the ARAnchor class in Swift?
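Since Swift extensions can't add stored properties to a class like ARAnchor, one common alternative is to keep the flag outside the anchor, keyed by its identifier. A sketch under that assumption (the tracker type and method names are illustrative):

```swift
import ARKit

// Tracks which anchors have been updated since the last send,
// keyed by each anchor's stable UUID identifier.
final class AnchorUpdateTracker {
    private var dirtyIDs = Set<UUID>()

    // Call from session(_:didUpdate:) for each updated anchor.
    func markUpdated(_ anchor: ARAnchor) {
        dirtyIDs.insert(anchor.identifier)
    }

    // Call every few seconds: returns only the anchors that changed
    // since the last flush, and clears the flags.
    func flush(currentAnchors: [ARAnchor]) -> [ARAnchor] {
        let updated = currentAnchors.filter { dirtyIDs.contains($0.identifier) }
        dirtyIDs.removeAll()
        return updated
    }
}
```

This keeps ARAnchor itself untouched and works uniformly for all anchor subclasses (mesh anchors, plane anchors, etc.), since they all carry an identifier.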
Post not yet marked as solved
I am thinking of a web application to scan an object using the LiDAR sensor of an iPhone. It would be built with a JavaScript-based framework, preferably ReactJS.
We want to use this website in the Safari browser only, for performance reasons. Is there any way to do that?
Post not yet marked as solved
private let boardAnchor: AnchorEntity = AnchorEntity(
    // error: Argument passed to call that takes no arguments
    plane: .horizontal,                // error: Cannot infer contextual base in reference to member 'horizontal'
    classification: [.floor, .table],  // error (x2): Reference to member 'floor' cannot be resolved without a contextual type
    minimumBounds: SIMD2(0.1, 0.1)
)
How can this be fixed in the demo code of the chess game example (capture chess) presented in the WWDC22 AR experience session?
Any help highly appreciated.
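A sketch of one spelling that compiles on an iOS target, where plane anchoring is available (this is an assumption about the build target: those particular errors typically appear when building for a platform or configuration where the ARKit-backed plane initializer doesn't exist). Here the plane parameters go inside an AnchoringComponent.Target value:

```swift
import RealityKit

// Anchor onto a horizontal floor or table plane at least 10 cm square.
let boardAnchor = AnchorEntity(
    .plane(
        .horizontal,
        classification: [.floor, .table],
        minimumBounds: SIMD2<Float>(0.1, 0.1)
    )
)
```

If the same errors persist, checking that the target is iOS (not macOS or a Mac Catalyst configuration without ARKit) would be the first thing to rule out.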
Post not yet marked as solved
I'm having some issues with Reality Composer (the latest v1.5 with the latest Xcode beta) and I just wanted to check if these are known issues.
I have an image of a printed map which I'd like to turn into an AR-based interactive map. Something simple, where I tap a pin on the map, and details about that location move up from beneath the map, then move back out of view. The map, the pin and the location information are all separate flat images, and I'm using a horizontal anchor. Unfortunately, it seems that simple bugs are stopping this from working at all.
Say I set the pin to respond to a Tap. I use the "Move, Rotate, Scale to" action to move a second object (the location info) up, then add a Wait action, then another "Move, Rotate, Scale to" to move the info object back down to its original position. The result: the info object doesn't initially move up as far as it should, and the second "Move" pushes the info object away and out of sight completely.
If I try another approach, using the Show and Hide actions (with "Move from below" and "Move to below" as the motion type), again with a Wait in the middle, it works the first time; but on subsequent taps the info object simply appears with no incoming animation, while the outgoing animation works correctly.
Is it just something wrong with my system, or is this broken? If I don't try to move objects around (i.e. Show/Hide with "No Motion") then I have more luck, but I'm feeling pretty constrained.
Thanks in advance for all help with this.
Post not yet marked as solved
Is the "new lighting" introduced in the WWDC22 session "Explore USD tools and rendering" implemented only in AR Quick Look, or has it become an integral part of the SceneKit and RealityKit rendering engines?
Post not yet marked as solved
If you use SceneKit with ARKit, the AR scene uses the SceneKit renderer.
If you use SCNScene.write() to create a USDZ file and then open that file with AR Quick Look, AR Quick Look renders the AR scene with the RealityKit renderer.
So the renderers in the ARKit-in-app -> USDZ -> AR Quick Look pipeline are not the same and can produce different appearances.
Have you seen similar problems with SceneKit -> AR Quick Look rendering?
I am using such a pipeline with PBR lighting and have observed that the resulting differences in material properties are large. (The geometries are fine.) I have had to compensate by recreating the SCNScene materials with modified properties. The agreement between the app scene and the AR Quick Look scene is greatly improved but unfortunately still not acceptable for critical evaluation of commercial products in interior design.
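For reference, the export step of the pipeline described above can be sketched as follows (the function name is an assumption; USDZ export availability via this API depends on the OS version):

```swift
import SceneKit

// Export a SceneKit scene as USDZ: the file extension on the URL
// selects the output format.
func exportUSDZ(_ scene: SCNScene, to url: URL) -> Bool {
    return scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
}
```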
Post not yet marked as solved
I updated my iPhone 12 Pro to the iOS 16 beta, and the motion capture feature in ARKit seems to have stopped functioning. I tried my own custom app (MoCáp) and the BodyDetection sample code from the Apple developer site, and they both don't work. Anyone have the same issue?
Post not yet marked as solved
What is the correct approach to save the image (e.g. CVPixelBuffer to PNG) obtained after calling the captureHighResolutionFrame method?
"ARKit captures pixel buffers in a full-range planar YCbCr format (also known as YUV) according to the ITU-R 601-4 standard"
Should I change the color space of the image (YCbCr to RGB, using Metal)?
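A minimal sketch (the function name is an assumption): Core Image understands the camera's YCbCr pixel format, so the conversion to RGB happens implicitly when rendering; a manual Metal conversion shouldn't be needed just to save a PNG.

```swift
import CoreImage

// Wrap the YCbCr pixel buffer in a CIImage and let CIContext encode it
// as sRGB PNG data; Core Image handles the color conversion.
func pngData(from pixelBuffer: CVPixelBuffer) -> Data? {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    let context = CIContext()
    guard let colorSpace = CGColorSpace(name: CGColorSpace.sRGB) else { return nil }
    return context.pngRepresentation(of: ciImage,
                                     format: .RGBA8,
                                     colorSpace: colorSpace)
}
```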
Post not yet marked as solved
Hello, I am using YOLOv3 with Vision to classify objects during my AR session. I want to render the bounding boxes of the detected objects in my screen view. Unfortunately, the bounding boxes are placed too far down and have the wrong aspect ratio. Does someone know what the issue might be?
This is how I am currently transforming the bounding boxes.
Assumptions:
The app is in portrait mode
Vision request is performed with centerCrop and orientation .right.
Fix the coordinate origin of vision:
// Flipping a rect between bottom-left and top-left origins needs the
// height subtracted as well; without it every box lands too low by its own height.
let newY = 1 - boundingBox.origin.y - boundingBox.height
let newBox = CGRect(x: boundingBox.origin.x, y: newY, width: boundingBox.width, height: boundingBox.height)
Undo center cropping of Vision:
let imageResolution: CGSize = currentFrame.camera.imageResolution
// Switching height and width because the original image is rotated
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
// Square inside of normalized coordinates.
let roi = CGRect(x: 0, y: 1 - (imageWidth/imageHeight + ((imageHeight-imageWidth) / (imageHeight*2))), width: 1, height: imageWidth / imageHeight)
let newBox = VNImageRectForNormalizedRectUsingRegionOfInterest(boundingBox, Int(imageWidth), Int(imageHeight), roi)
Bring coordinates back to normalized form:
let imageWidth = imageResolution.height
let imageHeight = imageResolution.width
let transformNormalize = CGAffineTransform(scaleX: 1.0 / imageWidth, y: 1.0 / imageHeight)
let newBox = boundingBox.applying(transformNormalize)
Transform to scene view: (I assume the error is here. I found out while debugging that the aspect ratio of the bounding box changes here.)
let viewPort = sceneView.frame.size
let transformFormat = currentFrame.displayTransform(for: .landscapeRight, viewportSize: viewPort)
let newBox = boundingBox.applying(transformFormat)
Scale up to viewport size:
let viewPort = sceneView.frame.size
let transformScale = CGAffineTransform(scaleX: viewPort.width, y: viewPort.height)
let newBox = boundingBox.applying(transformScale)
Thanks in advance for any help!
Post not yet marked as solved
How do I overlay an annotation/detail popup on AR models in Quick Look? This was done with the WWDC trading cards AR model. I haven't been able to find any other info on this. Here is an article with an image of what I am referring to:
https://vrscout.com/news/apple-shows-off-ar-trading-cards-ahead-of-wwdc-2022/
Post not yet marked as solved
How can I get started developing for Apple platforms, and what applications do I need to create my first design?