I want to know whether the depth map and the RGB image are perfectly aligned (do both have the same principal point)? If yes, how is the depth map created?
The depth map on the iPhone 12 has a resolution of 256x192, as opposed to the RGB image (1920x1440). I am interested in exact pixel-wise depth. Is it possible to get a raw depth map at 1920x1440 resolution?
How is the depth map created at 256x192 resolution? Behind the scenes, does the pipeline capture it at 1920x1440 resolution and then resize it to 256x192?
I have so many questions because no intrinsic, extrinsic, or calibration data is provided for the LiDAR.
I would greatly appreciate it if someone could explain the steps from a computer-vision perspective.
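For reference, here is a minimal sketch (my own reading of the ARKit API, not an official answer) of how the LiDAR depth map and the camera intrinsics can be read from an ARFrame and rescaled to the 256x192 depth resolution:

import ARKit

// Minimal sketch; assumes a running ARSession whose configuration enables
// frameSemantics = [.sceneDepth].
func inspectDepth(of frame: ARFrame) {
    // LiDAR depth map (256x192, Float32 meters), aligned with the wide camera image.
    guard let depthMap = frame.sceneDepth?.depthMap else { return }
    let depthWidth = CVPixelBufferGetWidth(depthMap)    // 256
    let depthHeight = CVPixelBufferGetHeight(depthMap)  // 192

    // Intrinsics are reported for the full-resolution capturedImage (e.g. 1920x1440),
    // so scale them before using them on the smaller depth map.
    let K = frame.camera.intrinsics
    let imageSize = frame.camera.imageResolution
    let sx = Float(depthWidth) / Float(imageSize.width)
    let sy = Float(depthHeight) / Float(imageSize.height)
    let fx = K[0][0] * sx, fy = K[1][1] * sy
    let cx = K[2][0] * sx, cy = K[2][1] * sy
    print("depth \(depthWidth)x\(depthHeight), scaled intrinsics fx=\(fx) fy=\(fy) cx=\(cx) cy=\(cy)")
}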
Many Thanks
Hi,
since iOS 15 I've repeatedly noticed the console warning »ARSessionDelegate is retaining X ARFrames. This can lead to future camera frames being dropped« even for rather simple projects using RealityKit and ARKit. Could someone from the ARKit team please elaborate on what causes this warning and what can be done to avoid it?
If I remember correctly, I didn't even assign an ARSessionDelegate.
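For anyone hitting the same warning, here is a minimal sketch of the pattern that usually avoids it (this is an assumption about the common cause, not an explanation from the ARKit team): don't let ARFrame objects outlive the delegate callback.

import ARKit

final class FrameHandler: NSObject, ARSessionDelegate {
    private var lastCameraTransform: simd_float4x4?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Copy the lightweight values you need; the ARFrame itself retains the
        // captured camera buffer, and holding on to it starves the camera's buffer pool.
        lastCameraTransform = frame.camera.transform

        // Avoid: self.storedFrame = frame
        // Avoid: DispatchQueue.global().async { expensiveWork(on: frame) }
    }
}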
Thank you!
Hi,
I'm struggling to find a way to get a simple unlit material working with Reality Composer.
With all the standard objects, it doesn't seem like we have the option of an unlit material, which seems really odd to me... This is kind of a basic feature.
The only way I was able to get an unlit material inside Reality Converter was to import a mesh without a material, which gave me a white unlit material.
I have seen that you can set an unlit material using RealityKit, but from what I saw, RealityKit builds an app in the end, right? Honestly, I'm not sure what you get when creating an AR app using RealityKit... What I'm looking for is a simple .reality file to be displayed on the web.
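For completeness, the RealityKit route mentioned above looks roughly like this (a minimal sketch in code; it produces an entity at runtime, not a .reality file you can host on the web):

import RealityKit
import UIKit

// Build an entity whose material ignores scene lighting.
func makeUnlitBox() -> ModelEntity {
    let material = UnlitMaterial(color: .white)
    return ModelEntity(mesh: .generateBox(size: 0.1), materials: [material])
}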
I'm creating a custom scanning solution for iOS and using the RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding that the depth data I'm sending is being ignored and the model isn't being built to scale. The documentation is a little light on how to format the depth, so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly?
https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing
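For context, a minimal sketch of how per-sample depth is expected to be attached (an assumption based on the documentation, not a verified answer): the depth buffer is kCVPixelFormatType_DepthFloat32 with distances in meters, in the same orientation as the color image.

import RealityKit
import CoreVideo

// Attach a depth map to a custom PhotogrammetrySample; 'depth' is assumed to be
// a kCVPixelFormatType_DepthFloat32 buffer with distances in meters.
func makeSample(id: Int, image: CVPixelBuffer, depth: CVPixelBuffer?) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: image)
    sample.depthDataMap = depth   // nil means no depth for this shot
    return sample
}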
Thank you!
Hello!
I am having trouble calculating accurate distances in the real world using the camera's returned intrinsic matrix and pixel coordinates/depths captured from the iPhone's LiDAR. For example, in the image below, I set a mug 0.5m from the phone. The mug is 8.5cm wide. The intrinsic matrix returned from the phone's AVCameraCalibrationData class has focalx = 1464.9269, focaly = 1464.9269, cx = 960.94916, and cy = 686.3547. Selecting the two pixel locations denoted in the image below, I calculated each one's xyz coordinates using the formula:
x = d * (u - cx) / focalx
y = d * (v - cy) / focaly
z = d
where d is the depth read from the corresponding pixel in the depth map - I've verified that both depths were 0.5 m. I then calculate the distance between the two points to get the mug width. This gives me a calculated width of 0.0357 m, or about 3.5 cm, instead of the 8.5 cm I was expecting. What could be accounting for this discrepancy?
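A minimal sketch of that calculation follows (the pixel coordinates here are hypothetical, chosen only so the two points are about 249 px apart; one thing worth checking is that u, v, the depth map, and the intrinsics all refer to the same image resolution and orientation):

import simd

// Pinhole back-projection of two pixels at the same depth, then the distance between them.
func unproject(u: Float, v: Float, depth d: Float,
               fx: Float, fy: Float, cx: Float, cy: Float) -> SIMD3<Float> {
    SIMD3<Float>(d * (u - cx) / fx, d * (v - cy) / fy, d)
}

// Hypothetical pixel coordinates ~249 px apart on the 1920x1440 image.
let p1 = unproject(u: 835, v: 690, depth: 0.5,
                   fx: 1464.9269, fy: 1464.9269, cx: 960.94916, cy: 686.3547)
let p2 = unproject(u: 1084, v: 690, depth: 0.5,
                   fx: 1464.9269, fy: 1464.9269, cx: 960.94916, cy: 686.3547)
let width = simd_distance(p1, p2)   // ≈ 0.085 m, i.e. 8.5 cm at 0.5 m depth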
Thank you so much for your help!
I have an .arobject file that's already been tested and works perfectly in Reality Composer as an anchor. But for whatever reason, when I try it either as an AR Resource Group in my asset catalog or by loading it directly from the URL, it always fails (returns nil). I've double-checked all the file/group names and they seem fine; I couldn't find the error, it's just always nil.
This is my code:

var referenceObject: ARReferenceObject?

// First, try the AR Resource Group from the asset catalog.
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "TestAR", bundle: Bundle.main) {
    referenceObject = referenceObjects[referenceObjects.startIndex]
}

if let referenceObject = referenceObject {
    delegate.didFinishScan(referenceObject, false)
} else {
    // Fall back to loading the .arobject file directly from the bundle.
    do {
        if let url = Bundle.main.url(forResource: "Dragonball1", withExtension: "arobject") {
            referenceObject = try ARReferenceObject(archiveURL: url)
        }
    } catch let myError {
        let error = myError as NSError
        print("try \(error.code)")
    }
}
Any idea? Thanks
I'd like to capture the room with materials obtained through the camera while scanning with RoomPlan.
Is there any way to capture the room's surface materials and render the objects with them while capturing the room geometry using RoomPlan?
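For reference, a minimal sketch of one possible approach (an assumption, not a confirmed RoomPlan feature): RoomPlan itself doesn't return textured surfaces, but the capture session exposes its underlying ARSession, so camera frames can be collected during the scan and projected onto the exported geometry afterwards.

import RoomPlan
import ARKit

final class TexturingCaptureController: NSObject, ARSessionDelegate {
    let captureSession = RoomCaptureSession()
    private var keyFrames: [(image: CVPixelBuffer, camera: ARCamera)] = []
    private var lastKeptTime: TimeInterval = 0

    func start() {
        captureSession.arSession.delegate = self
        captureSession.run(configuration: RoomCaptureSession.Configuration())
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Keep roughly one frame per second (image + pose/intrinsics) for texturing
        // the RoomPlan mesh after the scan finishes. A real implementation should
        // copy or convert the pixel buffer rather than retain it.
        guard frame.timestamp - lastKeptTime > 1.0 else { return }
        lastKeptTime = frame.timestamp
        keyFrames.append((frame.capturedImage, frame.camera))
    }
}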
Hi, I am not sure what is going on...
I have been working on this model for a while in Reality Composer and had no problem testing it that way... it always worked out perfectly.
So I imported the file into a brand new Xcode project... I created a new AR app and used SwiftUI.
I actually did it twice...
I also tested the version Apple provides with the box. In Apple's version, the app appears, but the part where it tries to detect planes never showed up. So I am confused.
I found a question that mentions the error messages I am getting, but I am not sure how to get around it:
https://developer.apple.com/forums/thread/691882
//
//  ContentView.swift
//  AppToTest-02-14-23
//
//  Created by M on 2/14/23.
//

import SwiftUI
import RealityKit

struct ContentView: View {
    var body: some View {
        return ARViewContainer().edgesIgnoringSafeArea(.all)
    }
}

struct ARViewContainer: UIViewRepresentable {
    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)

        // Load the "Box" scene from the "Experience" Reality File
        //let boxAnchor = try! Experience.loadBox()
        let anchor = try! MyAppToTest.loadFirstScene()

        // Add the box anchor to the scene
        arView.scene.anchors.append(anchor)

        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {}
}

#if DEBUG
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView()
    }
}
#endif
This is what I get in the console:
2023-02-14 17:14:53.630477-0500 AppToTest-02-14-23[21446:1307215] Metal GPU Frame Capture Enabled
2023-02-14 17:14:53.631192-0500 AppToTest-02-14-23[21446:1307215] Metal API Validation Enabled
2023-02-14 17:14:54.531766-0500 AppToTest-02-14-23[21446:1307215] [AssetTypes] Registering library (/System/Library/PrivateFrameworks/CoreRE.framework/default.metallib) that already exists in shader manager. Library will be overwritten.
2023-02-14 17:14:54.716866-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/suFeatheringCreateMergedOcclusionMask.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.743580-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arKitPassthrough.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.744961-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/drPostAndComposition.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.745988-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arSegmentationComposite.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.747245-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute0.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.748750-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute1.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.749140-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute2.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761189-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute3.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761611-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute4.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.761983-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute5.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.762604-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute6.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.763575-0500 AppToTest-02-14-23[21446:1307215] [Assets] Resolving material name 'engine:BuiltinRenderGraphResources/AR/arInPlacePostProcessCombinedPermute7.rematerial' as an asset path -- this usage is deprecated; instead provide a valid bundle
2023-02-14 17:14:54.764859-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 18: Json Deserialization; unknown member 'EnableARProbes' - skipping.
2023-02-14 17:14:54.764902-0500 AppToTest-02-14-23[21446:1307215] [Foundation.Serialization] Json Parse Error line 20: Json Deserialization; unknown member 'EnableGuidedFilterOcclusion' - skipping.
2023-02-14 17:14:55.531748-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534559-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534633-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534680-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534733-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534777-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534825-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534871-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:55.534955-0500 AppToTest-02-14-23[21446:1307215] throwing -10878
2023-02-14 17:14:56.207438-0500 AppToTest-02-14-23[21446:1307383] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [2]
2023-02-14 17:17:15.741931-0500 AppToTest-02-14-23[21446:1307414] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
2023-02-14 17:22:07.075990-0500 AppToTest-02-14-23[21446:1308137] [Technique] ARWorldTrackingTechnique <0x1149cd900>: World tracking performance is being affected by resource constraints [1]
I have a UIKit app and would like to provide a spatial experience when it runs on visionOS.
I added visionOS support, but I'm not sure how to present an immersive view. All the tutorials are in SwiftUI, but my app is in UIKit.
This is an example from a SwiftUI project, but how do I declare this ImmersiveView in UIKit?
struct VirtualApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }.windowStyle(.volumetric)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
        }
    }
}
And in UIKit, how do I make the call to open the ImmersiveView?
Hi community,
I have a pair of stereo images, one for each eye. How should I render them on visionOS?
I know that for 3D videos, AVPlayerViewController can display them in fullscreen mode, but I couldn't find any docs relating to 3D stereo images.
I guess my question can be put in a more general way: is there any method by which we can render different content for each eye? This could also be helpful to someone who has sight in only one eye.
How do you force quit apps on the Vision Pro simulator?
I've read online that you can double-press the Digital Crown on a real Vision Pro device, but there is no such button on the simulator. So far I've tried:
Pressing the home button twice (⇧⌘H)
Pressing the Siri button twice (⌥⇧⌘H)
Neither of them works.
I'm on Xcode 15.0 beta 5 (15A5209g) & visionOS 1.0 beta 2 (21N5207e)
I’d like to use ARKit world tracking and display both the back camera feed and the front camera feed, using the front feed as a PiP. This would work great for an internet-streaming use case.
However, it’s impossible. As soon as ARKit is told to use one mode, the camera for the other side freezes/doesn’t work. This page also says you have to pick one camera to show: https://developer.apple.com/documentation/arkit/arkit_in_ios/choosing_which_camera_feed_to_augment?language=objc
A question to the developers: why is this limitation in place? Are there any workarounds for the use case of ARKit world tracking + displaying the back camera feed + displaying the front camera feed as an overlay?
It’s possible to do this with plain camera initialization without ARKit. (There’s an official example.) With ARKit, it no longer works.
It’s strange that I cannot access the front feed via one of the other frameworks, but I guess that ARKit blocks that.
Seeing a flurry of these in Xcode 15 Beta 8 while debugging an ARKit app
<<<< AVPointCloudData >>>> Fig assert: "_dataBuffer" at bail (AVPointCloudData.m:217) - (err=0)
Where can I find sample videos, sample encoding apps, viewing apps, etc.?
I see specs, high-level explanations, etc., but I'm not finding any samples, command lines, or app documentation explaining how to make and view these files.
Thank you; I'm looking forward to promoting a spatial-video-rich future.
Hello,
I am developing a visionOS-based application that uses the various data providers like Image Tracking, Plane Detection, and Scene Reconstruction, but these are not supported on the visionOS Simulator. What is the workaround for this issue?
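One common mitigation (a sketch under the assumption that the providers simply report themselves as unsupported in the simulator, not an official workaround) is to check support at runtime and fall back to mock data so the rest of the app can still be exercised:

import ARKit

func startTracking(in session: ARKitSession) async throws {
    if PlaneDetectionProvider.isSupported && SceneReconstructionProvider.isSupported {
        let planes = PlaneDetectionProvider(alignments: [.horizontal, .vertical])
        let meshes = SceneReconstructionProvider()
        try await session.run([planes, meshes])
    } else {
        // Simulator (or unsupported device): drive the app with mock anchors instead.
        print("Data providers are unsupported here; falling back to mock data")
    }
}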
We are attempting to update the texture on a node. The code below works correctly when we use a color, but it encounters issues when we attempt to use an image. The image is available in the bundle, and it displays correctly in other parts of our application. This texture is being applied to both the floor and the wall. Please assist us with this issue.
for obj in Floor_grp[0].childNodes {
    let node = obj.flattenedClone()
    node.transform = obj.transform

    let imageMaterial = SCNMaterial()
    node.geometry?.materials = [imageMaterial]
    node.geometry?.firstMaterial?.diffuse.contents = UIColor.brown

    obj.removeFromParentNode()
    Floor_grp[0].addChildNode(node)
}
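For comparison, a minimal sketch of applying an image instead of a color (an assumption about the intended usage, not a confirmed fix; the function and asset name are placeholders): diffuse.contents accepts a UIImage, and large surfaces usually also need wrap modes and a contentsTransform so the texture tiles instead of stretching.

import SceneKit
import UIKit

func applyImageTexture(to node: SCNNode, imageNamed name: String) {
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: name)                  // texture from the app bundle
    material.diffuse.wrapS = .repeat
    material.diffuse.wrapT = .repeat
    material.diffuse.contentsTransform = SCNMatrix4MakeScale(4, 4, 1) // tile 4x4 (illustrative)
    node.geometry?.materials = [material]
}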
What would be the pros and cons of using Unreal Engine on Apple Vision Pro?
What issues will Unreal face on Apple Vision Pro in the future?
What specifications would be required for Unreal to work on Vision Pro?
Hello all -
I'm experiencing a shading error when I have two UnlitSurface shaders using images for color and opacity. When the shaders are applied to two mesh planes, one placed in front of the other, the shader in front renders, but its plane mesh masks out and doesn't render what is behind it.
Basically - it looks like the opacity map on the shader in front is creating a 'mask'.
I've attached some images here to help explain.
Has anyone experienced this error? And how can I go about fixing it? Thanks!
How do we author a Reality File like the ones with animations under Examples at https://developer.apple.com/augmented-reality/quick-look/ ?
For example, "The Hab" : https://developer.apple.com/augmented-reality/quick-look/models/hab/hab_en.reality
Tapping on various buttons in this experience triggers various complex animations. I don't see any way to accomplish this in Reality Composer.
And I don't see any way to export/compile to a "reality file" from within Xcode.
How can I use multiple animations within a single GLTF file?
How can I set up multiple "tap targets" on a single object, where each one triggers a different action?
How do we author something similar? What tools do we use?
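For reference, a rough sketch of one way to approximate this behavior in RealityKit code rather than in an authoring tool (the asset name is a placeholder, and this is an assumption about one possible approach, not how the Quick Look examples were built): load a USDZ, map taps on entities to actions, and play animations from availableAnimations.

import UIKit
import RealityKit

final class TapTargetViewController: UIViewController {
    let arView = ARView(frame: .zero)

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // "hab" is a placeholder for a USDZ model bundled with the app.
        if let model = try? Entity.loadModel(named: "hab") {
            let anchor = AnchorEntity(plane: .horizontal)
            anchor.addChild(model)
            arView.scene.anchors.append(anchor)
        }

        let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
        arView.addGestureRecognizer(tap)
    }

    @objc private func handleTap(_ gesture: UITapGestureRecognizer) {
        guard let entity = arView.entity(at: gesture.location(in: arView)) else { return }

        // Each named entity acts as its own "tap target"; here we simply play
        // that entity's first available animation.
        if let animation = entity.availableAnimations.first {
            entity.playAnimation(animation, transitionDuration: 0.25, startsPaused: false)
        }
    }
}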
Thanks