Is it possible to access the raw LiDAR measurements before the sceneDepth calculation combines the LiDAR measurements with visual data? In low-light environments the LiDAR scanner should still work and provide depth info, but I cannot figure out how to access those pure LiDAR depth measurements. I am currently using:
guard let frame = arView.session.currentFrame,
      let depthData = frame.sceneDepth?.depthMap else {
    print("Depth data is unavailable.")
    return
}
but this is the depth data produced after sensor fusion, and it fails in low-light conditions.
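For completeness, this is the full depth surface I understand the public API to expose (fused depth, a per-pixel confidence map, and a smoothed variant). As far as I can tell there is nothing lower-level than this, but I may be missing it; readDepth(from:) is just a name I made up for illustration:

import ARKit

// Reads everything ARKit publicly exposes for depth on a frame.
// Requires ARWorldTrackingConfiguration.frameSemantics to include
// .sceneDepth (and .smoothedSceneDepth for the smoothed variant).
func readDepth(from session: ARSession) {
    guard let frame = session.currentFrame,
          let sceneDepth = frame.sceneDepth else {
        print("Depth data is unavailable.")
        return
    }
    let depthMap: CVPixelBuffer = sceneDepth.depthMap                // fused depth in metres
    let confidenceMap: CVPixelBuffer? = sceneDepth.confidenceMap     // per-pixel ARConfidenceLevel
    let smoothedDepth: CVPixelBuffer? = frame.smoothedSceneDepth?.depthMap
    print(CVPixelBufferGetWidth(depthMap), confidenceMap != nil, smoothedDepth != nil)
}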
Hi everyone,
I’m working on a project in Reality Composer Pro, and I’ve encountered an issue with vertex animations. Here’s what I’ve done:
I created a model in Blender, which includes two animations:
One that scales the vertices of the model.
Another that moves the model's position.
I imported both animations into Reality Composer Pro, and the position animation works fine, but the vertex scaling animation does not seem to work.
What I’m Trying to Achieve:
I want the vertex scaling animation to play correctly in Reality Composer Pro alongside the movement animation.
Problem:
The position animation works as expected, but the vertex scaling animation does not work when applied in Reality Composer Pro.
I have checked the vertex scaling animation in other software, and it works fine there. The issue seems to be specific to Reality Composer Pro.
Is vertex animation scaling supported in Reality Composer Pro? If so, what might be causing this issue? Any advice or solutions would be greatly appreciated!
After creating a ".plist" file in the app's "Documents" folder, then uninstalling and reinstalling the app, the FileManager.default.fileExists(atPath: folderURL.path()) call still returns true. I checked, and the ".plist" file is not actually present in the app's "Documents" folder. Is this a bug in the visionOS system?
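For reference, this is roughly the check I am doing (an illustrative sketch; "Settings.plist" and the variable names are placeholders, not my real file name):

let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let fileURL = documentsURL.appendingPathComponent("Settings.plist")
// After a fresh uninstall/reinstall I expect this to be false,
// but it returns true even though the file is not there.
let exists = FileManager.default.fileExists(atPath: fileURL.path())
print("plist exists: \(exists)")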
After implementing the method of obtaining video streams discussed at WWDC in my program, I found that the obtained video stream does not include the digital models in the digital space or related content such as the program UI. I would like to ask how to obtain a video stream or frame that contains only the physical world?
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()

func requestWorldSensingCameraAccess() async {
    let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func queryAuthorizationCameraAccess() async {
    let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}
func monitorSessionEvents() async {
    for await event in arKitSession.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            switch newState {
            case .initialized:
                break
            case .running:
                break
            case .paused:
                break
            case .stopped:
                if let error {
                    print("An error occurred: \(error)")
                }
            @unknown default:
                break
            }
        case .authorizationChanged(let type, let status):
            print("Authorization type \(type) changed to \(status)")
        default:
            print("An unknown event occurred \(event)")
        }
    }
}
@MainActor
func processWorldAnchorUpdates() async {
    for await anchorUpdate in worldTracking.anchorUpdates {
        switch anchorUpdate.event {
        case .added:
            // Check whether a persisted object is attached to this added anchor;
            // it could be a world anchor from a previous run of this app.
            // ARKit surfaces all world anchors associated with this app
            // when the world tracking provider starts.
            fallthrough
        case .updated:
            // Keep the position of the placed object in sync with its corresponding
            // world anchor, and hide the object if the anchor isn't tracked.
            break
        case .removed:
            // Remove the placed object if the corresponding world anchor was removed.
            break
        }
    }
}
func arkitRun() async {
    do {
        try await arKitSession.run([cameraFrameProvider, worldTracking])
    } catch {
        return
    }
}

@MainActor
func processDeviceAnchorUpdates() async {
    await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}

@MainActor
func cameraFrameUpdatesBuffer() async {
    guard let cameraFrameUpdates =
            cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
          let cameraFrameUpdates1 =
            cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
        return
    }
    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        self.pixelBuffer = mainCameraSample.pixelBuffer
    }
    for await cameraFrame in cameraFrameUpdates1 {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        if self.pixelBuffer != nil {
            self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!, frame2: mainCameraSample.pixelBuffer, outputSize: CGSize(width: 1920, height: 1080))
        }
    }
}
I am having trouble initializing SharePlay. It works, but we have to leave the game (click the close button) and rejoin it, sometimes several times, for it to establish the connection.
I am also having trouble sharing images over SharePlay with GroupSessionJournal. I am not able to get it to transfer any amount of data, or even to get any indication on the other participants in the SharePlay that an image is being sent. We have looked at all the information we can find online and are not able to establish a connection. I am not sure if I am missing a step or if I am sending the data through the GroupSessionJournal incorrectly.
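For reference, this is the send/receive pattern I understand GroupSessionJournal to use, condensed from the WWDC material. It is only a sketch and may be exactly where my mistake is; groupSession and imageData are placeholder names for my own objects, and in the real app I keep a single journal per session:

import Foundation
import GroupActivities

// Sender side: Data conforms to Transferable, so the jpg bytes can be added directly.
func sendImage<Activity: GroupActivity>(_ imageData: Data,
                                        over groupSession: GroupSession<Activity>) async throws {
    let journal = GroupSessionJournal(session: groupSession)
    _ = try await journal.add(imageData)
}

// Receiver side: every participant (including late joiners) observes the journal
// and loads the attachment contents back out as Data.
func observeImages<Activity: GroupActivity>(in groupSession: GroupSession<Activity>) async {
    let journal = GroupSessionJournal(session: groupSession)
    for await attachments in journal.attachments {
        for attachment in attachments {
            if let data = try? await attachment.load(Data.self) {
                print("Received image data: \(data.count) bytes")
            }
        }
    }
}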
Here are the steps I took to replicate the issue:
FaceTime another person with the app.
Open the app and click the SharePlay button to SharePlay it with the other person.
Establish the SharePlay by making sure that the board states are synchronized across participants. If they are not, click the close button and click open app again to rejoin the SharePlay. (This is one of the bugs that I would like to fix; this is just a workaround we developed to establish the SharePlay. We would like it so that when you click SharePlay and they join the session, it just works.)
Once the SharePlay has been established, change the image by clicking change 1 image.
Select a jpg image.
The image that represents 1 should now be set. If you don't see the image, click on any of the X's in the squares and it will change to the image.
The image should appear for the other participant in the SharePlay. (This does not happen, and it is what we have not been able to figure out how to get working.)
Here are the classes for the example project I created:
Content View
Game Model Class
Activity Manager
Main Starter Class
Hello,
I am currently working on a Unity project for the Apple Vision Pro. I would like people passing in front of virtual objects to occlude the virtual objects behind them. Something similar to this: https://developer.apple.com/documentation/arkit/occluding-virtual-content-with-people
I could unfortunately not find any documentation about this. Is it possible to implement body segmentation or occlusion on the Apple Vision Pro? If it's not currently supported, are there plans to add it? Any ideas on how to achieve this with existing tools?
Thanks!
Mehdi
Hi, could anyone share some insights on how to get and track the 3D coordinates of real objects in the environment in Playgrounds? I searched for some resources and noticed that ARKit and RealityKit may be helpful, but I am not sure how to do it.
When you tap the button with the three horizontal lines, analytics event tracking is added in the view's will-appear method; but tapping the button again to close it triggers the will-appear method a second time, so the method effectively executes twice and the analytics event counts are incorrect.
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
I would also like to add corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to the containers.
How can I properly apply a clipshape to RealityViews like this? Thanks!
In Reality Composer, it is possible to create child components and manipulate them within the hierarchy of a ModelEntity. Is there a way to create child components in other 3D modeling programs, such as Blender?
In the DestinationVideo demo, the onAppear in UpNextView is triggered again when it is closed, but I only want it to be triggered once. How can I achieve this?
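One workaround I am considering is gating the work behind a one-shot flag, as in the untested sketch below (hasRunOnce and the placeholder content are my own names), but I would prefer a proper way to do this:

import SwiftUI

struct OnceOnlyExample: View {
    // Persists across repeated onAppear calls for the lifetime of this view.
    @State private var hasRunOnce = false

    var body: some View {
        Text("Up Next")   // placeholder for the view's real content
            .onAppear {
                guard !hasRunOnce else { return }
                hasRunOnce = true
                // one-time work goes here
            }
    }
}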
Alternatively, I would like to capture the button click events in the player menu, as shown in the screenshot below.
I am a newbie to spatial computing. I am learning how to use ARKit to capture the environment texture and apply it to a ModelEntity in RealityKit on Vision Pro, but I cannot find a demo of how to use EnvironmentLightEstimationProvider.
After checking the documentation, I also have some questions:
EnvironmentProbeAnchor.environmentTexture is an MTLTexture, but EnvironmentResource needs a CGImage. How do I convert an MTLTexture to a CGImage? (Forgive me, I do not know much about Metal or the other frameworks involved, so it would be better if there is code I can copy and paste directly.)
It seems that the EnvironmentProbeAnchor can only get the light information around the device. But what should I do if I want to get the light information around the ModelEntity, so that I can apply the environment texture to it?
It would be better if you could provide a code demo of how to use the new API.
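For question 1, this is the kind of conversion I have in mind, pieced together from the Core Image documentation. It is only an untested sketch: I am not sure whether environmentTexture is a 2D texture or a cube map (a cube map would need a single face extracted first), and cgImage(from:) is just a name I made up:

import CoreImage
import Metal

// Wraps a 2D MTLTexture in a CIImage and renders it out as a CGImage.
// Metal textures have a top-left origin, so the image is flipped vertically.
func cgImage(from texture: MTLTexture) -> CGImage? {
    let colorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)!
    guard let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace]) else {
        return nil
    }
    let flipped = ciImage.oriented(.downMirrored)
    let context = CIContext()
    return context.createCGImage(flipped, from: flipped.extent)
}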
Thank you!
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation and I am trying to embrace it as a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro?
I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content.
Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this -
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or whether this is the right way to go about it.
Also, I have implemented this at the beginning of object tracking.
All I had to do was add an onAppear behavior to the object to play a USDZ, and that works.
Doing it for disappearing (due to loss of the reference object) seems to be a lot harder.
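For context, the direction I have been exploring in code looks roughly like this untested sketch. I believe ObjectTrackingProvider is what tracks a .referenceobject; objectTracking and vanishEntity are placeholder names, and I do not know how to combine this with the RCP behaviors:

import ARKit
import RealityKit

// Watches the object anchor and plays the "vanish" USDZ animation once tracking is lost,
// leaving the entity at its last known transform.
func watchForTrackingLoss(objectTracking: ObjectTrackingProvider, vanishEntity: Entity) async {
    for await update in objectTracking.anchorUpdates {
        if case .updated = update.event, !update.anchor.isTracked {
            // The reference object is no longer tracked: the entity keeps its last
            // transform, so play the smoothing animation before hiding the content.
            if let animation = vanishEntity.availableAnimations.first {
                vanishEntity.playAnimation(animation, transitionDuration: 0.1)
            }
        }
    }
}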
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
AR / VR
RealityKit
Reality Composer Pro
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the reference object quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro?
Nearly everything is set up in Reality Composer Pro, with my immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this -
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and whether this is actually the right way to do it.
Apologies for my lack of knowledge.
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
ARKit
RealityKit
Reality Composer Pro
visionOS
Hi there,
I’m building a workplace experience that requires using the virtual desktop. Is there a way to launch it from my code, so the user doesn’t have to do it manually?
Thanks in advance!
If I update the Vision Pro window size, the text field and button do not update to match the new size.
It works on some screens but not on others.
Please refer to the screenshot below for reference.
Hi
I'm trying to create a water shader using the shader graph in Reality Composer Pro, but quite a few of the features you would need for realistic water rendering appear to be missing.
One big issue is the lack of a way to create refraction. We can easily control the transparency of the water by changing the opacity, but how can we distort what we see through the water? I can't find any obvious solution for that.
In Unity, they provide a node called HD Scene Color which is basically the scene rendered to an offscreen buffer which you can apply to the water and then distort to get a refraction effect. I guess the Background Blur node could be used for something like this if we could turn off the blur and distort it, but there's no control for the blur and no control for the texture coordinates.
Am I missing something? Any ideas are welcome :)
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Hello,
I checked the following documentation:
Vision | Apple Developer Documentation
Discover Swift enhancements in the Vision framework - WWDC24 - Videos - Apple Developer
I saw that the Vision framework is available on visionOS.
So I want to know whether it's possible to use the Vision framework on visionOS for tracking human and animal body poses, or whether there are some limits to using it on visionOS.
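For reference, this is the kind of call I have in mind (an untested sketch; I do not know whether it behaves differently on visionOS):

import CoreGraphics
import Vision

// Runs the human body pose request on a single image and returns the observations.
func detectHumanBodyPose(in cgImage: CGImage) throws -> [VNHumanBodyPoseObservation] {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])
    return request.results ?? []
}

I believe there is also VNDetectAnimalBodyPoseRequest for the animal case, but again I am not sure about visionOS support.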
Dear Apple Engineers,
I am working on a project in visionOS and need to implement a curved surface effect for video playback, where the width of the surface can be dynamically adjusted. Specifically, I want the video to be displayed on a curved surface (similar to a scroll unfolding), and the user should be able to adjust the width of this surface.
I have the following specific questions:
How can I implement a curved surface for video playback and ensure the video content is not stretched or distorted on the surface?
How can I create a dynamic curved surface (such as a bending plane) in RealityKit or visionOS, where the width can be adjusted by the user?
Is it possible to achieve more complex curved surface effects (such as scroll unfolding or bending) using Shaders or other techniques?
Thank you very much for your help!
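To make the second question more concrete, this is the kind of mesh generation I have in mind: a partial cylinder whose width is driven by the arc angle, with UVs spread evenly across the arc so the video is not stretched. It is only an untested sketch, and makeCurvedScreenMesh and all of its parameters are placeholder names:

import RealityKit
import simd

// Builds a partial-cylinder ("unrolled scroll") mesh centred in front of the viewer.
// The arc angle controls the visible width; regenerate the mesh when the user adjusts it.
func makeCurvedScreenMesh(radius: Float = 2.0,
                          height: Float = 1.0,
                          arcAngle: Float = .pi / 2,
                          segments: Int = 64) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    for i in 0...segments {
        let t = Float(i) / Float(segments)
        let angle = (t - 0.5) * arcAngle            // sweep centred on -Z
        let x = radius * sin(angle)
        let z = -radius * cos(angle)
        positions.append(SIMD3<Float>(x, -height / 2, z))   // bottom vertex of this column
        positions.append(SIMD3<Float>(x, height / 2, z))    // top vertex of this column
        uvs.append(SIMD2<Float>(t, 0))                      // flip V if the video appears upside down
        uvs.append(SIMD2<Float>(t, 1))
    }
    for i in 0..<segments {
        let base = UInt32(i * 2)
        // Two triangles per quad; reverse the order if the surface faces away from the viewer.
        indices += [base, base + 2, base + 1, base + 1, base + 2, base + 3]
    }

    var descriptor = MeshDescriptor(name: "curvedScreen")
    descriptor.positions = MeshBuffer(positions)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

My thinking is that the video would go onto this mesh with a VideoMaterial, and the width slider would simply rebuild the mesh with a new arcAngle, but I do not know whether that is the intended approach.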
I have been experimenting with some experiences in which I would like to use SharePlay to allow the app to be used by multiple users.
Currently I have achieved sharing a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation.
So far I have been able to correctly share the volume and its content, with the animation playing without problems, but once I activate SharePlay, different users see different moments of the animation, if any animation at all.
Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time once someone joins?
Topic:
Spatial Computing
SubTopic:
Reality Composer Pro
Tags:
RealityKit
Group Activities
Reality Composer Pro
visionOS