Hello,
I am trying to develop an app that broadcasts what the user sees via Apple Vision Pro.
I have applied for and obtained the Enterprise APIs, and I can in fact stream via the "Main camera access" API, as described in https://developer.apple.com/videos/play/wwdc2024/10139/.
My problem is that I have not found any reference explaining how to integrate the "Passthrough in screen capture" API into my project.
Have any of you been able to do this?
Thank you
I noticed that tracking moving images is very slow on visionOS.
Although the anchor-update closure is called multiple times per second, the anchor's transform seems to be updated only once in a while. Another possibility is that SwiftUI isn't refreshing often enough.
On iOS, image tracking is quite smooth.
Is there a way to speed this up on visionOS as well?
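Not an authoritative answer, but one common cause of apparent lag is routing every pose through SwiftUI state. A minimal sketch, assuming visionOS ARKit's ImageTrackingProvider and an image group named "TrackedImages" (a placeholder), that applies each anchor transform directly to a RealityKit entity:

```swift
import ARKit
import RealityKit

// Sketch: drive an entity directly from anchor updates, bypassing
// SwiftUI state so the pose is applied as soon as it arrives.
// "TrackedImages" is a placeholder asset-catalog group name.
let session = ARKitSession()
let provider = ImageTrackingProvider(
    referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "TrackedImages")
)

func track(markerEntity: Entity) async throws {
    try await session.run([provider])
    for await update in provider.anchorUpdates {
        guard update.anchor.isTracked else { continue }
        // originFromAnchorTransform is the anchor's pose in world space.
        markerEntity.setTransformMatrix(update.anchor.originFromAnchorTransform,
                                        relativeTo: nil)
    }
}
```

Updating the entity in the update loop avoids a round trip through an observable view model, which can otherwise quantize updates to SwiftUI's refresh cadence.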
Hello,
I have rendered a USDZ file using SceneKit's .write() method on the displayed scene. After loading it into a RealityKit ARView using the camera's .nonAR mode, I am trying to use the view's raycast(from:allowing:alignment:) method to get coordinates on the model. I applied collision components when loading the model using the .generateCollisionShapes() function so that I can interact with the model entity.
However, the raycast returns no results.
Is there something I am missing for it to work?
Thanks!
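Not a definitive answer, but one thing worth checking: raycast(from:allowing:alignment:) is backed by the ARKit session, which isn't running in .nonAR mode. A sketch of the collision-based alternatives, assuming the entity already has collision shapes from .generateCollisionShapes():

```swift
import RealityKit
import UIKit

// Sketch: in .nonAR mode, query RealityKit's collision shapes directly
// instead of the ARKit-backed raycast(from:allowing:alignment:).
func hitModel(in arView: ARView, at screenPoint: CGPoint) -> SIMD3<Float>? {
    // Option 1: screen-point hit test against collision shapes.
    if let hit = arView.hitTest(screenPoint).first {
        return hit.position // world-space hit position
    }
    // Option 2: build a ray through the screen point and cast it manually.
    if let ray = arView.ray(through: screenPoint) {
        let hits = arView.scene.raycast(origin: ray.origin,
                                        direction: ray.direction)
        return hits.first?.position
    }
    return nil
}
```

Both paths rely only on the entities' CollisionComponents, so they should work without any AR session.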
Has anybody tried the hand tracking provider in 2.0? I'm getting updates at the advertised 11 ms interval, but they are duplicates. Here's a print of the timestamps. This is a problem for me because I track the last 5 positions for a calculation and expect them to be unique. I can't find docs on this anywhere.
I understand it's not truly 90 updates a second but a predicted pose; however, I expected the updates to include predicted poses.
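A possible workaround rather than an explanation: filter out updates whose pose hasn't changed before appending to the history buffer. A minimal sketch, assuming a running HandTrackingProvider:

```swift
import ARKit

// Sketch: keep a ring buffer of the last 5 *unique* right-hand poses,
// skipping consecutive anchor updates whose transform is identical.
func collectUniquePoses(from handTracking: HandTrackingProvider) async {
    var lastTransform: simd_float4x4? = nil
    var recent: [simd_float4x4] = []

    for await update in handTracking.anchorUpdates {
        let anchor = update.anchor
        guard anchor.chirality == .right, anchor.isTracked else { continue }
        // Skip duplicates: only keep poses that actually changed.
        guard anchor.originFromAnchorTransform != lastTransform else { continue }
        lastTransform = anchor.originFromAnchorTransform
        recent.append(anchor.originFromAnchorTransform)
        if recent.count > 5 { recent.removeFirst() }
    }
}
```

Comparing the full transform rather than the timestamp sidesteps the question of which timestamp field the duplicates share.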
I have an immersive space that is rendered using Metal. Is there a way to position SwiftUI views at coordinates relative to positions in my immersive space?
I know that I can display a volume with RealityKit content alongside my Metal content. But the volume's coordinate system (specifically, its bounds) does not coincide with my entire Metal scene.
One approach I thought of would be to open two views in my immersive space. That way, I could simply add Attachments to invisible RealityKit entities in one view at the positions where I have content in my Metal scene.
Unfortunately, it seems that while I can declare an ImmersiveSpace composed of multiple RealityViews:
ImmersiveSpace {
    RealityView { content in
        // load first view
    } update: { content in
        // update
    }
    RealityView { content in
        // load second view
    } update: { content in
        // update
    }
}
That results in two coinciding RealityKit views in the immersive space.
I cannot, however, do something like this:
ImmersiveSpace {
    CompositorLayer(configuration: ContentStageConfiguration()) { layerRenderer in
        // set up my Metal renderer and stuff
    }
    RealityView { content in
        // set up a view where I could use attachments
    } update: { content in
    }
}
Hello,
We are wondering how, and whether, we can save or render the view that a PerspectiveCamera sees in visionOS.
Ideally, the goal is to use a PerspectiveCamera in a mixed immersive space and render out what it sees.
We don't see any methods exposed for rendering this view.
We would truly appreciate any advice on this!
Thank you.
As the title suggests, I would like to use environmental awareness and item placement functions in Unity. Are there any related example projects?
I learned SharePlay from the WWDC video. I understand the creation of seats, but I couldn't work out some of the following, so I hope you can help me. I have set up the seats:
struct TeamSelectionTemplate: SpatialTemplate {
    let elements: [any SpatialTemplateElement] = [
        .seat(position: .app.offsetBy(x: 0, z: 4)),
        .seat(position: .app.offsetBy(x: 1, z: 4)),
        .seat(position: .app.offsetBy(x: -1, z: 4)),
        .seat(position: .app.offsetBy(x: 2, z: 4)),
        .seat(position: .app.offsetBy(x: -2, z: 4)),
    ]
}
In one of my previous posts I asked: "I hope you can give me a SharePlay button. After pressing it, it will assign all users in the FaceTime call to the seats defined in TeamSelectionTemplate." Someone replied and asked me to try systemCoordinator.configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate()); however, Xcode reports the error "Cannot find 'systemCoordinator' in scope". How do I solve this? Thank you!
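Not an authoritative answer, but the likely missing piece is that systemCoordinator is not a global: it is fetched from the active GroupSession. A sketch, where MyActivity stands in for your own GroupActivity type:

```swift
import GroupActivities

// Sketch: the system coordinator comes from the active GroupSession.
// MyActivity is a placeholder for your own GroupActivity type.
func configureSpatialTemplate(for session: GroupSession<MyActivity>) async {
    guard let systemCoordinator = await session.systemCoordinator else { return }

    var configuration = SystemCoordinator.Configuration()
    configuration.spatialTemplatePreference = .custom(TeamSelectionTemplate())
    systemCoordinator.configuration = configuration
}
```

So the call needs to happen inside whatever code receives the session (e.g. your `for await session in MyActivity.sessions()` loop), not at the top level of a view.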
When I wanted to load a Reality Composer Pro scene containing object tracking, I tried the following code:
RealityView { content in
    if let model = try? await Entity(named: "Scene", in: realityKitContentBundle) {
        content.add(model)
    }
}
Obviously, this is wrong. We need to add some configuration that enables object tracking in the RealityView. What do we need to add?
Note: I have seen https://developer.apple.com/videos/play/wwdc2024/10101/, but I don't understand much of it.
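Not a definitive answer, but a sketch of the visionOS 2 ARKit path that session covers: run an ObjectTrackingProvider with your reference object and keep an entity glued to the resulting anchor. "MyObject" is a placeholder for a .referenceobject file produced with Create ML:

```swift
import ARKit
import RealityKit

// Sketch: run object tracking and keep a marker entity attached to the
// tracked object. "MyObject" is a placeholder .referenceobject file.
let session = ARKitSession()

func runObjectTracking(root: Entity, marker: Entity) async throws {
    guard let url = Bundle.main.url(forResource: "MyObject",
                                    withExtension: "referenceobject") else { return }
    let reference = try await ReferenceObject(from: url)
    let provider = ObjectTrackingProvider(referenceObjects: [reference])
    try await session.run([provider])

    for await update in provider.anchorUpdates {
        let transform = update.anchor.originFromAnchorTransform
        switch update.event {
        case .added:
            root.addChild(marker)
            marker.setTransformMatrix(transform, relativeTo: nil)
        case .updated:
            marker.setTransformMatrix(transform, relativeTo: nil)
        case .removed:
            marker.removeFromParent()
        }
    }
}
```

You would call this from the RealityView's make closure (e.g. in a Task), with `root` being an entity you added to `content` and `marker` the Reality Composer Pro content you want pinned to the object.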
I found that the app AirDraw can export users' drawn scenes to a USDZ file. How can I implement this function using RealityKit?
The project was developed using Unity, and the requirement is to place a virtual model in the real world, so that when the user leaves the environment, or the device is turned off and on again, the virtual model is still in its original real-world position. I found that ARKit's world tracking function is useful, but I don't know how to use it in Unity. Are there any related example projects?
I am trying to make the immersive version of AVPlayerViewController bigger, but I can't find any information on how to go about it. It seems that if I want to change the immersive video viewing experience, the only thing I can do is use a VideoMaterial on a ModelEntity created with .generatePlane. Is there a way to change the video size in immersive mode for AVPlayerViewController?
Hello,
I want to play an mp4 file in a VideoMaterial via AVPlayer.
First I used Reality Composer Pro: I created a material on the sphere provided by default in Reality Composer Pro and exported it to USDZ. When I play the mp4 file on that sphere, it plays well.
But with custom geometry (e.g. a 3D model created in Shapr3D) it does not play well. I made a custom curved surface in Shapr3D and exported it to USDZ, then placed it in a Reality Composer Pro scene and exported that to USDZ as well.
When I play the mp4 file on the curved surface, it does not play correctly: the video does not fit the surface.
How can I adjust and display the video on a custom USDZ file?
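Not a definitive fix, but a sketch of the basic VideoMaterial path for reference. How the video lands on custom geometry is controlled by that mesh's UV coordinates, so a model exported without proper UV mapping (which can happen with CAD-style exports) will show a stretched or misplaced picture; the UVs are fixed in the modeling tool, not in code. "CurvedScreen" is a placeholder USDZ name:

```swift
import AVFoundation
import RealityKit

// Sketch: apply a VideoMaterial to a model loaded from a USDZ file.
// The video is mapped via the mesh's UV coordinates, so the custom
// model must have sensible UVs (authored in Shapr3D/Blender/etc.).
func makeVideoScreen(videoURL: URL) async throws -> ModelEntity {
    let player = AVPlayer(url: videoURL)
    let material = VideoMaterial(avPlayer: player)

    // "CurvedScreen" is a placeholder for your exported USDZ model.
    let screen = try await ModelEntity(named: "CurvedScreen")
    screen.model?.materials = [material]

    player.play()
    return screen
}
```

If the sphere works but the Shapr3D export doesn't, comparing the two models' UV layouts in a DCC tool is a reasonable first diagnostic step.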
For visionOS 2.0+, the object tracking feature has been announced. Is there any support for it in Unity's PolySpatial, or is it only available in Swift and Xcode?
Hello,
I am currently developing an application using RealityKit and I've encountered a couple of challenges that I need assistance with:
Capturing Perspective Camera View: I am trying to render or capture the view from a PerspectiveCamera in RealityKit/RealityView. My goal is to save this view of a 3D model as an image or video using a virtual camera. However, I'm unsure how to access or redirect the rendered output from a PerspectiveCamera within RealityKit. Is there an existing API or a recommended approach to achieve this?
Integrating SceneKit with RealityKit: I've also experimented with using SCNNode and SCNCamera to capture the camera's view, but I'm wondering whether SceneKit is directly compatible within a RealityKit scene, specifically within a RealityView.
I would like to leverage the advanced features of RealityKit for managing 3D models. Is saving the virtual view of a camera supported, and if so, what are the best practices?
Any guidance, sample code, or references to documentation would be greatly appreciated.
Thank you in advance for your help!
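Not an authoritative answer on the RealityKit side, but for the SceneKit half of the question: SceneKit nodes can't be embedded directly in a RealityView, while offscreen capture of a SceneKit scene is possible with SCNRenderer. A sketch, assuming an existing SCNScene that already contains a camera node:

```swift
import Metal
import SceneKit
import UIKit

// Sketch: render an SCNScene offscreen to a UIImage with SCNRenderer.
// Assumes `scene` already contains a camera node to render from.
func snapshot(of scene: SCNScene, size: CGSize) -> UIImage {
    let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
    renderer.scene = scene
    // Renders the scene at time 0 into an offscreen image.
    return renderer.snapshot(atTime: 0,
                             with: size,
                             antialiasingMode: .multisampling4X)
}
```

For video, this could be called per frame and the images fed to an AVAssetWriter, at the cost of maintaining the scene in SceneKit rather than RealityKit.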
I just followed the video and added the code, but when I switch to spatial video capturing, the videoPreviewLayer shows black.
<<<< FigCaptureSessionRemote >>>> Fig assert: "! storage->connectionDied" at bail (FigCaptureSessionRemote.m:405) - (err=0)
<<<< FigCaptureSessionRemote >>>> captureSessionRemote_getObjectID signalled err=-16405 (kFigCaptureSessionError_ServerConnectionDied) (Server connection was lost) at FigCaptureSessionRemote.m:405
<<<< FigCaptureSessionRemote >>>> Fig assert: "err == 0 " at bail (FigCaptureSessionRemote.m:421) - (err=-16405)
<<<< FigCaptureSessionRemote >>>> Fig assert: "msg" at bail (FigCaptureSessionRemote.m:744) - (err=0)
Did I miss something?
We are developing apps for visionOS and need the following capabilities for a consumer app:
access to the main camera, to let users shoot photos and videos
reading QR codes, to trigger the download of additional content
So I was really happy when I noticed that visionOS 2.0 has these features.
However, I was shocked when I also realized that these capabilities are restricted to enterprise customers only:
https://developer.apple.com/videos/play/wwdc2024/10139/
I think that Apple is shooting itself in the foot with these restrictions. I can understand that privacy is important, but these limitations drastically restrict the potential use cases for this platform, even in the consumer space.
IMHO, Apple should decide whether they want to target consumers in the first place, or whether they want to go the HoloLens / Magic Leap route and mainly serve enterprise customers and their respective devs. With the current setup, Apple risks pushing devs away to other platforms where they have more freedom to create great apps.
Hello,
I am new to SwiftUI and visionOS, but I have developed an app with a window and an ImmersiveSpace. I want the immersive space to be dismissed when the window/app is closed.
I have the code below using the ScenePhase state; it was working fine in visionOS 1.1 but stopped working with visionOS 2.0.
Any idea what I am doing wrong? Is there another way to handle the dismissal of ImmersiveSpace when my main Window is closed?
@main
struct MyApp: App {
    @State private var viewModel = ViewModel()

    var body: some Scene {
        @Environment(\.scenePhase) var scenePhase
        @Environment(\.dismissImmersiveSpace) var dismissImmersiveSpace

        WindowGroup {
            SideBarView()
                .environment(viewModel)
                .frame(width: 1150, height: 700)
                .onChange(of: scenePhase, { oldValue, newValue in
                    if newValue == .inactive || newValue == .background {
                        Task {
                            await dismissImmersiveSpace()
                            viewModel.immersiveSpaceIsShown = false
                        }
                    }
                })
        }.windowResizability(.contentSize)

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView(area: viewModel.currentModel)
                .environment(viewModel)
        }
    }
}
I tested the new visionOS object tracking, and it worked really well.
I created a reference object using Create ML, and it reliably detected the object.
My question is: does this also work on iOS, and if not right now, is it planned for iOS in the future?
Hello,
I watched the WWDC24 session covering the Ultra Wide Mac Display.
I want my own player to work like that Ultra Wide Mac Display: playing an mp4 movie file in an ultra-wide (curved) mode.
Can I use Ultra Wide AVKit in visionOS 2?
When I checked the Apple documentation, I found AVExperienceController.Experience.expanded. Is this the ultra-wide mode function I'm thinking of?
(https://developer.apple.com/documentation/avkit/avexperiencecontroller/experience-swift.enum/expanded#discussion)