Hi everyone,
I need to synchronize the playback of RealityKit Timelines via SharePlay.
To do this I am trying to get references to the timelines using AnimationPlaybackController and AnimationResource. In my RealityKit scene I have configured both an animation (created in Blender) and a timeline; the animation starts correctly when the RealityKit scene starts, but the timeline does not.
Below the code:
struct ContentView: View {
    @State private var subscriptions = [EventSubscription]()
    @Environment(AppModel.self) private var appModel

    let rootEntity = Entity()

    @State var testEntity: Entity?
    @State var testAnimation: AnimationResource?
    @State var testController: AnimationPlaybackController?

    init() {
        CubeComponent.registerComponent()
    }

    var body: some View {
        RealityView { content in
            content.add(rootEntity)
            if let scene = try? await Entity(named: "Room", in: realityKitContentBundle) {
                rootEntity.addChild(scene)
                playAnimations(from: content)
            }
        }
        .gesture(SpatialTapGesture().targetedToAnyEntity()
            .onEnded({ value in
                _ = value.entity.applyTapForBehaviors()
                if let testEntity, let testAnimation {
                    testController = testEntity.playAnimation(testAnimation.repeat())
                }
            })
        )
    }

    func playAnimations(from content: RealityViewContent) {
        subscriptions.append(content.subscribe(to: ComponentEvents.DidAdd.self, componentType: AnimationLibraryComponent.self, { event in
            let entity = event.entity
            entity.components[AnimationLibraryComponent.self]?.animations.forEach({ (key, value) in
                if value.definition is AnimationGroup {
                    if key == "/Room/TestTimeline" {
                        let controller = entity.playAnimation(value.repeat())
                        testEntity = entity
                        testAnimation = value
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                } else {
                    if entity.name == "SphereInteractable" {
                        let controller = entity.playAnimation(value.repeat())
                        appModel.syncronizedAnimations[key] = .init(name: key, animationController: controller, entityName: entity.name)
                    }
                }
            })
        }))
    }
}
The variables testEntity, testAnimation and testController are for testing purposes only. If I try to start the animations in the playAnimations function, only the animation created via Blender starts (the one related to the "SphereInteractable" object); the timeline starts only if I save a reference and play it later, either with a tap gesture or after a short delay using DispatchQueue.asyncAfter called in onAppear.
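For reference, the delayed workaround currently looks roughly like this (the one-second delay is an arbitrary value I picked, not anything meaningful):

.onAppear {
    // Arbitrary delay: replay the timeline once the saved references are available.
    DispatchQueue.main.asyncAfter(deadline: .now() + 1.0) {
        if let testEntity, let testAnimation {
            testController = testEntity.playAnimation(testAnimation.repeat())
        }
    }
}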
Is there a better way to handle this? The goal is to have a reference to the timeline's AnimationPlaybackController in order to sync the animation via SharePlay.
Thanks
Hi,
On visionOS, we can rely on RotateGesture3D to manage entity rotation. With the constrainedToAxis parameter, we can even restrict rotation to the x, y or z axis, or to combinations of them.
What I want to know is whether it is possible to constrain the rotation to an axis automatically.
Let me explain: the functionality I would like to implement is to constrain the rotation to a single axis only once the user has started their gesture. The initial motion of the gesture should tell us which axis they want to rotate around.
This would be equivalent to automatically activating a constraint on one of the axes, as if the gesture had been defined on that axis:
RotateGesture3D(constrainedToAxis: .x)
RotateGesture3D(constrainedToAxis: .y)
RotateGesture3D(constrainedToAxis: .z)
Is it possible to do this?
If so, what would be the best way to do it?
A code example would be greatly appreciated.
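To make the idea concrete, here is a rough sketch of the behaviour I have in mind (the modifier name, the 0.05 rad threshold and the dominant-axis heuristic are just my assumptions, not an existing API):

import SwiftUI
import RealityKit
import Spatial

// Sketch: lock rotation to whichever axis dominates the user's initial motion.
struct AxisLockedRotateModifier: ViewModifier {
    @State private var lockedAxis: SIMD3<Float>? = nil
    @State private var startOrientation: simd_quatf? = nil

    func body(content: Content) -> some View {
        content.gesture(
            RotateGesture3D()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    if startOrientation == nil {
                        startOrientation = entity.orientation
                    }
                    let rotation = value.rotation
                    // Decide the axis once the gesture has moved far enough.
                    if lockedAxis == nil, abs(rotation.angle.radians) > 0.05 {
                        let a = rotation.axis
                        let candidates: [(SIMD3<Float>, Double)] = [
                            ([1, 0, 0], abs(a.x)),
                            ([0, 1, 0], abs(a.y)),
                            ([0, 0, 1], abs(a.z))
                        ]
                        lockedAxis = candidates.max(by: { $0.1 < $1.1 })?.0
                    }
                    guard let axis = lockedAxis, let start = startOrientation else { return }
                    // Re-apply only the rotation angle, around the locked axis.
                    let constrained = simd_quatf(angle: Float(rotation.angle.radians), axis: axis)
                    entity.orientation = start * constrained
                }
                .onEnded { _ in
                    lockedAxis = nil
                    startOrientation = nil
                }
        )
    }
}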
Regards
Tof
I would like to visualize a point cloud taken from a lidar. Assuming I can get the XYZ values of every point (of which there may be hundreds or thousands), what is the most efficient way for me to create a point cloud using this information?
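For context, the naive approach I would start from (just my assumption that merging everything into a single MeshResource beats one entity per point; the helper name and triangle size are placeholders) looks like this:

import RealityKit

// Sketch: build one merged mesh so the whole cloud is a single entity / draw call.
func makePointCloudEntity(points: [SIMD3<Float>], pointSize: Float = 0.002) throws -> ModelEntity {
    var positions: [SIMD3<Float>] = []
    var indices: [UInt32] = []
    positions.reserveCapacity(points.count * 3)

    // Represent each point as a tiny triangle (all facing +Z; good enough as a first pass).
    for p in points {
        let base = UInt32(positions.count)
        positions.append(p + SIMD3(-pointSize, -pointSize, 0))
        positions.append(p + SIMD3( pointSize, -pointSize, 0))
        positions.append(p + SIMD3( 0,          pointSize, 0))
        indices.append(contentsOf: [base, base + 1, base + 2])
    }

    var descriptor = MeshDescriptor(name: "pointCloud")
    descriptor.positions = MeshBuffer(positions)
    descriptor.primitives = .triangles(indices)

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh, materials: [UnlitMaterial(color: .white)])
}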
Hi,
Following the WWDC24 video "What's new in Quick Look for visionOS", I've managed to open a 3D model using PreviewApplication by calling:
let previewItem = PreviewItem(url: modelURL, displayName: "Easter", editingMode: .disabled)
_ = PreviewApplication.open(items: [previewItem])
However, the "Save to Downloads" option is aways there(see attached screenshot). As the models are user generated content, and I don't want the download option to be available to all users. How to disable it?
I tried "WWDC24: Build compelling spatial photo and video experiences | Apple" and it can successfully capture spatial video.
But I found the video by my app differs from the iPhone build-in camera app in:
Videos captured with the iPhone's build-in camera app tend to have a more natural or warmer tone, while videos taken with my app appear whiter or cooler in color temperature.
In videos recorded using the iPhone's built-in camera app, the left eye image is typically sharper than the right eye image. However, in my app, this is reversed: the right eye image is clearer than the left eye image.
I've noticed that when I cover the wide-angle lens while shooting, the entire preview screen in my app becomes brighter. However, this doesn't occur when using the iPhone's built-in camera app.
Is there any API or parameter that would make my app's output closer to the iPhone's built-in Camera app? I have tried whiteBalanceMode and exposureMode but had no luck.
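For reference, this is roughly what I tried on the AVCaptureDevice (just the standard continuous auto modes; the placement and helper name are my own attempt, not a confirmed fix):

import AVFoundation

// Sketch of what I've tried so far: switch the capture device to continuous auto modes.
func applyAutoAdjustments(to device: AVCaptureDevice) {
    do {
        try device.lockForConfiguration()
        if device.isWhiteBalanceModeSupported(.continuousAutoWhiteBalance) {
            device.whiteBalanceMode = .continuousAutoWhiteBalance
        }
        if device.isExposureModeSupported(.continuousAutoExposure) {
            device.exposureMode = .continuousAutoExposure
        }
        device.unlockForConfiguration()
    } catch {
        print("Failed to lock device for configuration: \(error)")
    }
}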
A popover presented from a view that is used as an attachment is properly displayed in preview mode in the canvas but not so at runtime. I was wondering if it is supported at all.
Hi Nathaniel,
I spoke with you yesterday in the WWDC lab. Thanks for chatting with me! Is it possible to get a link to a doc that has some key metrics I'd find in a RealityKit trace so I know if that metric is exceeding limits and probably causing a problem? Right now, I just see numbers and have no idea if a metric is high or low :). This is specifically for a VisionOS app.
Thanks,
Bob
We developed an app for visionOS 2.0 Beta in Xcode 16 Beta. The development was done on the beta versions since we needed features that were not available in visionOS 1.0, the most recent stable release at the time. The app was fully functional on our AVP running visionOS 2.0 Beta 5, and we never had any errors. We did not publish the app to the App Store since we are a research lab using the app for teleoperating a custom robot.
Last week, we upgraded the AVP from visionOS 2.0 Beta 5 to visionOS 2.0 (stable release). Unfortunately, once we upgraded to 2.0, we began to have an issue with the app. While the app is running, at seemingly random times and without any new functionality being used within the app (no new buttons being pressed, etc.), we encounter the following console error:
assertion failure: 'index < m_size' (operator[]:line 1011) Index out of range. index = 18446744073709551615, size = 0
We could re-upload the app to the AVP and successfully operate it for several minutes until the same error occurred again. We thought about using Apple Configurator to flash visionOS 2.0 Beta 5 back onto the AVP, since the error wasn't happening on the previous firmware, but we were unable to flash a beta version of visionOS via Apple Configurator. So we simply performed a factory reset of the device (on visionOS 2.0, by pressing Restore in Apple Configurator with the AVP connected via the developer strap) to see if this might fix the issue.
After the factory reset, we thought the console error had gone away completely. We were able to operate the app for ~3 hours on Sunday with no issues. Then, yesterday (Monday), we operated the app for another 2 hours, and at the very end of the session it crashed with the same error. We re-uploaded the app with Xcode, and the error occurred again after about 20 minutes of use. This cycle repeated, and every time we re-uploaded the app, the time it took for the error to occur decreased, until the error occurred within 20 seconds of uploading.
We decided to test our hypothesis by upgrading to visionOS 2.1 and using Xcode 16. Similarly, we were able to run the app on the AVP for 2 hours before the error occurred. The next time we ran the app, the error occurred within 20 minutes, then, after reloading, within 5 minutes, then 2 minutes, etc.
We are pretty stumped as to why the app works for hours after a factory reset or a firmware upgrade, then fails faster and faster every time we re-upload it from Xcode. We are not experienced in debugging Swift and Objective-C, so we wanted to ask whether this is an issue you have run into before, in the hope that you can point us in the right direction. We think it could be a problem with cached memory that persists on the device across uploads from Xcode, but that's the extent of our understanding.
P.S., we also experienced this error during some of the app failures, but the one above is the most common:
assertion failure: Index out of range (operator[]:line 858) index = 576460752303423487, max = 1
I have a visionOS app that plays audio using AVAudioEngine and presents both a window and an immersive space. If I close the window, the audio session gets interrupted and attempting to restart the session and audio engine has no effect. I need to dismiss the app, then reopen it, which reopens the main window, in order for audio to start playing again.
This is in all visionOS 2 betas. Note that I have background audio enabled for my app.
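For reference, this is roughly the restart path I'm attempting, which currently has no effect (the notification-based recovery class is my own approach, not from a sample):

import AVFAudio

// Sketch: listen for interruption endings, then reactivate the session and restart the engine.
final class AudioRestarter {
    private let engine: AVAudioEngine

    init(engine: AVAudioEngine) {
        self.engine = engine
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(handleInterruption(_:)),
            name: AVAudioSession.interruptionNotification,
            object: AVAudioSession.sharedInstance()
        )
    }

    @objc private func handleInterruption(_ notification: Notification) {
        guard let info = notification.userInfo,
              let typeValue = info[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: typeValue),
              type == .ended else { return }
        do {
            try AVAudioSession.sharedInstance().setActive(true)
            try engine.start()
        } catch {
            print("Failed to restart audio after interruption: \(error)")
        }
    }
}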
Hi,
I was wondering whether the Enterprise APIs for visionOS 2 include access to the raw LiDAR data from the Apple Vision Pro, or to any intermediate data representation (like the depthMap shown in this post). Or is there any other way to get access to this data?
Thanks in advance!
I am following this example to create a stereoscopic image: https://developer.apple.com/documentation/visionos/creating-stereoscopic-image-in-visionos
I would also like to add corner radius to the stereoscopic RealityView. With ordinary SwiftUI views, we typically just use .clipShape(RoundedRectangle(cornerRadius: 32)):
struct StereoImage: View {
    var body: some View {
        let spacing: CGFloat = 10.0
        let padding: CGFloat = 40.0
        VStack(spacing: spacing) {
            Text("Stereoscopic Image Example")
                .font(.largeTitle)
            RealityView { content in
                let creator = StereoImageCreator()
                guard let entity = await creator.createImageEntity() else {
                    print("Failed to create the stereoscopic image entity.")
                    return
                }
                content.add(entity)
            }
            .frame(depth: .zero)
        }
        .padding(padding)
        .clipShape(RoundedRectangle(cornerRadius: 32)) // <= HERE!
    }
}
This doesn't seem to actually clip the RealityView shown in the sample above. I am guessing this is because the box in the RealityView has a non-zero z scale, which means it isn't on the same "layer" as its SwiftUI containers, and thus isn't clipped by the modifiers applied to those containers.
How can I properly apply a clipshape to RealityViews like this? Thanks!
If I understand correctly, a new Enterprise API has been introduced in visionOS 26 that allows fixing windows to the user's frame of reference, implementing something like a "heads-up display", with the window tracking the user's movements.
Is this API only available to enterprise applications, and if so is there a plan to make it available for every kind of app?
Hi, I am trying to update which entities are visible in my RealityView. After the SwiftData set is updated, I have to restart the app for the change to appear in the RealityView.
Also, the RealityView does not close when I move to a different tab. It keeps everything running and tracking, leaving the model in the same location I left it.
import SwiftUI
import RealityKit
import MountainLake
import SwiftData

struct RealityLakeView: View {
    @Environment(\.modelContext) private var context
    @Query private var items: [Item]

    var body: some View {
        RealityView { content in
            print("View Loaded")

            let lakeScene = try? await Entity(named: "Lake", in: mountainLakeBundle)
            let anchor = AnchorEntity(.plane(.horizontal, classification: .any, minimumBounds: SIMD2<Float>(0.2, 0.2)))

            @MainActor func addEntity(name: String) {
                if let lakeEntity = lakeScene?.findEntity(named: name) {
                    // Add the named entity to the anchor
                    anchor.addChild(lakeEntity)
                } else {
                    print(name + " entity not found in the Lake scene.")
                }
            }

            addEntity(name: "Island")
            for item in items {
                if item.enabled {
                    addEntity(name: item.value)
                }
            }

            // Add the horizontal plane anchor to the scene
            content.add(anchor)
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .edgesIgnoringSafeArea(.all)
    }
}

#Preview {
    RealityLakeView()
}
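One direction I'm considering but haven't verified: keep a reference to the loaded scene and rebuild the dynamic entities in RealityView's update closure, which SwiftUI re-runs when observed state such as the @Query result changes. Something like:

@State private var lakeScene: Entity?   // assumed: set once inside the make closure

var body: some View {
    RealityView { content in
        // ... existing one-time setup: load the scene, add the anchor, set the camera ...
    } update: { content in
        // Re-runs whenever `items` changes, so entities can be toggled without restarting.
        guard let anchor = content.entities.first(where: { $0 is AnchorEntity }) as? AnchorEntity,
              let lakeScene else { return }
        for item in items {
            // Toggle visibility instead of adding/removing so entities aren't lost.
            if let entity = anchor.findEntity(named: item.value) {
                entity.isEnabled = item.enabled
            } else if item.enabled, let entity = lakeScene.findEntity(named: item.value) {
                anchor.addChild(entity)
            }
        }
    } placeholder: {
        ProgressView()
    }
}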
Since only the user can take a screenshot using the Apple Vision Pro's top buttons, the only workaround available to an immersive app that needs a screenshot to document the user's creative interior design choices is to:
ask the user to take a screenshot,
wait until the user taps a button indicating the screenshot has been taken,
ask the user to select the screenshot when the app opens the PhotoPicker, and
receive the screenshot handed off to the app once the user presses Done.
One wonders why there is no Apple API for doing this in a simple, privacy-protective way, such as:
When called, the Apple API captures the screenshot in Apple-secured memory.
The API displays the screenshot to the user with appropriate privacy warnings and asks whether the user wants to:
a. share this screenshot with the app,
b. cancel, or
c. retake the screenshot.
If the user approves, the app receives the screenshot.
I have created an AnchorEntity for my index finger tip and then created a model entity (a sphere) as a child of it. This model entity has a collision component and a physics body component. I tried using both dynamic and kinematic modes for the physics body component.
I have created a plane from a cube that has a collision component and a static physics body.
I have subscribed to CollisionEvents.Began on this plane and stored the subscription in an EventSubscription state variable:
@State private var collisionSubscription: EventSubscription?
Then I subscribed as follows:
collisionSubscription = content.subscribe(to: CollisionEvents.Began.self,
                                          on: self.boxTopCollision, { collisionEvent in
    print("something collided with the box top")
})
The collision event fires when I place the sphere directly above the plane and let gravity cause the collision, but when the sphere is a child of the anchor entity, the collision events don't happen.
I tried adding the collision and physics body components directly to the anchor entity, and that doesn't work either.
I created another sphere with a physics body, a collision component and an input target component, and manipulated it with a drag gesture.
While the manipulation is happening, no events fire even when my sphere is touching the plane; but when the gesture ends while the sphere is in contact with the plane, the event does get fired. I am confused as to why this is happening.
All I want is a collider on my finger tip so I can detect collisions with this plane. How can I make this work?
Is there some unstated rule that a physics body which is moved manually cannot trigger collision events?
For more context: I am using SpatialTrackingSession with the .hand tracking configuration, and I am successfully able to track the finger tip.
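For reference, my fingertip setup looks roughly like this, inside the RealityView make closure (simplified from my project; the radius and the exact configuration values are just what I happen to use):

// Simplified version of my setup: hand tracking plus a fingertip collider.
let session = SpatialTrackingSession()
_ = await session.run(SpatialTrackingSession.Configuration(tracking: [.hand]))

let fingertipAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip))
let sphere = ModelEntity(mesh: .generateSphere(radius: 0.005),
                         materials: [SimpleMaterial()])
sphere.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.005)]))
sphere.components.set(PhysicsBodyComponent(massProperties: .default,
                                           material: nil,
                                           mode: .kinematic))
fingertipAnchor.addChild(sphere)
content.add(fingertipAnchor)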
Hi everyone,
I'm creating an educational App that allows doing computational design in an immersive environment with the Vision Pro. The App is free and can be found here:
https://apps.apple.com/us/app/arcade-topology/id6742103633
The problem I have is that the voxel meshes I currently create use ModelEntity, and I recently read that this is horrible for scalability. I already start to see issues when I use thousands of voxels. I also read somewhere that I should take advantage of the GPU and use Metal to that end. I was wondering if someone could point me to a tutorial or article that discusses this. In essence, I need to create a 3D voxel mesh, and those voxels have to update their opacity within an iterative loop.
Thanks!
—Alejandro
Is there any way to detect whether an entity is being looked at in a RealityView? I know it is possible to add a HoverEffectComponent, which will highlight the entity a little when you gaze at it, but there doesn't seem to be any way to call a function from this. There is also no GazeGesture or anything similar.
Hi, I am trying to implement something simple: letting people share their Spatial Photos with others (just like this post). I encountered the same issue as him, but his answer doesn't help me here.
Briefly speaking, I am using CGImageSource to extract a paired leftImage and rightImage from one fetched spatial photo:
let photos = PHAsset.fetchAssets(with: .image, options: nil)
// enumerating photos ...
if asset.mediaSubtypes.contains(PHAssetMediaSubtype.spatialMedia) {
    spatialAsset = asset
}
// other code shown below
I can fetch the left and right images from a native spatial photo (taken by an Apple Vision Pro or iPhone 15+), but it doesn't work on a generated spatial photo (the 2D -> 3D feature in Photos).
// imageCount is 1 when it comes to a generated spatial photo
let imageCount = CGImageSourceGetCount(source)
I searched the net, and someone said the generated version has a depth image instead of a left/right pair. But I still cannot extract any depth image from the image source.
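This is roughly how I tried to read the depth image (my own guess at the right auxiliary data types; I'm not sure it applies to the generated photos):

import ImageIO
import AVFoundation

// Sketch: look for a disparity/depth auxiliary image on the primary image.
func depthData(from source: CGImageSource) -> AVDepthData? {
    let auxTypes = [kCGImageAuxiliaryDataTypeDisparity,
                    kCGImageAuxiliaryDataTypeDepth]
    for auxType in auxTypes {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, auxType) as? [AnyHashable: Any],
           let depth = try? AVDepthData(fromDictionaryRepresentation: info) {
            return depth
        }
    }
    return nil
}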
The full code is below; the image pair extraction stops at "no groups found":
func extractPairedImage(phAsset: PHAsset, completion: @escaping (StereoImagePair?) -> Void) {
    let options = PHImageRequestOptions()
    options.isNetworkAccessAllowed = true
    options.deliveryMode = .highQualityFormat
    options.resizeMode = .none
    options.version = .original

    PHImageManager.default().requestImageDataAndOrientation(for: phAsset, options: options) { imageData, _, _, _ in
        guard let imageData,
              let imageSource = CGImageSourceCreateWithData(imageData as CFData, nil)
        else {
            completion(nil)
            return
        }
        let pair = stereoImagePair(from: imageSource)
        completion(pair)
    }
}
func stereoImagePair(from source: CGImageSource) -> StereoImagePair? {
    guard let properties = CGImageSourceCopyProperties(source, nil) as? [CFString: Any] else {
        return nil
    }
    let imageCount = CGImageSourceGetCount(source)
    print(String(format: "%d images found", imageCount))

    guard let groups = properties[kCGImagePropertyGroups] as? [[CFString: Any]] else {
        /// function returns here
        print("no groups found")
        return nil
    }
    guard
        let stereoGroup = groups.first(where: {
            let groupType = $0[kCGImagePropertyGroupType] as! CFString
            return groupType == kCGImagePropertyGroupTypeStereoPair
        })
    else {
        return nil
    }
    guard let leftIndex = stereoGroup[kCGImagePropertyGroupImageIndexLeft] as? Int,
          let rightIndex = stereoGroup[kCGImagePropertyGroupImageIndexRight] as? Int,
          let leftImage = CGImageSourceCreateImageAtIndex(source, leftIndex, nil),
          let rightImage = CGImageSourceCreateImageAtIndex(source, rightIndex, nil),
          let leftProperties = CGImageSourceCopyPropertiesAtIndex(source, leftIndex, nil),
          let rightProperties = CGImageSourceCopyPropertiesAtIndex(source, rightIndex, nil)
    else {
        return nil
    }
    return (leftImage, rightImage, self.identifier)
}
Any suggestion? Thanks
visionOS 2.4
Hi guys,
I noticed that Apple created a really engaging visual effect for browsing spatial videos in the app. The video appears embedded in a glass panel with glowing edges, and even shows a parallax effect as you move around. When I tried to display the stereo video using RealityView, however, the video entity always floats above the panel.
May I ask how visionOS implements this effect? Is there any approach to achieving it, or example code I could use in my own app?
Thanks!
Hello,
It is not entirely clear to me why, when I try to display my attachment, it is always hidden/covered by my visionOS app window, no matter how I position it. I'm trying to display the attachment one layer above/in front of the window. When my head isn't directed towards the window I can see the attachment, but otherwise it's covered by it.
I appreciate any help!
ContentView.swift
import SwiftUI
import RealityKit
struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    public var body: some View {
        VStack {
            Text("Hello World")
                .font(.largeTitle)
            Button("Start") {
                Task {
                    await openImmersiveSpace(id: "AppSpace")
                }
            }
        }
    }
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
    var loader: EnvironmentLoader

    public var body: some View {
        RealityView { content, attachments in
            content.add(try! await loader.getEntity())

            let headEntity = AnchorEntity(.head)
            content.add(headEntity)

            if let text = attachments.entity(for: "at01") {
                text.position = [0, 0, -0.25]
                headEntity.addChild(text)
            }
        } attachments: {
            Attachment(id: "at01") {
                Text("Hello World!")
                    .font(.extraLargeTitle)
                    .padding()
            }
        }
    }
}
App.swift
import SwiftUI
@main
private struct App: App {
@State var loader = EnvironmentLoader()
public var body: some Scene {
WindowGroup {
ContentView()
}
ImmersiveSpace(id: "AppSpace") {
ImmersiveView(loader: loader)
}
.immersionStyle(selection: .constant(.progressive), in: .progressive)
}
}