Hi, I'm currently tinkering with a little SharePlay app for the Vision Pro that lets people FaceTime and use SharePlay to play with random 3D models (as well as move them around, which should sync the model positions for everyone in relative space).
When the users start their FaceTime call and then open the immersive space to see the 3D models, the models load in properly in the context of the group immersive space's coordinate system, and moving the models reflects the new positions in real time for each participant.
The main issue comes when users use the digital crown to re-center their view. It re-centers the model and the view, which is expected. However, it also seems to re-position the model/root entity to match the user's origin. I'm not sure if this is intentional, but it essentially "de-syncs" the model: me moving the model next to someone no longer reflects 1:1 for them (it still moves properly, but the new "initial" position after re-centering leaves it offset).
Is there a potential solution or work-around for this such that re-centering the view doesn't de-sync the model/entity's position?
Rough code for my RealityView component is below:
RealityView { content, attachments in
    content.add(appModel.originEntity)
    appModel.originEntity.addChild(appModel.modelContainerEntity)
    appModel.setInitialModelPosition()
    configureGestures(forModel: appModel.modelContainerEntity)
    configureToolbarAttachment(content: content, attachments: attachments)
} update: { content, _ in
    // I have modified the Apple provided gesture components to
    // send the app model the new positions/rotations
    // as well as broadcast the position/rotation to shareplay participants

    // When user re-centers view, it seems to also re-position the model
    // so that its origin is at the local user's origin, rather than
    // the original origin
    // Can we receive a notification that user has re-centered view?
    // Or some other work-around?
    appModel.modelContainerEntity.setPosition(appModel.modelState.position, relativeTo: nil)
    appModel.modelContainerEntity.setOrientation(.init(appModel.modelState.rotation3d), relativeTo: nil)
} attachments: {
    Attachment(id: "customViewAttachment") {
        CustomView()
    }
}
.installGestures()
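For context, the SharePlay side is wired up roughly like the sketch below (simplified; ModelActivity, ModelState, and the broadcast closure are placeholder names standing in for my actual types, and I'm assuming ModelState is Codable):

import GroupActivities

// Simplified sketch of the sync wiring (placeholder type names).
func configureSync(for session: GroupSession<ModelActivity>, appModel: AppModel) {
    let messenger = GroupSessionMessenger(session: session)

    // Receive state from other participants and apply it in the RealityView update closure.
    Task {
        for await (state, _) in messenger.messages(of: ModelState.self) {
            appModel.modelState = state
        }
    }

    // The modified gesture components call this to broadcast local changes.
    appModel.broadcast = { (state: ModelState) in
        Task { try? await messenger.send(state) }
    }
}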
Please let me know if anything wasn't clear or if more information is needed. Thanks!
I updated to the iOS 18.1 public beta when it was released, and I've had this problem from public beta 4 through the RC; it was never fixed. With the Screen Time limit, when I ask for more time it never gives me more time, it just says "waiting for parent approval." I still have this problem on iOS 18.1 RC. Can you fix it? I'm on an iPhone 11 Pro Max.
I'm writing code using ObjectAnchor for visionOS. If an object is tracked and then becomes not visible (either because the user looked in a different direction, or because the tracked object was occluded by another object), it is still tracked and you get anchor updates (i.e., object permanence).
For my application, it would be very helpful if I could determine if the object is currently being observed, or it is not currently observed and just assumed to be in the same location as seen previously.
ObjectAnchor.isTracked just seems to indicate whether it is getting anchor updates. I don't see anything in the ObjectAnchor or AnchorUpdate that would allow me to determine if the object is currently observed. Does anyone know of a way to do this, or would this be a feature request?
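For reference, this is roughly how I'm consuming the updates today (a simplified sketch; the provider setup and error handling are omitted):

import ARKit

// Consume object anchor updates from an already-running ObjectTrackingProvider.
func processObjectUpdates(_ provider: ObjectTrackingProvider) async {
    for await update in provider.anchorUpdates {
        let anchor = update.anchor
        // isTracked stays true while updates keep arriving, even when the object
        // is occluded or out of view, so it doesn't distinguish "currently observed"
        // from "assumed to still be at the last seen pose".
        print(update.event, anchor.id, anchor.isTracked, anchor.originFromAnchorTransform)
    }
}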
I am working on adding synchronized physical properties to EntityEquipment in TableTopKit, allowing seamless coordination during GroupActivities sessions between players.
Current Approach and Limitations
I have tried setting EntityEquipment's state to DieState and treating it as a TossableRepresentation object. This approach achieves basic physical properties synchronized across players. However, it has several limitations:
No Collision Detection Between Dice: Multiple dice do not collide with each other.
Shape Limitations: Custom shapes, like parallelepipeds, cannot be configured.
Below is my existing code for the base EntityEquipment without physical properties:
struct CubeWithPhysics: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id
        self.entity = entity
        initialState = .init(parentID: .tableID, pose: .init(position: .zero, rotation: .zero), entity: self.entity)
    }
}
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
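One direction I've been considering is attaching plain RealityKit physics components to the equipment's entity, roughly as sketched below (sizes, mass, and material are arbitrary). I don't know whether TableTopKit would synchronize these components across players or fight them, which is part of what I'm asking:

import RealityKit

// Local-only physics on the equipment's entity (arbitrary values; not synchronized by TableTopKit).
func addLocalPhysics(to entity: Entity) {
    // A parallelepiped-like collision shape approximated with a non-uniform box.
    let shape = ShapeResource.generateBox(size: [0.04, 0.02, 0.06])
    entity.components.set(CollisionComponent(shapes: [shape]))
    entity.components.set(PhysicsBodyComponent(shapes: [shape],
                                               mass: 0.05,
                                               material: .default,
                                               mode: .dynamic))
}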
Is it possible to manage the behavior of timelines entirely from code?
I am exploring the Compose interactive 3D content in Reality Composer Pro sample project after seeing the related video, but the example only shows the use of Behaviors from RCP to activate timeline actions.
I was wondering if it is possible to somehow retrieve some kind of timeline controller that gives me access to its information, just as AnimationPlaybackController does for single animations.
What I would like to achieve is being able to play/pause them and retrieve their timestamp in order to allow synchronization between different users over SharePlay.
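In other words, I'm hoping for something like the sketch below. It assumes the timeline ends up exposed as an AnimationResource on the loaded RCP scene's root entity, which I haven't been able to confirm; that assumption is essentially my question:

import RealityKit

// Hoped-for approach (unverified assumption: the RCP timeline appears among availableAnimations).
func playTimelineIfAvailable(on sceneRoot: Entity) -> AnimationPlaybackController? {
    guard let timeline = sceneRoot.availableAnimations.first else { return nil }
    let controller = sceneRoot.playAnimation(timeline, startsPaused: true)
    controller.resume()     // play
    // controller.pause()   // pause
    // controller.time      // current timestamp, to broadcast over SharePlay
    return controller
}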
Regarding the Apple Vision Pro, is it possible to rent or lease a device from Apple?
Background:
I'm a fresh visionOS dev.
I have a low budget.
I'm living in Japan. That means Vision Pro is roughly 150% the USD price.
Regards
I created an Object & Hand Tracking app based on the sample code released here by Apple.
https://developer.apple.com/documentation/visionos/exploring_object_tracking_with_arkit
The app worked great and everything was fine, but I realized I was coding on Xcode 16 beta 3, so I installed the latest Xcode 16 from the App Store and tested my app there, and it completely crashed. No idea why. Here is the console output:
dyld[1457]: Symbol not found: _$ss13withTaskGroup2of9returning9isolation4bodyq_xm_q_mScA_pSgYiq_ScGyxGzYaXEtYas8SendableRzr0_lF
Referenced from: <3AF14FE4-0A5F-381C-9FC5-E2520728FC65> /private/var/containers/Bundle/Application/F74E88F2-874F-4AF4-9D9A-0EFB51C9B1BD/Hand Tracking.app/Hand Tracking.debug.dylib
Expected in: <2F158065-9DC8-33D2-A4BF-CF0C8A32131B> /usr/lib/swift/libswift_Concurrency.dylib
It was working perfectly fine on Xcode 16 beta 3, which makes me think it's an Xcode 16 issue, but I have no idea how to fix it. I also installed the Xcode 16.2 beta (the newest beta), but I get the same error.
Please help if anyone knows what is wrong!
As the title states, this severely limits the flexibility of multi-window applications in creating a good user experience.
Even effects like the ones shown below cannot be achieved.
I'm asking myself what the limits of RealityView are. For example, is it possible to place an entity at position (x = 800 m, y = 0, z = -900 m)? What happens if I walk from my (0, 0, 0) to that point? Will I see the entity then? Does someone know where the limits are?
As mentioned in https://forums.developer.apple.com/forums/thread/756736?answerId=810096022#810096022
Is there any update on full support for the WebXR AR Module, which should enable immersive-ar mode?
Are features such as DOM overlays and WebGPU bindings on the roadmap?
Is it possible to capture stereoscopic video, either internally or externally or via AirPlay, for debugging purposes?
Thanks
I have an app with a visionOS target, and I want to add an iOS target. Both are based on RealityKit.
I want to use a SpatialTapGesture to get the tap coordinate local to the entity tapped.
In visionOS this is easy:
SpatialTapGesture(coordinateSpace: .local)
    .targetedToAnyEntity()
    .onEnded { tap in
        let entity = tap.entity
        let localPoint3D = tap.convert(tap.location3D, from: .local, to: entity)
        // …
    }
However, according to the docs, the convert function seems to exist only in visionOS, not in iOS.
So how can I do this conversion in iOS?
PS: This was already posted on StackOverflow without success. There, I tried to find a workaround, but I failed.
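One direction I've been considering for the iOS target, sketched below under the assumption that it uses an ARView with collision shapes on the entities (rather than SwiftUI gestures), is to raycast through the 2D tap location and convert the world-space hit point into the entity's local space. I don't know whether this is the recommended approach:

import RealityKit
import UIKit

// Given a screen-space tap in an ARView, return the hit entity and the tap point in its local space.
func localTapPoint(in arView: ARView, at screenPoint: CGPoint) -> (entity: Entity, localPoint: SIMD3<Float>)? {
    // Build a ray from the camera through the 2D tap location.
    guard let ray = arView.ray(through: screenPoint) else { return nil }

    // Cast against collision shapes in the scene and take the nearest hit.
    guard let hit = arView.scene.raycast(origin: ray.origin,
                                         direction: ray.direction,
                                         length: 100,
                                         query: .nearest).first else { return nil }

    // hit.position is in world space; convert it into the hit entity's local space.
    let localPoint = hit.entity.convert(position: hit.position, from: nil)
    return (hit.entity, localPoint)
}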
Hi, I am trying to stream spatial video in realtime from my iPhone 16.
I am able to record spatial video as a file output using:
let videoDeviceOutput = AVCaptureMovieFileOutput()
However, when I try to grab the raw sample buffer, it doesn't include any spatial information:
let captureOutput = AVCaptureVideoDataOutput()

// when init camera
session.addOutput(captureOutput)
captureOutput.setSampleBufferDelegate(self, queue: sessionQueue)

// finally
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    // use sample buffer (but no spatial data available here)
}
Is this how it's supposed to work, or am I missing something?
This video: https://developer.apple.com/videos/play/wwdc2023/10071 gives a clue towards setting up spatial streaming, and I've got the backend all ready for 3D HLS streaming. Now I'm only stuck on how to send the video stream to my server.
Hello Developers,
I am currently in the initial planning stages of my bachelor thesis in computer science, where I will be developing an application in collaboration with a manufacturer of large-scale machinery. One of the core features I aim to implement is the ability for multiple Apple Vision Pro users to view the same object in augmented reality simultaneously, each from their respective positions relative to the object.
I am still exploring how best to achieve this feature. My initial approach involves designating one device as the host of a "room" within the application, allowing other users to join. If I can accurately determine the relative positions of all users to the host device, it should be possible to display the AR content correctly in terms of angle, size, and location for each user.
Despite my research, I haven't found much information on similar projects, and I would appreciate any insights or suggestions. Specifically, I am curious about common approaches for synchronizing AR experiences across multiple devices. Given that the Apple Vision Pro does not have a GPS sensor, I am also looking for alternative methods to precisely determine the positions of multiple devices relative to each other.
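If the host/join idea ends up built on GroupActivities, the starting point I have in mind is something like the sketch below (the activity name, identifier, and metadata are placeholders for illustration; establishing a shared spatial reference between devices is the part I'm still unsure about):

import GroupActivities

// Placeholder activity describing the shared "room".
struct MachineryViewingActivity: GroupActivity {
    static let activityIdentifier = "com.example.machinery-viewing"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "View Machinery Together"
        metadata.type = .generic
        return metadata
    }
}

// Once participants join the resulting GroupSession, a GroupSessionMessenger could
// broadcast each device's pose relative to whatever shared anchor we settle on.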
Any advice or shared experiences would be greatly appreciated!
Best regards,
Revin
I found that the system app called "Messages" has a tap-to-expand effect that expands horizontally and smoothly, which looks good. I wonder if I can add a similar effect to my own app. If it's possible, are there any implementation ideas or examples I can refer to? Thanks!
On visionOS, I have discovered that if dismissWindow is followed immediately by a call to openWindow, the new window does not open where the user is looking. Instead, it appears at the same location as the dismissed window. However, if I open the new window after a small delay, or after UIScene's willDeactivateNotification, the new window correctly opens in front of the user. (I tested this within an open immersive space.)
Does this imply that dismissWindow is actually asynchronous, in the sense that it requires extra time to reset certain internal states before the next openWindow can be called? What is the best practice to close a window, then open a new window in front of the user's current head position?
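For reference, the delay workaround I'm currently using looks roughly like this sketch (the window IDs and the 500 ms delay are arbitrary placeholders):

import SwiftUI

struct SwapWindowButton: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Button("Swap Windows") {
            dismissWindow(id: "oldWindow")
            // Wait briefly so the dismissal settles before the new window is requested;
            // otherwise the new window reuses the old window's placement.
            Task {
                try? await Task.sleep(for: .milliseconds(500))
                openWindow(id: "newWindow")
            }
        }
    }
}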
I am trying to achieve an effect such that the particles of a particle system are attracted to my hand entity. The hand entity is essentially an AnchorEntity that is tracking my right hand.
let particleEmitterEntities = context.entities(matching: particleEmitterQuery, updatingSystemWhen: .rendering)
for particleEmitterEntity in particleEmitterEntities {
    if var particleEmitter = particleEmitterEntity.components[ParticleEmitterComponent.self] {
        particleEmitter.mainEmitter.attractionCenter = rightHandEntity.position(relativeTo: nil)
        // trying to get the world space position of the hand
        // I also tried relative to particleEmitterEntity
        particleEmitterEntity.components[ParticleEmitterComponent.self] = particleEmitter
    } else {
        fatalError("Cannot find particle emitter")
    }
}
The particle attraction center doesn't seem to update.
Another issue I'm noticing: my particle system often doesn't show the particle image and just renders a placeholder square when I do this; when I comment this code out, I get the right particle image. I believe this is due to the number of times this loop runs to update the position of the attraction center.
What is the right way to do an effect where the particles are attracted to my hand?
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we had reports from some users, so far at least on Max-model phones (iPhone 14 and 15 Pro Max). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users the controls along the top of the player, the X (close), the AR | Object toggle, and the share button, are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox etc on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app. So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and minis, and they are not experiencing the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items be moving if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected user's iPhone 15 Pro Max.
Can we access a Vision Pro's spatial persona in an application's view without using SharePlay or a Group Activity, like any other 3D avatar?
I want to use that persona in the app without live rendering; I just want to pass some voice commands, as if the avatar were speaking.
Hi,
since RealityKit 4 now supports blend shapes, I was wondering if there are any workflow or tooling recommendations to bake/export them into a USDZ.
Are Blender or Cinema 4D capable of doing that out of the box? Or should we look into NVIDIA Omniverse (https://docs.omniverse.nvidia.com/connect/latest/blender/manual.htm)?
So far this topic seems very sparsely documented and I would appreciate any hints. Thank you!
In the Discover RealityKit APIs for iOS, macOS, and visionOS presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?