Hi,
I am in the process of implementing SharePlay in our app. The shared experience opens an Immersive Space, and we set systemCoordinator.configuration.supportsGroupImmersiveSpace = true.
visionOS then establishes a shared coordinate space for the immersive space.
From the docs:
"To achieve consistent positioning of RealityKit entities across multiple devices in an immersive space during a SharePlay session"
There are cases where we want to position content in front of the user, independent of the shared session and individually for each user. Normally we do that with the transform retrieved via worldTrackingProvider.queryDeviceAnchor(atTimestamp:).originFromAnchorTransform, plus some Z offset and smooth interpolation.
This works fine outside of SharePlay, and the device transform is where I would expect it to be. During the FaceTime call, however, deviceAnchor.originFromAnchorTransform seems to be relative to the shared origin of the immersive space, and I end up with a transform that can be offset.
Here is a video of the issue in action: https://streamable.com/205r2p
The blue rect is placed using AnchorEntity(.head, trackingMode: .continuous). This works regardless of the call: the entity is always placed based on the head position.
The green rect is adjusted on every frame using the transform I get from worldTrackingProvider.queryDeviceAnchor. As you can see, it's offset.
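For reference, here's a minimal sketch of both placements (the entity names and the 0.7 m offset are illustrative, not the exact values from the video):

```swift
import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Blue rect: RealityKit keeps this anchored to the head automatically.
func setUp(root: Entity, blueRect: Entity) async throws {
    let headAnchor = AnchorEntity(.head, trackingMode: .continuous)
    headAnchor.addChild(blueRect)
    root.addChild(headAnchor)
    try await session.run([worldTracking])
}

// Green rect: repositioned every frame from the queried device anchor.
func updatePerFrame(greenRect: Entity) {
    guard let device = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime())
    else { return }
    var transform = Transform(matrix: device.originFromAnchorTransform)
    // Offset the content 0.7 m along the device's forward (-Z) axis.
    transform.translation += transform.rotation.act([0, 0, -0.7])
    greenRect.transform = transform // smoothing/interpolation omitted
}
```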
Is there any way I can query this transform locally for the user during a FaceTime call?
Also, is it possible to disable this automatic entity transform syncing behavior?
Setting entity.synchronization = nil results in the entity not showing up at all.
https://developer.apple.com/documentation/realitykit/synchronizationcomponent
Is SynchronizationComponent only relevant for the legacy MultipeerConnectivity approach?
Thank you!
Group Activities
Integrate your app into FaceTime to share its contents with groups of people.
Hi everyone,
I'm building a visualization app for Apple Vision Pro that uses SharePlay and GroupActivities to explore datasets collaboratively.
I’ve successfully implemented the new SharedWorldAnchor feature, and everything works well with nearby, local participants.
However, I’m stuck on one point:
How can I share a world anchor with remote participants who join via FaceTime as spatial personas?
Apple’s demo app (where multiple users move a plane model around) seems to suggest that this is possible.
For context, I’m building an immersive app with Metal rendering.
Any guidance or examples would be greatly appreciated!
Thanks,
Jens
Hello,
Regarding the GuessTogether source code: it seems the code assumes you're already in a FaceTime call before pressing the custom SharePlay button (labeled "Play Guess Together"). If not already on a FaceTime call, my Apple Vision Pro and the visionOS simulator both do nothing beyond logging warnings. Is this intended behavior?
If so, how do I make it so that pressing the button can also initiate FaceTime calls? Is this allowed?
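For context, I assume the button activates the activity roughly like this (a minimal sketch; GuessTogetherActivity stands in for the sample's real activity type, and whether activation can itself start a call is exactly what I'm unsure about):

```swift
import GroupActivities

struct GuessTogetherActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Guess Together"
        meta.type = .generic
        return meta
    }
}

func startSharePlay() async {
    let activity = GuessTogetherActivity()
    switch await activity.prepareForActivation() {
    case .activationPreferred:
        // If no FaceTime call is active, the system may present UI to set one up.
        _ = try? await activity.activate()
    case .activationDisabled:
        break // e.g. SharePlay is disabled in Settings
    case .cancelled:
        break
    @unknown default:
        break
    }
}
```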
Thank you!
When building a multiplayer Tabletop game, the documentation describes how to attach a custom TabletopNetworkSessionCoordinator, which can be used in addition to TabletopGame.MultiplayerDelegate. But so far we have been unable to create such a custom coordinator, or to get a delegate that works.
Our current setup with our generic GroupActivity works by sending the session to TabletopGame's coordinateWithSession method (as in the current sample project), but we haven't found a way to access and control, for example, the arbiter, seats, and player events, among other features mentioned at https://developer.apple.com/documentation/tabletopkit/tabletopnetworksession.
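Concretely, our hand-off looks roughly like this (a sketch; MyTabletopActivity is a stand-in for our generic activity, and whether TabletopKit also joins the session itself is something to verify):

```swift
import GroupActivities
import TabletopKit

struct MyTabletopActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "My Tabletop Game"
        meta.type = .generic
        return meta
    }
}

func observeSessions(for tabletopGame: TabletopGame) async {
    for await session in MyTabletopActivity.sessions() {
        // Hand the GroupSession over to TabletopKit, as in the sample project.
        tabletopGame.coordinateWithSession(session)
        session.join()
    }
}
```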
Is it correct to expect access to the participants, messenger, or journal without having to maintain a parallel coordinator?
Possibly we are missing something here; any suggestions?
I’m encountering an issue while reading/writing shared preferences using UserDefaults with an App Group in my iOS Message Extension. The following error appears in the console:
Couldn't read values in CFPrefsPlistSource<0x3034e7f80> (Domain: [MyAppGroup], User: kCFPreferencesAnyUser, ByHost: Yes, Container: (null), Contents Need Refresh: Yes): Using kCFPreferencesAnyUser with a container is only allowed for System Containers, detaching from cfprefsd.
I have correctly enabled the App Group in both my containing app and the Message Extension, and I am using UserDefaults(suiteName:) to access shared preferences. However, I keep getting this error when trying to read/write values.
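For reference, the access pattern I'm using (a minimal sketch; "group.com.example.myapp" is a placeholder that must exactly match the App Group identifier in both targets' entitlements):

```swift
import Foundation

// The suite name must be the full App Group identifier, including the
// "group." prefix, and be enabled for both the app and the extension.
let appGroupID = "group.com.example.myapp"

if let shared = UserDefaults(suiteName: appGroupID) {
    shared.set(true, forKey: "sharedFlag")        // write from the extension
    let value = shared.bool(forKey: "sharedFlag") // read back
    print("sharedFlag:", value)
}
```

From what I've seen in other reports, this CFPrefs message can appear even when the group is configured correctly, so it may be log noise rather than the actual failure.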
Has anyone encountered this before? How can I properly configure my app group preferences to avoid this issue? Any help would be greatly appreciated!
Topic: App & System Services / SubTopic: General
Tags: Extensions, Messages, Group Activities, Foundation
Hello Community,
I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage through their persona avatars.
I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial persona anymore, despite the green SharePlay button indicating that the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated at the default seat.
I tried removing the environment I added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but at a totally wrong place. It seems like it isn't just my environment occluding the seats, but a flaw in the seating process as well.
When I tried the FaceTime feature in the simulator on the official test scene (TabletopKit Sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."
So my questions are:
What needs to be changed so the TabletopKit can handle seating correctly?
How can I correctly use immersive scenes in combination with the TabletopKit?
I kept the implementation as close as possible to the TabletopKit example, so I think it should be enough to look at that codebase for now.
I debugged the positions of the seats and they are placed correctly in front of their equipment; the personas are just not placed on them.
I am encountering the following issue while working with app group preferences in my Safari web extension:
Couldn't read values in CFPrefsPlistSource<0x3034e7f80> (Domain: [MyAppGroup], User: kCFPreferencesAnyUser, ByHost: Yes, Container: (null), Contents Need Refresh: Yes): Using kCFPreferencesAnyUser with a container is only allowed for System Containers, detaching from cfprefsd.
I am trying to read/write shared preferences using UserDefaults with an App Group but keep running into this error. Any guidance on how to resolve this would be greatly appreciated!
Has anyone encountered this before? How can I properly configure my app group preferences to avoid this issue?
Topic: Accessibility & Inclusion / SubTopic: General
Tags: Swift Packages, Messages, Xcode, Group Activities
I'm testing Group Activities and have no trouble going iOS<->iOS, or starting an activity on macOS and joining via iOS. However, when I start an activity and then try to join it from another macOS client, the starting side joins the session just fine, but the receiving side acts as if I don't have the required app, even when it is already running.
I see the active SharePlay icon in the menu bar, and the Current Activity is shown, but instead of an "Open" button there is a "MyApp Required" string and a "View" button that goes to the App Store. (Where the app is not available yet, as expected, since I'm still working on it.) There is no GroupSession started on that Mac yet, obviously.
I'm looking for any hints to help debug what is going on. How does Group Activities find the app for the activity on macOS and how can I figure out why it isn't finding mine?
Thanks!
Hi folks,
First - I'm not a developer.
Some time ago I came across Sequoia Beta and installed it thinking it was just a usage data collection thing.
I recently wanted to delete it because I suspected it was causing other issues, but I was only offered the option to stop updates.
My MacBook Pro M2 still says I'm running Sequoia 15.3 (Beta 24D5034f).
I have another, maybe unrelated issue and posted a question about it on Apple Community but the question was deleted and I was pointed here because I mentioned the Beta!
This was my question:
I can FaceTime from the iPhone but when I try to FaceTime from the Mac, I get a notification on the Mac saying:
"iPhone calls not available. Your iPhone is not configured to allow calls using this Mac".
My iPhone is a 14Plus on iOS 18.2.1 (22C161).
I've tried the iPhone and Mac fixes suggested by Apple but no luck there - that was all about turning iMessage and FaceTime off or signing out on both devices, then power cycling both devices.
I don't understand how this is supposed to work.
Could my issue be related to the Beta?
If so, and in any case, how do I remove the Beta?
TIA
I am working on adding synchronized physical properties to EntityEquipment in TabletopKit, allowing seamless coordination between players during GroupActivities sessions.
Setting EntityEquipment's state to DieState is not an option, because it doesn't support custom collision shapes.
I have also tried adding a PhysicsBodyComponent and CollisionComponent to EntityEquipment's Entity. However, the main issue is that the position of the EntityEquipment itself does not synchronize with the Entity's physics body, resulting in two separate instances of one object.
```swift
struct PlayerPawn: EntityEquipment {
    let id: ID
    let entity: Entity
    var initialState: BaseEquipmentState

    init(id: ID, entity: Entity) {
        self.id = id

        // Give the pawn a dynamic physics body with a custom box collision shape.
        let massProperties = PhysicsMassProperties(mass: 1.0)
        let material = PhysicsMaterialResource.generate(friction: 0.5, restitution: 0.5)
        let shape = ShapeResource.generateBox(size: [0.4, 0.2, 0.2])
        let physicsBody = PhysicsBodyComponent(massProperties: massProperties, material: material, mode: .dynamic)
        let collisionComponent = CollisionComponent(shapes: [shape])
        entity.components.set(physicsBody)
        entity.components.set(collisionComponent)
        self.entity = entity

        // Parent the pawn to the table at the origin pose.
        initialState = .init(parentID: .tableID, pose: .init(position: .init(), rotation: .zero), entity: self.entity)
    }
}
```
I’d appreciate any guidance on the recommended approach to adding synchronized physical properties to EntityEquipment.
PLATFORM AND VERSION
Xcode Version 16.2 beta 3 (16C5023f)
macOS 15.1.1 (24B91)
Run-time configuration: iOS 18.0
DESCRIPTION OF PROBLEM
We are currently testing AirDrop functionality by bringing iPhones close to each other.
I am trying to transfer the activityItemsConfiguration set on the modal screen via AirDrop.
However, if presentationStyle is fullScreen it succeeds; otherwise the connection is established but no item is displayed on the screen.
STEPS TO REPRODUCE
1. Open my project.
2. Run on a device.
3. Tap Present with the toggle off.
4. ModalViewController is presented as a sheet.
5. Bring another iPhone closer.
6. The connection animation plays, then it just displays "connected".
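For reference, the configuration is attached in the modal controller roughly like this (a minimal sketch; the shared item is a placeholder):

```swift
import UIKit

final class ModalViewController: UIViewController {
    override func viewDidLoad() {
        super.viewDidLoad()
        // Placeholder item; the real app shares its own content.
        let provider = NSItemProvider(object: "Hello from the sheet" as NSString)
        // The nearby-device AirDrop flow reads this from the responder chain.
        activityItemsConfiguration = UIActivityItemsConfiguration(itemProviders: [provider])
    }
}
```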
Can Apple Vision Pro SharePlay different apps at the same time? How about on iPhone or iPad?
Hi!
I'm planning to make a visionOS multiplayer app for people in the same space (a room). I want to know whether it's possible to use TabletopKit and Group Activities to create an app that becomes multiplayer (synchronized) with the people who are using it as soon as the app is opened, without using SharePlay.
I have been experimenting with some experiences in which I would like to use SharePlay to allow the app to be used by multiple users.
Currently I have achieved sharing a volume containing a Reality Composer Pro scene; the scene contains some entities with an animation.
So far I have been able to correctly share the volume and its content, with the animation playing without problems, but once I activate SharePlay, different users see different moments of the animation, if any animation at all.
Is there a way to synchronize animations between all the users, no matter when someone entered the SharePlay session, aside from communicating the animation time once someone joins?
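For what it's worth, the fallback I mention (communicating the animation time when someone joins) would look roughly like this; AnimationSync is a made-up message type, and seeking via a settable controller.time is an assumption to verify on your RealityKit version:

```swift
import Combine
import Foundation
import GroupActivities
import RealityKit

// Custom message carrying the sender's current animation playback time.
struct AnimationSync: Codable {
    let playbackTime: TimeInterval
}

func synchronizeAnimation<A: GroupActivity>(session: GroupSession<A>,
                                            controller: AnimationPlaybackController) {
    let messenger = GroupSessionMessenger(session: session)

    Task {
        // Seek the local animation whenever a peer reports its time.
        for await (sync, _) in messenger.messages(of: AnimationSync.self) {
            controller.time = sync.playbackTime // assumes `time` is settable
        }
    }

    Task {
        // Bring late joiners up to this device's current playback time.
        var known = session.activeParticipants
        for await current in session.$activeParticipants.values {
            let newcomers = current.subtracting(known)
            known = current
            guard !newcomers.isEmpty else { continue }
            try? await messenger.send(AnimationSync(playbackTime: controller.time),
                                      to: .only(newcomers))
        }
    }
}
```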
Topic: Spatial Computing / SubTopic: Reality Composer Pro
Tags: RealityKit, Group Activities, Reality Composer Pro, visionOS