I am trying to attach a button to the user's left hand; the hand position is tracked.
The button stays above the user's left hand, but it doesn't face the user, and it doesn't even face where the wrist is pointing. This is the main code snippet:
if model.editWindowAdded {
    let originalMatrix = model.originFromWristLeft   // wrist pose in the origin's space
    let theAttachment = attachments.entity(for: "sample")!
    entityDummy.addChild(theAttachment)
    // Hard-coded orientation; only the imaginary y component comes from the entity itself.
    let testRotValue = simd_quatf(real: 0.9906431,
                                  imag: SIMD3<Float>(-0.028681312, entityDummy.orientation.imag.y, 0.025926698))
    entityDummy.orientation = testRotValue
    theAttachment.position = [0, 0.1, 0]
    // Re-applies a fixed quaternion every 0.1 s; the tracked matrix is only printed, not used.
    let timer = Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { _ in
        let originalMatrix = model.originFromWristLeft
        print(originalMatrix.columns.0.y)
        let testRotValue = simd_quatf(real: 0.9906431,
                                      imag: SIMD3<Float>(-0.028681312, 0.1, 0.025926698))
        entityDummy.orientation = testRotValue
    }
}
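For comparison, here is a rough sketch of what I would expect to need instead: deriving the rotation from the tracked wrist matrix rather than hard-coding a quaternion. This is untested and assumes originFromWristLeft is a simd_float4x4 giving the wrist pose in the origin's coordinate space; headPosition is a placeholder for wherever the headset position would come from.

// Untested sketch: take the rotation from the tracked wrist transform.
let wrist = Transform(matrix: model.originFromWristLeft)
entityDummy.transform.translation = wrist.translation
entityDummy.transform.rotation = wrist.rotation
// Or, to make the button face a point (headPosition is hypothetical):
// entityDummy.look(at: headPosition, from: wrist.translation, relativeTo: nil)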
Here is what I have discovered:
When window A pushes window B, and B's onAppear dismisses A by its id, A will not reappear when B later dismisses itself, unless B explicitly calls openWindow/pushWindow(id: A).
However, if I then open an immersive space from A and dismiss it, several copies of B appear, one for each time the process above was repeated.
Admittedly, it makes little sense to dismiss A in onAppear when we want to reuse it later, but is this behavior expected?
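For reference, a minimal sketch of the pattern described above (window ids and view names are placeholders):

import SwiftUI

// Window B dismisses A from onAppear; A then won't reappear on its own.
struct WindowBView: View {
    @Environment(\.dismissWindow) private var dismissWindow
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        Text("Window B")
            .onAppear {
                dismissWindow(id: "A")
            }
        // A only comes back if something later calls openWindow(id: "A").
    }
}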
In Apple Maps, some areas have a very realistic, real-life 3D map. I want to display that 3D content in visionOS (like Model3D). How can I do this?
Note: I am not asking for a 3D-like effect on a flat screen as in iOS, but for displaying the USDZ model itself in visionOS.
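For context, this is the kind of USDZ display the question refers to. Model3D loads a model bundled with the app; as far as I know there is no public API that exposes Apple Maps' photorealistic city meshes as USDZ, so the asset name below is a placeholder.

import SwiftUI
import RealityKit

struct MapModelView: View {
    var body: some View {
        // "CityBlock" is a placeholder asset bundled with the app.
        Model3D(named: "CityBlock") { model in
            model
                .resizable()
                .aspectRatio(contentMode: .fit)
        } placeholder: {
            ProgressView()
        }
    }
}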
Hello,
is it possible to take a screenshot of the whole immersive view, including or excluding SwiftUI components? ARView has a snapshot method for this, but there seems to be no equivalent for RealityView.
I've tried using ImageRenderer on a parent of RealityView, but so far I only get a plain white bitmap.
Thanks in advance,
Rlu
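For reference, the ARView API mentioned above looks like this; no documented RealityView counterpart seems to exist:

// ARView (iOS/macOS) offers a snapshot callback; RealityView on visionOS does not.
arView.snapshot(saveToHDR: false) { image in
    // image is an optional platform image (UIImage on iOS)
}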
I have an application built with Flutter, which can run on visionOS as "Designed for iPad," and I would like to be able to enter mixed reality from inside this application. What I have tried so far is embedding my visionOS project inside the Swift application that Flutter generates, but Xcode gave me an error saying this is not possible. Is there another way I could achieve my goal?
This is a visionOS app. I added a contextMenu to a composed view, but when I long-press the view, nothing happens. I tried the same contextMenu on other views, where it works normally, so I think something is wrong with this composed view, but I don't know what the problem is. I hope you can point me in the right direction. Thank you!
Views with problems:
struct NAMEView: View {
    @StateObject private var placeStore = PlaceStore()

    var body: some View {
        ZStack {
            Group {
                HStack(spacing: 2) {
                    Image(systemName: "mappin.circle.fill")
                        .font(.system(size: 50))
                        .symbolRenderingMode(.multicolor)
                        .accessibilityLabel("your location")
                        .accessibilityAddTraits([.isHeader])
                        .padding(.leading, 5.5)
                    VStack {
                        Text("\(placeStore.locationName)")
                            .font(.title3)
                            .accessibilityLabel(placeStore.locationName)
                        Text("You are here in App")
                            .font(.system(size: 13))
                            .foregroundColor(.secondary)
                            .accessibilityLabel("You are here in App")
                    }
                    .hoverEffect { effect, isActive, _ in
                        effect.opacity(isActive ? 1 : 0)
                    }
                    .padding()
                }
            }
            .onAppear {
                placeStore.updateLocationName()
            }
            .glassBackgroundEffect()
            .hoverEffect { effect, isActive, proxy in
                effect.clipShape(.capsule.size(
                    width: isActive ? proxy.size.width : proxy.size.height,
                    height: proxy.size.height,
                    anchor: .leading
                ))
                .scaleEffect(isActive ? 1.05 : 1.0)
            }
        }
    }
}
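For comparison, the bare attachment that works on the other views is just this (sketch; SomeOtherView and the menu items are placeholders):

// Minimal contextMenu attachment that responds to long-press elsewhere.
SomeOtherView()
    .contextMenu {
        Button("Copy") { /* ... */ }
        Button("Delete", role: .destructive) { /* ... */ }
    }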
In RealityKit on visionOS, I scan the room and use the resulting mesh to create occlusion and physical boundaries. That works well, and I can place cubes (with physics enabled) onto it too.
However, I also want to update the mesh with versions from new scans, and that makes all my cubes jump.
Is there a way to prevent this? I get that the inaccuracies will produce a slightly different mesh each time, and I don't want to anchor the objects, so my guess is that I need to somehow determine a fixed floor height and alter the scanned meshes so they adhere to that fixed height.
Any thoughts or ideas appreciated
/Andreas
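Along the lines of that fixed-floor idea, here is an untested sketch. It assumes mesh updates arrive as MeshAnchor values and that the first scan's height is good enough to pin later scans to; updateMeshEntity is a hypothetical helper, and pinning the anchor origin's y is a simplification of the idea.

// Hypothetical helper: pin every rescanned mesh to the height captured
// from the first scan, so resting physics bodies don't jump on updates.
var fixedFloorY: Float?

func updateMeshEntity(_ entity: Entity, from meshAnchor: MeshAnchor) {
    var transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
    if fixedFloorY == nil {
        fixedFloorY = transform.translation.y   // remember the first scan's height
    }
    transform.translation.y = fixedFloorY!      // override the re-estimated height
    entity.transform = transform
}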
Hi,
My goal is to obtain the device location (6 DoF) of the Apple Vision Pro, and I found a function that might satisfy my need:
final func queryDeviceAnchor(atTimestamp timestamp: TimeInterval) -> DeviceAnchor?
which returns a device anchor (containing the position and orientation of the headset).
However, I couldn't find any documentation specifying where exactly the device anchor is located on the headset.
Is it at the midpoint between the user's eyes, or at the centroid of the six world-facing tracking cameras?
It would be really helpful if someone could provide a local transformation matrix (similar to a camera extrinsic) from a visible rigid component (say, the Digital Crown, the top button, or the LiDAR scanner) to the device anchor.
Thanks.
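For context, a minimal sketch of how the query is typically called (inside an async context, with the session started once beforehand):

import ARKit
import QuartzCore

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Start world tracking once, then query the device anchor on demand.
try await session.run([worldTracking])

if let anchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) {
    let originFromDevice = anchor.originFromAnchorTransform   // 6-DoF pose in world space
}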
Hi, I'm working on visionOS and have found that I can't get the onDisappear event for the first window opened at app launch. It looks like this:
WindowGroup(id:"WindowA"){
MyView()
.onDisappear(){
print("WindowA disappear")
}
}
WindowGroup(id:"WindowB"){
MyView()
.onDisappear(){
print("WindowB disappear")
}
}
WindowGroup(id:"WindowC"){
MyView()
.onDisappear(){
print("WindowC disappear")
}
}
When the app first launches, it opens WindowA automatically.
I then open WindowB and WindowC programmatically.
Then I tap the close button in the window bar below each window.
If I close WindowB or WindowC, I receive the onDisappear event.
If I close WindowA, I don't receive the onDisappear event.
If I reopen WindowA after it has been closed and then close it again with the close button, I do receive the onDisappear event.
Is there some different logic for the first window at app launch? How can I get the onDisappear event for it?
I'm using Xcode 16 beta 2
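One workaround worth trying, sketched below: observe the scene phase instead of relying on onDisappear, on the assumption that the phase change fires even for the initial window (not confirmed behavior).

// Sketch: react to the window leaving the foreground via scenePhase.
struct MyView: View {
    @Environment(\.scenePhase) private var scenePhase

    var body: some View {
        Text("Window A")
            .onChange(of: scenePhase) { _, newPhase in
                if newPhase == .background {
                    print("WindowA left the foreground")
                }
            }
    }
}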
Platform and Version
Development Environment: Xcode 16 Beta 3
visionOS 2 Beta 3
Description of Problem
I am currently working on integrating SharePlay into my visionOS 2 application. The application features a fully immersive space where users can interact. However, I have encountered an issue during testing on TestFlight.
When a user taps a button to activate SharePlay via the GroupActivity's activate() method within the immersive space, the immersive space visually disappears but is not properly dismissed. Instead, it can be made to reappear by turning the Digital Crown. Unfortunately, when it reappears, it overlaps with the built-in OS immersive space, resulting in a mixed and confusing user interface. This is particularly concerning because the immersive space is not progressive and should not respond to the Digital Crown at all.
It is important to note that this problem is only present when testing the app via TestFlight. When the same build is compiled with the Release configuration and run directly through Xcode, the immersive space behaves as expected, and the issue does not occur.
Steps to Reproduce
Build a project that includes a fully immersive space and incorporates GroupActivity support.
Add a button within a window or through a RealityView attachment that triggers the GroupActivity's activate() method.
Upload the build to TestFlight.
Connect to a FaceTime call.
Open the app, enter an immersive space, then press the button to activate the Group Activity.
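For reference, the activation call in the last step is roughly the following; the activity type and its metadata are placeholders:

import GroupActivities
import SwiftUI

// Placeholder activity type for the immersive session.
struct MyImmersiveActivity: GroupActivity {
    var metadata: GroupActivityMetadata {
        var meta = GroupActivityMetadata()
        meta.title = "Immersive Session"
        meta.type = .generic
        return meta
    }
}

// The button inside the immersive space (or a RealityView attachment).
Button("Start SharePlay") {
    Task {
        _ = try? await MyImmersiveActivity().activate()
    }
}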
https://developer.apple.com/documentation/avfoundation/media_reading_and_writing/converting_side-by-side_3d_video_to_multiview_hevc_and_spatial_video
While using this sample project to convert side-by-side video into spatial MV-HEVC video, the output is not recognized as spatial video on visionOS 2.0 beta 3.
I want to create an xcframework for this repo: https://github.com/BradLarson/GPUImage
but failed.
1. I downloaded the repo and ran the following:
xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS" \
  -archivePath "archives/GPUImage"

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=iOS Simulator" \
  -archivePath "archivessimulator/GPUImage"

xcodebuild -create-xcframework \
  -archive archives/GPUImage.xcarchive -framework GPUImage.framework \
  -archive archivessimulator/GPUImage.xcarchive -framework GPUImage.framework \
  -output xcframeworks/GPUImage.xcframework
This produces the errors: 'cryptexDiskImage' is an unknown content type
and 'com.apple.platform.xros' is an unknown platform identifier.
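For completeness, if a visionOS slice is also the goal, the destination string in recent Xcode versions would presumably be the following (an assumption on my part; it requires an Xcode release that ships the visionOS SDK):

xcodebuild archive \
  -project GPUImage.xcodeproj \
  -scheme GPUImage \
  -destination "generic/platform=visionOS" \
  -archivePath "archivesvisionos/GPUImage"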
I've been having some networking issues since updating to Sequoia and AVP 2.0. This problem has existed on all betas released so far up to Beta 3.
My session will connect, though it occasionally fails. I often get a frozen screen on the Vision Pro, which toggles between that and a grey connection-issue screen, and it frequently ends by disconnecting with a miscellaneous error -455.
I've tried logging out and back in of my Apple ID on both devices. I've tried creating a new Mac user. I've even changed the network Wifi to match and to differ. Nothing seems to remedy this issue.
This was working under Sonoma. And continues to work from the AVP b3 to Sonoma on a different Mac.
I have reported it via Feedback Assistant (FB13888947). I am curious whether anyone else is seeing this and whether the symptoms are the same. Obviously, if you are seeing it, please report feedback as well so it gets traction on the back end.
Thanks much.
I want to automate tests for my iOS app and start writing UITests.
Sometimes system alerts appear, and my tests have to simulate button tapping.
In iOS and iPadOS these alerts are available via the system Springboard application:
let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
let cancelButton = springboard.alerts.buttons["Cancel"].firstMatch
if cancelButton.exists {
    cancelButton.tap()   // cancel button is tapped, and the test continues
}
But when I launch my test in the Vision Pro simulator, the springboard is not available:
let springboard = XCUIApplication(bundleIdentifier: "com.apple.springboard")
print(springboard.exists) // <-- "false" will be printed, springboard does not exist
It means that I can't automate button tapping in system alerts.
So, my question is:
How can I access system alerts in visionOS and tap their buttons from UITests?
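One alternative that may be worth trying is XCTest's interruption monitor, which handles alerts without going through Springboard (whether it works on visionOS is untested here); a sketch:

// Register a handler that runs whenever a system alert interrupts the test.
addUIInterruptionMonitor(withDescription: "System alert") { alert in
    let cancel = alert.buttons["Cancel"]
    if cancel.exists {
        cancel.tap()
        return true    // alert handled
    }
    return false
}
app.tap()   // any interaction triggers pending interruption monitors ("app" is the XCUIApplication under test)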
I am building a Vision Pro project in Unity and tried to add GroupActivity to it.
But I found that the visionOS simulator project generated by Unity has no items under Signing & Capabilities; the Signing configuration is not displayed either.
Even when I try "+ Capability", it says "Capabilities are not supported for Project-Name".
Thanks for any help.
Hey, is there a way to create a good ground-shadow shader? I'm using a ground plane with an unlit material and I can't get the ground shadow to work properly. With a PBR material it works better, but I can barely see the shadow, and I want more control over its intensity.
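For context, a minimal sketch of the setup in question, assuming a RealityKit unlit ground plane and the built-in grounding shadow on the model ("model" stands in for the shadow-casting entity; the component exposes no intensity parameter, hence the question):

// Unlit ground plane, on which the shadow is hard to see.
let ground = ModelEntity(
    mesh: .generatePlane(width: 2, depth: 2),
    materials: [UnlitMaterial(color: .white)]
)
// Built-in per-entity shadow toggle; no exposed intensity control.
model.components.set(GroundingShadowComponent(castsShadow: true))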
Hey, I need help achieving realistic fog and clouds in immersive spaces. 3D planes with transparent fog/cloud textures work, but they cause issues when many of them overlap. I also can't get a good result with particles.
Thanks in advance!
I have an Enterprise Developer Account and the managed entitlement (com.apple.developer.arkit.barcode-detection.allow).
I'm using the code from WWDC24's spatial barcode & QR code scanning example.
When I run my project, my BarcodeDetectionProvider is created fine, but the loop at (for await anchorUpdate in barcodeDetection.anchorUpdates) exits immediately. I've tried calling it several times, but it's useless.
For example, I call this startBarcodeScanning function from ContentView:
var barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
var arkitSession = ARKitSession()

public func startBarcodeScanning() {
    Task {
        barcodeDetection = BarcodeDetectionProvider(symbologies: [.code39, .qr, .upce])
        await arkitSession.queryAuthorization(for: [.worldSensing])

        do {
            try await arkitSession.run([barcodeDetection])
            print("arkitSession.run([barcodeDetection])")
        } catch {
            return
        }

        for await anchorUpdate in barcodeDetection.anchorUpdates {
            switch anchorUpdate.event {
            case .added:
                print("addEntity(myAnchor: anchorUpdate.anchor)")
                addEntity(myAnchor: anchorUpdate.anchor)
            case .updated:
                print("UpdateEntity")
                updateEntity(myAnchor: anchorUpdate.anchor)
            case .removed:
                print("RemoveEntity")
                removeEntity()
            }
        }
        // await loadInfo()
    }
}
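One detail that may matter here: queryAuthorization(for:) only reads the current authorization status and never prompts the user. Requesting authorization explicitly looks like this (sketch):

// requestAuthorization(for:) prompts the user if needed and returns the results.
let results = await arkitSession.requestAuthorization(for: [.worldSensing])
if results[.worldSensing] != .allowed {
    print("World sensing not authorized; anchorUpdates may never deliver anything.")
}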
Based on information online, I'm under the impression that we can add spatial audio to USDZ files using Reality Composer Pro; however, I've been unable to hear this audio outside of the preview audio in the scene inspector. Attached is a screenshot of how I've laid out the scene.
I see the 3D object fine on mobile and Vision Pro, but can't get the audio to loop. I have made sure the audio file in the scene is linked as the resource for the spatial audio node. Am I setting this up wrong, is it broken, or is this simply not a feature that can be saved back to USDZ? In the following link they note their USDZ could "play an audio track while viewing the model", but the model isn't there anymore.
Can someone confirm where I might be going wrong, please?
Hello,
I'm experimenting with the PortalComponent and clipping behaviors. My belief was that, given some arbitrary plane mesh, I could have the entire contents of a single world entity that has a PortalCrossingComponent clipped to the boundaries of that plane mesh.
Instead, what I seem to be experiencing is that the mesh in the portal's target world actually displays outside the plane's boundaries.
I've attached a video that shows the contents of my world escaping the portal's clipping/transition plane. It also shows that when I move below a certain threshold in the scene, I can see what appears to be the "clipped" world (there, the dimensions of the clipping plane are obvious), but when I move above a certain level, the world contents "escape" the clipping behavior.
https://scale-assembly-dropbox.s3.amazonaws.com/clipping.mov
(I would have made the above a link, but it is not a permitted domain; you can follow it to see the behavior.)
It almost seems as if anything with a PortalCrossingComponent is allowed to appear in the PortalComponent's parent scene, rather than being clipped by the PortalComponent's boundary.
For reference, the code I'm using is almost identical to the sample code in this document:
https://developer.apple.com/documentation/realitykit/portalcomponent
with the caveat that I'm using a plane with .positiveY clipping and portal-crossing behaviors; the clipping plane mesh is as seen in the video.
Do I misunderstand how PortalComponent is meant to be used? Or is there a bug in how it currently behaves?
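For reference, the minimal portal setup from that document looks roughly like this (a sketch; the newer clipping/crossing plane configuration is deliberately omitted, since that is exactly the part in question):

// Target world whose contents should only be visible through the portal.
let world = Entity()
world.components.set(WorldComponent())

// Plane that acts as the portal surface.
let portalPlane = ModelEntity(
    mesh: .generatePlane(width: 1, height: 1),
    materials: [PortalMaterial()]
)
portalPlane.components.set(PortalComponent(target: world))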