HoverEffectComponent on macOS 15 and iOS 18 works fine using RealityView, but seems to be ignored when ARView (even with a SwiftUI UIViewRepresentable) is used.
Feedback ID: FB15080805
I want to see the Vision Pro camera view in my application window. I adapted some sample code from Apple, but I'm stuck at the CVPixelBuffer stage. How do I convert the pixel buffer into a video frame?
Button("Camera Feed") {
    Task {
        if #available(visionOS 2.0, *) {
            let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
            let cameraFrameProvider = CameraFrameProvider()
            let arKitSession = ARKitSession()
            var pixelBuffer: CVPixelBuffer?
            await arKitSession.queryAuthorization(for: [.cameraAccess])
            do {
                try await arKitSession.run([cameraFrameProvider])
            } catch {
                return
            }
            guard let cameraFrameUpdates =
                cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
                return
            }
            for await cameraFrame in cameraFrameUpdates {
                guard let mainCameraSample = cameraFrame.sample(for: .left) else {
                    continue
                }
                print(mainCameraSample.pixelBuffer)
                // self.pixelBuffer = mainCameraSample.pixelBuffer
            }
        } else {
            // Fallback on earlier versions
        }
    }
}
I want to convert "mainCameraSample.pixelBuffer" into a video. Could you please guide me?
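One common way to turn a stream of CVPixelBuffers into a video file is AVAssetWriter. The sketch below is illustrative, not official sample code; the output path, frame rate, and frame size are assumptions you would adapt to your camera format:

```swift
import AVFoundation
import CoreVideo

// Minimal sketch: append CVPixelBuffers to an AVAssetWriter to produce a .mov.
// Call append(_:) for every frame from the camera loop, then finish().
final class PixelBufferRecorder {
    private let writer: AVAssetWriter
    private let input: AVAssetWriterInput
    private let adaptor: AVAssetWriterInputPixelBufferAdaptor
    private var frameCount: Int64 = 0
    private let fps: Int32 = 30   // assumed frame rate

    init(outputURL: URL, width: Int, height: Int) throws {
        writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecType.hevc,
            AVVideoWidthKey: width,
            AVVideoHeightKey: height
        ]
        input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
        input.expectsMediaDataInRealTime = true
        adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
        writer.add(input)
        writer.startWriting()
        writer.startSession(atSourceTime: .zero)
    }

    func append(_ pixelBuffer: CVPixelBuffer) {
        guard input.isReadyForMoreMediaData else { return }
        let time = CMTime(value: frameCount, timescale: fps)
        adaptor.append(pixelBuffer, withPresentationTime: time)
        frameCount += 1
    }

    func finish() async {
        input.markAsFinished()
        await writer.finishWriting()
    }
}
```

In your `for await cameraFrame in cameraFrameUpdates` loop you would call `recorder.append(mainCameraSample.pixelBuffer)` instead of printing the buffer.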
I am working on a small side project for the Apple Vision Pro. One thing I'm trying to figure out is whether I can open another app while having the immersive space open from my original app. As an example, I want to present a fully immersive view displaying a 360-degree photo. I then want to allow the user to open up Safari or any other app of their choice and use the immersive environment as a background. Is this possible? Everything I've read so far seems to say no, but I wasn't sure if someone found out how to make this possible.
In visionOS 2 beta, I have a character loaded from a Reality Composer Pro scene standing on the floor, but he isn't casting a shadow on the floor.
I added a GroundingShadowComponent in RealityView, and he does cast shadows on himself (e.g., his hands cast shadows on his shoes), but I don't see any shadow on the floor.
Do I need to enable something to have my character cast a shadow on the real-world floor?
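One thing worth checking (this is a common pitfall, not a confirmed diagnosis): GroundingShadowComponent is not inherited by child entities, so it has to be set on every ModelEntity in the character's hierarchy, not just on the root. A minimal sketch, where `character` stands in for the entity loaded from the Reality Composer Pro scene:

```swift
import RealityKit

// Sketch: walk the hierarchy and attach GroundingShadowComponent to every
// ModelEntity, since the component does not propagate to children.
func applyGroundingShadow(to entity: Entity) {
    if entity is ModelEntity {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        applyGroundingShadow(to: child)
    }
}

// Usage, assuming `character` is the loaded scene's root:
// applyGroundingShadow(to: character)
```

Whether the shadow actually lands on the real-world floor can also depend on the session having scene understanding of that surface, so this may not be the whole story.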
The RoomPlan API makes it possible to serialize and de-serialize CapturedRoom objects. This opens up the possibility to modify a CapturedRoom (e.g. deleting surfaces/objects) in a de-serialized state and serialize it as a new CapturedRoom. All modified attributes are loaded accordingly, so far so good.
My problem starts with the StructureBuilder and its merge function capturedStructure().
This function ignores any modifications to attributes of a CapturedRoom. The only data that is considered is encoded in the CoreModel attribute (which is not mentioned in the official documentation).
If anyone has more information or a working solution for modifying CapturedRooms, please let me know.
Additionally, if documentation about the CoreModel attribute exists somewhere, please post a link here.
I have searched everywhere for examples to replicate this awesome feature. Specifically, I am talking about the tab overview in Safari: how they achieve the tilt/angle of the windows/views towards the user, and how they are placed. Is this a volumetric window? Some sort of spatial lazy grid?
Does anyone know how they achieved this?
Thanks
Hi everyone,
I'm currently developing an app for Vision Pro using SwiftUI, and I've encountered an issue when testing on the Vision Pro device. The app works perfectly fine on the Vision Pro simulator in Xcode, but when I run it on the actual device, it gets stuck on the loading screen. The logo appears and pulsates when it loads, as expected, but it never progresses beyond that point.
Issue Details:
The app doesn't crash, and I don't see any major errors in the console. However, in the debug logs, I encounter an exception:
Thread 1: "*** -[NSProxy doesNotRecognizeSelector:plane] called!"
I’ve searched through my project, but there’s no direct reference to a selector named plane. I suspect it may be related to a framework or system call failing on the device.
There’s also this warning:
NSBundle file:///System/Library/PrivateFrameworks/MetalTools.framework/ principal class is nil because all fallbacks have failed.
What I’ve Tried:
Verified that all assets and resources are properly bundled and loading (since simulators tend to be more forgiving with file paths).
Tested the app with minimal UI to isolate potential causes, but the issue persists.
Checked the app's Info.plist configuration to ensure it’s properly set up for Vision Pro.
No crashes, just a loading screen hang on the device, while the app works fine in the Vision Pro simulator.
Additional Info:
The app’s UI consists of a loading animation (pulsating logo) before transitioning to the main content.
Using Xcode 16.1 beta and the visionOS SDK.
The app is based on SwiftUI, with Vision Pro optimizations for immersive experience.
Has anyone experienced something similar when moving from the simulator to the Vision Pro hardware? Any help or guidance would be appreciated, especially with regards to the exception or potential resource loading issues specific to the device.
Thanks in advance!
I have a simple example of a motion-matching (MxM for Unity) character controller that uses Unity's input system and gamepad support. In the editor, the scene and inputs work as expected. When I build to the headset, the app stops at an initialization step where my game controller should kick in. The app doesn't crash, but my character is frozen in A-pose and doesn't respond to input.
I'm wondering if this error I'm seeing in the logs is what's causing it? And if so how do I fix it?
error 15:56:11.724200-0700 PolySpatialProjectTemplate NSBundle file:///System/Library/Frameworks/GameController.framework/ principal class is nil because all fallbacks have failed
I'm using Xcode 16 beta 6
Unity 6000.0.17f1
visionOS 2.0 beta 9
Hello!
We're having this issue in our app, which implements multi-room scanning via RoomPlan: the ARSession world origin is shifted to wherever the RoomCaptureSession is run again (e.g. in the next room).
To clarify a few points:
We are using the RoomCaptureView, starting a new room using roomCaptureView.captureSession.run(configuration: captureSessionConfig) and stopping the room scan via roomCaptureView.captureSession.stop(pauseARSession: false)
We are re-using the same ARSession, which is passed into the RoomCaptureView like so:
arSession = ARSession()
roomCaptureView = RoomCaptureView(frame: .zero, arSession: arSession)
Any clue why the AR world origin is reset? I need it to be consistent for storing per-frame camera positions.
Thanks!
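One workaround worth trying (an assumption, not a documented fix): drop your own reference ARAnchor when the first scan starts and store every camera pose relative to it, so a shifted world origin doesn't invalidate your stored poses. Names here are illustrative:

```swift
import ARKit
import simd

// Sketch: express camera poses in the coordinate space of a reference anchor
// instead of the (possibly shifting) session world origin.
final class PoseRecorder {
    private(set) var referenceAnchor: ARAnchor?

    // Call once, e.g. from session(_:didUpdate:) on the first frame.
    func setReferenceIfNeeded(in session: ARSession, from frame: ARFrame) {
        guard referenceAnchor == nil else { return }
        let anchor = ARAnchor(name: "poseReference", transform: frame.camera.transform)
        session.add(anchor: anchor)
        referenceAnchor = anchor
    }

    // Camera pose expressed relative to the reference anchor.
    func relativeCameraTransform(for frame: ARFrame) -> simd_float4x4? {
        guard let reference = referenceAnchor else { return nil }
        return simd_mul(reference.transform.inverse, frame.camera.transform)
    }
}
```

This only helps if ARKit keeps the anchor's placement consistent across the origin shift (anchors are normally re-mapped when the map updates), so it needs verifying against RoomPlan's behavior.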
Hi,
I've tried to implement collision detection between my left index finger (represented by a sphere) and a simple 3D rectangular box. The sphere on my left index finger goes through the object, but no collision seems to take place. What am I missing?
Thank you very much for your consideration!
Below is my code:
App.swift
import SwiftUI
@main
private struct TrackingApp: App {
public init() {
...
}
public var body: some Scene {
WindowGroup {
ContentView()
}
ImmersiveSpace(id: "AppSpace") {
ImmersiveView()
}
}
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
@State private var subscriptions: [EventSubscription] = []
public var body: some View {
RealityView { content in
/* LEFT HAND */
let leftHandIndexFingerEntity = AnchorEntity(.hand(.left, location: .indexFingerTip))
let leftHandIndexFingerSphere = ModelEntity(mesh: .generateSphere(radius: 0.01), materials: [SimpleMaterial(color: .orange, isMetallic: false)])
leftHandIndexFingerEntity.addChild(leftHandIndexFingerSphere)
leftHandIndexFingerEntity.generateCollisionShapes(recursive: true)
leftHandIndexFingerEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateSphere(radius: 0.01)])
leftHandIndexFingerEntity.name = "LeftHandIndexFinger"
content.add(leftHandIndexFingerEntity)
/* 3D RECTANGLE*/
let width: Float = 0.7
let height: Float = 0.35
let depth: Float = 0.005
let rectangleEntity = ModelEntity(mesh: .generateBox(size: [width, height, depth]), materials: [SimpleMaterial(color: .red.withAlphaComponent(0.5), isMetallic: false)])
rectangleEntity.transform.rotation = simd_quatf(angle: -.pi / 2, axis: [1, 0, 0])
let rectangleAnchor = AnchorEntity(world: [0.1, 0.85, -0.5])
rectangleEntity.generateCollisionShapes(recursive: true)
rectangleEntity.components[CollisionComponent.self] = CollisionComponent(shapes: [.generateBox(size: [width, height, depth])])
rectangleEntity.name = "Rectangle"
rectangleAnchor.addChild(rectangleEntity)
content.add(rectangleAnchor)
/* Collision Handling */
let subscription = content.subscribe(to: CollisionEvents.Began.self, on: rectangleEntity) { collisionEvent in
print("Collision detected between \(collisionEvent.entityA.name) and \(collisionEvent.entityB.name)")
}
subscriptions.append(subscription)
}
}
}
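Two things worth checking (these are common RealityKit pitfalls, not a confirmed diagnosis for this case): entities under different anchors can be simulated independently, so a hand-anchored sphere may pass through a world-anchored box without raising CollisionEvents; and appending subscriptions to a `@State` array from inside the `make` closure can silently lose them. A debugging sketch under those assumptions:

```swift
import RealityKit
import SwiftUI

// Holding subscriptions in a reference type avoids @State mutation issues
// inside the RealityView make closure.
final class SubscriptionHolder {
    var subscriptions: [EventSubscription] = []
}

struct CollisionDebugView: View {
    @State private var holder = SubscriptionHolder()

    var body: some View {
        RealityView { content in
            // Subscribing with `on: nil` reports every collision in the scene,
            // which helps confirm whether any events fire at all.
            let sub = content.subscribe(to: CollisionEvents.Began.self, on: nil) { event in
                print("Began: \(event.entityA.name) <-> \(event.entityB.name)")
            }
            holder.subscriptions.append(sub)
        }
    }
}
```

If the scene-wide subscription never fires, the hand anchor vs. world anchor split is the likely culprit; parenting both entities under the same anchor, or driving the finger sphere from ARKit hand tracking in world space, would be the next experiment.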
I have read Converting side-by-side 3D video to multi-view HEVC and spatial video, and now I want to convert back to side-by-side 3D video. On an iPhone 15 Pro Max, the conversion takes roughly as long as the original video (about 1:1).
I do almost the same as the article mentions; the only difference is that I get the frames from the spatial video and merge them into side-by-side. Currently my frame-merging code is written as below. Any suggestions to speed up the process? Or, following the official article, is there anything we can do to speed up the conversion?
// Merge frame
let leftCI = resizeCVPixelBufferFill(bufferLeft, targetSize: targetSize)
let rightCI = resizeCVPixelBufferFill(bufferRight, targetSize: targetSize)
let lbuffer = convertCIImageToCVPixelBuffer(leftCI!)!
let rbuffer = convertCIImageToCVPixelBuffer(rightCI!)!
pixelBuffer = mergeFrames(lbuffer, rbuffer)
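One likely bottleneck (an educated guess, since `convertCIImageToCVPixelBuffer` isn't shown) is allocating a fresh CVPixelBuffer and possibly a fresh CIContext per frame. Reusing one CIContext and a CVPixelBufferPool, and rendering the CIImage straight into a pooled buffer, avoids that per-frame cost. A sketch, with size and pixel format as assumptions:

```swift
import CoreImage
import CoreVideo

// Sketch: one long-lived CIContext plus a CVPixelBufferPool, so each frame
// reuses buffers instead of allocating new ones.
final class FrameMerger {
    private let context = CIContext()   // create once, not per frame
    private var pool: CVPixelBufferPool?

    func makePoolIfNeeded(width: Int, height: Int) {
        guard pool == nil else { return }
        let attrs: [String: Any] = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
            kCVPixelBufferWidthKey as String: width,
            kCVPixelBufferHeightKey as String: height
        ]
        CVPixelBufferPoolCreate(nil, nil, attrs as CFDictionary, &pool)
    }

    // Render a CIImage directly into a pooled buffer, skipping the
    // intermediate convertCIImageToCVPixelBuffer allocation.
    func render(_ image: CIImage) -> CVPixelBuffer? {
        guard let pool else { return nil }
        var buffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(nil, pool, &buffer)
        guard let buffer else { return nil }
        context.render(image, to: buffer)
        return buffer
    }
}
```

It may also be worth profiling whether the resize itself can be folded into the same render pass (e.g. via `CIImage.transformed(by:)`) rather than done as a separate step per eye.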
Hello!
I'm trying to play an animation with a toggle button. When the button is toggled the animation either plays forward from the first frame (.speed = 1) OR plays backward from the last frame (.speed = -1), so if the button is toggled when the animation is only halfway through, it 'jumps' to the first or last frame. The animation is 120 frames, and I want the position in playback to be preserved when the button is toggled - so the animation reverses or continues forward from whatever frame the animation was currently on.
Any tips on implementation? Thanks!
import SwiftUI
import RealityKit
import RealityKitContent
struct ModelView: View {
var isPlaying: Bool
@State private var scene: Entity? = nil
@State private var unboxAnimationResource: AnimationResource? = nil
var body: some View {
RealityView { content in
// Specify the name of the Entity you want
scene = try? await Entity(named: "TestAsset", in: realityKitContentBundle)
scene!.generateCollisionShapes(recursive: true)
scene!.components.set(InputTargetComponent())
content.add(scene!)
} .installGestures()
.onChange(of: isPlaying) {
if (isPlaying){
var playerDefinition = scene!.availableAnimations[0].definition
playerDefinition.speed = 1
playerDefinition.repeatMode = .none
playerDefinition.trimDuration = 0
let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
scene!.playAnimation(playerAnimation)
} else {
var playerDefinition = scene!.availableAnimations[0].definition
playerDefinition.speed = -1
playerDefinition.repeatMode = .none
playerDefinition.trimDuration = 0
let playerAnimation = try! AnimationResource.generate(with: playerDefinition)
scene!.playAnimation(playerAnimation)
}
}
}
}
Thanks!
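One approach to try (an assumption about controller behavior, not verified against this asset): instead of generating a new AnimationResource on every toggle, which restarts playback from an end, keep a single AnimationPlaybackController alive and flip its `speed`. The controller retains its current time, so reversing mid-animation continues from the current frame. A sketch:

```swift
import SwiftUI
import RealityKit
import RealityKitContent

// Sketch: one persistent AnimationPlaybackController whose speed is flipped,
// preserving the current playback position on toggle.
struct ReversibleModelView: View {
    var isPlaying: Bool
    @State private var scene: Entity? = nil
    @State private var controller: AnimationPlaybackController? = nil

    var body: some View {
        RealityView { content in
            if let loaded = try? await Entity(named: "TestAsset", in: realityKitContentBundle) {
                scene = loaded
                content.add(loaded)
            }
        }
        .onChange(of: isPlaying) {
            guard let scene else { return }
            if let controller {
                // Flipping speed on the live controller keeps its time.
                controller.speed = isPlaying ? 1 : -1
                controller.resume()
            } else if let animation = scene.availableAnimations.first {
                let new = scene.playAnimation(animation)
                new.speed = isPlaying ? 1 : -1
                controller = new
            }
        }
    }
}
```

You may still need to tune repeat mode and what happens when the controller runs past either end (pausing at the boundary rather than completing), but flipping speed is the key to avoiding the jump.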
I seem to be running into an issue in an app I am working on where I am unable to update the IBL for an entity more than once in a RealityKit scene. The app is being developed for visionOS.
I have a scene with a model the user interacts with and 360 panoramas as a skybox. These skyboxes can change based on user interaction. I have created an IBL for each of the skyboxes and was intending to swap out the ImageBasedLightComponent and ImageBasedLightReceiverComponent components when updating the skybox in the RealityView's update closure.
The first update works as expected but updating the components after that has no effect. Not sure if this is intended or if I'm just holding it wrong. Would really appreciate any guidance. Thanks
Simplified example
// Task spun up from update closure in RealityView
Task {
if let information = currentSkybox.iblInformation, let resource = try? await EnvironmentResource(named: information.name) {
parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
if let iblEntity = content.entities.first(where: { $0.name == "ibl" }) {
content.remove(iblEntity)
}
let newIBLEntity = Entity()
var iblComponent = ImageBasedLightComponent(source: .single(resource))
iblComponent.inheritsRotation = true
iblComponent.intensityExponent = information.intensity
newIBLEntity.transform.rotation = .init(angle: currentPanorama.rotation, axis: [0, 1, 0])
newIBLEntity.components.set(iblComponent)
newIBLEntity.name = "ibl"
content.add(newIBLEntity)
parentEntity.components.set([
ImageBasedLightReceiverComponent(imageBasedLight: newIBLEntity),
EnvironmentLightingConfigurationComponent(environmentLightingWeight: 0),
])
} else {
parentEntity.components.remove(ImageBasedLightReceiverComponent.self)
}
}
Hi,
I'm experimenting with how my visionOS app interacts with the Mac Virtual Display while the immersive space is active. Specifically, I'm trying to find out if my app can detect key presses or trackpad interactions (like clicks) when the Mac Virtual Display is in use for work, and my app is running in the background with an active immersive space.
So far, I've tested a head-tracking system in my app that works when the app is open with an active immersive space, where I just moved the Mac Virtual Display in front of the visionOS app window.
Could my visionOS app listen to keyboard and trackpad events that happen in the Mac Virtual Display environment?
Hello everyone,
I'm working on developing an app that allows users to share and enjoy experiences together while they are in the same physical locations. Despite trying several approaches, I haven't been able to achieve the desired functionality. If anyone has insights on how to make this possible or is interested in joining the project, I would greatly appreciate your help!
VStack(spacing: 8) {
}
.padding(20)
.frame(width: 320)
.glassBackgroundEffect()
.cornerRadius(10)
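For shared experiences between people in the same physical space, Apple's supported route is SharePlay via the GroupActivities framework. A minimal sketch (the identifier and title are illustrative, and a FaceTime or nearby SharePlay session is required):

```swift
import GroupActivities

// Sketch: a GroupActivity that participants can join, after which app state
// can be synchronized over the group session.
struct SharedRoomActivity: GroupActivity {
    static let activityIdentifier = "com.example.shared-room"

    var metadata: GroupActivityMetadata {
        var metadata = GroupActivityMetadata()
        metadata.title = "Shared Room Experience"
        metadata.type = .generic
        return metadata
    }
}

// Start the activity, e.g. from a button.
func startSharing() async throws {
    _ = try await SharedRoomActivity().activate()
}

// Join incoming sessions and exchange state.
func observeSessions() async {
    for await session in SharedRoomActivity.sessions() {
        session.join()
        // Exchange app state via GroupSessionMessenger(session: session).
    }
}
```

Keeping shared content spatially aligned between headsets in the same room is a separate problem on top of this; GroupActivities handles the session and messaging, not the physical co-location.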
Hello.
When displaying a simple app like this:
struct ContentView: View {
var body: some View {
EmptyView()
}
}
and run the Leaks app from Xcode's developer tools, I see a memory leak that I don't see when running the same application on iOS.
You can simply run the app and it will show a memory leak. And this is what I see in the Leaks application.
Any ideas on what is going on?
Thanks!
I would like to drag two different objects simultaneously using each hand.
In the following session (6:44), it was mentioned that such an implementation could be achieved using SpatialEventGesture():
https://developer.apple.com/jp/videos/play/wwdc2024/10094/
However, since targetedEntity.location3D obtained from SpatialEventGesture is of type Point3D, I'm having trouble converting it for moving objects. It seems like the convert method in the protocol linked below could be used for this conversion, but I'm not quite sure how to implement it:
https://developer.apple.com/documentation/realitykit/realitycoordinatespaceconverting/
How should I go about converting the coordinates?
Additionally, is it even possible to drag different objects with each hand?
.gesture(
SpatialEventGesture()
.onChanged { events in
for event in events {
if event.phase == .active {
switch event.kind {
case .indirectPinch:
if (event.targetedEntity == cube1){
let pos = RealityViewContent.convert(event.location3D, from: .local, to: .scene) //This Doesn't work
dragCube(pos, for: cube1)
}
case .touch, .directPinch, .pointer:
break;
@unknown default:
print("unknown default")
}
}
}
}
)
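On the conversion: `convert(_:from:to:)` from RealityCoordinateSpaceConverting is an instance method, so it has to be called on the RealityViewContent value captured from the make closure, not on the type. A sketch under that assumption (the holder class and entity names are illustrative):

```swift
import SwiftUI
import RealityKit

// Sketch: keep a reference to the RealityViewContent so the gesture handler
// can convert the gesture's Point3D into scene-space coordinates.
final class ContentHolder {
    var content: RealityViewContent?
}

struct TwoHandDragView: View {
    @State private var holder = ContentHolder()
    @State private var cube1 = ModelEntity(mesh: .generateBox(size: 0.1))

    var body: some View {
        RealityView { content in
            holder.content = content
            content.add(cube1)
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    for event in events where event.phase == .active {
                        guard let content = holder.content else { continue }
                        // Point3D (view space) -> SIMD3<Float> (scene space).
                        let scenePos = content.convert(event.location3D,
                                                       from: .local,
                                                       to: .scene)
                        cube1.position = scenePos
                    }
                }
        )
    }
}
```

As for dragging with both hands: SpatialEventGesture delivers a collection of simultaneous events, so each hand's pinch arrives as a separate event; tracking `event.id` (or `event.targetedEntity`) per event lets you route each one to its own object.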
I have quite a big USDZ file containing my 3D model, which I load in a RealityView in my Swift project, and it takes some time before the model appears on screen. So I was wondering if there is a way to know how much time is left for the RealityKit/RealityView model to load, or a percentage I can feed into a progress bar to show the user how long until the full model is visible on screen.
Something like this:
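As far as I know, `Entity(named:)` doesn't report loading progress, so a determinate bar needs a workaround: either show an indeterminate ProgressView until the async load completes, or stream the file's bytes yourself to drive the bar and then load the local file. A plain-Foundation sketch of the byte-streaming half (chunk size and file handling are illustrative):

```swift
import Foundation

// Sketch: read a (e.g. .usdz) file in chunks, reporting a 0...1 fraction after
// each chunk. The resulting local file can then be loaded with
// Entity(contentsOf:), which itself gives no progress.
func readFileWithProgress(at url: URL,
                          chunkSize: Int = 64 * 1024,
                          onProgress: (Double) -> Void) throws -> Data {
    let attrs = try FileManager.default.attributesOfItem(atPath: url.path)
    let totalBytes = (attrs[.size] as? NSNumber)?.intValue ?? 0
    let handle = try FileHandle(forReadingFrom: url)
    defer { try? handle.close() }

    var data = Data()
    while let chunk = try handle.read(upToCount: chunkSize), !chunk.isEmpty {
        data.append(chunk)
        onProgress(totalBytes > 0 ? Double(data.count) / Double(totalBytes) : 0)
    }
    return data
}
```

Note this only measures I/O; RealityKit's parsing and GPU upload after the bytes arrive still happen without progress reporting, so the bar would sit at 100% briefly before the model appears.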
This effect was mentioned in https://developer.apple.com/wwdc24/10153 (demonstrated at 28:00): you can add coordinates by looking somewhere on the ground and tapping. But I don't understand the explanation very well. I hope you can give me a basic solution. I would be very grateful!
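Here is my reconstruction of that interaction, not the session's actual sample code: on visionOS, gaze plus pinch delivers a SpatialTapGesture at the looked-at point, so a collidable, input-targeted floor entity can receive taps and a marker can be placed at each tap location. The fixed floor proxy at y = 0 is an assumption; a real app would use ARKit plane detection instead:

```swift
import SwiftUI
import RealityKit

// Sketch: a large invisible collidable "floor" receives indirect taps;
// each tap location is converted into the floor's space and a marker is added.
struct FloorTapView: View {
    var body: some View {
        RealityView { content in
            let floor = Entity()
            floor.components.set(CollisionComponent(
                shapes: [.generateBox(size: [20, 0.001, 20])]))
            floor.components.set(InputTargetComponent())
            content.add(floor)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    // Gaze + pinch lands the tap where the user is looking.
                    let localPos = value.convert(value.location3D,
                                                 from: .local,
                                                 to: value.entity)
                    let marker = ModelEntity(
                        mesh: .generateSphere(radius: 0.02),
                        materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
                    marker.position = localPos
                    value.entity.addChild(marker)
                }
        )
    }
}
```

Swapping the fixed proxy for entities generated from ARKit's PlaneDetectionProvider would make the markers stick to the actual detected ground.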