I am using RealityView for an iOS program.
Is it possible to turn off the camera passthrough, so that only my virtual content is shown? I am looking to create a VR experience.
I have a workaround where I turn off occlusion and then create a sphere around me (e.g., with a black texture), but in the pre-RealityView days, I think I used something like this:
arView.environment.background = .color(.black)
Is there something similar in RealityView for iOS?
Here are some snippets of my current workaround inside RealityView.
First create the sphere to surround the user:
// Create sphere
let blackMaterial = UnlitMaterial(color: .black)
let sphereMesh = MeshResource.generateSphere(radius: 100)
let sphereModelComponent = ModelComponent(mesh: sphereMesh, materials: [blackMaterial])
let sphereEntity = Entity()
sphereEntity.components.set(sphereModelComponent)
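// Flip the sphere along x so that its surface is visible from the inside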
sphereEntity.scale *= .init(x: -1, y: 1, z: 1)
content.add(sphereEntity)
Then turn off occlusion:
// Turn off occlusion
let configuration = SpatialTrackingSession.Configuration(
tracking: [],
sceneUnderstanding: [],
camera: .back)
let session = SpatialTrackingSession()
await session.run(configuration)
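For reference, here is a minimal sketch of how these snippets might sit together inside the RealityView make closure (my assembly, with an illustrative view name; it assumes the iOS 18 RealityView and SpatialTrackingSession APIs used above):

import SwiftUI
import RealityKit

struct BlackSurroundView: View {
    var body: some View {
        RealityView { content in
            // Sphere that surrounds the user, rendered from the inside
            let blackMaterial = UnlitMaterial(color: .black)
            let sphereMesh = MeshResource.generateSphere(radius: 100)
            let sphereEntity = Entity()
            sphereEntity.components.set(ModelComponent(mesh: sphereMesh, materials: [blackMaterial]))
            sphereEntity.scale *= .init(x: -1, y: 1, z: 1) // flip so the inside is visible
            content.add(sphereEntity)

            // Spatial tracking without scene understanding, so real-world
            // geometry does not occlude the virtual content
            let configuration = SpatialTrackingSession.Configuration(
                tracking: [],
                sceneUnderstanding: [],
                camera: .back)
            let session = SpatialTrackingSession()
            _ = await session.run(configuration)
        }
    }
}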
Hi,
We are a team of students working on a project featuring the Vision Pro, and we'd simply like to know whether a third-party app can access the video stream of the front cameras.
From our tests, FaceTime, for example, is able to screen-share the entire stream that the user is seeing (real world + app windows), but apps such as Discord are only able to share app windows; the real world is fully black.
Is this a privacy/security restriction, or is it because third-party apps don't yet support the stream from the front cameras?
To give some more context, we'd like to screenshot an area of the view (real world) with a pinch-and-drag gesture and then access the screenshot to work on it. How would we be able to access the video stream?
Thanks in advance for your help,
MrCubic
Hi, from the 2023 WWDC video on RoomPlan, they mention that it should be possible to integrate photo / video with RoomPlan: https://developer.apple.com/videos/play/wwdc2023/10192/ (at ~2:30)
However, when I attempt to use AVFoundation and AVCaptureSession with RoomPlan, I get the simple error of "Cannot Record".
So I'm not sure whether there is something wrong with my setup/code, or whether these two libraries are actually incompatible. Are there any guides for doing things like this? Am I going in the right direction, or should I try a different approach? Happy to share code if necessary. Thanks
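Not from the post above, but one hypothetical way to narrow this down (assuming RoomPlan's underlying ARSession already owns the camera, which is my guess at the cause of "Cannot Record") is to observe capture-session interruptions and log the reason:

import AVFoundation

// Hypothetical diagnostic: log why an AVCaptureSession is interrupted while
// a RoomPlan scan is running.
func observeInterruptions(of session: AVCaptureSession) {
    NotificationCenter.default.addObserver(
        forName: AVCaptureSession.wasInterruptedNotification,
        object: session,
        queue: .main
    ) { notification in
        if let value = notification.userInfo?[AVCaptureSessionInterruptionReasonKey] as? Int,
           let reason = AVCaptureSession.InterruptionReason(rawValue: value) {
            // e.g. .videoDeviceInUseByAnotherClient when ARKit holds the camera
            print("Capture session interrupted: \(reason)")
        }
    }
}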
Hello everyone, I'm a Computer Science student. My supervisor has given me some topics for my final year project, and one of them involves using Vision Pro for facial recognition—specifically, identifying a designated face to display specific information.
As a developer, my understanding of Vision Pro is quite limited. I've done some research online and found that Unity and Xcode are used as development tools. Traditionally, facial recognition is done using OpenCV.
However, I've come across articles stating that Apple, for security reasons, does not support facial recognition. I'd like to ask whether that's true. Also, with visionOS 2 featuring object tracking and image tracking, could these methods potentially replace facial recognition?
In iOS, to display a RealityView, you can assign a value to the content.camera property:
content.camera = .virtual
However, how can this be implemented in macOS and tvOS?
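For reference, here is that iOS usage in context; a minimal sketch with an illustrative view name and placeholder content:

import SwiftUI
import RealityKit

struct VirtualCameraView: View {
    var body: some View {
        RealityView { content in
            // Render through a virtual camera instead of the device camera feed
            content.camera = .virtual
            content.add(ModelEntity(mesh: .generateBox(size: 0.2),
                                    materials: [SimpleMaterial(color: .blue, isMetallic: false)]))
        }
    }
}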
Hello,
Would it be possible to use any of the available visionOS environments when I use an app that requires me to be in an immersive space? I'm developing an app where users can start the immersive space experience by pressing a button. In my case, it would be helpful if the user could still choose a visionOS environment using the Digital Crown, but currently, it seems to be unavailable after opening an immersive space.
Thank you very much in advance!
When opening the Game Center dashboard via the Access Point, the dashboard appears BEHIND any content in the window that has z depth (a default window, not a volumetric one). The content obscures the dashboard, which makes it unusable.
Alerts have the same placement.
The new defaultWindowPlacement would probably suffice, but I don't think there's a way to apply that to the Game Center window.
What to do?
Thanks.
Hi. I recently added SwiftUI context menus and picker menus to my app, but when they are activated they flicker rapidly, and it is impossible to select anything (there is no hover effect either). When these menus are activated, the console prints lots of warning messages similar to this:
[NetworkComponent] Cannot find component's entity (guid=14395713952467043328, typeID=295756909031380028, type=CustomComponentRCPInputTargetComponent, entity=0x1047c6750).
This issue doesn't seem to happen on visionOS 1.2 simulator, but is reliably reproducible on visionOS 2.0 simulator and device.
Any idea what this might be related to? I am attempting to narrow down on the issue but it's challenging to do so without knowing what the error is about. Thanks!
The setup:
I am developing a visionOS app that uses an immersive space.
The user sees a board with entities placed on it. My app places the board in front of the default camera, and the entities at certain positions and orientations relative to the board. Placement and rotation should be animated.
The problem:
If I place the entities by assigning a Transform to the transform property of the entity directly, i.e. without animation, the result is correct.
However, I have to use the entity's move(to:) function to animate it, and move(to:) works in an unexpected way.
I thus wrote a little test app, based on Apple's visionOS immersive app template (below). There, the following 5 cases are treated:
1. Set the transform directly (without animation). This gives the correct result and works as expected (without animation).
2. Set the transform using move(to:) relative to world (without animation). This gives the correct result, although it does not work as I expected. I expected "relative to world" to mean that translation and rotation are relative to the world. This seems to be wrong for the translation and right for the rotation.
3. Set the transform using move(to:) relative to parentEntity (without animation). This gives a wrong result, although translation and rotation are defined relative to the parentEntity.
4. Set the transform using move(to:) relative to world, with animation. This also gives a wrong result, and no animation.
5. Set the transform using move(to:) relative to parentEntity, with animation. This also gives a wrong result, and no animation.
Here are the screenshots for cases 1...5:
Cases 1 & 2
Case 3
Cases 4 & 5
The question:
So, obviously, I don't understand what move(to:) does. I would be happy to get any advice on what is wrong and how to do it right.
Here is the code:
import SwiftUI
import RealityKit
import RealityKitContent
struct ImmersiveView: View {
@Environment(AppModel.self) var appModel
let boardHeight: Float = 0.1
let boxHeight: Float = 0.3
var body: some View {
RealityView { content in
let boardEntity = makeBoard()
content.add(boardEntity)
let boxEntity = makeBox(parentEntity: boardEntity)
boardEntity.addChild(boxEntity)
}
}
func makeBoard() -> ModelEntity {
let mesh = MeshResource.generateBox(width: 1.0, height: boardHeight, depth: 1.0)
var material = UnlitMaterial(); material.color.tint = .red
let boardEntity = ModelEntity(mesh: mesh, materials: [material])
boardEntity.transform.translation = [0, 0, -3]
return boardEntity
}
func makeBox(parentEntity: Entity) -> ModelEntity {
let mesh = MeshResource.generateBox(width: 0.3, height: boxHeight, depth: 0.3)
var material = UnlitMaterial(); material.color.tint = .green
let boxEntity = ModelEntity(mesh: mesh, materials: [material])
// Set position and orientation of the box
// To put the box onto the board, move it up by half height of the board and half height of the box
let y_up = boardHeight/2.0 + boxHeight/2.0
let translation = SIMD3<Float>(0, y_up, 0)
// Turn the box by 45 degrees around the y axis
let rotationY = simd_quatf(angle: Float(45.0 * .pi/180.0), axis: SIMD3(x: 0, y: 1, z: 0))
let transform = Transform(rotation: rotationY, translation: translation)
// Do the actual move
// 1) Set transform directly (without animation)
boxEntity.transform = transform // Translation and rotation correct
// 2) Set transform using move relative to world (without animation)
// boxEntity.move(to: transform, relativeTo: nil) // Translation and rotation correct
// 3) Set transform using move relative to parentEntity (without animation)
// boxEntity.move(to: transform, relativeTo: parentEntity) // Translation incorrect, rotation correct
// 4) Set transform using move relative to world with animation
// boxEntity.move(to: transform,
// relativeTo: nil,
// duration: 1.0,
// timingFunction: .linear) // Translation incorrect, rotation incorrect, no animation
// 5) Set transform using move relative to parentEntity with animation
// boxEntity.move(to: transform,
// relativeTo: parentEntity,
// duration: 1.0,
// timingFunction: .linear) // 5) Translation incorrect, rotation incorrect, no animation
return boxEntity
}
}
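Not an answer from this thread, but for comparison: one alternative to move(to:) is to drive the local transform with an explicit animation resource. A minimal sketch (my assumption is that the target transform is meant to be relative to the entity's parent; the helper name is illustrative):

import RealityKit

// Hypothetical helper: animate an entity's local transform with a
// FromToByAnimation instead of move(to:relativeTo:).
func animateLocalTransform(of entity: Entity, to target: Transform, duration: TimeInterval) {
    let animation = FromToByAnimation<Transform>(
        from: entity.transform,   // start at the current local transform
        to: target,               // end at the desired local transform
        duration: duration,
        timing: .linear,
        bindTarget: .transform)   // drive the entity's transform property
    if let resource = try? AnimationResource.generate(with: animation) {
        entity.playAnimation(resource)
    }
}

In the test app above, case 5 could then be replaced by something like animateLocalTransform(of: boxEntity, to: transform, duration: 1.0).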
I tried to use the application icon from the sample project https://developer.apple.com/documentation/visionos/diorama, but the 3 layers of the app icon are not separated when I hover over the icon in the Vision Pro simulator. Could you please advise how to fix the problem? I am using the latest Xcode Version 15.4 (15F31d). Thank you.
Does anyone have a fix for this, or is this a bug? I just updated to Xcode 16 and the visionOS 2.0 simulator yesterday; previously, running my app on 1.2 took just a few seconds to load, but with 2.0 it takes several minutes. Even if I launch any of the small developer apps available in the Vision samples section, they all take at least 5 minutes to run.
How do I fix this? If I run on the device, it still launches in less than 10 seconds, but that's not always convenient for me.
MacBook Pro, M3, 18 GB, Sonoma 14.6.1
Hi all,
I’m quite new to XR development in general and need some guidance.
I want to create a function that simply tells me if my palm is facing me or not (returning a bool), but I honestly have no idea where to start.
I saw a Reddit post from about 6 months ago that essentially wanted the same thing I need, but the only response was this:
Consider a triangle made up of the wrist, thumb knuckle, and little finger metacarpal (see here for the joints, and note that naming has changed slightly since this WWDC video): the orientation of this triangle (i.e., whether the front or back is visible) seen from the device location should be a very exact indication of whether the user’s palm is showing or not.
While I really like this solution, I genuinely have no idea how to code it, and no further code was provided. I’m not asking for the entire implementation, but rather just enough to get me on the right track.
Here's basically all I have so far (no idea if this is correct or not):
func isPalmFacingDevice(hand: HandSkeleton, devicePosition: SIMD3<Float>) -> Bool {
    // Note: joint positions are in the hand anchor's coordinate space
    // (anchorFromJointTransform), so devicePosition is assumed to already be
    // expressed in that same space.
    func position(of jointName: HandSkeleton.JointName) -> SIMD3<Float> {
        let column = hand.joint(jointName).anchorFromJointTransform.columns.3
        return SIMD3<Float>(column.x, column.y, column.z)
    }
    // Get the wrist, thumb knuckle and little finger metacarpal positions as 3D vectors
    let wristPos = position(of: .wrist)
    let thumbKnucklePos = position(of: .thumbKnuckle)
    let littleFingerPos = position(of: .littleFingerMetacarpal)
    // Normal of the triangle spanned by the three joints
    let normal = normalize(cross(thumbKnucklePos - wristPos, littleFingerPos - wristPos))
    // Direction from the wrist toward the device
    let toDevice = normalize(devicePosition - wristPos)
    // If the normal points toward the device, the palm faces the device
    // (the sign may need flipping depending on handedness / winding order)
    return dot(normal, toDevice) > 0
}
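And a hypothetical call site (my assumption, not from the quoted answer) for getting devicePosition into the same coordinate space as the joints, given a HandAnchor and a DeviceAnchor from ARKit:

import ARKit

// Hypothetical usage: express the device position in the hand anchor's
// coordinate space before evaluating the palm orientation.
func palmFacesDevice(handAnchor: HandAnchor, deviceAnchor: DeviceAnchor) -> Bool {
    guard let skeleton = handAnchor.handSkeleton else { return false }
    // originFromAnchorTransform maps anchor space -> world; invert it to map
    // the device's world-space position into the hand anchor's space.
    let anchorFromWorld = handAnchor.originFromAnchorTransform.inverse
    let deviceInWorld = deviceAnchor.originFromAnchorTransform.columns.3
    let deviceInAnchor = anchorFromWorld * deviceInWorld
    return isPalmFacingDevice(hand: skeleton,
                              devicePosition: SIMD3<Float>(deviceInAnchor.x, deviceInAnchor.y, deviceInAnchor.z))
}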
Hi,
I'm currently working on some messages that should appear in front of the user depending on the system state of my visionOS app. How can I change the distance of the message relative to the user if the message is displayed as a View? Or is this only possible if I create an entity for that message and then apply setPosition(_:relativeTo:), e.g. relative to the head anchor? Currently I can change the x and y coordinates of the view, since it works within a 2D space, but as I'm intending to display that view in my immersive space, it would be great if I could place my message a little bit further away in the user's UI, as it is currently a little bit too close in the user's view. If there is a solution without the use of entities, I would prefer that one.
Thank you for your help!
Below an example:
Feedback.swift
import SwiftUI
struct Feedback: View {
let message: String
var body: some View {
VStack {
Text(message)
}
.position(x: 0, y: -850) // how to adapt distance/depth relative to user in UI?
}
}
ImmersiveView.swift
import SwiftUI
import RealityKit
struct ImmersiveView: View {
@State private var feedbackMessage = "Hello World"
public var body: some View {
VStack {}
.overlay(
Feedback(message: feedbackMessage)
)
RealityView { content in
let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])
let spatialTrackingSession = SpatialTrackingSession.init()
_ = await spatialTrackingSession.run(configuration)
// Head
let headEntity = AnchorEntity(.head)
content.add(headEntity)
}
}
}
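One possible direction (my suggestion, not something from the post) is to keep the message as a SwiftUI view but hand it to RealityKit as a RealityView attachment, so it becomes an entity whose distance can be set in meters relative to a head anchor. A rough sketch; the attachment id and view name are illustrative:

import SwiftUI
import RealityKit

struct ImmersiveMessageView: View {
    @State private var feedbackMessage = "Hello World"

    var body: some View {
        RealityView { content, attachments in
            // Anchor that follows the user's head
            let headAnchor = AnchorEntity(.head)
            content.add(headAnchor)

            if let messageEntity = attachments.entity(for: "feedback") {
                // 1.5 m in front of the user (negative z is forward)
                messageEntity.position = [0, 0, -1.5]
                headAnchor.addChild(messageEntity)
            }
        } attachments: {
            Attachment(id: "feedback") {
                Text(feedbackMessage) // or the Feedback view, without the 2D .position offset
            }
        }
    }
}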
Hi! I am making a visionOS program. I have an idea to embed spatial videos or pictures into my UI, but I have run into problems and have no way to implement it. I have tried the following approaches:
I used AVPlayerViewController to play a spatial video, but it only displays the video spatially when modalPresentationStyle = .fullScreen. Once embedded in a SwiftUI view, it shows as a normal 2D video.
I also tried the method from https://developer.apple.com/forums/thread/733813, using a shader graph to display spatial images, but the material can only be attached to an entity, and I don't know how to make it show up in a view.
I also tried to use CAMetalLayer to implement this and write a custom shader to display spatial images, but I couldn't find an equivalent of Unity's unity_StereoEyeIndex to switch rendering between the two eyes.
Does anyone have a good solution to my problem? Thank you!
I'm encountering a 1-meter size limit on the visual presentation of objects presented in an immersive environment in visionOS, both in the simulator and on the device.
For example, if I load a USDZ object that’s 1.0x0.5x0.05 meters, all of the 1.0x0.5 meter side is visible.
If I scale it by a factor of 2.0, only a 1.0x1.0 viewport onto the object is shown, even though the object size reads out as scaled when queried by
usdz.visualBounds(relativeTo: nil).extents
and if the USDZ is animated, the animation reflects the motion of the entire object.
I haven't been able to determine why this is the case, nor any way to adjust/mitigate it.
Is this a hard constraint of the system, or is there a workaround?
The target environment is visionOS 1.2.
In my visionOS app I am attempting to get the location of a finger press (not a tap, but when the user first presses their fingers together). As far as I can tell, the only way to get this event is to use a SpatialEventGesture.
I've currently got a DragGesture, and I am able to use the convert functions in the passed-in EntityTargetValue to convert the location3D from the DragEvent to my hit-tested entity. But as far as I can tell, SpatialEventGesture doesn't use an EntityTargetValue. I've tried using the convert functions on my targeted entity (i.e., myEntity.convert(position:from:)), but these do not return valid values.
My questions are:
Is SpatialEventGesture the correct way to get notified of finger presses?
How do I convert the location3D in the SpatialEventGesture to my entity space?
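For what it's worth, here is a minimal sketch of the SpatialEventGesture side (detecting the initial press via the .active phase). It does not answer the conversion question, and the view/struct names are illustrative:

import SwiftUI
import RealityKit

// Minimal sketch: log the first moment the fingers come together.
// Converting location3D into an entity's space is the open question and is
// not shown here.
struct PressLoggingView: View {
    @State private var activeEventIDs: Set<SpatialEventCollection.Event.ID> = []

    var body: some View {
        RealityView { content in
            // scene setup goes here
        }
        .gesture(
            SpatialEventGesture()
                .onChanged { events in
                    for event in events where event.phase == .active {
                        if activeEventIDs.insert(event.id).inserted {
                            // First .active phase for this event = initial finger press
                            print("Press began at \(event.location3D)")
                        }
                    }
                }
                .onEnded { events in
                    for event in events {
                        activeEventIDs.remove(event.id)
                    }
                }
        )
    }
}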
When I load a certain USDZ file, it crashes 100% of the time. Why?
It crashes in the simulator, but not on the Vision Pro.
-[MTLDebugDevice newBufferWithBytes:length:options:]:723: failed assertion `Buffer Validation
newBufferWith*:length 0x100fff80 must not exceed 256 MB.
I don't get a cameraFrame from cameraFrameUpdates in my Vision Pro app. Why are no frames arriving, and where am I going wrong in the code? Please guide me.
for await cameraFrame in cameraFrameUpdates { print("cameraFrame:: \(cameraFrame)") }
var body: some View {
VStack {
image
.resizable()
.scaledToFit()
if(self.finalImage != nil){
self.finalImage!
.resizable()
.scaledToFit()
}else{
image
.resizable()
.scaledToFit()
}
}
.task {
if #available(visionOS 2.0, *) {
guard CameraFrameProvider.isSupported else {
print("CameraFrameProvider not supported.")
return
}
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [CameraFrameProvider.CameraPosition.left])
let cameraFrameProvider = CameraFrameProvider()
do {
try await arkitSession.run([cameraFrameProvider])
} catch {
guard let sessionError = error as? ARKitSession.Error else {
preconditionFailure("ARKitSession.run() returned a non-session error: \(error)")
print("ARKitSession.run() returned a non-session error: \(error)")
}
}
guard let cameraFrameUpdates = cameraFrameProvider.cameraFrameUpdates(for: formats[0]) else {
preconditionFailure("Failed to get an async sequence for the first format.")
print("Failed to get an async sequence for the first format.")
}
print("cameraFrameUpdates:: \(cameraFrameUpdates)")
for await cameraFrame in cameraFrameUpdates {
print("cameraFrame:: \(cameraFrame)")
print("Camera Frame ::: LEFT :: \(cameraFrame.sample(for: .left))")
guard let leftSample = cameraFrame.sample(for: .left) else {
print("CameraFrameProviderSample - Nil camera frame left sample")
print("CameraFrameProviderSample - Nil camera frame left sample")
continue
}
self.pixelBuffer = leftSample.pixelBuffer
print(" ======== PIXEL BUFFER ::: \(self.pixelBuffer) ========")
self.finalImage = self.setImage()
}
} else {
// Fallback on earlier versions
}
}
}
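Not a confirmed diagnosis, but one thing worth checking (my assumption: main camera access on visionOS 2 is also gated behind the enterprise camera-access entitlement/license) is whether the session actually has camera authorization before the provider runs:

import ARKit

// Hypothetical pre-flight check, not from the original post: confirm that main
// camera access has been granted before running the CameraFrameProvider.
func hasCameraAccess(_ session: ARKitSession) async -> Bool {
    let results = await session.requestAuthorization(for: [.cameraAccess])
    if results[.cameraAccess] != .allowed {
        print("Main camera access not granted: \(String(describing: results[.cameraAccess]))")
        return false
    }
    return true
}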
Hi,
I have a RealityKit app that I am building with Xcode 16. The app has a minimum deployment target of iOS 17. If I run it on an iOS 17 device the app crashes:
dyld[15716]: Symbol not found: _$s10RealityKit13ShapeResourceC14generateConvex4fromAcA04MeshD0C_tYaKFZ
Referenced from: …
Expected in: …/System/Library/Frameworks/RealityFoundation.framework/RealityFoundation
My code looks something like this:
@available(iOS, introduced: 13.0, obsoleted: 18.0)
@MainActor @preconcurrency func generateNonAsyncConvexShapeResource(from meshResource: MeshResource) throws -> ShapeResource {
ShapeResource.generateConvex(from: meshResource)
}
@available(iOS 18.0, *)
func generateConvexShapeAsync(from meshResource: MeshResource) async throws -> ShapeResource {
// This will only be available for iOS 18 and above
return try await ShapeResource.generateConvex(from: meshResource)
}
if let meshResource = try? modelEntity.model?.mesh.applying(transform: transform.matrix) {
if #available(visionOS 1.0, iOS 18.0, *) {
try? await generateConvexShapeAsync(from: meshResource)// await shapeResources.append(.generateConvex(from: meshResource))
} else {
try? generateNonAsyncConvexShapeResource(from: meshResource)
}
}
So I do actually check the system version and only call the async variant on iOS 18.
Any hints on how to fix this?
Thanks!
Using OcclusionMaterial on macOS and iOS works fine in non-AR mode when I set the background to just a simple color (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/color), but when I set a custom skybox (https://developer.apple.com/documentation/realitykit/arview/environment-swift.struct/background-swift.struct/skybox(_:)), the OcclusionMaterial renders as fully black. I would expect it to properly occlude the content and show the skybox behind it.
This happens with both ARView and RealityView, on current iOS/macOS betas as well as on older systems, e.g. iOS 17 and macOS Sonoma.
Feedback ID: FB15081053