According to the official documentation, the .blur(radius:) modifier should be able to apply a Gaussian blur to a RealityView. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred.
Here’s the test code:
struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
My question:
How can I make .blur(radius:) visually affect the content rendered in a RealityView?
Can you provide a working example in which .blur() visually affects any part of a RealityView?
Thanks!
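One experiment worth trying (my own sketch, not a confirmed answer): apply the blur inside the Attachment builder rather than to the RealityView itself. An attachment is ordinary SwiftUI content, so the modifier at least compiles there; whether it then renders as expected on the attachment entity is something I have not verified, and it cannot affect the 3D entities either way.

import SwiftUI
import RealityKit

// Experiment only (not a confirmed fix): apply .blur to the SwiftUI content
// inside the Attachment builder rather than to the RealityView as a whole.
// This can only ever affect the 2D attachment, not the 3D entities.
struct BlurredAttachmentView: View {
    var body: some View {
        RealityView { content, attachments in
            if let label = attachments.entity(for: "label") {
                label.position.y = 0.1
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Above the Box")
                    .font(.title)
                    .blur(radius: 8) // blur applied inside the attachment
            }
        }
    }
}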
I believe I have created a VideoMaterial and assigned it to a mesh using code I found in the developer documentation, but I'm getting this error:
"Trailing closure passed to parameter of type 'String' that does not accept a closure"
I have attached a photo of the code and where the error happens.
Any help will be greatly appreciated.
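Without the screenshot it is hard to say exactly which call the trailing closure is being passed to, but for reference, here is a minimal sketch of the usual VideoMaterial pattern with no trailing closures involved; the file name "sample.mp4" is a placeholder.

import RealityKit
import AVFoundation

// A minimal sketch of the usual VideoMaterial pattern; "sample.mp4" is a placeholder.
func makeVideoEntity() -> ModelEntity? {
    guard let url = Bundle.main.url(forResource: "sample", withExtension: "mp4") else {
        return nil
    }
    let player = AVPlayer(url: url)
    // VideoMaterial takes the player as a labeled argument, not a trailing closure.
    let material = VideoMaterial(avPlayer: player)
    let entity = ModelEntity(
        mesh: .generatePlane(width: 1.6, height: 0.9),
        materials: [material]
    )
    player.play()
    return entity
}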
Hi,
after upgrading to 2.4.1 (from 1.0) my Vision Pro gets stuck on the "Retrieving configuration" screen. The Apple Store couldn't take my case because the unit was sold in the USA and the product isn't yet available in the Italian market. I don't have a developer strap; how can I resolve this issue?
Thank you
Topic: Spatial Computing, SubTopic: General
Environment
Xcode: 16.2
VisionOS SDK 2.4
Swift 6.1
Targets: Apple Vision Pro (immersive space)
Frameworks: ARKit, RealityKit, SwiftUI
What I’m Trying to Do
I have a view-model class PlacementManager that holds two AR providers:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
I want to dynamically replace these providers in a setEnvironment(_:) method (so I can save/clear a JSON scene and restart ARKit).
What’s Happening
If I declare them as:
private let worldTracking = WorldTrackingProvider()
private let planeDetection = PlaneDetectionProvider()
I get compile errors when I later do:
self.worldTracking = newWorldTracking // Cannot assign to property: 'worldTracking' is a 'let' constant
If I change them to uninitialized vars:
private var worldTracking: WorldTrackingProvider
private var planeDetection: PlaneDetectionProvider
then in my init() I get:
self used in property access 'worldTracking' before all stored properties are initialized
Code snippet
@Observable
final class PlacementManager: ObservableObject {
    private var worldTracking: WorldTrackingProvider
    private var planeDetection: PlaneDetectionProvider
    // … other props …

    @MainActor
    init() {
        // error: self.worldTracking used before init…
        planeAnchorHandler = PlaneAnchorHandler(rootEntity: root)
        persistenceManager = PersistenceManager(
            worldTracking: worldTracking,
            rootEntity: root
        )
        // …
    }

    @MainActor
    func setEnvironment(env: Environnement) async {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await appState!.arkitSession.run(
            [ newWorldTracking, newPlaneDetection ]
        )
        self.worldTracking = newWorldTracking
        self.planeDetection = newPlaneDetection
        // …
    }
}
What I’ve Tried
Giving them default values at declaration (= WorldTrackingProvider())
Initializing them at the top of init() before any use
Passing the new providers into arkitSession.run(...)
My Question
What is the recommended Swift-style pattern to declare and reassign these ARKit provider properties so that:
They’re fully initialized before use in init(), and
I can swap them out later in setEnvironment(...) without compiler errors?
Any pointers (or links to forum threads / docs) would be greatly appreciated!
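For reference, here is a minimal sketch of one pattern that compiles: give the providers default values at declaration and keep them as var so they can be swapped later. The class and session names here are placeholders, not the actual types from the post.

import ARKit
import Observation

@Observable
@MainActor
final class ProviderHolder {
    // Default values make `self` fully initialized before the rest of init() runs,
    // and `var` allows replacing the providers later.
    private var worldTracking = WorldTrackingProvider()
    private var planeDetection = PlaneDetectionProvider()

    private let session = ARKitSession() // placeholder for the app's session

    func restartProviders() async throws {
        let newWorldTracking = WorldTrackingProvider()
        let newPlaneDetection = PlaneDetectionProvider()
        try await session.run([newWorldTracking, newPlaneDetection])
        worldTracking = newWorldTracking
        planeDetection = newPlaneDetection
    }
}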
With iOS 26 unveiled, has anyone noticed or found any changes related to RoomPlan?
I can't find anything myself, which is disappointing.
Has anyone found any improvements or changes?
We have successfully obtained the permissions for "Main Camera access" and "Passthrough in screen capture" from Apple. Currently, the video streams we have received are from the physical world and do not include the digital world. How can we obtain video streams from both the physical and digital worlds?
thank you!
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: Enterprise, Swift, Reality Composer Pro, visionOS
I am developing a Unity application for the Apple Vision Pro using PolySpatial and RealityKit integration.
The goal is to create a graspable object (for example, a handheld cube) that includes a secondary camera. When the user grabs and moves the object, the secondary camera should render its view to a RenderTexture, which is displayed on a quad attached to the object, simulating a live camera screen.
In the Unity Editor, this setup works correctly. The RenderTexture updates in real time, and the quad displays the camera’s view as expected.
However, when building and running the application on the Vision Pro, the quad only displays the clear background color of the secondary camera. No scene content appears. The graspable interaction itself works fine: the object can be grabbed and moved as intended.
Steps I have taken:
Created a new layer (CameraFeed) and assigned the relevant objects to it.
Set the secondary camera’s culling mask to render only the CameraFeed layer.
Assigned the RenderTexture as the camera’s target texture.
Applied the RenderTexture to an Unlit/Texture material on a quad.
Confirmed the camera is active and correctly positioned relative to the object.
From my research, it appears that once objects are managed by RealityKit through PolySpatial (for example, made graspable), they are no longer rendered through Unity's normal camera pipeline. Only the main XR camera (managed by RealityKit) seems able to see these objects. Secondary Unity cameras cannot render RealityKit-synced content to a RenderTexture. If this is correct, it seems there is currently no way to implement a true live secondary camera feed showing graspable objects on Vision Pro using Unity PolySpatial.
My questions are:
Is there any official way to enable multiple camera rendering of RealityKit-managed objects through PolySpatial?
Are there known workarounds to simulate a live camera feed that still allows objects to be grabbed?
Has anyone found alternative design patterns or methods for this kind of interaction?
Environment: Unity 6.0, PolySpatial 2.2.4, Apple visionOS XR 2.2.4
Any insight or suggestions would be greatly appreciated.
Thank you.
I hope to achieve stable transmission. Also, the colors are different: the colors in the headset are not consistent with the colors projected onto the screen.
After implementing the method for obtaining video streams discussed at WWDC, I found that the obtained video stream does not include digital models in the digital space or related content such as the app's UI. I would like to ask how to obtain a video stream or frame that contains only the physical world?
let formats = CameraVideoFormat.supportedVideoFormats(for: .main, cameraPositions: [.left])
let cameraFrameProvider = CameraFrameProvider()
var arKitSession = ARKitSession()
var pixelBuffer: CVPixelBuffer?
var cameraAccessStatus = ARKitSession.AuthorizationStatus.notDetermined
let worldTracking = WorldTrackingProvider()

func requestWorldSensingCameraAccess() async {
    let authorizationResult = await arKitSession.requestAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func queryAuthorizationCameraAccess() async {
    let authorizationResult = await arKitSession.queryAuthorization(for: [.cameraAccess])
    cameraAccessStatus = authorizationResult[.cameraAccess]!
}

func monitorSessionEvents() async {
    for await event in arKitSession.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            switch newState {
            case .initialized:
                break
            case .running:
                break
            case .paused:
                break
            case .stopped:
                if let error {
                    print("An error occurred: \(error)")
                }
            @unknown default:
                break
            }
        case .authorizationChanged(let type, let status):
            print("Authorization type \(type) changed to \(status)")
        default:
            print("An unknown event occurred \(event)")
        }
    }
}

@MainActor
func processWorldAnchorUpdates() async {
    for await anchorUpdate in worldTracking.anchorUpdates {
        switch anchorUpdate.event {
        case .added:
            // Check whether a persisted object is attached to this added anchor;
            // it may be a world anchor from a previous run of this app.
            // ARKit surfaces all world anchors associated with this app
            // when the world tracking provider starts.
            fallthrough
        case .updated:
            // Keep the placed object's position in sync with its corresponding
            // world anchor, and hide the object if the anchor isn't tracked.
            break
        case .removed:
            // Remove the placed object if its corresponding world anchor was removed.
            break
        }
    }
}

func arkitRun() async {
    do {
        try await arKitSession.run([cameraFrameProvider, worldTracking])
    } catch {
        return
    }
}

@MainActor
func processDeviceAnchorUpdates() async {
    await run(function: self.cameraFrameUpdatesBuffer, withFrequency: 90)
}

@MainActor
func cameraFrameUpdatesBuffer() async {
    guard let cameraFrameUpdates =
            cameraFrameProvider.cameraFrameUpdates(for: formats[0]),
          let cameraFrameUpdates1 =
            cameraFrameProvider.cameraFrameUpdates(for: formats[1]) else {
        return
    }
    for await cameraFrame in cameraFrameUpdates {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        self.pixelBuffer = mainCameraSample.pixelBuffer
    }
    for await cameraFrame in cameraFrameUpdates1 {
        guard let mainCameraSample = cameraFrame.sample(for: .left) else {
            continue
        }
        if self.pixelBuffer != nil {
            self.pixelBuffer = mergeTwoFrames(frame1: self.pixelBuffer!, frame2: mainCameraSample.pixelBuffer, outputSize: CGSize(width: 1920, height: 1080))
        }
    }
}
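The mergeTwoFrames helper is referenced but not shown in the post. Purely as an illustration of one way such a merge could be written (this is not the poster's implementation), a Core Image side-by-side composite might look like the following; scaling each frame to fit half of the output is omitted for brevity.

import CoreImage
import CoreVideo

let mergeContext = CIContext()

// Hypothetical stand-in for the post's mergeTwoFrames: composites two camera
// frames side by side into a new pixel buffer of the requested size.
func mergeTwoFrames(frame1: CVPixelBuffer, frame2: CVPixelBuffer, outputSize: CGSize) -> CVPixelBuffer {
    let left = CIImage(cvPixelBuffer: frame1)
    let halfWidth = outputSize.width / 2
    let right = CIImage(cvPixelBuffer: frame2)
        .transformed(by: CGAffineTransform(translationX: halfWidth, y: 0))
    let combined = right.composited(over: left)

    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(outputSize.width), Int(outputSize.height),
                        kCVPixelFormatType_32BGRA, nil, &output)
    guard let output else { return frame1 }
    mergeContext.render(combined, to: output)
    return output
}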
Hi, I'm trying to place an object in front of an AVPlayer that is docked in the VideoDockingRegion, but when launched in immersive space, the video shows through the objects placed in front of it. How do I make sure these objects are visible?
(Image attached for reference.)
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: ARKit, RealityKit, Reality Composer Pro, Shader Graph Editor
Hello Community,
I’m currently working with the sample code “CapturingDepthUsingTheLiDARCamera” and using it to capture the depth map of an image taken with the iPhone 14 Pro.
From this depth map, I generate a point cloud using the intrinsic camera parameters.
I've noticed that objects not facing the camera directly appear distorted in the resulting point cloud.
For example: An object with surfaces that are perpendicular to each other appears with a sharper angle in the point cloud — around 60° instead of 90°.
My question is:
Is this due to the general accuracy limitations of the LiDAR sensor? Or could it be related to the sample code?
To obtain the depth map, I’m using:
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
Thanks in advance for your help!
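As a side note on the unprojection step itself, here is a minimal sketch of the pinhole-camera math I would expect here (the function and variable names are placeholders). Note that the intrinsics from AVCameraCalibrationData are expressed relative to intrinsicMatrixReferenceDimensions and must be scaled to the depth map's resolution; using unscaled intrinsics is one common source of exactly this kind of angular distortion.

import simd

/// Unprojects a depth map into camera-space points using pinhole intrinsics.
/// `intrinsics` is the 3x3 camera matrix already scaled to the depth map's
/// width/height; depth values are metric (kCVPixelFormatType_DepthFloat32).
func pointCloud(depth: [Float], width: Int, height: Int,
                intrinsics: simd_float3x3) -> [SIMD3<Float>] {
    let fx = intrinsics.columns.0.x
    let fy = intrinsics.columns.1.y
    let cx = intrinsics.columns.2.x
    let cy = intrinsics.columns.2.y
    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for v in 0..<height {
        for u in 0..<width {
            let z = depth[v * width + u]
            guard z.isFinite, z > 0 else { continue }
            // Standard pinhole unprojection: x = (u - cx) * z / fx, y = (v - cy) * z / fy.
            let x = (Float(u) - cx) * z / fx
            let y = (Float(v) - cy) * z / fy
            points.append(SIMD3<Float>(x, y, z))
        }
    }
    return points
}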
Wondering if this is even possible without using CVImageBuffer and passing each frame as an image, which I imagine would be very expensive.
I have a PoC of a shader graph that applies a radial zoom effect to an image. In RealityKit I'm passing the image as a resource:
if let textureResource = try? await TextureResource(named: "fuji") {
    let value = MaterialParameters.Value.textureResource(textureResource)
    try? material.setParameter(name: "MyImage", value: value)
    model.model?.materials = [material]
}
Thanks in advance
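For context, a brute-force way to push a frame into the shader graph each update is to rebuild a TextureResource from a CGImage and call setParameter again, as sketched below; it assumes the material parameter is named "MyImage" as above. Rebuilding a texture every frame is costly, so TextureResource.DrawableQueue (designed for continuously updated textures) is worth investigating as a cheaper path, and depending on the SDK version the newer TextureResource(image:options:) initializer may be preferred over generate(from:options:).

import RealityKit
import CoreImage
import CoreVideo

let ciContext = CIContext()

// Pushes one CVPixelBuffer frame into a ShaderGraphMaterial image parameter.
// Rebuilding a TextureResource every frame is expensive; this is only a sketch.
func push(frame pixelBuffer: CVPixelBuffer,
          into material: inout ShaderGraphMaterial,
          on model: ModelEntity) throws {
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
    guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return }
    let texture = try TextureResource.generate(from: cgImage,
                                               options: .init(semantic: .color))
    try material.setParameter(name: "MyImage",
                              value: .textureResource(texture))
    model.model?.materials = [material]
}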
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: Reality Composer Pro, Shader Graph Editor, visionOS
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the .referenceobject quickly causes tracking to stop. (I know this is a limitation, so I'm trying to make it a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer being tracked? Is it possible to set this up in Reality Composer Pro?
Nearly everything is set up in Reality Composer Pro, with my Immersive scene just anchoring virtual content to the reference object in the RCP scene, so my immersive view just does this:
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene, or whether this is actually the right way to do it.
Apologies for my lack of knowledge.
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: ARKit, RealityKit, Reality Composer Pro, visionOS
I want to pursue a project involving 3D VR visualisation. I would like to know if there is a 3D stereoscopic camera setup yet that can connect straight to the Apple Vision Pro display, of course with no compatibility issues with MV-HEVC. Any recommendation is appreciated.
Topic: Spatial Computing, SubTopic: General
When I enter the mixed space, a model appears, but there is a windowGroup behind the model that cannot be fully viewed. If I want to tap to enter the mixed space, I need to move the windowGroup to another position using code rather than moving it manually.

Topic: Spatial Computing, SubTopic: General
Platform: iOS 18
Tech: RealityView
Hi! I was wondering if RealityView now provides a way for its session to persist anchor data so that the anchor locations saved in one session can be loaded in another session with exactly the same anchor positions.
I know that ARWorldMap in ARKit does that, but I was not able to find a way to use it with RealityView. I think it's because RealityView has ARKit under its hood but does not expose the ARKit session info publicly to the client code.
So I was wondering if there's a SwiftUI + RealityView approach that can help me to achieve a similar goal: Come back to the same location and see the object in exactly the same place.
Thanks!
I have tested the MagnifyGesture code below on multiple devices:
Vision Pro - working
iPhone - working
iPad - working
macOS - not working
In Reality Composer Pro, I have also added the below components to the test model entity:
Input Target
Collision
For macOS, I tried the touchpad pinch gesture and the mouse scroll wheel, but neither approach works. How can I resolve this issue? Thank you.
import SwiftUI
import RealityKit
import RealityKitContent

struct ContentView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }
        }
        .gesture(MagnifyGesture()
            .targetedToAnyEntity()
            .onChanged(onMagnifyChanged)
            .onEnded(onMagnifyEnded))
    }

    func onMagnifyChanged(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyChanged")
    }

    func onMagnifyEnded(_ value: EntityTargetValue<MagnifyGesture.Value>) {
        print("onMagnifyEnded")
    }
}
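One debugging step worth trying (my own suggestion, not from the original post): attach a plain MagnifyGesture with no entity targeting on macOS and see whether it fires at all. If it does, the pinch gesture is being delivered and the .targetedToAnyEntity() hit-testing path becomes the likelier suspect.

import SwiftUI

// Diagnostic only (my own sketch): a plain, non-entity-targeted MagnifyGesture.
// If this prints on macOS while the entity-targeted version stays silent,
// the pinch itself is delivered and the entity hit-testing path is the suspect.
struct MagnifyDiagnosticView: View {
    var body: some View {
        Color.clear
            .contentShape(Rectangle())
            .gesture(
                MagnifyGesture()
                    .onChanged { value in
                        print("plain magnify changed: \(value.magnification)")
                    }
                    .onEnded { _ in
                        print("plain magnify ended")
                    }
            )
    }
}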
Topic: Spatial Computing, SubTopic: General
I'm using Unity 2022.3.56f, with Apple VisionOS App Mode set to 'Virtual Reality - Fully Immersive Space'.
It seems that the render resolution of my game in the Apple Vision Pro when I build is well below the native resolution of the AVP displays.
I can't see a setting in XR Plug-in Management Apple visionOS options, or in Quality settings, to increase the render resolutions. Is this possible?
I tried setting:
UnityEngine.XR.XRSettings.eyeTextureResolutionScale = 2.0f
For example, but this doesn't seem to do anything to the render resolution in the build.
Topic: Spatial Computing, SubTopic: General
I am using ARKit to detect images on Vision Pro. However, I've run into an issue when adding the reference images.
Some of my images sometimes cannot be added correctly. (As you can see in the picture above, the 'orange' cannot be added correctly, but the 'cup' can.) At other times they are added without any problem. I do not know why this happens, and I want them all to be added reliably.
Hi there
I'm using Reality Composer Pro to anchor virtual content to a .referenceobject. However, moving the .referenceobject quickly causes tracking to stop. (I know this is a limitation, and I am trying to embrace it as a feature.)
Is there a way to play a USDZ animation at the last known location after detecting that the reference object is no longer tracked? Is it possible to set this up in Reality Composer Pro?
I'm trying to get the USDZ to play before the virtual content disappears (due to the reference object not being located), so that it smooths out the vanishing of the content.
Nearly everything is set up in Reality Composer Pro, with my Immersive scene just adding virtual content to the reference object, which anchors it in the RCP scene, so my immersive view just does this:
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
}
& this
.onAppear {
    appModel.immersiveSpaceState = .open
}
.onDisappear {
    appModel.immersiveSpaceState = .closed
}
I have tried using SpatialTracking and WorldTrackingProvider, but I'm still quite new to Swift and coding in general, so I'm unsure how to implement them in conjunction with my RCP scene and/or whether this is the right way to go about it.
Also, I have already implemented this at the start of object tracking: all I had to do was add an onAppear behavior to the object to play a USDZ, and that works.
Doing it for disappearing (due to loss of reference object) seems to be a lot harder.
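For anyone exploring the code route, here is a rough sketch (my own, not from the post) of reacting to an entity losing its anchor inside a RealityView by playing its first available animation at its last known pose. It uses standard RealityKit event and animation APIs, but I have not verified this exact flow against reference-object tracking loss, so treat it as a starting point only.

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    // Keep the subscription alive for the lifetime of the view.
    @State private var anchoringSubscription: EventSubscription?

    var body: some View {
        RealityView { content in
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            // Fires whenever an anchored entity gains or loses its anchor
            // (for example, when the reference object stops being tracked).
            anchoringSubscription = content.subscribe(to: SceneEvents.AnchoredStateChanged.self) { event in
                guard !event.isAnchored else { return }
                // Tracking was just lost: play the entity's first animation
                // at its last known pose.
                if let animation = event.anchor.availableAnimations.first {
                    event.anchor.playAnimation(animation, transitionDuration: 0.2)
                }
            }
        }
    }
}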
Topic: Spatial Computing, SubTopic: Reality Composer Pro
Tags: ARKit, AR / VR, RealityKit, Reality Composer Pro