Spatial photos in a RealityView have a default corner radius. I built a similar effect with spatial photos in a ScrollView (like Spatial Gallery), but the corner radius disappears on the left and right spatial photos. I've tried the .clipShape and .mask modifiers, but they didn't work. How can I clip or mask a spatial photo so it keeps the corner radius effect?
During a livestream we have to operate the Vision Pro (capture), a Mac (streaming), and a control console (scene switching) at the same time. There is no unified control entry point, and operations such as adjusting 3D models and calibrating image quality require switching between multiple devices, which is error-prone and inefficient.
Request
For livestreaming scenarios, provide a dedicated desktop control application that manages the Vision Pro's capture parameters, 3D model switching, and mixed-reality compositing effects in one place, making multi-device coordination visual and convenient.
Hi,
after upgrading to visionOS 2.4.1 (from 1.0), my Vision Pro is stuck on the "Retrieving configuration" screen. The Apple Store wouldn't take my case because the unit was sold in the USA and the product is not yet available on the Italian market. I don't have a developer strap; how can I resolve this?
Thank you
I'm currently implementing 180° / 360° immersive video for my app.
I implemented 360° easily by just applying a VideoMaterial to a flipped sphere.
But I'm stuck on 180°. I'm trying to implement it by applying a VideoMaterial to a hemisphere (half sphere): I want the VideoMaterial to be visible on the front half of the sphere and leave the back half transparent/clear.
Any advice, information, or ideas on how to implement this would be much appreciated.
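One possible direction, as a rough sketch rather than an official API: RealityKit has no built-in hemisphere generator, but a half-sphere mesh with inward-facing normals can be assembled with MeshDescriptor. The slice/stack counts, UV layout, and winding below are assumptions that may need adjusting for your footage.

import RealityKit
import simd

// Sketch of a custom half-sphere mesh with inward-facing normals, intended for
// equirectangular 180° footage. If the result renders invisible or mirrored,
// flip the triangle winding and/or the u coordinate.
func generateHemisphere(radius: Float, slices: Int = 64, stacks: Int = 32) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var uvs: [SIMD2<Float>] = []
    var indices: [UInt32] = []

    // Longitude spans only the front 180°; latitude spans pole to pole.
    for stack in 0...stacks {
        let v = Float(stack) / Float(stacks)          // 0 at the top pole, 1 at the bottom pole
        let phi = v * .pi
        for slice in 0...slices {
            let u = Float(slice) / Float(slices)      // 0...1 across the 180° span
            let theta = (u - 0.5) * .pi               // -π/2 ... +π/2
            let position = SIMD3<Float>(radius * sin(phi) * sin(theta),
                                        radius * cos(phi),
                                        -radius * sin(phi) * cos(theta)) // bulges toward -z, in front of the viewer
            positions.append(position)
            normals.append(-normalize(position))      // normals point inward, toward the viewer
            uvs.append(SIMD2<Float>(u, 1 - v))
        }
    }

    // Two triangles per grid quad.
    let columns = UInt32(slices + 1)
    for stack in 0..<UInt32(stacks) {
        for slice in 0..<UInt32(slices) {
            let a = stack * columns + slice
            let b = a + columns
            indices += [a, b, a + 1, a + 1, b, b + 1]
        }
    }

    var descriptor = MeshDescriptor(name: "hemisphere")
    descriptor.positions = MeshBuffer(positions)
    descriptor.normals = MeshBuffer(normals)
    descriptor.textureCoordinates = MeshBuffer(uvs)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

Applying a VideoMaterial to this mesh (instead of a flipped full sphere) should leave everything behind the viewer clear, since there simply is no geometry there.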
Environment Versions
・macOS 15.6.1
・visionOS 26.0.1
・Xcode 16.1 or 26.0.1
・Unity 6000.2.9f1
・Apple.core 3.2.0
・Apple.PHASE 1.2.7
・PolySpatial 2.4.2
With the above environment, after installing Apple.PHASE in Unity and building to a visionOS device, audio plays and distance attenuation works, but Early Reflection and Late Reverb produce no audible change even when they are enabled and their parameters are adjusted.
What is required to make Early Reflection and Late Reverb take effect on a visionOS device build?
Actions taken
・created a SoundEvent.
・in composer, created a Sampler and a SpatialMixer; attached an AudioClip to the Sampler; enabled Direct Path, Early Reflection, and Late Reverb on the SpatialMixer.
・attached a PHASE Source to the object to be played, attached the created SoundEvent to it, and set non-zero values for Early Reflection and Late Reverb.
・attached a PHASE Listener to the mainCamera and set the ReverbPreset to a value other than None.
・in project settings > Audio, set Spatializer plugin to PHASE Spatializer.
・from there, build for visionOS.
I am running the Spatial Rendering App template demo and it shows "No People Found" / "There is no one nearby to share with."
How can I stream video rendered by my Mac to my Vision Pro?
I am using macOS 26.0, visionOS 26, and Xcode 26.
Apple's WWDC video What's new for the spatial web says the spatial-backdrop markup may change as it goes through the standards process (at the 27:26 mark).
I have started adding spatial-backdrops to web pages, so I want to keep an eye out for status updates from Apple and follow the standards progress.
Is there any place I can keep an eye on this standards process?
Has Apple announced any feature updates or news on spatial-backdrops?
Hi. I am mixing content destined for Vision Pro, locked to video. I have the AAX installer, but the ASAF video player demonstrated in the QuickTime videos is not included in the Pro Tools install package. Would it be possible to post a link?
VideoMaterial Black Screen on Vision Pro Device (Works in Simulator)
App Overview
App Name: Extn Browser
Bundle ID: ai.extn.browser
Purpose: A visionOS web browser that plays 360°/180° VR videos in an immersive sphere environment
Development Environment & SDK Versions
Xcode: 26.2
Swift: 6.2
visionOS Deployment Target: 26.2
Swift Concurrency: MainActor isolation enabled
The app is released on TestFlight.
Frameworks Used
SwiftUI - UI framework
RealityKit - 3D rendering, MeshResource, ModelEntity, VideoMaterial
AVFoundation - AVPlayer, AVAudioSession
WebKit - WKWebView for browser functionality
Network - NWListener for local proxy server
Sphere Video Mechanism
The app creates an immersive 360° video experience using the following approach:
// 1. Create sphere mesh (10 meter radius for immersive viewing)
let mesh = MeshResource.generateSphere(radius: 10.0)
// 2. Create initial transparent material
var material = UnlitMaterial()
material.color = .init(tint: .clear)
// 3. Create entity and invert sphere (negative X scale)
let sphere = ModelEntity(mesh: mesh, materials: [material])
sphere.scale = SIMD3<Float>(-1, 1, 1) // Inverts normals for inside-out viewing
sphere.position = SIMD3<Float>(0, 1.5, 0) // Eye level
// 4. Create AVPlayer with video URL
let player = AVPlayer(url: videoURL)
// 5. Configure audio session for visionOS
let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(.playback, mode: .moviePlayback, options: [.mixWithOthers])
try audioSession.setActive(true)
// 6. Create VideoMaterial and apply to sphere
let videoMaterial = VideoMaterial(avPlayer: player)
if var modelComponent = sphere.components[ModelComponent.self] {
    modelComponent.materials = [videoMaterial]
    sphere.components.set(modelComponent)
}
// 7. Start playback
player.play()
ImmersiveSpace Configuration
// browserApp.swift
ImmersiveSpace(id: appModel.immersiveSpaceID) {
    ImmersiveView()
        .environment(appModel)
}
.immersionStyle(selection: .constant(.mixed), in: .mixed)
Entitlements
<!-- browser.entitlements -->
<key>com.apple.security.app-sandbox</key>
<true/>
<key>com.apple.security.network.client</key>
<true/>
<key>com.apple.security.network.server</key>
<true/>
Info.plist Network Configuration
<key>NSAppTransportSecurity</key>
<dict>
<key>NSAllowsArbitraryLoads</key>
<true/>
</dict>
The Issue
Behavior in Simulator: Video plays correctly on the inverted sphere surface - 360° video is visible and wraps around the user as expected.
Behavior on Physical Vision Pro: The sphere displays a black screen. No video content is visible, though the sphere entity itself is present.
Important: Not a DRM/Licensing Issue
This issue is NOT related to Digital Rights Management (DRM) or FairPlay. I have tested with:
Unlicensed raw MP4 video files (no DRM protection)
Self-hosted video content with no copy protection
Direct MP4 URLs from CDN without any licensing requirements
The same black screen behavior occurs with all unprotected video sources (plain H.264 MP4, no DRM), ruling out DRM as the cause.
Screen Recording: Working in Simulator
The following screen recording demonstrates playing a 360° YouTube video in the immersive sphere on the visionOS Simulator:
https://cdn.commenda.kr/screen-001.mov
This confirms that the VideoMaterial and sphere rendering work correctly in the simulator, but the same setup shows a black screen on the physical Vision Pro device.
Observations
AVPlayer status reports .readyToPlay - The video appears to load successfully
VideoMaterial is created without errors - No exceptions thrown
Sphere entity renders - The geometry is visible (black surface)
Audio session is configured - No errors during audio session setup
Network requests succeed - The video URL is accessible from the device
Same result with local/unprotected content - DRM is not a factor
Console Logs (Device)
The logging shows:
Sphere created and added to scene
AVPlayer created with correct URL
VideoMaterial created and applied
Player status transitions to .readyToPlay
player.play() called successfully
Rate shows 1.0 (playing)
Despite all success indicators, the rendered output is black.
Questions for Apple
Are there known differences in VideoMaterial behavior between the visionOS Simulator and physical Vision Pro hardware?
Does VideoMaterial(avPlayer:) have specific video codec/format requirements that differ on device? (The test video is a standard H.264 MP4)
Is there a required Metal capability or GPU feature for VideoMaterial that may not be available in certain contexts on device?
Does the immersion style (.mixed) affect VideoMaterial rendering on hardware?
Are there additional entitlements required for video texture rendering in RealityKit on physical hardware?
Attempted Solutions
Configured AVAudioSession with .playback category
Added delay before player.play() to ensure material is applied
Verified sphere scale inversion (-1, 1, 1)
Tested multiple video URLs (including raw, unlicensed MP4 files)
Confirmed network connectivity on device
Ruled out DRM/FairPlay issues by testing unprotected content
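One way to narrow this down further, as a hedged sketch rather than code currently in the app (player stands for the AVPlayer driving the VideoMaterial above): dump the item's error log and listen for mid-playback failures, since decode or format problems on device often land there without ever being thrown.

import AVFoundation

// Diagnostic sketch: surface errors that never throw during setup.
// `player` is assumed to be the AVPlayer backing the VideoMaterial.
func attachPlaybackDiagnostics(to player: AVPlayer) {
    guard let item = player.currentItem else { return }

    // A hard failure on the item, if any.
    if let error = item.error {
        print("AVPlayerItem error:", error.localizedDescription)
    }

    // Codec/format complaints frequently show up only in the error log.
    if let log = item.errorLog() {
        for event in log.events {
            print("errorLog:", event.errorStatusCode, event.errorComment ?? "-")
        }
    }

    // Failures that happen after playback starts arrive via this notification.
    _ = NotificationCenter.default.addObserver(
        forName: .AVPlayerItemFailedToPlayToEndTime,
        object: item,
        queue: .main
    ) { note in
        let error = note.userInfo?[AVPlayerItemFailedToPlayToEndTimeErrorKey] as? Error
        print("Failed to play to end:", error?.localizedDescription ?? "unknown")
    }
}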
Environment Details
Device: Apple Vision Pro
visionOS Version: 26.2
Xcode Version: 26.2
macOS Version: Darwin 25.2.0
Is there any interest in this forum from those developing for the spatial web and Safari? I can't seem to find any relevant posts here.
Hello, I am trying to build an AVP app for real-time, "zero-latency" spatial video streaming, and I'm trying to figure out, at a high level, the best way to do this.
Currently this is my method:
The server sends stereo images via a WebRTC service (e.g., LiveKit).
The WebRTC stream is converted to CVPixelBuffers, written to a file, played via AVPlayer, and a VideoMaterial is applied to a plane entity.
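For reference, the last stage of that pipeline looks roughly like this (a sketch; fileURL is a placeholder for wherever the received frames end up):

import AVFoundation
import RealityKit

// Sketch of the AVPlayer + VideoMaterial stage described above.
// fileURL stands for the file the decoded WebRTC frames are written into.
func makeVideoPlane(fileURL: URL) -> ModelEntity {
    let player = AVPlayer(url: fileURL)
    let material = VideoMaterial(avPlayer: player)
    let screen = ModelEntity(mesh: .generatePlane(width: 1.6, height: 0.9),
                             materials: [material])
    screen.position = [0, 1.5, -2]   // roughly eye level, 2 m in front of the viewer
    player.play()
    return screen
}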
However, this is a bit hacky, and it seems like it won't be compatible with Apple's spatial experiences. To my understanding, Apple supports HLS streaming for spatial experiences and APMP content. However, HLS (and even Low-Latency HLS) introduces a second or more of latency, likely due to the segmented nature of HLS, so HLS will not work for us.
Another alternative I've thought of is streaming the live video via WebRTC from the server to a local computer on the AVP's network, and then using LL-HLS to stream from the local computer to the Vision Pro. Still, it seems like this would introduce latency on the order of seconds.
Is my current approach the best way to implement this? Or could anyone suggest a better way, perhaps something compatible with AVP's spatial experiences?
Hi, I'm currently implementing 180° / 360° support for immersive video in my app.
I was able to implement 360° easily by just applying a VideoMaterial to a flipped sphere.
However, I'm a bit stuck on 180°. I want to implement it by applying a VideoMaterial to a hemisphere mesh, but RealityKit doesn't provide a default function such as MeshResource.generateHemisphere yet. So instead I'd like the front half of the sphere to show the video and the back half to be transparent, which should make the sphere look like a hemisphere.
I can't find a way to implement this. I would appreciate any advice, ideas, or information that might help.
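A minimal sketch of how the pieces could fit together, assuming you generate the half-sphere mesh yourself (generateHemisphere here is a hypothetical helper like the one sketched under the similar question above, not a RealityKit API):

import AVFoundation
import RealityKit

// generateHemisphere(radius:) is a hypothetical custom mesh builder; RealityKit has
// no built-in hemisphere generator. The mesh needs inward-facing normals and UVs
// spanning the 180° field of the footage.
func makeHalfDome(videoURL: URL) throws -> ModelEntity {
    let mesh = try generateHemisphere(radius: 10.0)
    let player = AVPlayer(url: videoURL)
    let dome = ModelEntity(mesh: mesh, materials: [VideoMaterial(avPlayer: player)])
    dome.position = [0, 1.5, 0]      // centered on the viewer at eye level
    player.play()
    return dome
}

Because the back half simply has no geometry, nothing needs to be made transparent there.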
I have an entity that was created using Mixamo, and it has an animation.
After the animation completes, the mesh of the robot is not where the entity is positioned.
I want to do something like this: when the animation finishes, set the root entity's transform to the mesh's transform. There are no transforms applied to any of the children of the model's root, which means the displacement comes from the skeleton as the animation plays.
Is there a way to apply the final position of the skeleton's root to the root entity, so the entity ends up where the animation finished, just before the next animation plays?
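One possible direction, as a hedged sketch under the assumption that the root joint can be located by name (Mixamo rigs usually call it something like "mixamorig:Hips", so inspect model.jointNames for your asset): on AnimationEvents.PlaybackCompleted, fold the root joint's accumulated displacement into the entity's transform and then zero it out so the next clip starts in place. The returned subscription has to be retained, e.g. in @State.

import RealityKit

// Hedged sketch, not a drop-in solution. `model` is the Mixamo ModelEntity,
// `root` is the entity you position in the scene.
func repositionOnAnimationEnd(content: RealityViewContent,
                              model: ModelEntity,
                              root: Entity) -> EventSubscription {
    content.subscribe(to: AnimationEvents.PlaybackCompleted.self, on: model) { _ in
        // Assumption: the skeleton root joint path ends in "Hips".
        guard let hipsIndex = model.jointNames.firstIndex(where: { $0.hasSuffix("Hips") }) else { return }
        let hips = model.jointTransforms[hipsIndex]

        // Move the entity by the horizontal displacement the clip produced,
        // expressed in the entity's own orientation.
        var offset = hips.translation
        offset.y = 0
        root.position += root.orientation.act(offset)

        // Remove that displacement from the joint so the mesh stays under the entity.
        model.jointTransforms[hipsIndex].translation = SIMD3<Float>(0, hips.translation.y, 0)
    }
}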
Hello,
I'm developing a visionOS application for Apple Vision Pro that aims to scan unknown physical objects, capture their 3D data (such as meshes or point clouds), and export them as 3D models. Ideally, I'd also like to visualize these reconstructions in real-time within the headset.
This functionality is similar to what's available in Reality Composer on iPad and iPhone, but I'm seeking to implement it natively on Vision Pro.
I've reviewed the visionOS documentation but haven't found clear guidance on accessing LiDAR depth data or performing scene reconstruction.
Specifically, I'm interested in:
1.Accessing LiDAR or depth data from Vision Pro's sensors.
2.Utilizing ARKit's scene reconstruction capabilities on visionOS.
3.Exporting captured 3D data as models (e.g., USDZ or OBJ formats).
Are there APIs or frameworks in visionOS that support these features?
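For points 1 and 2, visionOS exposes scene reconstruction through ARKit's data providers; it requires a running immersive space and the world-sensing permission. A minimal sketch follows, with export to USDZ/OBJ left out since MeshAnchor.geometry only gives you raw vertex and face buffers to serialize yourself:

import ARKit

// Minimal scene-reconstruction sketch for visionOS. Run this from within an
// immersive space; Info.plist needs NSWorldSensingUsageDescription.
func runSceneReconstruction() async {
    guard SceneReconstructionProvider.isSupported else { return }

    let session = ARKitSession()
    let provider = SceneReconstructionProvider()
    do {
        try await session.run([provider])
    } catch {
        print("ARKitSession failed to run:", error)
        return
    }

    // Stream of mesh anchors describing the reconstructed environment.
    for await update in provider.anchorUpdates {
        let anchor = update.anchor   // MeshAnchor
        print("\(update.event): mesh anchor \(anchor.id) with \(anchor.geometry.vertices.count) vertices")
    }
}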
Hello,
There are odd artifacts (one looks like an image placeholder) appearing when dismissing an immersive space that is displaying an ImagePresentationComponent. Both artifacts look like widgets.
See below for our simple code displaying the ImagePresentationComponent, and images of the odd artifacts that appear briefly when dismissing the immersive space.
import OSLog
import RealityKit
import SwiftUI
struct ImmersiveImageView: View {
    let logger = Logger(subsystem: AppConstant.SUBSYSTEM, category: "ImmersiveImageView")

    @Environment(AppModel.self) private var appModel

    var body: some View {
        RealityView { content in
            if let currentMedia = appModel.currentMedia,
               var imagePresentationComponent = currentMedia.imagePresentationComponent {
                let imagePresentationComponentEntity = Entity()

                switch currentMedia.type {
                case .iphoneSpatialMovie:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .twoD:
                    logger.info("\(#function) \(#line) spatial3DImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatial3DImmersive
                case .visionProConvertedSpatialPhoto:
                    logger.info("\(#function) \(#line) spatialStereoImmersive display for \(String(describing: currentMedia))")
                    imagePresentationComponent.desiredViewingMode = .spatialStereoImmersive
                default:
                    logger.error("\(#function) \(#line) Unsupported media type \(currentMedia.type)")
                    assertionFailure("Unsupported media type \(currentMedia.type)")
                }

                imagePresentationComponentEntity.components.set(imagePresentationComponent)
                imagePresentationComponentEntity.position = AppConstant.Position.spacialImagePosition
                content.add(imagePresentationComponentEntity)
            }

            let toggleViewAttachmentComponent = ViewAttachmentComponent(rootView: ToggleImmersiveSpaceButton())
            let toggleViewAttachmentComponentEntity = Entity(components: toggleViewAttachmentComponent)
            toggleViewAttachmentComponentEntity.position = SIMD3<Float>(
                AppConstant.Position.spacialImagePosition.x + 1,
                AppConstant.Position.spacialImagePosition.y,
                AppConstant.Position.spacialImagePosition.z
            )
            toggleViewAttachmentComponentEntity.scale = AppConstant.Scale.attachments
            content.add(toggleViewAttachmentComponentEntity)
        }
    }
}
When the wearer's head moves naturally, the footage captured by the device shakes noticeably, which makes viewers of the livestream feel dizzy and seriously hurts the immersive livestream experience and shoppers' decision-making.
Request
Improve the device's built-in stabilization algorithm so that ordinary head movement has less impact on image stability, making the livestream footage smoother.
According to the official documentation, the .blur(radius:) modifier applies a Gaussian blur to a view. However, when applied directly to a RealityView, nothing inside it (neither 2D attachments nor 3D entities) appears to be blurred.
Here’s the test code:
struct ContentView: View {
    var body: some View {
        VStack(spacing: 20) {
            Text("Above the RealityView")
                .font(.title)

            RealityView { content, attachments in
                if let text = attachments.entity(for: "2dView") {
                    text.position.y = 0.1
                    content.add(text)
                }
                let box = ModelEntity(
                    mesh: .generateBox(size: 0.1),
                    materials: [SimpleMaterial(color: .red, isMetallic: true)]
                )
                content.add(box)
            } attachments: {
                Attachment(id: "2dView") {
                    Text("Above the Box")
                        .font(.title)
                }
            }
            .frame(width: 300, height: 300)
            .border(.blue)
            .blur(radius: 99) // Has no visual effect

            Text("Below the RealityView")
                .font(.subheadline)
        }
        .padding()
    }
}
My question:
How can I make .blur(radius:) visually affect the content rendered in a RealityView?
Can you provide a working example where .blur() visually affects any part of a RealityView?
Thanks!
Hi,
I'm looking to build something similar to the header blur in the App Store and Apple TV app settings. Does anyone know the best way to achieve this, so that when there is nothing behind the header it looks the same as the rest of the view background, but when content scrolls underneath it gets a blur effect? I've seen .scrollEdgeEffect on iOS 26; is there something similar for visionOS?
Thanks!
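One common approach, as a hedged sketch rather than what Apple's apps actually do: pin a header over the ScrollView and give it a material background, so it only visibly blurs once content scrolls beneath it (the view and row names below are placeholders):

import SwiftUI

// Sketch: a pinned header with a material background over a scroll view.
struct BlurredHeaderDemo: View {
    var body: some View {
        ScrollView {
            LazyVStack(spacing: 12) {
                ForEach(0..<40) { i in
                    Text("Row \(i)")
                        .frame(maxWidth: .infinity, minHeight: 44)
                        .background(.quaternary, in: RoundedRectangle(cornerRadius: 8))
                }
            }
            .padding()
            .padding(.top, 60) // leave room for the pinned header
        }
        .overlay(alignment: .top) {
            Text("Header")
                .font(.headline)
                .frame(maxWidth: .infinity, minHeight: 60)
                .background(.ultraThinMaterial) // picks up whatever scrolls underneath it
        }
    }
}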
This is a very exciting feature in the 26.4 beta, but from the documentation it seems it can only integrate with the NVIDIA CloudXR™ SDK.
I'm wondering if it's possible to use this tool to stream immersive video from a Mac to a Vision Pro?
I am building a 360° photo viewer on visionOS 26. It lets the user choose a 2:1 JPEG and then renders it on a sphere mesh entity, loading the texture with TextureResource(contentsOf: url, options: options).
I noticed two different behaviors depending on the mipmaps options.
When setting "mipmapsMode: .none":
The graphic quality within the "gaze area" looks sharp and clear
The two poles (top and bottom) are perfectly rendered
Massive shimmer around the "gaze area"
When setting "mipmapsMode: .allocateAndGenerateAll":
The graphic looks slightly blurrier than in ".none" within the "gaze area"
The two poles are very blurry and hard to recognize the texture
Much less shimmer around the "gaze area"
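For reference, these are the two configurations being compared (a sketch; url is assumed to point at the chosen 2:1 equirectangular JPEG):

import RealityKit

// Sketch of the two TextureResource configurations described above.
func loadPanoramaTextures(from url: URL) async throws -> (sharp: TextureResource, mipmapped: TextureResource) {
    // Sharp near the gaze point, but shimmers away from it.
    let sharp = try await TextureResource(
        contentsOf: url,
        options: .init(semantic: .color, mipmapsMode: .none)
    )
    // Softer overall (and blurry at the poles), but far less shimmer.
    let mipmapped = try await TextureResource(
        contentsOf: url,
        options: .init(semantic: .color, mipmapsMode: .allocateAndGenerateAll)
    )
    return (sharp, mipmapped)
}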
My question would be: Is there a way to have the perfect graphic quality in ".none" without the massive shimmer?
Thank you!
Screenshots:
mipmapsMode: .none
mipmapsMode: .allocateAndGenerateAll