I would like to display the content of a three.js-based web app as a 3D model in a volumetric window. Is it possible to do this in a manner similar to loading a web page in a WKWebView?
let component = GestureComponent(DragGesture())
iOS: ☑️
visionOS: ❌
This bug has persisted from beta through the public release; please fix it.
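For context, a minimal repro sketch, assuming the new GestureComponent API together with the usual input setup (the entity and shapes here are my own illustration; gestures also require InputTargetComponent and CollisionComponent):

import RealityKit
import SwiftUI

// Hypothetical repro: attach a SwiftUI DragGesture to an entity.
let entity = ModelEntity(mesh: .generateBox(size: 0.2))
entity.components.set(InputTargetComponent())
entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
entity.components.set(GestureComponent(DragGesture().onChanged { _ in
    // Fires on iOS; never fires on visionOS in my testing.
}))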
Hello everyone,
I am currently developing an experience for visionOS using RealityKit and I would like to achieve volumetric light effects, such as visible light rays or shafts through fog or dust.
I found this GitHub project: https://github.com/robcupisz/LightShafts, which demonstrates the kind of visual style I am aiming for. I would like to know if there is a way to create similar effects using RealityKit on visionOS.
So far, I have experimented with DirectionalLight, SpotLight, ImageBasedLight, and custom materials (e.g., additive blending on translucent meshes), but none of these approaches can replicate the volumetric light shaft look shown in the repository above.
Questions:
Is there a recommended technique or workaround in RealityKit to simulate light shafts or volumetric lighting?
Is creating a custom mesh (e.g., cone or volume geometry with gradient alpha and additive blending) the only feasible method?
Are there any examples, best practices, or sample projects from Apple or other developers that showcase a similar visual style?
Any advice or hints would be greatly appreciated. Thank you in advance!
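For reference, the closest I have gotten is the fake-shaft approach, sketched below (makeLightShaft is my own placeholder; true additive blending would presumably need a ShaderGraphMaterial authored in Reality Composer Pro):

import RealityKit
import UIKit

// Fake a light shaft with a tall cone and a translucent, unlit white
// material; this approximates the look but is not real volumetric
// scattering.
func makeLightShaft() -> ModelEntity {
    let mesh = MeshResource.generateCone(height: 2.0, radius: 0.4)
    var material = UnlitMaterial(color: .white)
    material.blending = .transparent(opacity: .init(floatLiteral: 0.15))
    return ModelEntity(mesh: mesh, materials: [material])
}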
Topic: Spatial Computing
SubTopic: General
Tags: RealityKit, Reality Composer Pro, Shader Graph Editor, visionOS
Hello Community,
I am currently developing an experimental visionOS app to investigate the social effects of the new Spatial Persona feature for my bachelor thesis. My setup includes a simple board game in which the participants can engage through their Persona avatars.
I tried to use TabletopKit for this setup, but ran into issues when starting the SharePlay session. When I tested my app, I couldn't see the other spatial Persona anymore, despite the green SharePlay button indicating that the session had started. The other person can see my actions on the board in their version of the app, but cannot interact with anything. Also, we are both seated at the default seat position.
I tried removing the environment I had added, because it doesn't seem to sync with the other player. When I tried the FaceTime feature in the simulator without the environment, I could then see the test robot avatar, but in a totally wrong place. It seems it isn't just my environment occluding the seats, but a flaw in the seating process as well.
When I tried the FaceTime feature in the simulator with the official test scene (the TabletopKit sample), I got the same incorrect placement and the warning "role(for:inSeatNumber:): The provided role identifier does not match a role in the current template."
So my questions are:
What needs to be changed so that TabletopKit handles seating correctly?
How can I correctly use immersive scenes in combination with TabletopKit?
I kept my implementation as close as possible to the TabletopKit example, so I think it should be enough to look into that codebase for now.
I have debugged the positions of the seats, and they are placed correctly in front of their equipment. The Personas are just not placed on them.
I have discovered that RemoteImmersiveSpace is limited to content conforming to the CompositorContent protocol, precluding direct use of RealityView. Consequently, I am interested in understanding the appropriate way to integrate CompositorContent within a RemoteImmersiveSpace. Thanks.
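For reference, here is my current understanding as a hedged sketch (the type names other than RemoteImmersiveSpace, CompositorContent, and CompositorLayer are placeholders, and the Metal render loop is elided):

import SwiftUI
import CompositorServices

// A CompositorContent wrapper around a CompositorLayer; RealityView
// is not available in this context, only a Metal-driven LayerRenderer.
struct MyMetalContent: CompositorContent {
    var body: some CompositorContent {
        CompositorLayer { layerRenderer in
            // Start a Metal render loop driving layerRenderer here.
        }
    }
}

// On the macOS side, the scene would then be declared as:
// RemoteImmersiveSpace(id: "metal-space") { MyMetalContent() }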
Screenshot:
Specific error message:
validateComputeFunctionArguments:1149: failed assertion `Compute Function(textureShader): Shader uses texture(texture[0]) as read-write, but hardware does not support read-write texture of this pixel format.'
OS: visionOS 2.1 (22N5548c) simulator.
Link:
https://developer.apple.com/documentation/visionos/generating-procedural-textures-in-visionos
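In case it is relevant to others hitting this: the assertion concerns read-write texture support, which depends on the GPU's read-write texture tier, and the simulator's GPU can report a lower tier than a physical device. A small sketch for querying the tier at runtime:

import Metal

// Query the device's read-write texture tier; the set of pixel formats
// usable as read-write differs between tiers (tier1 is limited to a few
// 32-bit formats such as r32Float).
if let device = MTLCreateSystemDefaultDevice() {
    switch device.readWriteTextureSupport {
    case .tierNone: print("No read-write texture support")
    case .tier1: print("Tier 1: limited formats")
    case .tier2: print("Tier 2: broader format support")
    @unknown default: print("Unknown tier")
    }
}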
I use an AVPlayer in a window view. I found that when I move the window to different positions, the default behavior is that the sound changes according to the window position. However, in some cases I don't need this default behavior; I want the sound to stay unchanged.
Hello,
I have an iOS app that is using SwiftUI but the gesture code is written using UIGestureRecognizer. When I run this app on visionOS using the "Designed for iPad" destination and try to use any of my gestures I see this warning in the console:
Trying to convert coordinates between views that are in different UIWindows, which isn't supported. Use convertPoint:fromCoordinateSpace: instead.
But I don't see any visible problems with the gestures.
I see this warning printed after the gesture takes place but before any of our gesture methods are invoked. So now I am wondering whether this is something we need to deal with, or internal work that needs to happen in UIKit.
Does anyone have any thoughts on this?
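In case it helps frame answers, the API the warning points to is on UICoordinateSpace; a minimal sketch of converting a gesture location that way (view names are illustrative):

import UIKit

// Convert a gesture's location between views via their coordinate
// spaces rather than UIWindow-based conversion.
func location(of gesture: UIGestureRecognizer,
              from sourceView: UIView,
              in targetView: UIView) -> CGPoint {
    let point = gesture.location(in: sourceView)
    return targetView.coordinateSpace.convert(point, from: sourceView.coordinateSpace)
}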
I haven't found a way to programmatically position the main view in visionOS apps, which seems intentional. While this aligns with user-controlled window placement, it creates a challenge: as users move, they must constantly reposition the main window manually.
A potential solution could be a feature that quickly brings the window to the user, perhaps via a custom gesture. This might improve user experience significantly.
Given my current understanding of visionOS, I may be missing something. I'd appreciate any insights or alternative perspectives on this issue. Thoughts?
I'm developing an app in which I need to render pictures of some models contained in a RealityView. I want to set up a camera, capture the virtual content through that camera, and save the result as an image.
Topic: Spatial Computing
SubTopic: General
Tags: SwiftUI, RealityKit, Reality Composer Pro, Shader Graph Editor
I have a normal application flow, and at one point I need to open an immersive space to view an image in 360 degrees. But currently, whenever I run the application, it starts in the immersive space and my app's normal flow never starts. What is the issue here?
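From what I understand, the first scene in the App body is what the app launches into, so the structure I would expect is sketched below (scene identifier and view names are placeholders). It may also be worth checking the Info.plist "Preferred Default Scene Session Role": if it is set to the immersive space role, the app launches into the immersive space regardless of scene order.

import SwiftUI

@main
struct MyApp: App {
    var body: some Scene {
        // Declared first, so the app launches into the normal 2D flow.
        WindowGroup {
            ContentView()
        }

        // Opened on demand, e.g. from a button in ContentView:
        // @Environment(\.openImmersiveSpace) var openImmersiveSpace
        // Button("View 360°") { Task { await openImmersiveSpace(id: "Image360") } }
        ImmersiveSpace(id: "Image360") {
            Panorama360View()
        }
    }
}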
Hello everyone
I would like to create my own spatial video on my Apple Vision Pro. According to Apple's documentation, this requires two camera angles to produce the spatial effect. I have purchased the Enterprise license with main-camera access for this purpose. However, this only gives me access to the left main camera of the headset. Is there a way to access the right camera as well? Or is one camera image enough to create a spatial video, for example by splitting the image?
I am open to any help and ideas. My goal is to create the video with the cameras on the headset, not externally.
We have a native iOS app that supports the upload and display of USDZ files. It has been working great since beta (late 2022) and live launch (late 2023) until now. But recently we had reports from some users on Max-model phones (14 Pro Max and 15 Pro Max at least). When they tap and launch a 3D file, the Quick Look player is triggered. So far so good. But for affected users the controls along the top of the player - X (close), AR | Object (toggle), and the share button - are moving too high up the phone screen and getting stuck (untappable) behind the phone's top status bar (time, camera bug, connection, battery).
This means that when they open a USDZ file in AR or 3D view they have to hard-close the app to get out of it again. This doesn't happen when they open a USDZ file from Files, Dropbox, etc. on their phone (which also uses the Quick Look player). The controls only move up and get stuck when launching a USDZ from within our app.
I'm at a loss to figure out what might be causing this on some phones and not others, and why only when opening a USDZ file from our app. So far we have replicated this issue on a single iPhone 14 Pro Max and a 15 Pro Max, both running iOS 18+.
We have tested on other 15 Pro Maxes on the same OS, as well as Pros, regular iPhones, and Minis, and they are not experiencing the issue. You would think that a USDZ file is a USDZ file and that your iPhone knows what to do with it and opens it in the Quick Look player regardless of where you open the file from. Why would the navigation items move if you open the USDZ file from within our app, and why only for some select users?
We will continue to troubleshoot and test, but I wanted to throw this out to the community in case anyone has experienced this or has any theories that would expedite our testing. Your thoughts are most appreciated!
Here is a video showing the expected (correct) behaviour: https://www.dropbox.com/scl/fi/0sp8s4opaf2m4gukkcbrk/How-opening-a-USDZ-should-behave_correct-behaviour.MP4?rlkey=tzzau9x91mwox66gsgguryhep&st=qiykmne9&dl=0 and a screenshot attached below of what is happening on one of the affected users' iPhone 15 Pro Max.
This issue has existed since visionOS 1, unless this is how it is supposed to work. As you can see in the screen capture, the shadows from the top box are shown on all three boxes below.
This is a screen capture in Reality Composer Pro, but the same thing happens on the Vision Pro.
Is there any way to stop this behavior and have shadows appear only on the first object below the object that is casting them?
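The only control I have found so far is per caster rather than per receiver, via GroundingShadowComponent; a minimal sketch (I may well be missing a receiver-side switch):

import RealityKit

// Stop a specific entity from casting a grounding shadow entirely.
// Unverified assumption: newer SDKs may also expose a receivesShadow
// flag on this component for the receiving side.
topBox.components.set(GroundingShadowComponent(castsShadow: false))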
I am trying to get the new PresentationComponent working in visionOS 26, as seen in this WWDC video:
https://developer.apple.com/videos/play/wwdc2025/274/?time=962 (18:29 minutes into video)
Here is some other example code but it doesn't work either: https://stepinto.vision/devlogs/project-graveyard-devlog-002/
My simple Text view (that I am adding as a PresentationComponent) does not appear in my RealityView even though the entity is found. Here is a simple example built from an Xcode immersive view default project:
struct ImmersiveView: View {
    @Environment(AppModel.self) var appModel

    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)

                if let materializedImmersiveContentEntity = try? await Entity(named: "Test", in: realityKitContentBundle) {
                    content.add(materializedImmersiveContentEntity)

                    var presentation = PresentationComponent(
                        configuration: .popover(arrowEdge: .bottom),
                        content: Text("Hello, World!")
                            .foregroundColor(.red)
                    )
                    presentation.isPresented = true
                    materializedImmersiveContentEntity.components.set(presentation)
                }
            }
        }
    }
}
Here is the Apple reference: https://developer.apple.com/documentation/realitykit/presentationcomponent
Information is light on the new subdivision support for USD models in RealityKit, and I have been unable so far to get one of my models to actually subdivide within Reality Composer Pro or Quick Look (or when viewing on Vision Pro).
I've exported a few test models from Houdini and verified that it contains ' uniform token subdivisionScheme = "catmullClark"'. I've started with some very lightweight, basic meshes.
But, when viewing, they simply look like polygonal meshes. There's no 'subdividing' occurring at runtime when viewing the models.
Is there a trick to getting them to actually smooth out?
Topic: Spatial Computing
SubTopic: General
I'm getting the following error in my Swift build targeting visionOS 2.0:
"'defaultDisplay' is unavailable in visionOS"
TL;DR: how do I specify an initial window position in visionOS? The docs seem to be off; see below.
The docs say it is available, but it is not, or at least my Xcode (Version 16.0) is throwing errors on it:
https://developer.apple.com/documentation/swiftui/scene/defaultwindowplacement(_:)
I know Apple is opinionated about window placement in visionOS, and maybe it will never be available, but the docs say it is in visionOS 2.0+, and it sure would be nice to be able to specify a default position toward the bottom of one's FOV, etc.
Side note: the example code in that doc also has the issue that "Window" is not available in visionOS (WindowGroup is).
Example code, barely modified from the example code in the doc:
var body: some Scene {
    WindowGroup("MyLilWindow", id: "MyLilWindow") {
        TestView()
    }
    .windowResizability(.contentSize)
    .defaultWindowPlacement { content, context in
        let displayBounds = context.defaultDisplay.visibleRect
        let size = content.sizeThatFits(.unspecified)
        let position = CGPoint(
            x: displayBounds.midX - (size.width / 2),
            y: displayBounds.maxY - size.height - 140)
        return WindowPlacement(position)
    }
}
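For what it's worth, a hedged sketch of the direction I am experimenting with on visionOS, using a relative placement instead of display geometry (assuming WindowPlacement.Position.utilityPanel, which is documented to target the lower field of view):

.defaultWindowPlacement { content, context in
    // No display bounds on visionOS; use a relative position instead.
    WindowPlacement(.utilityPanel)
}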
In RealityView, physics components apply only to rigid solids. How can I simulate the physical behavior of water and cloth?
entity.components.set(PhysicsBodyComponent())
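For reference, the rigid-body setup that does work for me, as a minimal sketch (as far as I can tell, RealityKit has no built-in fluid or cloth solver, so water and cloth would presumably need custom procedural meshes or a custom System):

import RealityKit

// Rigid-body physics: a dynamic body plus a collision shape.
let entity = ModelEntity(mesh: .generateBox(size: 0.1))
entity.components.set(CollisionComponent(shapes: [.generateBox(size: [0.1, 0.1, 0.1])]))
entity.components.set(PhysicsBodyComponent(
    massProperties: .default,
    material: .default,
    mode: .dynamic
))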
I'm building a weather app where users can search locations to get the weather. The problem is that the results only show locations in my country, not globally. For example, I'm in China and I can't search for New York; it just shows nothing. Here's my code:
@Observable
class SearchPlaceManager: NSObject {
    var searchText: String = ""
    let searchCompleter = MKLocalSearchCompleter()
    var searchResults: [MKLocalSearchCompletion] = []

    override init() {
        super.init()
        searchCompleter.resultTypes = .address
        searchCompleter.delegate = self
    }

    @MainActor
    func searchLocation() {
        if !searchText.isEmpty {
            searchCompleter.queryFragment = searchText
        }
    }
}

extension SearchPlaceManager: MKLocalSearchCompleterDelegate {
    func completerDidUpdateResults(_ completer: MKLocalSearchCompleter) {
        withAnimation {
            self.searchResults = completer.results
        }
    }
}
Also, I've tried setting searchCompleter.region = MKCoordinateRegion(center: CLLocationCoordinate2D(latitude: 0, longitude: 0), span: MKCoordinateSpan(latitudeDelta: 180, longitudeDelta: 360)), but it doesn't work.
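For completeness, the workaround I am considering, as an unverified sketch: the completer appears to bias rather than strictly filter by region, so running a full MKLocalSearch on the typed text might surface worldwide matches:

import MapKit

// Run a full search for the typed text instead of relying on
// completer suggestions alone.
func searchGlobally(for query: String) {
    let request = MKLocalSearch.Request()
    request.naturalLanguageQuery = query
    request.resultTypes = [.address, .pointOfInterest]
    MKLocalSearch(request: request).start { response, _ in
        let names = response?.mapItems.compactMap(\.name) ?? []
        print(names)
    }
}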
So I am exporting a .usdc file from Blender that already has some morph animations. The animations play well in Blender, but when I export, I cannot play them in RealityKit or Reality Composer Pro.
Entity.availableAnimations is an empty array.
None of the child objects in the entity hierarchy has an animation library component on it.
Maybe I am exporting it wrong; I have tried multiple combinations, but none seem to work.
Here are my export settings in Blender:
The original file I purchased is an FBX file that has the animation, but when I convert it directly in Reality Converter it doesn't seem to play the animations either.
Topic: Spatial Computing
SubTopic: General
Tags: Reality Converter, RealityKit, Reality Composer Pro, visionOS