I am developing a visionOS app. I load a .usdz file as a RealityKit Entity (for example, a cabbage), and I want the following effect:
When I turn on a desk lamp in the real world near the Entity, the surface of the Entity should respond correctly to the real-world light.
I want an effect like this:
https://www.reddit.com/r/virtualreality/comments/1as01mm/shiny_disco_ball_reflecting_my_room/
I have looked up APIs such as ImageBasedLightComponent and VirtualEnvironmentProbeComponent in RealityKit, and EnvironmentLightEstimationProvider in ARKit, but I do not know how to put them together in code.
It would also be better if shadows responded to the real-world light correctly.
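For reference, here is the kind of setup I have been experimenting with so far. It is only a rough sketch that lights the entity from an HDR image bundled with the app ("StudioHDR" is a placeholder name); the part I cannot work out is how to drive this from the real-world lighting instead.

import RealityKit

// Rough sketch: light an entity with an image-based light from a bundled resource.
// "StudioHDR" is a placeholder; ideally the lighting would come from the real room.
func applyImageBasedLight(to entity: Entity) async throws {
    let environment = try await EnvironmentResource(named: "StudioHDR")

    // Attach the image-based light source to the entity itself.
    entity.components.set(ImageBasedLightComponent(source: .single(environment)))

    // Make the entity's surface actually receive that image-based light.
    entity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: entity))

    // A grounding shadow under the entity (not driven by the real lamp, as far as I can tell).
    entity.components.set(GroundingShadowComponent(castsShadow: true))
}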
Here is my code in visionOS 2.3
NavigationSplitView {
    List {
    }
    .navigationTitle("Passwords")
} detail: {
    Text("Hello")
        .navigationTitle("All")
}
The font size of "Passwords" and "All" is smaller than that of the corresponding titles in the system Passwords app.
Following up on my previous question here: https://developer.apple.com/forums/thread/774262
Having solved the clipping problem, I am now trying to overlay some content in front of the RealityView. However, it looks like any content with transparency does not render in front of the RealityView, while opaque views seem to work; placing content with transparency like glassBackgroundEffect() behind the RealityView in a ZStack causes the entire window to flicker.
Additionally, my SwiftUI attachments placed in front of the stereoscopic image plane are invisible when the user looks at them straight on at 90 degrees. However, as the user looks at them from increasing angles off to the side, the attachments gradually become visible again.
Are these behaviors expected? What is a recommended approach to overlay content in front of a RealityView? Thanks!
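For context, this is roughly the structure I am using (simplified; OverlayControls is a placeholder for my actual overlay content):

import SwiftUI
import RealityKit

// Simplified structure: transparent SwiftUI content layered in front of a RealityView.
// OverlayControls is a placeholder name for my real overlay.
struct StereoContentView: View {
    var body: some View {
        ZStack {
            RealityView { content in
                // Load and add the stereoscopic image plane entity here.
            }

            OverlayControls()
                .glassBackgroundEffect() // transparent content that does not render in front
        }
    }
}

struct OverlayControls: View {
    var body: some View {
        Text("Controls")
            .padding()
    }
}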
I am experiencing an issue with Apple Sign-In on Vision Pro. When I build and run the app from Xcode, everything works fine—after signing in, the app returns to the foreground as expected.
However, when I launch the app directly on Vision Pro (not from Xcode), after completing the sign-in process, the app does not reopen from the background automatically. Instead, it closes, and I have to manually tap the app icon to reopen it.
Has anyone else encountered this issue? Is there a way to ensure the app properly resumes after sign-in without requiring manual intervention?
I have had my Apple Vision Pro headset working with Xcode before, but it has been a while since testing in the headset. Today, Xcode on my M1 Mac Studio could not see my Apple Vision Pro to run code on the headset.
I have updated my headset to the visionOS 2.4 developer beta.
Both Mac and headset are on the same Wi-Fi network (I am typing this on my Mac via Mac Virtual Display).
Headset has developer mode turned on.
Initially I tried the latest macOS (15.3.x) and Xcode (16.2) releases, but Xcode failed to find the headset.
I installed the latest Xcode beta (16.3 beta), but it still failed to find the headset.
I installed the latest developer beta on my Mac (15.4), but neither Xcode (16.2) nor Xcode beta (16.3) can find the headset.
When I try to manage devices in Xcode, my Apple Vision Pro headset does not appear.
Any idea what I am missing?
I noticed that the Time Sensitive Notifications entitlement says it's only for iOS and macOS. But without the entitlement, the time-sensitive toggle doesn't show up in my app's notification settings on visionOS.
When I archive my visionOS app for App Store Connect, the entitlement seems to be stripped out, as it doesn't appear in the entitlement list for the build in App Store Connect.
I'm confused at this point about whether the entitlement is really necessary, since it seems to be needed at least to debug in the simulator. Unfortunately, I don't have a physical device to test it on.
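For reference, this is roughly how I post the notification that relies on the entitlement (standard UserNotifications usage, as far as I can tell; the title, body, and trigger are just placeholders):

import UserNotifications

// Rough sketch of scheduling a notification with the time-sensitive interruption level.
func scheduleTimeSensitiveNotification() async throws {
    let content = UNMutableNotificationContent()
    content.title = "Reminder"
    content.body = "This should break through Focus as time sensitive."
    content.interruptionLevel = .timeSensitive // the part gated by the entitlement

    let trigger = UNTimeIntervalNotificationTrigger(timeInterval: 5, repeats: false)
    let request = UNNotificationRequest(identifier: UUID().uuidString,
                                        content: content,
                                        trigger: trigger)
    try await UNUserNotificationCenter.current().add(request)
}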
I have a scene built in Reality Composer Pro, in which I've added a ParticleEmitter with isEmitting set to false and Loop set to true.
In my app, when I toggle isEmitting to true, there can be a delay of a few seconds before the ParticleEmitter starts.
However, if I programmatically add the emitter in code at that point, it starts immediately.
To be clear, I'm seeing this in the visionOS simulator; I don't have access to a device at this time.
Am I misunderstanding how to control the ParticleEmitter when I need precise control over when it starts?
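For reference, this is roughly how I toggle it at runtime (a sketch; "Sparks" is a placeholder for the name of my emitter entity in the Reality Composer Pro scene):

import RealityKit

// Rough sketch: start a Reality Composer Pro particle emitter at runtime.
// "Sparks" is a placeholder for the emitter entity's name in the scene.
func startEmitting(in scene: Entity) {
    guard let emitterEntity = scene.findEntity(named: "Sparks"),
          var emitter = emitterEntity.components[ParticleEmitterComponent.self] else {
        return
    }

    emitter.isEmitting = true
    // Components are value types, so the modified copy has to be written back.
    emitterEntity.components.set(emitter)
}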
Hi there,
I’m building a workplace experience that requires using the virtual desktop. Is there a way to launch it from my code, so the user doesn’t have to do it manually?
Thanks in advance!
I am a newbie to spatial computing, and I am using ARKit and RealityKit to develop a Vision Pro app.
I want to accomplish the following goal: when the user's hand touches an object (an entity in a RealityView) on the table, the app should present a window. But I do not know how to handle the event where the user's hand touches the object. Should I use the hand-tracking feature and do the computation myself, or is there an API I can use directly?
Thank you!
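One approach I have been considering (I am not sure it is the intended one) is to make the entity an input target that accepts direct hand input and attach a tap gesture; roughly like the following, where the window id "InfoWindow" is a placeholder for my own window group:

import SwiftUI
import RealityKit

// Rough sketch: open a window when a hand directly touches an entity.
// The window id "InfoWindow" is a placeholder.
struct TouchableObjectView: View {
    @Environment(\.openWindow) private var openWindow

    var body: some View {
        RealityView { content in
            let object = ModelEntity(mesh: .generateBox(size: 0.1))
            object.generateCollisionShapes(recursive: true)
            // Accept only direct (hand) input rather than indirect gaze-and-pinch.
            object.components.set(InputTargetComponent(allowedInputTypes: .direct))
            content.add(object)
        }
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { _ in
                    openWindow(id: "InfoWindow")
                }
        )
    }
}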
We are now on Day 12, and my app remains stuck "In Review" with no progress. Despite my repeated attempts to get a real update—including daily support tickets, an expedited review request, a phone call with an Apple representative, and multiple forum posts—nothing has changed.
This is not a normal review delay. Based on backend logs, it is clear that no one has even opened the app for review. It is simply sitting in the queue, ignored, while my users are left without critical updates.
I demand a call back from an App Review manager today. Not a generic email, not another vague "it's still in review" response—I need a real explanation from someone who has actual insight into what is going on.
To reiterate:
My app is one of the most popular and highest-grossing indie apps on visionOS, with thousands of users and hundreds of subscribers.
The app follows a tight update schedule, which your inefficiency has completely disrupted.
This is actively harming my business and user experience.
Every proper channel I have followed has been ignored.
Apple preaches that it supports indie developers, but this level of disregard shows the opposite. If there is a legitimate reason for this delay, I expect clear communication, not silence. If there isn’t, I expect this review to be completed immediately.
Please schedule another call today. I will not accept another day of inaction.
App ID: 6737148404
I am new to spatial computing. I am learning how to use ARKit to capture the environment texture and apply it to a RealityKit ModelEntity on Vision Pro, but I cannot find a demo of how to use EnvironmentLightEstimationProvider.
After checking the documentation, I also have some questions:
EnvironmentProbeAnchor.environmentTexture is an MTLTexture, but EnvironmentResource needs a CGImage. How do I convert an MTLTexture to a CGImage? (Forgive me, I do not know much about Metal or the other frameworks involved, so it would be best if there were code I could copy and paste directly.)
It seems that EnvironmentProbeAnchor only captures the light information around the device. What should I do if I want the light information around the ModelEntity, so that I can apply the environment texture to it?
It would be great if you could provide a code demo showing how to use the new API.
Thank you!
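In case it helps frame the first question, here is the generic Core Image based conversion I have been trying for a plain 2D MTLTexture. I am not sure it applies directly to the probe's texture if that turns out to be a cube map; in that case each face would presumably need to be copied into a 2D texture first.

import CoreImage
import Metal

// Rough sketch: convert a 2D MTLTexture into a CGImage via Core Image.
// Only handles a plain 2D texture; a cube texture would need each face extracted first.
func makeCGImage(from texture: MTLTexture) -> CGImage? {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    guard let ciImage = CIImage(mtlTexture: texture, options: [.colorSpace: colorSpace]) else {
        return nil
    }
    // Metal textures use a top-left origin, so flip vertically for Core Image.
    let flipped = ciImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1)
        .translatedBy(x: 0, y: -ciImage.extent.height))
    let context = CIContext()
    return context.createCGImage(flipped, from: flipped.extent)
}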
Hello,
Let me ask you a question about Apple Immersive Video.
https://www.apple.com/newsroom/2024/07/new-apple-immersive-video-series-and-films-premiere-on-vision-pro/
I am currently considering implementing a feature to play Apple Immersive Video as a background scene in the app I developed, using 3DCG-created content converted into Apple Immersive Video format.
First, I would like to know if it is possible to integrate Apple Immersive Video into an app.
Could you provide information about the required software and the integration process for incorporating Apple Immersive Video into an app?
It would be great if you could also share any helpful website resources.
I am considering creating Apple Immersive Video content and would like to know about the necessary equipment and software for producing both live-action footage and 3DCG animation videos.
As I mentioned earlier, I’m planning to play Apple Immersive Video as a background in the app. In doing so, I would also like to place some 3D models as RealityKit entities and spatial audio elements.
I’m also planning to develop the visionOS app as a Full Space Mixed experience. Is it possible to have an immersive viewing experience with Apple Immersive Video in Full Space Mixed mode? Does Apple Immersive Video support Full Space Mixed?
I’ve asked several questions, and that’s all for now. Thank you in advance!
I’m working on a Vision Pro app using Metal and need to implement multi-pass rendering. Specifically, I want to render intermediate results to a texture, then use that texture in a second pass for post-processing before presenting the final output.
What’s the best approach in visionOS? Should I use multiple render passes in a single command buffer or separate command buffers? Any insights on efficiently handling this in RealityKit or Metal?
Thanks!
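For concreteness, this is the shape of what I have in mind: both passes encoded into a single command buffer, with the first pass rendering into an intermediate texture that the second pass samples. The pipeline states and the final output texture are placeholders for my own setup; on visionOS the final target would come from whatever drawable the compositor provides.

import Metal

// Rough sketch: two render passes in one command buffer.
// scenePipeline, postPipeline, and finalTarget are placeholders for my own setup.
func encodeFrame(device: MTLDevice,
                 commandQueue: MTLCommandQueue,
                 scenePipeline: MTLRenderPipelineState,
                 postPipeline: MTLRenderPipelineState,
                 finalTarget: MTLTexture) {
    // Offscreen texture for the intermediate result.
    let descriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .rgba16Float,
                                                              width: finalTarget.width,
                                                              height: finalTarget.height,
                                                              mipmapped: false)
    descriptor.usage = [.renderTarget, .shaderRead]
    guard let intermediate = device.makeTexture(descriptor: descriptor),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    // Pass 1: render the scene into the intermediate texture.
    let scenePass = MTLRenderPassDescriptor()
    scenePass.colorAttachments[0].texture = intermediate
    scenePass.colorAttachments[0].loadAction = .clear
    scenePass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: scenePass) {
        encoder.setRenderPipelineState(scenePipeline)
        // ... draw scene geometry here ...
        encoder.endEncoding()
    }

    // Pass 2: post-process by sampling the intermediate texture into the final target.
    let postPass = MTLRenderPassDescriptor()
    postPass.colorAttachments[0].texture = finalTarget
    postPass.colorAttachments[0].loadAction = .dontCare
    postPass.colorAttachments[0].storeAction = .store
    if let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: postPass) {
        encoder.setRenderPipelineState(postPipeline)
        encoder.setFragmentTexture(intermediate, index: 0)
        // ... draw a full-screen quad here ...
        encoder.endEncoding()
    }

    commandBuffer.commit()
}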
I am using Unity to develop a visionOS program. Pressing the Digital Crown on the Vision Pro during use exits the Unity space and destroys the models in it, and those models become disconnected from the SwiftUI interface. After pressing the Digital Crown, the system home view appears; pressing it again returns to the app. However, the Unity space cannot be restored, and calling dismissWindow from the SwiftUI interface has no effect, so the window cannot be dismissed. Is there any solution?
I already have an opinion on this (I should never release to a platform without testing on a physical device for that platform), but I wanted to learn from others' experience and expertise and see whether there are any viable options.
My hybrid casual puzzle game is released on the App Store for iOS. (Whew!) Apparently it is compatible with both macOS and visionOS.
I would love to make it available everywhere; however, I am not sure it is best to do so without testing on those physical devices. That could also mean making design adjustments for those devices, having test devices ready, etc., and I would have to upgrade my laptop to Apple silicon.
Has anyone tried this without testing on physical devices? What are your thoughts/best suggestions? Thanks in advance!
Apple App Review Team, what is going on?
Yesterday, I was assured that my app review was being "expedited." Yet, here we are—Day 11—and absolutely nothing has changed. The app remains stuck "In Review" with zero progress.
This is beyond frustrating. It is actively damaging my business, my users, and the trust developers have in the App Store review process. I have followed every proper channel—daily support tickets, expedited review requests, even a direct call with an Apple representative—yet I am still met with the same vague, non-committal responses.
Let me be clear: This is not an acceptable way to handle app reviews. Developers cannot operate under these unpredictable and arbitrary delays. The app is one of the most successful indie apps on visionOS, and this treatment is completely unjustified.
I need real action. Today. Not another canned response, not another “we need additional time”—real progress and real accountability.
Do better!
App ID: 6737148404
Hello,
I'm writing an EntityAction that animates a material base tint between two different colours. However, the colour that actually gets set differs in RGB values from the one requested.
For example, trying to set an end target of R 0.5, G 0.5, B 0.5 results in a value of R 0.735357, G 0.735357, B 0.735357. I can also see that the intermediate tint values during the animation cycle are incorrect versus those being set.
My understanding is that the values of the material base colour are passed as a SIMD4<Float>. Therefore I have a couple of helper extensions to convert a UIColor into this format and to mix between two colours. Note, however, that I don't think the issue is with these functions; even if their outputs were wrong, the final value of the base tint still doesn't match the value being set.
I wondered if this was a colour space issue?
import simd
import RealityKit
import UIKit
typealias Float4 = SIMD4<Float>
extension Float4 {
    func mixedWith(_ value: Float4, by mix: Float) -> Float4 {
        Float4(
            simd_mix(x, value.x, mix),
            simd_mix(y, value.y, mix),
            simd_mix(z, value.z, mix),
            simd_mix(w, value.w, mix)
        )
    }
}

extension UIColor {
    var float4: Float4 {
        var r: CGFloat = 0.0
        var g: CGFloat = 0.0
        var b: CGFloat = 0.0
        var a: CGFloat = 0.0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return Float4(Float(r), Float(g), Float(b), Float(a))
    }
}

struct ColourAction: EntityAction {
    let startColour: SIMD4<Float>
    let targetColour: SIMD4<Float>

    var animatedValueType: (any AnimatableData.Type)? { SIMD4<Float>.self }

    init(startColour: UIColor, targetColour: UIColor) {
        self.startColour = startColour.float4
        self.targetColour = targetColour.float4
    }

    static func registerEntityAction() {
        ColourAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let interpolatedColour = event.action.startColour.mixedWith(event.action.targetColour, by: Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(interpolatedColour)
        }
    }
}
extension Entity {
    func updateColour(from currentColour: UIColor, to targetColour: UIColor, duration: Double) {
        let colourAction = ColourAction(startColour: currentColour, targetColour: targetColour)
        if let colourAnimation = try? AnimationResource.makeActionAnimation(for: colourAction, duration: duration, bindTarget: .material(0).baseColorTint) {
            playAnimation(colourAnimation)
        }
    }
}
The EntityAction can only be applied to an entity with a ModelComponent (because of the material), so it can be called like so:
guard
    let modelComponent = entity.components[ModelComponent.self],
    let material = modelComponent.materials.first as? PhysicallyBasedMaterial
else {
    return
}
let currentColour = material.baseColor.tint
let targetColour = UIColor(red: 0.5, green: 0.5, blue: 0.5, alpha: 1.0)
entity.updateColour(from: currentColour, to: targetColour, duration: 2)
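If it is a colour space issue, one thing I notice is that 0.735357 is roughly what you get when a linear value of 0.5 is encoded as sRGB, which suggests a linear versus sRGB mismatch somewhere. Here is the conversion I am experimenting with before packing the components, converting the UIColor to linear sRGB first; I am not sure this is the intended fix:

import UIKit
import simd

extension UIColor {
    // Rough sketch: produce linear sRGB components before packing them into the SIMD4,
    // in case the tint expects linear values rather than gamma-encoded ones.
    var linearFloat4: SIMD4<Float> {
        guard let linearSpace = CGColorSpace(name: CGColorSpace.linearSRGB),
              let converted = cgColor.converted(to: linearSpace, intent: .defaultIntent, options: nil),
              let components = converted.components, components.count >= 4 else {
            return float4 // fall back to the existing gamma-encoded conversion
        }
        return SIMD4<Float>(Float(components[0]),
                            Float(components[1]),
                            Float(components[2]),
                            Float(components[3]))
    }
}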
Is it possible to create a button in my app that will turn on the spatial personas for the user? Currently the only way I know of turning on spatial personas is by selecting the cube icon in the FaceTime window which is quite clunky for people unfamiliar with the Vision Pro's UI. Any help would be appreciated.
Hi, we would like to create something where you can open multiple volumetric windows and place them in a room. Our biggest issue is that we want these windows to be persistent, so that when I close and reopen the app, the windows are in the same positions. We can't use immersive spaces because we also want to keep the ability to access the Shared Space.
Is it possible to do that with the current features and capabilities? If yes, do you have any advice on how we can achieve it?
The alternative would be if it is possible to open the virtual display in an immersive space, or if we have the possibility of implementing our own virtual display.
Dear Apple App Review Team,
I’m reaching out here as a last resort, as my previous attempts to get a response through the developer support portal have been met with only automated replies.
My app has been stuck in the "In Review" state for 10 days, and unfortunately, this is not the first time. Every time a new feature is added, the review process takes weeks, causing major disruptions to my development and release schedule. Based on backend logs, it appears that while the app is technically "In Review," no one has actually opened it, meaning it's sitting in a queue without active review.
I have submitted daily support tickets and expedited review requests, but I have yet to receive a response from a real person—only the standard copy-paste replies. This lack of transparency and responsiveness makes it extremely difficult to plan and operate efficiently.
Given that this is one of the most profitable indie apps on the visionOS App Store, I believe it deserves a more predictable and efficient review process. These extended delays are not just frustrating but actively harming the app’s growth and user experience.
I urgently request immediate action on this matter. Please do not reply with a generic link directing me to support—I’ve already exhausted that route without success. I need a real response and a clear resolution.
App ID: 6737148404