Hello. I am trying to load my own image-based lighting (IBL) file in a visionOS RealityView. I started from the template code you get when creating a new project with the immersive space set to Full. With the sample file Apple provides, it works. But when I put my own image, in PNG, HEIC, or EXR format, in the same location the example file was in, it doesn't load and the error states:
Failed to find resource with name "SkyboxUpscaled2" in bundle
In this image you can see the file "ImageBasedLight", which is the one that comes with the project and the file "SkyboxUpscaled2" which is my own in the .exr format.
if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
    content.add(immersiveContentEntity)
    do {
        // Load the custom IBL image and attach it as an image-based light.
        let resource = try await EnvironmentResource(named: "SkyboxUpscaled2")
        let iblComponent = ImageBasedLightComponent(source: .single(resource), intensityExponent: 0.25)
        immersiveContentEntity.components.set(iblComponent)
        immersiveContentEntity.components.set(ImageBasedLightReceiverComponent(imageBasedLight: immersiveContentEntity))
    } catch {
        print(error.localizedDescription)
    }
}
Does anyone have an idea why the file is not found? Thanks in advance!
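One thing worth checking is whether the image actually ends up inside the bundle you are loading from. A minimal debugging sketch, assuming the file is named SkyboxUpscaled2 with one of the listed extensions; swap Bundle.main for realityKitContentBundle if the image lives in the RealityKitContent package:

// Hypothetical check: verify the resource was copied into the bundle at build time.
for ext in ["exr", "png", "heic"] {
    if let url = Bundle.main.url(forResource: "SkyboxUpscaled2", withExtension: ext) {
        print("Found \(url.lastPathComponent) in the bundle")
    } else {
        print("SkyboxUpscaled2.\(ext) is not in the bundle; check Target Membership / Copy Bundle Resources")
    }
}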
I am developing an iOS app intended to be used only at a specific location (a campus). In this case, I'd like to use ARGeoAnchor to anchor content across a relatively large space, though in the pedestrian-only areas this is not supported and tracking begins to fail.
Is it possible to additionally use ARReferenceObject to re-localize to a specific location when I am walking in an unsupported area?
(FB13719373)
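For context, geotracking availability can also be queried at runtime before falling back to another strategy. A minimal sketch using ARGeoTrackingConfiguration's availability check; the campus coordinate is a made-up placeholder:

import ARKit
import CoreLocation

// Check whether geotracking is supported at a hypothetical campus coordinate.
let campusCoordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
ARGeoTrackingConfiguration.checkAvailability(at: campusCoordinate) { isAvailable, error in
    if isAvailable {
        print("ARGeoAnchor content can be localized here")
    } else {
        print("Geotracking unavailable here: \(error?.localizedDescription ?? "unknown reason")")
    }
}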
I set up an entity with a collision component on it, but it was hard to target the object with a tap gesture until I increased the radius quite a bit. Now I am unsure if it is too large. Is there a way to visualize these components somehow, maybe even in a running scene?
Also, I find it pretty confusing that the size is given in cm. This made me wonder whether the cm value is affected by the entity's scale at all. In Unity, it's just (local) "units".
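One way to get collision shapes that match the rendered geometry, rather than hand-tuning a radius, is to generate them from the model itself. A minimal sketch; myEntity is a placeholder for the tappable entity:

// Derive collision shapes from the entity's own mesh so the tap target matches the visuals.
myEntity.generateCollisionShapes(recursive: true)
myEntity.components.set(InputTargetComponent())

// Alternatively, size a box shape from the visual bounds (RealityKit units are meters).
let bounds = myEntity.visualBounds(relativeTo: nil)
myEntity.components.set(CollisionComponent(shapes: [.generateBox(size: bounds.extents)]))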
I wanted to create a particle effect using particle images I copied from a Unity project. These images are PNGs with an alpha channel. In Unity, they look gorgeous, but on visionOS they look rather weird, since the alpha channel is not respected: all pixels that are not pitch black render as full white. Is there a way to change this behavior?
Hi all,
I am trying to use ARWorldTrackingConfiguration to find any faces in my scene. However when I query the scene, using the same type of query one would use in ARFaceTrackingConfiguration, I don't get an Entity back. Here's my code:
var entityCollection: Set<Entity> = []

// query1 is defined elsewhere in my code; it queries for entities with a SceneUnderstandingComponent.
let faceEntity = scene.performQuery(query1).first {
    $0.components[SceneUnderstandingComponent.self]?.entityType == .face
}
Every single time faceEntity returns as empty. Any help/pointers would be appreciated!
I am trying to change the color of a usdz asset provided by my designer, but I am unable to do so. Can someone help me with some sample code?
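A minimal sketch of one approach, assuming the asset loads into an Entity hierarchy; note that SimpleMaterial replaces whatever materials the designer authored, so texture detail is lost. The asset name "Chair" is a placeholder:

import RealityKit
import UIKit

// Recursively replace every material in the hierarchy with a flat tinted material.
func recolor(_ entity: Entity, with color: UIColor) {
    if var model = entity.components[ModelComponent.self] {
        model.materials = model.materials.map { _ in SimpleMaterial(color: color, isMetallic: false) }
        entity.components[ModelComponent.self] = model
    }
    for child in entity.children {
        recolor(child, with: color)
    }
}

// Usage:
// let asset = try await Entity(named: "Chair")
// recolor(asset, with: .systemRed)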
When isAutoFocusEnabled is set to true, the entity in the scene keeps shaking. When it is set to false, the camera never focuses. How can I configure the session to avoid both problems?
override func viewDidLoad() {
    super.viewDidLoad()
    arView.session.delegate = self

    guard let arCGImage = UIImage(named: "111", in: .main, with: .none)?.cgImage else { return }
    let arReferenceImage = ARReferenceImage(arCGImage, orientation: .up, physicalWidth: CGFloat(0.1))
    let arImages: Set<ARReferenceImage> = [arReferenceImage]

    let configuration = ARImageTrackingConfiguration()
    configuration.trackingImages = arImages
    configuration.maximumNumberOfTrackedImages = 1
    configuration.isAutoFocusEnabled = false
    arView.session.run(configuration)
}

func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    anchors.compactMap { $0 as? ARImageAnchor }.forEach {
        let anchor = AnchorEntity(anchor: $0)
        let mesh = MeshResource.generateBox(size: 0.1, cornerRadius: 0.005)
        let material = SimpleMaterial(color: .gray, roughness: 0.15, isMetallic: true)
        let model = ModelEntity(mesh: mesh, materials: [material])
        model.transform.translation.y = 0.05
        anchor.children.append(model)
        arView.scene.addAnchor(anchor)
    }
}
I’d like to convert a FileMaker 18 Runtime into a Mac (Catalina) application package.
I found a YouTube video that describes how to convert a shell script into a macOS app.
This is the shell script:
I’ve had no luck adapting it to convert my FileMaker runtime into an app.
Thanks in advance for any advice.
Regards,
Lara
Every time I dismiss an ImmersiveSpace with the progressive ImmersionStyle and open another one, I get only about 30-40% immersion. Can immersion be set to 100% by default in progressive?
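If the app can target visionOS 2, one option to look at is the progressive style variant that takes an immersion range and an initial amount; this is an assumption about API availability, so verify it before relying on it. A minimal sketch, where ImmersiveView and the 0.3...1.0 range are placeholders:

import SwiftUI

@main
struct ProgressiveApp: App {
    // Assumption: visionOS 2's progressive(_:initialAmount:) overload is available.
    @State private var immersionStyle: ImmersionStyle = .progressive(0.3...1.0, initialAmount: 1.0)

    var body: some Scene {
        ImmersiveSpace(id: "Immersive") {
            ImmersiveView() // placeholder for the app's RealityView content
        }
        .immersionStyle(selection: $immersionStyle, in: .progressive(0.3...1.0, initialAmount: 1.0))
    }
}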
I am trying to determine the corners of a RoomPlan-detected wall using the information available in the ARView session's frame, but can't quite figure out what I'm doing wrong. The corners appear to be correct relative to each other, but the wall appears too large when I render it. (I'm also not sure I'm handling the image rotation correctly, which may be compounding the problem.) Here is the code I currently have, along with a sample image and the resulting image when I pass it through the perspective filter. It is close, but it isn't cropping the walls and floors correctly.
func captureSession(_ session: RoomCaptureSession, didChange room: CapturedRoom) {
    for surface in room.walls {
        if let frame = self.arView.session.currentFrame {
            var image: CGImage? = nil
            VTCreateCGImageFromCVPixelBuffer(frame.capturedImage, options: nil, imageOut: &image)

            let wallTransform = surface.transform
            let cameraTransform = frame.camera.transform
            let intrinsics = frame.camera.intrinsics
            let projectionMatrix = frame.camera.projectionMatrix
            let width = surface.dimensions.y
            let height = surface.dimensions.x
            let inverseCameraTransform = simd_inverse(cameraTransform)

            // Wall corners in the wall's local space.
            let wallTopRight = simd_float4(width / 2, height / 2, 0, 1)
            let wallTopLeft = simd_float4(-width / 2, height / 2, 0, 1)
            let wallBottomRight = simd_float4(width / 2, -height / 2, 0, 1)
            let wallBottomLeft = simd_float4(-width / 2, -height / 2, 0, 1)

            // Corners in world space.
            let worldTopRight = wallTransform * wallTopRight
            let worldTopLeft = wallTransform * wallTopLeft
            let worldBottomRight = wallTransform * wallBottomRight
            let worldBottomLeft = wallTransform * wallBottomLeft

            // Corners in camera/clip space.
            let cameraTopRight = projectionMatrix * inverseCameraTransform * worldTopRight
            let cameraTopLeft = projectionMatrix * inverseCameraTransform * worldTopLeft
            let cameraBottomRight = projectionMatrix * inverseCameraTransform * worldBottomRight
            let cameraBottomLeft = projectionMatrix * inverseCameraTransform * worldBottomLeft

            // Corners in image space.
            let imageTopRight = intrinsics * simd_float3(cameraTopRight.x / cameraTopRight.w, cameraTopRight.y / cameraTopRight.w, cameraTopRight.z / cameraTopRight.w)
            let imageTopLeft = intrinsics * simd_float3(cameraTopLeft.x / cameraTopLeft.w, cameraTopLeft.y / cameraTopLeft.w, cameraTopLeft.z / cameraTopLeft.w)
            let imageBottomRight = intrinsics * simd_float3(cameraBottomRight.x / cameraBottomRight.w, cameraBottomRight.y / cameraBottomRight.w, cameraBottomRight.z / cameraBottomRight.w)
            let imageBottomLeft = intrinsics * simd_float3(cameraBottomLeft.x / cameraBottomLeft.w, cameraBottomLeft.y / cameraBottomLeft.w, cameraBottomLeft.z / cameraBottomLeft.w)

            let topRight = CGPoint(x: CGFloat(imageTopRight.x), y: CGFloat(imageTopRight.y))
            let topLeft = CGPoint(x: CGFloat(imageTopLeft.x), y: CGFloat(imageTopLeft.y))
            let bottomRight = CGPoint(x: CGFloat(imageBottomRight.x), y: CGFloat(imageBottomRight.y))
            let bottomLeft = CGPoint(x: CGFloat(imageBottomLeft.x), y: CGFloat(imageBottomLeft.y))

            if let image {
                let filter = CIFilter.perspectiveCorrection()
                filter.inputImage = CIImage(image: UIImage(cgImage: image))
                filter.topRight = topRight
                filter.topLeft = topLeft
                filter.bottomRight = bottomRight
                filter.bottomLeft = bottomLeft
                let transformedImage = filter.outputImage
                if let transformedImage {
                    let context = CIContext()
                    if let outputImage = context.createCGImage(transformedImage, from: transformedImage.extent) {
                        let wall = Wall(id: surface.identifier, image: outputImage, surface: surface)
                        self.walls.append(wall)
                    }
                }
            }
        }
    }
}
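As an aside, ARCamera can project a world-space point directly into captured-image pixel coordinates, which avoids combining projectionMatrix and intrinsics by hand. A minimal sketch; the helper name is hypothetical and assumes the world-space corners computed above:

// Hypothetical helper: project a world-space wall corner into captured-image pixel coordinates.
func imagePoint(for worldCorner: simd_float4, in frame: ARFrame) -> CGPoint {
    let imageSize = CGSize(width: CVPixelBufferGetWidth(frame.capturedImage),
                           height: CVPixelBufferGetHeight(frame.capturedImage))
    let point = simd_float3(worldCorner.x, worldCorner.y, worldCorner.z)
    // .landscapeRight matches the native orientation of the captured image buffer.
    return frame.camera.projectPoint(point, orientation: .landscapeRight, viewportSize: imageSize)
}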
How do I set the scale unit of an Entity in Reality Composer Pro? For example, if the scale value is 1 meter, the Entity should display at a size of 1 meter when placed in a RealityView.
If the unit of scale cannot be set in Reality Composer Pro, is there a way to specify it in code so that the Entity is displayed in meters when added to a RealityView?
Thank you
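For reference, RealityKit's world unit is the meter, so a unit mismatch is usually corrected by scaling the loaded entity in code. A minimal sketch, assuming this runs inside a RealityView make closure and that the asset "MyModel" was authored in centimeters (both are placeholders):

// Assumption: the source asset was authored in centimeters, so scale it down to meters.
if let model = try? await Entity(named: "MyModel", in: realityKitContentBundle) {
    model.setScale(SIMD3<Float>(repeating: 0.01), relativeTo: nil) // 100 cm -> 1 m
    content.add(model)
}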
We have an intermittent issue where, when ARKitSession.run() is called, monitorSessionEvents() receives .paused and the session never transitions to .running. If we exit the Immersive Space and call ARKitSession.run() again, it works fine.
Unfortunately this is very difficult to manage in the flow of our app.
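For anyone unfamiliar with the pattern being referenced, here is a simplified sketch of a monitorSessionEvents-style loop over ARKitSession.events; the exact handling of provider state is an assumption and trimmed down from Apple's sample:

// Simplified sketch of observing ARKitSession events and data-provider state changes.
func monitorSessionEvents(for session: ARKitSession) async {
    for await event in session.events {
        switch event {
        case .dataProviderStateChanged(_, let newState, let error):
            print("Data provider state changed to \(newState)")
            if let error {
                print("Data provider stopped with error: \(error)")
            }
        case .authorizationChanged(let type, let status):
            print("Authorization for \(type) changed to \(status)")
        default:
            break
        }
    }
}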
Hello!
Is it possible to turn on hand pass-through (hand cutout, not sure of the correct name: the real hands from the camera) in WebXR when in immersive-vr with the hand-tracking feature enabled? I only see my hands when I disable the hand-tracking feature in WebXR, but then I don't get the joint positions and orientations.
I am trying to map the 3D skeleton joint positions of an ARBodyAnchor to the real body on the camera image.
I know I could simply use the "detectedBody" of the ARFrame, which would already deliver the normalized 2D position of each joint, but what I am mostly interested in is the z-axis (the distance of each joint to the camera).
I am starting an ARBodyTrackingConfiguration, setting the world alignment to ARWorldAlignmentCamera (in which case the camera transform is an identity matrix) and multiplying each joint transform in model space (via modelTransformForJointName:) with the transform of the ARBodyAnchor. I then tried many different ways to get the joints to line up with the image, for example by multiplying the transforms with the projectionMatrix of the ARCamera. But whatever I do, it never lines up correctly.
For example, there doesn't really seem to be a scale factor in the projectionMatrix or the ARBodyAnchor transform: no matter the distance of the camera to the detected body, the scale of the body is always the same.
Which means I am missing something important, and I haven't figured out what. Does anyone have an example of how I can get the body aligned to the camera image (or get the distance to each joint in any other way)?
Thanks!
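One way to get the per-joint distance directly, without going through projection, is to compute each joint's world position and measure it against the camera position. A minimal sketch; the function name and the context it would be called from are assumptions:

// Sketch: distance from the camera to one skeleton joint, in meters.
func distanceToJoint(named jointName: ARSkeleton.JointName,
                     bodyAnchor: ARBodyAnchor,
                     frame: ARFrame) -> Float? {
    guard let jointModelTransform = bodyAnchor.skeleton.modelTransform(for: jointName) else { return nil }
    // Joint transform in world space = body-anchor transform * joint transform in body space.
    let jointWorldTransform = bodyAnchor.transform * jointModelTransform
    let jointPosition = simd_make_float3(jointWorldTransform.columns.3)
    let cameraPosition = simd_make_float3(frame.camera.transform.columns.3)
    return simd_distance(jointPosition, cameraPosition)
}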
I'm trying to control the LOD of textures in a Vision Pro app. With the default image node in Reality Composer Pro, the UVs are correct but the LOD is not what I want; I would like to have control over it. I see there is a node called "RealityKitTexture2DLOD", but as soon as I try to use it the UVs are all messed up. Am I missing something? Do I need to do something specific to use this node?
I tried to use the "Place 2D" and "UsdTransform2d" nodes but could not get the texture to align.
Any help appreciated.
I was executing some code from the sample Incorporating real-world surroundings in an immersive experience:
func processReconstructionUpdates() async {
    for await update in sceneReconstruction.anchorUpdates {
        let meshAnchor = update.anchor
        guard let shape = try? await ShapeResource.generateStaticMesh(from: meshAnchor) else { continue }
        switch update.event {
        case .added:
            let entity = ModelEntity()
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision = CollisionComponent(shapes: [shape], isStatic: true)
            entity.components.set(InputTargetComponent())
            entity.physicsBody = PhysicsBodyComponent(mode: .static)
            meshEntities[meshAnchor.id] = entity
            contentEntity.addChild(entity)
        case .updated:
            guard let entity = meshEntities[meshAnchor.id] else { continue }
            entity.transform = Transform(matrix: meshAnchor.originFromAnchorTransform)
            entity.collision?.shapes = [shape]
        case .removed:
            meshEntities[meshAnchor.id]?.removeFromParent()
            meshEntities.removeValue(forKey: meshAnchor.id)
        }
    }
}
I would like to toggle the occlusion mesh that is available in the dev tools below, but programmatically: I'd like a button that activates and deactivates it.
I was checking .showSceneUnderstanding, but it does not seem to work in visionOS; I get the error "'ARView' is unavailable in visionOS" when I try the approach from Visualizing and Interacting with a Reconstructed Scene.
In visionOS mixed mode, I place a virtual object on the floor and a real chair in front of it, but the chair does not occlude the virtual object, which makes the effect unrealistic. How can I make chairs and other real-world objects cover virtual objects?
Hi all, I need some help debugging some code I wrote. Just as a preface, I'm an extremely new VR/AR developer and also very new to using ARKit + RealityKit. So please bear with me :) I'm just trying to make a simple program that will track an image and place an entity on it. The image is tracked correctly, but the moment the program recognizes the image and tries to place an entity on it, the program crashes. Here’s my code:
VIEWMODEL CODE:
@Observable class ImageTrackingModel {
    var session = ARKitSession() // ARKit session used to manage AR content
    var imageAnchors = [UUID: Bool]() // Tracks whether specific anchors have been processed
    var entityMap = [UUID: ModelEntity]() // Maps anchors to their corresponding ModelEntity
    var rootEntity = Entity() // Root entity to which all other entities are added

    let imageInfo = ImageTrackingProvider(
        referenceImages: ReferenceImage.loadReferenceImages(inGroupNamed: "referancePaper")
    )

    init() {
        setupImageTracking()
    }

    func setupImageTracking() {
        if ImageTrackingProvider.isSupported {
            Task {
                try await session.run([imageInfo])
                for await update in imageInfo.anchorUpdates {
                    updateImage(update.anchor)
                }
            }
        }
    }

    func updateImage(_ anchor: ImageAnchor) {
        let entity = ModelEntity(mesh: .generateSphere(radius: 0.05)) // THIS IS WHERE THE CODE CRASHES
        if imageAnchors[anchor.id] == nil {
            rootEntity.addChild(entity)
            imageAnchors[anchor.id] = true
            print("Added new entity for anchor \(anchor.id)")
        }
        if anchor.isTracked {
            entity.transform = Transform(matrix: anchor.originFromAnchorTransform)
            print("Updated transform for anchor \(anchor.id)")
        }
    }
}
APP:
@main
struct MyApp: App {
    @State var session = ARKitSession()
    @State var immersionState: ImmersionStyle = .mixed

    private var viewModel = ImageTrackingModel()

    var body: some Scene {
        WindowGroup {
            ModeSelectView()
        }

        ImmersiveSpace(id: "appSpace") {
            ModeSelectView()
        }
        .immersionStyle(selection: $immersionState, in: .mixed)
    }
}
Content View:
RealityView { content in
    Task {
        viewModel.setupImageTracking()
    }
} // I'm seriously so clueless on how to use this view
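For what it's worth, a minimal sketch of how the RealityView above could be wired to the view model; it assumes the view has access to the same ImageTrackingModel instance, and only rootEntity comes from the original post:

RealityView { content in
    // Add the model's root entity once; tracked-image entities are attached to it later.
    content.add(viewModel.rootEntity)
}
.task {
    // Start image tracking outside the make closure so it runs when the view appears.
    viewModel.setupImageTracking()
}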
I am fairly new to 3D model rendering and do not know where to start.
I am trying to, ideally with ARKit & RealityKit or SceneKit, do a scan of an environment. This includes:
Applying realistic textures to the model.
Being able to save it as a .usdz file (to be able to open it within the app itself).
Once it is saved, doing post-processing measurements within the model.
I would prefer to accomplish this using a mesh instead of the point cloud used in Apple's sample project. Would this be doable using Apple's APIs on a mobile device, or would it be necessary to use a third-party program?
I have managed to create a USDZ file using SceneKit's scene.write(to:delegate:) method. However, the saved file is a "single object", and it is not possible to use raycasting to do post-processing measurements in the model.
I'm trying to use a RealityView with attachments and this error is being thrown. Am I using the RealityView wrong? I've seen other people use a RealityView with attachments in visionOS... Please let this be a bug...
RealityView { content, attachments in
    contentEntity = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.5))
    content.add(contentEntity!)
} attachments: {
    Text("Hello!")
}
.task {
    await loadImage()
    await runSession()
    await processImageTrackingUpdates()
}