I'm developing a 3D scanner that works on an iPad (6th gen, 12-inch).
Photogrammetry with ObjectCaptureSession was successful, but my other attempts were not.
I've tried photogrammetry with URL inputs; the pictures come from AVCapturePhoto.
It is strange: if the metadata is not replaced, photogrammetry finishes, but it seems no depth data or gravity info is used (depth and gravity are stored in separate files). If the metadata is injected, the attempt fails.
This time I tried photogrammetry with a PhotogrammetrySample sequence, and it also failed.
The settings are:
camera: back LiDAR camera,
image format: kCVPixelFormatType_32BGRA (failed with a crash) or HEVC (just failed)
depth format: kCVPixelFormatType_DisparityFloat32 or kCVPixelFormatType_DepthFloat32
photo settings: isDepthDataDeliveryEnabled = true, isDepthDataFiltered = false, embedded = true
I wonder whether iPad supports Photogrammetry with PhotogrammetrySamples.
I've already tested some sample code provided by Apple:
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app
https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
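For reference, this is roughly how I build the sample sequence (a simplified sketch; the function names and the captures array are my own placeholders, and the sample-sequence initializer is the one I've mostly seen documented for macOS, which is exactly the part I'm unsure iPad supports):

import Foundation
import RealityKit
import CoreVideo
import simd

// Simplified sketch of my sample construction. `captures` is my own array holding the
// color buffer, depth buffer, and gravity vector saved at capture time (placeholder names).
func makeSamples(from captures: [(image: CVPixelBuffer, depth: CVPixelBuffer, gravity: simd_double3)]) -> [PhotogrammetrySample] {
    captures.enumerated().map { index, capture in
        var sample = PhotogrammetrySample(id: index, image: capture.image)
        sample.depthDataMap = capture.depth   // kCVPixelFormatType_DepthFloat32
        sample.gravity = capture.gravity      // captured from CoreMotion at shutter time
        return sample
    }
}

func reconstruct(samples: [PhotogrammetrySample], outputURL: URL) throws {
    // This initializer takes a Sequence of PhotogrammetrySample; whether it is
    // available on iPad is the open question above.
    let session = try PhotogrammetrySession(input: samples,
                                            configuration: PhotogrammetrySession.Configuration())
    try session.process(requests: [.modelFile(url: outputURL)])
}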
What should I do to make Photogrammetry successful?
Hi. I want to make an iOS app where, when using the camera in AR, it can show annotations for places around me. ARGeoAnchor seems relevant, but I don't have any idea how to start. Can anyone give me some keywords? I can use MapKit to search, but I don't know how to map the results into AR.
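So far the closest thing I've found looks like the sketch below (untested, just pieced together from the ARGeoAnchor documentation; the function and variable names are mine):

import ARKit
import MapKit
import RealityKit

// Sketch: place an AR annotation at a coordinate returned by a MapKit search.
// Geotracking needs ARGeoTrackingConfiguration and only works on supported
// devices and regions, so availability has to be checked first.
func addGeoAnnotation(for mapItem: MKMapItem, to arView: ARView) {
    ARGeoTrackingConfiguration.checkAvailability { available, _ in
        guard available else { return }
        DispatchQueue.main.async {
            arView.session.run(ARGeoTrackingConfiguration())
            let geoAnchor = ARGeoAnchor(coordinate: mapItem.placemark.coordinate)
            arView.session.add(anchor: geoAnchor)

            // Attach some visible content (a placeholder sphere here) to the geo anchor.
            let anchorEntity = AnchorEntity(anchor: geoAnchor)
            anchorEntity.addChild(ModelEntity(mesh: .generateSphere(radius: 0.2)))
            arView.scene.addAnchor(anchorEntity)
        }
    }
}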
Is it possible to render a Safari-based webview in full immersive space, so an app can show web pages there?
Suppose I want to use the Vision Pro device in multiple rooms in my home.
I wore the device when I entered my home, checked some notifications on it, and closed the apps. With the device still on my head, I move to my bedroom. Now I want to open some other application without removing the headset and putting it on again. Is this possible?
Hi,
What are the limitations and capabilities of visionOS? I cannot find answers to the questions I have.
Let's say you have some USDZ files stored in a cloud service, there are so many of them that the app would be huge if you put them in assets. You want to fetch the one you are interested in and show it while an app is running. Is it possible to load USDZ files at runtime from the network?
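For context, the kind of flow I have in mind is something like this (just a sketch assuming the file is downloaded to a local URL first; names are placeholders):

import Foundation
import RealityKit

// Sketch: download a .usdz from the network to a local file, then load it at runtime.
func loadRemoteUSDZ(from remoteURL: URL) async throws -> Entity {
    let (tempURL, _) = try await URLSession.shared.download(from: remoteURL)
    // Give the temp file a .usdz extension so RealityKit recognizes the format.
    let localURL = tempURL.deletingPathExtension().appendingPathExtension("usdz")
    try? FileManager.default.removeItem(at: localURL)
    try FileManager.default.moveItem(at: tempURL, to: localURL)
    return try await Entity(contentsOf: localURL)
}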
Is there a limit to how many objects can be visible at once? Let's say I am in an open space, with no walls. I want to place 100 3D objects somewhere in space. Is it possible? What if I placed 500, 1000?
Is there a way to save the anchor point of the object? I want to open the app again and have an object in the same place I left it. I would like to arrange my space and have objects always in the same spots.
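If it helps to make the question concrete, I imagine something along these lines (a sketch based on the visionOS ARKit world-anchor API; the type and function names are placeholders and I haven't verified the persistence behavior myself):

import ARKit
import simd

// Sketch: pin an object's placement with a WorldAnchor so it can be restored later.
final class PlacementStore {
    let session = ARKitSession()
    let worldTracking = WorldTrackingProvider()

    func pinObject(at transform: simd_float4x4) async throws {
        try await session.run([worldTracking])
        let anchor = WorldAnchor(originFromAnchorTransform: transform)
        try await worldTracking.addAnchor(anchor)
        // Save anchor.id (a UUID) alongside the object, then on the next launch match it
        // against the anchors that arrive through worldTracking.anchorUpdates.
    }
}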
How does the OS behave if objects are in different rooms? Is it possible to walk around, visit different rooms, and have objects anchored there? Would it behave like real objects?
Is it possible to color a plane? Let's say there is a wall and it's black. I want this wall to be orange. Is it possible?
When I call queryDeviceAnchor in my Billboard system (similar to the Diorama sample app), I get transform updates, but I'm unsure how to process them.
Is it a bug that I receive these updates? The documentation says that ARKit data is only provided in a Full Space, so I would expect this not to work at all.
But if this is the case, why am I getting deviceAnchor values in this situation?
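For context, this is roughly how I'm consuming the transform in the system (a sketch modeled on the Diorama sample; BillboardComponent is my own marker component and the names are mine):

import ARKit
import QuartzCore
import RealityKit

struct BillboardComponent: Component {}   // my marker component

// Sketch: rotate every billboarded entity to face the device position each frame.
// (The system is registered elsewhere with BillboardSystem.registerSystem().)
struct BillboardSystem: System {
    static let query = EntityQuery(where: .has(BillboardComponent.self))

    private let session = ARKitSession()
    private let worldTracking = WorldTrackingProvider()

    init(scene: RealityKit.Scene) {
        Task { try? await session.run([worldTracking]) }
    }

    func update(context: SceneUpdateContext) {
        guard let deviceAnchor = worldTracking.queryDeviceAnchor(atTimestamp: CACurrentMediaTime()) else { return }
        let device = deviceAnchor.originFromAnchorTransform.columns.3
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            // May need flipping depending on which axis the model treats as "forward".
            entity.look(at: [device.x, device.y, device.z],
                        from: entity.position(relativeTo: nil),
                        relativeTo: nil)
        }
    }
}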
I want to open a view in my app that contains a Model3D view, and I want that object to rotate continuously around the Y axis while it is visible. Is it possible to animate the rotation of a Model3D view?
I've tried this code, but the object just sits there and doesn't rotate.
import RealityKit
import RealityKitContent
import SwiftUI

struct QuantumComputerArea: View {
    @State var degreesRotating = 0.0

    var body: some View {
        VStack {
            Model3D(named: "quantumComputer") { phase in
                switch phase {
                case .empty:
                    ProgressView()
                case let .failure(error):
                    Text(error.localizedDescription)
                case let .success(model):
                    model
                        .resizable()
                        .scaledToFit()
                        .offset(x: -75, y: 0)
                        .rotation3DEffect(.degrees(degreesRotating), axis: (x: 0, y: 1, z: 0))
                @unknown default:
                    fatalError()
                } //phase
            } //Model3D
            .onAppear {
                withAnimation(Animation.linear(duration: 10).repeatForever(autoreverses: false)) {
                    degreesRotating = 360
                }
            }
        } //VStack
    } //View
} //View
I'm probably missing something simple, but if anyone has any suggestions (including using a RealityView) I'd be grateful for the advice.
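One alternative I've been considering (just a sketch, not a confirmed fix) is to drive the angle explicitly from a TimelineView so nothing depends on the implicit animation reaching Model3D:

import RealityKit
import SwiftUI

// Sketch: compute the rotation angle from the timeline's date each frame.
struct SpinningModel: View {
    var body: some View {
        TimelineView(.animation) { context in
            let seconds = context.date.timeIntervalSinceReferenceDate
            let angle = seconds.truncatingRemainder(dividingBy: 10) / 10 * 360   // one turn per 10 s
            Model3D(named: "quantumComputer") { model in
                model
                    .resizable()
                    .scaledToFit()
                    .rotation3DEffect(.degrees(angle), axis: (x: 0, y: 1, z: 0))
            } placeholder: {
                ProgressView()
            }
        }
    }
}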
I am new to visionOS development, just slowly figuring out the difference in immersion styles to figure out how I want my app to behave.
It seems that when you use a progressive immersive space the minimum immersion level (set via the digital crown) is not 0? Meaning, there is no way to go from mixed to full by using the Digital Crown. Even when I try to set it to 0 (such as in the Destination Video sample), it pops back up to around 30-40%, and I always see the background. Is this expected behavior, or are there some settings that allow me to change this minimum immersion level?
Further, in the video 'Meet ARKit for spatial computing', it is stated that to get access to ARKit tracking data you must use a 'Full Space', not the 'Shared Space'. This wording is confusing to me. Is an ImmersiveSpace set to the .mixed (or .progressive) immersion style still a 'Full Space' (because it isn't in the shared space, with other apps)? OR, is ARKit only available in an ImmersiveSpace with the .full immersion style? Just feels like maybe 'full' is being used in two different ways here...
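For reference, this is how I'm declaring the space right now (simplified; the content view name is a placeholder):

import SwiftUI

@main
struct ImmersionTestApp: App {
    // Starts progressive; the Digital Crown then scales immersion within this style.
    @State private var style: ImmersionStyle = .progressive

    var body: some Scene {
        ImmersiveSpace(id: "viewer") {
            ImmersiveContentView()   // placeholder for my actual content view
        }
        .immersionStyle(selection: $style, in: .mixed, .progressive, .full)
    }
}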
Thanks in advance,
-pj
Hi folks!
I have been working with a team on a Vision Pro app using Reality Composer Pro. One thing we have found is that multiple developers editing the RC Pro scene is a continual problem, similar to when multiple developers edit a storyboard.
RC Pro maintains a SceneMetadataList.json file that indexes the file contents of the project and is updated even as the scene hierarchy is opened and closed, not to mention on other changes to scene content. We are getting frequent version control conflicts on this file as we each make changes and edits to the scene, or even just browse the scene without making any substantive changes.
It seems like it would be safe to add the SceneMetadataList.json file in a RC Pro project to .gitignore. Is that recommended? Any downsides to that?
I am running a modified RoomPlan app, and in my test environment I get two ARSessions active, sometimes more. It appears that the first one is created by SceneKit because it is related to ARSCNView. Who controls that, and what gets processed through it? I noticed that I get a lot of session interruptions from sensor failures when I am doing world tracking, and the first one happens almost immediately.
When the room-capture delegates fire up, I start getting images via a second session that is collecting images. How do I tell which session is the SceneKit session and which one is the RoomCapture session on the fly when a callback comes through the delegate? Is there a difference in the object descriptor that I can use as a differentiator? Relying on the address of the ARSession buffer being different is okay if you get your timing right.
It wasn't clear from any of the documentation that there would be TWO or more ARSessions delivering data through the delegates. The books on the use of ARKit are not much help in determining the partition of responsibilities between the origins, and the highly fragmented documentation in the developer library doesn't clearly delineate which data is delivered through which delegate.
Can someone give me some guidance here? Are there sources for CLEAR documentation of what is delivered via which delegate for the various interfaces?
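Right now the best differentiator I've come up with is identity comparison against sessions I already hold references to, something like this sketch (the property names are from my own code):

import ARKit
import RoomPlan
import SceneKit

final class CaptureCoordinator: NSObject, ARSessionDelegate {
    // References I keep when setting things up (my own properties).
    var sceneView: ARSCNView?
    var roomCaptureView: RoomCaptureView?

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        if session === roomCaptureView?.captureSession.arSession {
            // Frames delivered by the RoomPlan capture session.
        } else if session === sceneView?.session {
            // Frames delivered by the ARSCNView's own session.
        }
    }
}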
I tried to show a spatial photo in my application with SwiftUI's Image, but it only shows a flat version of it, even on Vision Pro.
How can I show spatial photos to users?
Are there any options for this?
Hi guys,
if you've started using Vision Pro, I'm sure you've already found some limitations. Let's join forces and make feature requests. When creating Feedback, a request from one person may not get any attention from Apple, but if more of us make the same request, we might just push those ideas through. Feel free to add your ideas, and don't forget to create Feedback:
app windows can only be moved forward to a distance of about 20ft/6m. I'm pretty sure some users would like to push a window as far as a few miles away and make it large enough to still be legible. This would be very interesting, especially when using Environments and the 360-degree view. I really want to put some apps up in the sky above the mountains and around me, even those iOS apps just made compatible with Vision Pro.
when capturing the screen, I always get the message "Video capture not possible due to insufficient lighting". Why? I have an Environment loaded and extended 360 degrees with some apps opened, so there is no need for external lighting (at least I think it's not needed). I just want to capture what I see. Imagine creating tutorials, recording lessons for various subjects, etc. Actual Vision Pro users might prefer loading their own environments and setting up apps in the spatial domain, but for those who don't have the device yet, or when creating videos to be watched on antique 2D computer screens, it may be useful to create 2D videos this way.
3D video recording is not very good; it's kind of shaky, not when Vision Pro is static, but when walking and especially when turning the head left/right/up/down (even relatively slowly). I think the hardware should be able to capture and create nice, smooth video. It's possible that Apple just designed a simple camera app and wants to give developers a chance to create a better Camera app, but it still would be nice to have something better out of the box.
I would like to be able to walk through Environments. I understand the safety purpose of the see-through effect, so users don't hit any obstacles, but perhaps obstacles could be detected, and when the user gets within 6ft/2m of an obstacle it could first present a warning (there is already "You are close to an object") and then make the surroundings visible. But if there are no obstacles (the user can be in a large space and can place a tape or a thread around the safe area), I should be able to walk around and take a look inside that crater on the Moon.
We need Environments, Environments, Environments, and yet more of them. I was hoping for hundreds, so we could even pick some of them and use them in our apps, like games where you want to set up a specific environment.
Well, that's just a beginning and I could go on and on and on, but tell me what you guys think.
Regards and enjoy new virtual adventure!
Robert
I have been digging into learning shader graphs by watching Unity shader graph content, cause lots of the same concepts apply.
One thing I noticed was that in Unity, each node in the shader graph has a little preview. I don't think this exists in Reality Composer Pro, but is there any way to mimic it (like, can I hook up a node that allows me to debug the graph at that point)?
If not, I'm happy to just file a feedback about it, but just thought I'd ask!
I am struggling to figure out how to make a shader to animate each vertex of a model separately using noise. I watched a video on how to do this in Unity, but I think something must be different with how Reality Composer Pro handles the noise nodes?
For example, in this graph I just hooked up the noise node directly to the geometry modifier:
In my output you can see the plane is adjusted per-vertex using the noise node. My goal would be to animate this like waves, by moving the noise.
So in this graph I use time with sin to adjust the UV of the noise. This seems to change the noise node to output a single value (I guess that makes sense, since I modify the UV, it results in a single value, at that UV in the noise map). So then, I take that as the Y value and put it back into the geometry modifier. But now it doesn't work per-vertex, it moves the whole model up and down (based on the single value coming out of the noise map).
How do I make this apply to each vertex of the model individually?
This is an example of the output I want in Unity, the plane is being adjusted per-vertex by a scrolling 2d noise node:
Hello, I'm interested in using the iOS on-device object capture API for photogrammetry, however I would like to integrate it in a web app.
I understand that web apps cannot usually access system-level APIs, so I am unsure whether this would be feasible to implement. I would greatly appreciate any pointers in the right direction.
Thank you!
Why does PhotogrammetrySession.isSupported return true only if Object Capture is supported?
It would be great if you could use PhotogrammetrySession on iOS devices without LiDAR and feed it a folder of pictures to make a 3D model.
Thanks!
Is it possible to use an image sequence, .mov or sprite sheet as a node source for a custom material in Reality Composer Pro?
I have noticed that in the particle emitter, the magic preset uses a 4x4 sprite sheet as a particle source. Can this be done within the shader graph for the diffuse or normal slot?
I'm following the Meet Reality Composer Pro walkthrough and ran into something that didn't function as expected.
When I got to the step where I add five "Bird_With_Audio.usda" references to the scene, I found they did not play audio. After some trial and error, I found that Preview > Resource in each of their Spatial Audio items was set to "None." If I click the dropdown menu, I see several "Bird_Calls" groups to pick from.
I checked the original Bird_With_Audio.usda that I had created, and the "Bird_Calls" audio group was correctly assigned and worked. I tried dragging a sixth Bird_With_Audio into the scene and confirmed that the Spatial Audio item suddenly empties, rendering the bird silent.
I was able to go through each of the five birds and set their Spatial Audio Resource to Bird_Calls, and the group worked like the video demonstrates.
While this fixed the issue, as a beginner I'd like to know why this happened. It doesn't seem right that I would build an item and then have to re-attach its sounds when I place it in the main scene. So… where did I mess up?
Currently my visionOS app customizes immersive mode as a full 360-degree environment, and I'm looking for a way to adjust the level of immersion the way Apple's built-in immersive mode does.
Hi!
I think this should be a pretty normal usage of ARKit / RealityKit
I have a static mesh for my environment, that I want to have static collision properties.
My options for making this interact with dynamic bodies are:
ShapeResource.generateConvex(...) -- which overshoots my shape dramatically.
Entity.generateCollisionShapes(...) which also overshoots.
I notice additional APIs around ShapeResource -- ShapeResource.generateStaticMesh(positions:faceIndices:) seems to be exactly what I need.
So far, I haven't been able to invoke this successfully to set my collision box.
Questions:
Is this not a completely normal thing for developers to want to do? Why is there no support for this out of the box in RealityKit/ARKit?
To support this in my app, everywhere I've read has said I need to parse the .obj of my terrain manually, and find triangulated faces and pipe them into this function. That feels like a very standardized process -- and given that RealityKit is already forcing me to use .usdz, why should this not be a part of the SDK?
Regardless, I triangulated my terrain mesh and have been working on parsing code to get the positions and faceIndices for this setup (as an extension on Entity).
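For reference, the parsing code I've been sketching looks roughly like this (assumptions: the model is already triangulated, fits in 16-bit indices, and the MeshBuffer accessors behave the way I think they do):

import RealityKit

extension Entity {
    // Sketch: build a static-mesh collision shape from this entity's ModelComponent.
    func applyStaticMeshCollision() async throws {
        guard let model = components[ModelComponent.self] else { return }

        var positions: [SIMD3<Float>] = []
        var faceIndices: [UInt16] = []

        for meshModel in model.mesh.contents.models {
            for part in meshModel.parts {
                let base = UInt16(positions.count)   // offset for this part's indices
                positions.append(contentsOf: part.positions.elements)
                if let triangles = part.triangleIndices?.elements {
                    faceIndices.append(contentsOf: triangles.map { UInt16($0) + base })
                }
            }
        }

        let shape = try await ShapeResource.generateStaticMesh(positions: positions,
                                                               faceIndices: faceIndices)
        components.set(CollisionComponent(shapes: [shape]))
    }
}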
Is this the right approach? Am I missing something more obvious?
Thanks,
Justin