Post not yet marked as solved
I'm really excited about the Object Capture APIs being moved to iOS, and the complex UI shown in the WWDC session.
I have a few unanswered questions:
Where is the sample code available from?
Are the new Object Capture APIs on iOS limited to certain devices?
Can we capture images from the front-facing camera?
The sample application and code do not seem to be available in the WWDC app or in the documentation alongside the other Object Capture samples. Where and when will this be released?
The video mentions that developers can download and work off of a sample app that implements Object Capture for iOS . . .
How can we download it?
Thanks!
Can Object Capture for iOS send its output to the Robot Operating System (ROS) as a point cloud?
I followed the instructions in this session and tried to write a demo with the Object Capture API. But an MTLDebugRenderCommandEncoder assertion fails every time I call session.startDetecting(), just after the bounding box shows up.
The error shows:
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5780: failed assertion `Draw Errors Validation
Fragment Function(fsRealityPbr): the offset into the buffer clippingConstants that is bound at buffer index 8 must be a multiple of 256 but was set to 128.
and my code is pretty simple:
var body: some View {
    ZStack {
        ObjectCaptureView(session: session)
        if case .initializing = session.state {
            Button {
                session.start(imagesDirectory: getDocumentsDir().appendingPathComponent("Images/"), configuration: configuration) // 💥
            } label: {
                Text("Prepare")
            }
        } else if case .ready = session.state {
            Button {
                session.startDetecting()
            } label: {
                Text("Continue")
            }
        } else if case .detecting = session.state {
            Button {
                session.startCapturing()
            } label: {
                Text("Start Capture")
            }
        }
    }
}
I'm wondering if anyone else is facing the same problem, or if the error is on my side.
I am trying ObjectCaptureSession in Xcode 14.2 and I am getting the error "Cannot find 'ObjectCaptureSession' in scope". So my question is: is ObjectCaptureSession supported in Xcode 14.2?
Hello, I am really excited about the new Object Capture API for iOS.
In the WWDC23 demo video, the user was rotating around the object. My question is: does this API support taking photos from a fixed position (such as on a tripod) while a turntable rotates the object?
Another question: if the user has already taken some photos with depth and gravity, can Object Capture use these images to construct the 3D model instead of capturing new ones?
Thank you.
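On the second question, the folder-based workflow from the WWDC21 API already accepts previously captured photos; here is a minimal sketch, not an official answer, assuming macOS and a folder of HEIC images with embedded depth (both paths below are placeholders):

```swift
import Foundation
import RealityKit

// Sketch: reconstruct a model from photos that were already taken,
// rather than from a live ObjectCaptureSession. Paths are hypothetical.
let inputFolder = URL(fileURLWithPath: "/path/to/CapturedImages", isDirectory: true)
let outputFile = URL(fileURLWithPath: "/path/to/model.usdz")

let session = try PhotogrammetrySession(input: inputFolder,
                                        configuration: PhotogrammetrySession.Configuration())
try session.process(requests: [.modelFile(url: outputFile, detail: .full)])

// Progress and completion arrive asynchronously on session.outputs.
for try await output in session.outputs {
    if case .processingComplete = output { break }
}
```

Whether the turntable setup works is a separate question; this only shows that reconstruction from pre-existing images is supported.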
Hey!
I'm trying to add this line to my project:
var session = ObjectCaptureSession()
But it automatically says "Cannot find 'ObjectCaptureSession' in scope"
I cannot get this error to go away, so I haven't continued trying the snippets provided on your video.
Is there anything else I need to install or configure before this is ready?
I'm importing:
import RealityKit
import SwiftUI
import Firebase
In your video you mentioned that there is sample code, but I can only find the snippets rather than a project. It's fine if there is no sample project, but since it's mentioned so often in the video, it's confusing not to be able to find the code.
Xcode 15 beta
iOS 17 simulator installed
macOS Ventura 13.3.1 (a)
After getting around some issues, we finally got "something" to run, but we now have two issues.
getDocumentsDir() is not found in scope. I guess this function was only shown in the video and never actually implemented; looking through some "eskimo" code I found a docDir implementation, which doesn't work in this case either.
Because session.start() never runs, all I have is a black screen with an animation saying "Move iPad to start".
QUESTIONS:
A) How can I force this to work? I'm hoping to capture the images and process them later on a Mac, so I don't think I need the "checkpoint" part of the configuration.
B) Since getDocumentsDir() and docDir are not working, is there any way to just select whatever folder we can, maybe hardcode it?
Here is my code:
import Foundation
import RealityKit
import SwiftUI
// import Firebase

struct CaptureView: View {
    let docDir = try! FileManager.default.url(for: .documentDirectory, in: .userDomainMask, appropriateFor: nil, create: true)

    var body: some View {
        ZStack {
            // Make the entire background black.
            Color.black.edgesIgnoringSafeArea(.all)
            if #available(iOS 17.0, *) {
                var session = ObjectCaptureSession()
                var configuration = ObjectCaptureSession.Configuration()
                // configuration.checkpointDirectory = getDocumentsDir().appendingPathComponent("Snapshots/")
                // session.start(imagesDirectory: docDir.appendingPathComponent("Images/"), configuration: configuration)
                ObjectCaptureView(session: session)
            } else {
                Text("Unsupported iOS 17 View")
            }
        }
        .environment(\.colorScheme, .dark)
    }
}
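Since getDocumentsDir() only appears in the session video, here is a minimal sketch of such a helper, assuming it is meant to return the app's Documents directory (the function name and the "Images/" subfolder are taken from the snippets above, not from any published sample):

```swift
import Foundation

// Minimal stand-in for the getDocumentsDir() helper shown in the session video.
// It returns the app's Documents directory, which is always writable by the
// app, so it is a safe place to put a hardcoded images folder.
func getDocumentsDir() -> URL {
    FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
}

// Create the "Images/" subfolder up front, so session.start(imagesDirectory:)
// receives a directory that actually exists.
func makeImagesDirectory() throws -> URL {
    let dir = getDocumentsDir().appendingPathComponent("Images/")
    try FileManager.default.createDirectory(at: dir, withIntermediateDirectories: true)
    return dir
}
```

With a helper like this, session.start(imagesDirectory: try makeImagesDirectory(), configuration: configuration) has a concrete, existing folder to write into.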
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
From the session "Meet Object Capture for iOS", I understand that the API now accepts Point Cloud data from the iPhone LiDAR sensor to create 3D assets. However, I was not able to find anything in the official Apple documentation for RealityKit and Object Capture that explains how to utilize Point Cloud data to create the session.
I have two questions regarding this API.
The original example from the documentation explains how to utilize the depth map from a captured image by embedding it into the HEIC image. This makes me assume that PhotogrammetrySession also uses Point Cloud data embedded in the photo. Is this correct?
I would also like to use the photos (and Point Cloud data) captured on iOS in a PhotogrammetrySession on macOS for full model detail. I know that PhotogrammetrySession provides a PointCloud request result. Will that output be the same as the one captured on-device by the ObjectCaptureSession?
Thanks everyone in advance and it's been a real pleasure working with the updated Object Capture APIs.
I am currently developing a mobile and server-side application using the new ObjectCaptureSession on iOS and PhotogrammetrySession on macOS.
I have two questions regarding the newly updated APIs.
From the WWDC23 session "Meet Object Capture for iOS", I know that the Object Capture API uses Point Cloud data captured from the iPhone LiDAR sensor. I want to know how to take the Point Cloud data captured by ObjectCaptureSession on iPhone and use it to create 3D models in a PhotogrammetrySession on macOS.
From the example code from WWDC21, I know that PhotogrammetrySession utilizes the depth map from captured photos by embedding it into the HEIC image, and uses that data to create a 3D asset on macOS. I would like to know whether Point Cloud data is also embedded into the image for use during 3D reconstruction, and if not, how else the Point Cloud data is supplied to the reconstruction.
Another question: I know that Point Cloud data is returned as a result of a PhotogrammetrySession.Request. I would like to know if this PointCloud data is the same set of data captured during the ObjectCaptureSession from WWDC23 that is used to create the ObjectCapturePointCloudView.
Thank you to everyone for the help in advance. It's a real pleasure to be developing with all the updates to RealityKit and the Object Capture API.
I don't get a bounding box on my screen when I start to scan, and I get the following error message when my laptop tries to process the files:
I found that this new API works well in the beta.
But I want more customizable settings, so I want to build my own capture flow with AVFoundation.
I'd like to know the AVCaptureSession and AVCapturePhotoSettings configuration that this new API uses.
With the code below, I saved color and depth images from a RealityKit ARView and ran photogrammetry on an iOS device. The mesh looks fine, but its scale is quite different from the real-world scale.
let color = arView.session.currentFrame!.capturedImage
let depth = arView.session.currentFrame!.sceneDepth!.depthMap

// 😀 Color
let colorCIImage = CIImage(cvPixelBuffer: color)
let colorUIImage = UIImage(ciImage: colorCIImage)
let depthCIImage = CIImage(cvPixelBuffer: depth)
let heicData = colorUIImage.heicData()!
let fileURL = imageDirectory!.appendingPathComponent("\(scanCount).heic")
do {
    try heicData.write(to: fileURL)
    print("Successfully wrote image to \(fileURL)")
} catch {
    print("Failed to write image to \(fileURL): \(error)")
}

// 😀 Depth
let context = CIContext()
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)!
let depthData = context.tiffRepresentation(of: depthCIImage,
                                           format: .Lf,
                                           colorSpace: colorSpace,
                                           options: [.disparityImage: depthCIImage])
let depth_dir = imageDirectory!.appendingPathComponent("IMG_\(scanCount)_depth.TIF")
try! depthData!.write(to: depth_dir, options: [.atomic])
print("depth saved")
And I also tried this:
let colorSpace = CGColorSpace(name: CGColorSpace.linearGray)
let depthCIImage = CIImage(cvImageBuffer: depth,
                           options: [.auxiliaryDepth: true])
let context = CIContext()
let linearColorSpace = CGColorSpace(name: CGColorSpace.linearSRGB)
guard let heicData = context.heifRepresentation(of: colorCIImage,
                                                format: .RGBA16,
                                                colorSpace: linearColorSpace!,
                                                options: [.depthImage: depthCIImage]) else {
    print("Failed to convert combined image into HEIC format")
    return
}
Does anyone know why this happens and how to fix it?
Is there a way to access the dimensions of the bounding box that is displayed around the object in the ObjectCaptureView?
I've saved a bunch of images on an iPhone XS without using the Object Capture API, since it's not supported there.
Then I tried to use PhotogrammetrySession, but it fails with:
Error The operation couldn’t be completed. (CoreOC.PhotogrammetrySession.Error error 3.)
CorePG: Initialization error = sessionError(reason: "CPGSessionCreate failed!")
Internal photogrammetry session init failed for checkpointDirectory =
Any idea why this would be the case? I managed to use my iPhone 14 Pro to successfully create a USDZ file with on-device photogrammetry using only images and no LiDAR, but it seems it's not working on the iPhone XS.
Are there restrictions on PhotogrammetrySession? I know Object Capture requires an iPhone 12 Pro or newer, but what about photogrammetry on iOS 17? Thanks!
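For what it's worth, a sketch of gating on runtime support rather than a hardcoded device list — this assumes the isSupported properties exposed by the iOS 17 API, and is a sketch, not an official answer:

```swift
import RealityKit

// Sketch: check runtime support for each stage separately. Guided capture
// (ObjectCaptureSession) and on-device reconstruction (PhotogrammetrySession)
// have different hardware requirements, which would explain why an
// iPhone 14 Pro can reconstruct while an iPhone XS cannot.
if #available(iOS 17.0, *) {
    if ObjectCaptureSession.isSupported {
        // Show the guided capture UI (ObjectCaptureView).
    }
    if PhotogrammetrySession.isSupported {
        // Offer on-device reconstruction of already-captured images.
    }
}
```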
I saw in the WWDC23 session "Meet Object Capture for iOS" that the new tool released today along with Xcode 15 beta 2, called "Reality Composer Pro", will be capable of creating 3D models with Apple's PhotogrammetrySession. However, I do not see any such feature in the tool. Has anyone managed to find the feature for creating 3D models as shown in the session?
As the speaker mentions, the documentation contains source code for the sample app. But when I went there, I only found the sample code from WWDC 2021. Is the code available yet?
Now that we have the Vision Pro, I really want to start using Apple's Object Capture API to turn real objects into 3D assets. I watched the latest Object Capture video from WWDC23 and noticed they were using a "sample app".
Does Apple provide this sample app to visionOS developers, or do we have to build our own iOS app?
Thanks and cheers!
If I make a custom point cloud, how can I send it to the PhotogrammetrySession? Is it saved separately to a directory, or is it saved into the HEIC image?