Post not yet marked as solved
Hi, I'm using the provided CaptureSample together with HelloPhotogrammetry to create a 3D model.
Although I'm making sure there are depth images (TIFs) in the folder, the model comes out tiny after building.
How can I make sure the model is built at real-world size? Or, failing that, how can I resize the model to real-world size?
Thanks.
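If the reconstruction comes out at the wrong size, one workaround is to rescale the result after the fact. A minimal sketch of the arithmetic, assuming you can measure one reference dimension on the real object and the corresponding dimension on the model (the `rescaleFactor` name and the idea of applying the factor to an entity's scale are my own illustration, not part of the sample code):

```swift
// Compute a uniform scale factor from one known real-world dimension.
// `modelLength` is the matching dimension measured on the reconstructed
// model (e.g. from its bounding box), in the same units as `realLength`.
func rescaleFactor(realLength: Double, modelLength: Double) -> Double {
    precondition(modelLength > 0, "model dimension must be positive")
    return realLength / modelLength
}

// Example: the real object is 0.30 m tall, but the model's bounding
// box is only 0.003 m tall, so every axis should be scaled by ~100.
let factor = rescaleFactor(realLength: 0.30, modelLength: 0.003)
// In RealityKit this factor could then be applied uniformly, e.g.
// entity.scale *= SIMD3<Float>(repeating: Float(factor))
```

Providing valid depth data during capture should make this step unnecessary, since depth is what lets the session recover absolute scale.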
Is there anywhere I can download the pizza sample used in the video?
I tried the HelloPhotogrammetry sample app and ended up with the following errors.
'SampleOverlap' is not a member type of struct 'HelloPhotogrammetry.Configuration' (aka 'PhotogrammetrySession.Configuration')
Type 'HelloPhotogrammetry' does not conform to protocol 'Decodable'
'SampleOverlap' is not a member type of struct 'RealityFoundation.PhotogrammetrySession.Configuration'
Value of type 'PhotogrammetrySession' has no member 'output'
Value of type 'PhotogrammetrySession.Configuration' has no member 'sampleOverlap'
Command EmitSwiftModule failed with a nonzero exit code
I am new to Xcode and have no experience with Swift. Can anyone point me in the right direction?
macOS version: Monterey 12.3
Xcode version: 13.3 (13E113)
Is it possible to have multiple photogrammetry sessions running in parallel? I would like to process multiple sets of photos at the same time.
Thank you for your help.
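Nothing in this thread settles whether RealityKit supports concurrent sessions, but the fan-out itself is ordinary concurrency. A generic sketch, with a hypothetical processFolder standing in for one complete session run (the function names are my own, not RealityKit API):

```swift
import Dispatch
import Foundation

// Hypothetical stand-in for running one full PhotogrammetrySession
// over a single folder of images and returning its output path.
func processFolder(_ folder: String) -> String {
    return "\(folder)/model.usdz"
}

// Fan several independent jobs out across threads. Note that each real
// session would compete for the same GPU, so running them in parallel
// may not be faster than running them back to back.
func processAllFolders(_ folders: [String]) -> [String] {
    var results = [String](repeating: "", count: folders.count)
    let lock = NSLock()  // serialize writes into the shared results array
    DispatchQueue.concurrentPerform(iterations: folders.count) { i in
        let output = processFolder(folders[i])
        lock.lock()
        results[i] = output
        lock.unlock()
    }
    return results
}
```

Even if multiple sessions are accepted by the API, they share one GPU, so batching folders sequentially is a reasonable fallback.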
Can I take photos with an Android device, or any camera other than an iPad or iPhone, while using the Object Capture API? What negative side effects can that have?
Note: I'm asking because I noticed the size of the USDZ model grew far too large in my experiments (for example, 200 MB instead of 20 MB).
Problem
I'm trying to attach masks to PhotogrammetrySamples, but when I run a request on the samples, the app crashes with Thread 5: signal SIGABRT, and my console says
2022-01-31 13:38:13.575333-0800 HelloPhotogrammetry[5538:258947] [HelloPhotogrammetry] Data ingestion is complete. Beginning processing...
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call: std::exception
terminating with uncaught exception of type std::__1::bad_function_call: std::exception
The app works fine if I run it without attaching the masks, or if I turn off object masking in the configuration. I'm at a loss as to why it's breaking.
Process
On my iPhone, I've modified the sample capture app. After a capture session, I run a VNGeneratePersonSegmentationRequest(). I save the resulting buffer to disk using
context.pngRepresentation(of: CIImage(cvPixelBuffer: maskPixelBuffer), format: .L8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!, options: [:])
Then I AirDrop my files over to my laptop and run a modified version of the HelloPhotogrammetry sample app. I prepare an array of PhotogrammetrySamples with the image, depth, gravity, and mask data.
I create a PhotogrammetrySession using the samples array and have object masking enabled in my configuration.
When I process my request on the session, it looks like the data is ingested just fine, but breaks with some bad function call.
Related code
Here is how I construct my PhotogrammetrySample sequence, shortened for brevity:
private func makeSequenceFromStructuredFolder(folder: URL) -> [PhotogrammetrySample] {
    // Look in the sample folder, prepare arrays of capture data
    // For each capture set:
        // Load color image, depth image, and mask as CIImages
        // Load and decode gravity data
        // Convert all CIImages into CVPixelBuffers

        // kCVPixelFormatType_32BGRA, <CGColorSpace 0x108e04f80> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)
        _image = pixelBufferFromCIImage(image, context: context, pixelFormat: kCVPixelFormatType_32BGRA, colorSpace: image.colorSpace!)

        // kCVPixelFormatType_DepthFloat32, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _depth = pixelBufferFromCIImage(depthImage, context: context, pixelFormat: kCVPixelFormatType_DepthFloat32, colorSpace: depthImage.colorSpace!)

        // kCVPixelFormatType_OneComponent8, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _mask = pixelBufferFromCIImage(mask, context: context, pixelFormat: kCVPixelFormatType_OneComponent8, colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)

        // Prepare the sample
        var sample = PhotogrammetrySample(id: index, image: _image!)
        if let _gravity = _gravity {
            sample.gravity = _gravity
        }
        if let _depth = _depth {
            sample.depthDataMap = _depth
        }
        if let _mask = _mask {
            sample.objectMask = _mask
        }

        // Append the sample
        sampleSequence.append(sample)
    }
    return sampleSequence
}
And here is how I convert my CIImages into CVPixelBuffers.
func pixelBufferFromCIImage(_ image: CIImage, context: CIContext, pixelFormat: OSType, colorSpace: CGColorSpace) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    let width = Int(image.extent.width)
    let height = Int(image.extent.height)
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     pixelFormat,
                                     attrs,
                                     &pixelBuffer)
    switch status {
    case kCVReturnInvalidPixelFormat:
        print("status == kCVReturnInvalidPixelFormat")
    case kCVReturnInvalidSize:
        print("status == kCVReturnInvalidSize")
    case kCVReturnPixelBufferNotMetalCompatible:
        print("status == kCVReturnPixelBufferNotMetalCompatible")
    case kCVReturnSuccess:
        print("status == kCVReturnSuccess")
    default:
        print("status is other")
    }
    guard status == kCVReturnSuccess else {
        return nil
    }
    context.render(image, to: pixelBuffer!, bounds: image.extent, colorSpace: colorSpace)
    return pixelBuffer
}
Other steps I attempted that ultimately failed:
Scaled the mask to the same size as the color image using CIFilter.lanczosScaleTransform().
Created a binary mask using CIFilter.colorThreshold().
Rendered an intermediary image to be extra sure the right pixel format was being used for the mask.
Checked all image extents and made sure the color image and mask were the same size and rotation.
Read all the documentation and looked for similar questions.
I appreciate any help!
This happens for scans where I flip the object on its side to get the bottom. I know I can fix it afterwards in Xcode, but the images Apple uses for their scans come out fine.
Here's a screenshot of a scan of a basket. Any ideas why it does this?
I am using the default HelloPhotogrammetry app you guys made: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/
My system originally did not meet the spec to run this command-line tool because of a GPU issue. To solve this, I bought the Apple-supported Blackmagic eGPU to get past the graphics requirement. Here is the error when I run it despite the eGPU: apply_selection_policy_once: prefer use of removable GPUs (via (null):GPUSelectionPolicy->preferRemovable)
I have deduced that the application running it needs this: https://developer.apple.com/documentation/bundleresources/information_property_list/gpuselectionpolicy
I tried modifying Terminal.plist with the updated value, but had no luck with it. I believe the command-line tool within Xcode needs the updated value; I need help with that aspect so the system will use the eGPU.
I did create a property list within the macOS app and added GPUSelectionPolicy with preferRemovable, and I am still hitting the same error as above. Please advise.
Also, to note: I did try temporarily turning off Prefer External GPU within Terminal, and the photogrammetry processing did run, but it was taking a while (over 30 minutes), so I ended up killing the task. Activity Monitor showed my internal GPU being used, not the eGPU I am trying to use. Previously, when the eGPU was not plugged in, I would get an error saying my specs did not meet the criteria, so it was interesting that the system now considered my Mac capable (which it technically was) but did the processing on the less powerful GPU.
I'm making an app that captures data using ARKit and will ultimately send the images, depth, and gravity to an Object Capture photogrammetry agent. I need to use the depth data and produce a model with correct scale, so from what I understand I need to send the depth file and set the proper EXIF data in the image. Since I'm getting the images and depth from ARKit, I'll need to set the EXIF data manually before saving the images. Unfortunately, the documentation on this is a bit light, so could you let me know what EXIF data needs to be set for the photogrammetry to create the model at the proper scale?
If I try and set my Photogrammetry agent with manual metadata like this:
var sample = PhotogrammetrySample(id: id, image: image)
var dict: [String: Any] = [:]
dict["FocalLength"] = 23.551325
dict["PixelWidth"] = 1920
dict["PixelHeight"] = 1440
sample.metadata = dict
I get the following error in the output and depth is ignored:
[Photogrammetry] Can't use FocalLenIn35mmFilm to produce FocalLengthInPixel! Punting...
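The log suggests the session is trying to derive a focal length in pixels from a 35 mm-equivalent focal length (the EXIF FocalLenIn35mmFilm key it names). The usual conversion, assuming the standard convention that a "35 mm film" frame is 36 mm wide (this arithmetic is my own illustration; the exact metadata keys Object Capture reads are not documented in this thread):

```swift
// Convert a 35 mm-equivalent focal length to a focal length in pixels,
// using the convention that a 35 mm film frame is 36 mm wide.
func focalLengthInPixels(focalLen35mm: Double, imageWidthPixels: Double) -> Double {
    return imageWidthPixels * focalLen35mm / 36.0
}

// Example: a 28 mm-equivalent lens on a 1920-pixel-wide image
// gives a focal length of roughly 1493 pixels.
let f = focalLengthInPixels(focalLen35mm: 28.0, imageWidthPixels: 1920.0)
```

If the session can compute this value itself from complete EXIF data, supplying FocalLenIn35mmFilm alongside the pixel dimensions may be enough to avoid the "Punting..." path.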
So, by adding my own mask to a PhotogrammetrySample, I'm getting a crash with this message:
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9
I'm using this extension on NSImage to convert an 8-bit alpha-only TIF into the required mask CVPixelBuffer:
extension NSImage {
    // function used by depthPixelBuffer and disparityPixelBuffer to actually create the CVPixelBuffer
    func __toPixelBuffer(PixelFormatType: OSType) -> CVPixelBuffer? {
        var bitsPerC = 8
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo = CGImageAlphaInfo.noneSkipFirst.rawValue

        // if we need depth/disparity
        if PixelFormatType == kCVPixelFormatType_DepthFloat32 || PixelFormatType == kCVPixelFormatType_DisparityFloat32 {
            bitsPerC = 32
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue | CGBitmapInfo.floatComponents.rawValue
        }
        // if we need a mask
        else if PixelFormatType == kCVPixelFormatType_OneComponent8 {
            bitsPerC = 8
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, PixelFormatType, attrs, &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }

        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(resultPixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: bitsPerC,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo)
        else {
            return nil
        }

        // context.translateBy(x: 0, y: height)
        // context.scaleBy(x: 1.0, y: -1.0)
        let graphicsContext = NSGraphicsContext(cgContext: context, flipped: false)
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = graphicsContext
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        NSGraphicsContext.restoreGraphicsState()

        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return resultPixelBuffer
    }

    // return the NSImage as a 32-bit color CVPixelBuffer
    func colorPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_32ARGB)
    }

    // return the NSImage as an 8-bit mask CVPixelBuffer
    func maskPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_OneComponent8)
    }

    // return the NSImage as a 32-bit depthData CVPixelBuffer
    func depthPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DepthFloat32)
    }

    // return the NSImage as a 32-bit disparityData CVPixelBuffer
    func disparityPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DisparityFloat32)
    }
}
So basically, I call:
var sample = PhotogrammetrySample(id: count, image: heic_image!)
...
guard let sampleBuffer = NSImage(contentsOfFile: file) else {
    fatalError("readCVPixelBuffer: Failed to read \(file)")
}
let imageBuffer = sampleBuffer.maskPixelBuffer()
sample.objectMask = imageBuffer!
When the PhotogrammetrySession runs, it reads all the images and masks first, and after
Data ingestion is complete. Beginning processing...
it crashes, and I get the error message:
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9
Is anyone experiencing this?
I'm creating a custom scanning solution for iOS, using the RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding that the session ignores the depth data I send it and doesn't build the model to scale. The documentation is a little light on how to format the depth, so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly?
https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing
Thank you!
Hi All -
For some reason I cannot get depth data from the rear iPad Pro cameras using the CaptureSample app from the 3D Object Capture sample code. It works if I override it to use the front-facing camera. Is there an issue with the AVCaptureDevice for the rear wide cameras?
Thanks!
How does metadata affect model creation? When I added image metadata to a sample, the created model changed.
This is the code I use to get metadata from an image URL:
func getImageMetaData(url: URL) -> CFDictionary? {
    if let data = NSData(contentsOf: url),
       let source = CGImageSourceCreateWithData(data as CFData, nil) {
        let metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
        return metadata
    }
    return nil
}
When I create the PhotogrammetrySample, this is the code I use to add the metadata:
if let metaData = getImageMetaData(url: imageURL) as? [String: AnyObject] {
    sample.metadata = metaData
}
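Since the full CGImageSource dictionary carries many keys, one way to work out which metadata actually changes the model is to pass in only a chosen subset at a time. A small sketch of that filtering step (the key names in the example are my own illustration, not a documented set Object Capture is known to read):

```swift
// Keep only a whitelisted subset of a metadata dictionary so the effect
// of individual keys on reconstruction can be isolated run by run.
func filterMetadata(_ metadata: [String: Any], keeping keys: Set<String>) -> [String: Any] {
    return metadata.filter { keys.contains($0.key) }
}

// Example: try a run with just the pixel dimensions first,
// then add keys back one at a time.
let full: [String: Any] = ["PixelWidth": 1920, "PixelHeight": 1440, "Orientation": 6]
let trimmed = filterMetadata(full, keeping: ["PixelWidth", "PixelHeight"])
```

Comparing the models produced with and without each key should reveal which fields the session is actually consuming.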
How can I adjust the bounding box for Object Capture reconstruction so that it reconstructs just the object and not all the surrounding data?
I think it is done by providing a geometry to
PhotogrammetrySession.Request.modelFile(url:detail:geometry:)
but I can't seem to figure out the proper way to do it.
Will iPad ever receive these tools for Object Capture? Or, at the very least, Xcode, for the ability to use the command-line apps for it? I have an M1 iPad Pro that should be able to do everything the M1 Macs can, but it's being held back by software limitations.
Are both TIFFs and DNG (Apple ProRAW format) currently not supported?
I have multiple requests in one session. How can I stop or cancel a request that is running without stopping or cancelling the whole PhotogrammetrySession?
OS: 12.0 Beta
I'm interested in the Object Capture API and am trying to get the sample app to work.
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/
After trying a few times, I noticed that the output doesn't change much even if the depth (.TIF) and gravity (.TXT) files aren't in the folder.
I wanted to use the depth data, so I tried using PhotogrammetrySample, because I noticed PhotogrammetrySample.depthDataMap.
The session was successfully created using the same CVPixelBuffer as
AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32).depthDataMap
However, the output from the session is .invalidSample, but only for the IDs that contain depth.
[command-line app log]
Successfully created session. (PhotogrammetrySample API)
Using request: modelFile(url: ***, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.full, geometry: nil)
Invalid Sample! id=1 reason="The sample is not supported."
Invalid Sample! id=2 reason="The sample is not supported."
Invalid Sample! id=3 reason="The sample is not supported."
Invalid Sample! id=4 reason="The sample is not supported."
...
What is the reason for "The sample is not supported."?
Is there sample code for using depth in this process?
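One thing worth checking: the buffer described above was converted to kCVPixelFormatType_DisparityFloat32, while depthDataMap may expect true depth (distance in meters) rather than disparity. Disparity and depth are reciprocals of each other, so a disparity buffer can be converted element-wise. A sketch of the conversion on plain arrays (whether this resolves the "sample is not supported" error is not confirmed in this thread):

```swift
// Disparity is the reciprocal of depth (1/meters vs. meters), so
// converting a disparity map to a depth map is an element-wise
// reciprocal. Non-positive disparities carry no valid depth and
// are mapped to 0 here as a sentinel.
func depthFromDisparity(_ disparity: [Float]) -> [Float] {
    return disparity.map { $0 > 0 ? 1.0 / $0 : 0 }
}

// Example: a disparity of 0.5 (1/m) corresponds to a depth of 2 m.
let depths = depthFromDisparity([0.5, 2.0])
```

With AVFoundation, the equivalent would be converting to kCVPixelFormatType_DepthFloat32 instead of the disparity format before attaching the buffer.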
Has anyone worked out how to create OBJ files instead of USDZ?
I have recently purchased a MacBook Air with M1 and have updated the OS to Monterey. I am getting the error fatalError("Requires minimum macOS 12.0!") in Xcode on the main page of the HelloPhotogrammetry app. Could you help me fix this?