Create 3D models with Object Capture


Discuss the WWDC21 session Create 3D models with Object Capture.


Posts under wwdc21-10076 tag

63 Posts
Post not yet marked as solved
2 Replies
884 Views
Hey, I have run several tests with masks in the given folder upon PhotogrammetrySession init. The masks do seem to be taken into account, since the results differ from runs where I don't provide them. Unfortunately, the results aren't as good as one would expect when masks are provided. Has anyone been able to make it work? How? My original masks are in PNG format; this is the ImageMagick conversion I applied:

```
magick mogrify -monitor -format tif -depth 8 *.png
```

Example mask filename: IMG_0001_mask.TIF
Posted by hni. Last updated.
Post not yet marked as solved
1 Reply
425 Views
So, by adding my own mask to a PhotogrammetrySample, I'm getting a crash with this message:

```
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9
```

I'm using this extension on NSImage to convert an 8-bit, alpha-only TIF to the required mask CVPixelBuffer:

```swift
extension NSImage {
    // Function used by depthPixelBuffer and disparityPixelBuffer to actually create the CVPixelBuffer
    func __toPixelBuffer(PixelFormatType: OSType) -> CVPixelBuffer? {
        var bitsPerC = 8
        var colorSpace = CGColorSpaceCreateDeviceRGB()
        var bitmapInfo = CGImageAlphaInfo.noneSkipFirst.rawValue

        // If we need depth/disparity
        if PixelFormatType == kCVPixelFormatType_DepthFloat32 || PixelFormatType == kCVPixelFormatType_DisparityFloat32 {
            bitsPerC = 32
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue | CGBitmapInfo.floatComponents.rawValue
        }
        // If we need a mask
        else if PixelFormatType == kCVPixelFormatType_OneComponent8 {
            bitsPerC = 8
            colorSpace = CGColorSpaceCreateDeviceGray()
            bitmapInfo = CGImageAlphaInfo.none.rawValue
        }

        let width = Int(self.size.width)
        let height = Int(self.size.height)
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, PixelFormatType, attrs, &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else { return nil }

        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        guard let context = CGContext(data: CVPixelBufferGetBaseAddress(resultPixelBuffer),
                                      width: width,
                                      height: height,
                                      bitsPerComponent: bitsPerC,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: colorSpace,
                                      bitmapInfo: bitmapInfo) else { return nil }

        // context.translateBy(x: 0, y: height)
        // context.scaleBy(x: 1.0, y: -1.0)
        let graphicsContext = NSGraphicsContext(cgContext: context, flipped: false)
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = graphicsContext
        draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        NSGraphicsContext.restoreGraphicsState()

        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return resultPixelBuffer
    }

    // Return the NSImage as a 32-bit color CVPixelBuffer
    func colorPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_32ARGB)
    }

    // Return the NSImage as an 8-bit mask CVPixelBuffer
    func maskPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_OneComponent8)
    }

    // Return the NSImage as a 32-bit depthData CVPixelBuffer
    func depthPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DepthFloat32)
    }

    // Return the NSImage as a 32-bit disparityData CVPixelBuffer
    func disparityPixelBuffer() -> CVPixelBuffer? {
        return __toPixelBuffer(PixelFormatType: kCVPixelFormatType_DisparityFloat32)
    }
}
```

So basically, I call:

```swift
var sample = PhotogrammetrySample(id: count, image: heic_image!)
// ...
guard let sampleBuffer = NSImage(contentsOfFile: file) else {
    fatalError("readCVPixelBuffer: Failed to read \(file)")
}
let imageBuffer = sampleBuffer.maskPixelBuffer()
sample.objectMask = imageBuffer!
```

When the PhotogrammetrySession runs, it reads all the images and masks first, and after "Data ingestion is complete. Beginning processing..." it crashes, and I get the error message:

```
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call
terminating with uncaught exception of type std::__1::bad_function_call
Program ended with exit code: 9
```

Is anyone experiencing this?
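Not a confirmed fix, but one thing worth ruling out before assigning objectMask is a format or size mismatch between the mask and the color image. Below is a minimal sketch of such a check; the helper name, and the assumption that a mismatch is behind this crash, are mine and not from Apple:

```swift
import CoreVideo

// Hypothetical sanity check before `sample.objectMask = mask`:
// the mask should be kCVPixelFormatType_OneComponent8 and match the
// color image's pixel dimensions exactly.
func maskMatchesImage(_ mask: CVPixelBuffer, _ image: CVPixelBuffer) -> Bool {
    let formatOK = CVPixelBufferGetPixelFormatType(mask) == kCVPixelFormatType_OneComponent8
    let sizeOK = CVPixelBufferGetWidth(mask) == CVPixelBufferGetWidth(image)
        && CVPixelBufferGetHeight(mask) == CVPixelBufferGetHeight(image)
    return formatOK && sizeOK
}
```

One subtle source of mismatch in the extension above: NSImage.size is in points, not pixels, so a Retina-backed TIF can produce a buffer at half the pixel resolution of the color image.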
Posted by rhradec. Last updated.
Post not yet marked as solved
1 Reply
266 Views
Hi, I'm using the provided CaptureSample app together with HelloPhotogrammetry to create a 3D model. Although I am making sure that there are depth images (TIF) in the folder, after building the model, the model size is tiny. How can I make sure that the model is built at real-world size? Or, how can I resize the model to real-world size? Thanks.
Posted by iddog. Last updated.
Post not yet marked as solved
0 Replies
340 Views
Problem: I'm trying to attach masks to PhotogrammetrySamples, but when I run a request on the samples, the app crashes with a Thread 5: signal SIGABRT, and my console says:

```
2022-01-31 13:38:13.575333-0800 HelloPhotogrammetry[5538:258947] [HelloPhotogrammetry] Data ingestion is complete.  Beginning processing...
libc++abi: terminating with uncaught exception of type std::__1::bad_function_call: std::exception
terminating with uncaught exception of type std::__1::bad_function_call: std::exception
```

The app works fine if I run it without attaching the masks, or if I turn off object masking in the configuration. I'm at a bit of a loss for why it is breaking.

Process: On my iPhone, I've modified the sample capture app. After a capture session, I run a VNGeneratePersonSegmentationRequest(). I save the resulting buffer to disk using:

```swift
context.pngRepresentation(of: CIImage(cvPixelBuffer: maskPixelBuffer),
                          format: .L8,
                          colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!,
                          options: [:])
```

Then I AirDrop my files over to my laptop and run a modified version of the HelloPhotogrammetry sample app. I prepare an array of PhotogrammetrySamples with the image, depth, gravity, and mask data. I create a PhotogrammetrySession using the samples array, with object masking enabled in my configuration. When I process my request on the session, it looks like the data is ingested just fine, but it breaks with some bad function call.

Related code: Here is how I construct my PhotogrammetrySample sequence, shortened for brevity:

```swift
private func makeSequenceFromStructuredFolder(folder: URL) -> [PhotogrammetrySample] {
    // Look in the sample folder, prepare arrays of capture data.
    // For each capture set:
    //   Load color image, depth image, and mask as CIImages
    //   Load and decode gravity data
    //   Convert all CIImages into CVPixelBuffers

        // kCVPixelFormatType_32BGRA, <CGColorSpace 0x108e04f80> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; Display P3)
        _image = pixelBufferFromCIImage(image, context: context,
                                        pixelFormat: kCVPixelFormatType_32BGRA,
                                        colorSpace: image.colorSpace!)
        // kCVPixelFormatType_DepthFloat32, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _depth = pixelBufferFromCIImage(depthImage, context: context,
                                        pixelFormat: kCVPixelFormatType_DepthFloat32,
                                        colorSpace: depthImage.colorSpace!)
        // kCVPixelFormatType_OneComponent8, <CGColorSpace 0x100d1b000> (kCGColorSpaceICCBased; kCGColorSpaceModelMonochrome; Linear Gray)
        _mask = pixelBufferFromCIImage(mask, context: context,
                                       pixelFormat: kCVPixelFormatType_OneComponent8,
                                       colorSpace: CGColorSpace(name: CGColorSpace.linearGray)!)

        // Prepare sample
        var sample = PhotogrammetrySample.init(id: index, image: _image!)
        if let _gravity = _gravity { sample.gravity = _gravity }
        if let _depth = _depth { sample.depthDataMap = _depth }
        if let _mask = _mask { sample.objectMask = _mask }

        // Append the sample
        sampleSequence.append(sample)
    }
    return sampleSequence
}
```

And here is how I convert my CIImages into CVPixelBuffers:

```swift
func pixelBufferFromCIImage(_ image: CIImage, context: CIContext,
                            pixelFormat: OSType, colorSpace: CGColorSpace) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let attrs = [kCVPixelBufferCGImageCompatibilityKey: true,
                 kCVPixelBufferCGBitmapContextCompatibilityKey: true] as CFDictionary
    let width: Int = Int(image.extent.width)
    let height: Int = Int(image.extent.height)
    let status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, pixelFormat, attrs, &pixelBuffer)
    switch status {
    case kCVReturnInvalidPixelFormat:
        print("status == kCVReturnInvalidPixelFormat")
    case kCVReturnInvalidSize:
        print("status == kCVReturnInvalidSize")
    case kCVReturnPixelBufferNotMetalCompatible:
        print("status == kCVReturnPixelBufferNotMetalCompatible")
    case kCVReturnSuccess:
        print("status == kCVReturnSuccess")
    default:
        print("status is other")
    }
    guard status == kCVReturnSuccess else { return nil }
    context.render(image, to: pixelBuffer!, bounds: image.extent, colorSpace: colorSpace)
    return pixelBuffer
}
```

Other attempted steps that ultimately failed:
- Scaled the mask to be the same size as the color image using CIFilter.lanczosScaleTransform().
- Created a binary mask using CIFilter.colorThreshold().
- Rendered an intermediary image to be extra sure the right pixel format is being used for the mask.
- Checked all image extents and made sure the color image and mask were the same size and rotation.
- Read all the documentation and looked for similar questions.

I appreciate any help!
Posted. Last updated.
Post not yet marked as solved
0 Replies
136 Views
I tried the HelloPhotogrammetry sample app and ended up with the following errors:

```
'SampleOverlap' is not a member type of struct 'HelloPhotogrammetry.Configuration' (aka 'PhotogrammetrySession.Configuration')
Type 'HelloPhotogrammetry' does not conform to protocol 'Decodable'
'SampleOverlap' is not a member type of struct 'RealityFoundation.PhotogrammetrySession.Configuration'
Value of type 'PhotogrammetrySession' has no member 'output'
Value of type 'PhotogrammetrySession.Configuration' has no member 'sampleOverlap'
Command EmitSwiftModule failed with a nonzero exit code
```

I am new to Xcode and have no experience with Swift. Can anyone point me in the right direction? macOS version: Monterey 12.3. Xcode version: 13.3 (13E113).
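The errors suggest the sample code predates the release SDK: `sampleOverlap` and `session.output` appear to have existed only in early betas (an inference from the error list, not a documented fact). A sketch of what equivalent code might look like against the release API, assuming a folder-based session and hypothetical paths:

```swift
import Foundation
import RealityKit

// Sketch against the release RealityKit API: the beta-era `sampleOverlap`
// option is simply deleted, and results are read from `outputs` (plural).
var configuration = PhotogrammetrySession.Configuration()
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal

let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/images", isDirectory: true),
    configuration: configuration)

try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/path/to/model.usdz"), detail: .medium)
])

Task {
    // The release API exposes `outputs`, an AsyncSequence, not `output`.
    for try await output in session.outputs {
        if case .processingComplete = output { print("Done") }
    }
}
```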
Posted by viknesh. Last updated.
Post marked as solved
1 Reply
312 Views
Can I take photos with an Android device or any other camera, instead of an iPad or iPhone, while using the Object Capture API? What negative side effects could that have? Note: I'm asking because I noticed that the size of the USDZ model has grown too much in my experiments (for example, 200 MB instead of 20 MB).
Posted by mudur. Last updated.
Post not yet marked as solved
2 Replies
662 Views
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIF depth maps together with the gravity vectors and the photos I took. Using the HelloPhotogrammetry command-line tool, I created the meshes without any problems. I notice the meshes have a consistent size between them; for example, creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose on the face mesh. Great! BUT, when I open the meshes in Maya, for example, they are really, really tiny. I was expecting to see the objects at the proper scale, and hopefully be able to take measurements in Maya to see if they would match the real measurements of the scanned object, but they don't seem to come out at the right size at all. I tried setting Maya to metres, centimetres, and millimetres, but it always imports the meshes really tiny; I have to apply a scale of 100 to be able to see them, and even then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match the real-world scale in centimetres. Is there a proper space conversion that needs to be applied to the mesh to convert it to real-world scale? Could the problem be that I'm using the TrueDepth camera instead of the back camera, and the depth map values come in a different scale than what HelloPhotogrammetry expects?
Posted by rhradec. Last updated.
Post not yet marked as solved
9 Replies
2.1k Views
Hi there, when I run the ObjectCapture sample project on my iPad Pro 2020, depth is always disabled. Is there a way to enable it? Thanks in advance
Posted by fkochdev. Last updated.
Post not yet marked as solved
1 Reply
518 Views
OS: 12.0 Beta. I'm interested in the Object Capture API and am trying to get the sample apps to work:
https://developer.apple.com/documentation/realitykit/taking_pictures_for_3d_object_capture
https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/
After trying a few times, I noticed that the output doesn't change much even if the depth (.TIF) and gravity (.TXT) files aren't in the folder. Since I wanted to use depth, I tried using PhotogrammetrySample, setting PhotogrammetrySample.depthDataMap to the CVPixelBuffer returned by AVCapturePhoto.depthData.converting(toDepthDataType: kCVPixelFormatType_DisparityFloat32).depthDataMap. The session was created successfully; however, the output from the session is .invalidSample for exactly the ids containing depth.

Command-line app log:

```
Successfully created session. (PhotogrammetrySample API)
Using request: modelFile(url: ***, detail: RealityFoundation.PhotogrammetrySession.Request.Detail.full, geometry: nil)
Invalid Sample! id=1 reason="The sample is not supported."
Invalid Sample! id=2 reason="The sample is not supported."
Invalid Sample! id=3 reason="The sample is not supported."
Invalid Sample! id=4 reason="The sample is not supported."
...
```

What is the reason for "The sample is not supported."? Is there sample code showing how to use depth in the process?
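Not an authoritative answer, but one variable worth isolating is the depth pixel format. Below is a sketch that attaches linear depth (DepthFloat32) instead of disparity; whether that avoids the "not supported" rejection is an assumption to test, not a documented fact:

```swift
import AVFoundation
import RealityKit

// Sketch: build a PhotogrammetrySample whose depthDataMap is linear depth
// (DepthFloat32) rather than disparity, as one variable to isolate.
func makeSample(id: Int, photo: AVCapturePhoto, image: CVPixelBuffer) -> PhotogrammetrySample {
    var sample = PhotogrammetrySample(id: id, image: image)
    if let depth = photo.depthData?
        .converting(toDepthDataType: kCVPixelFormatType_DepthFloat32) {
        sample.depthDataMap = depth.depthDataMap
    }
    return sample
}
```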
Posted. Last updated.
Post not yet marked as solved
0 Replies
238 Views
This happens for scans where I flip the object on its side to get the bottom. I know I can fix it afterwards in Xcode, but the images Apple uses for their scans come out fine. Here's a screenshot of a scan of a basket; any ideas why it does this?
Posted by Ross17. Last updated.
Post not yet marked as solved
4 Replies
720 Views
How does metadata affect model creation? When I added image metadata to the sample, the created model changed. This is the code I use to get metadata from an image URL:

```swift
func getImageMetaData(url: URL) -> CFDictionary? {
    if let data = NSData(contentsOf: url),
       let source = CGImageSourceCreateWithData(data as CFData, nil) {
        let metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
        return metadata
    }
    return nil
}
```

And this is how I add the metadata when I create the PhotogrammetrySample:

```swift
if let metaData = getImageMetaData(url: imageURL) as? [String: AnyObject] {
    sample.metadata = metaData
}
```
Posted by Kosmoyin. Last updated.
Post not yet marked as solved
2 Replies
642 Views
Hi! In the WWDC video for Object Capture, at 17:38, they drag and edit the bounds of the object. Can someone please guide me on how to do this or how to get started? Is there any sample code anywhere for editing bounds? https://developer.apple.com/videos/play/wwdc2021/10076/
Posted by saeidg. Last updated.
Post not yet marked as solved
0 Replies
384 Views
I am using the default HelloPhotogrammetry app: https://developer.apple.com/documentation/realitykit/creating_a_photogrammetry_command-line_app/ My system originally did not meet the specs because of a GPU issue, so to solve it I bought the Apple-supported Blackmagic eGPU. Despite the eGPU, I get this error when I run the tool:

```
apply_selection_policy_once: prefer use of removable GPUs (via (null):GPUSelectionPolicy->preferRemovable)
```

I have deduced that the application running it needs the GPUSelectionPolicy key: https://developer.apple.com/documentation/bundleresources/information_property_list/gpuselectionpolicy I tried modifying Terminal's plist to the updated value, but had no luck. I believe the command-line tool built in Xcode needs the updated value; I need help with that so the system will use the eGPU. I did create a property list within the macOS app and added GPUSelectionPolicy with preferRemovable, and I am still getting the same error. Please advise. Also, to note: I did try temporarily turning off "Prefer External GPU" for Terminal, and the photogrammetry processing did run, but it was taking a while (over 30 minutes), so I ended up killing the task. Activity Monitor showed that my internal GPU was being used, not the eGPU I am trying to use. Previously, when I did not have the eGPU plugged in, I would get an error saying that my specs did not meet the criteria, so it was interesting to see that it now considered my Mac qualified (which it technically was); it just did the processing on the less powerful GPU.
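For reference, GPUSelectionPolicy is an Info.plist entry that has to live inside the bundle of the executable doing the work; editing the plist of a signed app such as Terminal invalidates its code signature, which is likely why that route failed. A minimal fragment for the command-line tool's own Info.plist (a sketch of the key placement, not a guaranteed fix for this error):

```xml
<!-- Info.plist of the app or tool running the PhotogrammetrySession -->
<key>GPUSelectionPolicy</key>
<string>preferRemovable</string>
```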
Posted. Last updated.
Post not yet marked as solved
1 Reply
333 Views
I'm creating a custom scanning solution for iOS and using RealityKit Object Capture PhotogrammetrySession API to build a 3D model. I'm finding the data I'm sending to it is ignoring the depth and not building the model to scale. The documentation is a little light on how to format the depth so I'm wondering if someone could take a look at some example files I send to the PhotogrammetrySession. Would you be able to tell me what I'm not doing correctly? https://drive.google.com/file/d/1-GoeR_KMhX_i7-y8M8ElDRrySasOdoHy/view?usp=sharing Thank you!
Posted. Last updated.
Post not yet marked as solved
0 Replies
282 Views
I'm making an app that captures data using ARKit and will ultimately send the images, depth, and gravity to an Object Capture photogrammetry agent. I need to use the depth data and produce a model with correct scale, so from what I understand I need to send the depth file and set proper EXIF data in the image. Since I'm getting the images and depth from ARKit, I'll need to set the EXIF data manually before saving the images. Unfortunately the documentation on this is a bit light, so could you let me know what EXIF data needs to be set for the photogrammetry to create the model at the proper scale? If I set up my photogrammetry sample with manual metadata like this:

```swift
var sample = PhotogrammetrySample(id: id, image: image)
var dict: [String: Any] = [:]
dict["FocalLength"] = 23.551325
dict["PixelWidth"] = 1920
dict["PixelHeight"] = 1440
sample.metadata = dict
```

I get the following error in the output, and the depth is ignored:

```
[Photogrammetry] Can't use FocalLenIn35mmFilm to produce FocalLengthInPixel! Punting...
```
Posted. Last updated.
Post not yet marked as solved
0 Replies
213 Views
Hi all, for some reason I cannot get depth data from the rear iPad Pro cameras using the CaptureSample app from the 3D Object Capture Xcode sample. It works if I override it to use the front-facing camera. Is there an issue with the AVCaptureDevice for the rear wide cameras? Thanks!
Posted by parche. Last updated.
Post not yet marked as solved
0 Replies
290 Views
How can I adjust the bounding box for Object Capture reconstruction so that it constructs just the object and not all the surrounding data? I think it is done by providing a bounding box to PhotogrammetrySession.Request.modelFile(url:detail:geometry:), but I can't seem to figure out the proper way to do it.
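A sketch of what that might look like. Note: I haven't verified the exact Geometry initializer, so treat the `bounds:` label and the example values as assumptions to check against the RealityKit headers for your SDK:

```swift
import Foundation
import RealityKit

// Sketch (unverified): request a model clipped to a tightened bounding box.
// The Geometry initializer and `bounds:` label are assumptions; check the
// RealityKit headers for your SDK version.
let session = try PhotogrammetrySession(
    input: URL(fileURLWithPath: "/path/to/images", isDirectory: true))

let bounds = BoundingBox(min: [-0.3, 0.0, -0.3], max: [0.3, 0.5, 0.3]) // metres, example values
let geometry = PhotogrammetrySession.Request.Geometry(bounds: bounds)

let request = PhotogrammetrySession.Request.modelFile(
    url: URL(fileURLWithPath: "/path/to/model.usdz"),
    detail: .full,
    geometry: geometry)
try session.process(requests: [request])
```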
Posted. Last updated.