Exporting Point Cloud as 3D PLY Model

I have seen this question come up a few times here on the Apple Developer Forums (recently noted here), though I tend to find myself misunderstanding what technology and steps are required to achieve the goal.

In general, my colleague and I are trying to use Apple's Visualizing a Point Cloud Using Scene Depth sample project from WWDC 2020 and save the rendered point cloud as a 3D model. I've seen this achieved (there are quite a few samples of the final exports available on popular 3D modeling websites), but I remain unsure how to do so.

From what I can ascertain, Model I/O seems like an ideal framework choice: create an empty MDLAsset, append an MDLObject for each point, and finally end up with a model ready for export.

How would one go about converting each "point" to an MDLObject to append to the MDLAsset? Or am I going down the wrong path?

Accepted Reply

Okay, well, probably the easiest approach to exporting the point cloud to a 3D file is to make use of SceneKit.

The general steps would be as follows:
  1. Use Metal (as demonstrated in the point cloud sample project) to unproject points from the depth texture into world space.

  2. Store world space points in a MTLBuffer. (You could also store the sampled color for each point if you wanted to use that data in your model)

  3. When the command buffer has completed, copy the world space points from the buffer and append them to an array. Repeat with the next frame. (Consider limiting how large you allow this array to grow; otherwise, you will eventually run out of memory)

  4. When you are ready to write out your file (i.e. you have finished "scanning"), create an SCNScene.

  5. Iterate through the stored world space points and add an SCNNode with some geometry (e.g., an SCNSphere) to your SCNScene. (If you also stored a color, use it as the diffuse material parameter of your geometry)

  6. Use write(to:options:delegate:progressHandler:) to write your point cloud model to a supported 3D file format, like .usdz (see the sketch just after this list)
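
To make steps 4 through 6 concrete, here is a minimal sketch, assuming you have already copied the world-space points (and, optionally, per-point colors) into plain Swift arrays as described in step 3; the function name and the sphere radius are arbitrary choices for illustration:

Code Block
import SceneKit
import UIKit
import simd

// Minimal sketch: build an SCNScene from accumulated points and write it out.
// `points` are world-space positions copied off the GPU; `colors` are the sampled colors.
func writePointCloud(points: [simd_float3], colors: [UIColor], to url: URL) {
    let scene = SCNScene()
    for (point, color) in zip(points, colors) {
        let sphere = SCNSphere(radius: 0.001)           // small sphere per point
        sphere.firstMaterial?.diffuse.contents = color  // sampled color as the diffuse parameter
        let node = SCNNode(geometry: sphere)
        node.simdPosition = point                       // world-space position
        scene.rootNode.addChildNode(node)
    }
    // Export to a supported 3D format, e.g. a .usdz or .scn URL.
    let didWrite = scene.write(to: url, options: nil, delegate: nil, progressHandler: nil)
    print("Export succeeded: \(didWrite)")
}

Note that one SCNSphere node per point gets expensive quickly for large clouds, so you may want to cap how many points you feed in while experimenting.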

  • Hi, @gchiste! Thanks a lot for all your answers! Can you suggest how to create a point cloud from not just one TrueDepth frame but many of them? Do I understand correctly that I need to get the inverted transform matrix from the ARFrame to get the position of the camera in world space, and then multiply all my XYZ points by this matrix? Can I build a point cloud from all sides this way?

Replies

Hello Brandon,

Before going down this path, are you sure that what you want to export as a 3D model is a point cloud, as opposed to the sceneReconstruction ARMeshGeometry?
Thank you for your reply, @gchiste. That's a fair question, and frankly, I'm embarrassed to say my response is, "I don't know." Having looked at both the Visualizing and Interacting with a Reconstructed Scene and Visualizing a Point Cloud Using Scene Depth sample projects, I find myself intrigued by the possibility of using the LiDAR camera to recreate 3D environments.

While I am not wholly expecting a photorealistic 3D representation of the world around me, samples of both the Reconstructed Scene and Point Cloud projects, as seen on YouTube and Twitter, show amazing potential for bringing the gathered representations into 3D modeling programs for a variety of use cases. The Point Cloud sample project creates a sort of whimsical representation of the environment, and I would love to be able to take what I see on screen when running that sample project and, effectively, export it as a 3D model.

I see this question come up a bit around the Apple Developer Forums and other technical resources, though I think a large disconnect is knowing which technologies one would need to learn to take those sample projects and create a 3D model. The ARMeshGeometry route seems a bit more straightforward, but knowing which Apple frameworks would connect the points/particles shown in the Visualizing a Point Cloud Using Scene Depth sample project to some sort of model output would, I think, intrigue many.
Hi @gchiste,

Thank you so much for your detailed reply. It really helped break down the steps necessary to gather the point cloud coordinates and color values, and how to connect that data to a SceneKit scene that could be rendered out as a 3D model. I am thrilled with this and am working my way through the Visualizing a Point Cloud Using Scene Depth sample project to adapt your logic, both for understanding and for testing.

If I may ask a follow-up to your detailed guidance, looking at the Visualizing a Point Cloud Using Scene Depth sample project:


Use Metal (as demonstrated in the point cloud sample project) to unproject points from the depth texture into world space.

I notice that this function exists in the project's Shaders.metal file (as the unprojectVertex call). Is it possible to use the results of calling this function and save them to an MTLBuffer, or does the unprojectVertex call need to be adapted to run on the CPU as each frame is rendered, with those results then saved to an MTLBuffer? Perhaps I'm getting away from the root of the question, but I am unsure whether what exists in Shaders.metal can yield the position/color data I need, or whether I need to write my own function to do that outside of Shaders.metal.


Hello Brandon,


Is it possible to use the results of the call to this function and save the results to a MTLBuffer?

Yes, that is definitely the recommended approach. In fact, the sample already does this: it stores the results in particlesBuffer. Add a completion handler to the command buffer, and you can access this data on the CPU, e.g.:

Code Block
commandBuffer.addCompletedHandler { [self] _ in
    print(particlesBuffer[9].position) // Prints the 10th particle's position
}


Thank you for your replies, @gchiste. While working in Metal and SceneKit is a learning experience, this sample project and your guidance certainly make a world of difference in building a strong foundation in how this technology works and how best to apply it to our own apps.

Your follow-up regarding the particlesBuffer position makes sense, and I have been able to successfully print the position of specific points by hard-coding a particlesBuffer index, per your example.

While you've been immensely helpful and much of this is on me to learn, I'm wondering if you could note where one would add the

Code Block
commandBuffer.addCompletedHandler { [self] _ in
    let position = particlesBuffer[9].position
}


call (I presume in the renderer's accumulatePoints() function). What I'm struggling to wrap my head around is how to actually read the positions from the particlesBuffer. Normally, I would expect to iterate over the elements and copy the position (or color) to a new array, though I find that particlesBuffer cannot be iterated over. Determined to figure this out!
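
As it turns out, the buffer can still be read by index even though it does not support for-in iteration, so the copy-out can be a plain loop inside the completion handler. A rough sketch (assuming, as the sample does, that currentPointCount tracks how many particles are currently valid, and that each particle's position and color are simd_float3 values):

Code Block
commandBuffer.addCompletedHandler { [self] _ in
    // Copy each particle's world-space position and color off the shared Metal buffer.
    var positions = [simd_float3]()
    var colors = [simd_float3]()
    for i in 0..<currentPointCount {
        positions.append(particlesBuffer[i].position)
        colors.append(particlesBuffer[i].color)
    }
    // `positions` / `colors` can now be appended to an accumulating array or written to a file.
}
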
Hi @gchiste,

Just wanted to thank you for all of your replies. Your guidance was extremely helpful, and I have been able to achieve what I was looking to achieve. I have additional questions on other technologies outside of ARKit/Metal that relate to this, which I will post as separate questions where relevant. This has been a great learning experience!
Hey Brandon,

I'm trying to achieve the same thing and export the point cloud as a PLY model (using the same sample project). I am very new to ARKit, and I couldn't figure out exactly how to do so by reading this thread. You mentioned in your last comment that you have achieved this; I was wondering if you could provide a bit more detail on the implementation.

In particular, how did you end up saving the data from the particlesBuffer into a PLY model?
Hi @JeffCloe,

Forgive me if this gets lengthy, but I'll try to stick to the key points on how I achieved success with this task. @gchiste's help pointed me in all of the right directions, and while I'm certainly an ARKit/point cloud novice, that help taught me quite a bit. To keep the reply as brief as possible, I will assume that you have the Visualizing a Point Cloud using Scene Depth project already downloaded and accessible.

Firstly, you'll need some way to tell the app when you are done "scanning" the environment and to save the point cloud to a file. For ease, I added a simple UIButton to my ViewController.swift's viewDidLoad method:

Code Block
// Setup a save button
let button = UIButton(type: .system, primaryAction: UIAction(title: "Save", handler: { (action) in
    self.renderer.savePointsToFile()
}))
button.translatesAutoresizingMaskIntoConstraints = false
self.view.addSubview(button)
NSLayoutConstraint.activate([
    button.centerXAnchor.constraint(equalTo: self.view.centerXAnchor),
    button.centerYAnchor.constraint(equalTo: self.view.centerYAnchor)
])

Naturally, in your Renderer.swift, you'll need to add a new method to handle when the button is tapped. Additionally, you'll likely want to add a variable to Renderer.swift, something like var isSavingFile = false, to prevent the Save button from being tapped repeatedly while a save is in progress. Most importantly, setting up your savePointsToFile() method in Renderer.swift is where the bulk of the work takes place.

Code Block
private func savePointsToFile() {
    guard !self.isSavingFile else { return }
    self.isSavingFile = true
    // 1
    var fileToWrite = ""
    let headers = ["ply", "format ascii 1.0", "element vertex \(currentPointCount)",
                   "property float x", "property float y", "property float z",
                   "property uchar red", "property uchar green", "property uchar blue",
                   "property uchar alpha", "element face 0",
                   "property list uchar int vertex_indices", "end_header"]
    for header in headers {
        fileToWrite += header
        fileToWrite += "\r\n"
    }
    // 2
    for i in 0..<currentPointCount {
        // 3
        let point = particlesBuffer[i]
        let colors = point.color
        // 4
        let red = colors.x * 255.0
        let green = colors.y * 255.0
        let blue = colors.z * 255.0
        // 5
        let pvValue = "\(point.position.x) \(point.position.y) \(point.position.z) \(Int(red)) \(Int(green)) \(Int(blue)) 255"
        fileToWrite += pvValue
        fileToWrite += "\r\n"
    }
    // 6
    let paths = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)
    let documentsDirectory = paths[0]
    let file = documentsDirectory.appendingPathComponent("ply_\(UUID().uuidString).ply")
    do {
        // 7
        try fileToWrite.write(to: file, atomically: true, encoding: String.Encoding.ascii)
        self.isSavingFile = false
    } catch {
        print("Failed to write PLY file", error)
    }
}


Going through the method, I'll detail my approach broken down by the numbered comments:

1) A .ply file, as I recently learned, requires a header declaring the file format, the number of vertices, and the properties of each vertex (the float x, y, z position and the uchar color components). In this case, since we are using only points and not a mesh, the header also declares zero faces and the (empty) list of face vertex indices before closing with end_header. As a whole, it's worth mentioning that I am effectively just creating a "text" file, with each line after the header holding the details for one point, and then saving that file out with a .ply extension. (A schematic of the resulting file appears after note 7 below.)

2) Using the currentPointCount, which is already being calculated and incremented through the sample project, I am iterating from 0 through the number of collected points.

3) Using the index, I am accessing the relevant point through the particlesBuffer, which provides me the point as a ParticleUniforms. This gives me access to the relevant point data, which includes the point's X, Y, and Z position, as well as the RGB color values.

4) I am pulling the color out as its own value, then multiplying the red, green, and blue components by 255.0 to get standard 0–255 RGB values. The color data is stored as a simd_float3, so each color channel maps to a component (red is x, green is y, blue is z).

5) Creating a string with the data formatted as the .ply file expects allows it to be appended to the existing fileToWrite, which already contains our header. After some trial and error, I found this syntax created the best result (in this case, converting the RGB values from Floats to Ints, which truncates them). The last column is the point's alpha value, which I am setting to 255, as each point should be fully visible. The pvValue string is appended to fileToWrite, followed by a carriage return/line feed so the next point lands on the subsequent line.

6) Once all of the points have been added to fileToWrite, I set up a file path/file name for where I want to write the file.

7) Finally, the file is written to the desired destination. At this point, you can decide what you want to do with the file, whether that's providing the user an option to save/share it, uploading it somewhere, etc. I set isSavingFile back to false, and that's the setup. Once I grab my saved file (in my case, I present the user a UIActivityViewController to save/share the file) and preview it (I'm using MeshLab on my Mac for preview purposes), I see the rendered point cloud. I've also tried uploading to Sketchfab and it seems to work well.
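
For reference, the file written above is plain ASCII text that looks schematically like this, with one line per point after end_header (placeholder values shown rather than real data):

Code Block
ply
format ascii 1.0
element vertex <currentPointCount>
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
property uchar alpha
element face 0
property list uchar int vertex_indices
end_header
<x> <y> <z> <red> <green> <blue> 255
<x> <y> <z> <red> <green> <blue> 255
...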

A few notes:
  • My end goal is to save the point cloud as a .usdz file, not necessarily a .ply. @gchiste pointed me in the right direction: creating something like an SCNSphere, coloring its material's diffuse contents with the relevant point color, and then setting its X, Y, and Z position in my SceneKit view's coordinate space. I did manage to get a SceneKit representation of the point cloud working, but the app crashes when I try to save it out as a .usdz, with no particular indication as to why. I filed feedback on this issue.

  • The generated PLY file can be quite large depending on how many points you've gathered. While I'm a novice at PLY and modeling, I believe that writing the PLY file in a different format, such as binary little-endian or big-endian encoding, could yield a smaller file. I haven't figured that out yet, but I saw an app on the App Store that seems to gather point clouds and generate a PLY file in binary little-endian format, with a much smaller file size. Just worth mentioning. (A rough sketch of the binary layout appears after the comments below.)

  • This does not at all account for performance (there may be more efficient ways of doing this), nor does it provide the user any feedback that file writing is taking place, which can be time-consuming. If you're planning to use this in a production app, these are things to consider.

  • Hi @brandonK212,

    Have you by any chance figured out how to implement the file conversion to binary little endian? I am wondering how to do this. Thanks!

  • Hello @ma0909 and @brandonK212! I created a modified version of the sample app that allows for saving and exporting in ASCII or binary (LE and BE) formats: https://github.com/ryanphilly/IOS-PointCloud. The implementation for converting to binary is in PLYFile.swift. Feel free to use whatever you want from the repo!
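
For a rough idea of what the binary variant involves (a hedged sketch based on the ASCII example earlier in this thread, not the implementation in that repository): the header stays plain text but declares format binary_little_endian 1.0, and each vertex is then written as packed little-endian bytes rather than as a text line.

Code Block
// Sketch only: assumes the same particlesBuffer / currentPointCount as the ASCII example above.
func savePointsAsBinaryPLY(to url: URL) throws {
    var header = "ply\nformat binary_little_endian 1.0\n"
    header += "element vertex \(currentPointCount)\n"
    header += "property float x\nproperty float y\nproperty float z\n"
    header += "property uchar red\nproperty uchar green\nproperty uchar blue\n"
    header += "end_header\n"

    var data = Data(header.utf8)
    for i in 0..<currentPointCount {
        let point = particlesBuffer[i]
        // Positions as 32-bit little-endian floats.
        for value in [point.position.x, point.position.y, point.position.z] {
            withUnsafeBytes(of: value.bitPattern.littleEndian) { data.append(contentsOf: $0) }
        }
        // Colors as single bytes, clamped to 0–255.
        data.append(UInt8(max(0, min(255, point.color.x * 255))))
        data.append(UInt8(max(0, min(255, point.color.y * 255))))
        data.append(UInt8(max(0, min(255, point.color.z * 255))))
    }
    try data.write(to: url)
}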

Hello @brandonK212

I tried out your code sample, but I can't get the PLY file as an output. I've pressed the save button many times, but the file does not appear in any of my folders in the Files app.

My viewDidLoad looks something like this:

Code Block
override func viewDidLoad() {
    super.viewDidLoad()
    // Setup a save button
    let button = UIButton(type: .system, primaryAction: UIAction(title: "Save", handler: { (action) in
        self.renderer.savePointsToFile()
    }))
    button.translatesAutoresizingMaskIntoConstraints = false
    self.view.addSubview(button)
    NSLayoutConstraint.activate([
        button.centerXAnchor.constraint(equalTo: self.view.centerXAnchor),
        button.centerYAnchor.constraint(equalTo: self.view.centerYAnchor)
    ])
    guard let device = MTLCreateSystemDefaultDevice() else {
        print("Metal is not supported on this device")
        return
    }

And I've put savePointsToFile in Renderer.swift, but it only allows me to define it as func, not private func, because of the error "'savePointsToFile' is inaccessible due to 'private' protection level".
Code Block
var isSavingFile = false

func savePointsToFile() {
    guard !self.isSavingFile else { return }
    self.isSavingFile = true
    ...


I'm still figuring out Swift, so it would be great to hear feedback from you, or other developers, on how to fix this issue.


Hi @Ricards97,

You are indeed correct that the savePointsToFile() method should not be private. That was an error on my part when posting; great catch and my apologies.

With regard to your question as to why the .ply file is not saving: the likely answer is that the file is saving, but to the Documents directory of the app's container (which is used for internal storage and is not something you have direct access to via the UI). While not totally specific to the point cloud example, there are a few approaches you can take to access the saved .ply:

Saving via iTunes File Sharing/Files App

  • You can enable both "iTunes File Sharing" and "Supports opening documents in place" in your app's Info.plist file. Doing so serves a two-fold purpose: you can connect your iOS device to your Mac and access the device via Finder, where the "Files" tab lets you navigate to your point cloud app and see the saved .ply files (which can then be copied to another location on your Mac for use). It also makes the .ply files accessible via the Files app on your iOS device, which should make it easier to access a file and use it for your desired purpose.

To do so, add the following entries to your Info.plist, setting the value of each to true:
UIFileSharingEnabled
LSSupportsOpeningDocumentsInPlace

Saving via delegate method

  • You could also implement a mechanism in your app to call a method once your file is done writing, then present a UI element within your app that lets you do something with the saved .ply file (such as share it via iMessage, e-mail it, save it to Files, etc.).

I'd suggest having a look at the documentation on how delegates work, but for a quick approach, try the following:
  • In Renderer.swift, at the end of the file (outside of the Renderer class), add a new protocol:

Code Block
protocol RendererDelegate: class {
    func didFinishSaving(path: URL)
}
  • Also in Renderer.swift, within the Renderer class, where variables are being defined, add a new variable like so:

Code Block
weak var delegate: RendererDelegate?

  • Lastly, still in Renderer.swift, you will want to add a line near the end of the savePointsToFile() method, just after the self.isSavingFile = false line. That line calls the delegate and provides the .ply file's URL, like so:

Code Block
delegate?.didFinishSaving(path: file)
  • Over in your ViewController.swift file, you'll need to do a few things: conform ViewController to the RendererDelegate protocol, add a function to handle when the delegate's didFinishSaving(path:) method is called, and present a UI component to allow you to share the .ply file.

At the top of ViewController.swift, where the ViewController class is defined, you will see that the class already inherits from UIViewController and conforms to ARSessionDelegate. Add a comma after ARSessionDelegate and conform to RendererDelegate, so that the line now looks like so:
Code Block
final class ViewController: UIViewController, ARSessionDelegate, RendererDelegate {
...
  • Somewhere within the ViewController class, add a new method to handle when the delegate is called. You are likely receiving an error that ViewController does not conform to RendererDelegate at this point; this step takes care of that error. The method should appear like so:

Code Block
func didFinishSaving(path: URL) {
    //
}
  • In the viewDidLoad() method, you will need to inform the Renderer that ViewController is the delegate. Currently, Renderer is being instantiated like so:

Code Block
renderer = Renderer(session: session, metalDevice: device, renderDestination: view)

Just below this line, add a new line:
renderer.delegate = self
  • Lastly, in the didFinishSaving(path:) method that we created in ViewController, you can add a UIActivityViewController, which will make the method appear like so:

Code Block
func didFinishSaving(path: URL) {
    DispatchQueue.main.async {
        let ac = UIActivityViewController(activityItems: [path], applicationActivities: nil)
        if let popover = ac.popoverPresentationController {
            popover.sourceView = self.confidenceControl
        }
        self.present(ac, animated: true)
    }
}


This was a lengthy and rather verbose explanation of how to add a delegate callback and present a UIActivityViewController to allow you to work with the saved .ply file. Note that if you are running your app on iPad, your UIActivityViewController needs a "source" to present from, by way of a popover. In my example, I am using the confidenceControl element that already exists in the sample app, but if you've modified the UI of the sample app, you may need to use another popover anchor, such as a button or other UI element.
Hello @brandonK212,
Thank you for the quick reply. It works very well now! Great solution.
Hello, I saw that you have studied LiDAR before, and I want to ask: how does this app (https://apps.apple.com/us/app/id1419913995) achieve scene reconstruction? Is it through Visualizing a Point Cloud Using Scene Depth, or the approach in https://developer.apple.com/forums/thread/654431?
Hi, brandonK212.
I have quite a similar problem to yours.
But because I'm a noob in Metal, I couldn't figure this out.
Could you email me (h890819j@gmail.com)?
Or could you give me any small advice on how to get the points' world-space positions into an array?
Hi @mjbmjbjhjghkj,

Without knowing more about the inner workings of the app you referenced, it would be tough to answer how they are building their scene reconstruction. Keep in mind that there are multiple ways to use the LiDAR scanner on iPad/iPhone to recreate the scanned environment and export it as a 3D model (the two common ways, as provided by Apple's sample code, are creating "point cloud" representations of the environment and reconstructing a scene using the ARMeshGeometry type, as shown in the Visualizing a Point Cloud Using Scene Depth and Visualizing and Interacting with a Reconstructed Scene sample projects, respectively).

From a quick glance, it looks more as though the app you referenced builds a 3D model of the environment by capturing the geometry of the world (gathered as ARMeshGeometry), converting that geometry to something like an SCNGeometry, using that SCNGeometry to create an SCNNode, and then adding each SCNNode to an SCNScene to build a "growing" model before finally exporting it as a 3D object. How the app creates the environmental texture applied to that 3D model, so it appears like a "real" representation, is not something I am familiar with, though the thread you referenced here on the Developer Forums has some extensive detail and discussion about how to convert ARMeshGeometry to SCNGeometry for this purpose.
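
For anyone exploring that route, here is a minimal, untextured sketch of converting an ARMeshAnchor's ARMeshGeometry into an SCNGeometry; the helper name is mine, and normals and classification are omitted for brevity:

Code Block
import ARKit
import SceneKit

// Sketch: wrap an ARMeshAnchor's vertex and face buffers in SceneKit geometry objects.
func scnGeometry(from meshAnchor: ARMeshAnchor) -> SCNGeometry {
    let mesh = meshAnchor.geometry

    // Vertex positions come straight from the ARGeometrySource's Metal buffer.
    let vertices = mesh.vertices
    let vertexSource = SCNGeometrySource(buffer: vertices.buffer,
                                         vertexFormat: vertices.format,
                                         semantic: .vertex,
                                         vertexCount: vertices.count,
                                         dataOffset: vertices.offset,
                                         dataStride: vertices.stride)

    // Faces are triangles; copy the index buffer into Data for SCNGeometryElement.
    let faces = mesh.faces
    let indexData = Data(bytes: faces.buffer.contents(),
                         count: faces.count * faces.indexCountPerPrimitive * faces.bytesPerIndex)
    let element = SCNGeometryElement(data: indexData,
                                     primitiveType: .triangles,
                                     primitiveCount: faces.count,
                                     bytesPerIndex: faces.bytesPerIndex)

    return SCNGeometry(sources: [vertexSource], elements: [element])
}

// Usage sketch: one node per mesh anchor, positioned by the anchor's transform.
// let node = SCNNode(geometry: scnGeometry(from: meshAnchor))
// node.simdTransform = meshAnchor.transform
// scene.rootNode.addChildNode(node)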