Post not yet marked as solved
I'm using Xcode 13 after recently updating to macOS Monterey, and only after updating am I getting this error: [MTLDebugCommandBuffer lockPurgeableObjects]:2103: failed assertion `MTLResource 0x14a8a8cc0 (label: null), referenced in cmd buffer 0x149091400 (label: null) is in volatile or empty purgeable state at commit'
I haven't changed my code at all since updating to the latest OS, and it worked perfectly before. How can I fix this? I don't see why I shouldn't be able to commit a command buffer that references a texture resource in a volatile or empty purgeable state.
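For what it's worth, the assertion fires when a resource is still marked volatile or empty at the moment the command buffer is committed. One possible workaround (a sketch, assuming `texture` is the purgeable MTLResource the assertion names and `commandBuffer` is the buffer being committed) is to make the resource non-volatile before committing:

```swift
import Metal

// Sketch: mark the resource non-volatile before commit so the debug layer
// doesn't trip on a volatile/empty resource referenced by the command buffer.
let previousState = texture.setPurgeableState(.nonVolatile)
_ = previousState // state the resource was in before this call

commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Optionally return the resource to a volatile state once the GPU is done with it.
_ = texture.setPurgeableState(.volatile)
```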
So, I've modified the CaptureSample iOS app to take photos using the TrueDepth front camera. It worked perfectly, and I have TIF depth maps together with the gravity vector and the photos I took.
Using the HelloPhotogrammetry command line, I created the meshes without any problems.
I notice the meshes have a consistent size between them; for example, creating a mesh of my face and a mesh of my nose, the nose mesh fits perfectly on top of the nose on the face mesh! Great!
BUT, when I open the meshes in Maya, for example, they are really really tiny!
I was expecting to see the objects at their proper scale, and hopefully be able to take measurements in Maya to see if they match the real measurements of the scanned object, but they don't come out at the right size at all. I tried setting Maya to meters, centimetres and millimetres, but it always imports the meshes really tiny. I have to apply a scale of 100 to be able to see the meshes at all, but then they don't measure correctly. By trial and error, I found that scaling the meshes by 86 makes them match real-world scale in centimetres.
Is there a proper space conversion that needs to be applied to the mesh to convert it to the real world scale?
Could the problem be that I'm using the TrueDepth camera instead of the back camera, and the depth-map values come in a different scale than what HelloPhotogrammetry expects?
Hi guys!
I'm studying CoreML model conversion.
I want to convert a model that processes 3D point-cloud data, but I can't work out how to specify the input shape.
The shape of the point-cloud input depends on the number of points, which varies every time the LiDAR captures data.
Is there any way to handle this?
Hi!
I'm really excited to try the new Object Capture API. I have an iPhone 12 Pro (with LiDAR) but an old MacBook. I'm planning to get a new MacBook to run the RealityKit and photogrammetry software, as shown in this example: https://developer.apple.com/videos/play/wwdc2021/10076/.
Are there any restrictions on the Mac hardware or is it fine as long as they support macOS 12.0+ Beta and Xcode 13.0+?
Thanks!
My app uses SceneKit to do 3D rendering, and on the iPad Pro, it detects the 120Hz screen and lets you pick that as a target frames per second in the settings. All works well.
On the iPhone 13 Pro, it detects the screen and shows the option, but everything seems to be capped at 60Hz regardless of what you set the SceneView's preferredFramesPerSecond to.
Does anybody have an idea what I need to do on the iPhone to get this to work? Thanks!
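In case it helps future readers: on iPhone, frame rates above 60 Hz require an explicit opt-in via the app's Info.plist, which is why the same code runs at 120 Hz on an iPad Pro but is capped on an iPhone 13 Pro. A minimal fragment:

```xml
<!-- Info.plist: opt in to ProMotion frame rates above 60 Hz on iPhone -->
<key>CADisableMinimumFrameDurationOnPhone</key>
<true/>
```

With that key set, setting preferredFramesPerSecond on the SceneView should behave as it does on iPad.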
I'm on macOS 12 (Monterey) and Xcode 13, but I still get the error "Cannot find type 'PhotogrammetrySession' in scope".
I tried restarting Xcode and restarting the Mac, but I still get the error. I have imported RealityKit.
I'm trying to run the HelloPhotogrammetry code provided by Apple.
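One thing worth ruling out: PhotogrammetrySession only resolves when building for macOS 12 or later, so an iOS scheme or an older deployment target produces exactly this error. A minimal probe, assuming a macOS target:

```swift
import RealityKit

#if os(macOS)
if #available(macOS 12.0, *) {
    // If this compiles, the symbol is in scope for this target.
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .normal
}
#endif
```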
I have a 3D scene with a perspective camera and I'd like some of the elements to be projected using an orthographic projection instead.
My use case is that I have some 3D elements with attached text nodes. I'd like the text on these nodes to always be the same size no matter how far away the camera is. Is there a way I can use SceneKit to mix and match? Or is there another technique I can use?
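As far as I know, SceneKit has no built-in way to mix projections per node, but one technique that may fit is to keep the perspective camera and counter-scale the text nodes with camera distance using an SCNTransformConstraint, so their on-screen size stays constant. A sketch, where `sceneView` and `textNode` stand in for your own view and node:

```swift
import SceneKit

// Scale the node proportionally to its distance from the camera so that the
// perspective shrink is cancelled out and the node keeps a constant screen size.
let constantScreenSize = SCNTransformConstraint(inWorldSpace: false) { node, transform in
    guard let cameraNode = sceneView.pointOfView else { return transform }
    let distance = simd_distance(node.simdWorldPosition, cameraNode.simdWorldPosition)
    node.simdScale = SIMD3<Float>(repeating: distance * 0.1) // 0.1 is an arbitrary size factor to tune
    return transform
}
textNode.constraints = [constantScreenSize]
```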
I work in the thoroughbred industry. I am interested in capturing a 3D model of a racehorse (at rest) to later use in a dataset for analysis.
A recent paper (see "Body measurement of riding horses with a versatile tablet-type 3D scanning device") used the iPhone 12, a commercial app (Scandy), and LiDAR to create 3D models of the horse. It reads as a fairly straightforward process; however, I was wondering if there is any benefit to using Object Capture over LiDAR. It would seem just as easy to walk around the horse capturing a video and then extract frames from the video for Object Capture.
In terms of creating 3D models, is one method better/more accurate than another?
Hi,
I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here).
But when I run the app, I get the following error message:
A GPU with supportsRaytracing is required
I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)?
Thanks in advance.
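For anyone debugging the same message, a quick way to check what the frameworks see is to list every Metal device in the system along with its supportsRaytracing flag, which is the capability the error names:

```swift
import Metal

// Enumerate all GPUs and report whether each one advertises ray-tracing support.
for device in MTLCopyAllDevices() {
    print(device.name, "supportsRaytracing:", device.supportsRaytracing)
}
```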
I know it's uncool to ask vague questions here, but what do they call it when you create a world and follow it with a camera in Swift? Like an RPG? Like Doom?
I want to try and learn that now. And more importantly can it be done without using the Xcode scene builder? Can it be done all via code?
Thanks, as always. Without the forum I would never have gotten much farther than "Hello World!"
I am new to Apple app and Metal shader development, and I want to learn this skill from the very beginning. Is there a full list of tutorials? Thanks.
Hi,
I have a Mac Pro, and am looking to buy a Sapphire AMD RX 580 8GB GPU.
(Since my PowerColor R9 280X 3GB is just shy of the minimum 4GB requirement...)
And I'm wondering, what if I bought two RX 580s? Would Object Capture take advantage of a dual-GPU setup? ... and if so, would it increase the performance?
PS. Just to clarify: I don't want / am not talking about a Dual-Link / CrossFire setup (since that practice is kinda "dead"...) ... just wondering if Object Capture would recognise "aaah, there are two identical GPUs in the system, let's use both..."
Hi,
I'm getting this error code "-21" ... anyone know what it means?
cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
2021-07-17 14:01:29.621817+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Using configuration: Configuration(isObjectMaskingEnabled: true, sampleOrdering: RealityFoundation.PhotogrammetrySession.Configuration.SampleOrdering.unordered, featureSensitivity: RealityFoundation.PhotogrammetrySession.Configuration.FeatureSensitivity.normal)
2021-07-17 14:01:29.669711+0200 HelloPhotogrammetry[2578:40148] Metal API Validation Enabled
2021-07-17 14:01:29.709715+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
Program ended with exit code: 1
The CaptureSample App creates a depth map image (.TIF) and a gravity file (.TXT).
What's the role of those files?
Are they used ONLY to calculate the scale of the object, or do they contribute to other areas of the algorithm as well?
Hello,
in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed to an object with SceneKit and a shader modifier.
I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2.
Specifically I'm interested in what would be the best way to pass in the background of the RealityKit environment as a texture to the custom shader.
In SceneKit this was really easy as one could just do the following:
material.diffuse.contents = sceneView.scene.background.contents
As the texture input for custom material requires a TextureResource I would probably need a way to create a CGImage from the background or camera feed on the fly.
What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:
guard
    let frame = arView.session.currentFrame,
    let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
    let textureResource = try? TextureResource.generate(from: cameraFeedTexture, withName: "cameraFeedTexture", options: .init(semantic: .color))
else {
    return
}

// assign texture
customMaterial.custom.texture = .init(textureResource)

import VideoToolbox // for VTCreateCGImageFromCVPixelBuffer

extension CGImage {
    public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
        return cgImage
    }
}
This seems wasteful though and is also quite slow.
Is there any other way to accomplish this efficiently or would I need to go the post processing route?
In the sample code, the displayTransform for the view is also passed in as an SCNMatrix4. CustomMaterial's custom.value only accepts a SIMD4, though. Is there another way to pass in the matrix?
Another idea I've had was to create a CustomMaterial from an OcclusionMaterial which already seems to contain information about the camera feed but so far had no luck with it.
Thanks for the support!
Has anyone run into limitations with an 8gb RAM M1 Mac Mini? I'm expecting there are some compromises with only 8gb, but curious about real-world results.
It's impressive to see that the M1 Mac Mini is capable of running PhotogrammetrySession at all, unlike my 2020 Intel MBP. I'm planning to buy one just for this purpose.
The requirements slide from the presentation says that any M1 Mac will work, whereas Intel machines need 16gb of RAM and a 4gb AMD video card.
I'm inclined to get a Mac Mini with 16gb, but that config isn't available near me for pickup and delivery is more than a week out. If I knew that 8gb was enough to process 150 or so photos at high quality that's probably all I would need and could save $200 and get it immediately.
Side note: I've been doing photogrammetry on PCs for years and would run out of memory occasionally using Agisoft on a 64gb system, which I needed to upgrade to 128gb. Those were large datasets (500+ photos) covering several hundred square meters from a drone at high resolution. My object scanning needs won't be as demanding, however 8gb just doesn't seem like much to work with. But, I suppose that even Nvidia 3070 Ti's only have 8gb of video memory and the M1's unified memory architecture might make that a better comparison than thinking about traditional system memory...
Hi, In SceneKit I pass custom parameters to a metal shader using a SCNAnimation, for example:
let revealAnimation = CABasicAnimation(keyPath: "revealage")
revealAnimation.duration = duration
revealAnimation.toValue = toValue
let scnRevealAnimation = SCNAnimation(caAnimation: revealAnimation)
material.addAnimation(scnRevealAnimation, forKey: "Reveal")
How would I do something similar with a Metal shader in RealityKit?
I saw in the Octopus example:
//int(params.uniforms().custom_parameter()[0])
But it's commented out, and there is no example of how to set the custom variable and animate it (unless I missed it).
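Not an official answer, but one workable pattern in RealityKit is to drive custom.value from the per-frame SceneEvents.Update event instead of a Core Animation animation. A sketch, where arView and modelEntity are placeholders for your own view and entity:

```swift
import RealityKit
import Combine

var elapsed: TimeInterval = 0
var updateSubscription: Cancellable?

// Each frame, advance the "revealage" parameter and write it into the
// material's custom value slot (a SIMD4<Float>), mirroring the CABasicAnimation.
updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { event in
    elapsed += event.deltaTime
    let revealage = Float(min(elapsed / 2.0, 1.0)) // ramp 0 -> 1 over two seconds
    guard var model = modelEntity.model,
          var material = model.materials.first as? CustomMaterial else { return }
    material.custom.value = SIMD4<Float>(revealage, 0, 0, 0)
    model.materials = [material]
    modelEntity.model = model
}
```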
Great session BTW
Thanks
Hi
I'm trying to become familiar with RealityKit 2. When I build the code from the session, I get compile errors.
Any advice?
Link to the sample code below
https://developer.apple.com/documentation/realitykit/building_an_immersive_experience_with_realitykit
Is there some code to create the example GUI app you used in the demo?
I do not know how to run it.