3D Graphics

Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

48 Posts
Post not yet marked as solved
4 Replies
1.9k Views
Has anyone run into limitations with an 8 GB RAM M1 Mac mini? I'm expecting there are some compromises with only 8 GB, but I'm curious about real-world results. It's impressive that the M1 Mac mini is capable of running PhotogrammetrySession at all, unlike my 2020 Intel MBP, and I'm planning to buy one just for this purpose.

The requirements slide from the presentation says that any M1 will work, whereas Intel machines need 16 GB of RAM and a 4 GB AMD video card. I'm inclined to get a Mac mini with 16 GB, but that configuration isn't available near me for pickup, and delivery is more than a week out. If I knew that 8 GB was enough to process 150 or so photos at high quality, that's probably all I would need; I could save $200 and get it immediately.

Side note: I've been doing photogrammetry on PCs for years and would occasionally run out of memory using Agisoft on a 64 GB system, which I needed to upgrade to 128 GB. Those were large datasets (500+ photos) covering several hundred square meters, shot from a drone at high resolution. My object-scanning needs won't be as demanding, but 8 GB just doesn't seem like much to work with. Then again, even an Nvidia 3070 Ti has only 8 GB of video memory, and the M1's unified memory architecture might make that a better comparison than thinking in terms of traditional system memory...
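For what it's worth, the hardware gate appears to come down to GPU properties rather than total system RAM. Apple's HelloPhotogrammetry sample ships a check along these lines (a sketch based on that sample, not a guarantee of how the private reconstruction pipeline budgets memory; the 4 GB threshold mirrors the requirements slide, and on an M1 the unified memory pool is what gets reported):

```swift
import Metal

// Roughly the hardware gate from Apple's HelloPhotogrammetry sample:
// a non-low-power GPU with barycentric-coordinate support and at least
// a 4 GB recommended working set, plus ray-tracing support somewhere.
func supportsObjectReconstruction() -> Bool {
    for device in MTLCopyAllDevices() where
        !device.isLowPower &&
        device.areBarycentricCoordsSupported &&
        device.recommendedMaxWorkingSetSize >= UInt64(4e9) {
        return true
    }
    return false
}

func supportsRayTracing() -> Bool {
    for device in MTLCopyAllDevices() where device.supportsRaytracing {
        return true
    }
    return false
}
```

An 8 GB M1 reports a working set well above 4 GB, so the session should start; whether 150 photos at .full detail fit at peak is a separate question, and requesting a lower detail level (e.g. .reduced) is the usual lever if it doesn't.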
Post not yet marked as solved
8 Replies
1.4k Views
Hello, in this project https://developer.apple.com/documentation/arkit/content_anchors/tracking_and_visualizing_faces there is some sample code that describes how to map the camera feed onto an object with SceneKit and a shader modifier. I would like to know if there is an easy way to achieve the same thing with a CustomMaterial and RealityKit 2. Specifically, I'm interested in the best way to pass the background of the RealityKit environment as a texture to the custom shader. In SceneKit this was really easy, as one could just do the following:

    material.diffuse.contents = sceneView.scene.background.contents

Since the texture input for CustomMaterial requires a TextureResource, I would probably need a way to create a CGImage from the background or camera feed on the fly. What I've tried so far is accessing the captured image from the camera feed and creating a CGImage from the pixel buffer like so:

    guard
        let frame = arView.session.currentFrame,
        let cameraFeedTexture = CGImage.create(pixelBuffer: frame.capturedImage),
        let textureResource = try? TextureResource.generate(from: cameraFeedTexture,
                                                            withName: "cameraFeedTexture",
                                                            options: .init(semantic: .color))
    else {
        return
    }

    // assign texture
    customMaterial.custom.texture = .init(textureResource)

    extension CGImage {
        public static func create(pixelBuffer: CVPixelBuffer) -> CGImage? {
            var cgImage: CGImage?
            VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
            return cgImage
        }
    }

This seems wasteful, though, and is also quite slow. Is there any other way to accomplish this efficiently, or would I need to go the post-processing route? In the sample code, the displayTransform for the view is also passed as an SCNMatrix4, but CustomMaterial's custom.value only accepts a SIMD4. Is there another way to pass in the matrix?

Another idea I've had was to create a CustomMaterial from an OcclusionMaterial, which already seems to contain information about the camera feed, but so far I've had no luck with it. Thanks for the support!
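One lower-overhead route worth trying (a sketch, untested against this exact use case): RealityKit 2 on iOS 15 added TextureResource.DrawableQueue, which lets you write into the texture's backing MTLTexture each frame instead of regenerating a TextureResource from a CGImage. Note that frame.capturedImage is a biplanar YCbCr buffer, so a Metal pass is still needed to convert it to RGB; that encoder is left as a placeholder below.

```swift
import ARKit
import Metal
import RealityKit

// Sketch (iOS 15+): stream the camera feed into a CustomMaterial texture
// via TextureResource.DrawableQueue instead of rebuilding a TextureResource
// from a CGImage every frame.
func attachCameraFeedQueue(to textureResource: TextureResource,
                           width: Int,
                           height: Int) throws -> TextureResource.DrawableQueue {
    let descriptor = TextureResource.DrawableQueue.Descriptor(
        pixelFormat: .bgra8Unorm,
        width: width,
        height: height,
        usage: [.shaderRead, .shaderWrite],
        mipmapsMode: .none)
    let queue = try TextureResource.DrawableQueue(descriptor)
    // Drawables presented on this queue now back `textureResource`, which
    // can be assigned to customMaterial.custom.texture once, up front.
    textureResource.replace(withDrawables: queue)
    return queue
}

// Called per frame, e.g. from an ARSessionDelegate callback.
func update(queue: TextureResource.DrawableQueue,
            frame: ARFrame,
            commandQueue: MTLCommandQueue) {
    guard let drawable = try? queue.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }
    // frame.capturedImage is biplanar YCbCr: encode a YCbCr -> RGB
    // conversion pass targeting drawable.texture here (placeholder).
    commandBuffer.commit()
    drawable.present()
}
```

Whether this beats VTCreateCGImageFromCVPixelBuffer depends on that conversion pass, but the plumbing at least avoids the per-frame CGImage and TextureResource.generate cost.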
Post marked as solved
1 Reply
542 Views
Hi, I have a Mac Pro and am looking to buy a Sapphire AMD RX 580 8GB GPU (since my PowerColor R9 280X 3GB is just shy of the minimum 4GB requirement...). I'm wondering: what if I bought two RX 580s? Would Object Capture take advantage of a dual-GPU setup, and if so, would it increase performance? PS. Just to clarify, I don't want and am not talking about a Dual-Link / CrossFire setup (since that practice is kinda "dead"...). I'm just wondering whether Object Capture would recognise "aaah, there are two identical GPUs in the system, let's use both..."
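Object Capture's GPU-selection behavior isn't documented, so whether it would split work across two RX 580s is speculation. What you can verify is which devices Metal exposes and whether each one clears the published minimum; a quick sketch:

```swift
import Metal

// List every GPU Metal can see and whether it clears the Object Capture
// minimums (non-low-power, barycentric coords, >= 4 GB working set).
for device in MTLCopyAllDevices() {
    let meetsMinimum = !device.isLowPower
        && device.areBarycentricCoordsSupported
        && device.recommendedMaxWorkingSetSize >= UInt64(4e9)
    print(device.name,
          "| working set:", device.recommendedMaxWorkingSetSize / (1 << 30), "GB",
          "| meets minimum:", meetsMinimum)
}
```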
Posted by danalien.
Post marked as solved
3 Replies
841 Views
Hi, I'm getting this error code "-21" ... anyone know what it means?

    cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")

    2021-07-17 14:01:29.621817+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Using configuration: Configuration(isObjectMaskingEnabled: true, sampleOrdering: RealityFoundation.PhotogrammetrySession.Configuration.SampleOrdering.unordered, featureSensitivity: RealityFoundation.PhotogrammetrySession.Configuration.FeatureSensitivity.normal)
    2021-07-17 14:01:29.669711+0200 HelloPhotogrammetry[2578:40148] Metal API Validation Enabled
    2021-07-17 14:01:29.709715+0200 HelloPhotogrammetry[2578:40148] [HelloPhotogrammetry] Error creating session: cantCreateSession("Native session create failed: CPGReturn(rawValue: -21)")
    Program ended with exit code: 1
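The -21 value isn't documented anywhere public, but cantCreateSession is typically what surfaces when the machine fails the hardware gate (no qualifying GPU, or missing ray-tracing support). A minimal sketch that reproduces just the creation step, with a hypothetical input folder, so the full thrown error can be inspected in isolation:

```swift
import Foundation
import RealityKit

// Hypothetical image folder, for illustration only.
let input = URL(fileURLWithPath: "/tmp/Photos/", isDirectory: true)

var configuration = PhotogrammetrySession.Configuration()
configuration.isObjectMaskingEnabled = true
configuration.sampleOrdering = .unordered
configuration.featureSensitivity = .normal

do {
    let session = try PhotogrammetrySession(input: input,
                                            configuration: configuration)
    print("Session created:", session)
} catch {
    // On unsupported GPUs this is where cantCreateSession(...) lands;
    // printing the full error sometimes says more than the raw -21 code.
    print("Failed to create session:", error)
}
```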
Posted by danalien.