3D Graphics


Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

32 Posts
Post not yet marked as solved
0 Replies
47 Views
I am writing an iOS plug-in to integrate MetalFX spatial upscaling into a Unity URP project.

C# code in Unity:

namespace UnityEngine.Rendering.Universal
{
    ///
    /// Renders the post-processing effect stack.
    ///
    internal class PostProcessPass : ScriptableRenderPass
    {
        RenderTexture _dstRT = null;

        [DllImport("__Internal")]
        private static extern void MetalFX_SpatialScaling(IntPtr srcTexture, IntPtr dstTexture, IntPtr outTexture);
    }
}

void RenderFinalPass(CommandBuffer cmd, ref RenderingData renderingData)
{
    // ......
    case ImageUpscalingFilter.MetalFX:
    {
        var upscaleRtDesc = tempRtDesc;
        upscaleRtDesc.width = cameraData.pixelWidth;
        upscaleRtDesc.height = cameraData.pixelHeight;
        RenderingUtils.ReAllocateIfNeeded(ref m_UpscaledTarget, upscaleRtDesc, FilterMode.Point, TextureWrapMode.Clamp, name: "_UpscaledTexture");
        var metalfxInputSize = new Vector2(cameraData.cameraTargetDescriptor.width, cameraData.cameraTargetDescriptor.height);
        if (_dstRT == null)
        {
            _dstRT = new RenderTexture(upscaleRtDesc.width, upscaleRtDesc.height, 0, RenderTextureFormat.ARGB32);
            _dstRT.Create();
        }

        // call native plugin
        cmd.SetRenderTarget(m_UpscaledTarget, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        MetalFX_SpatialScaling(sourceTex.rt.GetNativeTexturePtr(), m_UpscaledTarget.rt.GetNativeTexturePtr(), _dstRT.GetNativeTexturePtr());
        Graphics.CopyTexture(_dstRT, m_UpscaledTarget.rt);
        sourceTex = m_UpscaledTarget;
        PostProcessUtils.SetSourceSize(cmd, upscaleRtDesc);
        break;
    }
    // .....
}

Objective-C code in the iOS plug-in. Header file:

#import <Foundation/Foundation.h>
#import <MetalFX/MTLFXSpatialScaler.h>

@protocol MTLTexture;
@protocol MTLDevice;

API_AVAILABLE(ios(16.0))
@interface MetalFXDelegate : NSObject
{
    int mode;
    id _device;
    id _commandQueue;
    id _outTexture;
    id _mfxSpatialScaler;
    id _mfxSpatialEncoder;
};

- (void)SpatialScaling: (MTLTextureRef) srcTexture
            dstTexture: (MTLTextureRef) dstTexture
            outTexture: (MTLTextureRef) outTexture;

- (void)saveTexturePNG: (MTLTextureRef) texture
                   url: (CFURLRef) url;

@end

Implementation (.m) file:

#import "MetalFXOC.h"

@implementation MetalFXDelegate

- (id)init {
    self = [super init];
    return self;
}

static MetalFXDelegate* delegateObject = nil;

- (void)SpatialScaling: (MTLTextureRef) srcTexture
            dstTexture: (MTLTextureRef) dstTexture
            outTexture: (MTLTextureRef) outTexture {
    int width  = (int)srcTexture.width;
    int height = (int)srcTexture.height;
    int dstWidth  = (int)dstTexture.width;
    int dstHeight = (int)dstTexture.height;

    if (_mfxSpatialScaler == nil) {
        MTLFXSpatialScalerDescriptor* desc;
        desc = [[MTLFXSpatialScalerDescriptor alloc] init];
        desc.inputWidth  = width;
        desc.inputHeight = height;
        desc.outputWidth  = dstWidth;   /// _screenWidth
        desc.outputHeight = dstHeight;  /// _screenHeight
        desc.colorTextureFormat  = srcTexture.pixelFormat;
        desc.outputTextureFormat = dstTexture.pixelFormat;
        if (@available(iOS 16.0, *)) {
            desc.colorProcessingMode = MTLFXSpatialScalerColorProcessingModePerceptual;
        } else {
            // Fallback on earlier versions
        }

        _device = MTLCreateSystemDefaultDevice();
        _mfxSpatialScaler = [desc newSpatialScalerWithDevice:_device];
        if (_mfxSpatialScaler == nil) {
            return;
        }
        _commandQueue = [_device newCommandQueue];

        MTLTextureDescriptor *texdesc = [[MTLTextureDescriptor alloc] init];
        texdesc.width  = (int)dstTexture.width;
        texdesc.height = (int)dstTexture.height;
        texdesc.storageMode = MTLStorageModePrivate;
        texdesc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
        texdesc.pixelFormat = dstTexture.pixelFormat;
        _outTexture = [_device newTextureWithDescriptor:texdesc];
    }

    id upscaleCommandBuffer = [_commandQueue commandBuffer];
    upscaleCommandBuffer.label = @"Upscale Command Buffer";
    _mfxSpatialScaler.colorTexture  = srcTexture;
    _mfxSpatialScaler.outputTexture = _outTexture;
    [_mfxSpatialScaler encodeToCommandBuffer:upscaleCommandBuffer];
    // outTexture = _outTexture;

    id textureCommandBuffer = [_commandQueue commandBuffer];
    id _mfxSpatialEncoder = [textureCommandBuffer blitCommandEncoder];
    [_mfxSpatialEncoder copyFromTexture:_outTexture toTexture:outTexture];
    [_mfxSpatialEncoder endEncoding];

    [upscaleCommandBuffer commit];
}

@end

extern "C" {
    void MetalFX_SpatialScaling(void* srcTexturePtr, void* dstTexturePtr, void* outTexturePtr) {
        if (delegateObject == nil) {
            if (@available(iOS 16.0, *)) {
                delegateObject = [[MetalFXDelegate alloc] init];
            } else {
                // Fallback on earlier versions
            }
        }
        if (srcTexturePtr == nil || dstTexturePtr == nil || outTexturePtr == nil) {
            return;
        }
        id<MTLTexture> srcTexture = (__bridge id<MTLTexture>)(void *)srcTexturePtr;
        id<MTLTexture> dstTexture = (__bridge id<MTLTexture>)(void *)dstTexturePtr;
        id<MTLTexture> outTexture = (__bridge id<MTLTexture>)(void *)outTexturePtr;
        if (@available(iOS 16.0, *)) {
            [delegateObject SpatialScaling: srcTexture dstTexture: dstTexture outTexture: outTexture];
        } else {
            // Fallback on earlier versions
        }
        return;
    }
}

With this C# and Objective-C code, the image that appears on screen is black. If I save the MTLTexture to a PNG inside the iOS plug-in, the PNG is fine (not black), so I think the write back to Unity through outTexture is failing.
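One detail that may be worth checking in the code above (an observation, not a confirmed fix): the blit that copies _outTexture into the Unity-side outTexture is encoded into textureCommandBuffer, but only upscaleCommandBuffer is ever committed, so the copy may never execute, and nothing synchronizes with Unity before it samples the texture. A minimal sketch that encodes both passes into a single command buffer might look like this; waitUntilCompleted is used only to make the ordering obvious while debugging, since it stalls the CPU.

```objc
// Sketch only: encode the MetalFX upscale and the blit copy into ONE command
// buffer, then commit it so the copy into Unity's texture actually runs.
id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
commandBuffer.label = @"Upscale + Copy";

_mfxSpatialScaler.colorTexture  = srcTexture;
_mfxSpatialScaler.outputTexture = _outTexture;
[_mfxSpatialScaler encodeToCommandBuffer:commandBuffer];

// Copy the upscaled result into the texture Unity will read.
id<MTLBlitCommandEncoder> blit = [commandBuffer blitCommandEncoder];
[blit copyFromTexture:_outTexture toTexture:outTexture];
[blit endEncoding];

[commandBuffer commit];
// Simplest possible synchronization while debugging; a completion handler or
// proper fencing against Unity's rendering would be preferable in production.
[commandBuffer waitUntilCompleted];
```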
Posted by Hsuehnj. Last updated.
Post not yet marked as solved
3 Replies
444 Views
After the iOS 17 update, objects rendered in SceneKit that have both a normal map and morph targets do not render correctly. The shading and lighting appear dark and without reflections. Using a normal map without morph targets, or having morph targets on an object without a normal map, works fine. However, the combination of the two breaks the rendering. Using diffuse, normal map, and a morpher: (screenshot). Diffuse and normal, no morpher: (screenshot).
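For anyone trying to reproduce or isolate this, a minimal SceneKit setup combining the two features might look like the sketch below; the asset names and geometry are placeholders of mine, not from the original post.

```swift
import SceneKit
import UIKit

// Minimal sketch: one node whose material has a normal map and whose geometry
// also has a morpher attached. Asset names and geometry are placeholders.
let node = SCNNode(geometry: SCNSphere(radius: 1.0))

let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "diffuse.png")   // placeholder asset
material.normal.contents  = UIImage(named: "normal.png")    // placeholder asset
node.geometry?.firstMaterial = material

// A morpher with a single target; with weight 0 this should look identical
// to the same node without a morpher.
let morpher = SCNMorpher()
morpher.targets = [SCNSphere(radius: 1.2)]   // placeholder morph target
morpher.setWeight(0.0, forTargetAt: 0)
node.morpher = morpher
```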
Posted by Ginada. Last updated.
Post not yet marked as solved
3 Replies
186 Views
Hello everyone! I have a small concern about one little thing when it comes to programming in Metal. There are some models that I wish to use, along with animations and skins on them; the file format is glTF. glTF is used in a number of engines and tools such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
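As background on how model files usually reach Metal (a sketch of mine, not an answer from the thread): Metal itself consumes buffers and textures rather than model files, so formats are typically loaded through Model I/O or a third-party importer. Model I/O can be asked at runtime whether it understands a given extension, which is the open question here for glTF:

```swift
import ModelIO

// Ask Model I/O which of these file extensions it can import on this OS version.
for ext in ["obj", "usdz", "gltf"] {
    print(ext, MDLAsset.canImportFileExtension(ext))
}
```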
Posted. Last updated.
Post not yet marked as solved
0 Replies
91 Views
Hi, I am trying to use metal-cpp, but I get compile errors.

"ISO C++ requires the name after '::' to be found in the same scope as the name before '::'", in metal-cpp/Foundation/NSSharedPtr.hpp(162):

template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}

"Use of old-style cast", in metal-cpp/Foundation/NSObject.hpp(149):

template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}

The Xcode project was generated using CMake:

target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE
    "-Wgnu-anonymous-struct"
    "-Wold-style-cast"
    "-Wdtor-name"
    "-Wpedantic"
    "-Wno-gnu"
)

Do I need to set some additional CMake flags for the C++ compiler?
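One possible direction (an assumption on my part, not something confirmed in this post): the diagnostics are raised inside metal-cpp's own headers because -Wold-style-cast, -Wdtor-name, and -Wpedantic are enabled project-wide. Marking the metal-cpp include directory as a SYSTEM include asks Clang to suppress warnings that originate in those headers, for example:

```cmake
# Sketch: the metal-cpp path below is a placeholder; adjust it to your layout.
# Headers pulled in as SYSTEM includes are exempt from the warning flags above.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/third_party/metal-cpp
)
```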
Posted. Last updated.
Post not yet marked as solved
0 Replies
202 Views
I am developing a web application that leverages WebGL to display 3D content. The app would benefit from tracking headset movement when viewing the 2D page as a Window while wearing Vision Pro. This would ultimately give me a way to convey the idea of the Window acting as a portal into a virtual environment, as the rendered perspective of the 3D environment would match that of the user wearing the headset. This is a generic request/goal, as it would be applicable to any browser and any 6DoF device, but I am interested in knowing whether it is currently possible with Vision Pro (and the Simulator) and its version of Safari for "spatial computing". I can track the head movement while in a WebXR "immersive" session, but I would like to be able to track it without going into VR mode. Is this possible? If so, how, and using which tools?
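For reference, the head tracking that is available today inside an immersive WebXR session looks roughly like the generic sketch below (not Vision Pro specific); the open question is whether an equivalent viewer pose can be obtained while staying on the 2D page.

```javascript
// Generic WebXR sketch: the viewer pose is only exposed inside an XR session,
// and requestSession must be called from a user gesture.
async function startHeadTracking() {
  const session = await navigator.xr.requestSession('immersive-vr');
  const refSpace = await session.requestReferenceSpace('local');

  session.requestAnimationFrame(function onFrame(time, frame) {
    const pose = frame.getViewerPose(refSpace);
    if (pose) {
      // Headset position and orientation relative to the reference space.
      console.log(pose.transform.position, pose.transform.orientation);
    }
    frame.session.requestAnimationFrame(onFrame);
  });
}
```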
Posted. Last updated.
Post not yet marked as solved
4 Replies
2.1k Views
Hi, I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message: "A GPU with supportsRaytracing is required". I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
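Not an answer to the configuration question, but a quick way to see what the error is reacting to: Metal reports the capability directly on each device, so it can be printed for every GPU in the machine.

```swift
import Metal

// List every Metal device in this Mac and whether it reports ray tracing
// support, which is the capability named in the error message.
for device in MTLCopyAllDevices() {
    print(device.name, "supportsRaytracing:", device.supportsRaytracing)
}
```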
Posted. Last updated.
Post not yet marked as solved
1 Reply
566 Views
Pre-planning a project to use multiple 360 cameras set up in a grid to generate an immersive experience, hoping to use photogrammetry to generate 3D images of objects inside the grid. beeconcern.ca wants to expand their bee gardens, and theconcern.ca wants to use it to make a live immersive apiary experience. Still working out the best method for compiling, editing, and rendering; I have been leaning towards UE5 but am still seeking advice.
Posted by g-arth. Last updated.
Post marked as solved
3 Replies
1.6k Views
So I'm trying to make a simple scene with some geometry of sorts and a movable camera. So far I've been able to render basic geometry in 2D, alongside transforming said geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample ... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memory of OpenGL, although when I compared my view and projection matrices with the ones from the OpenGL glm::lookAt() and glm::perspective() functions, there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal Sample Code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.
Problem description:
When positioning the camera right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results. Camera Position: (0, 0, 1); Camera Direction: (0, 0, -1) (screenshot).
When moving further away, though, parts of the scene are wrongfully culled, notably the ones farther from the camera. Camera Position: (0, 0, 2); Camera Direction: (0, 0, -1) (screenshot).
Rotating the camera also produces confusing results. Camera Position: (0, 0, 1); Camera Direction: (cos(250°), 0, sin(250°)), yes, I converted to radians (screenshot).
My suspicions:
The projection isn't converting the vertices from view space to normalised device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which also doesn't appear right. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code samples. MainShader.metal:

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;
    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3) { -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4 { x.x, y.x, z.x, 0 },
                              float4 { x.y, y.y, z.y, 0 },
                              float4 { x.z, y.z, z.z, 0 },
                              float4 { t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0 },
                                    (float4){  0, ys,  0, 0 },
                                    (float4){  0,  0, zs, 1 },
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]

// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialisation code for the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = try! _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
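One detail that might be worth examining in the shader above (an observation offered as a possibility, not a confirmed diagnosis): projected.xy is divided by the viewport and output.position.w is left at 1, so the rasterizer's perspective divide by w never happens, which would also affect depth and culling as the camera moves back. The conventional ending of the vertex function simply outputs the clip-space position and lets the hardware do the divide, roughly like this sketch (the mapping of the pixel-space vertex positions into world units is left out):

```metal
// Sketch: output the clip-space position unchanged; the rasterizer divides by w
// and maps the result to viewport coordinates on its own.
float4 world = float4(vertices[vertex_id].position, 1.0);
output.position = projectionm * viewm * world;   // keep x, y, z AND w
output.color = vertices[vertex_id].color;
return output;
```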
Posted by _tally. Last updated.
Post marked as solved
3 Replies
847 Views
I'm looking to get a GPU to use for Object Capture. The requirements are an AMD GPU with 4 GB of VRAM and ray tracing support. The RX 580 seems to be able to do ray tracing from what I've found online, but it looks like someone had an issue with a 580X here: https://developer.apple.com/forums/thread/689891
Posted. Last updated.
Post not yet marked as solved
0 Replies
516 Views
Hello. With Unity you can import an animation and apply it to any character. For example, I can import a walking animation and apply it to all my characters. Is there an equivalent with SceneKit? I would like to apply animations programmatically, without having to import one for each character specifically. Thanks.
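For what it's worth, a rough sketch of the kind of reuse SceneKit supports: load the animation from its own file, pull the SCNAnimationPlayer off the node that carries it, and attach it to another character's node. The file and node names below are placeholders of mine, and the two skeletons need compatible bone names for the result to look right.

```swift
import SceneKit

// Load a scene that only carries the animation (placeholder file name).
let animationScene = SCNScene(named: "walk.scn")!
var walkPlayer: SCNAnimationPlayer?
animationScene.rootNode.enumerateChildNodes { node, _ in
    if let key = node.animationKeys.first {
        walkPlayer = node.animationPlayer(forKey: key)
    }
}

// Attach the animation player to a different character's root node.
let characterNode = SCNNode()   // stand-in for another loaded character
if let player = walkPlayer {
    characterNode.addAnimationPlayer(player, forKey: "walk")
    player.play()
}
```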
Posted. Last updated.
Post not yet marked as solved
0 Replies
304 Views
When trying to import Spatial in a project targeting macOS, I see the following error message: Cannot load underlying module for 'Spatial'. However, when I do the exact same import in an iOS project, it works fine. In the macOS project, Xcode offers to autocomplete the word 'Spatial', so it knows about it, but I can't use it. I've also tried adding the Spatial framework to the macOS project (note I never added anything to the iOS project where this works). When adding, there is no Spatial.framework listed. There is libswiftSpatial.tbd, but there is no corresponding libswiftSpatial.dylib in /usr/lib/swift, where the .tbd file says there should be one. I'm kind of a n00b at this, and I don't understand what incredibly obvious thing I'm missing. The docs say Spatial is OK for macOS 13.0+, and I have 13.4. I'm using Xcode 14.3.1. The doc page refers to Spatial as both a framework and a module, and gives no help on incorporating it into an Xcode project. Any help is appreciated, thanks. -C
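For context, what is being attempted is just the plain import; a minimal smoke test (assuming the target's macOS deployment target is set to 13.0 or later) would be something like:

```swift
import Spatial

// Build a couple of basic Spatial values to confirm the module resolves.
let point = Point3D(x: 1.0, y: 2.0, z: 3.0)
let offset = Vector3D(x: 0.0, y: 1.0, z: 0.0)
print(point, offset)
```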
Posted by cjb9000. Last updated.
Post not yet marked as solved
0 Replies
281 Views
Hello. I am trying to understand the movement of a character with physics. To do this, I imported Max from the fox2 file provided by Apple. I apply a .static physics body to him, and I have a floor with static physics plus static blocks to test collisions. Everything works very well, except that Max sits above the ground; he doesn't touch my floor. I couldn't understand why until I displayed the physics shapes via the debug options. With that I can see that if Max does not touch the ground, it is because the automatically generated shape is below Max, and that shape touches the ground. So I would like to know why the shape is shifted downwards and how to correct this. I did some tests, and the problem seems to come from physicsBody?.mass: if I remove the mass, the shape is correct, but when I move my character he passes through the walls, whereas with the mass set he is properly stopped by the static boxes. Does someone have an idea of how to correct this problem? This is my simplified code:

import SceneKit
import PlaygroundSupport

// create a scene view with an empty scene
var sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 300, height: 300))
var scene = SCNScene()
sceneView.scene = scene

// start a live preview of that view
PlaygroundPage.current.liveView = sceneView

// default lighting
sceneView.autoenablesDefaultLighting = true
sceneView.allowsCameraControl = true
sceneView.debugOptions = [.showPhysicsShapes]

// a camera
var cameraNode = SCNNode()
cameraNode.camera = SCNCamera()
cameraNode.position = SCNVector3(x: 0, y: 0, z: 3)
scene.rootNode.addChildNode(cameraNode)

// Make floor node
let floorNode = SCNNode()
let floor = SCNFloor()
floor.reflectivity = 0.25
floorNode.geometry = floor
floorNode.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
scene.rootNode.addChildNode(floorNode)

// add character
guard let fichier = SCNScene(named: "max.scn") else { fatalError("failed to load Max.scn") }
guard let character = fichier.rootNode.childNode(withName: "Max_rootNode", recursively: true) else { fatalError("Failed to find Max_rootNode") }
scene.rootNode.addChildNode(character)
character.position = SCNVector3(0, 0, 0)
character.physicsBody = SCNPhysicsBody(type: .static, shape: nil)
character.physicsBody?.mass = 5

Thank you!
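One avenue that might be worth trying (my suggestion, not something from the thread): build the physics shape explicitly from the character node, for example from its bounding box, instead of relying on the automatically generated shape, so the collision volume is forced to line up with the visible mesh.

```swift
// Sketch: derive an explicit physics shape from the character node's bounding
// box instead of letting SceneKit infer one from the geometry.
let shape = SCNPhysicsShape(
    node: character,
    options: [
        .type: SCNPhysicsShape.ShapeType.boundingBox,
        .keepAsCompound: true
    ]
)
character.physicsBody = SCNPhysicsBody(type: .static, shape: shape)
character.physicsBody?.mass = 5
```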
Posted. Last updated.
Post not yet marked as solved
0 Replies
417 Views
I'm using a mid-2014 MacBook Pro with Intel Iris graphics. There seems to be a problem when running Metal shader programs that use global constant arrays. I've reproduced the problem by making a small modification to the Learn-Metal-CPP tutorial: I modified the MSL shader program in "01-primitive.cpp" so that each triangle vertex's position and color come from a global array defined in the shader itself. The shader's constant array values are identical to the values being passed in as vertex arrays in the original tutorial. I'd expect the resulting image to look like the original tutorial's output (first screenshot), but my version of the program, which uses the shader global arrays, produces a different, incorrect result (second screenshot). Here is my shader source that produced the wrong second result. You can replace the shader in Learn-Metal-CPP's 01-primitive.cpp with my shader to reproduce my result (on my hardware at least):

#include <metal_stdlib>
using namespace metal;

constant float2 myGlobalPositions[3] = {
    float2(-0.8, 0.8),
    float2(0.0, -0.8),
    float2(0.8, 0.8)
};
constant float3 myGlobalColors[3] = {
    float3(1.0, 0.3, 0.2),
    float3(0.8, 1.0, 0.0),
    float3(0.8, 0.0, 1.0)
};

struct v2f {
    float4 position [[position]];
    float3 color;
};

v2f vertex vertexMain( uint vertexId [[vertex_id]],
                       device const float3* positions [[buffer(0)]],
                       device const float3* colors [[buffer(1)]] )
{
    v2f o;

    // This uses neither of the global const arrays. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = colors[ vertexId ];

    // This does not use myGlobalPositions. It produces the correct result.
    // o.position = float4( positions[ vertexId ], 1.0 );
    // o.color = myGlobalColors[vertexId];

    // This uses myGlobalPositions and myGlobalColors. IT PRODUCES THE WRONG RESULT.
    o.position = float4( myGlobalPositions[vertexId], 0.0, 1.0 );
    o.color = myGlobalColors[vertexId];

    return o;
}

float4 fragment fragmentMain( v2f in [[stage_in]] )
{
    return float4( in.color, 1.0 );
}

I believe the issue has something to do with the alignment of the shader global array data. If I mess around with the sizes of the global arrays, I can sometimes make it produce the correct result. For example, making myGlobalColors start at a 32-byte-aligned boundary seems to produce the correct results. I've also attached my full source for 01-primitive.cpp in case that helps. Has anyone run into this issue? What would it take to get a fix for it? 01-primitive.cpp
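If a workaround is needed while this is investigated, one option (my suggestion, and it assumes the driver is mishandling program-scope constant arrays) is to pass the same values in from the host with setVertexBytes(_:length:index:) and read them as ordinary buffer arguments, reusing the v2f struct from the shader above:

```metal
// Workaround sketch: take the fixed positions/colors as buffer arguments
// instead of program-scope constant arrays. The host binds the same
// float2[3] and float3[3] data with setVertexBytes(_:length:index:).
v2f vertex vertexMainWorkaround( uint vertexId [[vertex_id]],
                                 constant float2* fixedPositions [[buffer(2)]],
                                 constant float3* fixedColors    [[buffer(3)]] )
{
    v2f o;
    o.position = float4( fixedPositions[vertexId], 0.0, 1.0 );
    o.color = fixedColors[vertexId];
    return o;
}
```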
Posted by giogadi. Last updated.
Post not yet marked as solved
4 Replies
1.8k Views
Apps like https://www.clicktorelease.com/code/codevember-2017/shredder-redux/ and https://lab.cheron.works/webgl-gpgpu-particles/ seem to have stopped working with the latest iOS update (in both Safari and Chrome). Those applications are particle simulations that read from a texture (high-precision textures, which may also be a lead) in the vertex shader. I have my own similar application, which is also broken, and there are no error messages in the console. Is this a known issue?
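A diagnostic sketch (my own suggestion, not something from this report) for checking whether the capabilities that texture-based GPGPU particle systems rely on are still reported after the update:

```javascript
// Report the WebGL capabilities such simulations typically depend on:
// float textures, rendering to float targets, and vertex texture fetch.
const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl2') || canvas.getContext('webgl');
if (!gl) throw new Error('WebGL context could not be created');

console.log('vertex texture units:',
            gl.getParameter(gl.MAX_VERTEX_TEXTURE_IMAGE_UNITS));
console.log('EXT_color_buffer_float:', !!gl.getExtension('EXT_color_buffer_float'));
console.log('OES_texture_float:', !!gl.getExtension('OES_texture_float'));
console.log('OES_texture_float_linear:', !!gl.getExtension('OES_texture_float_linear'));
```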
Posted by evteevil6. Last updated.
Post not yet marked as solved
0 Replies
536 Views
After updating the iOS version on iPhone devices, the video-conferencing web application room3d.com became unusable: after trying to enter a meeting, a 'Graphics Driver stopped working, you must to reload the website' error message is displayed. The issue reproduces in both Safari and Chrome, and it started reproducing on all environments after the iOS update. If a user is lucky enough to enter the meeting, all assets are shifted and glitching, and there is no audio or video stream inside the meeting. The issue also reproduces on the latest iPadOS. Here is an example meeting link: https://room3d.com/ir/5c3f472f-76ee-42ca-b763-7b599350296d Here is an engine server with the reproduced issue: http://honya.myftp.org:88/wrooms/index.html?native_debug=true
Posted. Last updated.
Post not yet marked as solved
0 Replies
467 Views
This question relates to how Apple tech pairs with the phone app Polycam AI. When using the Polycam app in photo mode (iPhone 12 Pro), the scale seems very good. How does this work if only photos (i.e. photo mode, no LiDAR mode) are being stitched together?
• Do photo-mode images captured on the iPhone 12 Pro and later use Apple's stereo depth data, or metadata attached to each image, to more accurately scale the Polycam-generated 3D models?
• Does the Polycam server (and other 3D scanning apps on iPhone) utilise Apple's Object Capture 3D reconstruction software to create the 3D mesh files and point clouds?
Posted. Last updated.
Post not yet marked as solved
11 Replies
7.1k Views
I love how the LiDAR scan generates a beautifully colored mesh. Is it possible to retain that coloring when exporting (such as to an .OBJ file)? The examples I've seen so far convert the LiDAR scan and create the .OBJ file, but none of those files include any of the coloring from the original scan. Is this even feasible?
Posted. Last updated.
Post not yet marked as solved
1 Reply
1.3k Views
I am currently trying to use Tier 2 argument buffers with an array of buffers to access an indirect buffer, but am running into some issues. I am trying to get a basic example up and running but am having trouble getting the shader to read the value in the buffer. Accessing the buffer directly, without an argument buffer, works fine and shows the expected value (12345). The argument buffer shows the buffer as well (it has the same CPU address in the debugger), but it seems to have a different device address than the direct one, and it also returns 0xDEADBEEF instead of the correct value, which I assume is out-of-bounds memory or similar. The Metal debugger, however, correctly links the buffers together, so I can inspect the buffer in the debugger through the argument buffer, and it contains the correct value. I have the following (Rust) code:

// Setup
let argument_desc = mtl::ArgumentDescriptor::new();
argument_desc.set_data_type(mtl::MTLDataType::Pointer);
argument_desc.set_index(0);
argument_desc.set_array_length(1024);

let encoder = device.new_argument_encoder(mtl::Array::from_slice(&[argument_desc]));
let argument_buffer = device.new_buffer(encoder.encoded_length(), mtl::MTLResourceOptions::empty());
encoder.set_argument_buffer(&argument_buffer, 0);

let buffer = self.device.new_buffer_with_data(
    [12345u32].as_ptr() as _,
    mem::size_of::<u32>() as _,
    mtl::MTLResourceOptions::StorageModeShared | mtl::MTLResourceOptions::CPUCacheModeDefaultCache
);
encoder.set_buffer(0, &buffer, 0);

// Command Encoding
let encoder = command_buffer.new_compute_command_encoder();
// ...set pipeline state
encoder.set_buffer(0, Some(&bufferArray), 0);
encoder.use_resource(&buffer, mtl::MTLResourceUsage::Read);
encoder.set_bytes(
    1,
    mem::size_of::<u32>() as _,
    &0 as *const _ as _,
);
encoder.set_buffer(
    2,
    Some(&buffer),
    0,
);
encoder.dispatch_thread_groups(
    mtl::MTLSize { width: 1, height: 1, depth: 1 },
    mtl::MTLSize { width: 1, height: 1, depth: 1 },
);

This is the compute kernel (with Xcode debug annotations):

#include <metal_stdlib>
#include <simd/simd.h>
using namespace metal;

struct Argument {
    constant uint32_t *ptr;
};

kernel void main0(
    constant Argument *bufferArray [[buffer(0)]],   // bufferArray = 0x400400000
    constant uint32_t& buffer_index [[buffer(1)]],  // buffer_index = 0
    constant uint32_t *buffer [[buffer(2)]]         // buffer = 0x400024000
) {
    uint32_t x = *buffer;                                    // x = 12345
    constant uint32_t *ptr = bufferArray[buffer_index].ptr;  // ptr = 0x40002000
    uint32_t y = *ptr;                                       // y = 0xDEADBEEF
}

If anyone has any ideas as to why the buffer access seems to be invalid, I'd greatly appreciate it.
Posted. Last updated.
Post not yet marked as solved
0 Replies
607 Views
I downloaded and built this app for capturing images that can be used to generate a USDZ 3D model file, but it doesn't work for me. I can take the photos and see them in the app, but they are not saved. Here are some of the log messages produced by the app; note the last line.

Creating capture path: "file:///var/mobile/Containers/Data/Application/40A7C69A-DE34-43B8-A1DF-D3BB87D5C23E/Documents/Captures/Jan%2029,%202023%20at%201:33:54%20PM/"
Got back dual camera!
didSet setupResult=success
Starting session...
Capture photo called...
Found available previewPhotoFormat: Optional(["PixelFormatType": 875704422, "Width": 512, "Height": 512])
inProgressCaptures=1
Captured gravity vector: Optional(__C.CMAcceleration(x: 0.033207084983587265, y: -0.7901527881622314, z: -0.6120097041130066))
DidFinishProcessingPhoto: photo=<AVCapturePhoto: 0x280e04290 pts:388734.243725 1/1 settings:uid:4 photo:{4032x3024 SIS:ON} prev:{512x384} thumb:{512x384} time:0.741-0.797>
[CIImage initWithCVImageBuffer:options:] failed because the buffer is nil.
Posted by mvolkmann. Last updated.