3D Graphics

Discuss integrating three-dimensional graphics into your app.

Posts under 3D Graphics tag

29 Posts

How to write to a Unity RenderTexture from an iOS plug-in?
I wrote an iOS plug-in to integrate MetalFX spatial upscaling into a Unity URP project.

C# code in Unity:

namespace UnityEngine.Rendering.Universal
{
    /// Renders the post-processing effect stack.
    internal class PostProcessPass : ScriptableRenderPass
    {
        RenderTexture _dstRT = null;

        [DllImport("__Internal")]
        private static extern void MetalFX_SpatialScaling(IntPtr srcTexture, IntPtr dstTexture, IntPtr outTexture);
    }
}

void RenderFinalPass(CommandBuffer cmd, ref RenderingData renderingData)
{
    // ......
    case ImageUpscalingFilter.MetalFX:
    {
        var upscaleRtDesc = tempRtDesc;
        upscaleRtDesc.width = cameraData.pixelWidth;
        upscaleRtDesc.height = cameraData.pixelHeight;
        RenderingUtils.ReAllocateIfNeeded(ref m_UpscaledTarget, upscaleRtDesc, FilterMode.Point,
                                          TextureWrapMode.Clamp, name: "_UpscaledTexture");
        var metalfxInputSize = new Vector2(cameraData.cameraTargetDescriptor.width,
                                           cameraData.cameraTargetDescriptor.height);
        if (_dstRT == null)
        {
            _dstRT = new RenderTexture(upscaleRtDesc.width, upscaleRtDesc.height, 0, RenderTextureFormat.ARGB32);
            _dstRT.Create();
        }
        // Call the native plug-in.
        cmd.SetRenderTarget(m_UpscaledTarget, RenderBufferLoadAction.DontCare, RenderBufferStoreAction.Store,
                            RenderBufferLoadAction.DontCare, RenderBufferStoreAction.DontCare);
        MetalFX_SpatialScaling(sourceTex.rt.GetNativeTexturePtr(),
                               m_UpscaledTarget.rt.GetNativeTexturePtr(),
                               _dstRT.GetNativeTexturePtr());
        Graphics.CopyTexture(_dstRT, m_UpscaledTarget.rt);
        sourceTex = m_UpscaledTarget;
        PostProcessUtils.SetSourceSize(cmd, upscaleRtDesc);
        break;
    }
    // .....
}

Objective-C code in the iOS plug-in.

Header file:

#import <Foundation/Foundation.h>
#import <MetalFX/MTLFXSpatialScaler.h>

@protocol MTLTexture;
@protocol MTLDevice;

API_AVAILABLE(ios(16.0))
@interface MetalFXDelegate : NSObject
{
    int mode;
    id _device;
    id _commandQueue;
    id _outTexture;
    id _mfxSpatialScaler;
    id _mfxSpatialEncoder;
};

- (void)SpatialScaling:(MTLTextureRef)srcTexture dstTexture:(MTLTextureRef)dstTexture outTexture:(MTLTextureRef)outTexture;
- (void)saveTexturePNG:(MTLTextureRef)texture url:(CFURLRef)url;
@end

.m file:

#import "MetalFXOC.h"

@implementation MetalFXDelegate

- (id)init {
    self = [super init];
    return self;
}

static MetalFXDelegate* delegateObject = nil;

- (void)SpatialScaling:(MTLTextureRef)srcTexture dstTexture:(MTLTextureRef)dstTexture outTexture:(MTLTextureRef)outTexture {
    int width  = (int)srcTexture.width;
    int height = (int)srcTexture.height;
    int dstWidth  = (int)dstTexture.width;
    int dstHeight = (int)dstTexture.height;

    if (_mfxSpatialScaler == nil) {
        MTLFXSpatialScalerDescriptor* desc = [[MTLFXSpatialScalerDescriptor alloc] init];
        desc.inputWidth   = width;
        desc.inputHeight  = height;
        desc.outputWidth  = dstWidth;   // _screenWidth
        desc.outputHeight = dstHeight;  // _screenHeight
        desc.colorTextureFormat  = srcTexture.pixelFormat;
        desc.outputTextureFormat = dstTexture.pixelFormat;
        if (@available(iOS 16.0, *)) {
            desc.colorProcessingMode = MTLFXSpatialScalerColorProcessingModePerceptual;
        } else {
            // Fallback on earlier versions
        }
        _device = MTLCreateSystemDefaultDevice();
        _mfxSpatialScaler = [desc newSpatialScalerWithDevice:_device];
        if (_mfxSpatialScaler == nil) {
            return;
        }
        _commandQueue = [_device newCommandQueue];

        MTLTextureDescriptor *texdesc = [[MTLTextureDescriptor alloc] init];
        texdesc.width  = (int)dstTexture.width;
        texdesc.height = (int)dstTexture.height;
        texdesc.storageMode = MTLStorageModePrivate;
        texdesc.usage = MTLTextureUsageRenderTarget | MTLTextureUsageShaderRead | MTLTextureUsageShaderWrite;
        texdesc.pixelFormat = dstTexture.pixelFormat;
        _outTexture = [_device newTextureWithDescriptor:texdesc];
    }

    id upscaleCommandBuffer = [_commandQueue commandBuffer];
    upscaleCommandBuffer.label = @"Upscale Command Buffer";
    _mfxSpatialScaler.colorTexture  = srcTexture;
    _mfxSpatialScaler.outputTexture = _outTexture;
    [_mfxSpatialScaler encodeToCommandBuffer:upscaleCommandBuffer];
    // outTexture = _outTexture;

    id textureCommandBuffer = [_commandQueue commandBuffer];
    id _mfxSpatialEncoder = [textureCommandBuffer blitCommandEncoder];
    [_mfxSpatialEncoder copyFromTexture:_outTexture toTexture:outTexture];
    [_mfxSpatialEncoder endEncoding];
    [upscaleCommandBuffer commit];
}
@end

extern "C" {
    void MetalFX_SpatialScaling(void* srcTexturePtr, void* dstTexturePtr, void* outTexturePtr) {
        if (delegateObject == nil) {
            if (@available(iOS 16.0, *)) {
                delegateObject = [[MetalFXDelegate alloc] init];
            } else {
                // Fallback on earlier versions
            }
        }
        if (srcTexturePtr == nil || dstTexturePtr == nil || outTexturePtr == nil) {
            return;
        }
        id<MTLTexture> srcTexture = (__bridge id<MTLTexture>)(void *)srcTexturePtr;
        id<MTLTexture> dstTexture = (__bridge id<MTLTexture>)(void *)dstTexturePtr;
        id<MTLTexture> outTexture = (__bridge id<MTLTexture>)(void *)outTexturePtr;
        if (@available(iOS 16.0, *)) {
            [delegateObject SpatialScaling:srcTexture dstTexture:dstTexture outTexture:outTexture];
        } else {
            // Fallback on earlier versions
        }
        return;
    }
}

With this C# and Objective-C code, the image that appears on screen is black. If I save the MTLTexture to a PNG inside the iOS plug-in, the PNG looks correct (not black), so I think the write into the outTexture that goes back to Unity is failing.
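One thing stands out in the plug-in above: the blit that copies _outTexture into the outTexture Unity passed in is encoded on textureCommandBuffer, but only upscaleCommandBuffer is ever committed. A minimal Swift sketch of the intended submission flow (function and parameter names here are illustrative, not from the post):

import Metal
import MetalFX

// Hypothetical sketch (not the poster's code): encode the upscale, then the
// blit into Unity's texture, and commit BOTH command buffers. In the post
// above, textureCommandBuffer is never committed, so the copy into
// outTexture would never execute and Unity would sample an unwritten
// (black) texture.
@available(iOS 16.0, macOS 13.0, *)
func runSpatialUpscale(queue: MTLCommandQueue,
                       scaler: MTLFXSpatialScaler,
                       source: MTLTexture,
                       scratch: MTLTexture,
                       unityTexture: MTLTexture) {
    guard let upscale = queue.makeCommandBuffer(),
          let copy = queue.makeCommandBuffer(),
          let blit = copy.makeBlitCommandEncoder() else { return }

    scaler.colorTexture = source      // MetalFX reads this...
    scaler.outputTexture = scratch    // ...and upscales into this
    scaler.encode(commandBuffer: upscale)

    blit.copy(from: scratch, to: unityTexture)
    blit.endEncoding()

    upscale.commit()
    copy.commit()                     // the step missing in the post
    // Unity reads the texture on its own schedule, so make sure the GPU
    // copy has landed before returning to the C# caller.
    copy.waitUntilCompleted()
}

This is only the command-submission half; Unity's own scheduling around GetNativeTexturePtr() may impose additional constraints, so treat it as a sketch of the ordering rather than a verified fix.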
0 replies · 0 boosts · 685 views · Sep ’23
Metal API supported file formats for models?
Hello everyone! I have a small question about one little thing when it comes to programming in Metal. There are some models I wish to use, along with animations and skins on them; their file extension is glTF. glTF has been used in a number of projects, such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
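For context, Metal itself only consumes buffers and textures and has no built-in model loader, so file formats are handled upstream, typically by Model I/O or a third-party glTF importer. A quick runtime check, in Swift, of which extensions Model I/O will import (the extension list here is just for illustration):

import ModelIO

// Metal has no file-format support of its own; Model I/O is the usual
// import path for meshes. Ask it at runtime which extensions it accepts.
for ext in ["gltf", "glb", "obj", "usdz"] {
    print(".\(ext) importable by Model I/O: \(MDLAsset.canImportFileExtension(ext))")
}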
3 replies · 1 boost · 1.2k views · Sep ’23
metal-cpp compile errors
Hi, I'm trying to use metal-cpp, but I get compile errors.

ISO C++ requires the name after '::' to be found in the same scope as the name before '::', at metal-cpp/Foundation/NSSharedPtr.hpp(162):

template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}

Use of old-style cast, at metal-cpp/Foundation/NSObject.hpp(149):

template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}

The Xcode project was generated using CMake:

target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE
    "-Wgnu-anonymous-struct"
    "-Wold-style-cast"
    "-Wdtor-name"
    "-Wpedantic"
    "-Wno-gnu"
)

Maybe I need to set some CMake flags for the C++ compiler?
0 replies · 0 boosts · 677 views · Sep ’23
Vision Pro Safari WebXR: tracking head movement outside an XR session
I am developing a web application that leverages WebGL to display 3D content. The app would benefit from tracking headset movement when the 2D page is viewed as a window while wearing Vision Pro. This would ultimately let me convey the idea of the window acting as a portal into a virtual environment, since the rendered perspective of the 3D environment would match that of the user wearing the headset. This is a generic request/goal, as it would be applicable to any browser and any 6DoF device, but I am interested in knowing whether it is currently possible with Vision Pro (and the Simulator) and its version of Safari for "spatial computing". I can track head movement while in a WebXR "immersive" session, but I would like to be able to track it without going into VR mode. Is this possible? If so, how, and using which tools?
0 replies · 0 boosts · 878 views · Aug ’23
GPU ray tracing requirements for HelloPhotogrammetry command-line app
Hi, I am trying to build and run the HelloPhotogrammetry app that is associated with WWDC21 session 10076 (available for download here). But when I run the app, I get the following error message: "A GPU with supportsRaytracing is required". I have a Mac Pro (2019) with an AMD Radeon Pro 580X 8 GB graphics card and 96 GB RAM. According to the requirements slide in the WWDC session, this should be sufficient. Is this a configuration issue, or do I actually need a different graphics card (and if so, which one)? Thanks in advance.
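For anyone hitting the same message: the requirement presumably maps to the supportsRaytracing property on MTLDevice, which can be queried per GPU. A minimal macOS sketch to see what each installed GPU reports:

import Metal

// Print every GPU Metal can see and whether it reports ray-tracing support,
// which appears to be what the "A GPU with supportsRaytracing is required"
// check is testing.
for device in MTLCopyAllDevices() {
    print("\(device.name): supportsRaytracing = \(device.supportsRaytracing)")
}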
11 replies · 0 boosts · 2.7k views · Aug ’23
Photogrammetry project question
Pre-planning a project to use multiple 360° cameras set up in a grid to generate an immersive experience, hoping to use photogrammetry to generate 3D images of objects inside the grid. beeconcern.ca wants to expand their bee gardens, and theconcern.ca wants to use it to make a live immersive apiary experience. Still working out the best method for compiling, editing, and rendering; I have been leaning towards UE5 but am still seeking advice.
1 reply · 0 boosts · 1.1k views · Aug ’23
View and Projection Matrix in Metal for a First-Person Camera
So I'm trying to make a simple scene with some geometry and a movable camera. So far I've been able to render basic geometry in 2D alongside transforming set geometry using matrices. Following this I moved on to the Calculating Primitive Visibility Using Depth Testing sample... also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get far with my rather blurry memory of OpenGL, although when I compared my view and projection matrices with the ones from OpenGL's glm::lookAt() and glm::perspective() functions there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal sample code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem description:
When the camera is positioned right in front of the geometry, the view and projection matrices produce seemingly accurate results: Camera Position: (0, 0, 1); Camera Direction: (0, 0, -1). When moving farther away, though, parts of the scene are wrongfully culled, notably the ones farther from the camera: Camera Position: (0, 0, 2); Camera Direction: (0, 0, -1). Rotating the camera also produces confusing results: Camera Position: (0, 0, 1); Camera Direction: (cos(250°), 0, sin(250°)) (yes, I converted to radians).

My suspicions:
The projection isn't converting the vertices from view space to normalized device coordinates correctly. Also, comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which doesn't appear right either. Obviously the view matrix is also not correct, as I'm pretty sure what's described above isn't supposed to happen.
Code samples

MainShader.metal:

#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>

using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;
    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo  = inEye + dir;       // position + direction
    float3 inUp  = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3){ -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4{ x.x, y.x, z.x, 0 },
                              float4{ x.y, y.y, z.y, 0 },
                              float4{ x.z, y.z, z.z, 0 },
                              float4{ t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0 },
                                    (float4){  0, ys,  0, 0 },
                                    (float4){  0,  0, zs, 1 },
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }

These are the vertex definitions:

let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]

// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]

This is the initialization code for the depth stencil descriptor and state:

_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
So if you have any idea why it's not working, have some working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
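A common culprit in setups like the one above is the final write to output.position: the clip-space result of projection * view should be passed through with its w component intact (the rasterizer performs the divide by w), rather than forcing w = 1 and rescaling xy by the viewport. A hedged Swift sketch of a matching look-at/perspective pair using simd, under the same conventions as the shader above (helper names are mine, not from the post):

import simd
import Foundation // for tanf

// A conventional look-at view matrix plus a perspective projection that
// targets Metal's [0, 1] clip-space depth range.
func lookAt(eye: SIMD3<Float>, to: SIMD3<Float>, up: SIMD3<Float>) -> float4x4 {
    let z = simd_normalize(to - eye)          // camera forward
    let x = simd_normalize(simd_cross(up, z)) // camera right
    let y = simd_cross(z, x)                  // camera up
    let t = SIMD3<Float>(-simd_dot(x, eye), -simd_dot(y, eye), -simd_dot(z, eye))
    return float4x4([SIMD4<Float>(x.x, y.x, z.x, 0),
                     SIMD4<Float>(x.y, y.y, z.y, 0),
                     SIMD4<Float>(x.z, y.z, z.z, 0),
                     SIMD4<Float>(t.x, t.y, t.z, 1)])
}

func perspective(fovyRadians fovy: Float, aspect: Float, near: Float, far: Float) -> float4x4 {
    let ys = 1 / tanf(fovy * 0.5)
    let xs = ys / aspect
    let zs = far / (far - near)
    return float4x4([SIMD4<Float>(xs, 0, 0, 0),
                     SIMD4<Float>(0, ys, 0, 0),
                     SIMD4<Float>(0, 0, zs, 1),          // w receives view-space z
                     SIMD4<Float>(0, 0, -near * zs, 0)])
}

With matrices like these, the vertex shader would output projectionm * viewm * float4(position, 1) unmodified, with no division by the viewport and no overwriting of w; the vertices would also need to be specified in world units rather than pixel coordinates for this projection to behave sensibly.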
3 replies · 1 boost · 2.7k views · Aug ’23