Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.


Understanding MaterialX, USD shaders, and material workflows from Blender and other tools
Hi, I've been exploring a project with visionOS, and have been quite confused about the capabilities and workflows for using custom materials in RealityKit and Reality Composer Pro for visionOS. Ideally I would be able to create/load/modify a model and its materials in Blender, export to OpenUSD, and have it load fully in RCP, but this hasn't been the case. Instead, different aspects of the material don't seem to be exported correctly, and that has led me to investigate MaterialX, OpenUSD, and Metal, and how they work in visionOS, RealityKit, and Reality Composer Pro.

MaterialX was announced as a primary format for working with 3D materials, but the .mtlx file format doesn't appear to be compatible with RCP directly; specifically, I tried materials provided in the AMD OpenGPU MaterialX Library. (Note: AFAIK, Blender does not currently support MaterialX.) Downloading a material provides a folder with the textures and a corresponding .mtlx file, but currently in RCP (Xcode 15 beta 6) this file is ignored. Similarly, trying to load it using ShaderGraphMaterial fails with 'Error in prim' and no other details that I can see. It also appears that there is a way of bundling MaterialX files within an OpenUSD file (especially implied by the error about prims), but I haven't been able to work out how this can be done, or whether it is the correct approach.

Unpacking the Apple-provided materials in RCP from usdz to usda, these appear to define the shaders in OpenUSD and reference the RCP MaterialX Preview Shader (presumably created using the Shader Graph). There are also references from the official MaterialX.org and OpenUSD sites to a USD/MaterialX plugin for enabling compatibility.

I've also tried, and followed along with, the introductory tutorial on the built-in Shader Graph. I find it difficult to understand and quite different from Blender's shader nodes, but it currently appears to be the primary promoted way to create and work with materials.

Finally, I had expected that CustomMaterials using Metal shaders would be available, since Metal was mentioned for Fully Immersive Spaces, and 'Explore Advanced Rendering with RealityKit 2' from WWDC21 covers custom shaders. But this is not listed as included in visionOS, and according to an answer here it's not currently planned (although the documentation here still mentions Metal briefly).

Overall, what are the suggested workflows for materials in RealityKit on visionOS?
- Is there a fully compatible path from Blender -> OpenUSD -> Reality Composer Pro?
- Do I need to export materials and models from Blender individually and rebuild them in RCP using the Shader Graph?
- Can I utilise existing MaterialX materials in Reality Composer Pro, and if so, how?
- Are there any other good resources for getting comfortable with the nodes in the Shader Graph? What WWDC talks would be good to revisit on this?

Really appreciate any guidance!
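For anyone comparing notes, the loading attempt described above usually boils down to something like this minimal sketch. It assumes the standard visionOS app template, where realityKitContentBundle comes from the generated RealityKitContent package; "/Root/MyMaterial" and "Scene.usda" are placeholder names.

import RealityKit
import RealityKitContent

// Minimal sketch: load a Shader Graph material authored in
// Reality Composer Pro. Material path and file name are placeholders.
func loadMaterial() async throws -> ShaderGraphMaterial {
    try await ShaderGraphMaterial(named: "/Root/MyMaterial",
                                  from: "Scene.usda",
                                  in: realityKitContentBundle)
}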
3 replies · 1 boost · 2.5k views · Aug ’23
Wrong platform and OS detection from D3DMetal beta3
Using GPTK beta 3, when launching Steam from a Sonoma b5 VM (launched from the latest UTM, 4.3.5), it says:

D3DM: D3DMetal requires Apple silicon and macOS 14.0 Sonoma or higher

Command used to launch Steam:

gameportingtoolkit ~/my-game-prefix 'C:\Program Files (x86)\Steam\steam.exe'

GPTK was compiled/installed fine using x86 Homebrew and the Xcode 15b6 command line tools. gameportingtoolkit has been copied to /usr/local/bin so the GPTK image could be unmounted. On an M2 Pro 12 CPU/19 GPU Mac mini, 32 GB (8 performance cores and 20 GB of RAM allocated to the VM).
3 replies · 0 boosts · 1.6k views · Aug ’23
Which attribute should I use to determine the eye in the fragment shader?
Hi, I have an app based on Metal that runs on visionOS. It has a huge sphere mesh and renders video output (from AVPlayer) on it. What I want to do is render the left portion of my video output on the left eye, and the right portion on the right eye. In my fragment shader, I think I need to know whether the shader invocation is for the left eye or the right eye. (I'm not using MV-HEVC encoded video, just HEVC.) What I currently do is assume 'amplification_id' is what determines the side of the eye, but I'm not sure this is correct.
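For reference, here is a minimal sketch of the host-side setup that usually accompanies amplification_id; the mapping values are assumptions, since the actual view layout depends on the Compositor Services configuration. Note that in MSL, [[amplification_id]] is read in the vertex stage; a per-eye value is typically forwarded to the fragment shader through a flat-interpolated varying.

import Metal

// Sketch: enable vertex amplification so one draw renders both eyes.
// Each view mapping routes one amplified instance to a render target
// array slice (0 = left eye, 1 = right eye in this assumed layout).
func encodeStereo(encoder: MTLRenderCommandEncoder) {
    var mappings = [
        MTLVertexAmplificationViewMapping(viewportArrayIndexOffset: 0,
                                          renderTargetArrayIndexOffset: 0),
        MTLVertexAmplificationViewMapping(viewportArrayIndexOffset: 0,
                                          renderTargetArrayIndexOffset: 1),
    ]
    encoder.setVertexAmplificationCount(2, viewMappings: &mappings)
    // ... encode draws; the vertex function runs once per view,
    // distinguished by [[amplification_id]].
}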
2 replies · 0 boosts · 678 views · Aug ’23
Video Passthrough with Compositor Services and Metal on visionOS
I just created my first Compositor Services/Metal project for visionOS. I was surprised when I ran it in the simulator that the room wasn't visible. Looking through the Compositor Services API, there doesn't seem to be a way to enable passthrough video. If that's true, it means there's no way to create a mixed immersive space using Compositor Services and Metal. And if that's true, it would also apply to game engines like Unity that use those APIs to support immersive spaces. (I'm aware that Unity also has a feature that allows it to render using RealityKit, but I'm referring to full-screen apps using features like custom shaders.)

Does this mean that apps created with Compositor Services and Metal are VR-only? If so, is that the way things are going to be for 1.0? And if so, are there any plans to allow compositing with the passthrough video in a future release? I hope I'm overlooking something obvious. Thanks in advance.
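For context, the setup in question looks roughly like this sketch (names follow the Compositor Services template; renderLoop is a placeholder for the app's Metal frame loop). As of the visionOS 1.0 betas, the documented pairing for a CompositorLayer is the .full immersion style.

import SwiftUI
import CompositorServices

// Placeholder for the app's Metal frame loop.
func renderLoop(_ layerRenderer: LayerRenderer) { /* render frames here */ }

// Sketch: a Metal-rendered immersive space via Compositor Services.
struct MetalImmersiveSpace: Scene {
    var body: some Scene {
        ImmersiveSpace(id: "metal-space") {
            CompositorLayer { layerRenderer in
                renderLoop(layerRenderer)
            }
        }
        .immersionStyle(selection: .constant(.full), in: .full)
    }
}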
1 reply · 1 boost · 812 views · Aug ’23
Physics Engine for Metal API rendering
Hello everyone! Here with another graphics API question, but a slightly different one. I'm currently looking at two physics SDKs: PhysX and Bullet. The game Asphalt 9 uses Metal and Bullet, and I would like to do the same. With Metal 3 out and seemingly stable, I would like to use one of these engines for my upcoming Metal rendering engine. But there's a catch: I wish to use Objective-C or C++ for both the rendering engine and the physics engine, without touching Swift (it's a good language, but I wish to use C++ for game development). What do you all say about this?
1 reply · 0 boosts · 1k views · Aug ’23
Unable to use bfloat on M1 Ultra
I have the higher-end M1 Mac Studio, and I have had a lot of success with Metal pipelines. However, I tried to compile a compute pipeline that uses the bfloat type, and the compiler seems to have no idea what that is:

program_source:10:55: error: unknown type name 'bfloat'; did you mean 'float'?

Is there an OS update necessary for this support?
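Probably relevant: bfloat was added in Metal Shading Language 3.1, which ships with the Xcode 15 / macOS 14 toolchain. For runtime compilation, here is a sketch of opting into that language version, assuming the macOS 14 SDK is available:

import Metal

// Sketch: request MSL 3.1 when compiling shader source at runtime;
// bfloat is a Metal 3.1 language feature.
func makeLibrary(device: MTLDevice, source: String) throws -> MTLLibrary {
    let options = MTLCompileOptions()
    options.languageVersion = .version3_1   // requires the macOS 14 / iOS 17 SDK
    return try device.makeLibrary(source: source, options: options)
}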
1 reply · 0 boosts · 466 views · Aug ’23
Metal encoder dispatch based on the value of an atomic counter
I'm trying to find a way to reduce synchronization time between two compute shader dispatches, where one dispatch depends on an atomic counter written by the other.

Example: I have two Metal kernels, select and execute. select looks through the numbers buffer and stores the index of every number < 10 in a new buffer, selectedNumberIndices, using an atomic counter. execute is then run counter number of times to do something with those selected indices.

kernel void select(device atomic_uint &counter,
                   device uint *numbers,
                   device uint *selectedNumberIndices,
                   uint id [[thread_position_in_grid]]) {
    if (numbers[id] < 10) {
        uint idx = atomic_fetch_add_explicit(&counter, 1, memory_order_relaxed);
        selectedNumberIndices[idx] = id;
    }
}

kernel void execute(device uint *selectedNumberIndices,
                    uint id [[thread_position_in_grid]]) {
    // do something, counter number of times
}

Currently I can do this by using .waitUntilCompleted() between the dispatches to ensure I get accurate results, something like:

// select
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(selectState)
encoder.setBuffer(counterBuffer, offset: 0, index: 0)
encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
encoder.dispatchThreads(.init(width: Int(numbersCount), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: selectState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

// wait
buffer.waitUntilCompleted()

// execute
buffer = queue.makeCommandBuffer()!
encoder = buffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(executeState)
encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
// extract the value of the atomic counter
let counterValue = counterBuffer.contents().load(as: UInt32.self)
encoder.dispatchThreads(.init(width: Int(counterValue), height: 1, depth: 1),
                        threadsPerThreadgroup: .init(width: executeState.threadExecutionWidth, height: 1, depth: 1))
encoder.endEncoding()
buffer.commit()

My question: is there any way to get the same functionality without the costly buffer.waitUntilCompleted() call? Or am I going about this in completely the wrong way, or missing something else?
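One direction worth sketching (an assumption-laden outline, not a confirmed answer): encode both passes in a single command buffer and size the second dispatch on the GPU with an indirect dispatch, so no CPU readback or waitUntilCompleted is needed. This assumes a small extra kernel, called prepareState here (hypothetical), that packs the counter into MTLDispatchThreadgroupsIndirectArguments:

import Metal

// Sketch: GPU-driven sizing of the second dispatch. `indirectBuffer`
// holds MTLDispatchThreadgroupsIndirectArguments (three UInt32 values)
// that a tiny kernel fills from the atomic counter after `select` runs.
func encodeGPUDriven(queue: MTLCommandQueue,
                     selectState: MTLComputePipelineState,
                     prepareState: MTLComputePipelineState, // hypothetical counter-to-args kernel
                     executeState: MTLComputePipelineState,
                     counterBuffer: MTLBuffer,
                     numbersBuffer: MTLBuffer,
                     selectedNumberIndicesBuffer: MTLBuffer,
                     indirectBuffer: MTLBuffer,
                     numbersCount: Int) {
    let buffer = queue.makeCommandBuffer()!
    let encoder = buffer.makeComputeCommandEncoder()! // serial dispatch: passes run in order

    // Pass 1: select, exactly as before.
    encoder.setComputePipelineState(selectState)
    encoder.setBuffer(counterBuffer, offset: 0, index: 0)
    encoder.setBuffer(numbersBuffer, offset: 0, index: 1)
    encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 2)
    encoder.dispatchThreads(MTLSize(width: numbersCount, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: selectState.threadExecutionWidth, height: 1, depth: 1))

    // Pass 2: a single thread converts the counter into threadgroup counts.
    encoder.setComputePipelineState(prepareState)
    encoder.setBuffer(counterBuffer, offset: 0, index: 0)
    encoder.setBuffer(indirectBuffer, offset: 0, index: 1)
    encoder.dispatchThreads(MTLSize(width: 1, height: 1, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 1, height: 1, depth: 1))

    // Pass 3: execute, sized by the GPU-written arguments.
    encoder.setComputePipelineState(executeState)
    encoder.setBuffer(selectedNumberIndicesBuffer, offset: 0, index: 0)
    encoder.dispatchThreadgroups(indirectBuffer: indirectBuffer,
                                 indirectBufferOffset: 0,
                                 threadsPerThreadgroup: MTLSize(width: executeState.threadExecutionWidth, height: 1, depth: 1))

    encoder.endEncoding()
    buffer.commit()
}

Because indirect dispatch works in whole threadgroups, execute would also need the counter bound so it can guard with if (id < counter).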
2 replies · 0 boosts · 559 views · Aug ’23
Sampling 3D Textures on Apple Silicon
I create a 3D texture with size 32x32x64. Then I write to this RWTexture with a compute shader, with threadsPerThreadgroup of {32,32,32}. The interesting part is that when I write only the first 32 slices and then sample the texture in a pixel shader, the sampling result is always 0.0. Only when I fill all 64 slices does sampling return what I expect.

// Fill the first 32 slices
uint3 pixelIndex = threadIdWithinDispatch;
Vol[pixelIndex] = somedata;

// Fill the last 32 slices
pixelIndex.z += 32;
Vol[pixelIndex] = somedata;
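For context, here is a sketch of the host-side setup such a test usually involves; the descriptor calls are standard Metal, while the pixel format, pipeline, and threadgroup sizes are assumptions. Note the dispatch grid spans the full 64-slice depth so no layer is left unwritten:

import Metal

// Sketch: create a 32x32x64 3D texture and dispatch a grid that
// covers every slice.
func makeVolume(device: MTLDevice,
                queue: MTLCommandQueue,
                fillState: MTLComputePipelineState) -> MTLTexture {
    let desc = MTLTextureDescriptor()
    desc.textureType = .type3D
    desc.pixelFormat = .rgba16Float          // assumption
    desc.width = 32; desc.height = 32; desc.depth = 64
    desc.usage = [.shaderWrite, .shaderRead]
    let volume = device.makeTexture(descriptor: desc)!

    let buffer = queue.makeCommandBuffer()!
    let encoder = buffer.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(fillState)
    encoder.setTexture(volume, index: 0)
    // 8x8x8 = 512 threads per threadgroup, within typical limits.
    encoder.dispatchThreads(MTLSize(width: 32, height: 32, depth: 64),
                            threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 8))
    encoder.endEncoding()
    buffer.commit()
    return volume
}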
0 replies · 0 boosts · 265 views · Aug ’23
Dynamic Metal Library build from download fails
The dynamic library build for the Metal Library sample fails when built from the downloaded copy. It uses the name of the download as the directory name, which contains spaces and so does not produce a syntactically correct file name. Renaming the folder containing the build files seems to resolve that problem. Then the header files for the dynamic library get an access-denied error when compiling. Why are these demos released with such trivial problems? Surely someone has tried to run this previously; the demo should have been fixed, or the short Readme updated with instructions on how to set it up.
2 replies · 0 boosts · 596 views · Aug ’23
Metal render crash on iPhone X/XR/XS compiled with Xcode 15 beta
We're getting some strange rendering crashes on various devices running both iOS 16 and the iOS 17 beta. The problems all appear when compiling with any of the Xcode 15 betas, including beta 8. This code has worked fine for years.

The clearest error we get is on the iPhone X and XR, where newRenderPipelineStateWithDescriptor returns:

"Inlining all functions due to use of indirect argument buffer buffer(15): Unable to map argument buffer access to resource"

Buffer 15 is where we stash our textures and looks like this:

typedef struct RegularTextures {
    // A bit per texture that's present
    uint32_t texPresent [[ id(WKSTexBufTexPresent) ]];
    // Texture indirection (for accessing sub-textures)
    const metal::array<float, 2*WKSTextureMax> offset [[ id(WKSTexBuffIndirectOffset) ]];
    const metal::array<float, 2*WKSTextureMax> scale [[ id(WKSTexBuffIndirectScale) ]];
    const metal::array<metal::texture2d<float, metal::access::sample>, WKSTextureMax> tex [[ id(WKSTexBuffTextures) ]];
} RegularTextures;

The program we're trying to set up looks like this:

vertex ProjVertexTriB vertexTri_multiTex(VertexTriB vert [[stage_in]],
                                         constant Uniforms &uniforms [[ buffer(WKSVertUniformArgBuffer) ]],
                                         constant Lighting &lighting [[ buffer(WKSVertLightingArgBuffer) ]],
                                         constant VertexTriArgBufferB &vertArgs [[ buffer(WKSVertexArgBuffer) ]],
                                         constant RegularTextures &texArgs [[ buffer(WKSVertTextureArgBuffer) ]])
{
    // Do things
}

Fairly benign as these things go. Even more curiously, a different program with the same RegularTextures argument buffer is sometimes set up first without complaint. I strongly suspect Apple introduced a bug here, but with the impending release, we're just trying to figure out how to work around it.
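One thing worth double-checking in this situation (a guess based on common causes, not a confirmed workaround): errors about mapping argument buffer accesses to resources are sometimes tied to residency, i.e. making sure every texture referenced only through the argument buffer is declared with useResource before the draw:

import Metal

// Sketch: declare residency for textures referenced only through an
// argument buffer, so the driver can map accesses back to resources.
// `textures` stands in for whatever the app stores in buffer(15).
func declareResidency(encoder: MTLRenderCommandEncoder, textures: [MTLTexture]) {
    for tex in textures {
        encoder.useResource(tex, usage: .read, stages: .vertex)
    }
}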
2 replies · 3 boosts · 798 views · Sep ’23
Automated, GUI-Based Installer for Apple Game Porting Toolkit
InstallAware Software has published an open source GUI to automate all of the command line chores involved in getting Apple's Game Porting Toolkit up and running: https://github.com/installaware/AGPT

Features:
- Uninstalls Homebrew for Apple Silicon if it is already present (as mandated by the Apple Game Porting Toolkit)
- Installs Homebrew x86_64
- Installs Wine
- Configures Wine settings for Windows (optional)
- Copies over Apple Game Porting Toolkit binaries to accelerate DirectX 12 games running on Apple Silicon when the DMG download from Apple Developer is locally available (optional)
- Installs the manually supplied version of Xcode Command Line Tools when the DMG download from Apple Developer is locally available (Xcode Command Line Tools are automatically set up by Homebrew unless you're running macOS Sonoma)
- Installs arbitrary Windows software, including games
- Runs previously installed Windows software, including Wine's default tools (File Manager, Registry Editor, "reboot" tool, App Uninstaller, Task Manager, Notepad, Wordpad, Internet Browser); supports passing custom command line parameters to launched apps
- Does not require macOS Sonoma or Apple Silicon
- Supports Apple Intel hardware and earlier macOS versions through standard Wine functionality (running 3D games at high performance using your non-Apple Silicon embedded/dedicated GPUs)

In addition to locally downloading and building the sources yourself using the repository above (and creating your own forks), you may also download a pre-built DMG, notarized by Apple and thus safe and free of security limitations, providing you with a single-click experience to run any Windows software on your Mac today: https://www.installaware.com/iamp/agpt.dmg
1 reply · 1 boost · 2.8k views · Sep ’23
Texture Write Rounding
Hello, I used outTexture.write(half4(hx,0,0,0), uint2(x, y)) to write a pixel value to a texture, then read it back via blitEncoder copyFromTexture into an MTLBuffer, but the integer values read from the MTLBuffer are not as expected. For half values less than 128/256 I get the expected value, but for half values greater than 128/256 I get a smaller value. For example:

127.0/256 ==> 127
128.0/256 ==> 128
129.0/256 ==> 129
130.0/256 ==> 130
131.0/256 ==> 131

Any thoughts? Thanks, Caijohn
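A plausible explanation (an assumption about the setup, since the pixel format isn't stated): if the texture is a Unorm format like .r8Unorm, Metal converts a float f to the stored byte with round(clamp(f, 0, 1) * 255), a scale of 255 rather than 256, which reproduces an off-by-one above 0.5. A quick arithmetic sketch:

import Foundation

// Sketch: unorm8 conversion as Metal specifies it (scale by 255, round).
// Note 129.0/256 stores 128, not 129, matching the reported behavior.
func unorm8(_ f: Float) -> UInt8 {
    UInt8((min(max(f, 0), 1) * 255).rounded(.toNearestOrEven))
}

for n: Float in [127, 128, 129, 130, 131] {
    print("\(n)/256 ==> \(unorm8(n / 256))")
}
// Prints: 127, 128, 128, 129, 130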
3 replies · 0 boosts · 333 views · Sep ’23
Need example of HDR video recording on macOS (and iOS)
The macOS screen recording tool doesn't appear to support recording HDR content (e.g., in QuickTime Player). The tool can record from the camera using the various YCbCr 422 and 420 formats needed for HEVC and ProRes HDR10 recording, but doesn't offer any options for screen recording HDR. So that leaves in-game screen recording with AVFoundation.

Without any YCbCr formats exposed in the Metal API, how do we use CVPixelBuffer with Metal and then send these formats off to the video codecs directly? Can we send Rec. 2020 RGB10A2Unorm data directly? I'd like the fewest conversions possible.
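For the CVPixelBuffer-to-Metal half of the question, the usual bridge is CVMetalTextureCache (created once via CVMetalTextureCacheCreate). A sketch, assuming an RGB10A2-style buffer rendered as a single plane; planar YCbCr formats would instead map each plane to its own texture with per-plane formats:

import CoreVideo
import Metal

// Sketch: wrap a CVPixelBuffer plane as an MTLTexture without copying.
func makeTexture(from pixelBuffer: CVPixelBuffer,
                 cache: CVMetalTextureCache) -> MTLTexture? {
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    var cvTexture: CVMetalTexture?
    let status = CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache, pixelBuffer, nil,
        .rgb10a2Unorm,        // assumption: matches the buffer's layout
        width, height,
        0,                    // plane index; per-plane for YCbCr formats
        &cvTexture)
    guard status == kCVReturnSuccess, let cvTexture else { return nil }
    return CVMetalTextureGetTexture(cvTexture)
}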
0 replies · 2 boosts · 993 views · Sep ’23
Metal API: supported file formats for models?
Hello everyone! I have a small concern about one little thing when it comes to programming in Metal. There are some models that I wish to use, along with animations and skins on them; their file extension is gltf. glTF has been used in a number of projects such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
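Worth noting: Metal itself is a rendering API and doesn't parse model files at all. Apple's Model I/O framework is the usual loader, and its built-in formats (such as .obj and .usd) don't include glTF, so a third-party importer or conversion to USD is the typical route. A sketch of the Model I/O path:

import ModelIO
import MetalKit

// Sketch: load a model file with Model I/O into meshes Metal can draw.
// Works for Model I/O's formats (.obj, .usd, ...); glTF would need a
// third-party importer first.
func loadMesh(device: MTLDevice, url: URL) throws -> [MTKMesh] {
    let allocator = MTKMeshBufferAllocator(device: device)
    let asset = MDLAsset(url: url, vertexDescriptor: nil, bufferAllocator: allocator)
    return try MTKMesh.newMeshes(asset: asset, device: device).metalKitMeshes
}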
3 replies · 1 boost · 1.3k views · Sep ’23