Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and share resources for game developers.

Post · Replies · Boosts · Views · Activity

Game Porting Toolkit 1.02 installation issue
How do I fix this? I typed brew upgrade in Terminal and this popped up:

Error: Cannot install in Homebrew on ARM processor in Intel default prefix (/usr/local)!
Please create a new installation in /opt/homebrew using one of the "Alternative Installs" from:
  https://docs.brew.sh/Installation
You can migrate your previously installed formula list with:
  brew bundle dump
3
1
2.0k
Jul ’23
Exported .usdz scenes are not compatible with common tools
If you have a scene with a simple custom .usda material applied to a primitive like a cube, the exported (.usdz) material definition is unknown to tools like Reality Converter Version 1.0 (53) or Blender Version 3.6.1. Reality Converter shows warnings such as "Missing references in USD file" and "Invalid USD shader node in USD file". Even Reality Composer Pro is unable to recreate the material correctly from its own exported .usdz files. Feedback: FB12699421
3
0
1.1k
Jul ’23
MTLBuffer Debug Markers
I'm trying to compare two allocation schemes in my Metal renderer:
1. allocate separate MTLBuffers out of an MTLHeap
2. allocate one full-size MTLBuffer from an MTLHeap, then do my own suballocation and track the offsets
The first is straightforward and I get a nice overview of everything in the Xcode Metal debugger. For the second, I use addDebugMarker:range: to label each of my custom buffers. I have looked everywhere and can't see where my debug labels are supposed to appear in the debugger. The memory overview only shows the one MTLBuffer that spans the entire MTLHeap. My render pass works as expected, but the command list and resource views only reference the single MTLBuffer as opposed to the tagged ranges. What am I missing?
2
0
343
Jul ’23
Unable to use Jax with metal on Apple M2
Hi, I'm following https://developer.apple.com/metal/jax/ to install jax on my Mac. The installation is successful. However, running the given example fails:

$ python -c 'import jax; jax.numpy.arange(10)'
2023-07-27 20:26:08.492162: W pjrt_plugin/src/mps_client.cc:535] WARNING: JAX Apple GPU support is experimental and not all JAX functionality is correctly supported!
Metal device set to: Apple M2 Pro
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
loc("-":2:3): error: custom op 'func.func' is unknown
fish: Job 1, 'python3 $argv' terminated by signal SIGSEGV (Address boundary error)
2
0
630
Jul ’23
Error when trying to create texture on CPU, "Not Supported on this Device"
I am working on creating a "Volume" application in RealityKit for visionOS. I want to create a texture on the CPU that I can hook into a Material and modify. When I go to create the texture I get this error:

Linear textures from shared buffers is not supported on this device

Here is the code:

guard let device = MTLCreateSystemDefaultDevice() else {
    fatalError("unable to get metal device")
}
let textureDescriptor = MTLTextureDescriptor.textureBufferDescriptor(with: .r32Float, width: 64, resourceOptions: .storageModeShared, usage: .shaderRead)
let buffer = device.makeBuffer(length: 64 * 4)
return buffer?.makeTexture(descriptor: textureDescriptor, offset: 0, bytesPerRow: 64 * 4)
1
0
569
Jul ’23
Transferring data to a Metal MTLBuffer dynamically in Objective-C
I have the following MTLBuffer. How can I send INPUTVALUE to the memINPUT buffer? I need to send it repeatedly in Objective-C.

// header file
@property id<MTLBuffer> memINPUT;

// main file
int length = 1000;
...
memINPUT = [_device newBufferWithLength:(sizeof(float)*length) options:0];
...
float INPUTVALUE[length];
for (int i = 0; i < length; i++) {
    INPUTVALUE[i] = (float)i;
}
// How to send INPUTVALUE to memINPUT?
...

The following is the Swift version. I am looking for the Objective-C version.

memINPUT.contents().copyMemory(from: INPUTVALUE, byteCount: length * MemoryLayout<Float>.stride)
1
0
616
Jul ’23
crash 1325: failed assertion `Texture Descriptor Validation MTLTextureDescriptor has width
Just using SKView's

func texture(from node: SKNode, crop: CGRect) -> SKTexture?

crash_info_entry_0:
-[MTLTextureDescriptorInternal validateWithDevice:]:1325: failed assertion `Texture Descriptor Validation
MTLTextureDescriptor has width (8256) greater than the maximum allowed size of 8192.
MTLTextureDescriptor has height (8242) greater than the maximum allowed size of 8192.'

Help!
0
0
549
Aug ’23
acceleration structure doesn't render in gpu trace
I've got a scene which renders as I expect, but in the acceleration structure inspector, the kraken primitive doesn't render. In the list on the left, the structure is there. As expected, there is just one bounding-box primitive, as a lot happens in the intersection function (doing it this way since I've already built my own octree and it takes too long to rebuild BVHs for dynamic geometry). This is just based on the SimplePathTracer example. The signatures of the sphereIntersectionFunction and octreeIntersectionFunction aren't that different:

[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection sphereIntersectionFunction(// Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0), function_constant(useResourcesBuffer)]]
#if SUPPORTS_METAL_3
    , const device Sphere* perPrimitiveData [[primitive_data]]
#endif
    , ray_data IntersectionPayload& payload [[payload]])
{

vs.

[[intersection(bounding_box, triangle_data, instancing)]]
BoundingBoxIntersection octreeIntersectionFunction(// Ray parameters passed to the ray intersector below
    float3 origin [[origin]],
    float3 direction [[direction]],
    float minDistance [[min_distance]],
    float maxDistance [[max_distance]],
    // Information about the primitive.
    unsigned int primitiveIndex [[primitive_id]],
    unsigned int geometryIndex [[geometry_intersection_function_table_offset]],
    // Custom resources bound to the intersection function table.
    device void *resources [[buffer(0)]],
    const device BlockInfo* perPrimitiveData [[primitive_data]],
    ray_data IntersectionPayload& payload [[payload]])

Note: running 15.0 beta 5 (15A5209g) since even the unmodified SimplePathTracer example project will hang the acceleration structure viewer on Xcode 14.
Update: Replacing the octreeIntersectionFunction's code with just a hard-coded sphere does render. Perhaps the viewer imposes a time (or instruction count) limit on intersection functions so as to not hang the GPU?
6
0
656
Aug ’23
Error while installing Game Porting Toolkit on the gptk command
Error: Failure while executing;
/usr/bin/env /usr/local/Homebrew/Library/Homebrew/shims/shared/curl --disable --cookie /dev/null --globoff --show-error --user-agent Homebrew/4.1.3\ \(Macintosh\;\ Intel\ Mac\ OS\ X\ 14.0\)\ curl/8.1.2 --header Accept-Language:\ en --retry 3 --fail --location --silent --head https://mirrors.ustc.edu.cn/homebrew-bottles/bison-3.8.2.ventura.bottle.tar.gz
exited with 35.
1
0
843
Aug ’23
makeLibrary doesn’t report warnings on successful compile
I’m working with Metal on iPad Playgrounds and made a simple editor to help craft shaders. Press a button, it tries to make the library, and reports any errors or warnings. The problem is it doesn’t report warnings if the compile was successful. I’m using the version of makeLibrary with a completion handler which passes in both an MTLLibrary? and an Error?, because the docs specifically say “Both library and error can be non-nil if the compiler successfully generates a library with warnings.” However, I’m not seeing that happen: when there are only warnings, the Error parameter is nil. Maybe I’m misunderstanding or using something wrong. Is this a bug, or how do I get the warnings on a successful compile? Here’s a demo. It should show 2 warnings but doesn’t because err is nil. Add an extraneous character in source to cause an error and then err is not nil and shows the 2 warnings.

import SwiftUI
import Metal

let source = """
#include <metal_stdlib>
using namespace metal;
float something() {}
#warning "foo";
"""

struct ContentView: View {
    var body: some View {
        Button("test") {
            guard let device = MTLCreateSystemDefaultDevice() else { return }
            device.makeLibrary(source: source, options: nil) { lib, err in
                if err == nil {
                    print("no errors or warnings")
                } else if let err = err as? MTLLibraryError {
                    print("found errors or warnings")
                    print(err.localizedDescription)
                } else {
                    print("unknown error type")
                }
            }
        }
    }
}

Oh, and this is where it says both library and error can be non-nil: https://developer.apple.com/documentation/metal/mtlnewlibrarycompletionhandler
0
0
357
Aug ’23
Discover Metal for immersive apps - Example Code
Is there an example app? I'm writing a visual music app. It uses custom SwiftUI menus over a UIKit (UIHostingController) canvas. The Metal pipeline is custom. A fully functional example app would be super helpful. Nothing fancy; an immersive hello triangle would be a great start. Later, I'll need to port UITouches to draw and CMMotionManager to track the POV within a cubemap. Meanwhile, baby steps. Thanks!
3
0
1k
Aug ’23
Metal Shader Converter shader debug symbols
Hello, I’ve started testing the Metal Shader Converter to convert my HLSL shaders to metallib directly, and I was wondering if the option ’-frecord-sources’ was supported in any way? Usually I’m compiling my shaders as follows (from Metal):

xcrun -sdk macosx metal -c -frecord-sources shaders/shaders.metal -o shaders/shaders.air
xcrun -sdk macosx metallib shaders/shaders.air -o shaders/shaders.metallib

The -frecord-sources option allows me to see the source when debugging and profiling a Metal frame. Now with DXC we have a similar option; I can compile a typical HLSL shader with embedded debug symbols with:

dxc -T vs_6_0 -E VSMain shaders/triangle.hlsl -Fo shaders/triangle.dxil -Zi -O0 -Qembed_debug

The important options here are ’-Zi’ and ’-Qembed_debug’, as they make sure debug symbols are embedded in the DXIL. It seems that right now Metal Shader Converter doesn’t pass through the DXIL debug information, and I was wondering if it was possible. I’ve looked at all the options in the utility and haven’t seen anything that looked like it. Right now debug symbols in my shaders are a must-have both for profiling and debugging. For reference, an alternative pipeline would be to use SPIRV-Cross instead. Here's what a typical pipeline with dxc and SPIRV-Cross looks like:

HLSL -> SPIRV -> AIR -> METALLIB

dxc -T ps_6_0 -E PSMain -spirv shaders/triangle.hlsl -Zi -Qembed_debug -O0 -Fo shaders/triangle.frag.spirv
spirv-cross --msl shaders/triangle.frag.spirv --output shaders/triangle.frag.metal
xcrun -sdk macosx metal -c -frecord-sources shaders/triangle.frag.metal -o shaders/triangle.frag.air
xcrun -sdk macosx metallib shaders/triangle.frag.air -o shaders/triangle.frag.metallib

As you can see, it's a lot more steps than Metal Shader Converter, but after all those steps you can get some sort of shader symbols in Xcode when debugging a Metal frame, which is better than nothing. Please let me know if I can provide files, projects or anything that can help supporting shader symbols directly with Metal Shader Converter. Thank you for your time!
0
0
714
Aug ’23
is the MPSDynamicScene example correctly computing the motion vector texture?
I'm trying to implement de-noising of AO in my app, using the MPSDynamicScene example as a guide: https://developer.apple.com/documentation/metalperformanceshaders/animating_and_denoising_a_raytraced_scene
In that example, it computes motion vectors in UV coordinates, resulting in very small values:

// Compute motion vectors
if (uniforms.frameIndex > 0) {
    // Map current pixel location to 0..1
    float2 uv = in.position.xy / float2(uniforms.width, uniforms.height);

    // Unproject the position from the previous frame then transform it from
    // NDC space to 0..1
    float2 prevUV = in.prevPosition.xy / in.prevPosition.w * float2(0.5f, -0.5f) + 0.5f;

    // Next, remove the jittering which was applied for antialiasing from both
    // sets of coordinates
    uv -= uniforms.jitter;
    prevUV -= prevUniforms.jitter;

    // Then the motion vector is simply the difference between the two
    motionVector = uv - prevUV;
}

Yet the documentation for MPSSVGF seems to indicate the offsets should be expressed in texels:

"The motion vector texture must be at least a two channel texture representing how many texels each texel in the source image(s) have moved since the previous frame. The remaining channels will be ignored if present. This texture may be nil, in which case the motion vector is assumed to be zero, which is suitable for static images."

Is this a mistake in the example code? Asking because doing something similar in my own app leaves AO trails, which would indicate the motion vector texture values are too small in magnitude. I don't really see trails in the example, even when I speed up the animation, but that could be due to the example being monochrome.
Update: If I multiply the uv offsets by the size of the texture, I get a bad result, which seems to indicate the header is misleading and they are in fact in UV coordinates. So perhaps the trails I'm seeing in my app are for some other reason. I also wonder who is actually using this API other than me? I would think most game engines are doing their own thing. Perhaps some of Apple's own code uses it.
0
0
580
Aug ’23
Maximize memory read bandwidth on M1 Ultra/M2 Ultra
I am in the process of developing a matrix-vector multiplication kernel. While conducting performance evaluations, I've noticed that on M1/M1 Pro/M1 Max, the kernel demonstrates an impressive memory bandwidth utilization of around 90%. However, when executed on the M1 Ultra/M2 Ultra, this figure drops to approximately 65%. My suspicion is that this discrepancy is attributed to the dual-die architecture of the M1 Ultra/M2 Ultra. It's plausible that the necessary data might be stored within the L2 cache of the alternate die. Could you kindly provide any insights or recommendations for mitigating the occurrence of on-die L2 cache misses on the Ultra chips? Additionally, I would greatly appreciate any general advice aimed at enhancing memory load speeds on these particular chips.
0
0
661
Aug ’23
Where can I find software developers for Vision Pro software?
My name is Leuy, a sophomore at the Wharton School of Business, with a passion for entrepreneurship and a strong belief in the potential of VR and AR technologies to reshape our family interactions. I'm currently working on a groundbreaking startup that aims to create a family-oriented co-working and co-learning platform. The essence of my vision is to help busy working parents spend quality time with their kids using virtual reality (VR) and augmented reality (AR) on Apple Vision Pro. Do you know where I can find the best software developers to help bring my vision to life? Thanks.
3
1
966
Aug ’23
How to get external display information after choosing "Use As Separate Display" via screen mirroring on macOS?
I would like to get some information about the connected display, such as vendor number, eisaId, …, after connecting the external display via "Screen Mirroring" -> "Use As Separate Display". When the same display was connected through the HDMI port versus extend mode in screen mirroring, the information is not identical:

HDMI:
Other display found - ID: 19241XXXX, Name: YYYY (Vendor: 19ZZZ, Model: 57WWW)

Screen mirroring - extend mode:
Other display found - ID: 41288XX, Name: AAA (Vendor: 163ZYYBBB, Model: 16ZZWWYYY)

I tried to get display information with the below method:

func configureDisplays() {
    var onlineDisplayIDs = [CGDirectDisplayID](repeating: 0, count: 16)
    var displayCount: UInt32 = 0
    guard CGGetOnlineDisplayList(16, &onlineDisplayIDs, &displayCount) == .success else {
        os_log("Unable to get display list.", type: .info)
        return
    }
    for onlineDisplayID in onlineDisplayIDs where onlineDisplayID != 0 {
        let name = DisplayManager.getDisplayNameByID(displayID: onlineDisplayID)
        let id = onlineDisplayID
        let vendorNumber = CGDisplayVendorNumber(onlineDisplayID)
        let modelNumber = CGDisplayModelNumber(onlineDisplayID)
        let serialNumber = CGDisplaySerialNumber(onlineDisplayID)
        if !DEBUG_SW, DisplayManager.isAppleDisplay(displayID: onlineDisplayID) {
            let appleDisplay = AppleDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Apple display found - %{public}@", type: .info, "ID: \(appleDisplay.identifier), Name: \(appleDisplay.name) (Vendor: \(appleDisplay.vendorNumber ?? 0), Model: \(appleDisplay.modelNumber ?? 0))")
        } else {
            let otherDisplay = OtherDisplay(id, name: name, vendorNumber: vendorNumber, modelNumber: modelNumber, serialNumber: serialNumber, isVirtual: isVirtual, isDummy: isDummy)
            os_log("Other display found - %{public}@", type: .info, "ID: \(otherDisplay.identifier), Name: \(otherDisplay.name) (Vendor: \(otherDisplay.vendorNumber ?? 0), Model: \(otherDisplay.modelNumber ?? 0))")
        }
    }
}

Can we have the same display information when connecting an external display via the HDMI port and via extend mode in Screen Mirroring?
0
0
498
Aug ’23