Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

All subtopics

MPSMatrixDecompositionCholesky Status code
Hi, I am trying to extend the PyTorch library by adding an MPS-native Cholesky decomposition. I finally got it working (mostly), but I am struggling to implement the status codes. What I did:

// init status
id<MTLBuffer> status = [device newBufferWithLength:sizeof(int) options:MTLResourceStorageModeShared];
if (status) {
    int* statusPtr = (int*)[status contents];
    *statusPtr = 42; // Set the initial content to 42
    NSLog(@"Status Value: %d", *statusPtr);
} else {
    NSLog(@"Failed to allocate status buffer");
}
...
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> commandBuffer) {
    // Your completion code here
    int* statusPtr = (int*)[status contents];
    int statusVal = *statusPtr;
    NSLog(@"Status Value: %d", statusVal);
    // Update the 'info' tensor here based on statusVal
    // ...
}];

for (const auto i : c10::irange(batchSize)) {
    ...
    [filter encodeToCommandBuffer:commandBuffer
                     sourceMatrix:sourceMatrix
                     resultMatrix:solutionMatrix
                           status:status];
}

(Full code here: https://github.com/pytorch/pytorch/blob/ab6a550f35be0fdbb58b06ff8bfda1ab0cc236d0/aten/src/ATen/native/mps/operations/LinearAlgebra.mm)

But with a non-positive-definite input tensor, this code prints the following:

2023-09-02 19:06:24.167 python[11777:2982717] Status Value: 42
2023-09-02 19:06:24.182 python[11777:2982778] Status Value: 0

initial tensor:
tensor([[-0.0516,  0.7090,  0.9474],
        [ 0.8520,  0.3647, -1.5575],
        [ 0.5346, -0.3149,  1.9950]], device='mps:0')
L:
tensor([[-0.0516,  0.0000,  0.0000],
        [ 0.8520, -0.3612,  0.0000],
        [ 0.5346, -0.3149,  1.2689]], device='mps:0')

What am I doing wrong? Why do I get a 0 (success) status even though the matrix is not positive definite? Thank you in advance!
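For reference, a minimal sketch (in Swift for brevity) of how that status buffer is usually interpreted: MPSMatrixDecompositionCholesky writes an MPSMatrixDecompositionStatus into it, and because the loop above passes the same buffer for every batch element, only the last element's status survives by the time the completion handler reads it. Buffer and variable names here are assumptions, not the poster's code:

import Metal
import MetalPerformanceShaders

// Size the buffer for the enum (NSInteger-wide), not a 4-byte int.
let statusBuffer = device.makeBuffer(
    length: MemoryLayout<MPSMatrixDecompositionStatus>.size,
    options: .storageModeShared)!

commandBuffer.commit()
commandBuffer.waitUntilCompleted() // or read inside addCompletedHandler

let status = statusBuffer.contents().load(as: MPSMatrixDecompositionStatus.self)
switch status {
case .success:             print("factorization succeeded")
case .nonPositiveDefinite: print("matrix is not positive definite")
case .singular:            print("matrix is singular")
case .failure:             print("factorization failed")
@unknown default:          print("unrecognized status \(status.rawValue)")
}

Using one status buffer per batch element (or reading after each encode) would reveal which element failed, rather than only the last one encoded.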
Replies: 0 · Boosts: 0 · Views: 556 · Sep ’23
Texture Write Rounding
Hello, I used outTexture.write(half4(hx, 0, 0, 0), uint2(x, y)) to write pixel values to a texture and then read them back into an MTLBuffer with blitEncoder copyFromTexture, but the integer values read from the MTLBuffer are not as expected. For half values less than 128/256 I get the expected value, but I get a smaller value for half values greater than 128/256. For example:

127.0/256 ==> 127
128.0/256 ==> 128
129.0/256 ==> 129
130.0/256 ==> 130
131.0/256 ==> 131

Any thoughts? Thanks, Caijohn
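If the destination texture is an 8-bit unorm format (an assumption — the post doesn't name the pixel format), one explanation is the scale factor: unorm encoding stores round(v × 255), not v × 256, so inputs above 128/256 land one below their numerator. A quick Swift sketch of the expected bytes:

// Assuming an 8-bit unorm destination (e.g. r8Unorm):
// the hardware stores round(clamp(v, 0, 1) * 255) in each byte.
for numerator in 127...131 {
    let v = Double(numerator) / 256.0
    let stored = UInt8((v * 255.0).rounded())
    print("\(numerator)/256 -> \(stored)") // 127, 128, 128, 129, 130
}

Note how 129/256 through 131/256 all read back one lower than their numerator, matching the reported behavior.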
Replies: 3 · Boosts: 0 · Views: 321 · Sep ’23
Need example of HDR video recording on macOS (and iOS)
The macOS screen recording tool doesn't appear to support recording HDR content (e.g. in QuickTime Player). The tool can record from the camera using the various YCbCr 422 and 420 formats needed for HEVC and ProRes HDR10 recording, but doesn't offer any options for screen recording HDR. So that leaves in-game screen recording with AVFoundation. Without any YCbCr formats exposed in the Metal API, how do we use CVPixelBuffer with Metal, and then send these formats off to the video codecs directly? Can we send Rec. 2020 RGB10A2Unorm data directly? I'd like the fewest conversions possible.
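For the CVPixelBuffer-to-Metal half of the question, the usual zero-copy bridge is CVMetalTextureCache, which exposes a pixel buffer's backing IOSurface as an MTLTexture; whether the codec accepts the resulting format downstream is a separate matter. A sketch, where the device and the 10-bit RGB format are assumptions:

import CoreVideo
import Metal

// Wrap a CVPixelBuffer as an MTLTexture without copying.
var textureCache: CVMetalTextureCache?
CVMetalTextureCacheCreate(kCFAllocatorDefault, nil, device, nil, &textureCache)

func makeTexture(from pixelBuffer: CVPixelBuffer) -> MTLTexture? {
    var cvTexture: CVMetalTexture?
    CVMetalTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, textureCache!, pixelBuffer, nil,
        .rgb10a2Unorm, // assumed to match the pixel buffer's format
        CVPixelBufferGetWidth(pixelBuffer),
        CVPixelBufferGetHeight(pixelBuffer),
        0, &cvTexture)
    return cvTexture.flatMap(CVMetalTextureGetTexture)
}

After rendering into the texture, the same pixel buffer can be handed to an AVAssetWriterInputPixelBufferAdaptor, so the frame never leaves the GPU.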
Replies: 0 · Boosts: 2 · Views: 981 · Sep ’23
Metal API supported files for models?
Hello everyone! I have a small question about one thing when it comes to programming in Metal. There are some models I wish to use, along with animations and skins on them; the file format is glTF. glTF has been used in a number of projects such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
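Metal itself has no model-file loader — it only consumes buffers and textures — so file loading is typically done with Model I/O (or a third-party glTF importer). Model I/O does not read glTF; converting to USD or OBJ first is the common route. A sketch of the Model I/O path, with the asset name and throwing context assumed:

import MetalKit
import ModelIO

// Load a model file (e.g. .usdz or .obj — not glTF) into Metal meshes.
let allocator = MTKMeshBufferAllocator(device: device)
let asset = MDLAsset(
    url: Bundle.main.url(forResource: "model", withExtension: "usdz")!, // assumed file
    vertexDescriptor: nil,
    bufferAllocator: allocator)
let meshes = try MTKMesh.newMeshes(asset: asset, device: device)
// meshes.metalKitMeshes are ready to draw with a render command encoder.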
Replies: 3 · Boosts: 1 · Views: 1.3k · Sep ’23
Sources for Explosions and Other Assets in USDZ format?
Does anyone know where I can find quality assets in USDZ format? For Unity and Unreal Engine, I just use the built-in asset stores. There seem to be a number of third-party 3D model stores like Laughing Squid, but they tend not to have models in USD format. In particular, I'm looking for some nice-looking explosions for a RealityKit-based visionOS game I'm writing. Some nice boulders would also be useful. Thanks in advance!
Replies: 0 · Boosts: 0 · Views: 438 · Sep ’23
Can't center entity on AnchorEntity(.plane)
How can entities be centered on a plane AnchorEntity? On top of the box being offset from the anchor's center in the first place, the offset also varies depending on the user's location in the space when the app is started. This is my code:

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let wall = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 1.5]), trackingMode: .continuous)
            let mesh = MeshResource.generateBox(size: 0.3)
            let box = ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .green, isMetallic: false)])
            box.setParent(wall)
            content.add(wall)
        }
    }
}

With PlaneDetectionProvider being unavailable in the simulator, I currently don't see another way to set up entities at least somewhat consistently at anchors in full space.
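On device (not in the simulator, as noted), one alternative sketch is to run plane detection directly and position the entity from the anchor's transform, rather than relying on where the AnchorEntity lands. Everything besides the ARKit types is an assumption, and authorization handling is omitted:

import ARKit
import RealityKit

// Runs in an async context; device only.
let session = ARKitSession()
let planes = PlaneDetectionProvider(alignments: [.vertical])
try await session.run([planes])

for await update in planes.anchorUpdates where update.anchor.classification == .wall {
    // originFromAnchorTransform maps the plane anchor's origin into world space.
    box.setTransformMatrix(update.anchor.originFromAnchorTransform, relativeTo: nil)
}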
Replies: 1 · Boosts: 0 · Views: 636 · Sep ’23
RealityKit box cannot be equipped with different material structures on visionOS
I have generated a box in RealityKit with the splitFaces property set to true to allow a different material on each cube face. Applying different SimpleMaterials (e.g. with different colors) works fine in the Vision Pro simulator, but combining VideoMaterial and SimpleMaterial does not work. BTW: a 6x video cube renders successfully, so the problem seems to be mixing material types. Here's my relevant code snippet:

let mesh = MeshResource.generateBox(width: 0.3, height: 0.3, depth: 0.3, splitFaces: true)
let mat1 = VideoMaterial(avPlayer: player)
let mat2 = SimpleMaterial(color: .blue, isMetallic: true)
let mat3 = SimpleMaterial(color: .red, isMetallic: true)
let cube = ModelEntity(mesh: mesh, materials: [mat1, mat2, mat3, mat1, mat2, mat3])

In detail, the video textures are shown whereas the simple surfaces are invisible. Is this a problem with the Vision Pro simulator? Or is it not possible to combine different material types on a box? Any help is welcome!
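One untested workaround sketch (an assumption, not a confirmed fix): assemble the cube from six one-sided planes so each face is its own ModelEntity, and the two material types never share a mesh. This reuses mat1–mat3 from the snippet above:

import RealityKit

let root = Entity()
let half: Float = 0.15
let materials: [RealityKit.Material] = [mat1, mat2, mat3, mat1, mat2, mat3]
let rotations: [simd_quatf] = [
    simd_quatf(angle: 0,      axis: [0, 1, 0]), // front  (+Z)
    simd_quatf(angle: .pi,    axis: [0, 1, 0]), // back   (-Z)
    simd_quatf(angle:  .pi/2, axis: [0, 1, 0]), // right  (+X)
    simd_quatf(angle: -.pi/2, axis: [0, 1, 0]), // left   (-X)
    simd_quatf(angle: -.pi/2, axis: [1, 0, 0]), // top    (+Y)
    simd_quatf(angle:  .pi/2, axis: [1, 0, 0]), // bottom (-Y)
]
for (rotation, material) in zip(rotations, materials) {
    // generatePlane(width:height:) faces +Z; rotate it outward and
    // push it out along its rotated normal.
    let face = ModelEntity(mesh: .generatePlane(width: 0.3, height: 0.3),
                           materials: [material])
    face.orientation = rotation
    face.position = rotation.act([0, 0, half])
    root.addChild(face)
}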
Replies: 1 · Boosts: 0 · Views: 465 · Sep ’23
Error in Capturing depth using the LiDAR camera
I downloaded the code example Capturing depth using the LiDAR camera: https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera

I'm on an iPad Pro 2nd generation, iPadOS 16.6. When I run the code, the app crashes with this error:

Fatal error: Unable to configure the capture session. 2023-09-08 12:56:44.761898-0400 LiDARDepth[2393:828514]

Is there a correct version of the code?
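A first diagnostic sketch (not a fix — the sample requires a LiDAR-equipped device, and which iPad Pro models qualify depends on the exact generation): check whether the LiDAR depth camera exists before the session is configured:

import AVFoundation

if let device = AVCaptureDevice.default(.builtInLiDARDepthCamera,
                                        for: .video, position: .back) {
    print("LiDAR depth camera found: \(device.localizedName)")
    print("Depth formats: \(device.activeFormat.supportedDepthDataFormats)")
} else {
    print("No LiDAR depth camera — the sample's session setup will fail here")
}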
Replies: 0 · Boosts: 0 · Views: 346 · Sep ’23
Creating an immersive space using UIKit?
How do I create & open an immersive space window scene from a UIKit view or view controller? I need to create one in order to use Compositor Services in order to draw a 3D object using Metal, but this particular GUI is drawn & laid out using UIKit, and it isn't possible for me to rewrite it to use SwiftUI. I already tried [UIApplication.sharedApplication activateSceneSessionForRequest:[UISceneSessionActivationRequest requestWithRole:UISceneSessionRoleImmersiveSpaceApplication] errorHandler:...], but all that happened was it opened a new window for the main application scene (UIWindowSceneSessionRoleApplication), instead of opening an immersive space scene as I expected. Yes, I did create a scene manifest in my app's Info.plist, with a UIWindowSceneSessionRoleApplication scene, and a CPSceneSessionRoleImmersiveSpaceApplication scene. Surely there has to be a way to do this without resorting to SwiftUI...
Replies: 2 · Boosts: 0 · Views: 670 · Sep ’23
VisionOS Unity app gives Build Errors in Xcode
Trying to build a volume scene, I get this error:

Build/B3/Libraries/ARM64/Packages/com.unity.xr.visionos/Runtime/VisionOSNativeBridge.mm:457:33: Use of undeclared identifier 'ar_plane_extent_get_plane_anchor_from_plane_extent_transform'

The line in the file is:

simd_float4x4 worldMatrix = ar_plane_extent_get_plane_anchor_from_plane_extent_transform(plane_extent);

I previously got this error in Xcode 15 beta 8 and now in Xcode 15 beta 2. Unity 2022.3.5f1 LTS.
Replies: 0 · Boosts: 1 · Views: 518 · Sep ’23
Metal cpp compile errors
Hi, I'm trying to use metal-cpp, but I get compile errors:

ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):

template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}

Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):

template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}

The Xcode project was generated using CMake:

target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME} PRIVATE
    "-Wgnu-anonymous-struct"
    "-Wold-style-cast"
    "-Wdtor-name"
    "-Wpedantic"
    "-Wno-gnu"
)

Maybe I need to set some CMake flags for the C++ compiler?
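Those diagnostics come from the -W options in the snippet being applied to the metal-cpp headers themselves (fatal if -Werror is also set). A common fix, sketched under the assumption that metal-cpp sits in the source tree, is to add it as a SYSTEM include directory, which exempts its headers from the target's warning flags:

# SYSTEM includes are exempt from the -W options set on the target.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/metal-cpp   # assumed checkout location
)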
Replies: 0 · Boosts: 0 · Views: 729 · Sep ’23
Different methods of animating shape along custom path?
I'm trying to animate a shape (e.g. a circle) to follow a custom path, and struggling to find the best way of doing this. I've had a look at the animation options in SwiftUI, UIKit, and SpriteKit, and all seem very limited in what paths you can provide. Given the complexity of my path, I was hoping there'd be a way of providing a set of coordinates in some input file and having the shape follow that, but maybe that's too ambitious. I was wondering if this is even possible and, if not, whether there are other options I could consider.
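One option that does accept an arbitrary path is Core Animation's CAKeyframeAnimation, whose path property takes any CGPath — including one built from coordinates read out of a file. A sketch, where the point array stands in for the loaded coordinates and `view` is an assumed host view:

import UIKit

// Move a circular layer along an arbitrary path at constant speed.
let circle = CALayer()
circle.bounds = CGRect(x: 0, y: 0, width: 20, height: 20)
circle.cornerRadius = 10
circle.backgroundColor = UIColor.systemRed.cgColor
view.layer.addSublayer(circle)

// Hypothetical sample points; in practice, load these from the input file.
let points = [CGPoint(x: 40, y: 300), CGPoint(x: 180, y: 120),
              CGPoint(x: 320, y: 420), CGPoint(x: 250, y: 600)]
let path = UIBezierPath()
path.move(to: points[0])
points.dropFirst().forEach { path.addLine(to: $0) }

let animation = CAKeyframeAnimation(keyPath: "position")
animation.path = path.cgPath
animation.duration = 4
animation.calculationMode = .paced // uniform speed along the whole path
circle.add(animation, forKey: "followPath")

Swapping addLine(to:) for addCurve(to:controlPoint1:controlPoint2:) smooths the motion without changing anything else.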
Replies: 1 · Boosts: 0 · Views: 886 · Sep ’23
How to load half2 vectors from thread group memory faster?
Hello, I'm trying to optimize code that loads half2 vectors from threadgroup (or constant) memory, for example:

// option A: read once(?) and then unpack
#define load_4half2(x, y, z, w, p, i) do { \
    uint4 readU4 = *((threadgroup uint4*)(p + i)); \
    x = as_type<half2>(readU4.x); \
    y = as_type<half2>(readU4.y); \
    z = as_type<half2>(readU4.z); \
    w = as_type<half2>(readU4.w); \
} while(0)

// option B: read one by one
#define load_4half2(x, y, z, w, p, i) do { \
    threadgroup half2* readU4 = ((threadgroup half2*)(p + i)); \
    x = readU4[0]; \
    y = readU4[1]; \
    z = readU4[2]; \
    w = readU4[3]; \
} while(0)

I haven't figured out how to get the "disassembled" code, so I'm not sure which is the better solution for this problem. Could anyone kindly shed some light on this? Thanks a lot!
Replies: 0 · Boosts: 0 · Views: 214 · Sep ’23
Core UTType Identifiers deprecated
Since the type identifiers in UTCoreTypes.h have been deprecated, what's the expected way to use the Core Graphics APIs that take those types, particularly from C code that doesn't have access to the UniformTypeIdentifiers framework? Using CFSTR("public.jpeg") works, but is that the new best practice, or have the core type definitions been moved/renamed?
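Where the UniformTypeIdentifiers framework is available (Swift or Objective-C), the non-deprecated identifiers come from UTType. A sketch of feeding one to a Core Graphics call, with the destination URL assumed and the add-image/finalize steps omitted:

import ImageIO
import UniformTypeIdentifiers

// UTType.jpeg.identifier replaces the deprecated kUTTypeJPEG constant.
let url = URL(fileURLWithPath: "/tmp/out.jpg") // hypothetical destination
let destination = CGImageDestinationCreateWithURL(
    url as CFURL,
    UTType.jpeg.identifier as CFString,
    1, nil)

From plain C the framework isn't importable, so a raw string such as "public.jpeg" remains the practical fallback the post describes.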
Replies: 0 · Boosts: 0 · Views: 368 · Sep ’23
MPS Graph Neural Network Training Produces NaN Loss on Xcode 15.0 beta 8 + iOS 17.0
Hello, I've been working on an app that involves training a neural network model on the iPhone, using the Metal Performance Shaders Graph (MPSGraph) for this purpose. During training, the loss becomes NaN on iOS 17 (21A329). I noticed that the official sample code for Training a Neural Network using MPS Graph (link) works perfectly fine on Xcode 14.3.1 with iOS 16.6.1. However, when I run the same code on Xcode 15.0 beta 8 with iOS 17.0 (21A329), the training process produces a NaN loss in the function updateProgressCubeAndLoss. The official sample code and my own app exhibit the same issue. Has anyone else experienced this? Is it a known bug, or is there something specific that needs to be adjusted for iOS 17? Any guidance would be greatly appreciated. Thank you!
Replies: 1 · Boosts: 0 · Views: 713 · Sep ’23
MTLMeshRenderPipelineState
Looking at the documentation for the methods that create an MTLRenderPipelineState, I'm trying to understand the differences between the pipeline states created from:

MTLRenderPipelineDescriptor (5 methods)
MTLTileRenderPipelineDescriptor (3 methods)
MTLMeshRenderPipelineDescriptor (2 methods)

Not all methods that exist for the MTLRenderPipelineDescriptor case exist for the Tile and Mesh variants, and I was wondering why. The only way to synchronously make a mesh pipeline state is currently this method, which also creates an MTLRenderPipelineReflection:

func makeRenderPipelineState(
    descriptor: MTLMeshRenderPipelineDescriptor,
    options: MTLPipelineOption
) throws -> (MTLRenderPipelineState, MTLRenderPipelineReflection?)

Is there a clear reason for that which I just fail to understand? Or are these methods simply not there at the moment? The example code in https://developer.apple.com/wwdc22/10162 doesn't compile, for example:

// initialize pipeline state object
var meshPipeline: MTLRenderPipelineState!
do {
    meshPipeline = try device.makeRenderPipelineState(descriptor: meshPipelineDescriptor)
} catch {
    print("Error when creating pipeline state: \(error)")
}
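For reference, a sketch of the tuple-returning method the post quotes, discarding the reflection half; the library, function names, and pixel format are assumptions:

import Metal

// Synchronous mesh pipeline creation; the reflection is simply ignored.
let descriptor = MTLMeshRenderPipelineDescriptor()
descriptor.objectFunction = library.makeFunction(name: "objectMain")      // assumed
descriptor.meshFunction = library.makeFunction(name: "meshMain")          // assumed
descriptor.fragmentFunction = library.makeFunction(name: "fragmentMain")  // assumed
descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

let (meshPipeline, _) = try device.makeRenderPipelineState(
    descriptor: descriptor, options: [])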
Replies: 0 · Boosts: 0 · Views: 244 · Sep ’23