Hi,
I am trying to extend the PyTorch library by adding an MPS-native Cholesky decomposition. I finally got it working (mostly), but I am struggling to implement the status codes.
What I did:
// init status
id<MTLBuffer> status = [device newBufferWithLength:sizeof(int) options:MTLResourceStorageModeShared];
if (status) {
    int* statusPtr = (int*)[status contents];
    *statusPtr = 42; // Set the initial content to 42
    NSLog(@"Status Value: %d", *statusPtr);
} else {
    NSLog(@"Failed to allocate status buffer");
}
...
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> commandBuffer) {
    // Read the status back once the GPU work has finished
    int* statusPtr = (int*)[status contents];
    int statusVal = *statusPtr;
    NSLog(@"Status Value: %d", statusVal);
    // Update the 'info' tensor here based on statusVal
    // ...
}];
for (const auto i : c10::irange(batchSize)) {
    ...
    [filter encodeToCommandBuffer:commandBuffer
                     sourceMatrix:sourceMatrix
                     resultMatrix:solutionMatrix
                           status:status];
}
(full code here: https://github.com/pytorch/pytorch/blob/ab6a550f35be0fdbb58b06ff8bfda1ab0cc236d0/aten/src/ATen/native/mps/operations/LinearAlgebra.mm)
But when given a non-positive-definite tensor, this code prints the following:
2023-09-02 19:06:24.167 python[11777:2982717] Status Value: 42
2023-09-02 19:06:24.182 python[11777:2982778] Status Value: 0
initial tensor: tensor([[-0.0516, 0.7090, 0.9474],
[ 0.8520, 0.3647, -1.5575],
[ 0.5346, -0.3149, 1.9950]], device='mps:0')
L: tensor([[-0.0516, 0.0000, 0.0000],
[ 0.8520, -0.3612, 0.0000],
[ 0.5346, -0.3149, 1.2689]], device='mps:0')
What am I doing wrong? Why do I get a 0 (success) status even though the matrix is not positive definite?
Thank you in advance!
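For completeness, here is the direction I'm considering next (just a sketch, buffer and variable names are mine): giving each batch element its own status word and decoding it with the MPSMatrixDecompositionStatus constants from the MPS headers, so a later encode can't overwrite an earlier failure.

```objc
// Sketch: one small shared status buffer per batch element, so each
// factorization's result survives until the completion handler runs.
NSMutableArray<id<MTLBuffer>>* statuses = [NSMutableArray arrayWithCapacity:batchSize];
for (const auto i : c10::irange(batchSize)) {
    id<MTLBuffer> batchStatus =
        [device newBufferWithLength:sizeof(MPSMatrixDecompositionStatus)
                            options:MTLResourceStorageModeShared];
    [statuses addObject:batchStatus];
    [filter encodeToCommandBuffer:commandBuffer
                     sourceMatrix:sourceMatrix
                     resultMatrix:solutionMatrix
                           status:batchStatus];
}

[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
    for (NSUInteger i = 0; i < batchSize; i++) {
        MPSMatrixDecompositionStatus s =
            *(MPSMatrixDecompositionStatus*)[statuses[i] contents];
        if (s == MPSMatrixDecompositionStatusNonPositiveDefinite) {
            NSLog(@"batch %lu: matrix is not positive definite", (unsigned long)i);
        }
    }
}];
```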
Hello,
I used outTexture.write(half4(hx, 0, 0, 0), uint2(x, y)) to write pixel values to a texture and then read them back with blitEncoder copyFromTexture into an MTLBuffer. But the integer values read from the MTLBuffer are not as expected: for half values less than 128/256 I get the expected value, but for half values greater than 128/256 I get a small value. For example:
127.0/256; ==> 127
128.0/256; ==> 128
129.0/256; ==> 129
130.0/256; ==> 130
131.0/256; ==> 131
Any thoughts?
Thanks
Caijohn
The macOS screen recording tool doesn't appear to support recording HDR content (e.g. in QuickTime Player). This tool can record from the camera using the various YCbCr 422 and 420 formats needed for HEVC and ProRes HDR10 recording, but it doesn't offer any options for HDR screen recording.
So that leaves in-game screen recording with AVFoundation. Without any YCbCr formats exposed in the Metal API, how do we use CVPixelBuffer with Metal and then send these formats off to the video codecs directly? Can we send Rec. 2020 RGB10A2Unorm data directly? I'd like the fewest conversions possible.
Hello everyone! I have a small question about one little thing when it comes to programming in Metal. There are some models I wish to use, along with animations and skins for them; the file format is glTF. glTF has been used in a number of projects such as Unity, Unreal Engine, Godot, and Blender. I was wondering whether Metal supports this file format or not. Does anyone here know the answer?
I have tested MetalFX spatial upscaling with a Unity URP sample project: https://github.com/mao-test-h/MetalFXSamples.
I used an iPhone 13 with the iOS beta 7. Both performance and quality are worse than native rendering.
'init(make:update:attachments:)' is unavailable in visionOS
in Xcode 15 beta 8, but it's fine in beta 7
Does anyone know where I can find quality assets in USDZ format? For Unity and Unreal Engine, I just use the built-in asset stores. There seem to be a number of third-party 3D model stores like Laughing Squid, but they tend not to have models in USD format.
In particular, I'm looking for some nice-looking explosions for a RealityKit-based visionOS game I'm writing. Some nice boulders would also be useful.
Thanks in advance!
How can entities be centered on a plane AnchorEntity?
Not only is the box offset from the anchor's center, but the offset also varies depending on where the user is in the space when the app starts.
This is my code:
struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            let wall = AnchorEntity(.plane(.vertical, classification: .wall, minimumBounds: [2.0, 1.5]), trackingMode: .continuous)
            let mesh = MeshResource.generateBox(size: 0.3)
            let box = ModelEntity(mesh: mesh, materials: [SimpleMaterial(color: .green, isMetallic: false)])
            box.setParent(wall)
            content.add(wall)
        }
    }
}
With PlaneDetectionProvider being unavailable on the simulator, I currently don't see a different way to set up entities at least somewhat consistently at anchors in full space.
I have generated a box in RealityKit with the splitFaces property set to true to allow a different material on each side of the cube.
Applying different SimpleMaterials (e.g. with different colors) works fine in the Vision Pro simulator. But combining VideoMaterial and SimpleMaterial does not work. By the way, a cube with the video material on all six faces renders successfully, so the problem seems to be mixing the material types.
Here's my relevant code snippet:
let mesh = MeshResource.generateBox(width: 0.3, height: 0.3, depth: 0.3, splitFaces: true)
let mat1 = VideoMaterial(avPlayer: player)
let mat2 = SimpleMaterial(color: .blue, isMetallic: true)
let mat3 = SimpleMaterial(color: .red, isMetallic: true)
let cube = ModelEntity(mesh: mesh, materials: [mat1, mat2, mat3, mat1, mat2, mat3])
Specifically, the video textures are shown, whereas the SimpleMaterial faces are invisible.
Is this a problem with the Vision Pro simulator? Or is it not possible to combine different material types on a box? Any help is welcome!
I downloaded the code example of Capturing depth using the LiDAR camera. https://developer.apple.com/documentation/avfoundation/additional_data_capture/capturing_depth_using_the_lidar_camera
I'm on an iPad Pro 2nd generation, iPadOS 16.6.
When I run the code, the app crashes with this error:
Fatal error: Unable to configure the capture session.
2023-09-08 12:56:44.761898-0400 LiDARDepth[2393:828514]
Is there a correct version of the code?
How do I create & open an immersive space window scene from a UIKit view or view controller? I need to create one in order to use Compositor Services in order to draw a 3D object using Metal, but this particular GUI is drawn & laid out using UIKit, and it isn't possible for me to rewrite it to use SwiftUI.
I already tried [UIApplication.sharedApplication activateSceneSessionForRequest:[UISceneSessionActivationRequest requestWithRole:UISceneSessionRoleImmersiveSpaceApplication] errorHandler:...], but all that happened was it opened a new window for the main application scene (UIWindowSceneSessionRoleApplication), instead of opening an immersive space scene as I expected.
Yes, I did create a scene manifest in my app's Info.plist, with a UIWindowSceneSessionRoleApplication scene, and a CPSceneSessionRoleImmersiveSpaceApplication scene. Surely there has to be a way to do this without resorting to SwiftUI...
Trying to build a volume scene, and get this error.
Build/B3/Libraries/ARM64/Packages/com.unity.xr.visionos/Runtime/VisionOSNativeBridge.mm:457:33 Use of undeclared identifier 'ar_plane_extent_get_plane_anchor_from_plane_extent_transform'
The line in the file is:
simd_float4x4 worldMatrix = ar_plane_extent_get_plane_anchor_from_plane_extent_transform(plane_extent);
I previously got this error in Xcode 15 beta 8 and now in Xcode 15 beta 2.
Unity 2022.3.5f1 LTS
Hi, I'm trying to use metal-cpp, but I get compile errors:
ISO C++ requires the name after '::' to be found in the same scope as the name before '::'
metal-cpp/Foundation/NSSharedPtr.hpp(162):
template <class _Class>
_NS_INLINE NS::SharedPtr<_Class>::~SharedPtr()
{
    if (m_pObject)
    {
        m_pObject->release();
    }
}
Use of old-style cast
metal-cpp/Foundation/NSObject.hpp(149):
template <class _Dst>
_NS_INLINE _Dst NS::Object::bridgingCast(const void* pObj)
{
#ifdef __OBJC__
    return (__bridge _Dst)pObj;
#else
    return (_Dst)pObj;
#endif // __OBJC__
}
The Xcode project was generated with CMake:
target_compile_features(${MODULE_NAME} PRIVATE cxx_std_20)
target_compile_options(${MODULE_NAME}
    PRIVATE
        "-Wgnu-anonymous-struct"
        "-Wold-style-cast"
        "-Wdtor-name"
        "-Wpedantic"
        "-Wno-gnu"
)
Maybe I need to set some CMake flags for the C++ compiler?
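One thing I'm considering (a sketch; the path to metal-cpp is my assumption, adjust it for your layout): marking the metal-cpp headers as SYSTEM includes, since Clang suppresses warnings that originate in system headers.

```cmake
# Treat metal-cpp as a system include directory so warnings like
# -Wold-style-cast no longer fire inside its headers.
# ${CMAKE_SOURCE_DIR}/metal-cpp is an assumed vendored location.
target_include_directories(${MODULE_NAME} SYSTEM PRIVATE
    ${CMAKE_SOURCE_DIR}/metal-cpp
)
```

This keeps the strict warning flags active for the project's own sources while silencing third-party header noise.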
I'm interested in evaluating the physics capabilities of RealityKit and visionOS.
I assumed that I could create entities, add the PhysicsBody component, and "simulate" and tweak settings interactively, but that hasn't been my experience.
Is something like this possible with beta 8?
I'm trying to animate a shape (e.g. a circle) to follow a custom path, and struggling to find the best way of doing this.
I've had a look at the animation options from SwiftUI, UIKit and SpriteKit and all seem very limited in what paths you can provide. Given the complexity of my path, I was hoping there'd be a way of providing a set of coordinates in some input file and have the shape follow that, but maybe that's too ambitious.
I was wondering if this were even possible, and assuming not, if there were other options I could consider.
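For context, the closest thing I've found so far is Core Animation, which can move a layer along an arbitrary CGPath via CAKeyframeAnimation (a sketch assuming a UIKit view named shapeView; the path here is a placeholder for my real one, and the coordinates could just as well be decoded from an input file into CGPoints):

```swift
import UIKit

// Build a custom path (placeholder shape; could be built from
// points loaded from a file instead).
let path = UIBezierPath()
path.move(to: CGPoint(x: 20, y: 20))
path.addCurve(to: CGPoint(x: 300, y: 200),
              controlPoint1: CGPoint(x: 150, y: 0),
              controlPoint2: CGPoint(x: 100, y: 250))

// Animate the layer's position along the path.
let follow = CAKeyframeAnimation(keyPath: "position")
follow.path = path.cgPath
follow.duration = 2.0
follow.calculationMode = .paced   // constant speed along the path
shapeView.layer.add(follow, forKey: "followPath")
```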
With the Xcode 15 RC, the documentation for the Metal pipelines script (man metal-pipelines-script) doesn't mention anything about defining a mesh render pipeline (MTLMeshRenderPipelineDescriptor).
Is there a way to do offline compilation or to harvest binary archives for mesh shaders?
Hello,
I'm trying to optimize code that loads half2 vectors from threadgroup (or constant) memory. For example:
// option A: read once and then unpack
#define load_4half2(x, y, z, w, p, i) do {                         \
    uint4 readU4 = *((threadgroup uint4 *)(p + i));                \
    x = as_type<half2>(readU4.x);                                  \
    y = as_type<half2>(readU4.y);                                  \
    z = as_type<half2>(readU4.z);                                  \
    w = as_type<half2>(readU4.w);                                  \
} while (0)
// option B: read one element at a time
#define load_4half2(x, y, z, w, p, i) do {                         \
    threadgroup half2* readU4 = ((threadgroup half2*)(p + i));     \
    x = readU4[0];                                                 \
    y = readU4[1];                                                 \
    z = readU4[2];                                                 \
    w = readU4[3];                                                 \
} while (0)
I haven't figured out how to get the "disassembled" code, so I'm not sure which is the better solution here. Could anyone kindly shed some light on this?
Thanks a lot!
Since the type identifiers in UTCoreTypes.h have been deprecated, what's the expected way to use the Core Graphics APIs that use those types, particularly in C code that doesn't have access to the UniformTypeIdentifiers framework?
Using CFSTR( "public.jpeg" ) works, but is that the new best practice, or have the core type definitions been moved/renamed?
Hello,
I've been working on an app that involves training a neural network model on the iPhone, using the Metal Performance Shaders Graph (MPSGraph). During training, the loss becomes NaN on iOS 17 (21A329).
I noticed that the official sample code for Training a Neural Network using MPS Graph (link) works perfectly on Xcode 14.3.1 with iOS 16.6.1. However, when I run the same code on Xcode 15.0 beta 8 with iOS 17.0 (21A329), training produces a NaN loss in the function updateProgressCubeAndLoss. The official sample code and my own app exhibit the same issue.
Has anyone else experienced this issue? Is this a known bug, or is there something specific that needs to be adjusted for iOS 17?
Any guidance would be greatly appreciated.
Thank you!
Looking at the documentation for the methods that create MTLRenderPipelineStates, I'm trying to understand the differences between the render pipeline states created with:
MTLRenderPipelineDescriptor (5 methods)
MTLTileRenderPipelineDescriptor (3 methods)
MTLMeshRenderPipelineDescriptor (2 methods)
Not all of the methods that exist for MTLRenderPipelineDescriptor exist for the tile and mesh variants, and I was wondering why. The only way to synchronously make a mesh pipeline state is currently this method:
func makeRenderPipelineState(
    descriptor: MTLMeshRenderPipelineDescriptor,
    options: MTLPipelineOption
) throws -> (MTLRenderPipelineState, MTLRenderPipelineReflection?)
which also returns an optional MTLRenderPipelineReflection.
Is there a clear reason for that which I just fail to understand? Or are these methods just not there at the moment?
For example, the sample code in https://developer.apple.com/wwdc22/10162 does not compile:
// initialize pipeline state object
var meshPipeline: MTLRenderPipelineState!
do {
    meshPipeline = try device.makeRenderPipelineState(descriptor: meshPipelineDescriptor)
} catch {
    print("Error when creating pipeline state: \(error)")
}
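For reference, the variant that does compile against a mesh descriptor is the (descriptor:options:) overload quoted above, which also hands back the optional reflection (a sketch; I'm discarding the reflection here):

```swift
var meshPipeline: MTLRenderPipelineState!
do {
    // The mesh overload returns a tuple; ignore the reflection.
    (meshPipeline, _) = try device.makeRenderPipelineState(
        descriptor: meshPipelineDescriptor,
        options: []
    )
} catch {
    print("Error when creating pipeline state: \(error)")
}
```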