I'm trying to make the model's rotation smoother, so that its movement eases steadily from fast to slow.
I read the documentation and found this relates to calculating angular speed and angular acceleration.
Has anyone done this before who can share how to do it in Metal?
Something like the example here: https://threejs.org/examples/?q=rotation#misc_controls_arcball
Thanks all.
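For what it's worth, one common approach (not Metal-specific, and not from any Apple sample) is to keep an angular velocity and damp it exponentially each frame, so the spin eases out on its own. A minimal sketch in plain Swift; the name `dampingPerSecond` and the 0.1 default are illustrative:

```swift
import Foundation

/// Exponentially damps an angular velocity (radians per second) over `dt` seconds.
/// `dampingPerSecond` is the fraction of the velocity that survives one full second,
/// e.g. 0.1 means the spin loses 90% of its speed every second.
func dampedAngularVelocity(_ velocity: Double, dt: Double, dampingPerSecond: Double) -> Double {
    velocity * pow(dampingPerSecond, dt)
}

/// Advances the rotation by one frame, returning the new angle and velocity.
func stepRotation(angle: Double, velocity: Double, dt: Double,
                  dampingPerSecond: Double = 0.1) -> (angle: Double, velocity: Double) {
    let v = dampedAngularVelocity(velocity, dt: dt, dampingPerSecond: dampingPerSecond)
    return (angle + v * dt, v)
}
```

On a flick you'd seed `velocity` from the drag speed, then call `stepRotation` once per frame and feed the angle into your rotation matrix (e.g. about the arcball axis); the damping brings it to rest without any explicit end time.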
I want to apply a given CIFilter, but instead of the effect showing up instantly, I want to animate it: e.g., a color image desaturating to greyscale over 2 seconds, or a blocky image depixelating to a full-resolution image with an easeInOut animation curve over 0.8 seconds.
If you're using one of the built-in SwiftUI view modifiers like .blur(), you're golden: just append .animation() and you're done.
But given that you have to jump through hoops whether you go the UIImage/CGImage/CIImage route or the MTKView, CIRenderDestination, ContentView example from the WWDC 2022 sample code, I'm a bit confused.
Ideally I guess I'd just like to write View Modifiers for each effect I want to do, so that they're as usable as the SwiftUI built-in ones, but I don't know if that's possible. Does anyone have any ideas?
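One pattern that may get you close to the built-in feel: make the filter parameter the modifier's `animatableData`, so SwiftUI interpolates it frame by frame and the body re-renders the CIImage for each interpolated value. A sketch under that assumption; the type and helper names are mine, not from any sample:

```swift
import SwiftUI
import CoreImage.CIFilterBuiltins

/// Drives a CIColorControls saturation filter from SwiftUI's animation system.
/// SwiftUI interpolates `animatableData` every frame of an animation, and the
/// body re-renders the image for each interpolated value.
struct AnimatedSaturation: AnimatableModifier {
    var saturation: Double
    let input: CIImage
    private static let context = CIContext()   // reuse; CIContexts are expensive

    var animatableData: Double {
        get { saturation }
        set { saturation = newValue }
    }

    private func rendered() -> CGImage? {
        let filter = CIFilter.colorControls()
        filter.inputImage = input
        filter.saturation = Float(saturation)
        guard let output = filter.outputImage else { return nil }
        return Self.context.createCGImage(output, from: output.extent)
    }

    @ViewBuilder
    func body(content: Content) -> some View {
        if let cg = rendered() {
            Image(decorative: cg, scale: 1)
        } else {
            content
        }
    }
}
```

You'd apply it with `.modifier(AnimatedSaturation(saturation: amount, input: ciImage))` and drive it via `withAnimation(.easeInOut(duration: 2)) { amount = 0 }`. Two caveats: `AnimatableModifier` is deprecated in newer SDKs in favor of `ViewModifier & Animatable` (same shape), and this runs Core Image once per animation frame, which is fine for modest images but argues for the MTKView/CIRenderDestination route for heavy filters.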
In my game project there is a functions.data file at /AppData/Library/Caches/[bundleID]/com.apple.metal/functions.data.
When we reboot the device and launch the game, this file is reset to about 40 KB; normally it is about 30 MB. This is done by Metal. Is there any way to avoid it?
It is possible with SceneKit, but I haven’t found any way for RealityKit.
Good day
I am developing the XRKit framework, which contains the pipeline logic for rendering with Metal. Its manifest has two targets: the framework itself in Swift, and XRKitDefenitions in C++ and MSL (since Swift Package Manager doesn't allow mixing languages in one target). Both targets declare a Resources folder in the manifest.
When I try to access the test files hello01.txt (in XRKit's Resources) and hello2.txt (in XRKitDefenitions' Resources) via Bundle.module, I only see hello01.txt; hello2.txt can't be read because it belongs to a different target.
How do I properly organize my code with SPM to access the Resources of XRKitDefenitions target?
PS: When I organize XRKitDefenitions as a remote package on GitHub and declare it as a dependency, the situation doesn't change. I understand now that Bundle.module only refers to its own target's Resources. Is there a way to access resources provided by other targets or dependencies in the same package?
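From what I can tell, SPM also copies each target's resource bundle into the final product under the name "<PackageName>_<TargetName>.bundle", so one workaround is to look that bundle up by name at runtime. A sketch; note the naming convention is an SPM implementation detail, so verify it against your actual build output:

```swift
import Foundation

/// Locates the resource bundle that SPM generates for another target in the
/// same package. SPM names these bundles "<PackageName>_<TargetName>.bundle"
/// and places them in the product's resource directory. Pass the directories
/// to search via `candidates` (e.g. Bundle.main.resourceURL).
func resourceBundleURL(package: String, target: String,
                       searching candidates: [URL]) -> URL? {
    let bundleName = "\(package)_\(target).bundle"
    for dir in candidates {
        let url = dir.appendingPathComponent(bundleName)
        if FileManager.default.fileExists(atPath: url.path) {
            return url
        }
    }
    return nil
}
```

You'd call it with something like `resourceBundleURL(package: "XRKit", target: "XRKitDefenitions", searching: [Bundle.main.resourceURL, Bundle.module.bundleURL.deletingLastPathComponent()].compactMap { $0 })` and wrap the result in `Bundle(url:)`. For Objective-C sources SPM also generates a `SWIFTPM_MODULE_BUNDLE` accessor macro, though I'm not aware of an equivalent for pure C++.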
I want to crop a USDZ model at runtime. I use Model I/O for this.
Before: https://i.stack.imgur.com/yDXXF.jpg
After: https://i.stack.imgur.com/m9ryg.jpg
First of all, get the file from the bundle:
if let file = Bundle.main.path(forResource: fileName, ofType: "usdz") { // fileName: your model's name
    let url = URL(fileURLWithPath: file)
} else {
    print("Object not found in Bundle")
}
And then I need to access the asset:
let asset = MDLAsset(url: url)
What should I do after this step? How am I supposed to use the SCNGeometrySource and SCNGeometryElement or MDLVoxelArray classes?
How can I crop a 3D model as seen in the photos? Should I use MetalKit, or can I handle it with SceneKit and Model I/O? I couldn't find any code examples on this topic. Can you share a code snippet?
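Not an answer from the source, but a possible starting point: load the asset, enumerate its meshes, and (one candidate route for a boolean-style crop) voxelize with MDLVoxelArray, discard voxels outside the crop box, and rebuild a mesh. A hedged sketch continuing from the `url` above; `divisions` and the crop step are placeholders, and the voxel route trades fidelity for simplicity:

```swift
import ModelIO

// Load the asset and enumerate its meshes; any geometry-level edit starts here.
let asset = MDLAsset(url: url)
let meshes = asset.childObjects(of: MDLMesh.self).compactMap { $0 as? MDLMesh }

// One possible route for cropping: voxelize the asset, drop the voxels that
// fall outside the crop region, then rebuild a mesh from the remaining volume.
// `divisions` controls the voxel resolution (finer is slower).
let voxels = MDLVoxelArray(asset: asset, divisions: 64, patchRadius: 0)
// ... remove unwanted voxels here based on their spatial index ...
if let cropped = voxels.mesh(using: nil) {
    // If you display with SceneKit, wrap it: SCNNode(mdlObject: cropped)
    _ = cropped
}
```

For a crop with clean cut faces (as in the screenshots) you'd more likely clip triangles against the crop planes yourself, reading positions out of the MDLMesh vertex buffers; the voxel route is coarser but far less code.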
Hi there,
I am trying to create a CoreML Custom layer that runs on the GPU, using Objective-C for CoreML setup and Metal for GPU programming.
I have created the CoreML model with the custom layer and can successfully execute it on the GPU. I now wish to create an MTLBuffer from an input MTLTexture in my layer's GPU execution method, but I can't seem to do so, or otherwise get at the memory address backing the MTLTexture.
When defining a custom layer in CoreML to run on the GPU, the following method needs to be defined, with this prototype:
- (BOOL)encodeToCommandBuffer:(id<MTLCommandBuffer>)commandBuffer
                       inputs:(NSArray<id<MTLTexture>> *)inputs
                      outputs:(NSArray<id<MTLTexture>> *)outputs
                        error:(NSError *__autoreleasing _Nullable *)error {
    // GPU setup, moving data, encoding, execution and so on here
}
Here, the inputs are passed as an NSArray of MTLTextures, and we pass these textures on to the Metal shader for computation. My problem is that I want to pass the Metal shader an MTLBuffer that points to the input data, say inputs[0], but I am having trouble copying the input MTLTexture into an MTLBuffer.
I have tried using an MTLBlitCommandEncoder to copy the data from the MTLTexture to an MTLBuffer like so:
id<MTLBuffer> test_buffer = [command_PSO.device newBufferWithLength:8 options:MTLResourceStorageModeShared];
id<MTLBlitCommandEncoder> blitCommandEncoder = [commandBuffer blitCommandEncoder];
[blitCommandEncoder copyFromTexture:inputs[0]
                        sourceSlice:0
                        sourceLevel:0
                       sourceOrigin:MTLOriginMake(0, 0, 0)
                         sourceSize:MTLSizeMake(1, 1, 1)
                           toBuffer:test_buffer
                  destinationOffset:0
             destinationBytesPerRow:8
           destinationBytesPerImage:8];
[blitCommandEncoder endEncoding];
The above example should copy a single pixel from the MTLTexture inputs[0] to the MTLBuffer test_buffer, but it doesn't.
MTLTexture's getBytes also doesn't work, because the inputs have MTLResourceStorageModePrivate set.
When I inspect the input MTLTexture, I note that its buffer attribute is <null>, and I wonder whether that could be the issue: the texture wasn't created from a buffer, so perhaps it doesn't expose its memory easily. But surely we should be able to get at the memory address somewhere?
For further reference, here is the input MTLTexture definition;
<CaptureMTLTexture: 0x282469500> -> <AGXA14FamilyTexture: 0x133d9bb00>
label = <none>
textureType = MTLTextureType2DArray
pixelFormat = MTLPixelFormatRGBA16Float
width = 8
height = 1
depth = 1
arrayLength = 1
mipmapLevelCount = 1
sampleCount = 1
cpuCacheMode = MTLCPUCacheModeDefaultCache
storageMode = MTLStorageModePrivate
hazardTrackingMode = MTLHazardTrackingModeTracked
resourceOptions = MTLResourceCPUCacheModeDefaultCache MTLResourceStorageModePrivate MTLResourceHazardTrackingModeTracked
usage = MTLTextureUsageShaderRead MTLTextureUsageShaderWrite
shareable = 0
framebufferOnly = 0
purgeableState = MTLPurgeableStateNonVolatile
swizzle = [MTLTextureSwizzleRed, MTLTextureSwizzleGreen, MTLTextureSwizzleBlue, MTLTextureSwizzleAlpha]
isCompressed = 0
parentTexture = <null>
parentRelativeLevel = 0
parentRelativeSlice = 0
buffer = <null>
bufferOffset = 0
bufferBytesPerRow = 0
iosurface = 0x0
iosurfacePlane = 0
allowGPUOptimizedContents = YES
label = <none>
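Two hedged guesses for anyone hitting this. First, for a full row of this 8×1 RGBA16Float texture, the copy needs sourceSize 8×1×1 and destinationBytesPerRow = width × 8 (RGBA16Float is 4 channels × 2 bytes = 8 bytes per pixel); a 1×1×1 copy into an 8-byte buffer is legal but only moves one pixel. Second, and more likely the real problem: a blit only runs once the command buffer is committed, so reading test_buffer.contents() before the buffer completes (or before Core ML commits it) shows stale zeros. A Swift sketch of both points; the standalone function is hypothetical, since inside encodeToCommandBuffer you'd encode into Core ML's command buffer and read the bytes only from a completion handler:

```swift
import Metal

/// Bytes per row for a tightly packed row of RGBA16Float pixels
/// (4 channels × 2 bytes each = 8 bytes per pixel).
func rgba16FloatBytesPerRow(width: Int) -> Int {
    width * 8
}

/// Copies one full row of `texture` into a shared buffer and waits for the GPU,
/// so the bytes are valid to read on return.
func copyRow(of texture: MTLTexture, queue: MTLCommandQueue) -> MTLBuffer? {
    let bytesPerRow = rgba16FloatBytesPerRow(width: texture.width)
    guard let buffer = queue.device.makeBuffer(length: bytesPerRow,
                                               options: .storageModeShared),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return nil }
    blit.copy(from: texture,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: texture.width, height: 1, depth: 1),
              to: buffer,
              destinationOffset: 0,
              destinationBytesPerRow: bytesPerRow,
              destinationBytesPerImage: bytesPerRow)
    blit.endEncoding()
    commandBuffer.commit()
    commandBuffer.waitUntilCompleted()   // don't read buffer.contents() before this
    return buffer
}
```

The `buffer = <null>` attribute is expected, by the way: it's only non-null for textures created via makeTexture on a buffer, and doesn't prevent blitting.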
I want to remove unnecessary materials and textures to reduce the size of a USDZ model I have. How can I manipulate this model with Swift?
Alternatively, I'd welcome any other advice on reducing the size of a USDZ model.
Hello, developers,
I'm implementing slice rendering of a 3D volume.
And I have a simple question...
I use the same simple vertex buffer type in the Swift code and the Metal code. I first defined uv as float2, but it didn't work: I get weird texture coordinates when I use float2...
public struct VertexIn: sizeable {
    var position = float3()
    var normal = float3()
    var uv = float3()
}

struct VertexIn {
    float3 position [[ attribute(0) ]];
    float3 normal [[ attribute(1) ]];
    float3 uv [[ attribute(2) ]];
};
With float2 the texture coordinates come out wrong; with float3 they are correct. The only difference is the type of uv.
I have the same issue when passing uniforms to the shader: when I pass a uniform containing float or short values it doesn't work, so I changed those types to float3 as well. So my question is: are Metal data types really that different from Swift types? Which types match up and are supported by Metal?
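My reading of this (a guess, but a very common pitfall): the types aren't the problem, the memory layout is. In Swift, a 3-component SIMD vector is padded to 16 bytes while a float2 is 8, so switching a field from float3 to float2 shifts every offset after it, and any hand-computed offsets in the MTLVertexDescriptor (or a mismatched MSL struct) then read the wrong bytes. Deriving the offsets from the actual Swift layout avoids this. A quick check in plain Swift (simd's float3/float2 are typealiases for SIMD3<Float>/SIMD2<Float>):

```swift
// Swift pads 3-component vectors to 16 bytes; 2-component vectors are 8.
let posSize = MemoryLayout<SIMD3<Float>>.stride   // 16
let uv2Size = MemoryLayout<SIMD2<Float>>.stride   // 8

struct VertexF2 {            // position + normal + float2 uv
    var position = SIMD3<Float>()
    var normal   = SIMD3<Float>()
    var uv       = SIMD2<Float>()
}

// The uv attribute's offset in the vertex descriptor must come from the
// compiler, not from summing field sizes by hand (24 would be wrong here).
let uvOffset     = MemoryLayout<VertexF2>.offset(of: \.uv)!   // 32, due to padding
let vertexStride = MemoryLayout<VertexF2>.stride              // 48
```

So set attributes[2].offset to `uvOffset` and layouts[0].stride to `vertexStride`, and a float2 uv should sample correctly. The same logic applies to uniforms: a Swift struct with a stray float or short won't match the MSL struct unless both sides agree on padding and alignment, which is why padding everything up to float3 "fixed" it.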
I was wondering if anyone had any insight into why MTKTextureLoader might very rarely return a texture which is just fully opaque magenta (each pixel is #ff00ff).
What I know:
I'm using MTKTextureLoader.newTexture(URL:options:) to synchronously load the texture within a standard, synchronous dispatch queue (though the single texture loader itself is created on the main thread).
No error is thrown in the above call and nothing is printed to the console, and a texture is returned.
The URL of the texture resides on the local filesystem and points to a fairly unremarkable 512x512 JPEG.
The resulting texture returned by the loader is the correct resolution (but every pixel is magenta).
The majority of launches of the app load all the textures without any issues (I think at least 90%), but if a texture does fail to load, many others fail to load as well (especially textures which are loaded immediately after a failed one). The exact same files which load incorrectly in one run of the app will load correctly in another run.
For completeness, the texture loader options:
textureLoaderOptions = [
    .allocateMipmaps: false,
    .generateMipmaps: false,
    .textureUsage: NSNumber(value: MTLTextureUsage.shaderRead.rawValue),
    .textureStorageMode: NSNumber(value: MTLStorageMode.`private`.rawValue)
]
Is it possible to render an MTKView in different color spaces? For example, if I want to render it in CMYK, is there a way to adjust the colors on every frame to that colorspace before presenting it on the screen, or is that something that needs to be handled in a shader?
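As far as I know, a CAMetalLayer can only present RGB-based pixel formats (its colorspace property accepts an RGB CGColorSpace, not CMYK), so a CMYK working space has to be converted to RGB before or during presentation, typically in the fragment shader or a final full-screen pass. The naive, profile-free conversion is simple enough to sketch; treat it as an approximation, since real CMYK is ICC-profile-dependent:

```swift
/// Naive CMYK → RGB conversion (no ICC profile; illustrative only).
/// All components are in 0...1.
func cmykToRGB(c: Float, m: Float, y: Float, k: Float) -> (r: Float, g: Float, b: Float) {
    let r = (1 - c) * (1 - k)
    let g = (1 - m) * (1 - k)
    let b = (1 - y) * (1 - k)
    return (r, g, b)
}
```

The same few multiplies translate directly to MSL if you'd rather do the conversion per fragment in the shader that draws the final quad.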
I'm trying to render text with Metal for my (2D) game.
I'm using the system fonts, e.g. the SF Pro family for English texts. I render the glyphs onto an atlas texture, and then sample from this texture.
My questions:
I assume that, for copyright reasons, I'm not allowed to include a pre-rendered font atlas in my app. Is my assumption correct?
I can, however, have the app generate the atlas when it's first opened, and then use it within the app, right?
If 2. is true, then can the app save the atlas somewhere in the app's private storage, so that it would not need to re-generate the atlas the next time?
Thanks!
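On question 3: yes, caching the generated atlas is straightforward, and the Caches directory is the conventional place since the file can always be regenerated. A sketch of the load-or-generate logic; the `generate` closure stands in for your Core Text rendering, which is abstracted away here:

```swift
import Foundation

/// Returns atlas image data, generating and caching it on first use.
/// `generate` is your Core Text / CGContext glyph rendering.
func cachedAtlasData(named name: String,
                     cachesDirectory: URL,
                     generate: () -> Data) throws -> Data {
    let url = cachesDirectory.appendingPathComponent("\(name).atlas")
    if let cached = try? Data(contentsOf: url) {
        return cached                      // reuse the atlas from a previous run
    }
    let data = generate()                  // first launch: render the glyphs
    try data.write(to: url, options: .atomic)
    return data
}
```

In the app you'd pass `FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]`. Note the system may purge Caches under disk pressure, so the code must always be able to regenerate; also consider keying the file name on the OS version, since system fonts can change between releases.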
Hi
What's the standard/recommended way to set the MTKView's drawableSize to a size lower than the view/window and, at the same time, be able to update the drawableSize whenever the app's window is resized?
Thanks
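The usual pattern, as far as I can tell, is to turn off autoResizeDrawable and recompute drawableSize from the view's bounds in the layout callback, applying whatever scale you want. A sketch for a macOS-hosted MTKView; the 0.5 factor is just an example:

```swift
import MetalKit

final class HalfResMTKView: MTKView {
    var renderScale: CGFloat = 0.5   // illustrative: render at half resolution

    override func layout() {         // called whenever the window resizes the view
        super.layout()
        autoResizeDrawable = false   // we manage drawableSize ourselves
        let scale = (window?.backingScaleFactor ?? 1) * renderScale
        drawableSize = CGSize(width: bounds.width * scale,
                              height: bounds.height * scale)
    }
}
```

On iOS the equivalent hook would be `layoutSubviews()` with `contentScaleFactor` in place of `backingScaleFactor`.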
Hi,
macOS version: 12.2.1
Xcode version: 13.2.1
I'm using MTKView with drawableSize = (1024, 768) and autoResizeDrawable = NO; however, drawableSizeWillChange() is called once at app startup, with a different drawableSize, and then never again.
Maybe it has to do with an incorrect drawable size?
Thanks
I have started learning Metal, and the source text I was reading mentioned that interaction with images rendered to the screen is handled by a Core Animation layer (CAMetalLayer) on iOS, but it made no mention of any other platform. I'm guessing the same applies to the other "mobile" platforms as on iOS, but what handles this interaction on macOS? Or is it also the same?
I have been working on this project for the last month and have gone through all the possibilities, but I have not found the expected output.
I have gone through the following articles and links:
https://ixtli.unam.mx/6280-2/
https://developer.apple.com/documentation/arkit/content_anchors/scanning_and_detecting_3d_objects?language=objc
https://github.com/TokyoYoshida/ExampleOfiOSLiDAR
https://stackoverflow.com/questions/61063571/arkit-3-5-how-to-export-obj-from-new-ipad-pro-with-lidar/61104855#61104855
I am getting plain 3D objects in gray, but I am expecting objects with texture. If you know some way to solve this, please let me know.
Thank You!
Hello Everybody.
I'm trying to port graphics code written in Cg in Unity to Metal.
Also, I don't want to implement a scene graph manually, so I'm going to use SceneKit.
So I should use SCNProgram or SCNNodeRendererDelegate, and I think SCNProgram is more comfortable.
My real question is how to convert this code, in Cg:
Cull Front
ZTest LEqual
ZWrite On
Blend SrcAlpha OneMinusSrcAlpha
I know how to set source-alpha blending on MTLRenderPipelineDescriptor, the depth settings on the render command encoder, and the cull face as well. But when I use SCNProgram or SCNNodeRendererDelegate, I can't find these options... How do I change them? Help me.
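When a material uses an SCNProgram, that fixed-function state mostly lives on SCNMaterial rather than on a pipeline descriptor. A rough mapping of the four Cg states, to the best of my knowledge (untested against your exact shader):

```swift
import SceneKit

/// Maps the Cg render states onto an SCNMaterial driven by an SCNProgram.
func applyStates(to material: SCNMaterial, program: SCNProgram) {
    material.program = program
    material.cullMode = .front                 // Cull Front
    material.writesToDepthBuffer = true        // ZWrite On
    material.readsFromDepthBuffer = true       // ZTest on (SceneKit's default test is LEqual-style)
    material.blendMode = .alpha                // Blend SrcAlpha OneMinusSrcAlpha
}
```

With SCNNodeRendererDelegate it's the opposite situation: you bypass SceneKit's material system entirely, so there you do configure your own MTLRenderPipelineDescriptor color-attachment blending and MTLDepthStencilState exactly as in plain Metal.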
Hello, everyone.
I'm trying to use Metal together with SceneKit: SceneKit's scene graph is great, and I want to implement a low-level Metal shader on top of it.
I want to use SCNNodeRendererDelegate, without SCNProgram, because I want low-level control, for example passing extra custom MTLBuffers or doing multi-pass rendering.
So I pass the model-view-projection matrix like this.
In the Metal shader:
struct NodeBuffer {
    float4x4 modelTransform;
    float4x4 modelViewProjectionTransform;
    float4x4 modelViewTransform;
    float4x4 normalTransform;
    float2x3 boundingBox;
};
In the Swift code:
struct NodeMatrix: sizeable {
    var modelTransform = float4x4()
    var modelViewProjectionTransform = float4x4()
    var modelViewTransform = float4x4()
    var normalTransform = float4x4()
    var boundingBox = float2x3()
}
...
private func updateNodeMatrix(_ camNode: SCNNode) {
    guard let camera = camNode.camera else {
        return
    }
    let modelMatrix = transform
    let viewMatrix = camNode.transform
    let projectionMatrix = camera.projectionTransform
    let viewProjection = SCNMatrix4Mult(viewMatrix, projectionMatrix)
    let modelViewProjection = SCNMatrix4Mult(modelMatrix, viewProjection)
    nodeMatrix.modelViewProjectionTransform = float4x4(modelViewProjection)
}
...
public func renderNode(_ node: SCNNode,
                       renderer: SCNRenderer,
                       arguments: [String: Any]) {
    guard let renderTexturePipelineState = renderTexturePipelineState,
          let renderCommandEncoder = renderer.currentRenderCommandEncoder,
          let camNode = renderer.pointOfView,
          let texture = texture
    else { return }
    updateNodeMatrix(camNode)
    guard let nodeBuffer = renderer.device?.makeBuffer(bytes: &nodeMatrix,
                                                       length: NodeMatrix.stride,
                                                       options: [])
    else { return }
    renderCommandEncoder.setDepthStencilState(depthState)
    renderCommandEncoder.setRenderPipelineState(renderTexturePipelineState)
    renderCommandEncoder.setFragmentTexture(texture, index: 0)
    renderCommandEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    renderCommandEncoder.setVertexBuffer(nodeBuffer, offset: 0, index: 1)
    renderCommandEncoder.drawIndexedPrimitives(type: .triangle,
                                               indexCount: indexCount,
                                               indexType: .uint16,
                                               indexBuffer: indexBuffer,
                                               indexBufferOffset: 0)
}
But I get the wrong model-view-projection matrix in the shader.
I think SceneKit applies some hidden intermediate transform.
I can't figure it out; please help...
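One thing that looks off in updateNodeMatrix (a guess, but a common pitfall): camNode.transform is the camera-to-world transform, whereas the view matrix is world-to-camera, i.e. its inverse; and for nodes with parents, both sides should use world transforms. A sketch of the corrected computation as a standalone function:

```swift
import SceneKit

/// Builds an MVP matrix for `node` as seen from `camNode`'s camera.
func modelViewProjection(for node: SCNNode, camera camNode: SCNNode) -> SCNMatrix4 {
    guard let camera = camNode.camera else { return SCNMatrix4Identity }
    // Use *world* transforms; the view matrix is the inverse of the
    // camera node's world transform, not the transform itself.
    let modelMatrix = node.worldTransform
    let viewMatrix = SCNMatrix4Invert(camNode.worldTransform)
    let projectionMatrix = camera.projectionTransform
    // SCNMatrix4Mult(a, b) concatenates a then b (row-vector convention),
    // matching the ordering already used in the question.
    let viewProjection = SCNMatrix4Mult(viewMatrix, projectionMatrix)
    return SCNMatrix4Mult(modelMatrix, viewProjection)
}
```

Also worth checking when converting SCNMatrix4 to float4x4 for the shader: SceneKit's SCNMatrix4 convention may be transposed relative to what your MSL code expects, which produces exactly this "looks almost right but isn't" symptom.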
Is it possible to pass MTLTexture to Metal Core Image Kernel? How can Metal resources be shared with Core Image?
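As far as I know, yes, in both directions: a CIImage can be created directly from an MTLTexture, and a CIContext can render back into one on your own command buffer, which is the usual way Metal resources are shared with Core Image. A sketch; note Metal textures are top-left origin while Core Image is bottom-left, hence the flip:

```swift
import CoreImage
import Metal

/// Wraps an existing MTLTexture as a CIImage, flipping for the origin mismatch.
func ciImage(from texture: MTLTexture) -> CIImage? {
    CIImage(mtlTexture: texture,
            options: [.colorSpace: CGColorSpaceCreateDeviceRGB()])?
        .oriented(.downMirrored)
}

/// Renders a CIImage into an MTLTexture on a caller-supplied command buffer,
/// so the work interleaves with your other Metal encoding.
func render(_ image: CIImage, to texture: MTLTexture,
            context: CIContext, commandBuffer: MTLCommandBuffer) {
    context.render(image,
                   to: texture,
                   commandBuffer: commandBuffer,
                   bounds: image.extent,
                   colorSpace: CGColorSpaceCreateDeviceRGB())
}
```

For the kernel side, a Metal-based `CIKernel` is loaded from a metallib built with the Core Image flags (`-fcikernel` at both compile and link), via `CIKernel(functionName:fromMetalLibraryData:)`; the wrapped CIImage above can then be passed to it like any other input.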