Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Post

Replies

Boosts

Views

Activity

High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image and only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

```swift
let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size

_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)
```

Here is how the CIContext was created:

```swift
_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])
```

I want to know if I am doing anything wrong, and what could be done to lower CPU usage in Core Image.
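One avenue worth profiling (not something the original post tries; both suggestions below are assumptions to verify with Instruments) is to skip Core Image's color management when source and destination share a color space, and to render through a CIRenderDestination instead of the synchronous render call:

```swift
import CoreImage
import Metal

// A minimal sketch, assuming both pixel buffers share a color space so
// color management can safely be disabled.
let device = MTLCreateSystemDefaultDevice()!
let ciContext = CIContext(mtlDevice: device, options: [
    .cacheIntermediates: false,
    .workingColorSpace: NSNull()  // disable color management
])

// CIRenderDestination lets Core Image start the render asynchronously
// instead of blocking in render(_:to:bounds:colorSpace:).
func render(_ image: CIImage, into pixelBuffer: CVPixelBuffer) throws {
    let destination = CIRenderDestination(pixelBuffer: pixelBuffer)
    _ = try ciContext.startTask(toRender: image, to: destination)
}
```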
4
1
1.3k
Jul ’23
RealityConverter lacking permissions to access image files
Hi, I have this pesky issue where whenever I open an .fbx or .usdc file in Reality Converter, it fails to load the image textures due to a lack of permissions. I then have to click on each individual one to open a file dialog and grant access manually. This gets tedious very quickly. I have granted Full Disk Access to Reality Converter in my Privacy & Security preferences, but this made no difference. Does anyone know how to get around this issue?
3
1
976
Jul ’23
Object Capture With only manual capturing
Is it possible to capture only manually (with automatic capture off) in the Object Capture API? And can I proceed to the capturing stage right away? Only the Object Capture API captures objects at real-world scale. Using AVFoundation or ARKit, I've tried capturing HEVC with LiDAR and creating a PhotogrammetrySample, but neither produces a real-scale object. I think that during Object Capture the API records the point cloud and intrinsic parameters, which helps the mesh come out at real scale. Does anyone know about 'Object Capture with only manual capturing' or 'capturing with AVFoundation for a real-scale mesh'?
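For reference, a minimal reconstruction sketch using PhotogrammetrySession (macOS); the paths are placeholders, and this does not by itself answer the manual-capture question:

```swift
import RealityKit

// A sketch, assuming a folder of captures produced by the iOS capture flow;
// paths are placeholders and error handling is elided.
func reconstruct() async throws {
    let input = URL(fileURLWithPath: "/path/to/captures", isDirectory: true)
    let output = URL(fileURLWithPath: "/tmp/model.usdz")

    let session = try PhotogrammetrySession(input: input, configuration: .init())
    try session.process(requests: [.modelFile(url: output, detail: .medium)])

    // Progress messages and the finished model arrive on the outputs sequence.
    for try await result in session.outputs {
        if case .processingComplete = result { print("Done: \(output.path)") }
    }
}
```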
2
0
1.1k
Jul ’23
Diablo IV - Entering new areas, opening Character menu causes RAM Memory Overflow and screen freezes/crashes
I have a MacBook Pro 16 (M1 Pro, 16 GB RAM) running macOS 14 Sonoma beta 4 with Game Porting Toolkit (GPT) 1.0.2, and I am currently testing Diablo IV v1.04 (latest update as of 08.08.2023). The game is awesome: it runs at 2560x1440 at 50-60 fps on my 4K LG display over HDMI very smoothly, with graphics details at full, smooth shadows, and even FSR2 working perfectly, until the problems below. Diablo IV needs around 9-11 GB of RAM on my system, and there are no background activities running.

Problem 1: exploring new areas causes a RAM buffer overflow; the screen freezes, the game crashes, and a system reboot is needed.

Problem 2: when trying to buy/sell an item, or just opening the character menu, the game freezes and a game restart is necessary.

Running the in-game HUD, I could see what was going on and analyze it: when Problem 1 or Problem 2 happens, RAM usage jumps from 9-11 GB to 16-18 GB. This is much more than the system can deliver and causes the screen freezes and crashes; either a whole system reboot or, mostly, just a game restart is needed. It would be very nice if Apple could fix/adjust GPT for Diablo IV in the next versions. Many thanks in advance.
4
2
3.1k
Jul ’23
Generating vertex data in compute shader
Hi there, I am working on a 3D game engine in Swift and Metal. Currently I dynamically generate vertex buffers for terrain "chunks" on the CPU, pass all models to the GPU via an argument buffer, and make indirect draw calls. Calculating where vertices should be is costly, and I would like to offload the work to a compute shader. Setting up the shader was straightforward, and I can see that it (at least as an empty function) is being executed in the command buffer. However, I run into this problem: since I do not know ahead of time how many vertices a chunk of terrain will have, I cannot create a correctly sized MTLBuffer to pass into the compute function to be populated for later use in a draw call. The only solution I could think of is something like the following:

1. For each chunk model, allocate a vertex buffer and index buffer that will accommodate the maximum possible number of vertices for a chunk.
2. Pass the empty too-large buffers to the compute function.
3. Populate the too-large buffers and set the actual vertex count and index count on the relevant argument buffer.
4. On the CPU, before the render encoder executes commands in the indirect command buffer:
   - for each chunk argument buffer, create new buffers that fit the actual vertex count and index count;
   - blit-copy the populated sections of memory from the original too-large buffers to the new correctly sized buffers;
   - replace the buffers on each chunk model and update the argument buffers for the draw kernel function.

But I am still a Metal novice and would like to know if there is a more straightforward or optimal way to accomplish something like this.
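One possible simplification (my assumption, not the poster's design): skip the compaction step entirely by keeping the oversized buffers and having the kernel also write a MTLDrawIndexedPrimitivesIndirectArguments record, so the real counts never round-trip through the CPU. A host-side sketch, with buffer sizes, indices, and the kernel contract all assumed:

```swift
import Metal

let device = MTLCreateSystemDefaultDevice()!

let maxVerticesPerChunk = 65_536
let vertexStride = MemoryLayout<SIMD4<Float>>.stride

// Oversized vertex buffer that the compute kernel fills partially.
let vertexBuffer = device.makeBuffer(length: maxVerticesPerChunk * vertexStride,
                                     options: .storageModePrivate)!

// The kernel writes the actual index count into this arguments record;
// the draw then reads it on the GPU, so no blit compaction is needed.
let drawArgs = device.makeBuffer(
    length: MemoryLayout<MTLDrawIndexedPrimitivesIndirectArguments>.stride,
    options: .storageModePrivate)!

func encodeChunkDraw(_ encoder: MTLRenderCommandEncoder, indexBuffer: MTLBuffer) {
    encoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
    encoder.drawIndexedPrimitives(type: .triangle,
                                  indexType: .uint32,
                                  indexBuffer: indexBuffer,
                                  indexBufferOffset: 0,
                                  indirectBuffer: drawArgs,
                                  indirectBufferOffset: 0)
}
```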
6
1
1.2k
Jul ’23
Metal Shader Converter Help
So I have been working on using the new Metal Shader Converter to create a graphics abstraction between D3D12 and Metal. One thing I cannot wrap my head around is how someone would do bindless buffers in Metal. Take this for example... the Metal Shader Converter easily converts this code to Metal:

```hlsl
ByteAddressBuffer bindless_buffers[] : register(t0, space1);

v2f vertMain(vertIn in) {
    Mesh m = bindless_buffers[1].Load<Mesh>(0);
    v2f out;
    out.pos = in.pos * m.pos;
    return out;
}
```

And using the new Shader Converter, one can easily create a DescriptorTable from a root signature that holds this unbounded array of ByteAddressBuffers. But when you try to fill an argument buffer using the DescriptorTableEntry struct, it looks like you can only place one resource at a time and cannot even access the array in the descriptor. For textures this is okay because you can supply an MTLTexture that holds other textures, but you can't have an MTLBuffer hold different kinds of buffers. Is it possible to do this ByteAddressBuffer style of full bindless in Metal? The Shader Converter allows it, but I don't know how to get the API to supply the shaders with the correct data... Any help would be GREATLY appreciated, and all the work I do will be posted online for others to learn about using Metal :)
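For what it's worth, outside the Shader Converter runtime, Metal 3 can express this directly: an argument buffer can simply hold the GPU virtual address of every buffer, which the shader reads as an array of device pointers. A hedged host-side sketch (the helper and names are mine, and this bypasses DescriptorTableEntry entirely):

```swift
import Metal

// A sketch, assuming Metal 3 (iOS 16 / macOS 13): the "descriptor table"
// is just a buffer of 64-bit GPU addresses.
let device = MTLCreateSystemDefaultDevice()!
let meshBuffers: [MTLBuffer] = makeMeshBuffers(device) // hypothetical helper

let table = device.makeBuffer(
    length: MemoryLayout<UInt64>.stride * meshBuffers.count,
    options: .storageModeShared)!
let slots = table.contents().bindMemory(to: UInt64.self,
                                        capacity: meshBuffers.count)
for (i, buffer) in meshBuffers.enumerated() {
    slots[i] = buffer.gpuAddress
}
// At encode time, bind `table` once and mark every referenced buffer
// resident with useResource(_:usage:stages:).
```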
1
0
796
Jul ’23
Pointers in MSL
Hello! I have to use a specific pattern with pointers for a shader and I am not sure what's wrong. The shader renders with artefacts; something seems to be messed up with the pointers and the UVs. Here is a simplified version:

```metal
struct Context {
    float3 outColor;
    float2 uv;
};

device Context *ContextInit(float3 color, float2 uv) {
    device Context *context = nullptr;
    context->outColor = color;
    context->uv = uv;
    return context;
}

void drawSomething(device Context &context) {
    float d = length(context.uv);
    context.outColor *= d;
}

void manupulateUV(device Context &context, float2 uv) {
    uv += 0.5;
    float d = length(sin(uv));
    context.outColor -= d;
}

fragment float4 pointer(VertexOut input [[stage_in]]) {
    float2 uv = input.textureCoordinate;
    device Context *context = ContextInit(float3(1, 0, 0), uv);
    drawSomething(*context);
    return float4(context->outColor.x, context->outColor.y, 0, 1);
}
```
5
0
408
Jul ’23
How to get grounding shadow to work in VisionOS?
Hi, I'm trying to replicate the ground shadow in this video. However, I couldn't get it to work in the simulator. My scene, rendered as an immersive space, looks like the following: the rocket object has the grounding shadow component with "cast shadow" set to true, but I couldn't see any shadow on the plane beneath it. Things I tried:

- using code to add the grounding shadow component: didn't work
- re-using the IBL from the Hello World project to get some lighting for the objects: the IBL worked, but I still couldn't see the shadow
- adding a DirectionalLight: got an error saying that directional lights are not supported in visionOS (despite the docs saying the opposite)

A related question on lighting: I can see that the simulator definitely applies some scene lighting to objects, but it doesn't seem to do it perfectly. For example, in the above screenshot I placed the objects under a transparent ceiling which is supposed to get a lot of light, but everything is still quite dark.
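For reference, a minimal sketch of the code path from the first bullet; the one extra idea here (an assumption, not a confirmed fix) is that the component must sit on the entities that actually own a ModelComponent, not only on the scene root:

```swift
import RealityKit

// Recursively apply the grounding shadow to every model entity.
func applyGroundingShadow(to entity: Entity) {
    if entity.components.has(ModelComponent.self) {
        entity.components.set(GroundingShadowComponent(castsShadow: true))
    }
    for child in entity.children {
        applyGroundingShadow(to: child)
    }
}
```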
6
1
2.2k
Jul ’23
RealityView update closure not executed upon state change
I have the following piece of code:

```swift
@State var root = Entity()

var body: some View {
    RealityView { content, _ in
        do {
            let _root = try await Entity(named: "Immersive", in: realityKitContentBundle)
            content.add(_root)
            // root = _root <-- this doesn't trigger the update closure
            Task {
                root = _root // <-- this does
            }
        } catch {
            print("Error in RealityView's make: \(error)")
        }
    } update: { content, attachments in
        // NOTE: update not called when root is modified
        // unless root modification is wrapped in Task
        print(root) // the intent is to use root for positioning attachments.
    } attachments: {
        Text("Preview")
            .font(.system(size: 100))
            .background(.pink)
            .tag("initial_text")
    }
} // end body
```

If I change the root state in the make closure by simply assigning it another entity, the update closure will not be called: print(root) will print two empty entities. If instead I wrap the assignment in a Task, the update closure is called, and I see the correct root entity being printed. Any idea why this is the case? In general, I'm unsure of the order in which the make, update, and attachments closures are executed. Is there more guidance on what we should expect the order to be, what we should typically do in each closure, etc.?
1
0
885
Jul ’23
Surface shader can't get opacity from the texture
I have a simple material with a texture that has transparency baked in (a PNG file). This is the code I use to load it:

```swift
var triMat = SimpleMaterial(color: .orange, isMetallic: false)
triMat.color = SimpleMaterial.BaseColor(
    tint: .white.withAlphaComponent(0.9),
    texture: MaterialParameters.Texture(try! .load(named: "whity_4x4")))
self.components[ModelComponent.self] = try! ModelComponent(
    mesh: .generate(from: [meshDesc]),
    materials: [triMat])
```

It's pretty straightforward and it works fine: the geometry is a plane and it shows the texture correctly, with transparency. But when I try to create a surface shader (I want to animate the UVs so the texture looks like it scrolls), the transparency becomes a black color. This is the part where I sample the texture for color and opacity:

```metal
auto color = params.textures().base_color().sample(textureSampler, float2(u1, v1));
auto opacity = params.textures().opacity().sample(textureSampler, float2(u1, v1)).r;
```

and this is where I set the surface color and opacity:

```metal
params.surface().set_base_color(color.rgb);
params.surface().set_opacity(opacity);
```

Upon debugging, it seems the opacity value is wrong (always 1), which is why it doesn't show as transparent. Did I do something wrong with sampling the texture opacity? I looked at the Metal API documentation and that's how you are supposed to do it. Thanks for any help.
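One hunch worth testing (an assumption on my part, not a confirmed cause): the shader's opacity texture slot may only be bound when the material is explicitly set up for transparency, rather than inheriting the base color texture's alpha channel. On the material side, that would look roughly like this (texture name reused from the post):

```swift
import RealityKit

// A hedged sketch: explicitly hand the CustomMaterial an opacity texture
// so params.textures().opacity() has data to sample.
func makeTransparentBlending() throws -> CustomMaterial.Blending {
    let texture = try TextureResource.load(named: "whity_4x4")
    return .transparent(opacity: .init(texture: .init(texture)))
}
// e.g. customMaterial.blending = try makeTransparentBlending()
```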
3
0
505
Jul ’23
Metal Shader Performance
Hi, I have a customized ray-marching algorithm and I want to run it in a Metal shader, but I am not sure how to get the maximum performance out of it. Which of the two approaches below is the best way to get maximum performance?

1. Use the compute pipeline: make a kernel function for the ray marching, link it into a compute pipeline, and display the result as an image.
2. Use the graphics pipeline: run the algorithm inside the fragment shader directly, link it into a graphics pipeline, and display it directly.

Thanks,
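For concreteness, a host-side sketch of approach 1; the kernel name, texture size, and threadgroup shape are placeholders, and whether compute beats fragment for a given ray marcher is exactly the open question:

```swift
import Metal

// A minimal sketch: dispatch a ray-marching kernel that writes one pixel
// per thread into an output texture.
func dispatchRayMarch(device: MTLDevice, queue: MTLCommandQueue) throws -> MTLTexture {
    let library = device.makeDefaultLibrary()!
    let pipeline = try device.makeComputePipelineState(
        function: library.makeFunction(name: "rayMarchKernel")!) // hypothetical kernel

    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .rgba16Float, width: 1280, height: 720, mipmapped: false)
    desc.usage = [.shaderWrite, .shaderRead]
    let output = device.makeTexture(descriptor: desc)!

    let commands = queue.makeCommandBuffer()!
    let encoder = commands.makeComputeCommandEncoder()!
    encoder.setComputePipelineState(pipeline)
    encoder.setTexture(output, index: 0)
    // dispatchThreads needs non-uniform threadgroup support (Apple GPUs have it).
    encoder.dispatchThreads(MTLSize(width: output.width, height: output.height, depth: 1),
                            threadsPerThreadgroup: MTLSize(width: 8, height: 8, depth: 1))
    encoder.endEncoding()
    commands.commit()
    return output
}
```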
0
0
255
Jul ’23
Entity rotation animation doesn't go beyond 180 degree?
I use the simple transform-and-move method to animate my entity, something like this:

```swift
let transform = Transform(scale: .one,
                          rotation: simd_quatf(angle: .pi, axis: SIMD3(x: 0, y: 0, z: 1)),
                          translation: .zero)
myEntity.move(to: transform, relativeTo: myEntity, duration: 1)
```

All is well, but when I try to rotate any more than 180 degrees, the rotation stays still. How do I animate something that needs to turn 360 degrees? Thanks
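One common workaround (my suggestion, not from the post): quaternion animation interpolates along the shortest arc, so a full turn can be split into two 180-degree moves chained back to back. A sketch:

```swift
import Foundation
import RealityKit

// Spin an entity 360 degrees by chaining two 180-degree moves.
func spin(_ entity: Entity, duration: TimeInterval = 1) {
    let halfTurn = Transform(rotation: simd_quatf(angle: .pi, axis: [0, 0, 1]))
    entity.move(to: halfTurn, relativeTo: entity, duration: duration / 2)
    DispatchQueue.main.asyncAfter(deadline: .now() + duration / 2) {
        // relativeTo: entity makes this a second half-turn from the new pose.
        entity.move(to: halfTurn, relativeTo: entity, duration: duration / 2)
    }
}
```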
2
0
730
Jul ’23
Unity Apple plugin Issue
We are upgrading our Objective-C-based iOS multiplayer game to Unity, so we plan to use the Unity Apple plugins, specifically the Apple.Core and Apple.GameKit packages.

1. Cloned the project from https://github.com/apple/unityplugins
2. Built the package using Unity 2020.3.33f1
3. In the Unity Editor, used Window > Package Manager and added both packages from the tarball.
4. No errors; all looks good.
5. Built & ran the project and installed the game on an iPhone 13 device.
6. Once the game starts, we see the following errors, and the game quits.

Errors:

```
'/private/preboot/Cryptexes/OS/usr/lib/swift/AppleCoreNative.framework/AppleCoreNative' (no such file),
'/private/var/containers/Bundle/Application/0CF8E689-36C1-4AA6-9CF6-1E078EC1A3FB/Game.app/Frameworks/AppleCoreNative.framework/AppleCoreNative' (no such file),
'/private/var/containers/Bundle/Application/0CF8E689-36C1-4AA6-9CF6-1E078EC1A3FB/Game.app/Frameworks/UnityFramework.framework/Frameworks/AppleCoreNative.framework/AppleCoreNative' (no such file),
'/private/var/containers/Bundle/Application/0CF8E689-36C1-4AA6-9CF6-1E078EC1A3FB/Game.app/Frameworks/AppleCoreNative.framework/AppleCoreNative' (no such file),
'/System/Library/Frameworks/AppleCoreNative.framework/AppleCoreNative' (no such file, not in dyld cache)
```

Can anyone please help us resolve this issue?
1
0
953
Jul ’23
How to position windows in the environment in VisionOS?
The code below is my entry point:

```swift
import SwiftUI

@main
struct KaApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        WindowGroup(id: "text-window") {
            ZStack {
                TextViewWindow()
                    .background(.ultraThickMaterial)
                    .edgesIgnoringSafeArea(.all)
            }
        }
        .windowStyle(.automatic)
        .defaultSize(width: 0.1, height: 0.1, depth: 1, in: .meters)

        WindowGroup(id: "model-kala") {
            ModelView()
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 0.8, height: 0.8, depth: 0.8, in: .meters)

        WindowGroup(id: "model-kala-2") {
            AllModelsView().edgesIgnoringSafeArea(.all)
        }
        .windowStyle(.volumetric)
        .defaultSize(width: 1, height: 1, depth: 1, in: .meters)
    }
}
```

I want to place the TextViewWindow exactly near a model that I have placed in the environment, but I'm unable to reposition the window to exactly where I want.

```swift
if let Armor_Cyber = try? await ModelEntity(named: "Armor_Cyber"),
   let animation = Armor_Cyber.availableAnimations.first {
    Armor_Cyber.playAnimation(animation.repeat(duration: .infinity))
    Armor_Cyber.scale = [0.008, 0.008, 0.008]
    Armor_Cyber.position = [-4, -1, 0.15]
    let rotation = simd_quatf(angle: -.pi / 6, axis: SIMD3<Float>(0, 1, 0))
        * simd_quatf(angle: -.pi / 2, axis: SIMD3<Float>(1, 0, 0))
        * simd_quatf(angle: .pi / 2, axis: SIMD3<Float>(0, 0, 1))
    Armor_Cyber.transform.rotation = rotation
    content.add(Armor_Cyber)
}
```

How can I place the WindowGroup exactly at the top right of the above model?
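One alternative worth considering (my suggestion, not from the post): visionOS decides window placement itself, so instead of positioning a WindowGroup, the text can be pinned next to the model with a RealityView attachment, which is an entity you can position freely. A sketch, with the attachment ID and offset as placeholders:

```swift
import SwiftUI
import RealityKit

struct ModelWithLabel: View {
    var body: some View {
        RealityView { content, attachments in
            if let model = try? await ModelEntity(named: "Armor_Cyber"),
               let label = attachments.entity(for: "label") {
                content.add(model)
                // Hover the label above and to the right of the model.
                label.position = model.position + SIMD3<Float>(0.3, 0.5, 0)
                content.add(label)
            }
        } attachments: {
            Attachment(id: "label") {
                Text("Armor Cyber")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```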
2
0
1k
Jul ’23
VisionOS Simulator DragGestures
Hello, right now I am learning some RealityKit for visionOS. I do not receive any errors in my code, so it seems okay, but I can't drag my object around. Does the simulator support gestures in general?

```swift
//
//  ImmersiveView.swift
//  NewDimensionn
//
//  Created by Patrick Schnitzer on 18.07.23.
//

import SwiftUI
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var earth: Entity = Entity()
    var moon: Entity = Entity()

    var body: some View {
        RealityView { content in
            async let earth = ModelEntity(named: "EarthScene", in: realityKitContentBundle)
            async let moon = ModelEntity(named: "MoonScene", in: realityKitContentBundle)
            if let earth = try? await earth, let moon = try? await moon {
                content.add(earth)
                content.add(moon)
            }
        }
        .gesture(DragGesture()
            .targetedToEntity(earth)
            .onChanged { value in
                earth.position = value.convert(value.location3D, from: .local, to: earth.parent!)
            }
        )
    }
}

#Preview {
    ImmersiveView()
        .previewLayout(.sizeThatFits)
}
```
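The simulator does support drag gestures. One thing to check (my suggestion, not from the post): targetedToEntity(_:) only fires if the entity can receive input, which requires both an InputTargetComponent and a CollisionComponent. A sketch, with the collision shape as a placeholder:

```swift
import RealityKit

// Make an entity hit-testable so DragGesture().targetedToEntity(_:) can find it.
func makeDraggable(_ entity: Entity) {
    entity.components.set(InputTargetComponent())
    entity.components.set(CollisionComponent(
        shapes: [.generateBox(size: [0.5, 0.5, 0.5])]))
}
```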
4
0
2.1k
Jul ’23
Game Porting Toolkit 1.0.2 installation issue
How do I fix this? I typed brew upgrade in Terminal and this popped up:

```
Error: Cannot install in Homebrew on ARM processor in Intel default prefix (/usr/local)!
Please create a new installation in /opt/homebrew using one of the
"Alternative Installs" from:
  https://docs.brew.sh/Installation
You can migrate your previously installed formula list with:
  brew bundle dump
```
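For context (my reading, not stated in the post): the Game Porting Toolkit uses an x86_64 Homebrew installed under Rosetta in /usr/local, so plain brew commands on Apple silicon hit this error. Assuming Rosetta 2 and the Intel Homebrew are already installed, something like this runs the right brew:

```
# Run the Intel (x86_64) Homebrew in /usr/local through Rosetta:
arch -x86_64 /usr/local/bin/brew upgrade
```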
3
1
2.0k
Jul ’23
Does a listen-only CGEventTap block events handling?
I am writing a tool that tracks statistics about keystrokes. For that I create an event tap using CGEventTapCreate (docs). Since my code does not alter events, I create the tap with the kCGEventTapOptionListenOnly option. Do I still need to minimize the runtime of my event-handling callback for fast processing of keyboard events? I assume that a listen-only handler does not block the OS-internal event-handling queue, but I can't find anything assertive about that in the documentation. Many thanks in advance.
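For reference, a minimal sketch of the listen-only tap described above; the event mask and run-loop setup are the standard pattern, and whether the OS waits on a passive callback is exactly the open question, so the comment hedges it:

```swift
import CoreGraphics

// Count key-down events with a passive (listen-only) tap.
let mask = CGEventMask(1 << CGEventType.keyDown.rawValue)
let tap = CGEvent.tapCreate(
    tap: .cgSessionEventTap,
    place: .headInsertEventTap,
    options: .listenOnly,
    eventsOfInterest: mask,
    callback: { _, _, event, _ in
        // For listen-only taps the return value is ignored; keeping this
        // callback fast is still prudent while the blocking question is open.
        return Unmanaged.passUnretained(event)
    },
    userInfo: nil
)

if let tap {
    let source = CFMachPortCreateRunLoopSource(kCFAllocatorDefault, tap, 0)
    CFRunLoopAddSource(CFRunLoopGetCurrent(), source, .commonModes)
    CGEvent.tapEnable(tap: tap, enable: true)
    CFRunLoopRun()
}
```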
1
0
617
Jul ’23
Exported .usdz scenes are not compatible with common tools
If you have a scene with a simple custom .usda material applied to a primitive like a cube, the exported (.usdz) material definition is unknown to tools like Reality Converter Version 1.0 (53) or Blender Version 3.6.1. Reality Converter shows warnings such as "Missing references in USD file" and "Invalid USD shader node in USD file". Even Reality Composer Pro is unable to recreate the material correctly from its own exported .usdz files. Feedback: FB12699421
3
0
1.1k
Jul ’23