Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and share resources for game developers.

All subtopics

Post | Replies | Boosts | Views | Activity
MTKView, presentDrawable afterMinimumDuration seems flakey...
I downloaded this sample: https://developer.apple.com/documentation/metal/basic_tasks_and_concepts/using_metal_to_draw_a_view_s_contents?preferredLanguage=occ

In AAPLViewController.mm I commented out this line:

```objectivec
//    _view.enableSetNeedsDisplay = YES;
```

In AAPLRenderer.mm I modified the presentDrawable line to add afterMinimumDuration:

```objectivec
[commandBuffer presentDrawable:drawable afterMinimumDuration:1.0/60];
```

I then added a presentedHandler before the above line that records the time between successive presents. Most of the time it correctly reports 0.0166667s. However, about every dozen or so frames (it varies) it seems to present a frame early, with an interval of 0.0083333333s followed by the next frame after around 0.24s. Is this expected behaviour? I was hoping that afterMinimumDuration would specifically make things consistent. Why would it present a frame early? This is on a new MacBook Pro 16 running the latest macOS Monterey, with the sample project upgraded to a minimum deployment target of 11.0, on the latest public Xcode release, 13.1.
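For reference, a minimal sketch of the kind of measurement being described, using MTLDrawable's addPresentedHandler and presentedTime; the surrounding function and variable names are placeholders, not from the sample:

```swift
import Metal
import QuartzCore

// Sketch (assumptions: `drawable` and `commandBuffer` come from the
// sample's per-frame draw loop; this state lives on the renderer).
var lastPresentedTime: CFTimeInterval = 0

func encodePresent(commandBuffer: MTLCommandBuffer, drawable: CAMetalDrawable) {
    drawable.addPresentedHandler { presented in
        // presentedTime is the host time at which the frame actually
        // reached the screen; 0 means it was never presented.
        let t = presented.presentedTime
        if lastPresentedTime > 0 {
            print("interval since previous present: \(t - lastPresentedTime)s")
        }
        lastPresentedTime = t
    }
    // Ask Metal to keep each frame on screen for at least 1/60 s.
    commandBuffer.present(drawable, afterMinimumDuration: 1.0 / 60.0)
}
```

Comparing the handler's presentedTime deltas against the requested minimum duration is what surfaces the early presents described above.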
3
0
952
Nov ’21
error ‘Texture Descriptor Validation
My RealityKit app uses an ARView with camera mode .nonAR. Later it puts another ARView with camera mode .ar on top of this one. When I apply layout constraints to the second view, the program aborts with the following messages. If both views are of type .ar this doesn't occur; it only happens when the first view is .nonAR and the second is presented over it. I have so far been unable to reproduce this behavior in a demo program to provide to you, and the original code is complex and proprietary. Does anyone know what is happening? I've seen other questions concerning this situation, but not under the same circumstances.

```
2021-12-01 17:59:11.974698-0500 MyApp[10615:6672868] -[MTLTextureDescriptorInternal validateWithDevice:], line 1325: error 'Texture Descriptor Validation
MTLTextureDescriptor has width (4294967295) greater than the maximum allowed size of 16384.
MTLTextureDescriptor has height (4294967295) greater than the maximum allowed size of 16384.
MTLTextureDescriptor has invalid pixelFormat (0).
'
-[MTLTextureDescriptorInternal validateWithDevice:]:1325: failed assertion `Texture Descriptor Validation
MTLTextureDescriptor has width (4294967295) greater than the maximum allowed size of 16384.
MTLTextureDescriptor has height (4294967295) greater than the maximum allowed size of 16384.
MTLTextureDescriptor has invalid pixelFormat (0).
```
3
0
2.4k
Dec ’21
How to fully apply parallel computing on the CPU and GPU of the M1 Max
The project is based on Python 3.8 and 3.9 and contains some C and C++ source. How can I do parallel computing on the CPU and GPU of the M1 Max? In fact, I bought the M1 Max Mac for its strong GPU to do quantitative finance, where speed is extremely important. Unfortunately, CUDA is not compatible with Macs. Show me how to do it, thanks.

1. Can Accelerate (for the CPU) and Metal (for the GPU) speed up any source by building like this?
   Step 1: download the source from GitHub.
   Step 2: create a file named "site.cfg" in the source folder and add:

   ```
   [accelerate]
   libraries = Metal, Accelerate, vecLib
   ```

   Step 3: in Terminal: NPY_LAPACK_Order=accelerate python3 setup.py build
   Step 4: pip3 install . or python3 setup.py install? (I am not sure which method to apply.)
2. How compatible is this method? I need to speed up NumPy, pandas, and even an open-source project such as https://github.com/microsoft/qlib
3. Just show me the code.
4. When compiling the C and C++ source, a lot of errors were reported. Which gcc and g++ should I choose? The default gcc installed by brew is 4.2.1, which cannot work, and I even tried downloading gcc from the official website of ARM; it still cannot work. Give me a hint. Thanks so much; this is urgent.
1
1
2.2k
Dec ’21
PhotogrammetrySample metadata
How does metadata affect model creation? When I added image metadata to a sample, the created model changed.

This is the code I use to get metadata from an image URL:

```swift
func getImageMetaData(url: URL) -> CFDictionary? {
    if let data = NSData(contentsOf: url),
       let source = CGImageSourceCreateWithData(data as CFData, nil) {
        let metadata = CGImageSourceCopyPropertiesAtIndex(source, 0, nil)
        return metadata
    }
    return nil
}
```

And this is the code that adds the metadata when I create the PhotogrammetrySample:

```swift
if let metaData = getImageMetaData(url: imageURL) as? [String: AnyObject] {
    sample.metadata = metaData
}
```
5
0
2.9k
Jan ’22
Display USDZ 3D model on website without AR
I am trying to build a website where I would like to render the USDZ 3D model on the browser without the AR feature. The user should be able to interact with the 3D model using a pointing device (mouse). If the user wants to see the 3D model in AR she/he can do so by loading the page on a compatible device where the 3D model can be projected in AR. I am looking for an answer to how to display the USDZ 3D model on the browser without the AR feature.
2
1
2.4k
Jan ’22
Metal Ray Tracing, RealityKit, SwiftUI Problems
First of all, I apologize for such a general question, but my code is far too long to post. However, I have narrowed down my problems and am seeking advice on any solutions or workarounds that may help me. I recently updated my physics simulation code from using Metal Performance Shaders for ray tracing to using the (fairly) new Metal ray tracing routines directly. A few notes on my program:

- I perform the ray tracing entirely in a separate thread using compute kernels; there is no rendering based on the ray tracing results. The compute kernels are called repeatedly and each ends with a waitUntilCompleted() command.
- The main thread runs a SwiftUI interface with some renderings that use RealityKit to display the scene (i.e., the vertices) and the rays that traverse the scene using Metal ray tracing.
- This is purely a Mac program and has no iOS support.

So, the problem is that there seems to be some conflict between the RealityKit rendering and the ray tracing compute kernels, where I get a "GPU Soft Fault" when I run the "intersect" command in Metal. After this soft fault, my ray tracing results are completely bogus. I have found a workaround, which is to refit my acceleration structures semi-regularly, but this solution is inelegant and probably not sustainable. The problem gets worse the more I render in my RealityKit display UI (rendered as a SwiftUI view), so I am now confident that the problem is some collision between the GPU resources needed by my program and RealityKit. I have not been able to find any information on what a "GPU Soft Fault" actually is, although I suspect it is a memory violation. I suspect I need to use fences to cordon off my ray tracing compute kernel from other things that use Metal (i.e., RealityKit), but I am just not sure whether that is the case or how to accomplish it. Again, I apologize for the vague question, but I am really stuck.

I have confirmed that every Metal buffer I pass to my compute kernel is correct. I did this by making my object a simple cube and having only one instance of it. Something happens to either corrupt the acceleration structure data or make it inaccessible at certain times when RealityKit needs to use the GPU. Any advice would be appreciated. I have not submitted a bug report, since I am still not sure whether this is just my lack of advanced knowledge of multiple actors requiring GPU use or whether there is something more serious here. Thanks in advance, -Matt
2
0
924
Feb ’22
Sudden error being logged continuously in Xcode console
I have an app I've been working on for quite some time now. It uses SpriteKit, and the windows with the scenes in them are generated programmatically. All of a sudden, when it's up and running through Xcode, the console throws out a continuous stream of errors, probably on the order of one per second. The main one making up this stream is:

```
2022-03-07 20:07:38.765930+0000 My*App[8461:465673] [] CurrentVBLDelta returned 0 for display 1 -- ignoring unreasonable value
```

But others that appear slightly less frequently are:

```
2022-03-07 20:07:38.447800+0000 My*App[8461:465143] Metal GPU Frame Capture Enabled
2022-03-07 20:07:38.448070+0000 My*App[8461:465143] Metal API Validation Enabled
2022-03-07 20:07:38.613097+0000 My*App[8461:465640] [] [0x7f9596053820] CVCGDisplayLink::setCurrentDisplay: 1
2022-03-07 20:07:38.613415+0000 My*App[8461:465640] [] [0x7f9596053820] Bad CurrentVBLDelta for display 1 is zero. defaulting to 60Hz.
2022-03-07 20:07:38.613442+0000 My*App[8461:465640] [] [0x7f9596053800] CVDisplayLinkCreateWithCGDisplays count: 1 [displayID[0]: 0x1] [CVCGDisplayLink: 0x7f9596053820]
2022-03-07 20:07:38.613467+0000 My*App[8461:465640] [] [0x7f9596053800] CVDisplayLinkStart
2022-03-07 20:07:38.613487+0000 My*App[8461:465640] [] [0x7f9596053820] CVDisplayLink::start
2022-03-07 20:07:38.613541+0000 My*App[8461:465640] [] [0x7f9596053800] CVDisplayLinkStart
2022-03-07 20:07:38.613575+0000 My*App[8461:465640] [] [0x7f9596053820] CVDisplayLink::start
2022-03-07 20:07:38.613634+0000 My*App[8461:465671] [] [0x600000c09f10] CVXTime::reset
2022-03-07 20:07:38.613718+0000 My*App[8461:465671] [] CurrentVBLDelta returned 0 for display 1 -- ignoring unreasonable value
2022-03-07 20:07:38.639810+0000 My*App[8461:465671] [] CurrentVBLDelta returned 0 for display 1 -- ignoring unreasonable value
2022-03-07 20:07:38.639887+0000 My*App[8461:465671] [] [0x7f9596053820] CVCGDisplayLink::getDisplayTimes display: 1 -- SLSDisplayGetCurrentVBLDelta returned 0 => generating fake times
2022-03-07 20:07:38.673702+0000 My*App[8461:465653] [] [0x7f9596053820] CVCGDisplayLink::setCurrentDisplay: 1
2022-03-07 20:07:38.673748+0000 My*App[8461:465653] [] [0x7f9596053820] CVDisplayLink::stop
2022-03-07 20:07:38.673862+0000 My*App[8461:465653] [] [0x7f9596053820] Bad CurrentVBLDelta for display 1 is zero. defaulting to 60Hz.
2022-03-07 20:07:38.673883+0000 My*App[8461:465653] [] [0x7f9596053820] CVDisplayLink::start
```

I really don't know how else to describe this. I can only imagine having to hand over the entire project; lately I've only been working on the storyboard file, so how would that affect it? To be honest, the only change I can see is my upgrade to macOS 12.2.1. It's definitely something to do with SpriteKit, since not initialising a window with a scene in it stops the errors from showing.
20
4
7.4k
Mar ’22
Unreal Engine / Path tracer
I'm an architect in Switzerland. For 20 years our office has used only Apple computers, and we love them! For our visualisations we also use Twinmotion and Unreal Engine. One key feature, the path tracer, is not supported on macOS. Some people say that's because Apple won't support hardware-accelerated GPU ray tracing. Now I'm wondering: is it on your roadmap? Or are you in discussion with the developers at Epic? It would be great to have this feature :) Best, Kevin
13
3
6.2k
Mar ’22
RealityKit MeshResource generated from SwiftUI shape
In SceneKit, using SCNShape we can create geometry from SwiftUI 2D shapes/Béziers: https://developer.apple.com/documentation/scenekit/scnshape

Is there an equivalent in RealityKit? Could we use generate(from:) for that? https://developer.apple.com/documentation/realitykit/meshresource/3768520-generate
2
0
871
Mar ’22
Blank scene with Xcode connected and "Metal -> API Validation" turned off
I'm not sure which combination of iOS/Xcode/macOS is causing this issue, but all of a sudden, when I try to run our SceneKit app with the "Scheme -> Diagnostics -> Metal -> API Validation" setting turned off, the scene won't render and the console fills with the following errors:

```
Execution of the command buffer was aborted due to an error during execution. Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)
[SceneKit] Error: Main command buffer execution failed with status 5, error: Error Domain=MTLCommandBufferErrorDomain Code=9 "Invalid Resource (00000009:kIOGPUCommandBufferCallbackErrorInvalidResource)"
```

If you run the app outside of Xcode it's fine, and enabling the "API Validation" option also stops the issue. One of my schemes has had this option disabled since the project began and never had an issue before. I'm just throwing this out there in case someone else has spent hours of their life trying to figure out why this is not working for them. You can also just create a new SceneKit project and turn that diagnostic option off, and the app won't render anything.
2
0
1.5k
Apr ’22
Apple's Metal Simulator sample fails with SIGABRT on MTLTextureUsageShaderRead
I downloaded Apple's Metal simulator example from https://developer.apple.com/documentation/metal/supporting_simulator_in_a_metal_app and tried simulating an iPhone 11 with 11.5 and 12 using iOS, and an iPad 9th gen. I get the error below when running the simulator on all of the above devices:

```
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5252: failed assertion `Draw Errors Validation
Fragment Function(blendFragmentShader): Shader reads texture (prevColor[1]) whose usage (0x04) doesn't specify MTLTextureUsageShaderRead (0x01)
```

The error occurs in drawBox:rendererEncoder at the code below when boxIndex = 1:

```objectivec
for (MTKSubmesh *submesh in _meshes[boxIndex].submeshes)
{
    [renderEncoder drawIndexedPrimitives:submesh.primitiveType
                              indexCount:submesh.indexCount
                               indexType:submesh.indexType
                             indexBuffer:submesh.indexBuffer.buffer
                       indexBufferOffset:submesh.indexBuffer.offset];
}
```

I'm using a MacBook Pro, macOS 12.2, and Xcode 13.3. I had hoped this example would be an easy way to get experience with Metal, but I don't yet have the experience to interpret what I should change to address the "doesn't specify MTLTextureUsageShaderRead" error. Any help is greatly appreciated.
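For what it's worth, that assertion generally means the texture being sampled was created without the shaderRead usage flag. A hedged sketch of the usual fix, assuming you can reach the descriptor that creates the prevColor texture (the pixel format and sizes below are illustrative, not taken from the sample):

```swift
import Metal

// The Metal validation layer requires every texture a shader samples to
// declare MTLTextureUsageShaderRead at creation time. A texture that is
// both rendered into and read back (as prevColor appears to be) needs
// both flags on its descriptor.
let descriptor = MTLTextureDescriptor.texture2DDescriptor(
    pixelFormat: .bgra8Unorm,   // illustrative; match the sample's format
    width: 1024,
    height: 1024,
    mipmapped: false)
descriptor.usage = [.renderTarget, .shaderRead]
```

The usage value 0x04 in the error message corresponds to renderTarget alone, which is why adding .shaderRead (0x01) is the change to look for.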
3
0
1.3k
Jun ’22
iOS app on macOS (M1) - Window Resize or Full Screen
M1 Macs can run iOS applications. We have an iOS application that runs a fullscreen Metal game; the game also runs across all desktop platforms via Steam. In addition to Steam, we would like to make it available through the App Store on macOS. We'd like to use our iOS builds for this so that the Apple payment (micro-transaction) and sign-in processes can be reused. While the app runs on macOS, it runs in a small, iPad-shaped window that cannot be resized. We do not want to add iPad multitasking support (portrait orientation is not viable), but we would like the window on macOS to be expandable to full screen. Currently there is an option to make it full screen, but the Metal view (MTKView) delegate does not receive a drawableSizeWillChange event for this, so the new resolution of the window cannot be obtained. Is there another method of retrieving a window size change event in this context? What is the recommended way of enabling window resizing on macOS but not iPad for a single iOS app?
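One possible fallback worth sketching, under the assumption that the view controller owns the MTKView (the mtkView and renderer names here are hypothetical, not from the project): recompute the drawable size from the layout pass itself, since viewDidLayoutSubviews should still fire when the window geometry changes.

```swift
import UIKit
import MetalKit

final class GameViewController: UIViewController {
    var mtkView: MTKView!        // hypothetical reference to the Metal view
    var renderer: Renderer!      // hypothetical renderer with a resize hook

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Derive the drawable size in pixels from the view's current bounds,
        // in case mtkView(_:drawableSizeWillChange:) was not delivered.
        let scale = view.window?.screen.scale ?? 1
        let size = CGSize(width: mtkView.bounds.width * scale,
                          height: mtkView.bounds.height * scale)
        if size != mtkView.drawableSize {
            renderer.resize(to: size)   // hypothetical resize method
        }
    }
}
```

This is a sketch, not a confirmed workaround; whether layout callbacks cover the full-screen transition for iOS apps on Apple Silicon is exactly the open question in the post.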
3
0
5.8k
Jun ’22
Problems with mesh shaders that dispatch a large number of threadgroups
I was familiarising myself with Metal mesh shaders and ran into some issues. First, a trivial application that uses mesh shaders to generate simple rectangular geometry hangs the GPU when dispatching 2D grids of mesh shader threadgroups, and it's really weird, as it is sensitive to the grid shape. E.g.:

```metal
// these work!
meshGridProperties.set_threadgroups_per_grid(uint3(512, 1, 1));
meshGridProperties.set_threadgroups_per_grid(uint3(16, 8, 1));
meshGridProperties.set_threadgroups_per_grid(uint3(32, 5, 1));

// these (and anything "bigger") hang!
meshGridProperties.set_threadgroups_per_grid(uint3(16, 9, 1));
meshGridProperties.set_threadgroups_per_grid(uint3(32, 6, 1));
```

The sample shader code is attached. The invocation is trivial enough:

```swift
re.drawMeshThreadgroups(
    MTLSizeMake(1, 1, 1),
    threadsPerObjectThreadgroup: MTLSizeMake(1, 1, 1),
    threadsPerMeshThreadgroup: MTLSizeMake(1, 1, 1)
)
```

For Apple engineers: a bug has been submitted under FB10367407. Mesh shader code: 2d_grid_mesh_shader_hangs.metal

I also have a more complex application where mesh shaders are used to generate sphere geometry: each mesh shader threadgroup generates a single slice of the sphere. Here the problem is similar: once there are more than X slices to render, some of the dispatched mesh threadgroups don't seem to do anything (see screenshot below). But the funny thing is that the geometry is produced, as it occasionally flickers in and out of existence, and if I manually block some threadgroups from running (e.g. with something like if(threadgroup_index > 90) return; in the mesh shader), the "hidden" geometry works! It almost looks as if different mesh shader threadgroups reuse the same memory allocation for storing the output mesh data and the output of some threadgroups is overwritten. I have not submitted this as a bug, since the code is more complex and messy, but I can do so if someone from the Apple team wants to have a look.
1
2
1k
Jun ’22
Cannot create a shared texture with an IOSurface on an Apple Silicon Mac
Trying to call [MTLDevice newTextureWithDescriptor:iosurface:plane:] on an Apple Silicon Mac, where the descriptor specifies MTLStorageModeShared, I get a failed assertion:

```
-[MTLDebugDevice newTextureWithDescriptor:iosurface:plane:]:2387: failed assertion `Texture Descriptor Validation
IOSurface textures must use MTLStorageModeManaged
```

I don't really understand why this limitation exists; MTLStorageModeShared textures are supported on Apple Silicon (despite the documentation at https://developer.apple.com/documentation/metal/mtldevice/1433378-newtexturewithdescriptor?language=objc which claims otherwise).
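For context, a minimal Swift sketch of the call being discussed; the pixel format is illustrative and the surface is assumed to be an existing IOSurface from elsewhere in the app:

```swift
import Metal
import IOSurface

// Wrap an existing IOSurface in a Metal texture. Per the assertion above,
// the validation layer on this machine rejects .shared here and insists
// on .managed for IOSurface-backed textures.
func makeSharedTexture(device: MTLDevice, surface: IOSurfaceRef) -> MTLTexture? {
    let desc = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: .bgra8Unorm,            // illustrative format
        width: IOSurfaceGetWidth(surface),
        height: IOSurfaceGetHeight(surface),
        mipmapped: false)
    desc.storageMode = .shared               // the mode the assertion rejects
    return device.makeTexture(descriptor: desc, iosurface: surface, plane: 0)
}
```

Switching storageMode to .managed silences the assertion according to the error text, though that sidesteps rather than answers the question of why .shared is disallowed.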
9
4
3.6k
Jul ’22
Regarding GameKit samples in apple/unityplugins
I built the GameKit sample in apple/unityplugins for iOS and ran "_localPlayer = await GKLocalPlayer.Authenticate();". It worked fine up to the point where the Game Center popup was displayed. However, OnRealtimeMatchmake() and OnToggleAccessPoint() result in the following errors and do not work properly:

```
MissingMethodException: Default constructor not found for type Apple.GameKit.Multiplayer.GKMatchRequest at System.RuntimeType.CreateInstanceMono
MissingMethodException: Default constructor not found for type Apple.GameKit.GKAccessPoint at System.RuntimeType.CreateInstanceMono
```

Can anyone tell me what the problem is here? The environment is as follows: iPhone SE (iOS 15.5), Unity 2020.3.33f1, Xcode 13.4.1.
4
0
2.2k
Jul ’22
View and Projection Matrix in Metal for a First Person Camera
So I'm trying to make a simple scene with some geometry and a movable camera. So far I've been able to render basic geometry in 2D and to transform that geometry using matrices. Following this, I moved on to the Calculating Primitive Visibility Using Depth Testing sample; also smooth sailing. Then I had my first go at transforming positions between different coordinate spaces. I didn't get very far with my rather blurry memories of OpenGL, although when I compared my view and projection matrices with the ones from OpenGL's glm::lookAt() and glm::perspective() functions there seemed to be no fundamental differences. Figuring Metal does things differently, I browsed the Metal sample code library for a sample containing a first-person camera. The only one I could find was Rendering Terrain Dynamically with Argument Buffers. Luckily it contained code for calculating view and projection matrices, which seemed to differ from my code. But I still have problems.

Problem Description

When the camera is positioned right in front of the geometry, the view as well as the projection matrix produce seemingly accurate results: Camera Position (0, 0, 1); Camera Direction (0, 0, -1). When moving further away, though, parts of the scene are wrongfully culled, notably the ones farther from the camera: Camera Position (0, 0, 2); Camera Direction (0, 0, -1). Rotating the camera also produces confusing results: Camera Position (0, 0, 1); Camera Direction (cos(250°), 0, sin(250°)); yes, I converted to radians.

My Suspicions

The projection isn't converting the vertices from view space to normalised device coordinates correctly. Also, when comparing the first two images, the lower part of the triangle seems to get bigger as the camera moves away, which also doesn't appear right. Obviously the view matrix is not correct either, as I'm pretty sure what's described above isn't supposed to happen.
Code Samples

MainShader.metal:

```metal
#include <metal_stdlib>
#include <Shared/Primitives.h>
#include <Shared/MainRendererShared.h>
using namespace metal;

struct transformed_data {
    float4 position [[position]];
    float4 color;
};

vertex transformed_data vertex_shader(uint vertex_id [[vertex_id]],
                                      constant _vertex *vertices [[buffer(0)]],
                                      constant _uniforms& uniforms [[buffer(1)]]) {
    transformed_data output;
    float3 dir = {0, 0, -1};
    float3 inEye = float3{ 0, 0, 1 }; // position
    float3 inTo = inEye + dir;        // position + direction
    float3 inUp = float3{ 0, 1, 0 };

    float3 z = normalize(inTo - inEye);
    float3 x = normalize(cross(inUp, z));
    float3 y = cross(z, x);
    float3 t = (float3){ -dot(x, inEye), -dot(y, inEye), -dot(z, inEye) };
    float4x4 viewm = float4x4(float4{ x.x, y.x, z.x, 0 },
                              float4{ x.y, y.y, z.y, 0 },
                              float4{ x.z, y.z, z.z, 0 },
                              float4{ t.x, t.y, t.z, 1 });

    float _nearPlane = 0.1f;
    float _farPlane = 100.0f;
    float _aspectRatio = uniforms.viewport_size.x / uniforms.viewport_size.y;
    float va_tan = 1.0f / tan(0.6f * 3.14f * 0.5f);
    float ys = va_tan;
    float xs = ys / _aspectRatio;
    float zs = _farPlane / (_farPlane - _nearPlane);
    float4x4 projectionm = float4x4((float4){ xs,  0,  0, 0 },
                                    (float4){  0, ys,  0, 0 },
                                    (float4){  0,  0, zs, 1 },
                                    (float4){  0,  0, -_nearPlane * zs, 0 });

    float4 projected = (projectionm * viewm) * float4(vertices[vertex_id].position, 1);
    vector_float2 viewport_dim = vector_float2(uniforms.viewport_size);
    output.position = vector_float4(0.0, 0.0, 0.0, 1.0);
    output.position.xy = projected.xy / (viewport_dim / 2);
    output.position.z = projected.z;
    output.color = vertices[vertex_id].color;
    return output;
}

fragment float4 fragment_shader(transformed_data in [[stage_in]]) { return in.color; }
```

These are the vertex definitions:

```swift
let triangle_vertices = [_vertex(position: [ 480.0, -270.0, 1.0], color: [1.0, 0.0, 0.0, 1.0]),
                         _vertex(position: [-480.0, -270.0, 1.0], color: [0.0, 1.0, 0.0, 1.0]),
                         _vertex(position: [   0.0,  270.0, 0.0], color: [0.0, 0.0, 1.0, 1.0])]

// TO-DO: make this use 4 vertices and 6 indices
let quad_vertices = [_vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [ 480.0,  270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0]),
                     _vertex(position: [-480.0, -270.0, 0.5], color: [0.5, 0.5, 0.5, 1.0])]
```

This is the initialisation code for the depth stencil descriptor and state:

```swift
_view.depthStencilPixelFormat = MTLPixelFormat.depth32Float
_view.clearDepth = 1.0
// other render initialisation code
let depth_stencil_descriptor = MTLDepthStencilDescriptor()
depth_stencil_descriptor.depthCompareFunction = MTLCompareFunction.lessEqual
depth_stencil_descriptor.isDepthWriteEnabled = true
depth_stencil_state = try! _view.device!.makeDepthStencilState(descriptor: depth_stencil_descriptor)!
```

So if you have any idea why it's not working, have working code of your own, or know of any public samples containing a working first-person camera, feel free to help me out. Thank you in advance! (Please ignore any spelling or similar mistakes; English is not my primary language.)
3
1
2.7k
Aug ’22
Is it possible to make a Unity iOS app into an iOS widget extension?
HELLO WORLD! I am currently developing an amazing Unity mini game, and development has reached the deployment stage. Using Unity lets me deploy this game on macOS, iOS, Android, and Windows, which is fantastic. However, I really need this game to have an iOS widget extension as a key feature on mobile, just like the "Steve" dinosaur game, which can be played in a widget. I have watched a lot of tutorials and YouTube videos but still can't find a solution that turns a Unity iOS app into a widget. If I really want to make a widget game on iOS, do I need to develop the whole game all over again in Xcode or with GameKit? I'm really hoping there's a way to simply convert the Unity iOS app into a widget.
2
1
1.8k
Sep ’22