Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Post · Replies · Boosts · Views · Activity

OpenGL on future iPhones and Macs?
Hello everyone! After taking some time to think about which graphics API to proceed with, I figured OpenGL will be my first, since I'm completely new to graphics programming. As you may find in my last post, I was talking about MoltenVK and considering just using Metal instead, along with the demos I found using Metal. So for now (and I know this has been said MANY TIMES) Apple deprecated OpenGL, but I wish to use it because I'm new to graphics programming and want to develop an app (a rendering engine, really) for the iPhone 14 Pro Max and macOS Ventura 13.2 (I think this is the latest). So what do you guys think? Can I still use OpenGL ES on the 14 Pro Max, along with OpenGL 4+ on the latest macOS, even though it is deprecated?
15
0
5.6k
Feb ’23
Reality Composer Exporting USDZ files - slow animation
It seems that something must have changed in the Reality Composer export feature to USDZ. Importing any animated .usdz file into Reality Composer and then exporting it reduces the playback frame rate to about 30%. The same file imported and then exported as a .reality file plays back just fine. Is anyone else experiencing this issue? It's happening for every USDZ file imported, and across two different Apple laptops running the software.
2
1
1.6k
Feb ’23
Core Haptics Unity Errors In Xcode
I am trying to get the Apple Core Haptics plug-in to work with Unity but am having issues when I try to build the project. I created a completely blank project with just the Core and CoreHaptics plug-ins installed. I then put in a single script (below), which is a replication of the script shared in the WWDC22 video on this subject. When I try to build and run that project I get a series of "Undefined symbol:" errors in Xcode, affecting the CoreHaptics and UIFeedbackGenerator frameworks. If I remove the script and just build an empty project with Core and CoreHaptics installed, the build runs successfully. What am I doing wrong or missing that is causing these errors?

```csharp
using Apple.CoreHaptics;
using System.Collections;
using UnityEngine;

public class Haptics : MonoBehaviour
{
    private CHHapticEngine _hapticEngine;
    private CHHapticPatternPlayer _hapticPlayer;
    [SerializeField] private AHAPAsset _hapticAsset;

    private void PrepareHaptics()
    {
        _hapticEngine = new CHHapticEngine();
        _hapticEngine.Start();
        _hapticPlayer = _hapticEngine.MakePlayer(_hapticAsset.GetPattern());
    }

    private void Play()
    {
        _hapticPlayer.Start();
    }
}
```
3
0
1.3k
Mar ’23
Metal Shader Library - invalid UUID
Hi, I am generating a Metal library that I build using the command-line tools on macOS for iphoneos, following the instructions here. Then I serialise it to a binary blob that I load at runtime, which seems to work fine, as everything renders as expected. But when I do a frame capture and open up a shader function, Xcode tries to load the symbols and fails. I tried pointing it to the directory (and the file) containing the symbols file, but it never resolves them. In the bottom half of the Import External Sources dialogue there is one entry in the Library | Debug Info section: the library name is Library 0x21816b5dc0, and below Debug Info it says Invalid UUID. The validation layer doesn't flag any invalid behaviour, so I am a bit lost and not sure what to try next.
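One thing that may be worth double-checking (a sketch of the Xcode 14-era toolchain flow, with hypothetical file names; adapt the SDK and paths to your setup): frame capture resolves shader symbols from a separate .metallibsym file that metal-dsymutil extracts from a library compiled with source recording enabled. If the library was built without -frecord-sources, the capture tooling can end up with no usable debug-info UUID to match against.

```shell
# Compile the shader source with embedded source/debug info
# (Shaders.metal is a hypothetical file name)
xcrun -sdk iphoneos metal -frecord-sources -c Shaders.metal -o Shaders.air
xcrun -sdk iphoneos metal -frecord-sources -o Shaders.metallib Shaders.air

# Extract the symbols into Shaders.metallibsym and strip the
# sources from the shipping library
xcrun -sdk iphoneos metal-dsymutil -flat -remove-source Shaders.metallib
```

Pointing the frame-capture symbol import at the resulting .metallibsym (rather than the .metallib) is what the workflow expects, as far as I understand it.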
1
0
870
Mar ’23
Modify the ProRAW pixel buffer to write a modified DNG
Hello, in one of my apps I'm trying to modify the pixel buffer from a ProRAW capture and then write the modified DNG. This is what I try to do. After capturing a ProRAW photo, I work in the delegate function:

```swift
func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) { ... }
```

In here I can access photo.pixelBuffer and get its base address:

```swift
guard let buffer = photo.pixelBuffer else { return }
CVPixelBufferLockBaseAddress(buffer, [])
let pixelFormat = CVPixelBufferGetPixelFormatType(buffer)

// I check that the pixel format corresponds with ProRAW.
// This is successful, the code enters the if block.
if (pixelFormat == kCVPixelFormatType_64RGBALE) {
    guard let pointer = CVPixelBufferGetBaseAddress(buffer) else { return }

    // We have 16 bits per component, 4 components
    let count = CVPixelBufferGetWidth(buffer) * CVPixelBufferGetHeight(buffer) * 4
    let mutable = pointer.bindMemory(to: UInt16.self, capacity: count)

    // As a test, I want to replace all pixels with 65000 to get a white image
    let finalBufferArray: [Float] = Array.init(repeating: 65000, count: count)
    vDSP_vfixu16(finalBufferArray, 1, mutable, 1, vDSP_Length(finalBufferArray.count))

    // I create a vImage pixel buffer. Note that I'm referencing photo.pixelBuffer
    // to be sure that I modified the underlying pixelBuffer of the AVCapturePhoto object
    let imageBuffer = vImage.PixelBuffer<vImage.Interleaved16Ux4>(referencing: photo.pixelBuffer!, planeIndex: 0)

    // Inspect the CGImage
    let cgImageFormat = vImage_CGImageFormat(
        bitsPerComponent: 16,
        bitsPerPixel: 64,
        colorSpace: CGColorSpace(name: CGColorSpace.displayP3)!,
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.last.rawValue | CGBitmapInfo.byteOrder16Little.rawValue))!
    let cgImage = imageBuffer.makeCGImage(cgImageFormat: cgImageFormat)!

    // I send the CGImage to the main view controller. This is successful:
    // I can see a white image when rendering the CGImage into a UIImage.
    // This lets me think that I successfully modified photo.pixelBuffer.
    firingFrameDelegate?.didSendCGImage(image: cgImage)
}

// Now I try to write data. Unfortunately, this does not work:
// photo.fileDataRepresentation() writes the data corresponding to the
// original, unmodified pixelBuffer.
if let photoData = photo.fileDataRepresentation() {
    // Sending the data to the view controller and rendering it in a UIImage
    // displays the original photo, not the modified pixelBuffer
    firingFrameDelegate?.didSendData(data: photoData)
    thisPhotoData = photoData
}
CVPixelBufferUnlockBaseAddress(buffer, [])
```

The same happens if I try to write the data to disk: the DNG file displays the original photo and not the data corresponding to the modified photo.pixelBuffer. Do you know why this code does not work? Do you have any ideas on how I can modify the ProRAW pixel buffer so that I can write the modified buffer into a DNG file? My goal is to write a modified file, so I'm not sure I can use Core Image or vImage to output a ProRAW file.
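As an aside, the fill the post performs with vDSP_vfixu16 (writing 65000 into every 16-bit component of a 64-bit RGBA buffer) is easy to sanity-check outside the Apple stack. A small NumPy sketch, with hypothetical dimensions standing in for CVPixelBufferGetWidth/Height:

```python
import numpy as np

# Hypothetical dimensions; a real ProRAW buffer is much larger
width, height = 4, 2
components = 4  # RGBA, 16 bits each, matching kCVPixelFormatType_64RGBALE

# Simulate the pixel buffer and fill it, as vDSP_vfixu16 does in the post
buffer = np.zeros(width * height * components, dtype=np.uint16)
buffer[:] = 65000

# Every component is now 65000, which renders as near-white in 16-bit
assert buffer.min() == 65000 and buffer.max() == 65000
```

The value 65000 fits comfortably in uint16 (max 65535), so no clamping occurs during the fixed-point conversion.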
1
0
1.1k
Mar ’23
I will quit Apple development: no profit from my developer account, unfortunately, while I do have profit from the other companies I work with!
Hello community, this post is to show my complete dissatisfaction with Apple, especially regarding developing games for Apple's platforms. There is a lack of support for it, for example for some new gaming technologies, and still there is no profit or return for all the work and money invested to develop for it. I will close the journey with Apple very unsatisfied. I'm going to give opportunities with my business to other platforms that are really worth it and give support to all new technologies in gaming. And yes, Apple destroyed other game makers with their new services like Arcade, and there seems to be no future for gaming on Apple's platforms. I quit. Goodbye and good luck to everyone.
3
0
1.4k
Mar ’23
How To Resize An Image and Retain Wide Color Gamut
I'm trying to resize NSImages on macOS. I'm doing so with an extension like this:

```swift
extension NSImage {
    // MARK: Resizing

    /// Resize the image to the given size.
    ///
    /// - Parameter size: The size to resize the image to.
    /// - Returns: The resized image.
    func resized(toSize targetSize: NSSize) -> NSImage? {
        let frame = NSRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height)
        guard let representation = self.bestRepresentation(for: frame, context: nil, hints: nil) else {
            return nil
        }
        let image = NSImage(size: targetSize, flipped: false, drawingHandler: { (_) -> Bool in
            return representation.draw(in: frame)
        })
        return image
    }
}
```

The problem is, as far as I can tell, the image that comes out of the drawing handler has lost the color profile of the original image rep. I'm testing it with a wide color gamut image, attached; it becomes pure red when examining the result. If this were UIKit, I guess I'd use UIGraphicsImageRenderer and select the right UIGraphicsImageRendererFormat.Range, so I suspect I need to use NSGraphicsContext here to do the rendering, but I can't see what I would set on it to make it use wide color, or how I'd use it.
2
0
850
Apr ’23
Code Signing an app including a binary Metallib
Hi! I am currently trying to upload my iOS app to App Store Connect. Unfortunately, code signing fails with the following error: "Code object is not signed at all.", referencing a binary Metallib (created with metal-tt and an mtlp-json script). I am using Xcode's automatically managed signing and the binary metallib is located inside the "Resources" directory of a framework that I am including with "Embed and sign" in the app. Could anyone give some guidance on what I need to change to make code signing work? Thank you.
4
0
615
May ’23
Jax-Metal - error: failed to legalize operation 'mhlo.cholesky'
After building jaxlib as per the instructions and installing jax-metal, I tested an existing model which works fine using the CPU (and the GPU on Linux), and I get the following error:

```
jax._src.traceback_util.UnfilteredStackTrace: jaxlib.xla_extension.XlaRuntimeError: UNKNOWN: /Users/adam/Developer/Pycharm Projects/gpy_flow_test/sparse_gps.py:66:0: error: failed to legalize operation 'mhlo.cholesky'
/Users/adam/Developer/Pycharm Projects/gpy_flow_test/sparse_gps.py:66:0: note: called from
/Users/adam/Developer/Pycharm Projects/gpy_flow_test/sparse_gps.py:66:0: note: see current operation: %406 = "mhlo.cholesky"(%405) {lower = true} : (tensor<50x50xf32>) -> tensor<50x50xf32>
```

I have tried to reproduce this with the following minimal example, but it works fine:

```python
from jax import jit
import jax.numpy as jnp
import jax.random as jnr
import jax.scipy as jsp

key = jnr.PRNGKey(0)
A = jnr.normal(key, (100, 100))

def calc_cholesky_decomp(test_matrix):
    psd_test_matrix = test_matrix @ test_matrix.T
    col_decomp = jsp.linalg.cholesky(psd_test_matrix, lower=True)
    return col_decomp

calc_cholesky_decomp(A)

jitted_calc_cholesky_decomp = jit(calc_cholesky_decomp)
jitted_calc_cholesky_decomp(A)
```

I am unable to attach the full error message as it exceeds all the restrictions placed on uploads attached to a post. I am more than happy to try a more complex model if you have any suggestions.
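Not a fix for the Metal lowering failure, but while debugging a backend like this it can help to validate any Cholesky factor that does come back against a plain CPU reference. A NumPy sketch (the 50x50 matrix here is a stand-in for the tensor shape in the error, not the poster's model):

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(50, 50))
# Shift the diagonal so the matrix is safely positive definite
psd = M @ M.T + 50 * np.eye(50)

# Lower-triangular factor, as in jsp.linalg.cholesky(..., lower=True)
L = np.linalg.cholesky(psd)

# L @ L.T should reconstruct the input up to floating-point error
assert np.allclose(L @ L.T, psd)
# and L should actually be lower triangular
assert np.allclose(L, np.tril(L))
```

The same two assertions can be run on a factor computed through jax-metal (converted with np.asarray) to check whether a backend produces numerically wrong results rather than failing outright.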
7
3
1.1k
Jun ’23
iOS 17 AR QuickLook: Support for multiple UV channels
Is there support for using multiple UV channels in AR QuickLook in iOS17? One important use case would be to put a tiling texture in an overlapping tiling UV set while mapping Ambient Occlusion to a separate unwrapped non-overlapping UV set. This is very important to author 3D content combining high-resolution surface detail and high-quality Ambient Occlusion data while keeping file size to a minimum.
2
0
1.4k
Jun ’23
Sample Code for visionOS Metal?
There is a project tutorial for visionOS Metal rendering in immersive mode here (https://developer.apple.com/documentation/compositorservices/drawing_fully_immersive_content_using_metal?language=objc), but there is no downloadable sample project. Would Apple please provide sample code? The set-up is non-trivial.
4
1
2.1k
Jun ’23
Game porting toolkit build error
I followed the instructions from this page https://www.applegamingwiki.com/wiki/Game_Porting_Toolkit#Game_compatibility_list to install the toolkit. Everything worked fine until I ran the command brew -v install apple/apple/game-porting-toolkit. It threw the following error:

```
Error: apple/apple/game-porting-toolkit 1.0 did not build
Logs:
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/02.make.cc
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/01.configure
  /Users/user_name/Library/Logs/Homebrew/game-porting-toolkit/02.make
Do not report this issue to Homebrew/brew or Homebrew/homebrew-core!
Error: You are using macOS 14.
We do not provide support for this pre-release version.
It is expected behaviour that some formulae will fail to build in this pre-release version.
It is expected behaviour that Homebrew will be buggy and slow.
Do not create any issues about this on Homebrew's GitHub repositories.
Do not create any issues even if you think this message is unrelated.
Any opened issues will be immediately closed without response.
Do not ask for help from Homebrew or its maintainers on social media.
You may ask for help in Homebrew's discussions but are unlikely to receive a response.
Try to figure out the problem yourself and submit a fix as a pull request.
We will review it but may or may not accept it.
```

Did anyone have the same issue and find a fix for it?
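For context, the quoted message is Homebrew declining to support a pre-release macOS rather than a toolkit-specific failure. The toolkit's own instructions build everything through an x86_64 (Rosetta) Homebrew; a sketch of that flow, for anyone retracing their steps (on a macOS beta the unsupported-OS warning may persist, since Homebrew explicitly does not support pre-release versions):

```shell
# Install Rosetta 2, then start an x86_64 shell
softwareupdate --install-rosetta
arch -x86_64 zsh

# Install the Intel (x86_64) Homebrew; it lives under /usr/local
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Inside the x86_64 shell, this should print /usr/local/bin/brew,
# not the arm64 copy under /opt/homebrew
which brew

# Tap Apple's formulae and build the toolkit
brew tap apple/apple http://github.com/apple/homebrew-apple
brew -v install apple/apple/game-porting-toolkit
```

If `which brew` reports the arm64 installation, the build will be attempted against the wrong architecture, which is a separate and common failure mode.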
27
3
25k
Jun ’23
Original Reality Composer (non pro) in Vision OS
When I create a USDZ file in the original Reality Composer (non-pro) and view it in the visionOS simulator, the transforms and rotations don't look the same. For example, a simple Tap and Flip behaviour does not rotate the same way in visionOS. Should we regard Reality Composer as discontinued software and only work with Reality Composer Pro? Hopefully Apple will combine the features from the original RC into the new RC Pro!
1
1
746
Jul ’23
MTKView fullscreen stutter
Hello, when I do Metal drawing into an MTKView in full screen, there seems to be an issue with frame scheduling. There is visible stutter, and the Metal HUD shows the frame rate jittering about. This happens in Ventura and Sonoma b2 on my MacBook Pro. Here's a really minimal example, not even actively drawing anything, just presenting the drawable:

```objc
#import "ViewController.h"
#import <Metal/Metal.h>
#import <MetalKit/MetalKit.h>

@interface ViewController () <MTKViewDelegate>
@property (nonatomic, weak) IBOutlet MTKView *mtkView;
@property (nonatomic) id<MTLDevice> device;
@property (nonatomic) id<MTLCommandQueue> commandQueue;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    _device = MTLCreateSystemDefaultDevice();
    _mtkView.device = _device;
    _mtkView.delegate = self;
    _commandQueue = [_device newCommandQueue];
}

- (void)drawInMTKView:(MTKView *)view {
    MTLRenderPassDescriptor *viewRPD = view.currentRenderPassDescriptor;
    if (viewRPD) {
        id<MTLCommandBuffer> commandBuffer = [_commandQueue commandBuffer];
        [commandBuffer presentDrawable:view.currentDrawable];
        [commandBuffer commit];
    }
}

- (void)mtkView:(MTKView *)view drawableSizeWillChange:(CGSize)size {
    NSLog(@"%@", NSStringFromSize(size));
}

@end
```

It looks like there's some collision between the display and render timers, or something. What gives? I would like to be able to render stutter-free on this very nice machine; how would I go about that?
12
0
1.4k
Jul ’23
High CPU usage with CoreImage vs Metal
I am processing CVPixelBuffers received from the camera using both Metal and Core Image, and comparing the performance. The only processing done is taking a source pixel buffer, applying crop and affine transforms, and saving the result to another pixel buffer. What I notice is that CPU usage is as high as 50% when using Core Image but only 20% when using Metal. The profiler shows most of the time is spent in CIContext render:

```swift
let cropRect = AVMakeRect(aspectRatio: CGSize(width: dstWidth, height: dstHeight), insideRect: srcImage.extent)
var dstImage = srcImage.cropped(to: cropRect)

let translationTransform = CGAffineTransform(translationX: -cropRect.minX, y: -cropRect.minY)

var transform = CGAffineTransform.identity
transform = transform.concatenating(CGAffineTransform(translationX: -(dstImage.extent.origin.x + dstImage.extent.width/2), y: -(dstImage.extent.origin.y + dstImage.extent.height/2)))
transform = transform.concatenating(translationTransform)
transform = transform.concatenating(CGAffineTransform(translationX: (dstImage.extent.origin.x + dstImage.extent.width/2), y: (dstImage.extent.origin.y + dstImage.extent.height/2)))
dstImage = dstImage.transformed(by: translationTransform)

let scale = max(dstWidth/(dstImage.extent.width), CGFloat(dstHeight/dstImage.extent.height))
let scalingTransform = CGAffineTransform(scaleX: scale, y: scale)
transform = CGAffineTransform.identity
transform = transform.concatenating(scalingTransform)
dstImage = dstImage.transformed(by: transform)

if flipVertical {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: 1, y: -1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: 0, y: dstImage.extent.size.height))
}
if flipHorizontal {
    dstImage = dstImage.transformed(by: CGAffineTransform(scaleX: -1, y: 1))
    dstImage = dstImage.transformed(by: CGAffineTransform(translationX: dstImage.extent.size.width, y: 0))
}

var dstBounds = CGRect.zero
dstBounds.size = dstImage.extent.size
_ciContext.render(dstImage, to: dstPixelBuffer!, bounds: dstImage.extent, colorSpace: srcImage.colorSpace)
```

Here is how the CIContext was created:

```swift
_ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!, options: [CIContextOption.cacheIntermediates: false])
```

I want to know if I am doing anything wrong, and what could be done to lower CPU usage in Core Image.
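A side note on the transform chain in the post: composing 2D affine transforms such as "translate to center, scale, translate back" is just 3x3 matrix multiplication, which is easy to verify independently of Core Image. A NumPy sketch with illustrative numbers (not the poster's pipeline):

```python
import numpy as np

def translation(tx, ty):
    """3x3 homogeneous translation matrix."""
    return np.array([[1, 0, tx],
                     [0, 1, ty],
                     [0, 0, 1]], dtype=float)

def scaling(s):
    """3x3 homogeneous uniform-scale matrix."""
    return np.array([[s, 0, 0],
                     [0, s, 0],
                     [0, 0, 1]], dtype=float)

# Scale about the center (cx, cy): translate the center to the origin,
# scale, then translate back. Rightmost matrix applies first.
cx, cy = 100.0, 50.0
M = translation(cx, cy) @ scaling(2.0) @ translation(-cx, -cy)

# The center maps to itself...
assert np.allclose(M @ np.array([cx, cy, 1.0]), [cx, cy, 1.0])
# ...while other points move away from it at twice the distance
assert np.allclose(M @ np.array([cx + 10, cy, 1.0]), [cx + 20, cy, 1.0])
```

Checking the concatenation order against such a reference is a quick way to rule out transform-math bugs before profiling CIContext itself.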
4
1
1.4k
Jul ’23
Diablo IV - Entering new areas, opening Character menu causes RAM Memory Overflow and screen freezes/crashes
I have a MacBook Pro 16 (M1 Pro, 16 GB RAM), macOS 14 Sonoma Beta 4, and GPT 1.0.2, and I am currently testing Diablo IV v1.04 (latest update on 08.08.2023). The game is awesome and runs at 2560x1440 at 50-60 fps on my 4K LG display over HDMI very smoothly, until... see Problem 1 and Problem 2. Graphics details are at full, smooth shadows, and even FSR2 works perfectly. Diablo IV needs around 9-11 GB of RAM on my system. There are no background activities running! Problem 1: exploring new areas causes a RAM buffer overflow, freezes the screen, and crashes; a system reboot is needed. Problem 2: when trying to buy/sell an item, or just opening the character menu, the game freezes; a game restart is necessary. While running the HUD in game I can see what's going on, and I could analyze that when Problem 1 or Problem 2 happens, RAM usage jumps from 9-11 GB to 16-18 GB. This is much more than the system can deliver and causes the screen freezes and crashes. Either a whole system reboot or, mostly, just a game restart is needed. It would be very nice if Apple could fix/adjust GPT for Diablo IV in the next versions. Many thanks in advance.
4
2
3.1k
Jul ’23
Metal Shader Converter Help
So I have been working on using the new Metal Shader Converter to create a graphics abstraction between D3D12 and Metal. One thing I cannot wrap my head around is how someone would do bindless buffers in Metal. Take this for example... the Metal Shader Converter easily converts this code to Metal:

```hlsl
ByteAddressBuffer bindless_buffers[] : register(space1);

v2f vertMain(vertIn in) {
    Mesh m = bindless_buffers[1].Load(0);
    v2f out;
    out.pos = in.pos * m.pos;
    return out;
}
```

And using the new Shader Converter one can easily create a DescriptorTable from a root signature that holds this unbounded array of ByteAddressBuffers. But when you try to fill an argument buffer using the DescriptorTableEntry struct, it looks like you can only place one resource at a time and cannot even access the array in the descriptor. For textures this is okay, because you can supply an MTLTexture that holds other textures. But you can't have an MTLBuffer hold different kinds of buffers. Is it possible to do this ByteAddressBuffer style of fully bindless rendering in Metal? The Shader Converter allows it, but I don't know how to get the API to supply the shaders with the correct data... Any help would be GREATLY appreciated, and all the work I do will be posted online for others to learn about using Metal :)
1
0
819
Jul ’23