Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under the Graphics & Games topic

USDZ files with cameras can't be opened correctly on iOS 18.2/iPadOS 18.2
Hi experts, when I open a USDZ file that contains perspective cameras with the Files app on iOS 18.2/iPadOS 18.2, I can't see anything, while the same file works well on iOS 18.1/iPadOS 18.1. On the other hand, when I open a USDZ file that contains orthographic cameras on iOS 18.1 or iOS 18.2, the scene is stuck. Could you help solve these issues, please? Thanks.
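To help narrow down whether the asset or the viewer is at fault, here is a minimal SceneKit check (a sketch, not a fix; the inspectCameras function name is mine, and the url parameter is assumed to point at the USDZ from the post). SceneKit can read USDZ directly, so if this succeeds and lists the expected cameras, the blank view is more likely a Quick Look/Files regression than a broken asset.

import SceneKit

// Load the USDZ with SceneKit and print every camera node it contains.
func inspectCameras(in url: URL) throws {
    let scene = try SCNScene(url: url, options: nil)
    scene.rootNode.enumerateChildNodes { node, _ in
        if let camera = node.camera {
            print(node.name ?? "unnamed camera node",
                  "orthographic:", camera.usesOrthographicProjection)
        }
    }
}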
Replies: 4 · Boosts: 2 · Views: 642 · Created: Dec ’24
Disable IBL in iOS AR Quick Look
https://developer.apple.com/documentation/arkit/arkit_in_ios/specifying_a_lighting_environment_in_ar_quick_look How can I disable it, or at least use a custom texture that's just black? I don't see the purpose of having the real-time environment probe that captures IBL but still adding this fake studio IBL that you can't remove...
Replies: 0 · Boosts: 0 · Views: 343 · Created: Dec ’24
Jurassic World Evolution 2 Likely Fails Due to Missing Tiled Resources Support
I’ve been trying to run Jurassic World Evolution 2 using the Game Porting Toolkit on macOS, but the game doesn’t launch and crashes immediately. Based on the error and research, it seems the issue is related to missing support for D3D12_TILED_RESOURCES_TIER_2 in the Metal API. If this is the case, does anyone know if support for tiled resources is planned for future updates of the toolkit? Or are there any potential workarounds for bypassing this limitation?
Replies: 1 · Boosts: 0 · Views: 723 · Created: Dec ’24
Snap to Item with AssistiveTouch does not work in a game built from Unity
Hi all, I have been trying to get AssistiveTouch's Snap to Item to work for a Unity game built using Apple's Core & Accessibility API. Switch Control recognises these buttons; however, eye tracking will not snap to them. The case in which it needs to snap is when an external eye-tracking device is connected and utilises AssistiveTouch and its Snap to Item feature. All buttons in the game have an AccessibilityNode with the trait 'Button' on them and an appropriate label, which, following the documentation and comments on the developer forum, should allow them to be recognised by Snap to Item. This is not the case: devices (iPads and iPhones) do not recognise the buttons as snap targets. Does anyone know why this is, and whether it is a bug?
Replies: 0 · Boosts: 0 · Views: 580 · Created: Dec ’24
Metal-cpp-extensions isn't working inside frameworks
I am making a framework in C++ using metal-cpp, basically a small game engine. I am also using the metal-cpp-extensions provided in LearnMetalCPP to make applications work. For one of my classes, I needed to include AppKit.hpp in a public header file, so I moved it and its associated headers (NSApplication.hpp, NSMenu.hpp, etc.) from Project to Public in Build Phases' Headers. However, it started giving me the error "cast of C pointer type 'void *' to Objective-C pointer type 'Class' requires a bridged cast" at several points in the AppKit headers. The errors don't appear when AppKit and its associated headers are in the Project headers, or when they are in the Private headers and no header imports them. I thought that disabling Objective-C ARC and enabling "Use __bridge casts outside of ARC" in Build Settings would solve it, but it didn't budge. I assumed the answer wouldn't involve actively changing the headers, but even when I put __bridge before the problematic casts, the compiler didn't recognize __bridge. How do I solve this? And why does it only happen with Public and not Project headers?
Replies: 2 · Boosts: 0 · Views: 798 · Created: Dec ’24
How to use MTKTextureLoader to load PNG data
I am trying to load some PNG data with MTKTextureLoader newTextureWithData, but the result shows wrong in the alpha area. Here is the code. I have an image URL; after it downloads successfully, I try to use the data or UIImagePNGRepresentation(image), and they all show wrong.
UIImage *tempImg = [UIImage imageWithData:data];
CGImageRef cgRef = tempImg.CGImage;
MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:device];
id<MTLTexture> temp1 = [loader newTextureWithData:data options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
NSData *tempData = UIImagePNGRepresentation(tempImg);
id<MTLTexture> temp2 = [loader newTextureWithData:tempData options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
id<MTLTexture> temp3 = [loader newTextureWithCGImage:cgRef options:@{MTKTextureLoaderOptionSRGB: @(NO), MTKTextureLoaderOptionTextureUsage: @(MTLTextureUsageShaderRead), MTKTextureLoaderOptionTextureCPUCacheMode: @(MTLCPUCacheModeWriteCombined)} error:nil];
}] resume];
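For reference, the same load in Swift looks roughly like the sketch below (the loadTexture wrapper is mine; the options mirror the ones in the post). One common source of "wrong" alpha, which is an assumption here rather than something the post confirms, is that pixels decoded via UIImage/CGImage are premultiplied by alpha, so a shader that expects straight alpha will render dark fringes at the edges.

import MetalKit
import UIKit

// Hypothetical helper: load a texture from PNG data with the same flags as the post.
func loadTexture(device: MTLDevice, data: Data) throws -> MTLTexture {
    let loader = MTKTextureLoader(device: device)
    let options: [MTKTextureLoader.Option: Any] = [
        .SRGB: false,
        .textureUsage: MTLTextureUsage.shaderRead.rawValue,
        .textureCPUCacheMode: MTLCPUCacheMode.writeCombined.rawValue
    ]
    // Note: UIImage-decoded PNG pixels are typically premultiplied by alpha,
    // so either sample/blend accordingly in the shader or un-premultiply on the CPU.
    return try loader.newTexture(data: data, options: options)
}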
Replies: 5 · Boosts: 0 · Views: 619 · Created: Dec ’24
M1 GPU violates atomic_thread_fence across threadgroups
I have an M1 Pro with a 16-core GPU. When I run a shader with 8193 threads, atomic_thread_fence is violated across the boundary between thread 8191 (the last thread in the 8th threadgroup) and thread 8192 (the first thread in the 9th threadgroup). I've attached the Metal and Swift files, but I'll repost the relevant kernel here. It's a function that launches N threads to iterate through a binary tree from the leaves, where the first thread to reach a parent terminates and the second one populates it with the sum of the node's two children.
// clang-format off
void sum(device const int& size,
         device const int* __restrict__ in,
         device int* __restrict__ out,
         device atomic_int* visited,
         uint i [[thread_position_in_grid]]) {
  // clang-format on
  int val = in[i];
  uint cur = (size + i - 1);
  out[cur] = val;
  atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
  cur = (cur - 1) / 2;
  int proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  while (proceed == 1) {
    uint left = 2 * cur + 1;
    uint right = 2 * cur + 2;
    uint val_left = out[left];
    uint val_right = out[right];
    uint val_cur = val_left + val_right;
    out[cur] = val_cur;
    if (cur == 0) {
      break;
    }
    cur = (cur - 1) / 2;
    atomic_thread_fence(mem_flags::mem_device, memory_order_seq_cst);
    proceed = atomic_fetch_add_explicit(&visited[cur], 1, memory_order_relaxed);
  }
}
What I'm observing is that thread 8192 hits the atomic_fetch_add first and terminates, while thread 8191 hits it second (observes that thread 8192 had incremented it by 1) and proceeds into the loop. Thread 8191 reads out[16383] (which it populated with 8191) and out[16384] (which thread 8192 populated with 8192 prior to the atomic_thread_fence). Instead of reading 8192 from out[16384], though, it reads 0. Maybe I'm missing something, but this seems like a pretty clear violation of the atomic_thread_fence, which (I thought) was supposed to guarantee that the write from thread 8192 to out[16384] would be visible to any thread observing the effects of the following atomic_fetch_add. Is atomic_fetch_add not a store operation? Modifying it to an atomic_store or atomic_exchange still results in the bug. Adding another atomic_thread_fence between the atomic_fetch_add and the reading of out also doesn't change anything. I only begin to observe this at grid sizes of 8193 and upwards. That's 9 threadgroups per grid, which I assume could be related to my M1 Pro GPU having 16 cores. Running the same example on an A17 Pro GPU doesn't show any of this behavior up through a tested grid size of 4194303 (2^22-1), at which point testing larger grid sizes starts to run into other issues, so I can't test anything larger. Removing the atomic_thread_fences on both the M1 and the A17 causes the test to fail at much smaller grid sizes, as expected.
sum.metal
main.swift
Replies: 2 · Boosts: 0 · Views: 527 · Created: Dec ’24
Tile Shaders performance when writing to tile texture vs. resolve texture
I am working on a custom resolve tile shader for a client. I see a big difference in performance depending on where we write to:
1- the resolve texture of the color attachment
2- a read-write tile shader texture set via [renderEncoder setTileTexture: myResolvedTexture]
Option 2 is more than twice as slow as option 1. Our compute shader writes to 4 UAVs, so just using the resolve texture entry is not possible. Why is there such a difference when no more data is being written? Can option 2 be as fast as option 1? I can demonstrate the issue in a modified version of the Multisample code sample.
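For readers comparing the two paths, here is a rough Swift sketch of what each write target looks like on the API side (the function and texture names are placeholders of mine, and the tile kernel itself is omitted); it only restates the setup being compared, not an explanation of the performance gap.

import Metal

// Option 1: let the render pass resolve the MSAA attachment into a texture directly.
func configureResolveAttachment(_ descriptor: MTLRenderPassDescriptor,
                                msaaTexture: MTLTexture,
                                resolvedTexture: MTLTexture) {
    descriptor.colorAttachments[0].texture = msaaTexture
    descriptor.colorAttachments[0].resolveTexture = resolvedTexture
    descriptor.colorAttachments[0].storeAction = .storeAndMultisampleResolve
}

// Option 2: bind a read-write texture for the tile shader to write into.
func bindTileTexture(_ encoder: MTLRenderCommandEncoder, tileTexture: MTLTexture) {
    encoder.setTileTexture(tileTexture, index: 0)
}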
Replies: 5 · Boosts: 0 · Views: 562 · Created: Dec ’24
Safe Places to Find Dependable App Developers
Hello! Brand new to the Apple developer community, so hello everyone! I'm a game developer; we just launched our first game on PC, and I'm looking to port it to iOS. Time is something I'm kind of short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Any good sites you'd recommend for finding programmers to port your game? It's fairly simple - just a visual novel. Any and all suggestions welcome! All the best! Elijah
Replies: 3 · Boosts: 0 · Views: 605 · Created: Dec ’24
BillboardComponent causing Model Entity tap recognition issues on iOS 18
Hi, when I attach a BillboardComponent to my anchor entities, I am no longer able to retrieve the tapped entity, because the collision shapes of the entity are messed up by always orienting it towards the camera. And it does not update the collision shapes: if I tap anywhere that is not my model entity, I still get a hit out of nowhere. I tried updating the collision shapes of the entity every frame:
for child in existingPassport.mainEntity!.children {
    child.generateCollisionShapes(recursive: true)
}
However, nothing comes of it, and it is not a smart solution in the first place because it is too heavy to recreate the shapes every frame. I am using the usual AR view controller setup, which works fine when I comment out the BillboardComponent line:
private func setupTapRecognizer() {
    let tapRecognizer = UITapGestureRecognizer(target: self, action: #selector(handleTap))
    arView.addGestureRecognizer(tapRecognizer)
}

@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    print("handle tap URL 1")
    let location = recognizer.location(in: arView)
    if let entity = arView.entity(at: location) {
        print("handle tap URL 2")
        // Assuming each entity has a URL stored in a component
        if let urlComponent = entity.components[URLComponent.self] {
            webViewPresenter?.presentFullScreenWebView(url: urlComponent.url)
            print("handle tap URL: \(urlComponent.url)")
        }
    }
}
How should we tackle this issue on iOS 18? Thanks!
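One possible workaround, offered as an assumption rather than a confirmed fix: keep the collision shape on a parent entity that never billboards, and attach the BillboardComponent only to the visual child, so the shapes used by arView.entity(at:) stay stable. A minimal sketch (makeTappableBillboard and its parameters are hypothetical names of mine):

import RealityKit

// Wrap the billboarded visuals in a fixed-orientation parent that owns the collision.
func makeTappableBillboard(visualEntity: ModelEntity) -> Entity {
    let holder = Entity()
    holder.addChild(visualEntity)

    // Only the child that draws faces the camera; the parent keeps its orientation.
    visualEntity.components.set(BillboardComponent())

    // Collision lives on the non-billboarded parent, sized to the visual bounds
    // (the box is centered on the parent's origin in this sketch).
    let bounds = visualEntity.visualBounds(relativeTo: holder)
    holder.components.set(CollisionComponent(shapes: [.generateBox(size: bounds.extents)]))
    return holder
}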
Replies: 1 · Boosts: 0 · Views: 696 · Created: Dec ’24
How to use imageblock_slice
Is there a working example of imageblock_slice with implicit layout somewhere? I get a compilation error when I write this:
imageblock_slice color_slice = img_blk.slice(frag->color);
Error: No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'
It seems the syntax has changed since the Imageblocks presentation https://developer.apple.com/videos/play/tech-talks/603/ I tried supplying the struct type of the image block between <> but it still does not work.
Replies: 1 · Boosts: 0 · Views: 662 · Created: Dec ’24
RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085 I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround. It's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
Replies: 1 · Boosts: 0 · Views: 624 · Created: Dec ’24
AVAssetReaderTrackOutput: reading HDR frames from a video file
Hello, I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:
// prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
    throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]
// check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
    readerSettings[AVVideoAllowWideColorKey as String] = true
    readerSettings[kCVPixelBufferPixelFormatTypeKey as String] = kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
    isHDRDetected = true
}
// add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else { throw SomeErrorCode.error }
assetReader.add(output)
guard assetReader.startReading() else { throw SomeErrorCode.error }
// add writer output settings
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
]
let finalPath = "//some URL path"
let assetWriter = try AVAssetWriter(outputURL: finalPath, fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video) else {
    throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange : kCVPixelFormatType_32ARGB,
    kCVPixelBufferWidthKey as String: 1920,
    kCVPixelBufferHeightKey as String: 1080,
]
// create asset adaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else { throw SomeErrorCode.error }
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else { throw SomeErrorCode.error }
assetWriter.startSession(atSourceTime: CMTime.zero)
// prepare transfer session
var session: VTPixelTransferSession? = nil
guard VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session) == noErr,
      let session else {
    throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else { throw SomeErrorCode.error }
// read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
    autoreleasepool {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else { return }
        // this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at the 23:58 timestamp
        let attachment = [
            kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
            kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
            kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
        ]
        CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)
        // now convert to CIImage with HDR data
        let image = CIImage(cvPixelBuffer: imageBuffer)
        let cropped = "" // here perform some actions like cropping, flipping, etc.,
        // and preserve these changes by converting the extent to CGImage first
        // (this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at the 24:30 timestamp)
        guard let cgImage = context.createCGImage(
            cropped, from: cropped.extent, format: .RGBA16,
            colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
        else { continue }
        // finally convert it back to CIImage
        let newScaledImage = CIImage(cgImage: cgImage)
        // now write it to a new pixelBuffer
        let pixelBufferAttributes: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
        ]
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(newScaledImage.extent.width),
            Int(newScaledImage.extent.height),
            kCVPixelFormatType_420YpCbCr10BiPlanarFullRange,
            pixelBufferAttributes as CFDictionary,
            &pixelBuffer)
        guard let pixelBuffer else { continue }
        context.render(newScaledImage, to: pixelBuffer) // context is a CIContext reference
        var pixelTransferBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
        guard let pixelTransferBuffer else { continue }
        // Transfer the image to the pixel buffer.
        guard VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer) == noErr else {
            continue
        }
        // finally append to taggedBuffer
    }
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
The resulting video does not have the correct color compared to the original video; it turns out too bright. If I play around with the attachment values, it can come out either too dim or too bright, but never exactly right. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 can produce proper video output, but then I can't convert it to a CIImage, and so I can't do the CIImage operations that I need, mainly cropping and resizing.
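One thing that may be worth checking, purely as an assumption on my part and not a confirmed fix, is how the CIContext used for context.render is configured: for PQ/HDR content the working format and color spaces matter, roughly as sketched below (hdrContext is a placeholder name; whether this resolves the brightness shift depends on the rest of the pipeline above).

import CoreImage
import CoreGraphics

// A CIContext set up for PQ/HDR rendering: half-float working format preserves
// values above 1.0, and the working/output spaces are both BT.2100 PQ.
let pqColorSpace = CGColorSpace(name: CGColorSpace.itur_2100_PQ)!
let hdrContext = CIContext(options: [
    .workingFormat: CIFormat.RGBAh,
    .workingColorSpace: pqColorSpace,
    .outputColorSpace: pqColorSpace
])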
Replies: 0 · Boosts: 0 · Views: 654 · Created: Dec ’24
What are the CAMetalLayer.nextDrawable threading rules?
What evidence exists that it's safe to call nextDrawable() on CAMetalLayer off the main thread? I have seen developers claiming that it's OK, but the official docs are silent on the topic. Attempting to do so with Strict Concurrency Checking set to Complete complains that CAMetalLayer is not @Sendable. I want to call it off the main thread since there doesn't seem to be any way to prevent it from blocking the UI for up to a second. I have read hints and allegations that this won't happen if you avoid asking for too many drawables, but that doesn't seem to be true 100% of the time in my experience. Supposing it is allowed, I wonder how races are handled such as when the layer's size is changed on the main thread, or if the layer is removed from the layer hierarchy.
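To make the question concrete, here is a sketch of the pattern being asked about, not a claim that it is officially supported: all drawing work, including nextDrawable(), hops to a dedicated serial queue, and size changes are forwarded from the main thread so they are ordered with the drawable requests (OffMainRenderer and the queue label are hypothetical names of mine).

import QuartzCore
import Metal

final class OffMainRenderer {
    private let renderQueue = DispatchQueue(label: "render")
    private let layer: CAMetalLayer
    private let commandQueue: MTLCommandQueue

    init(layer: CAMetalLayer, device: MTLDevice) {
        self.layer = layer
        self.commandQueue = device.makeCommandQueue()!
    }

    // Called from the main thread on resize; applied on the render queue so the
    // change is serialized with respect to nextDrawable() calls.
    func resize(to size: CGSize) {
        renderQueue.async { self.layer.drawableSize = size }
    }

    func drawFrame() {
        renderQueue.async {
            guard let drawable = self.layer.nextDrawable(),
                  let commandBuffer = self.commandQueue.makeCommandBuffer() else { return }
            // ... encode work targeting drawable.texture ...
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }
}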
Replies: 0 · Boosts: 0 · Views: 512 · Created: Dec ’24
Converting JPG to JP2 using the ImageMagick library on iOS
I am trying to convert a JPG image to JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension changes to .jp2, the format of the image does not seem to change: the output is still being treated as a JPG file, not as a true JP2. Here is the code:
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        return;
    }
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        return;
    }
    NSLog(@"Image successfully converted.");
}
@end
Replies: 1 · Boosts: 0 · Views: 427 · Created: Dec ’24
Performance Regression: In iOS 18.2, 3D rotation in volume rendering is not as smooth as in iOS 18.1
We are a team of engineers working on an app intended to visualize medical images. The situations where the app is used involve time-critical decision making for acute clinical conditions, so stability and performance are of utmost importance and can directly help timely treatment. The app uses multiple libraries and tools such as vtk, webgl, opengl, webkit, and gl-matrix. The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that on iOS 18.1 the volume rotation is much smoother than on the latest iOS 18.2. Earlier, we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application. We have taken reference from the cornerstone.js code, and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d
Steps to reproduce:
1. Load the above-mentioned test example on an iPhone running 18.2 using Safari.
2. Perform volume rendering using the provided dataset.
3. Measure the time taken by the volume for each rotate or drag action.
4. Repeat the same steps on an iPhone running 18.1 for comparison.
Additional information: device models tested: iPhone 12, iPhone 13, iPhone 14. iOS versions with the issue: 18.2 and 18.3 (beta). I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know. Thank you.
Replies: 4 · Boosts: 6 · Views: 978 · Created: Dec ’24
How to clip ModelEntity
I am trying to model something similar to an odometer in RealityKit, where 3D numbers scroll up or down, as they increase or decrease, within a container entity. Is there a way for an Entity to clip its children so that anything that extends beyond its dimensions is not rendered?
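RealityKit does not expose a clip-to-bounds switch that I know of, so one possible workaround, offered as an assumption rather than a documented approach, is to hide the digits that scroll past the window behind entities using OcclusionMaterial (the makeOdometerContainer function and windowSize parameter are hypothetical):

import RealityKit

// Sketch: an "odometer window" with occluder boxes above and below the visible slot.
// Occluders render nothing themselves but hide virtual content behind them, so
// digits scrolling past the window edges disappear; size the boxes so their front
// faces sit in front of the digit entities.
func makeOdometerContainer(windowSize: SIMD3<Float>) -> Entity {
    let container = Entity()
    let occluderMesh = MeshResource.generateBox(size: windowSize)
    for offsetY in [windowSize.y, -windowSize.y] {
        let occluder = ModelEntity(mesh: occluderMesh, materials: [OcclusionMaterial()])
        occluder.position = [0, offsetY, 0]
        container.addChild(occluder)
    }
    return container
}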
Replies: 1 · Boosts: 0 · Views: 519 · Created: Dec ’24
Texture Definitions for MPSSVGF Denoise
I am trying to use the SVGF denoiser to denoise my ray-traced shadows (and also other textures later). I do get a smoothed image, but with wonky denoising. I need the depth-normal texture and motion texture for SVGF and assume that these are badly filled in my case. However, neither the linked documentation nor the WWDC19 video says how they should be defined. I am looking for answers to:
Is depth in the red or the alpha channel of the depth-normal texture?
Are the normals in screen space?
Is depth linear? Is it distance or the z coordinate in view space? Or even logarithmically scaled or something else?
Are the motion vectors supposed to be in pixels per frame? What is the orientation of the axes? Is y up or down?
Are there other restrictions on the formats?
Also, the linked code did not help me (I have not found any SVGF in it so far; also, all the code is in Objective-C++, not Swift, but that's a different topic). So how should I fill these textures? Can someone point me to documentation where these kinds of questions are answered?
Replies: 0 · Boosts: 0 · Views: 547 · Created: Dec ’24
metal-cpp syntax for MTL::Buffer float2 parameter
I'm trying to pass a buffer of float2 items from the CPU to the GPU. In the kernel, I can declare a parameter for the buffer, for example:
device const float2* values
How do I specify float2 as the element type for the MTL::Buffer? I managed to get the code to work by "cheating": defining a simple class that has the same data members as a float2, but there is probably a better way.
class Coord_f {
public:
    float x{0.0f};
    float y{0.0f};
};
Then I allocate the buffer like this:
NS::TransferPtr(device->newBuffer(n_elements * sizeof(Coord_f), MTL::ResourceStorageModeManaged))
The headers for metal-cpp do not appear to define vector types like float2, but I'm doubtless missing something. Thanks.
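For comparison, the only real requirement on the CPU side is that the element type match Metal's float2 layout (8 bytes, 8-byte aligned); Apple's simd library (<simd/simd.h>, simd::float2) provides that in C++, and SIMD2<Float> is the Swift analogue. A Swift sketch of the equivalent allocation, mirroring the managed storage mode used in the post (makeFloat2Buffer is a hypothetical helper name):

import Metal

// Allocate a buffer whose stride matches the shader-side float2 (8 bytes per element).
func makeFloat2Buffer(device: MTLDevice, values: [SIMD2<Float>]) -> MTLBuffer? {
    return device.makeBuffer(bytes: values,
                             length: values.count * MemoryLayout<SIMD2<Float>>.stride,
                             options: .storageModeManaged) // managed as in the post (macOS); use .storageModeShared on iOS
}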
Replies: 2 · Boosts: 0 · Views: 648 · Created: Dec ’24