Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.

Posts under the Graphics & Games topic

How to use imageblock_slice
Is there a working example of imageblock_slice with implicit layout somewhere? I get a compilation error when I write this:

```metal
imageblock_slice color_slice = img_blk.slice(frag->color);
```

Error:

```
No matching member function for call to 'slice'
candidate template ignored: couldn't infer template argument 'E'
candidate function template not viable: requires 2 arguments, but 1 was provided
Too few template arguments for class template 'imageblock_slice'
```

It seems the syntax has changed since the Imageblocks presentation (https://developer.apple.com/videos/play/tech-talks/603/). I tried supplying the struct type of the imageblock between <>, but it still does not work.
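For reference, a minimal sketch of a declaration that should compile under the current Metal Shading Language specification, where imageblock_slice takes two template arguments (element type and layout) and slice() infers E from the member passed in — note it is the member's type, not the whole struct's, that goes between the angle brackets. The struct and kernel names here are hypothetical:

```metal
#include <metal_stdlib>
using namespace metal;

struct FragData {
    half4 color [[color(0)]]; // hypothetical imageblock element
};

kernel void process_imageblock(imageblock<FragData, imageblock_layout_implicit> img_blk,
                               ushort2 tid [[thread_position_in_threadgroup]])
{
    threadgroup_imageblock FragData *frag = img_blk.data(tid);
    // Both template arguments are spelled out: the element type E and the layout L.
    imageblock_slice<half4, imageblock_layout_implicit> color_slice =
        img_blk.slice(frag->color);
}
```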
1 reply · 0 boosts · 637 views · Dec ’24
RealityKit fails with EXC_BAD_ACCESS at CMClockGetAnchorTime in the simulator
Starting with iOS 18.0 beta 1, I've noticed that RealityKit frequently crashes in the simulator when an app launches and presents an ARView. I was able to create a small sample app with repro steps that demonstrates the issue, and I've submitted feedback: FB16144085. I've included a crash log with the feedback. If possible, I'd appreciate it if an Apple engineer could investigate and suggest a workaround; it's awkward to be restricted to the iOS 17 simulator, which does not exhibit this behavior. Please let me know if there's anything I can do to help. Thank you.
1 reply · 0 boosts · 569 views · Dec ’24
Reading HDR frames from a video file with AVAssetReaderTrackOutput
Hello, I am trying to read video frames using AVAssetReaderTrackOutput. Here is the sample code:

```swift
//prepare assets
let asset = AVURLAsset(url: some_url)
let assetReader = try AVAssetReader(asset: asset)
guard let videoTrack = try await asset.loadTracks(withMediaCharacteristic: .visual).first else {
    throw SomeErrorCode.error
}
var readerSettings: [String: Any] = [
    kCVPixelBufferIOSurfacePropertiesKey as String: [String: String]()
]

//check if HDR video
var isHDRDetected: Bool = false
let hdrTracks = try await asset.loadTracks(withMediaCharacteristic: .containsHDRVideo)
if hdrTracks.count > 0 {
    readerSettings[AVVideoAllowWideColorKey as String] = true
    readerSettings[kCVPixelBufferPixelFormatTypeKey as String] = kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
    isHDRDetected = true
}

//add output to assetReader
let output = AVAssetReaderTrackOutput(track: videoTrack, outputSettings: readerSettings)
guard assetReader.canAdd(output) else { throw SomeErrorCode.error }
assetReader.add(output)
guard assetReader.startReading() else { throw SomeErrorCode.error }

//add writer output settings
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
]
let finalPath = "//some URL path"
let assetWriter = try AVAssetWriter(outputURL: URL(fileURLWithPath: finalPath), fileType: AVFileType.mov)
guard assetWriter.canApply(outputSettings: videoOutputSettings, forMediaType: AVMediaType.video) else {
    throw SomeErrorCode.error
}
let assetWriterInput = AVAssetWriterInput(mediaType: .video, outputSettings: videoOutputSettings)
let sourcePixelAttributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: isHDRDetected
        ? kCVPixelFormatType_420YpCbCr10BiPlanarFullRange
        : kCVPixelFormatType_32ARGB,
    kCVPixelBufferWidthKey as String: 1920,
    kCVPixelBufferHeightKey as String: 1080,
]

//create assetAdaptor
let assetAdaptor = AVAssetWriterInputTaggedPixelBufferGroupAdaptor(
    assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: sourcePixelAttributes)
guard assetWriter.canAdd(assetWriterInput) else { throw SomeErrorCode.error }
assetWriter.add(assetWriterInput)
guard assetWriter.startWriting() else { throw SomeErrorCode.error }
assetWriter.startSession(atSourceTime: CMTime.zero)

//prepare transfer session
var session: VTPixelTransferSession? = nil
guard VTPixelTransferSessionCreate(allocator: kCFAllocatorDefault, pixelTransferSessionOut: &session) == noErr,
      let session else {
    throw SomeErrorCode.error
}
guard let pixelBufferPool = assetAdaptor.pixelBufferPool else { throw SomeErrorCode.error }

//read through frames
while let nextSampleBuffer = output.copyNextSampleBuffer() {
    autoreleasepool {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(nextSampleBuffer) else { return }
        //this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 23:58 timestamp
        let attachment = [
            kCVImageBufferYCbCrMatrixKey: kCVImageBufferYCbCrMatrix_ITU_R_2020,
            kCVImageBufferColorPrimariesKey: kCVImageBufferColorPrimaries_ITU_R_2020,
            kCVImageBufferTransferFunctionKey: kCVImageBufferTransferFunction_SMPTE_ST_2084_PQ,
        ]
        CVBufferSetAttachments(imageBuffer, attachment as CFDictionary, .shouldPropagate)

        //now convert to CIImage with HDR data
        let image = CIImage(cvPixelBuffer: imageBuffer)
        let cropped = image //here perform some actions like cropping, flipping, etc.
        //and preserve these changes by converting the extent to CGImage first
        //this part copied from (https://developer.apple.com/videos/play/wwdc2023/10181) at 24:30 timestamp
        guard let cgImage = context.createCGImage(
            cropped, from: cropped.extent, format: .RGBA16,
            colorSpace: CGColorSpace(name: CGColorSpace.itur_2100_PQ)!)
        else { return }

        //finally convert it back to CIImage
        let newScaledImage = CIImage(cgImage: cgImage)

        //now write it to a new pixelBuffer
        let pixelBufferAttributes: [String: Any] = [
            kCVPixelBufferCGImageCompatibilityKey as String: true,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
        ]
        var pixelBuffer: CVPixelBuffer?
        CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(newScaledImage.extent.width),
            Int(newScaledImage.extent.height),
            kCVPixelFormatType_420YpCbCr10BiPlanarFullRange,
            pixelBufferAttributes as CFDictionary,
            &pixelBuffer)
        guard let pixelBuffer else { return }
        context.render(newScaledImage, to: pixelBuffer) //context is a CIContext reference

        var pixelTransferBuffer: CVPixelBuffer?
        CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelTransferBuffer)
        guard let pixelTransferBuffer else { return }

        // Transfer the image to the pixel buffer.
        guard VTPixelTransferSessionTransferImage(session, from: pixelBuffer, to: pixelTransferBuffer) == noErr
        else { return }
        //finally append to taggedBuffer
    }
}
assetWriterInput.markAsFinished()
await assetWriter.finishWriting()
```

The resulting video does not match the color of the original; it turns out too bright. If I play around with the attachment values it can come out too dim or too bright, but never exactly like the original. What am I missing in my setup? I did find that kCVPixelFormatType_4444AYpCbCr16 produces proper video output, but then I can't convert it to a CIImage, so I can't do the CIImage operations I need — mainly cropping and resizing.
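One possible culprit, offered as an assumption rather than a confirmed diagnosis: if the writer's outputSettings carry no color properties, AVAssetWriter may tag the output as SDR, and a PQ-encoded signal displayed through an SDR transfer function looks too bright. A minimal sketch of writer settings that explicitly tag the output as BT.2020 + PQ:

```swift
import AVFoundation

// Writer settings that tag the file as BT.2020 primaries + SMPTE ST 2084 (PQ),
// matching the attachments set on each pixel buffer above.
let videoOutputSettings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.hevc,
    AVVideoWidthKey: 1920,
    AVVideoHeightKey: 1080,
    AVVideoColorPropertiesKey: [
        AVVideoColorPrimariesKey: AVVideoColorPrimaries_ITU_R_2020,
        AVVideoTransferFunctionKey: AVVideoTransferFunction_SMPTE_ST_2084_PQ,
        AVVideoYCbCrMatrixKey: AVVideoYCbCrMatrix_ITU_R_2020,
    ],
]
```

With the output tagged this way, the per-buffer attachments and the file-level color tags at least agree, which removes one source of mismatch.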
0 replies · 0 boosts · 583 views · Dec ’24
What are the CAMetalLayer.nextDrawable threading rules?
What evidence exists that it's safe to call nextDrawable() on CAMetalLayer off the main thread? I have seen developers claiming that it's OK, but the official docs are silent on the topic. Attempting to do so with Strict Concurrency Checking set to Complete complains that CAMetalLayer is not @Sendable. I want to call it off the main thread since there doesn't seem to be any way to prevent it from blocking the UI for up to a second. I have read hints and allegations that this won't happen if you avoid asking for too many drawables, but that doesn't seem to be true 100% of the time in my experience. Supposing it is allowed, I wonder how races are handled, such as when the layer's size is changed on the main thread, or the layer is removed from the layer hierarchy.
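Not an official answer, but a pattern commonly seen in the wild: confine all drawable acquisition and presentation to one dedicated serial queue, so nextDrawable() is at least never called concurrently. A sketch — the class and queue label are hypothetical, and this sidesteps rather than answers the Sendable complaint:

```swift
import QuartzCore
import Metal

final class Renderer {
    private let renderQueue = DispatchQueue(label: "com.example.render") // hypothetical label
    private let layer: CAMetalLayer
    private let commandQueue: MTLCommandQueue

    init(layer: CAMetalLayer, device: MTLDevice) {
        self.layer = layer
        self.commandQueue = device.makeCommandQueue()!
    }

    func draw() {
        renderQueue.async { [self] in
            // Serialized: only this queue ever touches the layer's drawables,
            // so a blocking nextDrawable() stalls this queue, not the UI.
            guard let drawable = layer.nextDrawable(),
                  let commandBuffer = commandQueue.makeCommandBuffer() else { return }
            // ... encode work targeting drawable.texture ...
            commandBuffer.present(drawable)
            commandBuffer.commit()
        }
    }
}
```

Under Complete strict-concurrency checking you would still have to vouch for CAMetalLayer yourself (e.g., an @unchecked Sendable wrapper), which is precisely the guarantee the docs don't give.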
0 replies · 0 boosts · 498 views · Dec ’24
Converting JPG to JP2 using the ImageMagick library on iOS
I am trying to convert a JPG image to JP2 (JPEG 2000) format using the ImageMagick library on iOS. However, although the file extension changes to .jp2, the format of the image does not seem to change: the output image is still treated as a JPG file, not as a true JP2. Here is the code:

```objc
- (IBAction)convertButtonClicked:(id)sender {
    NSString *jpgPath = [[NSBundle mainBundle] pathForResource:@"Example" ofType:@"jpg"];
    NSString *tempFilePath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Converted.jp2"];
    MagickWand *wand = NewMagickWand();
    if (MagickReadImage(wand, [jpgPath UTF8String]) == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error reading image: %s", description);
        MagickRelinquishMemory(description);
        return;
    }
    if (MagickSetFormat(wand, "JP2") == MagickFalse) {
        char *description;
        ExceptionType severity;
        description = MagickGetException(wand, &severity);
        NSLog(@"Error setting image format to JP2: %s", description);
        MagickRelinquishMemory(description);
    }
    if (MagickWriteImage(wand, [tempFilePath UTF8String]) == MagickFalse) {
        NSLog(@"Error writing JP2 image");
        return;
    }
    NSLog(@"Image successfully converted.");
}
@end
```
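One likely explanation, offered as an assumption: the ImageMagick build linked into the app was compiled without the OpenJPEG delegate, so the JP2 coder is simply unavailable and the write falls back. The wand API can report which coders the linked library actually supports; a sketch:

```objc
#import <Foundation/Foundation.h>
#import <MagickWand/MagickWand.h>

static void LogJP2Support(void) {
    MagickWandGenesis();
    size_t count = 0;
    // Lists the coder names matching the pattern in this particular build.
    char **formats = MagickQueryFormats("JP2*", &count);
    if (formats != NULL && count > 0) {
        for (size_t i = 0; i < count; i++) {
            NSLog(@"Supported coder: %s", formats[i]);
        }
        MagickRelinquishMemory(formats);
    } else {
        NSLog(@"No JP2 coders available - the OpenJPEG delegate is probably missing from this build.");
    }
    MagickWandTerminus();
}
```

If "JP2" is absent from the list, rebuilding ImageMagick for iOS with OpenJPEG enabled would be the fix rather than any change to the conversion code.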
1 reply · 0 boosts · 414 views · Dec ’24
Performance regression: 3D rotation in volume rendering is not as smooth in iOS 18.2 as in iOS 18.1
We are a team of engineers working on an app intended to visualize medical images. The situations where the app is used involve time-critical decision-making for acute clinical conditions, so stability and performance are of utmost importance and directly support timely treatment. The app uses multiple libraries and tools such as vtk, webgl, opengl, webkit, and gl-matrix. The problem can be described as follows: when a 3D volume is rendered in the app and we try to rotate it, the rotation is slow, unresponsive, and laggy. Specifically, we have noticed that volume rotation is much smoother on iOS 18.1 than on the latest iOS 18.2. Earlier we faced a somewhat similar issue with iOS 17, but it improved in iOS 18.1. This performance regression is affecting the user experience in our healthcare application. We have taken our reference from the cornerstone.js code, and you can reproduce the issue using the following example: https://www.cornerstonejs.org/live-examples/volumeviewport3d

Steps to reproduce:

1. Load the above test example on an iPhone running iOS 18.2 using Safari.
2. Perform volume rendering using the provided dataset.
3. Measure the time taken for each rotate or drag action.
4. Repeat the same steps on an iPhone running iOS 18.1 for comparison.

Additional information:

Device models tested: iPhone 12, iPhone 13, iPhone 14
iOS versions with issue: 18.2, 18.3 (beta)

I would appreciate any insights or suggestions on how to address this performance regression. If additional information is needed, please let me know. Thank you.
4 replies · 6 boosts · 939 views · Dec ’24
How to clip ModelEntity
I am trying to model something similar to an odometer in RealityKit, where 3D numbers scroll up or down, as they increase or decrease, within a container entity. Is there a way for an Entity to clip its children so that anything that extends beyond its dimensions is not rendered?
1 reply · 0 boosts · 502 views · Dec ’24
Texture Definitions for MPSSVGF Denoise
I am trying to use the SVGF denoiser to denoise my ray-traced shadows (and later other textures as well). I do get a smoothed image, but with wonky denoising. I need the depth-normal texture and motion texture for the SVGF and assume that mine are badly filled. However, neither the linked documentation nor the WWDC19 video explains how they should be defined. I am looking for answers to:

- Is depth in the red or the alpha channel of the depth-normal texture?
- Are the normals in screen space?
- Is depth linear? Is it distance or the z coordinate in view space? Or even logarithmically scaled, or something else?
- Are the motion vectors supposed to be in pixels per frame? What is the orientation of the axes — is y up or down?
- Are there other restrictions on the formats?

The linked code did not help me either (I have not found any SVGF in it so far; also, all the code is in Objective-C++, not Swift, but that's a different topic). So how should I fill these textures? Can someone point me to documentation where these kinds of questions are answered?
0 replies · 0 boosts · 528 views · Dec ’24
metal-cpp syntax for MTL::Buffer float2 parameter
I'm trying to pass a buffer of float2 items from the CPU to the GPU. In the kernel I can declare a parameter for the buffer, for example:

```metal
device const float2* values
```

How do I specify float2 as the type for the MTL::Buffer? I managed to get the code to work by "cheating": defining a simple class that has the same data members as a float2, but there is probably a better way.

```cpp
class Coord_f {
public:
    float x{0.0f};
    float y{0.0f};
};
```

Then allocating like this:

```cpp
NS::TransferPtr(device->newBuffer(n_elements * sizeof(Coord_f), MTL::ResourceStorageModeManaged))
```

The metal-cpp headers do not appear to define vector types like float2, but I'm doubtless missing something. Thanks.
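metal-cpp itself indeed defines no vector types, but the simd library's C++ types are laid out to match the Metal shading language's, so sizeof(simd::float2) is the usual way to size such a buffer. A sketch (the function name is mine):

```cpp
#include <simd/simd.h>
#include <Metal/Metal.hpp>

// simd::float2 has the same size and alignment as float2 in the Metal shading
// language, so the buffer contents can be read directly in the kernel as
// `device const float2* values`.
NS::SharedPtr<MTL::Buffer> makePointBuffer(MTL::Device *device, size_t n_elements) {
    return NS::TransferPtr(device->newBuffer(n_elements * sizeof(simd::float2),
                                             MTL::ResourceStorageModeManaged));
}
```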
2 replies · 0 boosts · 618 views · Dec ’24
Can you reduce draw calls with flattenedClone() and PNGs with transparency?
Hello, I want to use a flattenedClone() node for the trees in my landscape to reduce draw calls. If my texture uses a JPG file the result is good; if I use a PNG with transparency, there is no draw-call reduction. Can I use PNGs with transparency with flattenedClone()? Below is the structure I'm using:

```swift
for child in scene_temp.rootNode.childNodes {
    let newtree = tree.clone()
    sum_tree.addChildNode(newtree)
}
let sum_tree_flat = sum_tree.flattenedClone()
scene.rootNode.addChildNode(sum_tree_flat)
```

If the tree texture contains a JPG file (diffuse), I get a draw-call reduction (< 200 for my complete map) as expected, but no transparency in my leaves. If I use a PNG file, my draw calls are not reduced (> 2000), but I do get my trees with the leaves in transparency. I've tried replacing tree.clone() with tree.flattenedClone(), with no results. Thanks for your help.
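One theory, strictly an assumption: SceneKit will not batch blended (transparent) materials the way it batches opaque ones. If the leaves only need cutout transparency — each pixel fully opaque or fully invisible — discarding low-alpha fragments via a shader modifier while leaving the material opaque might preserve the batching. A sketch, with a hypothetical texture name:

```swift
import SceneKit
import UIKit

// Alpha cutout instead of alpha blending: the material stays opaque as far as
// SceneKit's render-order and batching logic is concerned, and fully
// transparent texels are simply discarded in the fragment stage.
let material = SCNMaterial()
material.diffuse.contents = UIImage(named: "tree_leaves.png") // hypothetical asset
material.shaderModifiers = [
    .fragment: """
    if (_output.color.a < 0.5) {
        discard_fragment();
    }
    """
]
```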
1 reply · 0 boosts · 648 views · Dec ’24
RealityView session not terminating when leaving view on iOS
Everything works fine, except when tapping the navigation Back link and returning to the previous view: the AR session inside RealityView does not terminate. The green camera-indicator dot stays on, the environment is still being scanned, and if the package has audio in it, the audio keeps playing, albeit panned hard to the right channel. I have no issues terminating QuickLook or ARSCNView. I have a simple NavigationLink opening the RealityView:

```swift
NavigationLink(destination: MyRealityView()) {
    Text("Open AR")
}

struct MyRealityView: View {
    var body: some View {
        RealityView { content in
            // Create horizontal plane anchor for the content
            let anchor = AnchorEntity(.plane(.horizontal, classification: .floor, minimumBounds: [0.5, 0.5]))
            let scene = await loadEntity(named: "Scene")

            // Add model to anchor
            anchor.addChild(scene!)
            content.add(anchor)

            // View settings
            content.camera = .spatialTracking
        } placeholder: {
            ProgressView()
        }
        .onDisappear {
            //print("RealityView is disappearing. Cleanup actions here.")
        }
        .edgesIgnoringSafeArea(.all)
        // Activate onTap from Reality Composer Pro
        .gesture(TapGesture().targetedToAnyEntity().onEnded { value in
            _ = value.entity.applyTapForBehaviors()
        })
    }
}
```

I have experimented with several ways of trying to close it and can't figure it out. I have tried State variables and custom Back buttons, and I was also trying to work with pause(), but I can't get that to function either. Anyone else have this issue or know of a solution? What am I missing?
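A workaround worth trying, though nothing here confirms it: own the tracking session explicitly with SpatialTrackingSession instead of letting content.camera = .spatialTracking start one implicitly, then stop it in onDisappear. A sketch:

```swift
import SwiftUI
import RealityKit

struct MyRealityView: View {
    @State private var trackingSession = SpatialTrackingSession()

    var body: some View {
        RealityView { content in
            // Explicitly own the tracking session so it can be stopped later.
            let config = SpatialTrackingSession.Configuration(tracking: [.plane])
            _ = await trackingSession.run(config)
            content.camera = .spatialTracking
            // ... add anchors/entities as before ...
        }
        .onDisappear {
            // Stopping the session should release the camera (green dot).
            trackingSession.stop()
        }
    }
}
```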
1 reply · 1 boost · 526 views · Jan ’25
GCDualSenseAdaptiveTrigger API set trigger mode not working
I'm trying to add support for the PS5 DualSense controller. When I try to use the API from here: https://developer.apple.com/documentation/gamecontroller/gcdualsenseadaptivetrigger?language=objc none of it works. Am I missing anything? The code is like this:

```objc
if ([controller.extendedGamepad isKindOfClass:[GCDualSenseGamepad class]]) {
    GCDualSenseGamepad *dualSenseGamePad = (GCDualSenseGamepad *)controller.extendedGamepad;
    auto funcSetEffectTrigger = [](TriggerEffectParams &params, GCDualSenseAdaptiveTrigger *trigger) {
        if (params.m_mode == TriggerEffectMode::Off) {
            [trigger setModeOff];
            NSLog(@"setModeOff trigger.mode:%d", trigger.mode);
        } else if (params.m_mode == TriggerEffectMode::Feedback) {
            [trigger setModeFeedbackWithStartPosition:0.2f resistiveStrength:0.5f];
        } else if (params.m_mode == TriggerEffectMode::Weapon) {
            [trigger setModeWeaponWithStartPosition:0.2f endPosition:0.4f resistiveStrength:0.5f];
        } else if (params.m_mode == TriggerEffectMode::Vibration) {
            [trigger setModeVibrationWithStartPosition:position amplitude:amplitude frequency:frequency];
        }
    };
    if (L2) {
        funcSetEffectTrigger(params, dualSenseGamePad.leftTrigger);
    }
    if (R2) {
        funcSetEffectTrigger(params, dualSenseGamePad.rightTrigger);
    }
}
```

I've also tested adding the "Game Controllers" capability to the target; still not working. I can't find anything else in the documentation or forums, and I have no idea what else to try.
5 replies · 0 boosts · 643 views · Jan ’25
How to create a CGContext with 10 bits per component?
I’m trying to create a CGContext that matches my screen using the following code:

```swift
let context = CGContext(
    data: pixelBuffer,
    width: pixelWidth,        // 3456
    height: pixelHeight,      // 2234
    bitsPerComponent: 10,     // PixelEncoding = "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB"
    bytesPerRow: bytesPerRow, // 13824
    space: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    bitmapInfo: CGImageAlphaInfo.none.rawValue
        | CGImagePixelFormatInfo.RGBCIF10.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
```

But it fails with this error:

```
CGBitmapContextCreate: unsupported parameter combination:
  RGB 10 bits/component, integer 13824 bytes/row
  kCGImageAlphaNone kCGImageByteOrder16Little kCGImagePixelFormatRGBCIF10
Valid parameters for RGB color space model are:
  16 bits per pixel,   5 bits per component, kCGImageAlphaNoneSkipFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaNoneSkipFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaNoneSkipLast
  32 bits per pixel,   8 bits per component, kCGImageAlphaPremultipliedFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaPremultipliedLast
  32 bits per pixel,  10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
  64 bits per pixel,  16 bits per component, kCGImageAlphaPremultipliedLast
  64 bits per pixel,  16 bits per component, kCGImageAlphaNoneSkipLast
  64 bits per pixel,  16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
  64 bits per pixel,  16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
  128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
  128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
```

Why is it unsupported when it appears to match the sixth option (32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little)?
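As a fallback — this sidesteps the 10-bit question rather than answering it — one of the combinations the error message itself lists as valid is a 64-bpp, 16-bit-per-component half-float context, which also covers extended-range color. A sketch:

```swift
import CoreGraphics

// 64 bpp, 16 bits/component, float components: listed as valid in the error above.
let width = 3456, height = 2234
let context = CGContext(
    data: nil,
    width: width,
    height: height,
    bitsPerComponent: 16,
    bytesPerRow: width * 8, // 4 components x 2 bytes each
    space: CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!,
    bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
        | CGBitmapInfo.floatComponents.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
```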
3 replies · 0 boosts · 665 views · Jan ’25
Entity cross multiple portals at once?
If I have one portal on the ceiling and one on the floor, can a tall Entity cross multiple portals at once? Will the opposing portal directions cause it to fail? No matter what I try for the crossingMode and clippingMode of the PortalComponent I can only get it to fully work for one portal at a time. I have tried flipping the normals for the crossingMode and clippingMode planes. I have also tried creating a ceiling portal plane with inverted normals. It seems like whatever Entity is passing through a portal has one portal it wants to deal with at a time and that's it. My other option is to create portals using occlusion but I prefer the simplest way.
1 reply · 0 boosts · 464 views · Jan ’25
Comparing colors of two ModelEntities
I want to compare the colors of two model entities (spheres). How can I do it? The method I'm currently trying to apply is as follows:

```swift
if let controlMaterial = controlsphere.model?.materials.first as? SimpleMaterial,
   case let .color(controlColor) = controlMaterial.baseColor, controlColor == .green {
    // Flip target sphere colour
    if let targetMaterial = targetsphere.model?.materials.first as? SimpleMaterial,
       case let .color(targetColor) = targetMaterial.baseColor, targetColor == .blue {
        targetsphere.model?.materials = [SimpleMaterial(color: .green, isMetallic: false)] // Change to |1⟩
    } else {
        targetsphere.model?.materials = [SimpleMaterial(color: .blue, isMetallic: false)] // Change to |0⟩
    }
}
```

This property (baseColor) was deprecated in iOS 15.0 in favor of 'color', but I cannot compare the color values to each other. 👾
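A sketch of the non-deprecated route, assuming the spheres use SimpleMaterial: its color property exposes a plain tint color, which can be compared directly. Note that UIColor equality is exact, so this works best when the materials were built from the same color constants (.green, .blue):

```swift
import RealityKit
import UIKit

// Compare two SimpleMaterials by their tint (the replacement for baseColor).
func haveSameTint(_ a: ModelEntity, _ b: ModelEntity) -> Bool {
    guard let matA = a.model?.materials.first as? SimpleMaterial,
          let matB = b.model?.materials.first as? SimpleMaterial else { return false }
    return matA.color.tint == matB.color.tint
}
```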
1 reply · 0 boosts · 621 views · Jan ’25
Operation not permitted loading image in SDL2 with Xcode
Hello, I am making a project in SDL, and with it I am using SDL_image. I am doing all of this in Xcode. I've been able to initialize everything just fine, but issues spring up when I try to load an image. When I give the code a path to look for an image, I get this:

```
Unable to load image! IMG_Error: Couldn't open [Insert image path here]: Operation not permitted
```

Keep in mind "Unable to load image" is a general error I put in the code should loading the image fail; the specific error, which I retrieved with IMG_GetError(), is what we really need to know. I've seen that this can occur if a program does not have full disk access, so I tried giving Xcode full disk access, but that didn't work and I still get the same error.
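A guess at the cause, not confirmed: the app is sandboxed, so absolute paths outside its container are rejected with exactly this error. Bundling the image as a resource and resolving it relative to SDL_GetBasePath() avoids absolute paths entirely; a sketch:

```c
#include <SDL.h>
#include <SDL_image.h>
#include <stdio.h>

/* Load an image that ships inside the app bundle's resources. */
SDL_Surface *load_bundled_image(const char *name) {
    char path[1024];
    char *base = SDL_GetBasePath(); /* points at the bundle's Resources dir on macOS */
    if (base == NULL) return NULL;
    SDL_snprintf(path, sizeof(path), "%s%s", base, name);
    SDL_free(base);

    SDL_Surface *surface = IMG_Load(path);
    if (surface == NULL) {
        printf("IMG_Load failed: %s\n", IMG_GetError());
    }
    return surface;
}
```

For this to work, the image file has to be added to the Xcode target so it is copied into the bundle's resources at build time.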
1 reply · 0 boosts · 671 views · Jan ’25
CGContextAddArc not respecting clipping rects for open paths
macOS 15.2, MBP M1, built-in display. The following code produces a line outside the bounds of my clipping region when drawing to CGLayers to produce a clockwise arc:

```c
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
```

Drawing other shapes such as rects or ellipses doesn't cause a problem. I can work around the issue by bringing the path back to the start of the arc:

```c
CGContextBeginPath(m_s.outContext);
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -startRads, -endRads, 1);
// add a second arc back to the start
CGContextAddArc(m_s.outContext, leftX + radius, topY - radius, radius, -endRads, -startRads, 0);
CGContextSetRGBStrokeColor(m_s.outContext, col.fRed(), col.fGreen(), col.fBlue(), col.fAlpha());
CGContextSetLineWidth(m_s.outContext, width);
CGContextStrokePath(m_s.outContext);
```

But this does appear to be a bug.
1 reply · 0 boosts · 492 views · Jan ’25