Delve into the world of graphics and game development. Discuss creating stunning visuals, optimizing game mechanics, and sharing resources for game developers.

Posts under Graphics & Games topic

Post · Replies · Boosts · Views · Activity

Core Image Tiling and ROI
Our application uses Core Image to apply custom CIFilters to still images and video. I'm running into issues when the supplied image is large enough (wider than 4096 pixels) that the image is automatically tiled. The simplest of these to describe is a filter that performs various mirroring effects: backwards, upside-down, etc. The implementation portion of the filter provides a sampler (src) and passes this into the kernel with an roiCallback that uses the destRect, inset by -1 in both dimensions:

```objc
return [mirrorsKernel applyWithExtent:[src extent]
                          roiCallback:^CGRect(int index, CGRect destRect) {
                              return CGRectInset(destRect, -1, -1);
                          }
                            arguments:@[src]];
```

The kernel is very simple, sampling from the X coordinate equal to the source width minus the current coordinate:

```metal
float4 backwards(sampler image, destination dest) {
    float2 dc = dest.coord();
    dc.x = image.size().x - dc.x;
    return image.sample(image.transform(dc));
}
```

When this runs on an image that is wider than 4096, tiling happens; destRect is then only a tile rather than the entire image, so the resulting output image is incorrect. If the ROI uses [src extent] instead of destRect, the result is correct, but this leads to serious performance issues when src gets too large. All of this makes sense to me. What I'd like to know is whether there is a way to handle this filter's requirement of sampling from the entire source while still limiting the ROI to maintain performance. I think the answer is probably no within our current structure and performance limits, but I wanted to see if there's anything we're missing. I am aware that the simple kernel above can be replaced with an affine transform, which is an option for backwards and upside-down mirroring. We have other kernels in this filter that mirror either half of the source image or one quadrant of it. In those cases, I suppose it might be possible (up to a point) to create a custom ROI that is only the portion of the source being mirrored; we have not attempted that yet. Any thoughts/input appreciated, thanks!
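For the record, a minimal sketch of that per-tile ROI idea for the pure horizontal mirror, translated to Swift and assuming the source extent has its origin at (0, 0): because the mirror is its own inverse, the pixels a destination tile needs are just that tile reflected across the image's vertical centerline, so the ROI stays tile-sized no matter how wide the source is.

```swift
// Hypothetical sketch, untested: mirror each destination tile back into
// source space instead of requesting the whole source extent.
let srcExtent = src.extent  // assumes origin at (0, 0)
let output = mirrorsKernel.apply(
    extent: srcExtent,
    roiCallback: { _, destRect in
        // For a horizontal mirror, a tile spanning [minX, maxX] samples
        // from [width - maxX, width - minX] in the source.
        var roi = destRect
        roi.origin.x = srcExtent.width - destRect.maxX
        return roi.insetBy(dx: -1, dy: -1)
    },
    arguments: [src]
)
```

The kernel may also need to mirror in full-image coordinates rather than sampler-local ones, since under tiling the sampler's size() reflects the tile it was given. For the half- and quadrant-mirror kernels, a union of the tile and its reflection would keep the ROI bounded by roughly twice the tile size.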
3 · 0 · 720 · Oct ’24
Safe Places to Find Dependable App Developers
Hello! Brand new to the Apple developer community, so hello everyone! I'm a game developer; we just launched our first game on PC, and I'm looking to port it to iOS. Time is something I'm kind of short on, and I hear it takes some jumping through hoops to get the know-how to port something to mobile. Any good sites you'd recommend for finding programmers to port your game? It's fairly simple - just a visual novel. Any and all suggestions welcome! All the best! Elijah
3 · 0 · 586 · Dec ’24
How to attach a point cloud (or depth data) to HEIC?
I'm developing a 3D scanner that runs on iPad, using AVCapturePhoto and PhotogrammetrySession. My photo capture delegate looks like this:

```swift
extension PhotoCaptureDelegate: AVCapturePhotoCaptureDelegate {
    func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
        let fileUrl = CameraViewModel.instance.imageDir!.appendingPathComponent("\(PhotoCaptureDelegate.name)\(id).heic")
        let img = CIImage(cvPixelBuffer: photo.pixelBuffer!, options: [
            .auxiliaryDepth: true,
            .properties: photo.metadata
        ])
        let depthData = photo.depthData!.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
        let colorSpace = CGColorSpace(name: CGColorSpace.sRGB)
        let fileData = CIContext().heifRepresentation(of: img, format: .RGBA8, colorSpace: colorSpace!, options: [
            .avDepthData: depthData
        ])
        try? fileData!.write(to: fileUrl, options: .atomic)
    }
}
```

But the PhotogrammetrySession spits out warning messages:

```
Sample 0 missing LiDAR point cloud!
Sample 1 missing LiDAR point cloud!
Sample 2 missing LiDAR point cloud!
Sample 3 missing LiDAR point cloud!
Sample 4 missing LiDAR point cloud!
Sample 5 missing LiDAR point cloud!
Sample 6 missing LiDAR point cloud!
Sample 7 missing LiDAR point cloud!
Sample 8 missing LiDAR point cloud!
Sample 9 missing LiDAR point cloud!
Sample 10 missing LiDAR point cloud!
```

The session creates a USDZ 3D model, but the scale is not correct. I think the point cloud could help the PhotogrammetrySession find the right scale, but I don't know how to attach it.
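One possible direction (an assumption on my part, not a confirmed fix): skip the HEIC round-trip and hand the depth map to the session directly through PhotogrammetrySample, which exposes a depthDataMap property.

```swift
import AVFoundation
import RealityKit

// Hypothetical sketch: build a PhotogrammetrySample per capture and attach
// the depth map directly, instead of embedding it in the HEIC file.
func makeSample(id: Int, photo: AVCapturePhoto) -> PhotogrammetrySample? {
    guard let pixelBuffer = photo.pixelBuffer,
          let depth = photo.depthData?.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    else { return nil }
    var sample = PhotogrammetrySample(id: id, image: pixelBuffer)
    sample.depthDataMap = depth.depthDataMap  // CVPixelBuffer of Float32 depths
    return sample
}
```

The session would then be created from a sequence of samples rather than a directory of files; whether this also satisfies the LiDAR point-cloud warning is something I haven't verified.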
3 · 2 · 1.2k · Oct ’24
Sample Project for WWDC24 10092 Metal with Passthrough?
It's great that we'll be able to use custom Metal renderers in passthrough mode on visionOS: https://developer.apple.com/wwdc24/10092. This is a lot of complicated setup, however, and it's unclear how occlusion and custom algorithms/raytracing will work in tandem with scene understanding. May we have a project template and/or sample, preferably with the C API and not just Swift? This would be much appreciated and helpful to everyone who wants this setup. I'd like to see the whole process. Thank you for introducing this feature!
3 · 1 · 1.1k · Nov ’24
BlendShapes don’t animate while playing animation in RealityKit
Hi everyone, I'm running into an issue with RealityKit when trying to animate BlendShapes (shape keys) while a skeletal animation is playing. The model is a rigged character in .usdz format with both predefined skeletal animations and BlendShapes (exported from Blender). The problem: when I play any animation using entity.playAnimation(...), the BlendShapes stop responding. Calling setBlendShapes(...) still logs that weights are being updated, but no visual change occurs. The exact same blend shape animation works perfectly when no animation is playing. In SceneKit the same model works as expected: shape keys animate during animation playback. In RealityKit, as soon as an animation starts, the shape keys stop animating. Here's a test project on GitHub that demonstrates the issue clearly: https://github.com/IAMTHEBURT/RealityKitWitnBlendShapesSample The goal is to play facial expressions (like blinking or talking) while a body animation (like waving) is playing. Is this a known limitation in RealityKit, or is there a recommended way to combine skeletal animations with real-time BlendShape updates? Thanks in advance for any insights.
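One avenue that might be worth checking (an assumption on my part, and the exact property names below are from memory, so verify against the current docs): newer RealityKit (visionOS 2 / iOS 18) exposes BlendShapeWeightsComponent, whose weights can be updated every frame independently of a playing AnimationResource.

```swift
import RealityKit

// Hypothetical sketch: drive blend shape weights through the component
// while the skeletal animation plays. `entity` is the rigged ModelEntity.
if let mesh = entity.model?.mesh {
    let mapping = BlendShapeWeightsMapping(meshResource: mesh)
    entity.components.set(BlendShapeWeightsComponent(weightsMapping: mapping))
}

// Later, e.g. from a SceneEvents.Update subscription:
if var blend = entity.components[BlendShapeWeightsComponent.self] {
    var weights = blend.weightSet[0].weights  // property names assumed
    weights[0] = 1.0                          // e.g. drive the "blink" target
    blend.weightSet[0].weights = weights
    entity.components.set(blend)
}
```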
3 · 3 · 250 · Jul ’25
Image textures cause runtime crashes - what's the workaround?
I've had no issue referencing image files from my .swift files, but they cause crashes when used in my .sks files. When I set a sprite's texture to an image in the inspector/editor bar, at runtime when that sprite is loaded I get the error:

```
Cannot get value with size 16. The type encoded as {CGRect={CGPoint=dd}{CGSize=dd}} is expected to be 32 bytes.
```

From my research it has something to do with Apple switching from 32-bit to 64-bit machines. From ChatGPT: "SpriteKit under the hood uses NSKeyedUnarchiver to load your .sks file. That unarchiver decodes each archived property by reading a fixed-size blob of bytes and mapping it into a C struct. In your case it ran into a mismatch." I am using a 64-bit machine to write my code and 64-bit simulators and physical devices, so there isn't a clear cause of the mismatch. My scenes play fine in Xcode 16's preview window and my code builds; it just crashes at runtime. When I don't use image-textured assets in the .sks file it works fine: it loads animated labels and plain color squares. I've been able to work around this for static things like a sprite with a background texture by writing code like this in a normal, non-game Swift file:

```swift
if let scene = SKScene(fileNamed: "GameScene2") {
    let bg = SKSpriteNode(imageNamed: "YourBackgroundImage")
    bg.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
    bg.zPosition = -1
    scene.addChild(bg)
}
```

The issue now is that I want to make a particle emitter and other non-static sprites, but my understanding of their properties isn't deep enough to create them without the editor. Also, setting an SKTexture in a Swift file causes the same runtime crash with the 16/32 error. Could you help me figure out how to fix the bug so I can use the editor again? Otherwise, could you help me write a workaround like the one I use for background images? I have a feeling the answer is in writing my own NSKeyedUnarchiver, but I don't know how to make sure it's called instead of the default one. I've already tried cleaning my build multiple times and deleting and re-adding sprite nodes. Thank you.
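On the particle emitter specifically, here is a minimal sketch of building one entirely in code, bypassing the .sks unarchiving path. Every value is a placeholder to tune, and "spark" is a hypothetical asset name; note this still goes through SKTexture, so it only helps if the crash is specific to the archive path.

```swift
import SpriteKit

// Sketch: a code-only emitter, no .sks file involved.
func makeSparkEmitter(in scene: SKScene) -> SKEmitterNode {
    let emitter = SKEmitterNode()
    emitter.particleTexture = SKTexture(imageNamed: "spark")  // hypothetical asset
    emitter.particleBirthRate = 100        // particles spawned per second
    emitter.particleLifetime = 1.5         // seconds each particle lives
    emitter.particleSpeed = 80             // initial speed in points/sec
    emitter.emissionAngleRange = .pi * 2   // emit in all directions
    emitter.particleAlphaSpeed = -0.7      // fade out over the lifetime
    emitter.particleScale = 0.3
    emitter.position = CGPoint(x: scene.frame.midX, y: scene.frame.midY)
    return emitter
}
```

Usage would be `scene.addChild(makeSparkEmitter(in: scene))` from the same kind of non-.sks code path as the background workaround.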
3 · 0 · 626 · Aug ’25
CGSetDisplayTransferByTable no longer working on macOS Tahoe
For an app of mine I use CGSetDisplayTransferByTable to adjust the gamma table of the device. Since macOS Tahoe, these modifications are silently ignored: the display's actual gamma curve remains unchanged despite the API reporting successful completion. I filed a feedback report for it a few weeks ago and would love to figure out what could be causing this. FB18559786
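For reference, a minimal repro sketch of the call pattern in question (table values are arbitrary; the point is that the return code reads .success while the screen doesn't change):

```swift
import CoreGraphics

// Sketch: apply a slightly modified gamma table to the main display.
let display = CGMainDisplayID()
let tableSize = 256
var red = [CGGammaValue](repeating: 0, count: tableSize)
var green = red
var blue = red
for i in 0..<tableSize {
    let v = CGGammaValue(i) / CGGammaValue(tableSize - 1)
    red[i] = v
    green[i] = v * 0.9  // dim green slightly so a change would be visible
    blue[i] = v
}
let err = CGSetDisplayTransferByTable(display, UInt32(tableSize), &red, &green, &blue)
print(err == .success ? "reported success" : "failed: \(err.rawValue)")
```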
3 · 1 · 270 · Aug ’25
Is there any future for screensavers on macOS?
I haven't been looking at screensavers for a long time because of Apple's lack of will (or resources?) to provide a public version of the modern private SDK that Apple itself has been using for a very long time now. I'm now looking at the Screen Saver pane in System Settings (the What-If version of System Preferences from an alternate universe where all screens are in portrait mode). In macOS Sequoia, it seems like 3rd-party screensavers are not welcome, considering that they are relegated to the "Other" section at the bottom of the list and you have to click Show All to even start seeing them. I also had a quick look at macOS Tahoe beta 3, and it looks like all the real screensavers are gone (both 3rd-party ones and Apple's own: Hello, Message, Flurry, etc.), or at least it would take a Nobel Prize to find them (and the Search field is not useful). I tried to install a 3rd-party screen saver on macOS Tahoe beta 3; it doesn't show up in the list. To summarize:

- No public access to modern APIs, AFAIK.
- A UI that is hostile to 3rd-party screen savers in macOS Sequoia.
- Apparently only screensavers that are slideshows or movies curated by Apple in macOS Tahoe b3.

Hence the question: is there any future for screen savers on macOS? Because if there's none, I won't waste my time trying to update some old screen savers.
3 · 0 · 479 · Aug ’25
Metal useResource vs. MTLFence
Hello, I'm tracking down a bug where useResource doesn't seem to apply proper synchronization when a resource is produced by a render pass and then consumed by a compute pass, but when I use an MTLFence to signal and wait between the render/compute encoders, the artifact goes away. The resource is created with MTLHazardTrackingModeTracked, and useResource is called on the compute encoder after the render pass. Metal API validation doesn't report any warnings or errors. Am I misunderstanding the difference between the two APIs? I dug through the Metal documentation, and it looks like useResource should handle synchronization given that the resource has MTLHazardTrackingModeTracked; on the other hand, MTLFence should be used to ensure proper synchronization between command encoders. Can someone clarify the difference between the two APIs and when to use each?
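For concreteness, the fence arrangement that fixes the artifact looks roughly like this (a sketch; `device`, `commandBuffer`, `descriptor`, and `texture` are assumed to exist from the surrounding setup):

```swift
import Metal

// Sketch: explicitly order the compute pass after the render pass.
let fence = device.makeFence()!

let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: descriptor)!
// ... draw into `texture` ...
renderEncoder.updateFence(fence, after: .fragment)  // signal once fragment work is done
renderEncoder.endEncoding()

let computeEncoder = commandBuffer.makeComputeCommandEncoder()!
computeEncoder.waitForFence(fence)                  // block until the render pass signals
computeEncoder.setTexture(texture, index: 0)
// ... dispatch threads that read `texture` ...
computeEncoder.endEncoding()
```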
3 · 0 · 122 · Jul ’25
Vision to detect corners and/or 3 lines
End goal: to detect 3 lines and 2 corners accurately. I'm trying contours, but they are a bit off. Is there a way, or are there settings in contours, to detect corners and lines more accurately - maybe fewer but sharper-edged corner contours? Or some other API or method? I would love an email please ;) thank you. There is also an overlay/scale issue.
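In case it helps frame the question, a sketch of the contour settings being referred to (`cgImage` is assumed; the parameter values are guesses to experiment with, and corner detection itself isn't built in, so it would have to be derived from the contour points):

```swift
import Vision

// Sketch: tune VNDetectContoursRequest for sharper contours.
let request = VNDetectContoursRequest()
request.contrastAdjustment = 2.0       // amplify edges before tracing
request.detectsDarkOnLight = true      // dark shapes on a light background
request.maximumImageDimension = 1024   // larger = finer (but slower) contours

let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
try handler.perform([request])

if let observation = request.results?.first,
   let contour = observation.topLevelContours.first {
    // normalizedPoints is [simd_float2]; corners can be found by scanning
    // for abrupt direction changes between consecutive segments.
    print(contour.normalizedPoints.count)
}
```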
3 · 0 · 644 · Jan ’25
Updated Object Capture -- needs LiDAR?
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see a regression to requiring LiDAR. Most of my users do not have it.
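For what it's worth, a sketch of a runtime guard in case the requirement is device-dependent rather than absolute (API usage as I understand the RealityKit docs; `imagesDirectoryURL` is a hypothetical folder of captures):

```swift
import Foundation
import RealityKit

// Sketch: gate photogrammetry on runtime support instead of assuming it.
func startReconstruction(from imagesDirectoryURL: URL) throws -> PhotogrammetrySession? {
    guard PhotogrammetrySession.isSupported else {
        print("Photogrammetry not supported on this device")
        return nil
    }
    var configuration = PhotogrammetrySession.Configuration()
    configuration.featureSensitivity = .high  // may help low-texture underwater imagery
    return try PhotogrammetrySession(input: imagesDirectoryURL, configuration: configuration)
}
```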
3 · 0 · 524 · Mar ’25
How to set leaderboards to live
I've added two recurring leaderboards to my app. The Game Center leaderboards UI pops up when expected and lists the leaderboards, but when you click on them it says "Have Fun With Friends" and lists some of my contacts instead of any scores. The status in App Store Connect is "Not Live", and I'm wondering how to activate them. The app has been approved for external beta testing. On a related note, on my iPad the UI doesn't display correctly: it says "Game Center" at the top, but the rest of the UI doesn't appear, just blackness.
3 · 0 · 583 · Jan ’25
RealityView content scale factor
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit. One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just an SCNView property, and it seems to be possible when using ARView as well, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?
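For comparison, the ARView route mentioned above would look something like this (a sketch, iOS-side): since ARView is a UIView, contentScaleFactor is available on it directly, and a UIViewRepresentable wrapper recovers that control inside SwiftUI.

```swift
import SwiftUI
import RealityKit

// Sketch: expose UIView.contentScaleFactor through a SwiftUI wrapper.
struct ScaledARView: UIViewRepresentable {
    var scale: CGFloat

    func makeUIView(context: Context) -> ARView {
        let view = ARView(frame: .zero)
        view.contentScaleFactor = scale  // e.g. 0.5 renders at half resolution
        return view
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        uiView.contentScaleFactor = scale
    }
}
```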
3 · 1 · 123 · Jul ’25
How to create CGContext with 10 bits per component?
I'm trying to create a CGContext that matches my screen using the following code:

```swift
let context = CGContext(
    data: pixelBuffer,
    width: pixelWidth,        // 3456
    height: pixelHeight,      // 2234
    bitsPerComponent: 10,     // PixelEncoding = "--RRRRRRRRRRGGGGGGGGGGBBBBBBBBBB"
    bytesPerRow: bytesPerRow, // 13824
    space: CGColorSpace(name: CGColorSpace.extendedSRGB)!,
    bitmapInfo: CGImageAlphaInfo.none.rawValue
        | CGImagePixelFormatInfo.RGBCIF10.rawValue
        | CGImageByteOrderInfo.order16Little.rawValue
)
```

But it fails with this error:

```
CGBitmapContextCreate: unsupported parameter combination:
  RGB 10 bits/component, integer 13824 bytes/row
  kCGImageAlphaNone kCGImageByteOrder16Little kCGImagePixelFormatRGBCIF10
Valid parameters for RGB color space model are:
  16 bits per pixel,   5 bits per component, kCGImageAlphaNoneSkipFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaNoneSkipFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaNoneSkipLast
  32 bits per pixel,   8 bits per component, kCGImageAlphaPremultipliedFirst
  32 bits per pixel,   8 bits per component, kCGImageAlphaPremultipliedLast
  32 bits per pixel,  10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
  64 bits per pixel,  16 bits per component, kCGImageAlphaPremultipliedLast
  64 bits per pixel,  16 bits per component, kCGImageAlphaNoneSkipLast
  64 bits per pixel,  16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
  64 bits per pixel,  16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
  128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
  128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
```

Why is it unsupported if it matches the 6th option (32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little)? Even the bytes-per-row value is consistent with 32 bits per pixel: 3456 pixels × 4 bytes = 13,824 bytes.
3 · 0 · 665 · Jan ’25
GestureComponent does not support DragGesture
The following code using the new GestureComponent demonstrates an inconsistency: the tap gesture prints output, but the drag gesture does not. I already checked this post, which points to this seemingly outdated sample code; I assume that example is deprecated in favour of the now built-in version of GestureComponent. Nonetheless, there are no compiler warnings or errors; it just fails silently. TapGesture, LongPressGesture, MagnifyGesture, and RotateGesture all work, so this feels like an oversight.

```swift
RealityView { content in
    let testEntity = ModelEntity(mesh: .generateBox(size: .init(x: 1, y: 1, z: 1)))
    testEntity.position = SIMD3<Float>(0, 0, -1)
    testEntity.components.set(InputTargetComponent())
    testEntity.components.set(CollisionComponent(
        shapes: [.generateBox(size: .init(x: 1, y: 1, z: 1))]
    ))

    let testGesture = TapGesture()
        .onEnded { value in
            print("Tapped")
        }
    testEntity.components.set(GestureComponent(testGesture))

    let dragGesture = DragGesture()
        .onEnded { value in
            print("Dragged")
        }
    testEntity.components.set(GestureComponent(dragGesture))

    content.add(testEntity)
}
```
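One thing worth noting about the snippet itself (an observation, not a confirmed explanation for the DragGesture silence): RealityKit components are one-per-type, so the second components.set(GestureComponent(...)) replaces the first rather than adding to it. Composing the gestures before wrapping them keeps both on the entity, as in this sketch:

```swift
// Sketch: combine both gestures into a single GestureComponent.
let combined = TapGesture()
    .onEnded { _ in print("Tapped") }
    .simultaneously(with:
        DragGesture()
            .onEnded { _ in print("Dragged") }
    )
testEntity.components.set(GestureComponent(combined))
```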
3 · 1 · 355 · Jul ’25