Hello,
I've been tinkering with PortalComponent on visionOS a bit but noticed that the content of the WorldComponent is always clipped to the mesh geometry of whatever entities have the PortalComponent applied. Now I'm wondering if there is any way or trick to allow contents of the portal to peek out – similar to the Encounter Dinosaurs experience on Vision Pro (I assume it also uses PortalComponent?).
I saw that PortalComponent has a clippingPlane property (https://developer.apple.com/documentation/realitykit/portalcomponent/clippingplane-swift.property). But so far I haven't been able to achieve a perceptible visual difference with it.
If possible I would like to avoid hacky tricks using duplicate meshes or similar to achieve this.
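For context, this is roughly the setup I'm experimenting with, based on my reading of the visionOS 2 APIs (the crossingMode and PortalCrossingComponent parts are assumptions on my side, not something I've verified yet):

import RealityKit

// Sketch only: my understanding is that crossing has to be enabled on the
// portal itself AND opted into per entity via PortalCrossingComponent.
func makePortal(into world: Entity) -> Entity {
    let portal = Entity()
    portal.components.set(ModelComponent(
        mesh: .generatePlane(width: 1, height: 1),
        materials: [PortalMaterial()]
    ))
    portal.components.set(PortalComponent(
        target: world,
        clippingMode: .plane(.positiveZ),  // clip world content against the portal plane
        crossingMode: .plane(.positiveZ)   // supposedly allows content to cross that plane
    ))
    return portal
}

// And on the entity that should peek out of the portal (assumption):
// dinosaur.components.set(PortalCrossingComponent())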
Thanks for any hints!
I want to drag EntityA while also dragging EntityB independently.
I've tried separating them by entity, but only the latest drag gesture is recognized:
RealityView { content, attachments in
    ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
)
.gesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            ...
        }
)
I also tried simultaneously(with:), but that didn't work either. Maybe I'm missing something:
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            ...
        }
        .simultaneously(with:
            DragGesture()
                .targetedToEntity(EntityB)
                .onChanged { value in
                    ...
                }
        )
)
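The next thing I plan to try is attaching the second drag via .simultaneousGesture(_:) instead of a second .gesture(_:), so SwiftUI doesn't treat the two as competing. Just a sketch of the idea (EntityA/EntityB are the same entities as above), I haven't confirmed it actually allows two concurrent drags:

RealityView { content, attachments in
    // ...
}
.gesture(
    DragGesture()
        .targetedToEntity(EntityA)
        .onChanged { value in
            // update EntityA
        }
)
.simultaneousGesture(
    DragGesture()
        .targetedToEntity(EntityB)
        .onChanged { value in
            // update EntityB
        }
)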
My M1 MacBook Air is running macOS Sonoma 14.3.1, and I tried to install game-porting-toolkit tonight. After the step that asks me to enter the command "brew -v install apple/apple/game-porting-toolkit", Terminal ran for minutes. But at the end this error appeared: Error: apple/apple/game-porting-toolkit 1.1 did not build.
I don't know anything about coding and software. Could someone please tell me what caused this error and how to fix it? I will appreciate your help!
Does anyone know how I can disable foveation for an ImmersiveSpace? I'm aware that I could use a CompositorLayer and my own Metal rendering to control foveation, but I'm hoping that I can configure an existing/underlying LayerRenderer (or similar) to disable it for an immersive scene.
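For context, the CompositorLayer route I'd like to avoid looks roughly like this, if I understand the CompositorServices API correctly (a sketch from memory, so treat the property names as assumptions):

import SwiftUI
import CompositorServices

// Sketch: a layer configuration that turns foveation off for a CompositorLayer.
struct NoFoveationConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        configuration.isFoveationEnabled = false
    }
}

// Used inside an ImmersiveSpace:
// CompositorLayer(configuration: NoFoveationConfiguration()) { layerRenderer in
//     // custom Metal render loop
// }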
Or if there's another approach I should be taking, any pointers are appreciated. Thank you!
Hi,
I'm facing a problem where I cannot scan a specific Code 39 barcode with the Vision framework. We have multiple barcodes on a label, and almost all of the Code 39 codes can be scanned, but not this specific one.
One more piece of information: the barcode that Vision does not recognize can be read by a general-purpose barcode scanner.
Has anyone faced a similar situation?
Are there particular conditions (size, intensity, etc.) that make a barcode hard for Vision to scan?
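For reference, the detection path is essentially the standard VNDetectBarcodesRequest flow; a simplified sketch of what I mean (not my exact code):

import Vision

// Simplified sketch: restrict symbologies to Code 39 and read the payload
// strings from the resulting observations.
func detectCode39(in cgImage: CGImage) throws -> [String] {
    let request = VNDetectBarcodesRequest()
    request.symbologies = [.code39]

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try handler.perform([request])

    return (request.results ?? []).compactMap { $0.payloadStringValue }
}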
Regards,
Hello,
This exact question was already asked in this forum (8 years ago) but I can't find a definitive answer:
Does Metal allow using the same color texture as both an input and output (color attachment) of a fragment shader? Is the behavior defined somewhere?
I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does about so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident.
Would love to get a clarification!
Thanks ahead!
I have attempted to use VideoMaterial with HDR HLS stream, and also a TextureResource.DrawableQueue with rgba16Float in a ShaderGraphMaterial.
I'm capturing to 64RGBAHalf with AVPlayerItemVideoOutput and converting that to rgba16Float.
I don't believe it's displaying HDR properly or behaving like a raw AVPlayer.
Since we can't configure any EDR metadata or color space for a RealityView, how do we display HDR video? Is using rgba16Float supposed to be enough?
Is expecting the 64RGBAHalf capture to handle HDR properly a mistake and should I capture YUV and do the conversion myself?
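For reference, the capture side is configured roughly like this (a simplified sketch of what I described above):

import AVFoundation

// Sketch: ask AVPlayerItemVideoOutput for 64RGBAHalf, Metal-compatible buffers
// so they can be copied into an rgba16Float drawable. Whether this is enough
// to preserve HDR is exactly my question.
let attributes: [String: Any] = [
    kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_64RGBAHalf,
    kCVPixelBufferMetalCompatibilityKey as String: true
]
let videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: attributes)
// playerItem.add(videoOutput)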
Thank you
Hey folks,
I have a legacy game that runs on OpenGL ES, and it no longer works in the simulators for Apple Silicon-era devices, i.e. the iPhone 15 Pro or the 13" iPads. And yes, I'm also running on Apple Silicon (M1 Max).
The apps work fine on the actual devices, but the simulator crashes on any glDrawElements with a stack that looks like the following:
I have not yet seen an announcement about this no longer working, but I've seen mentions in other projects of dropping GL support (https://github.com/maplibre/maplibre-native/issues/2351).
Can anyone shed some light? I'm obviously going to try to fix it, or find a recent sample app from which to start to see what might be up. Or move to Metal, but I hadn't bargained for that level of effort atm ;)
Any suggestions appreciated!
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS.
https://developer.apple.com/wwdc24/10092
This is a lot of complicated set-up, however. It's also unclear how occlusion and custom algorithms / raytracing will work in tandem with scene understanding. May we have a project template and/or sample? Preferably with the C API and not just Swift. This would be much appreciated and helpful to everyone who wants this set-up. I'd like to see the whole process.
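For what it's worth, my current understanding of the minimal entry point is an ImmersiveSpace hosting a CompositorLayer with a mixed immersion style, roughly as below. This is only a sketch of my assumptions, and the configuration/occlusion details are exactly the part I'm asking about:

import SwiftUI
import CompositorServices

// Placeholder configuration; a real app would set pixel formats, foveation,
// layout, etc. here.
struct PassthroughConfiguration: CompositorLayerConfiguration {
    func makeConfiguration(capabilities: LayerRenderer.Capabilities,
                           configuration: inout LayerRenderer.Configuration) {
        // Keep defaults for the sketch.
    }
}

@main
struct PassthroughMetalApp: App {
    @State private var immersionStyle: ImmersionStyle = .mixed

    var body: some Scene {
        ImmersiveSpace(id: "metal-passthrough") {
            CompositorLayer(configuration: PassthroughConfiguration()) { layerRenderer in
                // Start the custom Metal render loop against layerRenderer here.
            }
        }
        .immersionStyle(selection: $immersionStyle, in: .mixed)
    }
}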
Thank you for introducing this feature!
Hi,
Introducing Swift Concurrency to my Metal app has been a bit challenging as Swift Concurrency is limited by the cooperative thread pool.
GPU work is obviously not CPU-bound and can block forward progress, especially when using waitUntilCompleted on the command buffer. For concurrent render work this has the potential of underutilizing the CPU and even creating deadlocks.
My question is: what is the Metal team's general recommendation when it comes to concurrency? It seems to me that Dispatch or OperationQueues are still the preferred way for Metal-bound tasks in order to gain maximum performance?
To integrate with Swift Concurrency, my idea is to use continuations that kick off render jobs via Dispatch or queues. Would this be the best solution to bridge async tasks with Metal work?
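This is the kind of bridging I have in mind, a minimal sketch that waits on command-buffer completion via a continuation instead of blocking a cooperative-pool thread with waitUntilCompleted:

import Metal

// Sketch: the caller encodes the GPU work, then awaits completion without
// blocking a Swift Concurrency thread. Committing happens inside the
// continuation body so the completed handler is registered first.
func awaitCompletion(of commandBuffer: MTLCommandBuffer) async {
    await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
        commandBuffer.addCompletedHandler { _ in
            continuation.resume()
        }
        commandBuffer.commit()
    }
}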
Thanks!
My app isn't a game app, but iOS 18 always enables Game Mode for it automatically. I couldn't find a switch to disable this in Xcode.
I've got an iOS app that is using MetalKit to display raw video frames coming in from a network source. I read the pixel data in the packets into a single MTLTexture rows at a time, which is drawn into an MTKView each time a frame has been completely sent over the network. The app works, but only for several seconds (a seemingly random duration), before the MTKView seemingly freezes (while packets are still being received).
Watching the debugger while my app was running revealed that the display froze whenever there was a large spike in memory. The memory profile in Instruments showed that the spike was caused by the rapid creation of many IOSurfaces and IOAccelerators. Profiling CPU usage shows that CAMetalLayerPrivateNextDrawableLocked is where the time goes during this rapid creation of surfaces. What does this function do?
Being a complete newbie to iOS programming as a whole, I wonder if this issue comes from a misuse of the MetalKit library. Below is the code that I'm using to render the video frames themselves:
import UIKit
import MetalKit
import Network

class MTKViewController: UIViewController, MTKViewDelegate {

    /// Metal texture to be drawn whenever the view controller is asked to render its view.
    private var metalView: MTKView!
    private var device = MTLCreateSystemDefaultDevice()
    private var commandQueue: MTLCommandQueue?
    private var renderPipelineState: MTLRenderPipelineState?
    private var texture: MTLTexture?

    private var networkListener: NetworkListener!
    private var textureGenerator: TextureGenerator!

    override public func loadView() {
        super.loadView()

        assert(device != nil, "Failed creating a default system Metal device. Please, make sure Metal is available on your hardware.")

        initializeMetalView()
        initializeRenderPipelineState()

        networkListener = NetworkListener()
        textureGenerator = TextureGenerator(width: streamWidth, height: streamHeight, bytesPerPixel: 4, rowsPerPacket: 8, device: device!)

        networkListener.start(port: NWEndpoint.Port(8080))
        networkListener.dataRecievedCallback = { data in
            self.textureGenerator.process(data: data)
        }
        // Once a full frame has been assembled, store it and trigger a draw.
        textureGenerator.onTextureBuiltCallback = { texture in
            self.texture = texture
            self.draw(in: self.metalView)
        }

        commandQueue = device?.makeCommandQueue()
    }

    public func mtkView(_ view: MTKView, drawableSizeWillChange size: CGSize) {
        /// need implement?
    }

    public func draw(in view: MTKView) {
        guard
            let texture = texture,
            let _ = device
        else { return }

        let commandBuffer = commandQueue!.makeCommandBuffer()!

        guard
            let currentRenderPassDescriptor = metalView.currentRenderPassDescriptor,
            let currentDrawable = metalView.currentDrawable,
            let renderPipelineState = renderPipelineState
        else { return }

        currentRenderPassDescriptor.renderTargetWidth = streamWidth
        currentRenderPassDescriptor.renderTargetHeight = streamHeight

        let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: currentRenderPassDescriptor)!
        encoder.pushDebugGroup("RenderFrame")
        encoder.setRenderPipelineState(renderPipelineState)
        encoder.setFragmentTexture(texture, index: 0)
        encoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4, instanceCount: 1)
        encoder.popDebugGroup()
        encoder.endEncoding()

        commandBuffer.present(currentDrawable)
        commandBuffer.commit()
    }

    private func initializeMetalView() {
        metalView = MTKView(frame: CGRect(x: 0, y: 0, width: streamWidth, height: streamWidth), device: device)
        metalView.delegate = self
        metalView.framebufferOnly = true
        metalView.colorPixelFormat = .bgra8Unorm
        metalView.contentScaleFactor = UIScreen.main.scale
        metalView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        view.insertSubview(metalView, at: 0)
    }

    /// Initializes the render pipeline state with a default vertex function mapping the texture to the view's frame and a simple fragment function returning the texture pixel's value.
    private func initializeRenderPipelineState() {
        guard let device = device, let library = device.makeDefaultLibrary() else {
            return
        }

        let pipelineDescriptor = MTLRenderPipelineDescriptor()
        pipelineDescriptor.rasterSampleCount = 1
        pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
        pipelineDescriptor.depthAttachmentPixelFormat = .invalid

        /// Vertex function to map the texture to the view controller's view
        pipelineDescriptor.vertexFunction = library.makeFunction(name: "mapTexture")
        /// Fragment function to display texture's pixels in the area bounded by vertices of `mapTexture` shader
        pipelineDescriptor.fragmentFunction = library.makeFunction(name: "displayTexture")

        do {
            renderPipelineState = try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
        } catch {
            assertionFailure("Failed creating a render state pipeline. Can't render the texture without one.")
            return
        }
    }
}
My question is simply: what gives?
Hey, I have created a game in Unity with the Apple Core and Apple GameKit plugins present. I set up 5 leaderboards in App Store Connect. I made a Unity build and did the whole TestFlight build loop to test everything. When I open my Game Center panel via the button, I see my leaderboards, but they show as MISSING TITLE, which is weird because I have definitely set them up correctly: they have a leaderboard reference name and a leaderboard ID. When debugging I can see that when I call my submit-score function the score gets submitted with no error, but then I also don't see it appear anywhere.
Keep in mind the leaderboards are not live and are being tested on TestFlight first.
I'm an iOS developer, and I've been testing our app in iOS 18.0 Beta. I noticed that there's a problem with the font rendering, and after troubleshooting, I've found out that it's caused by the removal of the PingFang.ttc font in 18.0.
I would like to ask the reason for removing this font file and which font should be used to display Chinese in the future?
My test device is an iPhone 11 Pro and the system version is iOS 18.0 (22A5297). I have also tested Beta 1 and it has the same issue.
In previous versions of the system, the PingFang font is located in this directory /System/Library/Fonts/LanguageSupport/PingFang.ttc. But in iOS 18.0, the font file in this directory has become Kohinoor.ttc, and I've tested that this font can't display Chinese either.
I traversed the following system font directories and could not find the PingFang.ttc font file.
/System/Library/Fonts/AppFonts
/System/Library/Fonts/Core
/System/Library/Fonts/CoreAddition
/System/Library/Fonts/CoreUI
/System/Library/Fonts/LanguageSupport
/System/Library/Fonts/UnicodeSupport
/System/Library/Fonts/Watch
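Separately, here's the quick runtime check I'd use to see whether any PingFang face is still registered for apps, since the file paths alone may not tell the whole story (a diagnostic sketch using standard UIFont APIs):

import UIKit

// Diagnostic sketch: list registered PingFang families and try to resolve a
// specific face by its PostScript name.
func dumpPingFangStatus() {
    let pingFangFamilies = UIFont.familyNames.filter { $0.contains("PingFang") }
    print("PingFang families:", pingFangFamilies)

    for family in pingFangFamilies {
        print(family, "->", UIFont.fontNames(forFamilyName: family))
    }

    print("PingFangSC-Regular resolves:", UIFont(name: "PingFangSC-Regular", size: 17) != nil)
}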
Looking for answers, thanks for the help!
I have a very simple Mac app with just an MTKView in it which shows a single color. I want to move the rendering code to C++. For this I created a C++ framework target that interoperates with the Swift code in the main project target. I am trying to link the metal-cpp library to the C++ framework target using these instructions. The approach described in this article works with simple C++ Mac console apps, but in my mixed Swift/C++ project Xcode cannot find Foundation/Foundation.hpp (and probably other headers) to include in the C++ header.
I inserted metal-cpp folder into my project and added it to C++ target's header search paths, as written in the instructions.
The title is self-explanatory. I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
Does anyone know why the following call fails?
CGPDFOperatorTableSetCallback(operatorTable, "ID", &callback);
The PDF specification seems to indicate that ID is an operator?
BTW what is the proper topic/subtopic for questions about Quartz? Wasn't sure what topic on the new forums to post this under.
I'm testing on an iPhone 12 Pro, running iOS 17.5.1.
Playing an HDR video with AVPlayer without explicitly specifying a pixel format (but specifying Metal Compatibility as below) gives buffers with the pixel format kCVPixelFormatType_Lossless_420YpCbCr10PackedBiPlanarVideoRange (&xv0).
_videoOutput = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:@{
    (NSString *)kCVPixelBufferMetalCompatibilityKey : @(YES)
}];
I can't find an appropriate metal format to use for these buffers to access the data in a shader. Using MTLPixelFormatR16Unorm for the Y plane and MTLPixelFormatRG16Unorm for UV plane causes GPU command buffer aborts.
My suspicion is that this compressed format isn't actually Metal compatible due to the lack of padding bytes between pixels. Explicitly selecting kCVPixelFormatType_420YpCbCr10BiPlanarVideoRange (which pads each 10-bit component to 16 bits) for the AVPlayerItemVideoOutput works, but I'd ideally like to use the compressed formats if possible for the bandwidth savings.
With SDR video, the pixel format is the lossless 8-bit one, and there are no problems binding those buffers to metal textures.
I'm just looking for confirmation that there's currently no appropriate Metal format for binding the packed 10-bit planes. And if that's the case, is it a bug that AVPlayerItemVideoOutput uses this format despite Metal compatibility being requested?
I have an AR game using ARKit with SceneKit that works just fine in iOS 17.
In the iOS 18 betas, the AR background image shows black instead of showing the real world. As a result there's no tracking and obviously the whole game is useless.
I narrowed down the issue to showing the Game Center Access Point.
My app has ViewController 1 (VC1) showing the main menu and that's where I want to show the GC Access Point. From there you open VC2 which shows a list of levels. Selecting any level will open VC3 which has the ARScene.
Following is the code I use to start Game Center in VC1:
GKLocalPlayer.local.authenticateHandler = { gcAuthVC, error in
    let isGameCenterReady = (gcAuthVC == nil) && (error == nil)

    if let viewController = gcAuthVC {
        self.present(viewController, animated: true, completion: nil)
    }

    if error != nil {
        print(error?.localizedDescription ?? "")
    }

    if isGameCenterReady {
        GKAccessPoint.shared.location = .topLeading
        GKAccessPoint.shared.showHighlights = true
        GKAccessPoint.shared.isActive = true
    }
}
When switching to VC2 I run GKAccessPoint.shared.isActive = false so that the Access Point will no longer show in any of the following VCs. I tried running it in VC1, VC2, and again in VC3 - it doesn't change anything. Once I reach VC3, the background is black.
If in VC1 I don't run GKAccessPoint.shared.isActive = true, so I don't activate the access point, the behavior is as follows:
If I wait until after the Game Center login animation completes and closes on its own and then I proceed to VC2 and VC3, the camera image will show correctly
If I quickly move to VC2 before the Game Center login animation has completed, so my code will close it by setting active = false, and then I continue to VC3, I will see the black background problem.
So it does look like activating the access point and then de-activating it causes the issue. BTW, if I activate the access point and leave it on in all VCs, the same black background issue persists.
Other than that, when I'm in VC3 with the black background and I switch to another app (so my game moves to the background), when it returns to the foreground, the camera suddenly shows the real world correctly!
I tried to manually reset the AR session by pausing and restarting it, but that didn't change anything. Also, when I check with the debugger, it looks like when the app comes back to the foreground it also doesn't run the session start code.
But something does seem to reset itself, so I wonder what that is. Maybe I could trigger the same thing manually in my code?
I repeat that everything works just fine in iOS 17 and below. This problem only started with the iOS 18 beta (currently on beta 5, but it started in some of the previous betas as well).
So could this be a bug in iOS 18?
As a workaround I could check the iOS version and, if it's iOS 18, not activate the access point (hoping that the user will not jump to VC2 too quickly) and show my own button which opens Game Center. But I'd rather give users the full experience with their own avatar and the highlights showing up. Plus, certainly some users will move quickly to VC2 and that will be an awful experience.
Any help would be greatly appreciated. Thanks!
I have a game for iOS where I use CADisplayLink to animate a simulation, and for some reason the animation is not getting the full 120 Hz on capable devices (like iPhone 15 Pro). When I enable a 120 Hz refresh target, the animation is capped at only 90 Hz. This looks terrible because the animation works best when doubled (30, 60, 120, 240, etc.).
The really bizarre thing is that when I turn on Screen Recording, my frame rate instantly jumps to 120, and everything looks perfectly smooth. My game has never looked better on iPhone! When recording is stopped, the animation drops back down to 90 fps. What in the world is going on?
[displayLink setPreferredFrameRateRange:CAFrameRateRangeMake(100, 240, 120)]; // min, max, preferred
[displayLink addToRunLoop:[NSRunLoop currentRunLoop] forMode:NSDefaultRunLoopMode];
(Also, CADisableMinimumFrameDurationOnPhone is set to true in Info.plist.)