Hi,
It seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs such as Vulkan and OpenGL.
It would be useful for counting the cost, in clock cycles, of some code inside a shader with much finer granularity than launching a micro-kernel with the same instructions and measuring the cost from the CPU.
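For context, the coarse CPU-side alternative mentioned above looks roughly like this; a minimal sketch using MTLCommandBuffer's GPU timestamps, where "timingKernel" is a placeholder compute function:

import Metal

// Minimal sketch: time a micro-kernel from the CPU via command buffer
// GPU timestamps. "timingKernel" is a placeholder compute function.
let device = MTLCreateSystemDefaultDevice()!
let queue = device.makeCommandQueue()!
let pipeline = try! device.makeComputePipelineState(
    function: device.makeDefaultLibrary()!.makeFunction(name: "timingKernel")!)

let commandBuffer = queue.makeCommandBuffer()!
let encoder = commandBuffer.makeComputeCommandEncoder()!
encoder.setComputePipelineState(pipeline)
encoder.dispatchThreads(MTLSize(width: 1024, height: 1, depth: 1),
                        threadsPerThreadgroup: MTLSize(width: 64, height: 1, depth: 1))
encoder.endEncoding()
commandBuffer.commit()
commandBuffer.waitUntilCompleted()

// Whole-command-buffer granularity only; nothing like a per-instruction clock().
let nanoseconds = (commandBuffer.gpuEndTime - commandBuffer.gpuStartTime) * 1e9
print("GPU time: \(nanoseconds) ns")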
Such an instruction would also let MoltenVK support the corresponding extensions.
Thanks.
Hi,
When analyzing our game in Instruments, I've always been confused by the two items "Drawable Present" and "Drawable Presented" in the GPU track. The timing of "Drawable Present" seems to be when the CPU calls present on the command buffer, rather than when the actual encoding completes on the GPU. Also, what does "Drawable Presented" specifically mean? In our case, when a CPU stall occurs, the vsync interval appears to change in the next frame, and a surface that has already been rendered is not displayed. Why is this happening?
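For reference, one way to observe the actual presentation time from code is CAMetalDrawable's presented handler; a minimal sketch (this is not necessarily the exact signal Instruments plots):

import QuartzCore
import Metal

// Sketch: compare "present requested" vs "actually presented" times.
// `drawable` comes from a CAMetalLayer; `commandBuffer` is the frame's
// MTLCommandBuffer.
func presentAndLogTiming(drawable: CAMetalDrawable, commandBuffer: MTLCommandBuffer) {
    let requestTime = CACurrentMediaTime() // when the CPU schedules the present
    drawable.addPresentedHandler { presented in
        // presentedTime is when the frame actually reached the display.
        print("scheduled at \(requestTime), presented at \(presented.presentedTime)")
    }
    commandBuffer.present(drawable)
    commandBuffer.commit()
}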
I am trying to build some projects; please check out my project.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!
A few users have recently reported no longer being able to capture point clouds using our app, specifically on iPhone 15 Pro devices. We recently found an in-house device that exhibits this behavior and found that the confidenceMap contains only low confidence values, regardless of the environment being captured. Our app uses a higher confidence threshold; setting the threshold to a lower value produces noisy results as expected, so that is a non-viable option.
Other LiDAR-based apps tested on this device show the same results: no points, or noisy point clouds in apps that allow a lower confidence threshold. On affected devices, the "Displaying a point cloud using scene depth" Apple sample app can be used to visualize the issue.
First reports of this new behavior occurred as early as iOS 18.4.
Looking for recommendations on which team(s) at Apple to reach out to with these findings since the behavior manifests on only a small sample of devices.
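For anyone trying to reproduce this, a minimal sketch of how one can inspect a frame's confidence distribution, assuming an ARSession whose configuration enables .sceneDepth:

import ARKit
import CoreVideo

// Sketch: count pixels at each ARConfidenceLevel in a frame's confidenceMap.
// On affected devices we would expect almost no pixels at .high (raw value 2).
func confidenceHistogram(for frame: ARFrame) -> [Int] {
    guard let confidenceMap = frame.sceneDepth?.confidenceMap else { return [] }
    CVPixelBufferLockBaseAddress(confidenceMap, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(confidenceMap, .readOnly) }

    let width = CVPixelBufferGetWidth(confidenceMap)
    let height = CVPixelBufferGetHeight(confidenceMap)
    let rowBytes = CVPixelBufferGetBytesPerRow(confidenceMap)
    let base = CVPixelBufferGetBaseAddress(confidenceMap)!.assumingMemoryBound(to: UInt8.self)

    var counts = [0, 0, 0] // indices match ARConfidenceLevel: .low, .medium, .high
    for y in 0..<height {
        for x in 0..<width {
            let value = Int(base[y * rowBytes + x])
            if value < counts.count { counts[value] += 1 }
        }
    }
    return counts
}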
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the launcher app might use the Swift Process API or a similar method to run the java command-line tool (let's assume the user has installed Java themselves) to launch the game.
I have seen "How to Enable Game Mode". If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
The launcher itself can cause Game Mode to activate when the launcher is fullscreened. However, if the launcher spawns a Java process that opens a window, and that Java window is then fullscreened, Game Mode doesn't seem to activate. In this case, activating Game Mode for the launcher itself is unnecessary; you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems like the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow for this? Changing a sample app's bundle identifier to be the same as the official Minecraft launcher gets the behavior I want, but obviously this is not a practical solution.)
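For completeness, a minimal sketch of the launcher pattern described above, assuming java is on the user's PATH; the jar path is a placeholder:

import Foundation

// Sketch: launch an external Java game from a native macOS "launcher".
// The Game Mode keys live in the launcher's Info.plist, but the game
// window belongs to this child process, not to the launcher.
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/env")
process.arguments = ["java", "-jar", "/path/to/game.jar"] // placeholder path

do {
    try process.run()
    process.waitUntilExit()
} catch {
    print("Failed to launch Java: \(error)")
}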
Hello Apple developers, I'm here to write down my experience with the iOS 26 beta.
First off, I'd like to say that I partly like and partly dislike the new Liquid Glass UI/UX design in some parts of iOS, and in most third-party apps like Uber, Lyft, the MJ Access Link mobile app, DoorDash, VLC, and the Apple Music app, just to name a few.
Since I installed the beta I have run into a few bugs. I've already sent them through the Feedback app on my iPhone, but I'm going to write them here as well. I'm not looking for troubleshooting or tech support; I'm just sharing my experience with the Apple community and the Apple development/engineering teams so they can fine-tune things for release.
Please note that I am a user with a vision impairment, so please be respectful about my writing, grammar, and spelling issues.
So here I go. The first bug I ran into, on the first day, was while listening to some music tracks in the Apple Music app: when scrolling down or up quickly, the app freezes for a split second and then continues as normal.
The second bug I ran into during music playback was my Crossfade settings not working on some tracks. I'm not sure if this is due to BPM alignment or AI algorithm integration within the software itself, but for me it takes me out of the listening experience I have when I enjoy music.
My suggestion: move the AutoMix and Crossfade settings into the Apple Music app itself and give the user more control over how long or short the crossfade or AutoMix should be at the end of each track. Also, the crossfade option is capped at 12 seconds; increase this to 30 seconds or more if possible, or add a BPM option so AutoMix mixes in the next track by BPM. For example, if my rock track is at 148 BPM, the next track should be pop, K-pop, or rap synced to the same or a similar speed at 148 BPM. My next suggestion for the Apple Music app is a seamless-track mode (this could be incorporated into the AutoMix feature) to help with tracks that end abruptly; some MP3 tracks added from outside the Apple Music app seem to break during playback.
The third bug I ran into is with the Glass UI in Control Center. As I stated before, I am vision impaired, and with the clear glass overlapping the current UI it is hard for me to tell which icons I am looking at, aside from the volume and brightness bars. Please make this a darker theme and make the icons brighter, or add names underneath the icons, or darken the UI overlapping Control Center, or use white backgrounds with black arrows/icons for every Apple app that has this Glass UI, because this is driving me nuts.
My fourth bug is with the lock screen and restart/reboot. Oh boy, where do I begin with this one? Let's start with notifications: I don't know who thought it was a good idea to have a clear, bright UI overlapping the notifications. This is very annoying when texting in iMessage, because my custom wallpaper blends in with the white background, which makes it even worse.
My suggestion for this is very simple: darken the background on the lock screen a bit more so the text is more readable, or enlarge the notification bars (this is for users like myself who use dark mode).
My fifth bug involves backups/restores/corruption, which is self-explanatory: when I tried to downgrade back to version 18.5/18.6, nothing happened. Please fix this, or make it a bit easier for users to back up and restore their devices. For now I have to wait until tomorrow morning (Friday) to factory reset my phone.
In conclusion: since beta users, developers, and engineers are still testing, please take a look at my suggestions and try to bring some of them, if not all, into the public release.
Thank you!
Update: I would like to downgrade from the iOS 26 developer beta back to iOS 18.5.
So I'm testing a micro-app contained in an IPFS folder. I use a web3 website that is used to view NFTs and their IPFS files. The app has gyro controls, which are enabled through a confirmation gesture.
In iOS 18.5, when I press the "Request Permission" button, I get the popup asking to allow the app to access motion and orientation data. In iOS 26, pressing the button does nothing. Keep in mind that this only happens through the website, which uses iframes. When I load the IPFS file from a direct link, the popup appears with no issue.
I think this might be because iOS 26 uses WebGPU, or it might be a bug, since iOS 26 is still in beta.
Hi everyone,
I'm not an experienced developer. I'm interested in the low-latency APIs in UIUpdateLink, but I've failed to write even a minimal demo that works.
UIUpdateInfo.isImmediatePresentationExpected is always false here, so my understanding must be wrong. I have no idea why, so I'm asking for help; I'd appreciate suggestions of any kind.
Here's my (failing) demo, which tracks the first finger's touches and draws a shape at that location:
import UIKit

class ContentUIView: UIView {
    // MARK: - About UIUpdateLink and drawing
    required init?(coder: NSCoder) {
        super.init(coder: coder)
        initializeUpdateLink()
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        initializeUpdateLink()
    }

    private func initializeUpdateLink() {
        self.updateLink = UIUpdateLink(view: self)
        self.updateLink.addAction(to: .beforeCADisplayLinkDispatch,
                                  target: self,
                                  selector: #selector(update))
        self.updateLink.wantsImmediatePresentation = true
        self.updateLink.isEnabled = true
    }

    @objc func update(updateLink: UIUpdateLink,
                      updateInfo: UIUpdateInfo) {
        print(updateInfo.isImmediatePresentationExpected) // FIXME: Why always false?
        CATransaction.begin()
        defer { CATransaction.commit() }
        layer.setNeedsDisplay()
        layer.displayIfNeeded()
    }

    override func draw(_ rect: CGRect) {
        // FIXME: Any way to support opacity?
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.clear(rect)
        guard let lastTouch = self.lastTouch else { return }
        let location = lastTouch.location(in: self)
        let circleBounds = CGRect(x: location.x - 16, y: location.y - 16, width: 32, height: 32)
        context.setFillColor(.init(red: 1/2, green: 1/2, blue: 1/2, alpha: 1))
        context.fillEllipse(in: circleBounds)
    }

    // MARK: - Touch input
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesBegan(touches, with: event)
        guard lastTouch == nil else { return }
        lastTouch = touches.first
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        super.touchesEnded(touches, with: event)
        guard let lastTouch, touches.contains(lastTouch) else { return }
        self.lastTouch = nil
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        self.touchesEnded(touches, with: event)
    }

    private var lastTouch: UITouch?
    private var updateLink: UIUpdateLink!
}

#Preview { ContentUIView() }
To be clear, I'm not looking for alternative APIs; I'd also be happy to learn what UIUpdateLink simply can't do.
What is the current (most recent) best practice for instancing meshes in RealityKit?
I see both MeshInstanceComponent and MeshInstanceCollection.
My intent is to bind a transform to a circle agent (a GameplayKit agent) and feed that result into instancing.
In normal gameplay, calling the assetBundle.Unload API very frequently causes the game's rendering to freeze while the background music keeps playing normally. This only happens on iPhone 16 and iPhone 17; older devices have no problems at all. How can this be resolved?
I want to create a GIF file and then load it into a UIImage.
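A minimal ImageIO sketch of one way to do this; the frame delay and file handling are placeholder choices:

import UIKit
import ImageIO
import UniformTypeIdentifiers

// Sketch: write an animated GIF from a list of UIImages using ImageIO.
// The frame delay and destination URL are arbitrary placeholders.
func writeGIF(_ images: [UIImage], to url: URL, frameDelay: Double = 0.1) -> Bool {
    guard let destination = CGImageDestinationCreateWithURL(
        url as CFURL, UTType.gif.identifier as CFString, images.count, nil) else { return false }
    let frameProperties = [kCGImagePropertyGIFDictionary as String:
                               [kCGImagePropertyGIFDelayTime as String: frameDelay]] as CFDictionary
    for image in images {
        if let cgImage = image.cgImage {
            CGImageDestinationAddImage(destination, cgImage, frameProperties)
        }
    }
    return CGImageDestinationFinalize(destination)
}

Loading it back with UIImage(data:) or UIImage(contentsOfFile:) will decode the GIF but only show the first frame; animated playback needs extra work, for example extracting frames with CGImageSource and building a UIImage.animatedImage.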
Hello Apple, through this message I want to draw your attention to some problems with GPTK and Rosetta. Some games, like Marvel's Spider-Man 2, have broken animations and T-pose issues, and others, like Uncharted and The Last of Us, have severe memory leak issues. Please fix these as soon as possible.
As the title states, I've been trying to emulate some older DirectX 9 games, and Rosetta can't handle it because of that.
https://github.com/WineAndAqua/rosettax87 I've had to use this, but it really seems like something I shouldn't have to do.
I've tried Wineskin, Wine, D9VK, MoltenVK, and GPTK, and the only thing that comes close to working is devel Wine + D9VK with rosettax87 running in the background, and then you can play.
Without rosettax87 it's 0 to 0.5 FPS; with it, it's a buttery-smooth 60+.
Hi everyone,
I’ve been developing a custom, end-to-end 3D rendering engine called Crescent from scratch using C++20 and Metal-cpp (targeting macOS and visionOS). My primary goal is to build a zero-bottleneck, GPU-driven pipeline that maximizes the potential of Apple Silicon’s Unified Memory and TBDR architecture.
While the fundamental systems are stable, I am looking for architectural feedback from Metal framework engineers regarding specific synchronization and latency challenges.
Current Core Implementations:
GPU-Driven Instance Culling: High-performance occlusion culling using a Hierarchical Z-Buffer (HZB) approach via Compute Shaders.
Clustered Forward Shading: Support for high-count dynamic lights through view-space clustering.
Temporal Stability: Custom TAA with history rejection and Motion Blur resolve.
Asset Infrastructure: Robust GUID-based scene serialization and a JSON-driven ECS hierarchy.
The Architectural Challenge:
I am currently seeing slight synchronization overhead when generating the HZB mip-chain. On Apple Silicon, I am evaluating the cost of encoder transitions versus cache-friendly barriers.
const bool canBuildHzb = m_depthTexture && m_hzbInitPipeline && m_hzbDownsamplePipeline && !m_hzbMipViews.empty();
if (canBuildHzb) {
    // Pass 1: initialize mip 0 of the HZB from the scene depth.
    MTL::ComputeCommandEncoder* hzbInit = commandBuffer->computeCommandEncoder();
    hzbInit->setComputePipelineState(m_hzbInitPipeline);
    hzbInit->setTexture(m_depthTexture, 0);
    hzbInit->setTexture(m_hzbMipViews[0], 1);
    if (m_pointClampSampler) {
        hzbInit->setSamplerState(m_pointClampSampler, 0);
    } else if (m_linearClampSampler) {
        hzbInit->setSamplerState(m_linearClampSampler, 0);
    }
    const uint32_t hzbWidth = m_hzbMipViews[0]->width();
    const uint32_t hzbHeight = m_hzbMipViews[0]->height();
    const uint32_t threads = 8;
    MTL::Size tgSize = MTL::Size(threads, threads, 1);
    MTL::Size gridSize = MTL::Size((hzbWidth + threads - 1) / threads * threads,
                                   (hzbHeight + threads - 1) / threads * threads,
                                   1);
    hzbInit->dispatchThreads(gridSize, tgSize);
    hzbInit->endEncoding();

    // Pass 2..n: downsample each mip from the previous one,
    // currently in a separate encoder per mip.
    for (size_t mip = 1; mip < m_hzbMipViews.size(); ++mip) {
        MTL::Texture* src = m_hzbMipViews[mip - 1];
        MTL::Texture* dst = m_hzbMipViews[mip];
        if (!src || !dst) {
            continue;
        }
        MTL::ComputeCommandEncoder* downEncoder = commandBuffer->computeCommandEncoder();
        downEncoder->setComputePipelineState(m_hzbDownsamplePipeline);
        downEncoder->setTexture(src, 0);
        downEncoder->setTexture(dst, 1);
        const uint32_t mipWidth = dst->width();
        const uint32_t mipHeight = dst->height();
        MTL::Size downGrid = MTL::Size((mipWidth + threads - 1) / threads * threads,
                                       (mipHeight + threads - 1) / threads * threads,
                                       1);
        downEncoder->dispatchThreads(downGrid, tgSize);
        downEncoder->endEncoding();
    }

    // Finally, cull instances against the completed HZB.
    if (m_instanceCullHzbPipeline) {
        dispatchInstanceCulling(m_instanceCullHzbPipeline, true);
    }
}
My Questions:
Encoder Synchronization: Would you recommend moving this loop into a single ComputeCommandEncoder, using an MTLBarrier between dispatches to maintain L2 cache residency, or is the overhead of separate encoders negligible for depth downsampling on TBDR? (A sketch of the single-encoder variant follows these questions.)
visionOS Bindless Latency: For stereo rendering on visionOS, what are the best practices for managing MTL4ArgumentTable updates at 90Hz+? I want to ensure that updating bindless resources for each eye doesn't introduce unnecessary CPU-to-GPU latency.
Memory Management: Are there specific hints for Memoryless textures that could be applied to intermediate HZB levels to save bandwidth during this process?
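As a point of comparison for question 1, here is a minimal Swift sketch (metal-cpp is analogous) of the single-encoder variant with a texture-scope barrier between dispatches; whether it beats separate encoders is something only profiling can settle, and the resource names are placeholders:

import Metal

// Sketch: HZB mip-chain downsample in one compute encoder, using a
// texture-scope memory barrier between dispatches instead of splitting
// the chain across encoders. `hzbMips` and `downsamplePipeline` stand
// in for the engine's own resources.
func encodeHZBChain(commandBuffer: MTLCommandBuffer,
                    downsamplePipeline: MTLComputePipelineState,
                    hzbMips: [MTLTexture]) {
    guard let encoder = commandBuffer.makeComputeCommandEncoder() else { return }
    encoder.setComputePipelineState(downsamplePipeline)
    let tg = MTLSize(width: 8, height: 8, depth: 1)
    for mip in 1..<hzbMips.count {
        encoder.setTexture(hzbMips[mip - 1], index: 0)
        encoder.setTexture(hzbMips[mip], index: 1)
        encoder.dispatchThreads(
            MTLSize(width: hzbMips[mip].width, height: hzbMips[mip].height, depth: 1),
            threadsPerThreadgroup: tg)
        // Make writes to this mip visible to the next dispatch's reads.
        encoder.memoryBarrier(scope: .textures)
    }
    encoder.endEncoding()
}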
I’ve attached a screenshot of a scene rendered with the engine (PBR, SSR, and IBL).
So I, and many other big YouTubers out there, use iPhones for intense gaming such as Call of Duty and Fortnite. However, Apple never listens, especially to those of us who have been your clients since the early 2000s. Users have different setups for their games, and a lot of the time, in an intense competition, the low-battery animation pops up and blocks our screens; the 3 to 4 seconds it takes to disappear makes us lose a game. Apple needs to implement a feature that lets us toggle this notification off even if our phone is about to die. A more efficient route would be to let us toggle, or set by default, a low-battery indicator at the top of the screen, where the black bar expands when you start charging your phone, so it doesn't affect gameplay at all. This is a crucial thing Apple needs to do; many people won't report it because Apple never listens. Another great feature would be a charging port on the side for claw players.
I'm new to graphics and game design, and I just wanted to know whether a compute pipeline can be as efficient as a render pipeline for rasterization, with an explanation of how and why. Also, is it possible to perform rasterization manually, in the sense of manipulating individual pixel data in a Metal texture yourself, but do it with a render pipeline?
Can you make a global real-life simulator roleplay game? Try to get 8 different developers to help build it.
(Similar to The Sims and other games of that kind, or GTA and Greenville RP in Roblox minigames.) A family-life simulator; make it as realistic as it can be.
The server could host billionaires, or more than that, and support multiple people both online and offline. People could play it on all kinds of devices: Android, iPad, Google Play, iOS, and other platforms. Please! I will make an image of what I want to be in the game and then show it to you all. Thanks. They could play over Wi-Fi/Internet hotspots and offline.
If I want to edit an image in the Preview app, the only options are to rotate left or right in 90-degree steps. There is no option to rotate by an arbitrary angle. Please look into this and provide that option in a future update.
I'm having a heck of a time getting this to work. I'm trying to add an event notification at the end of a timeline animation to trigger something in code, but I'm not receiving the notification from RC Pro. I've watched that "Compose interactive 3D content" video quite a few times now and have tried many different approaches. RC Pro has the correct ID names on the notifications. I'm not a programmer at all, just a lowly 3D artist. Here is my code...
import SwiftUI
import RealityKit
import RealityKitContent

extension Notification.Name {
    static let button1Pressed = Notification.Name("button1pressed")
    static let button2Pressed = Notification.Name("button2pressed")
    static let button3Pressed = Notification.Name("button3pressed")
}

struct MainButtons: View {
    @State private var transitionToNextSceneForButton1 = false
    @State private var transitionToNextSceneForButton2 = false
    @State private var transitionToNextSceneForButton3 = false
    @Environment(AppModel.self) var appModel
    @Environment(\.dismissWindow) var dismissWindow

    // Notification publishers for each button
    private let button1PressedReceived = NotificationCenter.default.publisher(for: .button1Pressed)
    private let button2PressedReceived = NotificationCenter.default.publisher(for: .button2Pressed)
    private let button3PressedReceived = NotificationCenter.default.publisher(for: .button3Pressed)

    var body: some View {
        ZStack {
            RealityView { content in
                // Load your RC Pro scene that contains the 3D buttons.
                if let immersiveContentEntity = try? await Entity(named: "MainButtons", in: realityKitContentBundle) {
                    content.add(immersiveContentEntity)
                }
            }
            // Optionally attach a gesture if you want to debug a generic tap:
            .gesture(
                TapGesture().targetedToAnyEntity().onEnded { value in
                    print("3D Object tapped")
                    _ = value.entity.applyTapForBehaviors()
                    // Do not post a test notification here; rely on RC Pro timeline events.
                }
            )
        }
        .onAppear {
            dismissWindow(id: "main")
            // Remove any test notification posting code.
        }
        // Listen for distinct button notifications.
        .onReceive(button1PressedReceived) { _ in
            print("Button 1 pressed notification received")
            transitionToNextSceneForButton1 = true
        }
        .onReceive(button2PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 2 pressed notification received")
            transitionToNextSceneForButton2 = true
        }
        .onReceive(button3PressedReceived.receive(on: DispatchQueue.main)) { _ in
            print("Button 3 pressed notification received")
            transitionToNextSceneForButton3 = true
        }
        // Present next scenes for each button as needed. For example, for button 1:
        .fullScreenCover(isPresented: $transitionToNextSceneForButton1) {
            FacilityTour()
                .environment(appModel)
        }
        // You can add additional fullScreenCover modifiers for button 2 and 3 transitions.
    }
}
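For comparison, Apple's visionOS samples receive RC Pro timeline Notification actions under a single shared Notification.Name, with the action's identifier carried in userInfo, rather than under custom names. A minimal sketch of subscribing that way; the identifier strings are assumed to match the RC Pro scene above:

import SwiftUI

// Sketch: RC Pro timeline "Notification" actions post to NotificationCenter
// under one shared name; the per-action ID travels in userInfo. The
// "button1pressed" string is an assumption matching the scene above.
struct TimelineNotificationListener: View {
    @State private var lastTimelineEvent = ""

    private let timelineNotifications = NotificationCenter.default.publisher(
        for: Notification.Name("RealityKit.NotificationTrigger"))

    var body: some View {
        Text(lastTimelineEvent)
            .onReceive(timelineNotifications) { notification in
                guard let id = notification.userInfo?["RealityKit.NotificationTrigger.Identifier"] as? String
                else { return }
                lastTimelineEvent = id // e.g. "button1pressed"
            }
    }
}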