With the introduction of ProMotion devices, the system may switch frame rates in certain scenarios, so data collected from CADisplayLink callbacks under the assumption of a fixed 60 Hz frame rate loses its value as a reference. We cannot tell whether a slow CADisplayLink callback is caused by a stutter or by the system switching frame rates.
I'm aware of the Hitch Time Ratio, but I can't use that approach for several reasons.
How can I distinguish between a stutter and a frame-rate switch in the CADisplayLink callback?
In iOS 15, CADisplayLink.preferredFrameRateRange.preferred always returns 0, while minimum and maximum do change. Can I use these minimum and maximum range values as criteria to distinguish between frame rate switching and stuttering?
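One heuristic I'm considering (my own assumption, not a documented approach): instead of comparing against a hard-coded 60 Hz interval, measure how late each callback arrives relative to the previous frame's targetTimestamp. A frame-rate switch moves targetTimestamp along with the callback cadence, so it shouldn't register as lateness, while a genuine stutter should. A minimal sketch:
import QuartzCore

final class FrameMonitor: NSObject {
    private var previousTarget: CFTimeInterval?

    // Register with CADisplayLink(target:selector:) as usual.
    @objc func step(link: CADisplayLink) {
        defer { previousTarget = link.targetTimestamp }
        guard let previousTarget else { return }

        // How much later than the previous frame's deadline did this callback fire?
        let lateness = link.timestamp - previousTarget
        // The 0.5 tolerance is arbitrary; tune it for your app.
        if lateness > link.duration * 0.5 {
            print("Possible hitch: \(lateness * 1000) ms late")
        }
        // If lateness stays near zero while link.duration changes,
        // the display has probably just shifted frame-rate gears.
    }
}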
I use SCNScene.write(to:) to export a scene to a ".usdz" file,
then open that ".usdz" with SCNScene(url:), and the colors of all nodes fade.
If I export and load only once, the fading is not obvious. If I repeat the round trip 4 or 5 times, it becomes obvious that the colors no longer match the originals and have become much lighter.
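For reference, the round trip looks roughly like this (a sketch; the URL and error handling are my own):
import SceneKit

func roundTrip(scene: SCNScene, to url: URL) throws -> SCNScene {
    // Export the scene to .usdz; write(to:options:delegate:progressHandler:) returns false on failure.
    guard scene.write(to: url, options: nil, delegate: nil, progressHandler: nil) else {
        throw NSError(domain: "USDZExport", code: 1)
    }
    // Reload the exported file. Repeating this round trip 4 or 5 times makes the color shift obvious.
    return try SCNScene(url: url, options: nil)
}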
When I take a frame capture of my application in Xcode, it shows a warning that reads "Your application created separate command encoders which can be combined into a single encoder. By combining these encoders you may reduce your application's load/store bandwidth usage."
In the minimal reproduction case I've identified for this warning, I have two render pipeline states: The first writes to the current drawable, the depth buffer, and a secondary color buffer. The second writes only to the current drawable.
Because these are writing to a different set of outputs, I was initially creating two separate render command encoders to handle the draws under each of these states.
My understanding is that Xcode is telling me I could create just one. However, when I try that, I get runtime asserts when applying the second render pipeline state, since it doesn't have a matching attachment configured for the second color buffer or for the depth buffer, so I can't simply combine the encoders.
Is the only solution here to detect and propagate forward the color/depth attachments from the first state into the creation of the second state?
Is there any way to suppress this specific warning in Xcode?
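For what it's worth, the approach from my question above (propagating the first state's attachments into the second) would look roughly like this sketch, where the function names and pixel formats are placeholders. The second pipeline declares the same attachments as the render pass but masks off writes to the ones it doesn't touch, so both pipelines can be bound in a single encoder:
import Metal

func makeSecondPipeline(device: MTLDevice, library: MTLLibrary) throws -> MTLRenderPipelineState {
    let descriptor = MTLRenderPipelineDescriptor()
    descriptor.vertexFunction = library.makeFunction(name: "secondPassVertex")     // placeholder name
    descriptor.fragmentFunction = library.makeFunction(name: "secondPassFragment") // placeholder name

    // Attachment 0: the drawable this pipeline actually writes.
    descriptor.colorAttachments[0].pixelFormat = .bgra8Unorm

    // Attachment 1 and depth: declared to match the render pass, but never written by this pipeline.
    descriptor.colorAttachments[1].pixelFormat = .rgba16Float   // placeholder secondary-buffer format
    descriptor.colorAttachments[1].writeMask = []                // disable color writes from this pipeline
    descriptor.depthAttachmentPixelFormat = .depth32Float        // depth writes would be disabled separately via the depth-stencil state

    return try device.makeRenderPipelineState(descriptor: descriptor)
}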
Topic:
Graphics & Games
SubTopic:
Metal
Hi
Hopefully someone can share some ideas on how to accomplish this.
I know we can load models from realityKitContentBundle like
let model = try? await Entity(named: "testModel", in: realityKitContentBundle)
But this loads from the root of RealityKitContent.rkassets; if I have the models in some subfolder, then I have to provide the complete path, like
let model = try? await Entity(named: "/superModels/testModel", in: realityKitContentBundle)
What I want is to be able to search recursively in all folders for that file as I have several subfolders with different models.
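As a stopgap I could brute-force it by trying a list of candidate subfolders until one loads; a sketch (the folder names are placeholders, and as far as I know there's no public API to enumerate .rkassets contents at runtime):
import RealityKit
import RealityKitContent

func loadModel(named name: String,
               searching subfolders: [String] = ["", "superModels", "otherModels"]) async -> Entity? {
    for folder in subfolders {
        // Build "folder/name", or just "name" for the bundle root.
        let path = folder.isEmpty ? name : "\(folder)/\(name)"
        if let entity = try? await Entity(named: path, in: realityKitContentBundle) {
            return entity
        }
    }
    return nil
}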
Any suggestions?
Thanks in advance.
Guillermo
I want to use RealityKit to create a custom material that can use my own shader and supports mesh instancing (for rendering 3D Gaussian splatting), but I found that CustomMaterial is not supported on visionOS. Is there another interface that can meet my needs? Where can I find examples?
Topic:
Graphics & Games
SubTopic:
RealityKit
Using Reality Composer Pro 2.0, I created a simple shader graph that displays a texture on an unlit surface:
On visionOS 2 beta, I can successfully use ShaderGraphMaterial(named:from:in:) to load that shader graph material and assign it to a model entity.
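For reference, the call that works on visionOS 2 looks roughly like this (the material path and file name here are placeholders):
import RealityKit
import RealityKitContent

func applyMaterial(to modelEntity: ModelEntity) async throws {
    let material = try await ShaderGraphMaterial(
        named: "/Root/UnlitTextureMaterial",   // placeholder path to the material prim
        from: "Scene.usda",                    // placeholder file inside RealityKitContent.rkassets
        in: realityKitContentBundle
    )
    modelEntity.model?.materials = [material]
}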
However, on visionOS 1.2 and earlier, either in Simulator or on the device, ShaderGraphMaterial(named:from:in:) fails and I see the following logged to the console:
If, using Reality Composer Pro 1.0, I experimentally open the same project and delete and recreate exactly the same nodes above, then ShaderGraphMaterial(named:from:in:) works as expected on visionOS 1.2.
Is it a known issue that Reality Composer Pro 2 can't be used with visionOS 1?
Is this intentional behavior?
I've submitted feedback as FB14828873, including a sample project and repro steps.
If possible, I would appreciate guidance from an Apple engineer, like "This is a known issue for [list of node types]" or "Reality Composer Pro 2 is not supported for visionOS 1 development, please refer to [documentation]" or "We recommend [workaround]."
Thank you.
I have a scene built up in RealityComposerPro, in which I've added a ParticleEmitter with isEmitting set to False and 'Loop' set to True.
In my app, when I toggle isEmitting to True there can be a delay of a few seconds before the ParticleEmitter starts.
However, if I programmatically add the emitter in code at that point, it starts immediately.
To be clear, I'm seeing this on the VisionOS simulator - I don't have access to a device at this time.
Am I misunderstanding how to control the ParticleEmitter when I need precise control over when it starts?
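For context, the toggle I'm doing is roughly this (a sketch; the entity lookup is a placeholder):
import RealityKit

func startEmitting(from rootEntity: Entity) {
    guard let emitterEntity = rootEntity.findEntity(named: "ParticleEmitter"),   // placeholder name
          var emitter = emitterEntity.components[ParticleEmitterComponent.self] else { return }
    emitter.isEmitting = true
    // Components are value types, so the modified copy must be written back.
    emitterEntity.components.set(emitter)
}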
I would like a monthly recurring leaderboard, but the most days one can set for the recurring leaderboard is 30, and some months have 31 days (or 28/29). This is dumb. I guess I have to manually reset a classic leaderboard each month to get this result?
Additionally once it closes and is about to reset (I also have daily recurring leaderboards), I'd like to grant the top placers on the leaderboard a corresponding achievement, but I don't see any way of doing this.
I believe I can do all these things on PlayFab, but it'll take a bit more work, and eventually cost.
Anyone have advice?
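For the achievement part, the closest client-side approximation I can think of is to check whether the local player finished in the top N and report the achievement for them (a sketch; the leaderboard and achievement IDs are placeholders, and this obviously can't grant achievements to other players):
import GameKit

func rewardTopPlacersIfNeeded() async throws {
    // Placeholder leaderboard ID.
    guard let leaderboard = try await GKLeaderboard.loadLeaderboards(IDs: ["monthly.score"]).first else { return }

    // Fetch the local player's entry alongside the current top 10.
    let (localEntry, _, _) = try await leaderboard.loadEntries(
        for: .global,
        timeScope: .allTime,
        range: NSRange(location: 1, length: 10)
    )

    if let localEntry, localEntry.rank <= 10 {
        let achievement = GKAchievement(identifier: "top10.monthly")   // placeholder achievement ID
        achievement.percentComplete = 100
        achievement.showsCompletionBanner = true
        try await GKAchievement.report([achievement])
    }
}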
Topic:
Graphics & Games
SubTopic:
GameKit
Hello!
I'm developing a GPU (shader) language, where I aim to target multiple backends with a common frontend. I wanted to avoid having to round trip through Metal, and go straight to IR just like I have with SPIRV, in order to have a fast and efficient compilation process.
I've been looking for a reference page where I can read about Metal's IR; as far as I'm aware it exists, but I can't seem to find it anywhere.
Furthermore, if such a reference is available, is there also a toolkit where I can run validation on the output IR, and perhaps even run optimizations, much like spv-tools for SPIRV?
Any help would be appreciated!
Thanks,
Gustav
Hello RealityKit developers,
I'm currently working on physics simulations in my visionOS app and am trying to adapt the concepts from the official sample Simulating physics joints in your RealityKit app.
In the sample, a sphere is connected to the ceiling using a PhysicsRevoluteJoint to create a hinge-like simulation. I've successfully modified this setup to use a PhysicsSphericalJoint instead.
The basic replacement works as expected: pin1 (attached to the sphere) rotates freely around pin0 (attached to the ceiling), much like a ball-and-socket joint should, removing all translational degrees of freedom.
My challenge lies with the PhysicsSphericalJoint's angularLimitInYZ property. The documentation mentions that this property allows limiting the rotation around the Y and Z axes, defining an "elliptical cone shape around the x-axis of pin0." However, I'm struggling to understand how to specify these values to achieve a desired rotational limit.
If I have a sphere that is currently capable of rotating 360 degrees around pin0 (like a free-spinning ball on a string), how would I use angularLimitInYZ to restrict its rotation to a certain height or angular range, preventing it from completing a full circle?
Specifically, I'm trying to achieve a "swing" like behavior where the sphere oscillates back and forth but cannot rotate completely overhead or underfoot. What values or approach should I use for the angularLimitInYZ tuple to define such a restricted pendulum-like motion?
Any insights, code examples, or explanations on how to properly configure angularLimitInYZ for this kind of behavior would be incredibly helpful!
The following code is modified from the sample.
extension MainView {
    func addPinsTo(ballEntity: Entity, attachmentEntity: Entity) throws {
        // Orient the pins so their x-axes point along the original hinge axis.
        let hingeOrientation = simd_quatf(from: [1, 0, 0], to: [0, 0, 1])

        let attachmentPin = attachmentEntity.pins.set(
            named: "attachment_hinge",
            position: .zero,
            orientation: hingeOrientation
        )

        // Place the ball's pin at the attachment point, expressed in the ball's space.
        let relativeJointLocation = attachmentEntity.position(
            relativeTo: ballEntity
        )
        let ballPin = ballEntity.pins.set(
            named: "ball_hinge",
            position: relativeJointLocation,
            orientation: hingeOrientation
        )

        // Create a PhysicsSphericalJoint between the two pins.
        let sphericalJoint = PhysicsSphericalJoint(pin0: attachmentPin, pin1: ballPin)
        try sphericalJoint.addToSimulation()
    }
}
The following image is a screenshot of the behavior after switching to PhysicsSphericalJoint.
Thank you in advance for your assistance.
Hello,
I'm getting this error when launching a SpriteKit Swift game in iOS 18+ on an iPhone 11 Pro, whose shell is partly damaged in the back:
CHHapticEngine.mm:1206 -[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
Haptics do not work on this device because of the damaged shell, so some error when calling start(completionHandler:) is definitely expected. What is not expected is that the main thread sometimes blocks for up to 5 seconds, even though the method is not called from the main thread, and the error itself is always logged from some other secondary (system) thread. During this time the main thread does not access the haptics engine at all. On average, the block happens once every four or five launches, and in every launch (blocking or not) the 'nope' error appears about 5 seconds after trying to start the engine.
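For context, the start pattern in question is roughly this (a sketch; the queue choice and error handling are simplified):
import CoreHaptics

func startHaptics() {
    // Deliberately off the main thread, as described above.
    DispatchQueue.global(qos: .userInitiated).async {
        do {
            let engine = try CHHapticEngine()
            engine.start(completionHandler: { error in
                if let error {
                    // 1852797029 is the four-character code 'nope'.
                    print("Haptic engine failed to start: \(error)")
                }
            })
            // Real code keeps a strong reference to the engine; it's omitted here.
        } catch {
            print("Could not create haptic engine: \(error)")
        }
    }
}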
After going nuts with all kinds of breakpoints and instrumentation, I'm at a loss as to why the main thread would sometimes block...
Ideas, anyone?
Thank you,
D.
I have an SPFx React application where I print the HTML page content using JavaScript's default window.print() functionality. Once I save the page as a PDF from the print preview window and open it in Adobe Acrobat, the links within the content (e.g., Google) are not clickable and appear as plain text.
I have tried printing random pages after searching for keywords in Google and saving them as PDFs, but unfortunately the links are not clickable there either.
To check whether it is an Adobe Acrobat issue, I performed the same print from Android devices and shared the PDF with iOS devices; in that case, when opened in Adobe Acrobat, the links are clickable.
I am wondering whether it is something related to how the default print functionality works for iPadOS and iOS devices. Any insights on this would be really helpful. Thanks!!!
Note: The links are clickable on macOS as well as on Windows.
#ios #ipados #javascript #spfx #react
I am working on a custom resolve tile shader for a client. I see a big difference in performance depending on where we write to:
1- the resolve texture of the color attachment
2- a rw tile shader texture set via [renderEncoder setTileTexture: myResolvedTexture]
Option 2 is more than twice as slow as option 1.
Our compute shader writes to 4 UAVs, so just using the resolve texture entry is not possible.
Why such a difference as there is no more data being written? Can option 2 be as fast as option 1?
I can demonstrate the issue in a modified version of the Multisample code sample.
I am building a macOS desktop app (https://anukari.com) that uses Metal compute to do real-time audio/DSP processing, because my problem is highly parallelizable and too computationally expensive for the CPU.
However, with the way I am using the GPU, the OS never raises the power/performance state, even when my app is fully compute-limited. Because this is a real-time audio synthesis application, not being able to take advantage of the full clock speeds the GPU is capable of is a huge problem: the app can't keep up with real time.
I discovered this issue while profiling the app using Instrument's Metal tracing (and Game tracing) modes. In the profiling configuration under "Metal Application" there is a drop-down to select the "Performance State." If I run the application under Instruments with Performance State set to Maximum, it runs amazingly well, and all my problems go away.
For comparison, when I run the app on its own, outside of Instruments, the expensive GPU computation it's doing takes around 2x as long to complete, meaning that the app performs half as well.
I've done a ton of work to micro-optimize my Metal compute code, based on every scrap of information from the WWDC videos, etc. A problem I'm running into is that I think that the more efficient I make my code, the less it signals to the OS that I want high GPU clock speeds!
I think part of why the OS is confused is that in most use cases, my computation can be done using only a small number of Metal threadgroups. I'm guessing that the OS heuristics see that only a small fraction of the GPU is saturated and fail to scale up the power/clock state.
I'm not sure what to do here; I'm in a bit of a bind. One possibility is that I intentionally schedule busy work -- spin threadgroups just to waste energy and signal to the OS that I need higher clock speeds. This is obviously a really bad idea, but it might work.
Is there any other (better) way for my app to signal to the OS that it is doing real-time latency-sensitive computation on the GPU and needs the clock speeds to be scaled up?
Note that game mode is not really an option, as my app also runs as an AU plugin inside hosts like Garageband, so it can't be made fullscreen, etc.
This week, I developed a small multiplatform RealityKit project. I also created a demo scene in Reality Composer Pro. Afterward, I imported the local package into the project. Running the project on macOS works perfectly. However, when I tried to run it on my iPhone, I encountered a permission error indicating that it couldn’t read the package. This seems unusual to me because I assumed that the dependency is bundled into the binary file. In an attempt to resolve the issue, I pushed the RCP package to GitHub, hoping it would work. Fortunately, everything compiles successfully now, but the loading time is significantly long, and the animations don’t play on tap gestures.
Could someone please help me identify the root cause of this problem?
I've loaded a ShaderGraphMaterial from a RealityKit content bundle and I'm attempting to access the initial values of its parameters using getParameter(handle:), but this method appears to always return nil:
let shaderGraphMaterial = try await ShaderGraphMaterial(named: "MyMaterial", from: "MyFile")
let namedParameterValue = shaderGraphMaterial.getParameter(name: "myParameter")
// This prints the value of the `myParameter` parameter, as expected.
print("namedParameterValue = \(namedParameterValue)")
let handle = ShaderGraphMaterial.parameterHandle(name: "myParameter")
let handleParameterValue = shaderGraphMaterial.getParameter(handle: handle)
// Expected behavior: prints the value of the `myParameter` parameter, as above.
// Observed behavior: prints `nil`.
print("handleParameterValue = \(handleParameterValue)")
Is this expected behavior?
Based on the documentation at https://developer.apple.com/documentation/realitykit/shadergraphmaterial/getparameter(handle:) I'd expect getParameter(handle:) to return the value of the parameter, just as getParameter(name:) does.
I've tested this on iOS 18.5 and iOS 26.0 beta 2.
Assuming getParameter(handle:) is working as designed here, is the following ShaderGraphMaterial extension an appropriate workaround, or can you recommend a better approach?
Thank you.
public extension ShaderGraphMaterial {
    /// Reassigns the values of all named material parameters using the handle-based API.
    ///
    /// This works around an issue where, at least as of RealityKit 26.0 beta 2 and
    /// earlier, `getParameter(handle:)` will always return `nil` when used to read the
    /// initial value of a shader graph material parameter read using
    /// `ShaderGraphMaterial(named:from:in:)`, whereas `getParameter(name:)` will work
    /// as expected.
    private mutating func copyNamedParametersToHandles() {
        for parameterName in self.parameterNames {
            if let value = self.getParameter(name: parameterName) {
                let handle = ShaderGraphMaterial.parameterHandle(name: parameterName)
                do {
                    try self.setParameter(handle: handle, value: value)
                } catch {
                    assertionFailure("Cannot set parameter value")
                }
            }
        }
    }
}
Topic:
Graphics & Games
SubTopic:
RealityKit
Tags:
RealityKit
Reality Composer Pro
Shader Graph Editor
visionOS
Is there any limitation in Vision Pro when loading scenes with large-scale models?
Test Case:
Asset: Composite USDA file containing 10 individual models (total triangle count: ~4.2M)
Simulator: Loads and renders correctly
Real Device:
Loads the asset successfully, but fails during the rendering phase:
Environment abruptly dims
System spontaneously reboots
How can we resolve this issue?
Below are excerpted logs preceding the crash:
<<<< FigAudioSession(AV) >>>> audioSessionAVAudioSession_CopyMXSessionProperty signalled err=-19224 (kFigAudioSessionError_UnsupportedOperation) (getMXSessionProperty unsupported) at FigAudioSession_AVAudioSession.m:606
Attempted to add ornament: <MRUIPlatterOrnament: 0x10a658f00; _isInternal: YES; _displaceWindowChrome: NO; _canCaptureUI: NO; _isBeingRemoved: NO; contentAnchorPoint3D: "{0.5, 0.5, 0}"; position: <MRUIPlatterOrnamentRelativePosition: 0x105b68e70; anchorPoint: {0.5, 0.5, 1}>; rotation: "{{0, 0, 0}, 0}"; opacity: 1.000000; canFollowUser: YES; effectiveOffset: "{0, 0, 0}"; presentingViewController: 0x0; billboardingBehavior: 0x0; scalingBehavior: 0x0; relativeToParent: NO; nonHeritableDepthDisplacement: 0.000000; order: 0.000000; _window._determinedSize: {0, 0}; _window: (null)> to nil or non-supporting UIScene: <UIWindowScene: 0x10a8a0000; role: UISceneSessionRoleImmersiveSpaceApplication; persistentIdentifier: test.test:SFBSystemService-BA3A21A3-D1AB-42E2-8AF0-AE0AB83BE528; activationState: UISceneActivationStateUnattached>. No action taken.
Failed to set dependencies on asset 2823930584475958382 because NetworkAssetManager does not have an asset entity for that id.
apply fence tx failed (client=0x98490e18) [0x10000003 (ipc/send) invalid destination port]
Failed to commit transaction (client=0xa86516e2) [0x10000003 (ipc/send) invalid destination port]
If I create a bitmap image and then try to get ready to draw into it, like so:
NSBitmapImageRep* newRep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes: nullptr
                  pixelsWide: 128
                  pixelsHigh: 128
               bitsPerSample: 8
             samplesPerPixel: 4
                    hasAlpha: YES
                    isPlanar: NO
              colorSpaceName: NSDeviceRGBColorSpace
                bitmapFormat: NSBitmapFormatAlphaNonpremultiplied |
                              NSBitmapFormatThirtyTwoBitBigEndian
                 bytesPerRow: 4 * 128
                bitsPerPixel: 32];
[NSGraphicsContext setCurrentContext:
    [NSGraphicsContext graphicsContextWithBitmapImageRep: newRep]];
then the log shows this error:
CGBitmapContextCreate: unsupported parameter combination:
RGB
8 bits/component, integer
512 bytes/row
kCGImageAlphaLast
kCGImageByteOrderDefault
kCGImagePixelFormatPacked
Valid parameters for RGB color space model are:
16 bits per pixel, 5 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaNoneSkipLast
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedFirst
32 bits per pixel, 8 bits per component, kCGImageAlphaPremultipliedLast
32 bits per pixel, 10 bits per component, kCGImageAlphaNone|kCGImagePixelFormatRGBCIF10|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast
64 bits per pixel, 16 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
64 bits per pixel, 16 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents|kCGImageByteOrder16Little
128 bits per pixel, 32 bits per component, kCGImageAlphaPremultipliedLast|kCGBitmapFloatComponents
128 bits per pixel, 32 bits per component, kCGImageAlphaNoneSkipLast|kCGBitmapFloatComponents
See Quartz 2D Programming Guide (available online) for more information.
If I don't use NSBitmapFormatAlphaNonpremultiplied as part of the format, I don't get the error message. My question is, why does the constant NSBitmapFormatAlphaNonpremultiplied exist if you can't use it like this?
If you're wondering why I wanted to do this: I want to extract the RGBA pixel data from an image, which might have non-premultiplied alpha. And elsewhere online, I saw advice that if you want to look at the pixels of an image, draw it into a bitmap whose format you know and look at those pixels. And I don't want the process of drawing to premultiply my alpha.
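One workaround I can think of, accepting the precision loss at low alpha that I was hoping to avoid, is to draw into a premultiplied context (which CGBitmapContextCreate does accept) and un-premultiply afterwards. A sketch:
import AppKit

func nonPremultipliedRGBA(from image: NSImage, width: Int, height: Int) -> [UInt8]? {
    guard let context = CGContext(
        data: nil,                       // let CoreGraphics own the backing store
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 0,                  // let CoreGraphics choose the row stride
        space: CGColorSpace(name: CGColorSpace.sRGB)!,
        bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue
    ) else { return nil }

    // Draw the NSImage into the premultiplied context.
    NSGraphicsContext.saveGraphicsState()
    NSGraphicsContext.current = NSGraphicsContext(cgContext: context, flipped: false)
    image.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
    NSGraphicsContext.restoreGraphicsState()

    guard let data = context.data else { return nil }
    let bytesPerRow = context.bytesPerRow
    var pixels = [UInt8](repeating: 0, count: width * height * 4)

    // Copy out and un-premultiply (divide each color channel by alpha).
    for y in 0..<height {
        let row = data.advanced(by: y * bytesPerRow).assumingMemoryBound(to: UInt8.self)
        for x in 0..<width {
            let src = row + x * 4
            let dst = (y * width + x) * 4
            let alpha = Int(src[3])
            for c in 0..<3 {
                pixels[dst + c] = alpha == 0 ? 0 : UInt8(min(255, Int(src[c]) * 255 / alpha))
            }
            pixels[dst + 3] = src[3]
        }
    }
    return pixels
}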
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands, using the front camera on iPhone.
I was able to do it with the back camera, but not the front.
Also, if there is any sample code or documentation, I would be super happy.
Waiting for your reply!!
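One direction that might work for me is running Vision's body-pose request on front-camera frames rather than RealityKit/ARKit motion capture; a sketch (the joint choices, confidence threshold, and orientation are my assumptions):
import Vision

// `pixelBuffer` would come from an AVCaptureSession delegate using the front camera.
func handsAreRaised(in pixelBuffer: CVPixelBuffer) -> Bool {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, orientation: .leftMirrored, options: [:])
    guard (try? handler.perform([request])) != nil,
          let observation = request.results?.first,
          let points = try? observation.recognizedPoints(.all) else { return false }

    // Vision's normalized coordinates put the origin at the bottom-left, so larger y means higher.
    guard let leftWrist = points[.leftWrist], let rightWrist = points[.rightWrist],
          let leftShoulder = points[.leftShoulder], let rightShoulder = points[.rightShoulder],
          leftWrist.confidence > 0.3, rightWrist.confidence > 0.3 else { return false }

    return leftWrist.location.y > leftShoulder.location.y
        && rightWrist.location.y > rightShoulder.location.y
}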
Topic:
Graphics & Games
SubTopic:
RealityKit
Tags:
Swift Student Challenge
Swift
ARKit
Swift Playground
Hello, I am experiencing an issue with a character animation using RealityKit.
I have a file created in Blender that contains the rigged Character, a sword, and a shield. The sword and the shield have bones connected to the character's hands so they can follow the character's animation. When I run the animation in Blender and preview the exported USDZ file on Mac, I can see the sword and shield attached to the hands and the animation is fine. But when I add the USDZ model in RealityKit and play the animation, only the character is animating, the sword and the shield are not moving at all.
This is the code I use to animate the character:
private func loadAnimations() {
    // Play the first available animation on the loaded character entity.
    let unifiedAnimations = children[0].availableAnimations.first!.definition
    let animationResource = try! AnimationResource.generate(with: unifiedAnimations)
    self.children[0].playAnimation(
        animationResource.repeat()
    )
}
Here is a link with a Demo Xcode project, the Blender file, and a video:
https://www.dropbox.com/scl/fi/ypq2iwxc5f9dwzjggsvin/AppleTest.zip?rlkey=wiag3rg44urhjdh2wal8cdx2u&st=vbpf7x11&dl=0
STEPS TO REPRODUCE
I have created a demo project that displays the character on a horizontal surface, and the animation starts playing.
Run the App. The ARView will be set up, and a yellow square will appear in the middle of the screen.
When a horizontal surface is detected, the yellow square changes to indicate that the surface has been found.
Tap on the screen to load the USDZ model and position it at the yellow square's location.
The Animation will start playing and you can see that the character is animating, but the sword and shield remain still.
Thanks
Topic:
Graphics & Games
SubTopic:
RealityKit