I am using Apple's original Lightning Digital AV Adapter (Lightning-to-HDMI dongle) to connect my iPhone to an external display via an HDMI cable.
I need to synchronize rendering with the external display's refresh rate, so I create a new CADisplayLink tied to the external display's UIScreen: UIScreen.screens[externalDisplayIdx].displayLink(withTarget:, selector:).
The callback is being called regularly, but with increasing delay relative to the CADisplayLink.timestamp, so the next time the callback is called, I have less and less time to draw the next frame (see the snippet below).
Assuming 60 FPS, the value of secondsTillDeadline starts at an arbitrary value in the range of approx -0.0001 to 0.0166667, and then it slowly decreases towards zero (and for a brief period it goes into small negative numbers). Once it reaches zero, it flips back to 0.0166667 and continues to decrease again. This cycle repeats indefinitely.
Changing the external display's resolution (UIScreen's mode) or the CADisplayLink's preferredFrameRateRange to a lower FPS does not seem to have any effect on the temporal drifting (even the rate of change seems to be the same).
When I create a new CADisplayLink for the iPhone's main screen, the value of secondsTillDeadline is stable: it does not drift and stays very close to 0.0166667, as expected.
Is this drift caused by the external monitor or by Apple's Lightning-to-HDMI dongle ...or is the problem somewhere else?
Can the drifting be stopped?
@objc func onDisplayLinkUpdate(displayLink: CADisplayLink) {
    // Gradually decreases from 0.01667 to -0.0001, then flips back to 0.01667 and continues to decrease
    let secondsTillDeadline = displayLink.targetTimestamp - CACurrentMediaTime()
}
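For completeness, this is roughly how the external-screen display link is created and driven (a minimal sketch; externalDisplayIdx and the frame-rate range are placeholders from my setup):

// Minimal sketch of the setup described above (not the full app code).
let externalScreen = UIScreen.screens[externalDisplayIdx]
guard let link = externalScreen.displayLink(withTarget: self,
                                            selector: #selector(onDisplayLinkUpdate(displayLink:))) else { return }
// Requesting 60 FPS explicitly; lowering this did not change the drift.
link.preferredFrameRateRange = CAFrameRateRange(minimum: 60, maximum: 60, preferred: 60)
link.add(to: .main, forMode: .common)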
I'm experiencing a specific issue: on any of the macOS 26 Tahoe betas, with Low Power Mode enabled and Vsync in fullscreen, my application's framerate gets limited to a hard 30 fps. I have not experienced this on any older OS. For example, Low Power Mode on 13.6 Ventura with Vsync in fullscreen lets my application run at a full 60 fps without issues.
Is this a bug or a change in behavior of Low Power Mode on Tahoe?
My application is 3D, runs at 60 fps, and is sensitive to tearing, so I need Vsync, and it is mostly used in fullscreen. Low Power Mode is the default on many Macs, so the default experience on Tahoe currently is a halved 30 fps. There also seem to be inconsistencies in which machines this happens on, but older OSes are always fine.
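For what it's worth, this is how I'm checking that the limit isn't something I set myself (a small diagnostic sketch; I believe both APIs have been available since macOS 12):

import AppKit

// Diagnostic sketch: log Low Power Mode and the refresh rate the system reports,
// to see whether the 30 fps cap is visible at the API level or only at present time.
let lowPower = ProcessInfo.processInfo.isLowPowerModeEnabled
let maxFPS = NSScreen.main?.maximumFramesPerSecond ?? -1
print("Low Power Mode: \(lowPower), main screen maximumFramesPerSecond: \(maxFPS)")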
Hello,
I'm using:
Unity 6 LTS
Unity Apple GameKit + Core plugins
Turn-based matchmaking interface w/ 2 players max
App Store Connect API for rule-based matchmaking
I have already:
enabled Game Center in App Store Connect (I think)
authenticated players and matched via friend request
I am stuck on:
using queues to match players automatically
I'm working on a rule-based matchmaking system which aims to place two players against each other in a GKTurnBasedMatch. I have a simple Unity project that correctly authenticates a user and proceeds to send a matchmaking request. The matchmaking script uses the Unity plugin's GKTurnBasedMatchmakerViewController.Request(...) function with a GKMatchRequest.Init() request whose QueueName is set to the App Store Connect API queue I created.
The queue I created is also linked to a ruleset with a very basic rule that checks whether the properties contain a key called 'preference' holding a string value for which side the player wants to play in this match. If the two players' preferences differ during matchmaking, the match is made and both players should join it; each player gets to play the side they chose. My rule expression just checks that the preferences are not equal:
requests[0].properties.faction_preference != requests[1].properties.faction_preference
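For reference, this is roughly the native GameKit request I would expect the plugin call to map to (a sketch only; I'm assuming the iOS 17.2+ rule-based matchmaking fields queueName and properties on GKMatchRequest, and "MyFactionQueue" and "red" are placeholders for my queue and a player's choice):

import GameKit

// Sketch of the native request the Unity plugin call is expected to wrap.
// queueName/properties are assumed to be the iOS 17.2+ rule-based matchmaking fields.
let request = GKMatchRequest()
request.minPlayers = 2
request.maxPlayers = 2
request.queueName = "MyFactionQueue"                   // placeholder queue name
request.properties = ["faction_preference": "red"]     // matched against the ruleset expression above

let viewController = GKTurnBasedMatchmakerViewController(matchRequest: request)
// viewController.turnBasedMatchmakerDelegate = self   // presented from a UIViewController in the real app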
When I launch the game with two physical iPads and begin the matchmaking request, each player is immediately presented with two options:
Invite a friend, or
Start game
The Problem: Inviting a friend works to get two players into a game, but the queue seems not to matter, and clicking Start Game just puts the current player into their own match (no one else joins).
The Question: How do I get queue-based matchmaking to work in Unity for a turn-based match with only two players, where each player selects the enemy side they want to play, enforced by a rule that compares play-side preferences?
Resources I've used:
Apple Unity GameKit Plugin: https://github.com/apple/unityplugins
Matchmaking: https://developer.apple.com/documentation/gamekit/matchmaking-rules
Multiplayer rulesets: https://developer.apple.com/documentation/gamekit/finding-players-using-matchmaking-rules
Topic: Graphics & Games · SubTopic: GameKit · Tags: GameKit, Graphics and Games, App Store Connect, Apple Unity Plug-Ins
I'm using the Apple RoomPlan SDK to generate a .usdz file, which works fine and gives me a 3D scan of my room.
But when I try to use Model I/O's MDLAsset to convert that output into an .obj file, it comes out as a completely flat model shaped like a rectangle. Here is my Swift code:
let destinationURL = destinationFolderURL.appending(path: "Room.usdz")
do {
    try FileManager.default.createDirectory(at: destinationFolderURL, withIntermediateDirectories: true)
    // Export the RoomPlan capture to USDZ (this part works and produces a correct 3D scan).
    try finalResults?.export(to: destinationURL, exportOptions: .model)
    let newUsdz = destinationURL
    // Re-import the USDZ with Model I/O and export it as OBJ (this is where the geometry flattens).
    let asset = MDLAsset(url: newUsdz)
    let obj = destinationFolderURL.appending(path: "Room.obj")
    try asset.export(to: obj)
} catch {
    print("Export failed: \(error)")
}
Not sure what's wrong here. According to MDLAsset documentation, .obj is a supported format and exporting from .usdz to the other formats like .stl and .ply works fine and retains the original 3D shape.
Some things I've tried:
changing "exportOptions" to parametric, mesh, or model.
simply changing the file extension of "destinationURL" (throws error)
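One diagnostic that might help narrow this down is inspecting what MDLAsset actually parses from the USDZ before exporting (a small sketch using the same newUsdz URL as above; I'd check whether the meshes are already flat at load time):

import ModelIO

// Diagnostic sketch: list the meshes Model I/O sees in the USDZ before exporting.
let asset = MDLAsset(url: newUsdz)
print("Top-level objects:", asset.count)
for mesh in asset.childObjects(of: MDLMesh.self).compactMap({ $0 as? MDLMesh }) {
    print("-", mesh.name, "vertices:", mesh.vertexCount, "submeshes:", mesh.submeshes?.count ?? 0)
}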
Hi folks,
I'm working on a tile-based deferred renderer, similar to this Apple example. I'm wondering how to add MSAA to the renderer, and I see two choices:
Copy the single-sampled texture at the end of the GBuffer/Lighting render pass to a multi-sampled texture and resolve from that
Make all render targets (GBuffer) multi-sampled and deal with sampling/resolving all intermediate textures as well as the final, combined texture.
Which is the proper approach, and are there any examples of how to implement it?
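If it helps frame the question, this is roughly how I picture the final pass under the second choice (a sketch, assuming an Apple GPU where the multisampled attachment can live in memoryless tile memory; device, width, height, and drawable are placeholders):

import Metal

// Sketch of the final combined pass for the second choice: render into a memoryless
// multisampled color attachment and resolve directly into the drawable.
let msaaDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                        width: width, height: height,
                                                        mipmapped: false)
msaaDesc.textureType = .type2DMultisample
msaaDesc.sampleCount = 4
msaaDesc.usage = .renderTarget
msaaDesc.storageMode = .memoryless                      // stays in tile memory on Apple GPUs
let msaaColor = device.makeTexture(descriptor: msaaDesc)!

let passDesc = MTLRenderPassDescriptor()
passDesc.colorAttachments[0].texture = msaaColor
passDesc.colorAttachments[0].resolveTexture = drawable.texture
passDesc.colorAttachments[0].loadAction = .clear
passDesc.colorAttachments[0].storeAction = .multisampleResolve   // resolve on store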
Thanks!
Xcode: 16.0
MacBook Pro: M1 Pro, Sonoma 14.5
Device: iPhone16, 18.0.1
Game: Unreal Engine 5.4.2 based
100% crash after GPU capture, even when it only renders a login UI.
I have developed an application using Kotlin and Swift, which has been installed and run on an iPhone. It can also be installed on an iPad. Do I need to go through a testing process to publish it on the App Store? Also, as a developer from China, are there convenient payment channels for developers?
I would like to implement zoom functionality in my SceneKit game: when the user performs the pinch gesture on a point on the screen, the scene zooms in to make that point larger.
Until now I have simply changed SCNCamera.focalLength, but that just zooms in to the center of what is currently visible on screen. Is it possible to implement the zoom described above, perhaps by interactively rotating the camera towards the pinched point at the same time? Is there a formula for this? I would like to avoid suddenly rotating the camera to face the pinched point when the pinch gesture begins and then zooming in while the pinch is in progress.
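In case it clarifies what I'm after, this is the kind of per-pinch update I have in mind (a rough sketch; scnView, cameraNode, and the blend factor are placeholders, and the "turn towards the pinched point by the right amount" step is exactly the formula I'm missing):

// Rough sketch of the intended interaction: zoom via focalLength while gradually
// turning the camera towards the world point under the pinch.
@objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
    let location = gesture.location(in: scnView)
    // World-space point under the pinch; depth 0.5 is an arbitrary choice for this sketch.
    let target = scnView.unprojectPoint(SCNVector3(x: Float(location.x), y: Float(location.y), z: 0.5))

    // Zoom: scale the focal length with the pinch.
    cameraNode.camera?.focalLength *= gesture.scale
    gesture.scale = 1

    // Turn only part of the way towards the pinched point on each update.
    let blend: Float = 0.1
    let current = cameraNode.simdOrientation
    cameraNode.simdLook(at: simd_float3(target.x, target.y, target.z))
    let desired = cameraNode.simdOrientation
    cameraNode.simdOrientation = simd_slerp(current, desired, blend)
}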
Hey All,
I would appreciate suggestions on how to resolve this problem, as this is my first time publishing an app on the iOS store.
So, I made a party game which simulates a user urinating into ****** and you have to shoot some targets in ****** by controlling the motion sensor.
During testing on TestFlight no problem was shown, but during the official rollout, App Store review says to remove the content, which is basically the whole gameplay.
I submitted a reply clarifying everything, but it seems they have an automated response which was the same as the previous one.
On the other hand, there's a similar game on the App Store with somewhat the same idea.
Can someone please guide me on how to approach this problem? My game is published on Android without any problem, but not here.
Topic: App Store Distribution & Marketing · SubTopic: App Review · Tags: Games, Graphics and Games, App Store
I'm facing too many lags in PUBG Mobile; when I'm playing, it's not running properly.
Hello, I’m the developer of the “StepSquad” app. Our app uses the Game Center achievement feature, but we’ve been encountering a problem: the “Global Players” metric always shows 0%, even though there are friends who have already earned these achievements. Initially, I thought it might be because the app was newly launched. However, it’s now been over two months since release, and it’s still showing 0%. If anyone has any insight into this issue, please leave a comment.
For many years, I've noticed that although in native code I can handle continuous and simultaneous Apple Pencil and touch inputs using UIKit, Safari and WKWebView's PointerEvents only seem to allow one input type at a time, i.e. putting the Apple Pencil down blocks touch input until it is lifted, and touch input blocks Apple Pencil input. It's as though requiresExclusiveTouchType has been set in the underlying WebKit implementation. There's decades of research (e.g. https://dl.acm.org/doi/10.1145/1866029.1866036 ) and several existing native applications in production showing that multimodal inputs open up many unique and useful applications and interactions. Even simple "hold object with finger" + "draw with stylus" controls are the norm. I recently built a native application using multimodal simultaneous inputs, but this is impossible to port to the web due to the unexpected behavior of PointerEvents (and touch events, and mouse events; every variant exhibits the same behavior). I've researched and attempted to apply every possible flag, change, and CSS rule to get this working, but I think the behind-the-scenes implementation is what's blocking the simultaneous touch types.
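For contrast, this is the kind of native handling I'm referring to (a minimal UIKit sketch; the view simply distinguishes touch types within the same touch stream):

import UIKit

// Minimal sketch of the native behavior: one view receives pencil and finger
// touches in the same stream and can handle both concurrently.
class MultimodalCanvas: UIView {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        for touch in touches {
            switch touch.type {
            case .pencil:
                break   // start or continue a stroke with the Apple Pencil
            case .direct:
                break   // e.g. hold or drag an object with a finger at the same time
            default:
                break
            }
        }
    }
}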
This is unexpected and undesired behavior because it's inconsistent with the native behavior. If it's unintended, it's a big priority to fix for creating better user experiences on the iPad. If it's intended, I do not believe that's reasonable (even if it might be more complex and used for more advanced applications). Please expose a way to support simultaneous touch types in iPadOS/iOS in both Safari and WKWebView.
At minimum, may we have a discussion on how to support the desired behavior? The simplest solution I can think of is to provide a webkit-platform-specific boolean in Safari and WKWebView called requiresExclusiveTouchType, which is set to False by default to keep the current behavior, and settable to True to get the more flexible behavior I'm expecting.
Hey! I'm facing an issue with Equipment collision when adding and moving TabletopKit equipment with different pose rotations.
Let me share a very simple TabletopKit setup as an example:
Table
struct Table: Tabletop {
    var shape: TabletopShape = .rectangular(width: 1, height: 1, thickness: 0.01)
    var id: EquipmentIdentifier = .tableID
}
Board
struct Board: Equipment {
    let id: EquipmentIdentifier = .boardID
    var initialState: BaseEquipmentState {
        .init(
            parentID: .tableID,
            seatControl: .restricted([]),
            pose: .init(position: .init(), rotation: .zero),
            boundingBox: .init(center: .zero, size: .init(1.0, 0, 1.0))
        )
    }
}
Equipment
struct Object: EntityEquipment {
    var id: ID
    var size: SIMD2<Float>
    var position: SIMD2<Double>
    var rotation: Float
    var entity: Entity
    var initialState: BaseEquipmentState

    init(id: Int, size: SIMD2<Float>, position: SIMD2<Double>, rotation: Float) {
        self.id = EquipmentIdentifier(id)
        self.size = size
        self.position = position
        self.rotation = rotation
        self.entity = objectEntity
        self.initialState = .init(
            parentID: .boardID,
            seatControl: .any,
            pose: .init(
                position: .init(x: position.x, z: position.y),
                rotation: .degrees(Double(rotation))
            ),
            entity: entity
        )
    }
}
Setup
class GameSetup {
    var setup: TableSetup

    init(root: Entity) {
        setup = TableSetup(tabletop: Table())
        setup.add(equipment: Board())
        setup.add(seat: PlayerSeat())

        let object1 = Object(
            id: 2,
            size: .init(x: 0.1, y: 0.1),
            position: .init(x: 0.1, y: -0.1),
            rotation: 0
        )
        let object2 = Object(
            id: 3,
            size: .init(x: 0.2, y: 0.1),
            position: .init(x: -0.1, y: -0.1),
            rotation: 90
        )
        setup.add(equipment: object1)
        setup.add(equipment: object2)
    }
}
The issue
When I add two equipment entities with different rotation poses, the collisions between them behave oddly. If one is at 90º and the other at 0º, for example, the former will intersect with the latter as if its bounding box was not rotated, as you can see below:
But if both pieces of equipment have the same rotation (e.g. both 0º or both 90º), there's no collision issue at all, which seems to indicate their bounding boxes were correctly rotated:
I'd really appreciate some help understanding if this is a bug or if I'm just missing something.
Thanks in advance!
Topic: Graphics & Games · SubTopic: TabletopKit · Tags: Graphics and Games, RealityKit, visionOS, TabletopKit
Crash dump:
Crashed Thread: 0 tid_103 Dispatch queue: com.apple.main-thread
Exception Type: EXC_BAD_ACCESS (SIGILL)
Exception Codes: KERN_PROTECTION_FAILURE at 0x000000016d3bfea0
Exception Codes: 0x0000000000000002, 0x000000016d3bfea0
Termination Reason: Namespace SIGNAL, Code 4 Illegal instruction: 4
Terminating Process: Unity [7873]
VM Region Info: 0x16d3bfea0 is in 0x169bbc000-0x16d3c0000; bytes after start: 58736288 bytes before end: 351
REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL
mapped file 169b00000-169ba8000 [ 672K] rw-/rwx SM=PRV Object_id=4d22156e
GAP OF 0x14000 BYTES
---> STACK GUARD 169bbc000-16d3c0000 [ 56.0M] ---/rwx SM=NUL stack guard for thread 0
Stack 16d3c0000-16dbbc000 [ 8176K] rw-/rwx SM=SHM thread 0
Thread 0 Crashed:: tid_103 Dispatch queue: com.apple.main-thread
0 libsystem_platform.dylib 0x1932ee7ac _platform_memset + 108
1 libmonobdwgc-2.0.dylib 0x33977abdc GC_clear_stack_inner + 60
2 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
3 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
4 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
5 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
6 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
7 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
8 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
9 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
10 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
11 libmonobdwgc-2.0.dylib 0x33977abf8 GC_clear_stack_inner + 88
12 libmonobdwgc-2.0.dylib 0x33976b518 GC_clear_stack + 76
13 libmonobdwgc-2.0.dylib 0x33973c074 mono_gc_alloc_obj + 112
14 libmonobdwgc-2.0.dylib 0x3396e0db4 mono_object_new_specific_checked + 72
15 libmonobdwgc-2.0.dylib 0x3396e116c ves_icall_object_new_specific + 28
Hi,
I've been working on a game for the past few years, first using Unreal Engine 4 and now Unreal Engine 5.4.4.
I'm experiencing an unusual crash on startup on some devices. The crash is so fast that sometimes I'm barely able to see the launch screen, because the app closes itself before that.
I got an EXC_CRASH (SIGABRT), so I suspect a null pointer reference, but I can't quite wrap my head around the cause. I think something is messed up in the packaging of the app, but this is where I'm blocked; I'm not that accustomed to Apple devices.
If someone has some advice to give, please, any help will be very valuable. Many thanks.
Log:
Crash log on iPad
Hello everyone,
I created a game that is a doctor operation simulator. In this game, users perform different operations on different body parts like the face, elbow, foot, etc. The game is live on the Play Store, and here is the link:
https://play.google.com/store/apps/details?id=com.nexthope.doctor.hospital.surgery.games&hl=en_US&gl=US
But it was rejected by the App Store review team. Similar games are already on the App Store. Please help me with this issue.
Hi,
It seems MSL is missing support for a clock() shader instruction, which is available in other graphics APIs like Vulkan or OpenGL.
It is useful for counting the cost, in clock cycles, of some code inside a shader with much finer granularity than launching a micro kernel with the same instructions and measuring the cycle cost from the CPU.
It would also be useful for MoltenVK to support that extension.
Thanks.
Hi,
When analyzing our game using Instruments, I've always been confused about the two items "Drawable Present" and "Drawable Presented" in the GPU column. The timing of Drawable Present seems to be when the CPU side calls commandBuffer:present, rather than when the actual encoding is completed on the GPU. Also, what does Drawable Presented specifically mean? In our case, when a CPU stall occurs, it appears that the vsync interval changes in the next frame, and a surface that has already been calculated is not displayed. Why is this happening?
I am trying to build some projects; please check out my project.
In my Reality Composer Pro workflow for Vision Pro development, I’m using xcrun realitytool image to pre-compress textures into .ktx format, typically using ASTC block compression. These textures are used for cubemaps and environment assets.
I’ve noticed that regardless of the image content—whether it’s a highly detailed photo or a completely black image—once compressed with the same ASTC block size (e.g., ASTC_8x8), the resulting .ktx file size is nearly identical. There appears to be no content-aware logic that adapts the compression ratio to the actual texture complexity.
In contrast, Unreal Engine behaves differently: even when all cubemap faces are imported at the same resolution as DDS textures, the engine performs content-aware compression during packaging:
Low-complexity images are compressed more aggressively
The final packaged file size varies based on content complexity
Since Reality Composer Pro requires textures to be pre-compressed as .ktx, there’s no opportunity for runtime optimization or per-image compression adjustment.
Just wondering: is there any recommended way to implement content-aware compression for .ktx textures in Reality Composer Pro?
Or any best practices to optimize .ktx sizes based on image complexity?
Thanks!