Hi, I have a full-screen iOS game. In the Simulator, preferredScreenEdgesDeferringSystemGestures works well to prevent a drag down from the top of the full-screen play area from immediately opening Notification Center. (You get a grabber tab and have to swipe down on it a second time.) But when testing on my device, unlike in the Simulator, Notification Center opens immediately when I swipe down from the top. Any suggestions?
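For reference, a minimal sketch of the setup that should defer the top edge gesture (the class name is illustrative):

import UIKit

// A minimal sketch: defer the top-edge system gesture in the game's full-screen
// view controller, and tell UIKit to re-read the preference if it changes.
class GameViewController: UIViewController {
    override var preferredScreenEdgesDeferringSystemGestures: UIRectEdge {
        [.top] // first swipe shows the grabber; the second opens Notification Center
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Needed whenever the property above can change at runtime.
        setNeedsUpdateOfScreenEdgesDeferringSystemGestures()
    }
}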
I have two apps released -- ReefScan and ReefBuild -- that are based on the WWDC21 sample photogrammetry apps for iOS and macOS. Those run fine without LiDAR and are used mostly for underwater models, where LiDAR does not work at all. It now appears that the updated photogrammetry session requires LiDAR data, and building my app with the current Xcode results in a non-working app. Has the "old" version of the photogrammetry session been broken by this update? It worked very well previously, so I would hate to see this regression to needing LiDAR. Most of my users do not have that.
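For context, the WWDC21-style image-folder workflow the apps rely on, as a minimal sketch with illustrative paths (no LiDAR input involved):

import RealityKit

// A minimal sketch of the WWDC21-style workflow: build a model from a folder
// of photos. Paths and detail level are illustrative.
let inputFolder = URL(fileURLWithPath: "/path/to/images", isDirectory: true)
let session = try PhotogrammetrySession(
    input: inputFolder,
    configuration: PhotogrammetrySession.Configuration())
try session.process(requests: [
    .modelFile(url: URL(fileURLWithPath: "/tmp/model.usdz"), detail: .medium)
])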
Hi,
I wanted to test whether it's possible to use Mesa3D OGLon12 + D3DMetal 2b3 to get GL > 4.1 support in Windows apps via D3DMetal.
Using the simple wglgears.c app (similar to glxgears) and running:
GALLIUM_DRIVER=d3d12 wine64 wglgears64 -info
with opengl32.dll overridden using the contents of:
https://github.com/pal1000/mesa-dist-win/releases/download/24.3.0-rc1/mesa3d-24.3.0-rc1-release-msvc.7z
I get:
[D3DMetal:LOG:5E53] Unsupported API: CreateCommandQueue1
caused by:
https://gitlab.freedesktop.org/mesa/mesa/-/commit/c022c9603d500b59ff5e6f93c8a214d1785ab20a
API:
https://learn.microsoft.com/en-us/windows/win32/api/d3d12/nf-d3d12-id3d12device9-createcommandqueue1
Note that the setup itself is correct, since using:
GALLIUM_DRIVER=llvmpipe wine64 wglgears64 -info
I get:
GL_RENDERER = llvmpipe (LLVM 19.1.3, 128 bits)
GL_VERSION = 4.5 (Compatibility Profile) Mesa 24.3.0-rc1 (git-85ba713d76)
GL_VENDOR = Mesa
GL_EXTENSIONS = GL_ARB_multisample GL_EXT_abgr GL_EXT_bgra GL_EXT_blend_color GL_EXT_blend_minmax GL_EXT_blend_subtract GL_EXT_texture.. etc..
I want to turn off my ray tracing conditionally. There's is_null_acceleration_structure, but when I don't bind an acceleration structure (or pass nil to setFragmentAccelerationStructure), I get the following API validation error:
-[MTLDebugRenderCommandEncoder validateCommonDrawErrors:]:5782: failed assertion `Draw Errors Validation
Fragment Function(vol_deferred_lighting): missing instanceAccelerationStructure binding at index 6 for accelerationStructure[0].
I can turn off API validation and it works, but it seems like I should be able to use nil for the acceleration structure without triggering a validation error. Seems like a bug, right?
I suppose I can work around this by creating a separate pipeline with ray tracing disabled via a function constant instead of using is_null_acceleration_structure.
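For reference, a minimal sketch of that workaround on the host side (the constant index is an assumption, and library stands in for your MTLLibrary):

import Metal

// A minimal sketch of the function-constant workaround. In the shader, declare
// `constant bool rayTracingEnabled [[function_constant(0)]]` and tag the
// acceleration structure argument with `[[function_constant(rayTracingEnabled)]]`
// so it disappears entirely from the no-ray-tracing variant.
let constants = MTLFunctionConstantValues()
var rayTracingEnabled = false
constants.setConstantValue(&rayTracingEnabled, type: .bool, index: 0)

let fragmentFunction = try library.makeFunction(name: "vol_deferred_lighting",
                                                constantValues: constants)
// Build a second MTLRenderPipelineState from this variant and select it
// whenever ray tracing is off; no acceleration structure binding is required.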
(Can we get a ray-tracing tag for questions?)
I'm building a proof-of-concept application leveraging the PlaneDetectionProvider to generate UI and interactive elements on a horizontal plane the user is looking at. I'm able to create a cube at the centroid of the plane and change its location via position. However, I can't seem to rotate the cube programmatically, and from this forum post in September I'm not sure if the modelEntity.move functionality is still bugged or the documentation is not up to date.
if let planeCentroid = planeEntity.centroid {
    // Create a cube at the centroid
    let cubeMesh = MeshResource.generateBox(size: 0.1) // Create a cube with side length of 0.1 meters
    let cubeMaterial = SimpleMaterial(color: .blue, isMetallic: false)
    let cubeEntity = ModelEntity(mesh: cubeMesh, materials: [cubeMaterial])
    cubeEntity.position = planeCentroid
    cubeEntity.position.y += 0.3048
    planeEntity.addChild(cubeEntity)

    let rotationY = simd_quatf(angle: Float(45.0 * .pi / 180.0), axis: SIMD3(x: 0, y: 1, z: 0))
    let cubeTransform = Transform(rotation: rotationY)
    cubeEntity.move(to: cubeTransform, relativeTo: planeEntity, duration: 5, timingFunction: .linear)
}
Ideally, I'd like the cube to start/stop rotating when the user pinches on the plane mesh, but I'd be happy just to see it rotate!
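As a hedged alternative that may sidestep the move(to:) issue: drive the spin with a repeating by-animation instead (angle and duration below are arbitrary):

// A hedged sketch: rotate continuously with a repeating FromToByAnimation
// rather than a one-shot move(to:).
let spin = FromToByAnimation<Transform>(
    by: Transform(rotation: simd_quatf(angle: .pi, axis: SIMD3(x: 0, y: 1, z: 0))),
    duration: 2.5,
    bindTarget: .transform,
    repeatMode: .repeat
)
if let resource = try? AnimationResource.generate(with: spin) {
    cubeEntity.playAnimation(resource)
}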
In my SceneKit game I'm able to connect two players with GKMatchmakerViewController. Now I want to support the scenario where one of them disconnects and wants to reconnect. I tried to do this with this code:
nonisolated public func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
    Task { @MainActor in
        switch state {
        case .connected:
            break
        case .disconnected, .unknown:
            let matchRequest = GKMatchRequest()
            matchRequest.recipients = [player]
            do {
                try await GKMatchmaker.shared().addPlayers(to: match, matchRequest: matchRequest)
            } catch {
            }
        @unknown default:
            break
        }
    }
}
nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    guard let viewController = GKMatchmakerViewController(invite: invite) else {
        return
    }
    viewController.matchmakerDelegate = self
    present(viewController)
}
But after presenting the view controller with GKMatchmakerViewController(invite:), nothing else happens. I would expect matchmakerViewController(_:didFind:) to be called; otherwise, how would I get an instance of GKMatch?
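One thing worth trying, as an assumption rather than a confirmed fix: ask GKMatchmaker for the match directly from the accepted invite instead of waiting for the view controller callback:

// A hedged sketch: skip the matchmaker UI and request the GKMatch
// directly from the invite.
nonisolated public func player(_ player: GKPlayer, didAccept invite: GKInvite) {
    Task { @MainActor in
        do {
            let match = try await GKMatchmaker.shared().match(for: invite)
            match.delegate = self
            // Resume the game with the reconnected match here.
        } catch {
            print("match(for:) failed: \(error)")
        }
    }
}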
Here is the code I use to reproduce the issue, and below it, the reproduction steps.
Code
1. Run the attached project on an iPad and a Mac simultaneously.
2. On both devices, tap the ship to connect to Game Center.
3. Create an automatched match by tapping the rightmost icon on both devices.
4. When the two devices are matched, on the iPad close the dialog and tap the ship to disconnect from Game Center.
5. Wait some time until the Mac detects the disconnect and automatically sends an invitation to join again.
6. When the notification arrives on the iPad, tap it, then tap the ship to connect to Game Center again. The iPad receives the player(_:didAccept:) call, but nothing else, so there's no way to get a GKMatch instance again.
Hi,
I have a test app with a single Game Center achievement, and I am running it on my iPad but am unable to list that achievement with
GameCenterManager.shared.loadAchievements
The app is still at version 1.0, in "Prepare for Submission" status (it has not been submitted for review). The Game Center entitlement is added for the app, and the achievement is added to the Game Center section of the app. However, it is marked as Not Live.
I am using a sandbox account to log in to Game Center on the iPad, and I can't fetch this achievement. Is it because it is Not Live? How do I test my Game Center achievement on the device without releasing it yet?
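For what it's worth, a minimal sketch using GameKit directly (GameCenterManager is the poster's own wrapper, so this assumes it calls through to GKAchievement):

import GameKit

// A minimal sketch. GKAchievement.loadAchievements() only returns achievements
// the local player has already reported progress on, so a fresh achievement
// comes back as an empty list. The definitions configured in App Store Connect
// are listed by GKAchievementDescription instead.
GKAchievementDescription.loadAchievementDescriptions { descriptions, error in
    if let error {
        print("Failed to load descriptions: \(error)")
        return
    }
    descriptions?.forEach { print($0.identifier) }
}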
GKGameCenterViewController won't turn off.
With Core and GameKit from GitHub apple/unityplugins,
I succeeded in logging in and displaying GKGameCenterViewController.
Other leaderboards and everything else work fine, but when I press X on the Game Center view to return to the game, nothing happens.
I tried debugging by printing logs here and there in the plugin, but I didn't get any results.
When I press X, I get no logs or responses at all. It is like a button with no listener attached; no, more like a plain image.
Based on community posts saying this worked fine before, it seems the recent Game Center update was not applied to the plugin, or something was omitted or changed, causing a mismatch.
I am working on a SceneKit project where I use a CAShapeLayer as the content for SCNMaterial's diffuse.contents to display a progress bar. Here's my initial code:
func setupProgressWithCAShapeLayer() {
    let progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async {
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            progressLayer.strokeEnd = progress // Update progress
        }
    }
}
// MARK: - ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    progressBarPlane = SCNPlane(width: 0.2, height: 0.2)
    setupProgressWithCAShapeLayer()
    let planeNode = SCNNode(geometry: progressBarPlane)
    planeNode.position = SCNVector3(x: 0, y: 0.2, z: 0)
    node.addChildNode(planeNode)
}
This works fine, and the progress bar updates smoothly. However, when I change the code to use a class property (self.progressLayer) instead of a local variable, the rendering starts flickering on the screen:
func setupProgressWithCAShapeLayer() {
    self.progressLayer = createProgressLayer()
    progressBarPlane?.firstMaterial?.diffuse.contents = progressLayer
    DispatchQueue.main.async { [weak self] in
        var progress: CGFloat = 0.0
        Timer.scheduledTimer(withTimeInterval: 0.1, repeats: true) { [weak self] timer in
            progress += 0.01
            if progress > 1.0 {
                progress = 0.0
            }
            self?.progressLayer?.strokeEnd = progress // Update progress
        }
    }
}
After this change, the progressBarPlane in SceneKit starts flickering while being rendered on the screen.
My Question:
Why does switching from a local variable (progressLayer) to a class property (self.progressLayer) cause the flickering issue in SceneKit rendering?
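One hedged guess worth testing (not a confirmed explanation): SceneKit samples the layer on its own render thread, and the property assignment triggers an implicit CALayer animation; disabling implicit actions around the mutation in the timer body may remove the race:

// A hedged sketch: replace the timer body's assignment with an explicit
// transaction so strokeEnd changes without an implicit animation.
CATransaction.begin()
CATransaction.setDisableActions(true)
self?.progressLayer?.strokeEnd = progress
CATransaction.commit()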
I am using SCNTechnique in combination with ARSCNView.
The technique does some minor post-processing. I have written several filter variants for this post-processing, but I'm facing an issue where, with one of the filters/fragment shaders, SCNTechnique discards my output and just presents the plain camera feed on screen instead. This is clearly visible in the Metal pipeline, using the GPU frame debugger.
Let me stress that my setup works for 90% of my filters, but not this one, and I want to know why.
iOS 18.1, iPhone 13 Mini. Xcode 16.1.
Encoders 0 & 1 are injected by the system.
Render encoders 2 & 3 correspond to my SCNTechnique's render passes: one to manipulate pixel data (darken it, in this case) and another to blit it back to the main texture. I know the separate buffer is not strictly necessary for this particular operation, but it shouldn't matter.
Note that the issue occurs in encoder 4 (not mine but ARKit's).
In render encoder 4, scn_postprocess_AR_fragment handles my texture (#0, ending in f980) and another from the camera feed (Texture 2). I know this pass is typically used for grain, because that's what it used to do before I disabled grain on ARSCNView (and the buffer still contains grain parameters).
I have other post-processing filters that work just fine. By what magic is ARKit determining to use Texture 2 instead of my Texture 0?
Sure, I could keep digging into the minute differences between my shaders to find out which line of code affects how some ARKit shader down the line operates, but it's awfully opaque so far.
I'm trying to build my project using Unreal Engine 5.4 for iOS. I use Socket.IO to connect to the backend; the SocketIOClient plugin, version 2.8.0, is used for this.
When trying to connect to the socket, a crash occurs. This only happens on iOS, only in Distribution builds, and only on Unreal 5.4. There are no problems on Unreal 5.2. Callstack:
crashlog.crash
The call stack may be slightly different, but the problem always occurs when allocating or deallocating memory. I suspect this may be a race condition related to some peculiarities of memory handling on iOS. There are also several similar issues in the plugin repository, for example this one.
I tried using other versions of the plugin and other versions of Xcode, and tried building Socket.IO with both C++20 and C++17; nothing helps. Does anyone know what can be done about this?
I plan to create a simple motion graphics software for macOS that animates text, basic shapes, and handles audio. I'll use SwiftUI for the UI.
What are the commonly used technologies for rendering animated graphics? Core Animation is suitable for UI animations, but not for exporting animations to a file or controlling them precisely.
Basic requirements:
Timeline user interface
Animation of text and basic shapes
Viewer in SwiftUI GUI with transport control (play, pause, scrub, …)
Export to video file
Is Metal or Core Graphics typically used directly? I want to keep it as simple as possible.
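A common pattern, as a hedged sketch (codec, resolution, and frame rate are arbitrary): draw each frame with Core Graphics into a CVPixelBuffer and feed the buffers to AVAssetWriter:

import AVFoundation
import CoreGraphics

// A hedged sketch: render each frame into a pixel buffer and append it to an
// AVAssetWriter. The caller's drawFrame closure does the actual text/shape drawing.
func exportVideo(to url: URL, duration: Double,
                 drawFrame: (CGContext, Double) -> Void) throws {
    let width = 1920, height = 1080, fps = 60
    let writer = try AVAssetWriter(outputURL: url, fileType: .mp4)
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height,
    ])
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input,
                                                       sourcePixelBufferAttributes: nil)
    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)

    for frame in 0..<Int(duration * Double(fps)) {
        while !input.isReadyForMoreMediaData {
            Thread.sleep(forTimeInterval: 0.001) // crude backpressure
        }
        var buffer: CVPixelBuffer?
        CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                            kCVPixelFormatType_32BGRA, nil, &buffer)
        guard let buffer else { continue }
        CVPixelBufferLockBaseAddress(buffer, [])
        if let ctx = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                               width: width, height: height, bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                   | CGBitmapInfo.byteOrder32Little.rawValue) {
            drawFrame(ctx, Double(frame) / Double(fps)) // time in seconds
        }
        CVPixelBufferUnlockBaseAddress(buffer, [])
        adaptor.append(buffer, withPresentationTime:
            CMTime(value: CMTimeValue(frame), timescale: CMTimeScale(fps)))
    }
    input.markAsFinished()
    writer.finishWriting { } // completion intentionally empty in this sketch
}

For interactive playback and scrubbing in the SwiftUI viewer, the same drawFrame closure can render into a plain Canvas or CALayer; Metal only becomes necessary if per-frame drawing gets too slow.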
I have an oval UIBezierPath with a moving SKSpriteNode. I stop its motion and record the stopped position. I then restart this motion and want it to restart where it initially stopped.
This works great if motion is never stopped: movement is fine around the entire oval path.
It also works great as long as the stop-restart sequence occurs along the top half of the oval UIBezierPath. However, I have problems along the bottom half of the path: it stops okay, but the restart position is not where it previously stopped.
My method to create this oval UIBezierPath is as follows:
func createTrainPath() {
    trainRect = CGRect(x: tracksPosX - tracksWidth/2,
                       y: tracksPosY - tracksHeight/2,
                       width: tracksWidth,
                       height: tracksHeight)
    // these methods come from @DonMag
    trainPoints = generatePoints(inRect: trainRect,
                                 withNumberOfPoints: nbrPathPoints)
    trainPath = generatePathFromPoints(trainPoints!,
                                       startingAtIDX: savedTrainIndex)
} // createTrainPath
My method to stop this motion is as follows:
func stopFollowTrainPath() {
    guard (myTrain != nil) else { return }
    myTrain.isPaused = true
    savedTrainPosition = myTrain.position
    // also from @DonMag
    savedTrainIndex = closestIndexInPath(
        trainPath,
        toPoint: savedTrainPosition) ?? 0
} // stopFollowTrainPath
Finally, I call this to re-start this motion:
func startFollowTrainPath() {
    var trainAction = SKAction.follow(trainPath.cgPath,
                                      asOffset: false,
                                      orientToPath: true,
                                      speed: thisSpeed)
    trainAction = SKAction.repeatForever(trainAction)
    myTrain.run(trainAction, withKey: runTrainKey)
    myTrain.isPaused = false
} // startFollowTrainPath
Again, it's great if motion is not stopped: movement is fine around the entire oval path.
And again, there is no problem stopping and then restarting along the top half of the oval; the trouble occurs along the bottom half.
Is there something I need to do within GameScene's update method that I am missing? For example, do I need to reconstruct my UIBezierPath every time my node moves between the top half and the bottom half, to account for the fact that the node is traveling in the opposite direction along the bottom?
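Without @DonMag's helpers it's hard to be certain, but one thing to check is whether the regenerated path keeps the original winding when it starts from the saved index. A hypothetical reconstruction of such a helper (not the actual code):

// A hypothetical sketch of generatePathFromPoints(_:startingAtIDX:): rotate the
// point array so the path starts at the saved index while preserving the original
// winding, so SKAction.follow resumes heading the same way the train was going.
func generatePathFromPoints(_ points: [CGPoint], startingAtIDX idx: Int) -> UIBezierPath {
    let reordered = Array(points[idx...]) + Array(points[..<idx])
    let path = UIBezierPath()
    path.move(to: reordered[0])
    for point in reordered.dropFirst() {
        path.addLine(to: point)
    }
    path.close()
    return path
}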
I decided to use a club to kick a ball and let it roll on the turf in RealityKit, but so far I can only make it slide, not roll.
I added collision to the turf (static), the club (kinematic), and the ball (dynamic), and set some parameters: radius and mass.
Using these parameters I calculate linear damping and inertia; in addition, I use the time between frames and the club position to calculate speed. The code looks like this:
let radius: Float = 0.025
let mass: Float = 0.04593 // mass in kg
var inertia = 2/5 * mass * pow(radius, 2)
let currentPosition = entity.position(relativeTo: nil)
let distance = distance(currentPosition, rgfc.lastPosition)
let deltaTime = Float(context.deltaTime)
let speed = distance / deltaTime
let C_d: Float = 0.47 // drag coefficient
let linearDamping = 0.5 * 1.2 * pow(speed, 2) * .pi * pow(radius, 2) * C_d // linear damping (1.2 is the air density)
entity.components[PhysicsBodyComponent.self]?.massProperties.inertia = SIMD3<Float>(inertia, inertia, inertia)
entity.components[PhysicsBodyComponent.self]?.linearDamping = linearDamping
// force
let acceleration = speed / deltaTime
let forceDirection = normalize(currentPosition - rgfc.lastPosition)
let forceMultiplier: Float = 1.0
let appliedForce = forceDirection * mass * acceleration * forceMultiplier
entityCollidedWith.addForce(appliedForce, at: rgfc.hitPosition, relativeTo: nil)
I also tried applyImpulse instead of addForce, like:
let linearImpulse = forceDirection * speed * forceMultiplier * mass
No matter how I adjust the friction (static and dynamic) and restitution, and whether I use addForce or applyImpulse, the ball only slides. How can I solve this problem?
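One idea, offered as an assumption to test rather than a confirmed fix: seed the ball's spin explicitly so it matches rolling without slipping, instead of relying on the contact force alone:

// A hedged sketch: give the ball an angular velocity consistent with rolling
// without slipping. For a ball moving along direction d on ground normal n,
// the rolling condition gives omega = (n x d) * (speed / radius).
let direction = normalize(currentPosition - rgfc.lastPosition)
let spinAxis = normalize(cross(SIMD3<Float>(0, 1, 0), direction))
var motion = entity.components[PhysicsMotionComponent.self] ?? PhysicsMotionComponent()
motion.linearVelocity = direction * speed
motion.angularVelocity = spinAxis * (speed / radius)
entity.components.set(motion)
// Friction on the turf's PhysicsMaterialResource also matters: with near-zero
// friction there is no torque at the contact point and the ball can only slide.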
It’s great that we’ll be able to use Metal custom renderers in passthrough mode on visionOS.
https://developer.apple.com/wwdc24/10092
This is a lot of complicated set-up, however. It's also unclear how occlusion and custom algorithms / ray tracing will work in tandem with scene understanding. May we have a project template and/or sample, preferably with the C API and not just Swift? This would be much appreciated and helpful to everyone who wants this set-up. I'd like to see the whole process.
Thank you for introducing this feature!
I would like to take YCbCr CVPixelBuffers from AVCaptureVideoDataOutput, apply some processing in RGB space, render to an MTKView, and pass to AVAssetWriter for recording.
Right now, I'm doing this all manually: deswing the incoming data if necessary, choose the right matrix to convert to RGB, apply processing, and so on. I also have to convert back to YCbCr before feeding the frames to AVAssetWriter, because encoding performs much better when I do. Is there any efficient, built-in way to achieve the same?
I can't use AVCaptureVideoPreviewLayer, since I need to do some further processing before display. I can't use AVCaptureVideoDataOutput's videoSettings to get automatic BGRA conversion because that would lose bit depth for 10 bit video formats (and isn't available on all formats anyway).
I see these Accelerate functions, but they seemingly don't use the GPU, nor do they support all the formats and bit depths I'd need.
I found reference to some undocumented MTLPixelFormats that seem to do exactly what I want, but I don't want to rely on something like this unless it's explicitly endorsed. This would also incur an RGB/YCbCr conversion on every texture read and write, right?
Is there anything I'm missing here?
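One built-in path that may fit, offered as an assumption to verify (pixelBuffer and outputPixelBuffer are placeholders, and I haven't checked the 10-bit behavior end to end): Core Image ingests YCbCr CVPixelBuffers directly and performs the matrix conversion on the GPU:

import CoreImage

// A hedged sketch: let Core Image do the YCbCr-to-RGB conversion. It reads the
// color matrix from the buffer's attachments and can render back into a YCbCr
// pixel buffer for AVAssetWriter.
let ciContext = CIContext(mtlDevice: MTLCreateSystemDefaultDevice()!)
let image = CIImage(cvPixelBuffer: pixelBuffer)
let processed = image.applyingFilter("CIColorControls",
                                     parameters: [kCIInputSaturationKey: 1.2])
ciContext.render(processed, to: outputPixelBuffer) // e.g. a 420YpCbCr10 buffer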
Dear Apple Support Team,
I recently purchased an iPad Pro 2022 and updated it to iOS 18.2. However, I am experiencing an issue while using Call of Duty Mobile. The Game Mode activates randomly and sometimes does not activate at all. Additionally, when the Game Mode is on, the game crashes unexpectedly, causing an unstable experience. I kindly request that you address this issue in upcoming iOS updates.
Thank you for your attention and support.
Best regards,
[samadBg]
SceneKit, SCNScene, macOS 15.1, Xcode 16.0
SceneView(scene: , options: [autoenablesDefaultLighting, allowsCameraControl])
There are:
.rootNode.cameraNode
.rootNode.camera
.rootNode nested child nodes, each with its own animation
When the object is animated and dragged by the mouse to change the viewpoint, I can't return the view to the previous viewpoint.
I have reinstated a clone of the original cameraNode, restored the positions of all child nodes, and removed and re-activated all animations... in vain.
I have also cloned, removed, and replaced .rootNode.camera, in vain.
The documentation states the camera is "attached" to an SCNNode but does not say how. I make no declaration associating .rootNode.cameraNode with .rootNode.camera, yet if either is absent there is no scene to view.
What am I missing?
Thanks
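A hedged guess: with allowsCameraControl, SceneKit installs its own camera node as the view's pointOfView when the user drags, so restoring your original cameraNode may not change what's on screen. A sketch of saving and restoring the active viewpoint instead, assuming access to the underlying SCNView (SwiftUI's SceneView can also be handed an explicit pointOfView node at init):

// A hedged sketch: allowsCameraControl drags the view's pointOfView, which may
// not be your own cameraNode, so save and restore that node's transform.
var savedTransform: SCNMatrix4?

func saveViewpoint(of view: SCNView) {
    savedTransform = view.pointOfView?.transform
}

func restoreViewpoint(of view: SCNView) {
    guard let savedTransform else { return }
    SCNTransaction.begin()
    SCNTransaction.animationDuration = 0.5 // animate back to the saved viewpoint
    view.pointOfView?.transform = savedTransform
    SCNTransaction.commit()
}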
I want to implement the ability to apply a Lightroom preset (.xmp file) to an image in my app, but I am running into difficulties. How can I configure things like color grading, curves, etc. in Swift?
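One possible building block, as a hedged sketch: a .xmp file is XML, so the preset has to be parsed by hand (e.g. with XMLParser) and each parameter mapped onto your own processing; Core Image covers some of the pieces, such as the tone curve:

import CoreImage

// A hedged sketch: apply a parsed tone curve via CIToneCurve. The five points
// are placeholders; Lightroom's curve would need to be resampled into this form,
// and color grading could map to filters like CIColorControls or
// CIColorCubeWithColorSpace.
func applyToneCurve(to image: CIImage, points: [CGPoint]) -> CIImage {
    image.applyingFilter("CIToneCurve", parameters: [
        "inputPoint0": CIVector(cgPoint: points[0]),
        "inputPoint1": CIVector(cgPoint: points[1]),
        "inputPoint2": CIVector(cgPoint: points[2]),
        "inputPoint3": CIVector(cgPoint: points[3]),
        "inputPoint4": CIVector(cgPoint: points[4]),
    ])
}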
In a macOS project with RealityKit and SwiftUI, adding an OrthographicCameraComponent causes crashes in both Xcode Preview and at runtime.
import SwiftUI
import RealityKit
struct ContentView: View {
    var body: some View {
        RealityView { content in
            let camera = Entity()
            var component = OrthographicCameraComponent()
            component.scale = 5
            camera.position = [0, 0, 5]
            camera.components.set(component)
            content.add(camera)
            content.add(ModelEntity(mesh: .generateSphere(radius: 1)))
        }
    }
}
#Preview {
    ContentView()
}
Has anyone faced this issue or knows a fix?