I am currently working on a game that involves earning achievements, which I am using the Apple Unity Plug-Ins to display. I have found that, when opening the Game Center dashboard, the last achievement earned is occasionally not displayed until the game is closed and reopened. I am using GKAccessPoint.Shared.Trigger to display the Achievements screen, which occasionally seems to open a cached version of the dashboard. It seems to happen consistently when earning multiple achievements within one minute, but this is not always the case. Does anybody have any experience with something like this in the past?
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands using the front camera on iPhone.
I was able to do it with the back camera, but not the front.
Also, if there is any sample code or documentation, I would be super happy.
Waiting for your reply!!
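For reference, a minimal sketch of one possible approach, assuming an AVCaptureSession is already configured with the front camera and delivering sample buffers: run Vision's body-pose request on each frame and compare wrist and shoulder positions (the joint choice and confidence threshold below are illustrative).

import Vision
import AVFoundation

// Called from captureOutput(_:didOutput:from:) of a session whose input is
// AVCaptureDevice.default(.builtInWideAngleCamera, for: .video, position: .front).
func handsAreRaised(in sampleBuffer: CMSampleBuffer) -> Bool {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cmSampleBuffer: sampleBuffer, orientation: .up, options: [:])
    try? handler.perform([request])

    guard let body = request.results?.first,
          let leftWrist = try? body.recognizedPoint(.leftWrist),
          let rightWrist = try? body.recognizedPoint(.rightWrist),
          let leftShoulder = try? body.recognizedPoint(.leftShoulder),
          let rightShoulder = try? body.recognizedPoint(.rightShoulder),
          leftWrist.confidence > 0.3, rightWrist.confidence > 0.3
    else { return false }

    // Vision points are normalized with the origin at the lower left,
    // so "above" means a larger y value.
    return leftWrist.location.y > leftShoulder.location.y
        && rightWrist.location.y > rightShoulder.location.y
}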
Hey, I wanted to make an app that tracks changes in the room and the room lighting, and I was wondering if it's possible to use VirtualEnvironmentProbeComponent to obtain the EnvironmentResource image and store it?
If so, are there any examples of a similar operation I could use?
Thank you!
I have a visionOS app that I'm adding iOS support to, and I'd like to keep using RealityView.
I know there are the following modifiers to add some navigation:
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I add more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code on how to achieve that?
Thanks,
Guillermo
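For reference, one possible workaround if combining several realityViewCameraControls values isn't supported: take over the camera with content.camera = .virtual and drive a PerspectiveCamera from your own SwiftUI gestures. This is only a rough sketch; the gesture mappings and constants are illustrative, and a true two-finger pan would still need a bridged UIKit gesture recognizer.

import SwiftUI
import RealityKit

struct CustomCameraView: View {
    // Gesture-driven camera state (values are illustrative).
    @State private var orbitAngle: Float = 0        // radians, from a one-finger drag
    @State private var cameraDistance: Float = 2.0  // metres, from a pinch
    @State private var cameraRig = Entity()

    var body: some View {
        RealityView { content in
            content.camera = .virtual               // use our own camera instead of AR tracking

            let camera = PerspectiveCamera()
            camera.position = [0, 0, cameraDistance]
            cameraRig.addChild(camera)
            content.add(cameraRig)

            // ... add scene content here ...
        } update: { _ in
            // Re-apply the gesture state whenever SwiftUI state changes.
            cameraRig.transform.rotation = simd_quatf(angle: orbitAngle, axis: [0, 1, 0])
            cameraRig.children.first?.position = [0, 0, cameraDistance]
        }
        .gesture(DragGesture().onChanged { value in
            orbitAngle = Float(value.translation.width) * 0.01
        })
        .simultaneousGesture(MagnificationGesture().onChanged { scale in
            cameraDistance = min(max(0.5, 2.0 / Float(scale)), 10.0)
        })
    }
}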
I used Xcode's GPU capture to profile the bandwidth of my game's render pipeline. I found that the depth buffer and stencil buffer share the same buffer, whose format is Depth32Float_Stencil8.
But why, in a single pass of the pipeline, was this buffer loaded twice, and the Load Attachment Size in Encoder Statistics doubled?
Is there a bug in Xcode's GPU capture, or did the pass really load the buffer twice?
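For reference, a hedged guess: with a combined Depth32Float_Stencil8 format, the same texture is attached to both the depth and the stencil attachment of the render pass, and GPU capture may account for each attachment separately. A minimal sketch of making the load/store intent explicit (the texture and function names are assumptions):

import Metal

func makeRenderPass(depthStencilTexture: MTLTexture) -> MTLRenderPassDescriptor {
    let pass = MTLRenderPassDescriptor()

    // The same .depth32Float_stencil8 texture backs both attachments.
    pass.depthAttachment.texture = depthStencilTexture
    pass.depthAttachment.loadAction = .clear       // avoid loading previous contents
    pass.depthAttachment.storeAction = .dontCare   // discard if not sampled later

    pass.stencilAttachment.texture = depthStencilTexture
    pass.stencilAttachment.loadAction = .clear
    pass.stencilAttachment.storeAction = .dontCare
    return pass
}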
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in Reality Composer Pro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
Our app uses Metal for image processing. We have found that if our app (and its possibly intensive image processing) is started quickly after the user logs in, then calls to Metal may hang for a good while.
Example: it can take 1-2 minutes for something that usually takes 3-5 seconds! Metal threads are just hanging in a memmove...
In Activity Monitor we see that a lot is happening right after log-in. But why Metal calls block for so long is unknown to us...
The workaround is to wait a minute before we start our app and begin intensive image processing using Metal. But it's hard to explain this workaround to end users...
It doesn't happen on all computers but is fairly easy to reproduce on some.
We are using macOS 15.3.1, M1/M3 Max.
Any good ideas for how to proceed with this problem and possibly reach out to Apple engineers?
Thanks! :)
Hey everyone,
I’m trying to run Kingdom Come: Deliverance 2 using the Game Porting Toolkit, but I’m encountering a black screen when launching the game. From what I know about the game’s requirements, it might be using Shader Model 6.5, which supports advanced features like DirectX Raytracing (DXR) Tier 1.1. This leads me to suspect that the issue could be related to missing support for DirectX 12.1 Features or Shader Model 6.5 in GPTK.
Does anyone know if these features are currently supported by GPTK? If not, are there any plans to implement them in future updates? Alternatively, is there any workaround for games that rely on Shader Model 6.5 and ray tracing?
Thanks a lot for your help!
Hi
Looking at the documentation for screenSpaceAmbientOcclusionIntensity, I noticed that it says this is supported on visionOS 1.0+: https://developer.apple.com/documentation/scenekit/scncamera/screenspaceambientocclusionintensity
Could someone enlighten me as to how that would work? As far as I know, we don't use an SCNCamera on visionOS. So, what's the idea here? Can we activate SSAO on visionOS?
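For context, on platforms that do have SCNCamera the effect is configured directly on the camera attached to the point of view; a minimal sketch (the intensity and radius values are arbitrary):

import SceneKit

// SSAO is enabled by setting a non-zero intensity on the SCNCamera; how (or
// whether) this maps to visionOS is exactly the open question above.
let camera = SCNCamera()
camera.screenSpaceAmbientOcclusionIntensity = 1.0
camera.screenSpaceAmbientOcclusionRadius = 0.3
let cameraNode = SCNNode()
cameraNode.camera = camera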
When importing FBX animations (generated by Cinema 4D or Blender), the models come in very far away and cannot be resized or zoomed in on. I have tried every setting from both programs to no avail. Is there a secret to providing the right export options? When importing without animations and rigging, the model imports fine and at the correct size. But once motion is included, something is awry. I also tried changing base units in Converter, but it did not work. I have attached my model hierarchy in C4D as well as the imported result. It appears the animation is imported, as I can see it move, but I can barely see it :)
Following the post on
https://developer.apple.com/documentation/realitykit/custommaterial, it's simple to use a shader for materials and get uniforms and params for each vertex. However, it's not available for visionOS. Is there any alternative to use in this case? I want to write a shader to fill the material myself. (I have shader experience from the web and am familiar with fragment shaders.)
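For reference, the usual route on visionOS is a ShaderGraphMaterial authored in Reality Composer Pro, with custom inputs driven from code; a minimal sketch (the material path, parameter name, and bundle are assumptions):

import RealityKit
import RealityKitContent

// Load a shader-graph material from a Reality Composer Pro scene and
// feed it a custom parameter from Swift.
func applyCustomShader(to entity: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(
        named: "/Root/MyMaterial",        // path inside the .usda scene (assumed)
        from: "Scene.usda",
        in: realityKitContentBundle
    )
    try material.setParameter(name: "Intensity", value: .float(0.8))
    entity.model?.materials = [material]
}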
Hi all, I'm having a variety of issues with GameKit matchmaking. On the simulator the matchmaking UI pops up and I can click Quick Match, then immediately get "Failed to find Players"; this is the same with a real Apple ID and a sandbox account.
If I use real devices the app at least discovers a match, but then none of the delegate methods for the match ever get called and the logs are filled with "socket not connected" and various errors.
My questions are:
Should matchmaking via Quick Match work in the simulator? I have seen tutorial videos of this working, but I can't seem to get it to work.
How do people debug issues with Game Center / GameKit to find out why it's not able to connect?
Many thanks in advance
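For reference, one common cause of silent match delegates is not assigning the match's delegate (and not keeping a strong reference to the match) as soon as the matchmaker view controller hands it over; a minimal sketch, with the class and property names assumed:

import GameKit

final class MatchController: NSObject, GKMatchmakerViewControllerDelegate, GKMatchDelegate {
    private var match: GKMatch?   // keep a strong reference, or callbacks stop

    func matchmakerViewController(_ viewController: GKMatchmakerViewController,
                                  didFind match: GKMatch) {
        viewController.dismiss(animated: true)
        self.match = match
        match.delegate = self     // set before expecting any data/state callbacks
    }

    func matchmakerViewControllerWasCancelled(_ viewController: GKMatchmakerViewController) {
        viewController.dismiss(animated: true)
    }

    func matchmakerViewController(_ viewController: GKMatchmakerViewController,
                                  didFailWithError error: Error) {
        viewController.dismiss(animated: true)
        print("Matchmaking failed: \(error)")
    }

    // MARK: GKMatchDelegate

    func match(_ match: GKMatch, player: GKPlayer, didChange state: GKPlayerConnectionState) {
        print("\(player.alias) connection state: \(state.rawValue)")
    }

    func match(_ match: GKMatch, didReceive data: Data, fromRemotePlayer player: GKPlayer) {
        // Handle incoming data here.
    }
}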
I am trying to simulate a paste command and it seems to not want to paste. It worked at one point with the same code and now is causing issues.
My code looks like this:
func simulatePaste() {
    guard let source = CGEventSource(stateID: .hidSystemState) else {
        print("Failed to create event source")
        return
    }
    let keyDown = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: true)
    let keyUp = CGEvent(keyboardEventSource: source, virtualKey: CGKeyCode(9), keyDown: false)
    keyDown?.flags = .maskCommand
    keyUp?.flags = .maskCommand
    keyDown?.post(tap: .cgAnnotatedSessionEventTap)
    keyUp?.post(tap: .cgAnnotatedSessionEventTap)
    print("Simulated Cmd + V")
}
I know that there are some issues around permissions, so in my Info.plist I have this:
<string>NSApplication</string>
<key>NSAppleEventsUsageDescription</key>
<string>This app requires permission to send keyboard input for pasting from the clipboard.</string>
I have also disabled the sandbox. It does ask me if I want to give the app permissions, but after approving it, it still doesn't paste.
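For reference, posting keyboard events with CGEvent generally depends on the Accessibility permission rather than the Apple Events usage description; a minimal sketch of checking for it and prompting the user (the function name is an assumption):

import ApplicationServices

// Returns true if the app is trusted for Accessibility; passing the prompt
// option makes the system show the permission dialog when it isn't.
func ensureAccessibilityPermission() -> Bool {
    let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
    return AXIsProcessTrustedWithOptions(options)
}

If this returns false, posted key events are silently dropped, which would match the symptom described above.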
Hey there,
I tried to install GPTK again, since I had to reinstall the OS for unrelated reasons. But every time I try to install the toolkit, it gives me the "Error: apple/apple/game-porting-toolkit 1.1 did not build" error. Before that error occurred, I had the OpenSSL error, which I fixed with the rbenv version of openssl. Is there any way to fix this error? Down below you'll find the full error message it gave me. The specs for my Mac (if they are helpful in any way): M1 Pro MBP 14" with 16GB RAM and 512GB SSD.
Thanks!
Error: apple/apple/game-porting-toolkit 1.1 did not build
Logs:
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/00.options.out
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/01.configure.cc
/Users/myuser/Library/Logs/Homebrew/game-porting-toolkit/wine64-build
If reporting this issue please do so at (not Homebrew/brew or Homebrew/homebrew-core):
https://github.com/apple/homebrew-apple/issues
Hello there
Currently, I'm attempting to create an interactive learning application with a 3D view. I've discovered the SceneKit framework, but I lack the necessary knowledge to load, animate, and move objects. Could someone kindly suggest some good articles or tutorials on this topic?
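For reference, a minimal sketch of loading a scene and running a simple move animation on a node with SceneKit (the asset path and node name are assumptions):

import SceneKit

// Load a .scn file from the app bundle, find a node by name, and run a
// repeating up/down move action on it.
func makeScene() -> SCNScene? {
    guard let scene = SCNScene(named: "art.scnassets/ship.scn"),
          let node = scene.rootNode.childNode(withName: "ship", recursively: true)
    else { return nil }

    let moveUp = SCNAction.moveBy(x: 0, y: 1, z: 0, duration: 1)
    let moveDown = moveUp.reversed()
    node.runAction(.repeatForever(.sequence([moveUp, moveDown])))
    return scene
}

Assign the result to an SCNView's scene property to display it.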
When inspecting the geometry in Xcode's Metal debugger, I noticed that the "frustum box" shown didn't make sense. Since Metal uses the depth range [0, 1] in NDC space, I would expect a vertex that is projected to z: 0 to be on the front clipping plane of the frustum shown in the geometry inspector. This is, however, not the case. A vertex with NDC z: 0 is shown halfway inside the frustum. Vertices with NDC z less than 0 are correctly culled during rendering, while the geometry inspector's frustum shows that the vertex is still inside the frustum.
The image shows vertices that are visually in the middle of the frustum on the z axis, but at the same time the out position shows that they are projected to z: 0. How is this possible, unless there's a bug in the geometry inspector?
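For reference, a small worked check of the expectation above, using a common Metal-style projection (right-handed view space, depth mapped to [0, 1]): a point on the near plane lands exactly at NDC z = 0, so it should sit on the front face of the displayed frustum. The matrix below is a standard construction, not necessarily the one the inspector uses.

import Foundation
import simd

// Perspective projection mapping view-space z = -near to NDC z = 0
// and z = -far to NDC z = 1 (Metal's clip convention).
func perspectiveProjection(fovyRadians: Float, aspect: Float,
                           near: Float, far: Float) -> simd_float4x4 {
    let yScale = 1 / tanf(fovyRadians * 0.5)
    let xScale = yScale / aspect
    let zScale = far / (near - far)
    return simd_float4x4(columns: (
        SIMD4<Float>(xScale, 0, 0, 0),
        SIMD4<Float>(0, yScale, 0, 0),
        SIMD4<Float>(0, 0, zScale, -1),
        SIMD4<Float>(0, 0, zScale * near, 0)
    ))
}

// Quick check: a point on the near plane projects to NDC z = 0.
let P = perspectiveProjection(fovyRadians: .pi / 3, aspect: 1, near: 0.1, far: 100)
let onNearPlane = P * SIMD4<Float>(0, 0, -0.1, 1)
print(onNearPlane.z / onNearPlane.w)   // prints 0.0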
Hi,
I wanted to do something quite simple: Put a box on a wall or on the floor.
My box:
let myBox = ModelEntity(
    mesh: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    materials: [SimpleMaterial(color: .systemRed, isMetallic: false)],
    collisionShape: .generateBox(size: SIMD3<Float>(0.1, 0.1, 0.01)),
    mass: 0.0)
For that I used Plane Detection to identify the walls and floor in the room. Then with SpatialTapGesture I was able to retrieve the position where the user is looking and tap.
let position = value.convert(value.location3D, from: .local, to: .scene)
And then positioned my box
myBox.setPosition(position, relativeTo: nil)
When I then tested it, I realized that the box was not parallel to the wall but sat at a slightly inclined angle.
I also realized that if I tried to put my box on the wall to my left, the box was placed perpendicular to that wall and not flat against it.
After various searches and several attempts, I ended up playing with transform.matrix to identify whether the plane is a wall or a floor, and whether it is in front of me or to the side, and set up a rotation on the box to "place" it on the wall or floor.
let surfaceTransform = surface.transform.matrix
let surfaceNormal = normalize(surfaceTransform.columns.2.xyz)

let baseRotation = simd_quatf(angle: .pi, axis: SIMD3<Float>(0, 1, 0))
var finalRotation: simd_quatf

if acos(abs(dot(surfaceNormal, SIMD3<Float>(0, 1, 0)))) < 0.3 {
    logger.info("Surface: ceiling/floor")
    finalRotation = simd_quatf(angle: surfaceNormal.y > 0 ? 0 : .pi, axis: SIMD3<Float>(1, 0, 0))
} else if abs(surfaceNormal.x) > abs(surfaceNormal.z) {
    logger.info("Surface: left/right")
    finalRotation = simd_quatf(angle: surfaceNormal.x > 0 ? .pi/2 : -.pi/2, axis: SIMD3<Float>(0, 1, 0))
} else {
    logger.info("Surface: front/back")
    finalRotation = baseRotation
}
Playing with matrices is not really my thing so I don't know if I'm doing it right.
Could you tell me if my tests for the orientation of the walls are correct? During my tests I don't always correctly identify whether the wall is in front of me or to the side.
Is this generally the right way to do it?
Is there an easier way to do this?
Regards
Tof
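For reference, a possibly simpler approach is to skip the per-axis checks and rotate the box's +Z axis directly onto the detected plane's normal; a minimal sketch, assuming surfaceNormal is the plane's normal derived as in the post (and noting that the from/to rotation is degenerate when the two vectors point in opposite directions):

import RealityKit
import simd

// Rotate the box so its front face (+Z in local space) points along the
// surface normal; this covers walls, floors, and ceilings in one path.
func place(_ box: ModelEntity, at position: SIMD3<Float>, facing surfaceNormal: SIMD3<Float>) {
    let rotation = simd_quatf(from: SIMD3<Float>(0, 0, 1), to: normalize(surfaceNormal))
    box.setPosition(position, relativeTo: nil)
    box.setOrientation(rotation, relativeTo: nil)
}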
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However, in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
which worked with a SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it's still in the documentation.
I also wanted to get ARFaceAnchor data, which I still cannot get without ARSession; the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session delegate (didUpdate and didAdd) only runs for a few frames before getting interrupted.
And if I completely remove SpatialTrackingConfiguration and just run the ARSession, there is still a valid tracked entity for the AnchoringComponent.Target.face component, provided that in the ARSession's configuration I use ARWorldTrackingConfiguration with face tracking enabled. I still get updated facial data each frame, but the ARSession's didUpdate and didAdd functions don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without SpatialTrackingConfiguration.
My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system, and track them live, while also using SwiftUI?
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
I'm trying to build a Shader in "Reality Composer Pro" that updates from a start time. Initially I tried the following:
The idea was that when startTime was 0, the output would be 0, but then I would set startTime from within code; this would be compared with the current GPU time, and the difference used to drive another part of the shader graph:
if
    let testEntity = root.findEntity(named: "Test"),
    var shaderGraphMaterial = testEntity.components[ModelComponent.self]?.materials.first as? ShaderGraphMaterial
{
    let time = CFAbsoluteTimeGetCurrent()
    try! shaderGraphMaterial.setParameter(name: "StartTime", value: .float(Float(time)))
    testEntity.components[ModelComponent.self]?.materials[0] = shaderGraphMaterial
}
However, I haven't found a reference to the time the shader would be using.
So now I am trying to write an EntityAction to achieve the same effect. Instead of comparing a start time to the GPU's time, I'm trying to animate one of the shader's uniform inputs. However, I'm not sure how to specify the bind target. Here's my attempt so far:
import RealityKit

struct ShaderAction: EntityAction {
    let startValue: Float
    let targetValue: Float

    var animatedValueType: (any AnimatableData.Type)? { Float.self }

    static func registerEntityAction() {
        ShaderAction.subscribe(to: .updated) { event in
            guard let animationState = event.animationState else { return }
            let value = simd_mix(event.action.startValue, event.action.targetValue, Float(animationState.normalizedTime))
            animationState.storeAnimatedValue(value)
        }
    }
}

extension Entity {
    func updateShader(from startValue: Float, to targetValue: Float, duration: Double) {
        let fadeAction = ShaderAction(startValue: startValue, targetValue: targetValue)
        if let shaderAnimation = try? AnimationResource.makeActionAnimation(for: fadeAction, duration: duration, bindTarget: .material(0).customValue) {
            playAnimation(shaderAnimation)
        }
    }
}
Currently when I run this I get an assertion failure: 'Index out of range (operator[]:line 797) index = 260, max = 8'
Furthermore, even if it didn't crash, I don't understand how to pass a binding to the custom shader value "startValue".
Any clues on how to achieve this effect would be appreciated, even if it's a completely different way.
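For reference, a hedged alternative that sidesteps EntityAction entirely: drive the shader-graph input from a per-frame scene subscription and accumulate elapsed time yourself. The parameter name, entity lookup, and placement inside a RealityView's make closure are assumptions.

import RealityKit

// Inside a RealityView's make closure, where `content` is the RealityViewContent.
// Keep the returned EventSubscription alive (e.g. store it in @State).
var elapsed: Float = 0
let subscription = content.subscribe(to: SceneEvents.Update.self) { event in
    elapsed += Float(event.deltaTime)
    guard let testEntity = content.entities.first?.findEntity(named: "Test"),
          var material = testEntity.components[ModelComponent.self]?.materials.first as? ShaderGraphMaterial
    else { return }
    try? material.setParameter(name: "Progress", value: .float(elapsed))   // "Progress" is an assumed input name
    testEntity.components[ModelComponent.self]?.materials[0] = material
}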
Hello!
I need to "draw" a set of particles into a texture. It would be trivial in a render encoder, of course. However, I would like to implement the task in a compute kernel. Every particle draw operation is expected to set 5 texels: the "center" one plus its left/right/upper/lower neighbors. Particles can and will overlap, so concurrent draws are to be expected.
I tried using texture atomics (atomic_store(), to be more precise). This worked, albeit pretty slowly; too slow for my purpose.
Just to test what would happen, I tried using normal texture write(). I was expecting to see some kind of visual artefacts, but to my surprise, it worked very well (and much faster).
My question: is it safe? I understand that calling write() doesn't guarantee any ordering of the operations, so if multiple threads write to the same texel, the final value may come from any of those threads. But suppose all the threads were to write the very same color? Can I assume that the texel in question will have said color after the compute kernel finishes?
I am using an M2 Pro MacBook, but ideally I would love to get the answer for all Apple Silicon devices. My texture format is R32Int (so as to be able to use atomics), but I could do with any single-channel format; the purpose of the texture is to be a binary mask of sorts.
Thanks!