I have a visionOS app that I'm adding iOS support to, and I would like to keep using RealityView.
I know there are the following modifiers for adding some camera navigation:
.realityViewCameraControls(.orbit)
.realityViewCameraControls(.dolly)
.realityViewCameraControls(.pan)
But how can I combine more than one? For example, I would like to orbit with one finger, pan with two fingers, and dolly by pinching. Is this possible, and if so, can someone share some sample code showing how to achieve it?
Thanks,
Guillermo
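One possible approach, sketched here on the assumption that the built-in .realityViewCameraControls modifier only takes a single mode: use a virtual camera and drive it with your own SwiftUI gestures. All names, sensitivities, and the orbit math below are illustrative, and a two-finger pan would additionally need a bridged UIKit pan gesture recognizer, which is omitted.

import SwiftUI
import RealityKit

// Sketch: orbit with a one-finger drag and dolly with a pinch by positioning
// a virtual PerspectiveCamera manually instead of using the built-in controls.
struct CustomCameraControlsView: View {
    @State private var azimuth: Float = 0        // radians around the Y axis
    @State private var elevation: Float = 0.3    // radians above the ground plane
    @State private var radius: Float = 2         // metres from the target
    @State private var lastDrag: CGSize = .zero
    @State private var lastMagnification: CGFloat = 1
    @State private var camera = PerspectiveCamera()

    var body: some View {
        RealityView { content in
            content.camera = .virtual             // we position the camera ourselves
            content.add(camera)
            content.add(ModelEntity(mesh: .generateBox(size: 0.2),
                                    materials: [SimpleMaterial()]))
            updateCamera()
        }
        .gesture(
            DragGesture()                         // one finger: orbit
                .onChanged { value in
                    let dx = Float(value.translation.width - lastDrag.width)
                    let dy = Float(value.translation.height - lastDrag.height)
                    lastDrag = value.translation
                    azimuth -= dx * 0.005
                    elevation = min(1.3, max(-1.3, elevation + dy * 0.005))
                    updateCamera()
                }
                .onEnded { _ in lastDrag = .zero }
        )
        .simultaneousGesture(
            MagnifyGesture()                      // pinch: dolly in and out
                .onChanged { value in
                    let factor = value.magnification / lastMagnification
                    lastMagnification = value.magnification
                    radius = min(10, max(0.3, radius / Float(factor)))
                    updateCamera()
                }
                .onEnded { _ in lastMagnification = 1 }
        )
    }

    private func updateCamera() {
        // Simple spherical orbit around the origin.
        let position = SIMD3<Float>(radius * cos(elevation) * sin(azimuth),
                                    radius * sin(elevation),
                                    radius * cos(elevation) * cos(azimuth))
        camera.look(at: .zero, from: position, relativeTo: nil)
    }
}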
In the CanyonCrosser example project, some RealityKit systems are implemented as classes while others are structs. What’s the reason for using different types?
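Not an answer from the sample's authors, but a minimal illustrative sketch (all names below are made up) of a class-based system: a class instance can keep mutable state, such as accumulated time, across update(context:) calls, which is one plausible reason to pick a class over a struct, while a stateless system could just as well be a struct.

import Foundation
import RealityKit
import simd

// Illustrative class-based system that accumulates elapsed time between frames.
final class SpinSystem: System {
    // Query for every entity that has a ModelComponent (an assumption for the demo).
    private static let query = EntityQuery(where: .has(ModelComponent.self))
    private var elapsed: TimeInterval = 0   // mutable state kept between update calls

    init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        elapsed += context.deltaTime
        for entity in context.entities(matching: Self.query, updatingSystemWhen: .rendering) {
            entity.transform.rotation = simd_quatf(angle: Float(elapsed), axis: [0, 1, 0])
        }
    }
}

// Registered once at start-up: SpinSystem.registerSystem()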
Hello Dev Community,
I've been thinking over Apple's preference for USDZ for AR and 3D content, especially when there's the widely used GLTF. I'm keen to discuss and hear your insights on this choice.
USDZ, backed by Apple, has seen a surge in the AR community. It boasts advantages like compactness, animation support, and ARKit compatibility. In contrast, GLTF too is a popular format with its own merits, like being an open standard and offering flexibility.
Here are some of my questions about the use of USDZ:
Why did Apple choose USDZ over other 3D file formats like GLTF?
What benefits does USDZ bring to Apple's AR and 3D content ecosystem?
Are there any limitations of USDZ compared to other file formats?
Could factors like compatibility, security, or integration ease have influenced Apple's decision?
I would love to hear your thoughts on this. Feel free to share any experiences with USDZ or other 3D file formats within Apple's ecosystem!
Topic: Graphics & Games
SubTopic: RealityKit
Tags: RealityKit, USDZ, Reality Converter, Reality Composer Pro
Issues building Unity plug-in project: Cannot locate native library Apple.Core/Apple.GameKit for iOS
I'm having issues getting a well-built package from the Apple Unity Plug-in project.
When building my game project in Unity, the following error is printed to the console:
[Apple.Core.AppleNativeLibraryUtility] Cannot locate a Debug or Release Apple.Core native library for iOS.
Please ensure that the build invocation (build.py, xcodebuild, or Xcode) compiled cleanly and that the build was configured to support Debug on iOS.
As far as I can tell the build did compile cleanly, but I might be missing something.
If anyone can see what I'm doing wrong or has any insight it would be greatly appreciated.
Setup is the following:
macOS Tahoe 26 Beta
Xcode-beta Version 26.0 beta 3 (17A5276g)
Unity Plug-in branch: 2025-beta1
Unity game project version: 2022.3.60f
M1 Macbook Pro
The built packages have been imported into the game project through the Unity Package Manager using the tarball option pointing to the built packages from the Unity Plug-in project.
The Unity Plug-in project has been built using the build.py file with the following:
python3 build.py -m iOS iPhoneSimulator -p Core GameKit CoreHaptics GameController -k all
The output is available in the attached file.
build-output.txt
Here's an image of the NativeLibraries~ folder inside the built Apple.Core package.
Guys,
In my main application bundle, I have included a helper bundle in its Resources. When the helper requests Accessibility permission, the system modal window displays what the helper is requesting permission for.
However, when the helper requests permission for Screen Recording, the system modal window displays that the main application bundle (which contains the helper) is requesting permission.
This issue seems to be specific to Ventura, as both requests are displayed on behalf of the helper in Monterey.
I'm wondering if this is a known issue or limitation or if there is a way to make the permission request specifically from the helper.
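For context, a minimal sketch (in Swift, with an illustrative function name) of the calls a helper might make to trigger both prompts; the attribution difference described above happens at the TCC level, not in this code:

import ApplicationServices   // AXIsProcessTrustedWithOptions
import CoreGraphics          // CGPreflightScreenCaptureAccess / CGRequestScreenCaptureAccess

func requestPermissionsFromHelper() {
    // Accessibility: per the behavior described above, this prompt is
    // attributed to the helper (the calling process).
    let options = [kAXTrustedCheckOptionPrompt.takeUnretainedValue() as String: true] as CFDictionary
    let accessibilityTrusted = AXIsProcessTrustedWithOptions(options)
    print("Accessibility trusted: \(accessibilityTrusted)")

    // Screen Recording: on Ventura the prompt is reportedly attributed to the
    // enclosing main application bundle rather than to the helper.
    if !CGPreflightScreenCaptureAccess() {
        CGRequestScreenCaptureAccess()
    }
}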
I'm trying to use MTLBinaryArchive. I collected a BinaryArchive from one device and used metal-tt to translate it for all supported iPhone devices, ranging from iPhone 7 Plus to iPhone 16.
However, this BinaryArchive is quite large, around 1.5GB uncompressed, and about 500MB compressed in the IPA. I'm wondering how to address the size issue.
I watched the WWDC 2022 video, which mentioned that the operating system or app installation process would handle compatibility. Does this compatibility support different GPU chips? I tried installing an IPA with a BinaryArchive collected only from an iPhone 12 on an iPhone 13, but the BinaryArchive didn't take effect.
I also saw that Apple supports App Thinning. However, it seems that resources in the Asset Catalog cannot be accessed via URL, and creating an MTLBinaryArchive requires a URL. Is it possible for MTLBinaryArchive to be distributed through App Thinning?
The WWDC 2022 video also mentioned using the -Os optimization flag to reduce size. Is there a rough estimate of how much size reduction that achieves? Are there any ways to address the BinaryArchive size issue without impacting performance?
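For reference, a minimal sketch of the URL-based loading that the App Thinning question refers to; the shader function names and pixel format are placeholder assumptions, not from the original post:

import Foundation
import Metal

// Load a previously harvested binary archive from a file URL and attach it to a
// pipeline so compatible precompiled binaries are reused instead of recompiled.
func makePipeline(device: MTLDevice, library: MTLLibrary, archiveURL: URL) throws -> MTLRenderPipelineState {
    let archiveDescriptor = MTLBinaryArchiveDescriptor()
    archiveDescriptor.url = archiveURL                  // needs a file URL, hence the Asset Catalog question
    let archive = try device.makeBinaryArchive(descriptor: archiveDescriptor)

    let pipelineDescriptor = MTLRenderPipelineDescriptor()
    pipelineDescriptor.vertexFunction = library.makeFunction(name: "vertexMain")     // placeholder name
    pipelineDescriptor.fragmentFunction = library.makeFunction(name: "fragmentMain") // placeholder name
    pipelineDescriptor.colorAttachments[0].pixelFormat = .bgra8Unorm
    pipelineDescriptor.binaryArchives = [archive]

    return try device.makeRenderPipelineState(descriptor: pipelineDescriptor)
}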
Topic: Graphics & Games
SubTopic: Metal
I am unable to get the visionOS 2.0 simulator to receive the GCControllerDidConnect notification, and thus am unable to set up support for a gamepad. However, it works in visionOS 1.2.
For visionOS 2.0 I've tried adding:
.handlesGameControllerEvents(matching: .gamepad) attribute to the view
Supports Controller User Interaction to Info.plist
Supported game controller types -> Extended Gamepad to Info.plist
...but the notification still doesn't fire. It does fire when the code is run in the visionOS 1.2 simulator; both simulators have the Send Game Controller To Device option enabled.
Here is the example code. It's based on the Xcode project template. The only files updated were ImmersiveView.swift and Info.plist, as detailed above:
import SwiftUI
import GameController
import RealityKit
import RealityKitContent

struct ImmersiveView: View {
    var body: some View {
        RealityView { content in
            // Add the initial RealityKit content
            if let immersiveContentEntity = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                content.add(immersiveContentEntity)
            }

            NotificationCenter.default.addObserver(
                forName: NSNotification.Name.GCControllerDidConnect,
                object: nil, queue: nil) { _ in
                    print("Handling GCControllerDidConnect notification")
                }
        }
        .modify {
            if #available(visionOS 2.0, *) {
                $0.handlesGameControllerEvents(matching: .gamepad)
            } else {
                $0
            }
        }
    }
}

extension View {
    func modify<T: View>(@ViewBuilder _ modifier: (Self) -> T) -> some View {
        return modifier(self)
    }
}
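One thing that might be worth checking (an assumption on my part, not a confirmed cause): GCControllerDidConnect is only posted for controllers that connect after the observer is installed, so a controller that is already paired when the view appears can be picked up explicitly, for example right after registering the observer above:

// Hypothetical addition inside the RealityView closure, after the observer is added.
for controller in GCController.controllers() {
    print("Already connected: \(controller.vendorName ?? "unknown controller")")
}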
It appears after updating to Xcode 14.3:
[default] CGSWindowShmemCreateWithPort failed on port 0
Hello,
I am trying to use the subdivision mesh rendering option.
I can see it working in Reality Composer Pro:
But not when loading the asset and displaying it in the Simulator:
Using this code:
import SwiftUI
import RealityKit
import RealityKitContent

struct AirspaceView: View {
    // MARK: - VIEW BODY
    var body: some View {
        RealityView { content in
            if let a = try? await Entity(named: "Models/Test/Test.usdc", in: realityKitContentBundle) {
                content.add(a)
            }
        }
    }
}
Any ideas why?
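In case it helps to narrow things down, here is a small diagnostic sketch (only the loading code above is from the post; this helper is my own assumption) that prints what the Simulator actually loaded, so it can be compared against what Reality Composer Pro shows:

import RealityKit

// Recursively print each entity and basic ModelComponent information.
func dumpHierarchy(_ entity: Entity, indent: String = "") {
    let name = entity.name.isEmpty ? "<unnamed>" : entity.name
    if let model = entity.components[ModelComponent.self] {
        print("\(indent)\(name): mesh bounds \(model.mesh.bounds), \(model.materials.count) material(s)")
    } else {
        print("\(indent)\(name): no ModelComponent")
    }
    for child in entity.children {
        dumpHierarchy(child, indent: indent + "  ")
    }
}

// For example: dumpHierarchy(a) right after content.add(a).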
I want to use SwiftUI and RealityView to get AR scene understanding data (ARMeshAnchor) on iOS devices with LiDAR. The only way we can do that is by using ARSession (unless there is another way).
However in previous iOS 18 builds there was this function:
https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:session:arconfiguration:)
, which worked with SpatialTrackingSession and a custom ARSession together. This function has since been removed from the RealityKit framework in the latest iOS and Xcode, but it is still in the documentation.
I also wanted to get ARFaceAnchor data which I still cannot get without ARSession, the closest I can get is by using:
let target = AnchoringComponent.Target.face
let anchoringComponent = AnchoringComponent(target, trackingMode: .predicted)
entity = Entity()
entity!.components.set(anchoringComponent)
But I still can't find a way to get the current frame (ARFrame) or the anchors ([ARAnchor]) in the view.
Alternatively, if I use this function: https://developer.apple.com/documentation/realitykit/spatialtrackingsession/run(_:) and start the ARSession separately, the session callbacks (didUpdate and didAdd) only run for a few frames before getting interrupted.
And if I completely remove SpatialTrackingConfiguration and just run the ARSession, there still is a valid tracked entity for the AnchoringComponent.Target.face component, provided the ARSession configuration is an ARWorldTrackingConfiguration with face tracking enabled, and I still get updated facial data each frame. But the ARSession didUpdate and didAdd functions don't get called past the first few frames.
Interestingly, if I switch the RealityViewCameraContent.RealityViewCamera to .virtual, I get ARMeshAnchor and ARFaceAnchor data, but no camera feed (as expected). This happens with or without SpatialTrackingConfiguration.
My overarching question is: what is the proper way to access ARMeshAnchors and other ARAnchors created by the system and track them live while also using SwiftUI?
GitHub Repo with sample project can be found here: https://github.com/bpate75/RealityViewTesting
Hello reader,
I am facing an issue that I have not been able to resolve. I have created a demo project that demonstrates the issue, and I hope it enables you to have a look as well and hopefully find a way to resolve it.
What is the issue:
I am using SKTileMapNode in order to draw tile maps. Instead of using tile sets the way you can from within the Xcode editor, I prefer to do it all programmatically using tile sheets (for a plethora of reasons that I will leave out of this equation).
This is the code of the gameScene:
import SpriteKit
import GameplayKit

class GameScene: SKScene {
    private let tileSize = CGSize(width: 32, height: 32)

    override func didMove(to view: SKView) {
        super.didMove(to: view)

        let tileSet = createTileSet()
        let tileMap = SKTileMapNode(tileSet: tileSet,
                                    columns: 100,
                                    rows: 100,
                                    tileSize: tileSize)

        for column in 0..<tileMap.numberOfColumns {
            for row in 0..<tileMap.numberOfRows {
                guard let tileGroup = tileSet.tileGroups.randomElement() else {
                    fatalError()
                }
                tileMap.setTileGroup(tileGroup, forColumn: column, row: row)
            }
        }

        addChild(tileMap)
    }

    private func createTileSet() -> SKTileSet {
        let tileSheetTexture = SKTexture(imageNamed: "terrain")
        var tileGroups = [SKTileGroup]()

        let relativeTileSize = CGSize(width: tileSize.width / tileSheetTexture.size().width,
                                      height: tileSize.height / tileSheetTexture.size().height)

        for idx in 0...2 {
            for jdx in 0...2 {
                let tileTexture = SKTexture(rect: .init(x: CGFloat(idx) * relativeTileSize.width,
                                                        y: CGFloat(jdx) * relativeTileSize.height,
                                                        width: relativeTileSize.width,
                                                        height: relativeTileSize.height),
                                            in: tileSheetTexture)
                let tileDefinition = SKTileDefinition(texture: tileTexture,
                                                      size: tileSize)
                let tileGroup = SKTileGroup(tileDefinition: tileDefinition)
                tileGroups.append(tileGroup)
            }
        }

        let tileSet = SKTileSet(tileGroups: tileGroups)
        return tileSet
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        presentSceneAgain()
    }

    func presentSceneAgain() {
        if let frame = view?.frame {
            view?.presentScene(GameScene(size: frame.size),
                               transition: .doorsCloseHorizontal(withDuration: 1.0))
        }
    }
}
This demo project creates a tile map node of 100 × 100 tiles. Then it fills these 10,000 tiles with a random tile from the tile sheet named "terrain.png". This tile sheet contains many tiles, but I only take the 9 tiles (3 × 3) from the lower-left corner as random tile options.
Thus, each of the 10,000 tiles gets filled with one of these 9 tiles. It doesn't look pretty, but that isn't the purpose.
Now, to create these 9 tile textures, I use the SKTexture(rect:in:) initializer on the source texture "terrain.png".
I think the code is quite clear in itself, but so much for the explanation. When you run it, you should see the map being rendered.
When you tap the scene, the scene will present a new instance of the scene. Not more than that.
Now, when you do this, have a look at the RAM usage of the app. You will see it steadily increases over time, each time you tap the scene and a new scene is presented.
I looked deeper into what is happening, and what I see in the memory graph is that for every presentation of the scene, there are 3 SKTexture instances created that are never released. The first time the scene is rendered, there are 11 SKTexture instances allocated (I don't know why there are 11, though; I would expect 10: the source texture and the 9 tile textures).
But then, as mentioned, after a tap and a new presentation, I get 14 SKTexture instances, of which 3 are zombies; see image leak_1.
Moreover, Xcode reports multiple additional leaks from Jet and Metal allocations; see image leak_all.
As far as I know, the code presented is not retaining any references that it should not, and I suspect these leaks are happening somewhere inside SpriteKit. But I am not able to find exactly where, or how to resolve it.
I hope someone can help with this issue.
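One possible mitigation to try (an assumption on my part, not a confirmed fix for the leak): build the tile set a single time and share it across scene presentations, so the SKTexture instances are only ever allocated once:

import SpriteKit

// Shared, lazily created tile set, built the same way as createTileSet() above.
enum SharedTileSet {
    static let terrain: SKTileSet = {
        let tileSize = CGSize(width: 32, height: 32)
        let sheet = SKTexture(imageNamed: "terrain")
        let relative = CGSize(width: tileSize.width / sheet.size().width,
                              height: tileSize.height / sheet.size().height)
        var groups = [SKTileGroup]()
        for idx in 0...2 {
            for jdx in 0...2 {
                let texture = SKTexture(rect: CGRect(x: CGFloat(idx) * relative.width,
                                                     y: CGFloat(jdx) * relative.height,
                                                     width: relative.width,
                                                     height: relative.height),
                                        in: sheet)
                groups.append(SKTileGroup(tileDefinition: SKTileDefinition(texture: texture,
                                                                           size: tileSize)))
            }
        }
        return SKTileSet(tileGroups: groups)
    }()
}

// In didMove(to:), use SharedTileSet.terrain instead of calling createTileSet().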
The title is self-explanatory. I wasn't able to find CAMetalDisplayLink in the most recent metal-cpp release (metal-cpp_macOS15_iOS18-beta). Are there any plans to include it in the next release?
Hi, following the recent deprecation of SceneKit, I'm trying to move a couple of my SceneKit projects to RealityKit.
One thing I can't seem to find is how to change the content scale factor when using a RealityView in SwiftUI. It was really easy to do in SceneKit with just a SCNView property, and it seems that it's also possible when using ARView, but I can't find a way to do it with a RealityView. Maybe it's a SwiftUI limitation?
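In the meantime, a hedged workaround sketch (assuming it is acceptable to wrap ARView, where contentScaleFactor is still an ordinary UIView property, until RealityView exposes an equivalent):

import SwiftUI
import RealityKit

struct ScaledARViewContainer: UIViewRepresentable {
    var scale: CGFloat   // e.g. lower than the native scale to save GPU time

    func makeUIView(context: Context) -> ARView {
        let arView = ARView(frame: .zero)
        arView.contentScaleFactor = scale
        return arView
    }

    func updateUIView(_ uiView: ARView, context: Context) {
        uiView.contentScaleFactor = scale
    }
}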
Topic: Graphics & Games
SubTopic: RealityKit
Hello,
This exact question was already asked in this forum (8 years ago) but I can't find a definitive answer:
Does Metal allow using the same color texture as both an input and output (color attachment) of a fragment shader? Is the behavior defined somewhere?
I believe this results in undefined behavior under both DirectX and OpenGL, so I'd assume the same for Metal, but then why doesn't Metal warn me about this as it does for so many other "misconfigurations"? It also seems to work correctly in my case, as I found out by accident.
Would love to get a clarification!
Thanks ahead!
I have published a number of games that use SpriteKit for everything important. Since the release of macOS Tahoe, I've had a lot of end user reports saying that sound effects have stopped working in many (but not all) of my titles.
I'm not doing anything unusual here – typical code is:
sndGameOver = [SKAction playSoundFileNamed:@"Audio/GameOver.wav" waitForCompletion:YES];
Then at the appropriate time:
[self runAction:sndGameOver];
Has anyone else encountered this? The code still works fine on previous operating systems, and appears to be fine on iOS too. Has something changed in macOS Tahoe?
I'm at a bit of a loss. There's nothing obviously different between the titles that do work and the titles that don't.
Suggestions welcomed!
Thanks
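One quick diagnostic to try (a Swift sketch using the file name from the post; the workaround itself is an assumption, not a confirmed fix): route the same file through SKAudioNode from within the scene and see whether that path still plays on macOS Tahoe.

// Inside the SKScene, at the point where the sound should play.
let audioNode = SKAudioNode(fileNamed: "Audio/GameOver.wav")
audioNode.autoplayLooped = false
addChild(audioNode)
audioNode.run(.play())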
Has anyone come across the issue that setting GKLocalPlayer.local.authenticateHandler breaks a RealityView's world tracking on iOS / iPadOS 18 beta 5?
I'm in the process of upgrading my app to make use of the much appreciated RealityView unification, using RealityView not only on visionOS but now also on iOS and iPadOS. In my RealityView, I enable world tracking on iOS like this:
content.camera = .worldTracking
However, device position and orientation were ignored (the camera remained static) and there was no camera pass-through. Then I discovered that the issue disappeared when I removed the line
GKLocalPlayer.local.authenticateHandler = { viewController, error in
// ... some more code ...
}
So I filed FB14731139 and hope that it will be resolved before the release of iOS / iPadOS 18.
I've been playing with the new GameSave API and cannot get it to work.
I followed the 3-step instructions from the Developer video. Step 2, "Next, login to your Apple developer account and include this entitlement in the provisioning profile for your game." seems to be unnecessary, as Xcode sets this for you when you do step 1, "First add the iCloud entitlement to your game."
Running the app on my device and tapping "Load" starts the sync, then fails with the error "Couldn’t communicate with a helper application." I have no idea how to troubleshoot this. Every other time I've used CloudKit it has Just Worked™.
Halp‽
Here is my example app:
import Foundation
import SwiftUI
import GameSave

@main struct GameSaveTestApp: App {
    var body: some Scene {
        WindowGroup {
            GameView()
        }
    }
}

struct GameView: View {
    @State private var loader = GameLoader()

    var body: some View {
        List {
            Button("Load") { loader.load() }
            Button("Finish sync") { Task { try? await loader.finish() } }
        }
    }
}

@Observable class GameLoader {
    var directory: GameSaveSyncedDirectory?

    func stateChanged() {
        let newState = withObservationTracking {
            directory?.state
        } onChange: {
            Task { @MainActor [weak self] in self?.stateChanged() }
        }
        print("State changed to \(newState?.description ?? "nil")")
        switch newState {
        case .error(let error):
            print("ERROR: \(error.localizedDescription)")
        default: _ = 0 // NOOP
        }
    }

    func load() {
        print("Opening gamesave directory")
        directory = GameSaveSyncedDirectory.openDirectory()
        stateChanged()
    }

    func finish() async throws {
        print("finishing syncing")
        await directory?.finishSyncing()
    }
}
Topic: Graphics & Games
SubTopic: General
When running my game in the Unity Editor on the Windows platform I get an error:
DllNotFoundException: GameKitWrapper assembly:<unknown assembly> type:<unknown type> member:(null)
Apple.GameKit.DefaultNSErrorHandler.Init () (at ./Library/PackageCache/com.apple.unityplugin.gamekit@0abcad546f73/Source/DefaultHandlers.cs:35)
This is because the GameKitWrapper dynamic library is not available on the Windows platform.
Besides, the "Apple Build Settings" are declared under UNITY_EDITOR_OSX and are also not available on the Windows platform.
Has anyone managed to solve this?
I am an AR developer working on Apple Silicon Macs. Currently, Reality Composer Pro does not allow exporting .reality files, and Reality Composer (classic) is not available for Apple Silicon. This creates a gap in the workflow for ARKit/RealityKit developers who need interactive .reality files for use in Xcode projects.
Having the ability to export .reality files directly from Reality Composer Pro on Mac would greatly streamline development and enable a fully native workflow on modern Macs. Alternatively, bringing Reality Composer (classic) to Apple Silicon would also resolve this issue.
I have submitted this as a feature request via Feedback Assistant (FB17900386). I encourage others with similar needs to reply or submit feedback as well.
Thank you!
Topic: Graphics & Games
SubTopic: RealityKit
Tags: ARKit, Reality Composer, RealityKit, Reality Composer Pro
Hi all
I have two mysterious issues with saving data to and fetching data from iCloud. Both reproduce only on the first launch of the app.
1. [GKLocalPlayer fetchSavedGamesWithCompletionHandler:]
On the first attempt I see 0 saved games (but I know there is at least one saved game) and there is no error. If I try to fetch one more time (without any additional actions), even immediately after the first attempt, I receive the saved games correctly (not 0).
2. [GKLocalPlayer saveGameData: withName: completionHandler:]
On the first attempt I get the error "The requested operation could not be completed because local player has not been authenticated." If I try to save one more time (without any additional actions), even immediately after the first attempt, I can save the data successfully without any error.
I found the same issue on Stack Overflow, but there are no fixes...
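In case it is useful, a hedged sketch of the ordering I would try first (an assumption based on the error text above, not a confirmed fix): only fetch or save once the authenticate handler has actually reported an authenticated local player.

import GameKit

final class SavedGameManager {
    func start() {
        GKLocalPlayer.local.authenticateHandler = { viewController, error in
            if let error {
                print("Game Center authentication failed: \(error.localizedDescription)")
                return
            }
            guard GKLocalPlayer.local.isAuthenticated else { return }
            self.fetchSavedGames()   // first fetch happens only after authentication
        }
    }

    private func fetchSavedGames() {
        GKLocalPlayer.local.fetchSavedGames { savedGames, error in
            if let error {
                print("Fetch failed: \(error.localizedDescription)")
            } else {
                print("Fetched \(savedGames?.count ?? 0) saved game(s)")
            }
        }
    }
}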