Delve into the world of graphics and game development. Discuss creating stunning visuals and optimizing game mechanics, and share resources for game developers.


Triangle count and texture size budget for RealityKit on visionOS
In the past, Apple recommended restricting USDZ models to a maximum of 100,000 triangles and a maximum texture size of 2048x2048 for AR Quick Look (and, I believe, for RealityKit on iOS in general). Does Apple have any recommended maximum polygon counts for visionOS? Is it the same for models running in a volumetric window in the Shared Space as in an ImmersiveSpace? And what is the recommended texture size for visionOS? (I seem to recall 8192x8192, but I can't find the reference now.)
2 replies · 0 boosts · 461 views · May ’24
in-app purchase with free tier
Hi, I am developing Cordova apps/games with in-app purchase products plus an initial non-paid (free) tier: new users can play a default set of games freely from the beginning. Then, if they would like more games with different attributes or themes, they can add more games through in-app purchases.

It is similar to Subway Surfers, a game on the App Store I used to play. A new player starts in the free tier. After, say, 3 games, the user is asked whether they would like more games with different scenes/themes in additional tiers: Tier 1, Tier 2, and Tier 3. For example, Tier 1 adds 3 more games to the free-tier games, for a set of 6; Tier 2 adds 6 more, for a set of 9; and so on. Each game in a set is a variant of the others in the differing tiers. If users don't wish to buy a tier, they can keep playing the free-tier games, with a limited set of themes but an unlimited number of times. If a user chooses a tier, say Tier 1, then after playing its 6 games they are asked whether they would like to advance to Tier 2 or Tier 3. If they choose Tier 2, then on completing those games they are asked about Tier 3. Again, if they don't wish to advance, they can replay the current tier's games as often as they like. It is like a non-subscription app converted into a subscription-based one.

In App Store Connect, I have created a number of in-app purchase products for the app. How can I deliver the free-tier games in the app, let users try them, and then allow them to choose among the in-app purchase products available in App Store Connect? I would appreciate your response and support. Best, Lexxyacc
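A minimal StoreKit 2 sketch of the flow described above (the free-tier games ship in the binary and are playable immediately; only the paid tiers go through the store). The product identifiers are hypothetical, and a Cordova app would normally reach StoreKit through a purchase plugin rather than calling it directly:

import StoreKit

// Hypothetical identifiers; these must match the in-app purchase
// products configured in App Store Connect.
let tierIDs = ["com.example.tier1", "com.example.tier2", "com.example.tier3"]

// Fetch the configured tier products from the App Store.
func loadTierProducts() async throws -> [Product] {
    try await Product.products(for: tierIDs)
}

// Purchase a tier; on verified success, unlock its games locally.
func purchase(_ product: Product) async throws -> Bool {
    switch try await product.purchase() {
    case .success(let verification):
        guard case .verified(let transaction) = verification else { return false }
        // Unlock the tier's games here, then finish the transaction.
        await transaction.finish()
        return true
    case .userCancelled, .pending:
        return false
    @unknown default:
        return false
    }
}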
0 replies · 0 boosts · 280 views · May ’24
How to move the player in visionOS
I'm creating a fully immersive app with a large 3D environment, in which I need to be able to move the player with several options: hand gestures, a game controller, and teleporting. I have worked with Unreal Engine, where moving the player is easy and well documented, but I have not been able to find any information on how to achieve this in visionOS. Has anyone done something similar who could give me some advice or sample code? Any help appreciated. Guillermo
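One commonly suggested workaround, since visionOS does not expose a movable camera inside an immersive space, is to translate the world's root entity in the opposite direction of the desired player motion. A minimal sketch driven by a game controller thumbstick; the root-entity setup and the speed value are assumptions:

import GameController
import RealityKit

// Moving the world root opposite to the desired motion makes the
// player appear to move through the environment.
func movePlayer(worldRoot: Entity, by delta: SIMD3<Float>) {
    worldRoot.position -= delta
}

// Poll the left thumbstick each frame and convert it to a motion delta.
func pollController(worldRoot: Entity, deltaTime: Float) {
    guard let pad = GCController.current?.extendedGamepad else { return }
    let x = pad.leftThumbstick.xAxis.value   // strafe left/right
    let z = pad.leftThumbstick.yAxis.value   // forward/back
    let speed: Float = 1.5                   // metres per second (a guess)
    movePlayer(worldRoot: worldRoot,
               by: SIMD3<Float>(x, 0, -z) * speed * deltaTime)
}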
1 reply · 0 boosts · 292 views · May ’24
CGContextDrawLayerAtPoint problems in macOS Sonoma
My app stopped working in macOS Sonoma 14.0, and I quickly isolated the problem to CGContextDrawLayerAtPoint. There were two issues. First, about half the time no data was copied (the updated CGLayer did not show up in the window). Then the app would crash in libswiftCore.dylib after about 5 updates with a very unusual message: "Fatal error: Duplicate keys of type 'DisplayList' were found in a Dictionary. This usually means either that the type violates Hashable's requirements, or that members of such a dictionary were mutated after insertion". This behavior showed up in builds built with Xcode 13 on macOS Monterey as well as Xcode 15 on macOS Ventura, whenever the app was run on Sonoma.

My app uses a very traditional method to create an off-screen graphics context in drawRect:

- (void)drawRect:(NSRect)dirtyRect {
    // Obtain context from the current NSGraphicsContext
    ...
    viewNSContext = [NSGraphicsContext currentContext];
    viewCGContext = (CGContextRef)[viewNSContext graphicsPort];
    drawingLayer = CGLayerCreateWithContext(viewCGContext, size, NULL);

So the exact details of the off-screen drawing area were based on the characteristics of the window being drawn to. Fortunately the workaround was very easy: creating a custom CGBitmapContext resolved everything. My drawing requirements are very basic, so a simple 32-bit RGB off-screen context was adequate.

colorSpaceRef = CGColorSpaceCreateDeviceRGB();
bitMapContextRef = CGBitmapContextCreate(NULL, (int) rintf(size.width), (int) rintf(size.height), 8, 0, colorSpaceRef, kCGImageAlphaNoneSkipLast);
drawingLayer = CGLayerCreateWithContext(bitMapContextRef, size, NULL);

Once I changed to a bitmap off-screen context, the problem was resolved. In my case I verified that the portion of the window updated with CGContextDrawLayerAtPoint was indeed restricted to the dirty part of the view rectangle, at least in Sonoma 14.5. Hope this helps someone else searching for the issue, as I found nothing in the forums or online.
2 replies · 1 boost · 299 views · May ’24
Vision Pro Unity build and Xcode (Swift) visionOS build merge
I have a Unity scene which I have created for Vision Pro, and I have also created a biometric authentication application for visionOS using Xcode and Swift. What I want to do is launch the Unity scene from Xcode after the authentication has taken place. I have seen a Medium post, but it only shows how to do that for iOS apps; I have not been able to do it for Vision Pro. I followed this post: https://medium.com/mop-developers/launch-a-unity-game-from-a-swiftui-ios-app-11a5652ce476 I am doing all this because, as far as I know, Apple Vision Pro does not currently support Optic ID authentication with Unity's PolySpatial plugin. Any help on this will be appreciated. Thank you in advance.
1 reply · 0 boosts · 370 views · May ’24
Guideline 4.3(a) Original game misinterpreted as Spam
Hello, our game Nerd Survivors has been flagged as spam. The game is based on an original IP, and the only games that should share any resemblance with it are from our company. Today we were notified of the rejection for violating Guideline 4.3, but there is no reference to the other application, or even a contact for the other developer. We do use Unity as our game engine, so some parts of the code might be shared across different games, but I cannot find any other justification for the rejection.

"Guideline 4.3(a) - Design - Spam We noticed your app shares a similar binary, metadata, and/or concept as apps submitted to the App Store by other developers, with only minor differences. Submitting similar or repackaged apps is a form of spam that creates clutter and makes it difficult for users to discover new apps."
1 reply · 0 boosts · 415 views · May ’24
WebGPU bug report: problem with uniform buffer
Hi, in this WebGPU example: https://skal65535.github.io/curl/index_bug_safari.html the lighting is wrong compared to Chrome's reference version. I narrowed the problem down to the uniform value 'params.specular' at line 515 not being equal to the expected value 1.2f. The value is set in the uniform buffer at line 1078. Platform: MacBook M1 Pro, Sonoma 14.4.1 (23E224), Safari Technology Preview: Release 194 (Safari 17.4, WebKit 19619.1.11.111.2). Works OK in Chrome 124.0.6367.156 (Official Build) (arm64).
6 replies · 0 boosts · 315 views · May ’24
Walking an entity around an immersive space in visionOS like the window drag bar
I'm trying to understand how Apple handles dragging windows around in an immersive space. 3D gestures seem to be only half of the solution: they are great if you're standing still and want to move the window an exaggerated amount around the environment, but if you start walking while dragging, the amplified gesture sends the entity flying off into the distance. It seems Apple quickly transitions from one coordinate system to another depending on whether the user is physically moving. If you drag a window and start walking, the movement suddenly matches your speed; when you stop moving, you can push and pull windows around again like a superhero. Am I missing something obvious about how to copy this behavior? Hello World, which uses the 3D gesture, has the same problem: you can move the world around, but if you walk with it, it flies off. Are they tracking head movement, and if it has moved more than a certain amount, using that offset instead? Is there anything out of the box that can do this before I try to hack my own solution?
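One way to prototype the guess above (this is speculation about Apple's behavior, not a documented technique): query the head pose with ARKit's WorldTrackingProvider during the drag, and once the head has moved past a small threshold, fold the head displacement into the offset instead of relying on the amplified gesture alone. A rough sketch, with the threshold chosen arbitrarily:

import ARKit
import QuartzCore
import RealityKit

let session = ARKitSession()
let worldTracking = WorldTrackingProvider()

// Run once, e.g. when the immersive space opens.
func startTracking() async throws {
    try await session.run([worldTracking])
}

// The current head position in world space, if tracking is available.
func headPosition() -> SIMD3<Float>? {
    guard let anchor = worldTracking.queryDeviceAnchor(
        atTimestamp: CACurrentMediaTime()) else { return nil }
    let m = anchor.originFromAnchorTransform
    return SIMD3<Float>(m.columns.3.x, m.columns.3.y, m.columns.3.z)
}

// In DragGesture.onChanged: if the head has moved noticeably since the
// drag began, add the head displacement so the entity keeps pace with
// the walking user (the 0.25 m threshold is a guess).
func dragOffset(gesture: SIMD3<Float>,
                headStart: SIMD3<Float>,
                headNow: SIMD3<Float>) -> SIMD3<Float> {
    let headDelta = headNow - headStart
    return simd_length(headDelta) > 0.25 ? gesture + headDelta : gesture
}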
2 replies · 0 boosts · 330 views · May ’24
RealityKit Memory Leak
I'm building a visionOS app which loads a Reality Composer Pro scene containing a large number of models. The app includes several of these scenes and allows the user to switch between them. Because the scenes contain so many models, I want to unload the currently loaded scene before loading a different one. So far I have been unable to reclaim all of the used memory by removing the entities from the scene.

I've made a few small changes to the Mixed Immersive app template which demonstrate this behavior, included below (apparently I'm unable to upload a zip file with the entire project). Using just the two spheres included in the RealityKit content, the leaked memory is fairly small, but if you add a couple of larger models to the scene (I easily found free ones online), the memory leak becomes much more obvious.

When the immersive space is initially opened, I see roughly 44MB of used memory (as shown in the Xcode Debug navigator). Each time I tap the "Load Models" and then "Unload Models" buttons, the memory use decreases but does not get back down to the initial amount. Subsequent loads and unloads continue to increase the used memory (the amount of increase depends on the models you add to the scene). Note that I've seen similar memory increases when dynamically creating the entities; inside ViewModel.loadModels I've included some commented-out code that dynamically creates entities instead of loading a Reality Composer Pro scene.

Is there a way to fully reclaim the used memory? I've tried many different ways to clear the RealityKit entities but so far have been unsuccessful.

struct RKMemTestApp: App {
    private var viewModel = ViewModel()

    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(viewModel)
        }

        ImmersiveSpace(id: "ImmersiveSpace") {
            ImmersiveView()
                .environment(viewModel)
        }
    }
}

Add this above the body in ContentView:

@Environment(ViewModel.self) private var viewModel

The ContentView body should be:

VStack {
    Toggle("Show ImmersiveSpace", isOn: $showImmersiveSpace)
        .font(.title)
        .frame(width: 360)
        .padding(24)
        .glassBackgroundEffect()
    Button("Load Models") {
        viewModel.loadModels()
    }
    Button("Unload Models") {
        viewModel.unloadModels()
    }
}

ImmersiveView:

struct ImmersiveView: View {
    @Environment(ViewModel.self) private var viewModel

    var body: some View {
        RealityView { content in
            if let rootEntity = viewModel.rootEntity {
                content.add(rootEntity)
            }
        } update: { content in
            if viewModel.rootEntity == nil && !content.entities.isEmpty {
                content.entities.removeAll()
            } else if let rootEntity = viewModel.rootEntity, content.entities.isEmpty {
                content.add(rootEntity)
            }
        }
    }
}

ViewModel:

import Foundation
import Observation
import RealityKit
import RealityKitContent

@Observable
class ViewModel {
    var rootEntity: Entity?

    init() {}

    func loadModels() {
        Task {
            if let scene = try? await Entity(named: "Immersive", in: realityKitContentBundle) {
                Task { @MainActor in
                    if rootEntity == nil {
                        rootEntity = Entity()
                    }
                    rootEntity!.addChild(scene)
                }
            }
        }

        /*if rootEntity == nil {
            rootEntity = Entity()
        }
        for _ in 0..<1000 {
            let mesh = MeshResource.generateSphere(radius: 0.1)
            let material = SimpleMaterial(color: .blue, roughness: 0, isMetallic: true)
            let entity = ModelEntity(mesh: mesh, materials: [material])
            entity.position = [Float.random(in: 0.0..<1.0), Float.random(in: 0.5..<1.5), -Float.random(in: 1.5..<2.5)]
            rootEntity!.addChild(entity)
        }*/
    }

    func unloadModels() {
        rootEntity?.children.removeAll()
        rootEntity?.removeFromParent()
        rootEntity = nil
    }
}
0 replies · 1 boost · 291 views · May ’24
Loading a .scnz file in Xcode / Displaying it in a view using Swift
Hello! I need to display a .scnz 3D model in an iOS app. I tried converting the file to a .scn file so I could use it with SCNScene, but the file became corrupted. I also tried to instantiate an SCNScene with the .scnz file directly, but that didn't work either (it crashed on instantiation). Given all this, what would be the best way to use this file, knowing that converting or exporting it to a .scn file with scntool hasn't worked? Thank you!
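For what it's worth, a minimal loading sketch with consistency checking enabled, which can help distinguish a damaged archive from an unsupported format (this assumes SceneKit can read the scntool-compressed scene, which is worth verifying):

import SceneKit

// Try to load the compressed scene directly. A thrown consistency
// error suggests the file is damaged rather than merely unsupported.
func loadScene(at url: URL) -> SCNScene? {
    do {
        return try SCNScene(url: url, options: [.checkConsistency: true])
    } catch {
        print("Scene failed to load: \(error)")
        return nil
    }
}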
0 replies · 0 boosts · 335 views · May ’24
Game Porting Toolkit Installer Won't Start
I am installing Homebrew and GPTk according to this video: https://www.youtube.com/watch?v=WdQIc69e5oA

I have Homebrew installed. When I type

which brew

it responds with /opt/homebrew/bin/brew. When I enter the command

brew -v install apple/apple/game-porting-toolkit

it replies with Homebrew 4.2.21. Does anyone know how to get the Game Porting Toolkit to install? I have tried downloading both of the options (kit v1 and v1.1) from the website, but it doesn't make a difference.
1 reply · 0 boosts · 457 views · May ’24
visionOS: Simultaneous Drag & Rotate gestures
I have been trying to replicate the entity transform functionality present in the magnificent app Museum That Never Was (https://apps.apple.com/us/app/the-museum-that-never-was/id6477230794) -- it allows you to simultaneously rotate, magnify, and translate an entity, using gestures with both hands (as opposed to the normal DragGesture(), which is one-handed). I am able to rotate and magnify simultaneously, but translating via drag does not activate while doing two-handed gestures. Any ideas? My setup is something like this:

Gestures:

var drag: some Gesture {
    DragGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureTranslation = value.convert(value.translation3D, from: .local, to: .scene)
        }
        .onEnded { value in
            itemTranslation += gestureTranslation
            gestureTranslation = .init()
        }
}

var rotate: some Gesture {
    RotateGesture3D()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureRotation = simd_quatf(value.rotation.quaternion).inverse
        }
        .onEnded { value in
            itemRotation = gestureRotation * itemRotation
            gestureRotation = .identity
        }
}

var magnify: some Gesture {
    MagnifyGesture()
        .targetedToEntity(where: QueryPredicate<Entity>.has(MyComponent.self))
        .onChanged { value in
            gestureScale = Float(value.magnification)
        }
        .onEnded { value in
            itemScale *= gestureScale
            gestureScale = 1.0
        }
}

RealityView modifiers:

.simultaneousGesture(drag)
.simultaneousGesture(rotate)
.simultaneousGesture(magnify)

RealityView update block:

entity.position = itemTranslation + gestureTranslation + exhibitDefaultPosition
entity.orientation = gestureRotation * itemRotation
entity.scaleAll(itemScale * gestureScale)
1 reply · 1 boost · 357 views · May ’24
Is there a way with Xcode 15.2 or 15.3 to use RealityKit without visionOS?
I start a project for iPad/iPhone, choose SwiftUI - RealityKit, and I can't get the build to compile. I do nothing but create a project and hit Run, so I am wondering if it's even possible to run RealityKit on just an iPad anymore. I then tried to use Reality Composer to import a basic cylinder shape into my project, and that wouldn't run either. So I am wondering how to get a 3D model into my iPad app so that the user can interact with it. Thanks for any help.
0 replies · 0 boosts · 289 views · May ’24
GameKit leaderboard image fails to load
I've added a recurring leaderboard in App Store Connect. I can get its localized title, submit scores to it, and load players from it, but I get an error whenever I try to load its image with GKLeaderboard's instance method loadImage(). The error's description is: "The requested operation could not be completed due to an error communicating with the server." Both the completion-block and async versions of this method yield the same result. I made sure the image is a PNG at 1024x1024, 72 DPI, in the RGB color space, and no errors appear after I upload it. Are there any hidden requirements that might be causing this error? Perhaps there is a waiting period before the server can provide the image?
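For reference, the call pattern in question, in its async form (the leaderboard identifier is hypothetical):

import GameKit

func fetchLeaderboardImage() async {
    do {
        // Load the leaderboard by its App Store Connect identifier.
        let boards = try await GKLeaderboard.loadLeaderboards(
            IDs: ["com.example.game.recurring"])
        guard let board = boards.first else { return }

        // This is the call that fails with the server error above.
        let image = try await board.loadImage()
        print("Loaded leaderboard image: \(image.size)")
    } catch {
        print("loadImage failed: \(error)")
    }
}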
1 reply · 0 boosts · 310 views · May ’24
HoverEffectComponent on one child highlights all siblings in an Entity?
I am trying to verify my understanding of adding a HoverEffectComponent to entities inside a scene in a RealityView. In Reality Composer Pro, I have added the required Input Target and Collision components to one entity inside a node with multiple siblings, leaving all options at their defaults; they appear to create appropriately sized bounding boxes for these objects. In my RealityView I programmatically add the HoverEffectComponents to the entities, since I don't see them in RCP. On device, this appears to "work" in the sense that when I gaze at the entity, it lights up -- but so does every other entity in the scene, even those without Input Target and Collision components attached. Because the documentation on these components is sparse, I am unsure whether this is behavior as designed (e.g., all entities in that node are activated), a bug, or something in between. Has anyone encountered this, and is there an appropriate way of setting these relationships up? Thanks
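For comparison, a minimal per-entity setup that should confine the highlight to a single entity, assuming the generated collision shape matches the model (the function name is just illustrative):

import RealityKit

// Give one entity -- and not its siblings -- everything hover needs:
// a collision shape, an input target, and the hover effect itself.
func makeHoverable(_ entity: ModelEntity) {
    entity.generateCollisionShapes(recursive: false)
    entity.components.set(InputTargetComponent())
    entity.components.set(HoverEffectComponent())
}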
2 replies · 1 boost · 295 views · May ’24