In an SKScene, I add an SCNSphere as a child of an SKShapeNode, as shown below.
When the sphere hits another node (the fence in the example), it is deformed as if it were elastic.
I haven't found any information about elastic properties.
Does anyone know a way to avoid the deformation?
import SwiftUI
import SpriteKit
import SceneKit

@main struct MyApp: App {
    var body: some Scene {
        WindowGroup { SpriteView(scene: GameSceneSK(size: UIScreen.main.bounds.size)) }
    }
}

class GameSceneSK: SKScene {
    override func sceneDidLoad() {
        // The fence: an edge chain for the ball to collide with.
        var fencePoints = [
            CGPoint(x: 300, y: 0), CGPoint(x: 300, y: 400), CGPoint(x: 0, y: 400)
        ]
        let fence = SKShapeNode(points: &fencePoints, count: fencePoints.count)
        fence.physicsBody = SKPhysicsBody(edgeChainFrom: fence.path!)
        addChild(fence)

        // The ball: a 3D sphere rendered by an SK3DNode, attached to a
        // 2D shape node that owns the physics body.
        let sphereGeometry = SCNSphere(radius: 20)
        let sphereNode = SCNNode(geometry: sphereGeometry)
        let sphereScnScene = SCNScene()
        sphereScnScene.rootNode.addChildNode(sphereNode)

        let ball3D = SK3DNode(viewportSize: CGSize(width: 40, height: 40))
        ball3D.scnScene = sphereScnScene

        let ball = SKShapeNode(circleOfRadius: 20)
        ball.physicsBody = SKPhysicsBody(circleOfRadius: 20)
        ball.addChild(ball3D)

        physicsWorld.gravity = CGVector(dx: 0.2, dy: 0.2)
        addChild(ball)
    }
}
Breaking Through PolySpatial's ~8k Object Limit – Seeking Alternative Approaches for Large-Scale Digital Twins
Confirmed: PolySpatial Doubles the MeshFilter Count – Hard Limit at ~8k Active Objects (15.9k Total)
Project Context & Research Goals
I’m developing an industrial digital twin application for Apple Vision Pro using Unity’s PolySpatial framework (RealityKit rendering in Unbounded_Volume mode). The scene contains complex factory environments with:
Production line equipment (many fragmented grid objects that need to be merged)
Dynamic product racks (state-switchable assets)
Animated worker avatars
To optimize performance, I’m systematically testing visionOS’s rendering capacity limits. Through controlled stress tests, I’ve identified a critical threshold:
Key Finding
When the total MeshFilter count reaches 15,970 (system baseline + 7,985 user-created objects × 2 due to PolySpatial cloning), the application crashes consistently. This suggests:
PolySpatial’s mirroring mechanism effectively doubles GameObject overhead
An apparent hard limit exists around ~8k active mesh objects in practice
Objectives for This Discussion
Verify if others have encountered similar limits with PolySpatial/RealityKit
Understand whether this is a:
Memory constraint (per-app allocation)
Render pipeline limit (Metal draw calls)
Unity-specific PolySpatial behavior
Explore optimization strategies beyond brute-force object reduction
Why This Matters
Industrial metaverse applications require rendering thousands of interactive objects. Confirming these limits will help our team:
Design safer content guidelines
Prioritize GPU instancing/LOD investments
Potentially contribute back to PolySpatial’s optimization
I’d appreciate insights from engineers who’ve:
Pushed similar large-scale scenes in visionOS
Worked around PolySpatial’s cloning overhead
Discovered alternative capacity limits (vertices/draw calls)
I'm new to the Mac area, but definitely not to UE. Packaging on Windows is a long process, but it can be done. Documentation from Epic and from the internet about exactly how to package a project within UE is basically nonexistent. I have Xcode installed, which makes sense; I agreed to the terms and installed the macOS components. I've been able to work on a project for several weeks now and want to package it for a test run for my friends to play on Windows. Now I just get this in the log:
UATHelper: Packaging (Mac): ERROR: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Trace written to file /Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.uba with size 12.6kb
UATHelper: Packaging (Mac): Total time in Unreal Build Accelerator local executor: 8.12 seconds
UATHelper: Packaging (Mac): Result: Failed (OtherCompilationError)
UATHelper: Packaging (Mac): Total execution time: 9.71 seconds
PackagingResults: Error: Failed to finalize the .app with Xcode. Check the log for more information
UATHelper: Packaging (Mac): Took 9.77s to run dotnet, ExitCode=6
UATHelper: Packaging (Mac): UnrealBuildTool failed. See log for more details. (/Users/rileysleger/Library/Logs/Unreal Engine/LocalBuildLogs/UBA-ProjectNightTerror-Mac-Development.txt)
UATHelper: Packaging (Mac): AutomationTool executed for 0h 0m 10s
UATHelper: Packaging (Mac): AutomationTool exiting with ExitCode=6 (6)
UATHelper: Packaging (Mac): RunUAT ERROR: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: AutomationTool was unable to run successfully. Exited with code: 6
PackagingResults: Error: Unknown Error
This makes absolutely no sense to me. Anyone have ideas?
I have used Macs with M1 and M4 chips, developing OpenGL projects on machines running macOS 15.2 and 13.6, calling the Mac's OpenGL library functions.
If I call glTexImage2D with any of the three formats GL_LUMINANCE, GL_LUMINANCE_ALPHA, or GL_ALPHA, I get GL error 500 (presumably 0x0500, GL_INVALID_ENUM).
This makes me unable to draw normally on the Mac.
What's the reason for this? Are these formats not supported?
Hello,
Thank you for attending today’s Metal & game technologies group lab at WWDC25!
We were delighted to answer many questions from developers and energized by the community engagement.
We hope you enjoyed it and welcome your feedback.
We invite you to carry on the conversation here, particularly if your question appeared in Slido and we were unable to answer it during the lab.
If your question received feedback, let us know if you need clarification.
You may want to ask your question again in a different lab, e.g. in tomorrow's visionOS lab.
(We realize that this can be confusing when frameworks interoperate)
We have a lot to learn from each other so let’s get to Q&A and make the best of WWDC25! 😃
Looking forward to your questions posted in new threads.
After following the instructions here:
https://developer.apple.com/metal/cpp/
I attempted to build my project, and Xcode presented several errors. In essence, it's complaining about some redeclarations in the metal-cpp headers.
NSBundle.hpp and NSError.hpp are included in the metal-cpp/foundation directory from the metal-cpp download.
Any help in getting these issues resolved is appreciated.
Thanks!
I am interested in learning the Metal framework for rendering development. However, most of Apple’s official documentation uses Objective-C code. Therefore, I am seeking guidance on whether it is more advantageous for me to focus solely on learning Swift to gain proficiency in Metal.
I've just started working on my first SpriteKit game that will eventually run on both tvOS and iOS and am looking at how to build a "button". So far, I've got a custom node that looks like:
class MyButton: SKSpriteNode {
...
#if os(tvOS)
override var canBecomeFocused: Bool {
true
}
override func didUpdateFocus(...) {
...
}
#endif
}
The above let me nicely handle focus changes in tvOS and now I'm looking at reacting to selecting the button.
Searching around, all the articles/questions/posts are from 2015-2016 - which is a LOOOONG time ago. Most of the guidance appears to be to add a tap gesture recognizer in the owning scene and getting the scene to hand it off to the button. That seems pretty brittle and I'd much prefer if the button itself is responsible for its own tap management.
So, I guess my question is whether I should just add a gesture recognizer to my custom button class? Is this inefficient if I end up having 7-8 buttons on the screen and each one has its own gesture recognizer?
Somewhat related: all of the 10-year-old advice says that if we add recognizers to scenes, they need to be removed from the view controller... however, in the modern-day world with SwiftUI, my project doesn't even have a view controller (yet, anyway)... what gesture recognizer lifecycle management do I need in a SpriteKit scene that is presented within a SwiftUI SpriteView?
Or, is there a better way? I was kind of hoping that overriding pressesBegan() (or something similar) in my custom button might get triggered on tvOS (like touchesBegan() lets me manage touches in the iOS variant of my app).
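Something like the following is what I have in mind: the button owns its own input handling (onTap is a placeholder, and whether the tvOS branch ever fires is exactly my question):

import SpriteKit

class TappableButton: SKSpriteNode {
    var onTap: (() -> Void)?   // placeholder callback

    override init(texture: SKTexture?, color: UIColor, size: CGSize) {
        super.init(texture: texture, color: color, size: size)
        isUserInteractionEnabled = true   // required for a node to receive input
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    #if os(iOS)
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        onTap?()
    }
    #endif

    #if os(tvOS)
    override func pressesBegan(_ presses: Set<UIPress>, with event: UIPressesEvent?) {
        // Assumption: select presses reach the focused node; if they don't,
        // they'd have to be caught at the scene level and forwarded.
        if presses.contains(where: { $0.type == .select }) {
            onTap?()
        } else {
            super.pressesBegan(presses, with: event)
        }
    }
    #endif
}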
Any pointers or suggestions would be gladly received. Thanks.
The sample code here has code like:
// Create a display link capable of being used with all active displays
cvReturn = CVDisplayLinkCreateWithActiveCGDisplays(&_displayLink);
But that function's doc says it's deprecated and to use NSView/NSWindow/NSScreen displayLink instead. That returns CADisplayLink, not CVDisplayLink.
Also, the documentation for that displayLink method is completely empty. I'm not sure if I'm supposed to add it to a run loop, or what, after I get it.
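My best guess at the replacement, untested (the run-loop scheduling is the part I'd like confirmed):

import AppKit
import QuartzCore

class RenderView: NSView {
    private var displayLink: CADisplayLink?

    override func viewDidMoveToWindow() {
        super.viewDidMoveToWindow()
        displayLink?.invalidate()
        guard window != nil else { return }
        // macOS 14+: the view vends a CADisplayLink tied to its display.
        let link = self.displayLink(target: self, selector: #selector(step(_:)))
        // Assumption: like its iOS counterpart, it fires only once it has
        // been scheduled on a run loop.
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        needsDisplay = true   // kick off drawing for this frame
    }
}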
It would be nice to get an updated version of this sample project and/or have some documentation in NSView.displayLink
Imagine a native macOS app that acts as a "launcher" for a Java game.** For example, the "launcher" app might use the Swift Process API or a similar method to run the java command-line tool (let's assume the user has installed Java themselves) to run the game.
I have seen How to Enable Game Mode. If the native launcher app's Info.plist has the following keys set:
LSApplicationCategoryType set to public.app-category.games
LSSupportsGameMode set to true (for macOS 26+)
GCSupportsGameMode set to true
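In plist source form, that's just these three keys (values as listed above):

<key>LSApplicationCategoryType</key>
<string>public.app-category.games</string>
<key>LSSupportsGameMode</key>
<true/>
<key>GCSupportsGameMode</key>
<true/>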
The launcher itself can cause Game Mode to activate if the launcher is fullscreened. However, if the launcher opens a Java process that opens a window, and that Java window is then fullscreened, Game Mode doesn't seem to activate. In this case activating Game Mode for the launcher itself is unnecessary, but you'd expect Game Mode to activate when the actual game in the Java window is fullscreened.
Is there a way to get Game Mode to activate in the latter case?
** The concrete case I'm thinking of is a third-party Minecraft Java Edition launcher, but the issue can also be demonstrated in a sample project (FB13786152). It seems like the official Minecraft launcher is able to do this, though it's not clear how. (Is its bundle identifier hardcoded in the OS to allow for this? Changing a sample app's bundle identifier to be the same as the official Minecraft launcher gets the behavior I want, but obviously this is not a practical solution.)
I'm new here, so I don't know which topic this function belongs to... Sorry about that!
I watched the WWDC stream and I am really interested in this function; I'm wondering if it could be used in my apps.
I looked up the documentation, but I found it only supports visionOS (I'm not sure about that, but I saw the demo was based on visionOS).
I'm writing a Swift app that uses Metal to render textures to the main view. I currently use an NSViewRepresentable to place an MTKView into the window and an MTKViewDelegate to perform the Metal operations. It's running well and I see my Metal view being updated.
However, when I close the window (either through the user clicking the close button or programmatically using the appropriate @Environment(\.dismissWindow) private var dismissWindow) and then reopen the window, I no longer receive calls to MTKViewDelegate's draw(in mtkView: MTKView). If I manually call the MTKView::draw() function, my view updates its content as expected, so it seems to still be correctly set up / alive.
As best as I can tell the CVDisplayLink created by MTKView is no longer active (or at least that's my understanding of how the MTKView::draw() function is called).
I've setup the MTKView like this
let mtkView = MTKView()
mtkView.delegate = context.coordinator // My custom delegate
mtkView.device = context.coordinator.device // The default metal device
mtkView.preferredFramesPerSecond = 60
mtkView.enableSetNeedsDisplay = false
mtkView.isPaused = false
which I was hoping would call the draw function at 60fps while the view is visible.
I've also verified the values don't change while running.
Does anyone have any ideas on how I could restart the CVDisplayLink, or any other methods to avoid this problem?
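In the meantime, the crude workaround I'm considering, based on the manual draw() observation above (frameTimer would be a new property on my coordinator; this just bypasses the stalled internal loop rather than fixing it):

mtkView.isPaused = true               // stop relying on MTKView's internal loop
mtkView.enableSetNeedsDisplay = false
context.coordinator.frameTimer = Timer.scheduledTimer(withTimeInterval: 1.0 / 60.0,
                                                      repeats: true) { [weak mtkView] _ in
    mtkView?.draw()                   // drive drawing manually, ~60 fps
}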
Cheers
Jack
Hello,
I'm getting this error when launching a SpriteKit Swift game in iOS 18+ on an iPhone 11 Pro, whose shell is partly damaged in the back:
CHHapticEngine.mm:1206 -[CHHapticEngine doStartWithCompletionHandler:]_block_invoke: ERROR: Player start failed: The operation couldn’t be completed. (com.apple.CoreHaptics error 1852797029.)
Haptics do not work on this device due to the damaged shell, so some error when calling start(completionHandler:) is definitely expected. What is not expected is that the main thread sometimes blocks for up to 5 seconds, even though the method is not called from the main thread; the error itself is always displayed from some other secondary (system) thread. During this time, the main thread does not access the haptics engine at all. On average, it blocks once every four or five launches. In each launch (blocking or not), the 'nope' error is displayed ~5 seconds after trying to start the engine.
After going nuts with all kinds of breakpoints and instrumentation, I'm at a loss as to why the main thread would sometimes block...
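For reference, the startup path is essentially this (simplified sketch; the engine is created once and started off the main thread):

import CoreHaptics

func startHaptics() throws {
    let engine = try CHHapticEngine()
    // Started from a background queue; the main thread never touches the engine.
    DispatchQueue.global(qos: .userInitiated).async {
        engine.start { error in
            // On this device, the 'nope' error always arrives ~5 seconds later.
            if let error { print("Haptics start failed: \(error)") }
        }
    }
}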
Ideas, anyone?
Thank you,
D.
I notice some metal-cpp classes have static functions like
static URL* fileURLWithPath(const class String* pPath);
static class ComputePassDescriptor* computePassDescriptor();
static class AccelerationStructurePassDescriptor* accelerationStructurePassDescriptor();
which return a new object.
These classes also provide 'alloc' and 'init' functions to create objects in the usual way.
For objects created by 'alloc' and 'init', I use something like NS::SharedPtr, or call release directly, to free the memory. But 'alloc' and 'init' are not explicitly called when using these static functions.
I wonder how to correctly free objects created by these static functions. Are they managed by an autorelease pool?
I have a 3D model with morphing animation that works correctly in Blender.
I exported this model as a USDZ file and tried to display it in an Xcode-developed visionOS app, but the morphing animation does not play.
What I Have Tried:
Morphing animation works correctly in Blender.
After exporting to USDZ, the morphing animation does not play in the Xcode app.
Linear motion animations (such as object movement) work fine.
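(The playback code itself is minimal; roughly the following, where "model" stands in for my USDZ file. This plays the linear animations fine, but the morphs stay frozen.)

import RealityKit

let entity = try Entity.load(named: "model") // placeholder for my USDZ asset
for animation in entity.availableAnimations {
    entity.playAnimation(animation.repeat())
}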
Behavior in Reality Converter:
GLB files do not display.
USDZ files load, but morphing animations do not play.
What I Want to Know:
Is there a way to play morphing animations in an Xcode-developed app?
Does RealityKit support morphing animations?
If RealityKit does not support morphing animations, what alternative methods can be used to play them?
I am looking for a way to use the existing animations without recreating them.
Additional Information:
I have both the Blender file (where animations work) and the USDZ file (where animations do not play).
I am developing a visionOS app using Xcode.
Any advice or solutions would be greatly appreciated.
Thank you in advance!
Hi, is there any way to use the front camera to do motion capture?
I want to recognize whether the user has raised their hands, using the front camera on an iPhone.
I was able to do it with the back camera, but not the front.
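For context, the direction I've been trying uses Vision's body-pose request rather than ARKit body tracking (which I believe is back-camera only); the joint comparison and the mirrored orientation here are my assumptions:

import AVFoundation
import Vision

// The only capture-side change for the front camera should be the position:
let frontCamera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                          for: .video,
                                          position: .front)

// Checks one camera frame for both wrists above their shoulders. Vision's
// normalized coordinates have the origin at the lower left, so a larger y
// means higher in the frame.
func handsRaised(in pixelBuffer: CVPixelBuffer) -> Bool {
    let request = VNDetectHumanBodyPoseRequest()
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                        orientation: .leftMirrored, // front-camera guess
                                        options: [:])
    try? handler.perform([request])
    guard let body = request.results?.first,
          let points = try? body.recognizedPoints(.all),
          let lw = points[.leftWrist], let rw = points[.rightWrist],
          let ls = points[.leftShoulder], let rs = points[.rightShoulder]
    else { return false }
    return lw.location.y > ls.location.y && rw.location.y > rs.location.y
}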
Also, if there is any sample code or documentation, I would be super happy.
Waiting for your reply!!
I searched the Metal Shading Language Specification Version 3.0 document, however I cannot see any function for inverting a matrix. Is there really no function in Metal for inverting a matrix?
I often need to do this when solving linear equations and have so far resorted to writing the necessary function each time, most of the time just copying and pasting code.
inverse exists in SIMD and GLSL, so why not in Metal? It seems so unexpected that this function does not exist that I am almost certain I have just overlooked something obvious. I even tried 1 / M, to no avail.
I am making a framework in C++ using metal-cpp, basically a small game engine. I am also consequently using metal-cpp-extensions provided in LearnMetalCPP to make applications work.
For one of my classes, I needed to include AppKit.hpp inside a public header file, so I moved it and its associated headers (NSApplication.hpp, NSMenu.hpp, etc.) from Project headers to Public in Build Phases' Headers. However, it started giving me the error "cast of C pointer type 'void *' to Objective-C pointer type 'Class' requires a bridged cast" at several points in the AppKit headers. The errors don't appear when AppKit and its associated headers are in Project headers, or when they are in Private headers and no headers import them.
I imagined that disabling Objective-C ARC and using __bridge casts outside of ARC in Build Settings would solve it, but it didn't budge.
I imagined the answer wouldn't involve actively changing the headers, but even when I tried to put __bridge before the problematic casts, it didn't recognize __bridge.
How do I solve this? And why is it only happening in Public and not Project headers?
Does anyone have a working example of how to play OGG files with Swift?
I've been trying for over a year now. I was able to wrap the C Vorbis library in Swift, and then used it to parse an OGG file successfully. Then I was required to use Obj-C++ to fill the PCM, because this method seems to only be available in C++; that part hangs my app for a good 40 seconds to several minutes depending on the audio file, then it plays for about 2 seconds and crashes.
I can't get the examples on the Vorbis site to work in Objective-C, and I tried every example on GitHub I could find (most of which are for iOS; I want to play the files on Mac).
I also tried using Cricket Audio framework below.
https://github.com/sjmerel/ck
It has a Swift example, and it can play their proprietary soundbank format, but it is also supposed to play OGG and it just doesn't do anything when trying to play OGG, as you can see in the posted issue:
https://github.com/sjmerel/ck/issues/3
Right now I believe every player that can play OGGs on Mac is written in Objective-C or C++.
Anyway, any help/advice is appreciated. The OGG format is very prevalent in the gaming community. I could use Unity, which I believe plays OGGs through the Mono framework, but I really, really want to stay in Swift.
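For completeness, the pure-Swift playback half I'm aiming for, assuming the Vorbis wrapper can hand over decoded Float32 samples (a sketch, mono only; names are placeholders):

import AVFoundation

let engine = AVAudioEngine()
let player = AVAudioPlayerNode()

// Plays one buffer of decoded mono Float32 PCM, e.g. from the Vorbis wrapper.
func play(pcm: [Float], sampleRate: Double) throws {
    let format = AVAudioFormat(standardFormatWithSampleRate: sampleRate, channels: 1)!
    let buffer = AVAudioPCMBuffer(pcmFormat: format,
                                  frameCapacity: AVAudioFrameCount(pcm.count))!
    buffer.frameLength = AVAudioFrameCount(pcm.count)
    buffer.floatChannelData![0].update(from: pcm, count: pcm.count)

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: format)
    try engine.start()
    player.scheduleBuffer(buffer, completionHandler: nil)
    player.play()
}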
Hello there,
Currently, I'm attempting to create an interactive learning application with a 3D view. I've discovered the SceneKit framework, but I lack the necessary knowledge to load, animate, and move objects. Could someone kindly suggest some good articles or tutorials on this topic?
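For scale, this is the kind of thing I'm trying to learn to write, pieced together from the docs ("model.scn" is a placeholder asset, and I'm not sure it's idiomatic):

import SceneKit

let sceneView = SCNView(frame: CGRect(x: 0, y: 0, width: 400, height: 400))
sceneView.scene = SCNScene()

// Loading: pull the first node out of a bundled .scn (or .usdz) file.
if let modelScene = SCNScene(named: "model.scn"),
   let model = modelScene.rootNode.childNodes.first {
    sceneView.scene?.rootNode.addChildNode(model)

    // Moving/animating: SCNAction mirrors SpriteKit's action system.
    let move = SCNAction.moveBy(x: 0, y: 2, z: 0, duration: 1)
    let spin = SCNAction.rotateBy(x: 0, y: .pi * 2, z: 0, duration: 2)
    model.runAction(SCNAction.sequence([move, spin]))
}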