3D Procedural generation at runtime

Hey guys,

Seems like a simple question, but I was not able to find a clear answer. I am building a game-like app where all 3D geometry is created and modified at run time.

Which framework should I use with SwiftUI: SceneKit or RealityKit?

Thanks

Replies

I fully expect any answer you get from Apple would be "they're both fully supported frameworks", and so far it boils down to how you want to use the content. For quite a while, only SceneKit had APIs for generating geometry meshes procedurally, but two years ago RealityKit quietly added an API (although it's not really documented), so you can do the same there.

RealityKit comes with a super-easy path to making 3D content overlaying the current world (at least through the lens of an iPhone or iPad currently), but if you're just trying to display 3D content on macOS, it's quite a bit crankier to deal with (although it's possible). RealityKit also comes with the presumption that you'll code interactions with any 3D content using an ECS pattern, which is built into its core. The best resource I've seen for learning how to procedurally assemble geometry with RealityKit is RealityGeometries (https://swiftpackageindex.com/maxxfrazer/RealityGeometries) - read through the code and you'll see how MeshDescriptors are used to assemble things.
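To give a flavor of that API, here's a minimal sketch of a single-triangle mesh built with MeshDescriptor (available since iOS 15 / macOS 12; the function name, colors, and coordinates are just placeholders):

```swift
import RealityKit

// Build a one-triangle mesh with RealityKit's MeshDescriptor API.
func makeTriangleEntity() throws -> ModelEntity {
    var descriptor = MeshDescriptor(name: "triangle")
    descriptor.positions = MeshBuffers.Positions([
        SIMD3<Float>(0, 0, 0),
        SIMD3<Float>(1, 0, 0),
        SIMD3<Float>(0, 1, 0)
    ])
    descriptor.normals = MeshBuffers.Normals([
        SIMD3<Float>(0, 0, 1),
        SIMD3<Float>(0, 0, 1),
        SIMD3<Float>(0, 0, 1)
    ])
    // Indices into the positions buffer, three per triangle.
    descriptor.primitives = .triangles([0, 1, 2])

    let mesh = try MeshResource.generate(from: [descriptor])
    return ModelEntity(mesh: mesh,
                       materials: [SimpleMaterial(color: .blue, isMetallic: false)])
}
```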

SceneKit is a slightly older API, but in some ways much easier to get into for procedurally generated (and displayed) geometry. There are also libraries you can leverage, such as Euclid (https://github.com/nicklockwood/Euclid), which has been a joy for my experiments and purposes. There's quite a bit more existing sample content out there for SceneKit, so while the API can be a bit quirky from Swift, it's quite solid.
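For comparison, the same triangle in plain SceneKit (no Euclid) looks roughly like this; the helper name is made up:

```swift
import SceneKit

// The same triangle in SceneKit: a geometry source holds the vertex data,
// and a geometry element describes how indices form primitives.
func makeTriangleNode() -> SCNNode {
    let vertices = [
        SCNVector3(0, 0, 0),
        SCNVector3(1, 0, 0),
        SCNVector3(0, 1, 0)
    ]
    let source = SCNGeometrySource(vertices: vertices)
    let indices: [Int32] = [0, 1, 2]
    let element = SCNGeometryElement(indices: indices, primitiveType: .triangles)
    return SCNNode(geometry: SCNGeometry(sources: [source], elements: [element]))
}
```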

Hi @heckj

Thank you for such a detailed explanation. It seems I will go with SceneKit and hope Apple will not kill it in a year or two. I'm building my app for iOS (iPad) and Mac, plus SceneKit has a lot of helpful things like a camera controller, mesh flattening, and so on… Does it use ECS as well, or only RealityKit? What do you mean by SceneKit being quirky with Swift?

Btw, here is what I am building: https://www.thebrief.space

SceneKit doesn't preclude using an ECS system, but Apple's version of that setup is only built into the RealityKit framework. Stock SceneKit doesn't provide anything equivalent, instead leaving it up to you to implement any relevant gameplay/simulation logic however you'd like.
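To make that concrete, here's a rough sketch of RealityKit's ECS hooks; SpinComponent and SpinSystem are invented names for illustration:

```swift
import RealityKit

// A made-up component marking entities that should spin.
struct SpinComponent: Component {
    var speed: Float = 1.0  // radians per second
}

// A system RealityKit calls every frame for entities matching the query.
class SpinSystem: System {
    static let query = EntityQuery(where: .has(SpinComponent.self))

    required init(scene: Scene) {}

    func update(context: SceneUpdateContext) {
        for entity in context.scene.performQuery(Self.query) {
            guard let spin = entity.components[SpinComponent.self] else { continue }
            entity.transform.rotation *= simd_quatf(
                angle: spin.speed * Float(context.deltaTime),
                axis: [0, 1, 0]
            )
        }
    }
}

// Register once at startup, e.g. in your App init:
// SpinComponent.registerComponent()
// SpinSystem.registerSystem()
```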

My "SceneKit is quirky with Swift" was mostly about the API and how it's exposed. There's zero issue with using it from Swift; the API is, however, far more C/Objective-C oriented - not at all surprising given when it was initially released. The RealityKit APIs (in comparison) feel to me like they fit a bit more smoothly into a Swift-based codebase.
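As one example of that heritage, SceneKit's lower-level geometry path has you pack raw bytes and describe the memory layout yourself. A sketch, assuming iOS, where SCNVector3 components are Float (on macOS they're CGFloat, which is exactly the kind of quirk to watch for):

```swift
import SceneKit

let vertices: [SCNVector3] = [
    SCNVector3(0, 0, 0),
    SCNVector3(1, 0, 0),
    SCNVector3(0, 1, 0)
]
// Copy the vertex array into raw bytes, then describe the layout by hand.
let data = Data(bytes: vertices,
                count: vertices.count * MemoryLayout<SCNVector3>.stride)
let source = SCNGeometrySource(
    data: data,
    semantic: .vertex,
    vectorCount: vertices.count,
    usesFloatComponents: true,
    componentsPerVector: 3,
    bytesPerComponent: MemoryLayout<Float>.size,
    dataOffset: 0,
    dataStride: MemoryLayout<SCNVector3>.stride
)
```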

  • Thank you @heckj, I think it is clear. Basically I will develop the core of the application in SceneKit, and for some specific AR features I can add RealityKit to the mix. And if performance becomes a problem, I can do low-level Metal stuff. Or would you suggest spending some time and starting with Metal instead?


I am new to this forum and not sure if a Graphics and Games Engineer can also provide some advice. What technology/framework is better suited for a real-time 3D procedural game-like application?

Thank you

I think it will depend on how much control/flexibility you need. Writing your own rendering engine in Metal will provide you with the most flexibility, but will likely be more work on your side. SceneKit and RealityKit will provide you with a lot of built-in features, but implementing something like custom procedural geometry could end up being more difficult. RealityKit introduced a custom mesh API in iOS 15, so you could take a look at using that, and I'm sure SceneKit has a similar API. Another option would be to use a game engine like Unity to develop your app. I think you'll likely need to do some prototyping to see which framework is best for your app. Since you mention wanting to include AR features, I'd probably look at RealityKit first (but I'm biased because I work on RealityKit 😅).

Thank you!

I am actually coming from Unity; I have built a prototype in it. The reason I am switching to native is to be able to use PencilKit and native UI, plus AR in the future.

It is hard to choose between SceneKit and RealityKit.

  • RealityKit has an ECS similar to Unity's, plus procedural meshes.

  • But SceneKit has better camera controls (orbit, zoom, etc.).

  • Plus it has mesh flattening for static meshes. Not sure RealityKit has something similar. (A quick sketch of both SceneKit features follows below.)
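For reference, both of those SceneKit conveniences are only a few lines each; a minimal sketch (node counts, sizes, and positions are arbitrary):

```swift
import SceneKit

// One flag on SCNView turns on the built-in orbit/pan/zoom camera controller.
let view = SCNView()
view.allowsCameraControl = true

// flattenedClone() merges a static node tree into a single node so the
// whole thing renders with far fewer draw calls.
let scene = SCNScene()
let container = SCNNode()
for i in 0..<100 {
    let box = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                       length: 0.1, chamferRadius: 0))
    box.position = SCNVector3(Double(i % 10) * 0.2, Double(i / 10) * 0.2, 0)
    container.addChildNode(box)
}
scene.rootNode.addChildNode(container.flattenedClone())
view.scene = scene
```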

Just wondering, how hard is it to mix features from different frameworks? Is it even good practice?

Or, if I continue with Unity, does Apple have good examples of using Unity as a library with native SwiftUI? Maybe something from WWDC?

Thank you 😊

Completely forgot to share what I am building. It might give some context. https://www.instagram.com/p/CPYbq7YgrJU/?igshid=YmMyMTA2M2Y=

  • I watched all the WWDC sessions regarding RealityKit and found an interesting one about dynamic mesh generation 🥳. It seems straightforward, similar to Unity. I just wonder how it works with more complex content: say I have 20k-50k dynamic meshes in the scene, what optimisation options does RealityKit provide? https://developer.apple.com/wwdc21/10075?time=1320

  • I don’t think rendering that many meshes is a standard use case for RealityKit, so I would expect there to be performance issues, especially since we don’t expose any instancing or LOD features right now.

  • Thank you. Last questions.

    Is there a timeline for an orthographic camera in RealityKit? When is it going to be available?

    Are there camera controls similar to SceneKit's (orbit, pan)? If not, what is the workaround, and when can we expect one?

    Is there a mesh flatten function to reduce the number of rendering calls? If not, what workarounds are planned?