RealityKit Subdivide

In the "Discover RealityKit APIs for iOS, macOS, and visionOS" presentation, there was a slide at the end highlighting new features not covered in the video. One of them was surface subdivision, but I have not been able to find any documentation or APIs that support this feature. Does anyone have further details on how this works in RealityKit?

I'm wondering the same thing. It would be great if generateSphere made use of surface subdivision, dynamically adjusting detail depending on the distance to the camera... but there is nothing in the documentation that explains what "subdivision surface" actually means for RealityKit 4.
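
In the meantime, the closest thing I've found is building the tessellation myself with MeshDescriptor and MeshResource.generate(from:), which are documented APIs. Below is a rough sketch of a UV sphere with a caller-chosen segment count; to be clear, makeSphereMesh and its segments parameter are names I made up for illustration, and this is not the subdivision-surface feature from the slide, just a manual stand-in that lets you pick the detail level yourself.

```swift
import Foundation
import RealityKit
import simd

/// Rough stand-in until the subdivision-surface API is documented:
/// a UV sphere built by hand so the tessellation level is under our control.
/// `makeSphereMesh` and `segments` are made-up names, not RealityKit API.
func makeSphereMesh(radius: Float, segments: Int) throws -> MeshResource {
    var positions: [SIMD3<Float>] = []
    var normals: [SIMD3<Float>] = []
    var indices: [UInt32] = []

    // Latitude/longitude grid of (segments + 1) x (segments + 1) vertices.
    for lat in 0...segments {
        let theta = Float(lat) / Float(segments) * .pi          // 0 ... pi (pole to pole)
        for lon in 0...segments {
            let phi = Float(lon) / Float(segments) * 2 * .pi    // 0 ... 2pi around the y axis
            let normal = SIMD3<Float>(sin(theta) * cos(phi),
                                      cos(theta),
                                      sin(theta) * sin(phi))
            normals.append(normal)
            positions.append(normal * radius)
        }
    }

    // Two counter-clockwise (outward-facing) triangles per grid cell.
    let rowStride = UInt32(segments + 1)
    for lat in 0..<segments {
        for lon in 0..<segments {
            let a = UInt32(lat) * rowStride + UInt32(lon)   // top-left corner of the cell
            let b = a + rowStride                           // bottom-left corner
            indices += [a, a + 1, b + 1,
                        a, b + 1, b]
        }
    }

    var descriptor = MeshDescriptor(name: "sphere-\(segments)")
    descriptor.positions = MeshBuffers.Positions(positions)
    descriptor.normals = MeshBuffers.Normals(normals)
    descriptor.primitives = .triangles(indices)
    return try MeshResource.generate(from: [descriptor])
}

// Example: pick a segment count from the distance to the viewer
// (a crude manual LOD, not something RealityKit does for you).
func sphereEntity(radius: Float, distanceToCamera: Float) throws -> ModelEntity {
    let segments = distanceToCamera < 1.0 ? 64 : 16
    let mesh = try makeSphereMesh(radius: radius, segments: segments)
    return ModelEntity(mesh: mesh, materials: [SimpleMaterial()])
}
```

Obviously this regenerates the whole mesh whenever the detail level changes, which is exactly the kind of thing a real subdivision-surface API would presumably handle for us.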
