Streaming is available in most browsers,
and in the Developer app.
-
Build a spatial drawing app with RealityKit
Harness the power of RealityKit through the process of building a spatial drawing app. As you create an eye-catching spatial experience that integrates RealityKit with ARKit and SwiftUI, you'll explore how resources work in RealityKit and how to use features like low-level mesh and texture APIs to achieve fast updates of the users' brush strokes.
Chapters
- 0:00 - Introduction
- 2:43 - Set up spatial tracking
- 5:42 - Build a spatial user interface
- 13:57 - Generate brush geometry
- 26:11 - Create a splash screen
Resources
Related Videos
WWDC24
- Compose interactive 3D content in Reality Composer Pro
- Create custom hover effects in visionOS
- Discover RealityKit APIs for iOS, macOS, and visionOS
- Enhance your spatial computing app with RealityKit audio
WWDC23
- Explore rendering for spatial computing
Download
Hi, I’m Adrian, and I’m an engineer on the RealityKit team. In this session, I’ll take you through the process of building a spatial drawing app for visionOS, using brand new features in RealityKit.
RealityKit is a framework which provides high-performance 3D simulation and rendering capabilities to iOS, macOS, and visionOS. And on visionOS, RealityKit is a foundation for the spatial capabilities of your app. Since the announcement of Apple Vision Pro, we have received a ton of great feedback from amazing developers like you, and we’ve been hard at work to address that feedback as we advance the capabilities of the platform. Today, I’m proud to announce some new APIs which push the limits of what’s possible to create with RealityKit. Let’s take a look. Our spatial drawing app integrates RealityKit’s powerful 3D capabilities with SwiftUI and ARKit to support a great user experience. We’ll have some fun building customized meshes, textures, and shaders to achieve a polished visual design. When you launch the app, you’re greeted with an eye-catching splash screen.
And after a quick set up process, you’re ready to create.
You can start drawing in a snap, by simply pinching your fingers together in the air.
You can change the look of your brush strokes using the palette view. The app supports solid brush types, which are like a tube, as well as a dazzling sparkle brush.
Brush strokes can be customized, for example, you can change the color or the thickness of the stroke.
There is a lot to cover, and I’m so excited to build this app together. So let’s dive in! First, we will set up spatial tracking, so that the app can understand your hand and environment data.
Next, we will build a UI, so that you have control over your brush and canvas. We’ll also use powerful new features to customize how the UI looks on hover.
We’ll dive deep into how meshes work in RealityKit, and discuss how our app can use new RealityKit APIs to efficiently generate brush geometry with Metal.
And we’ll put the finishing touch on the app by creating an engaging splash screen, with dynamic textures and spatial UI elements.
Our drawing app needs to understand your hand pose, as you pinch and move your hand to write. For this we’ll need to set up spatial tracking of hand anchors.
In visionOS, apps can place SwiftUI or RealityKit content in windows, volumes and spaces.
When your app uses an immersive space, it can receive spatial tracking information with anchors. This includes scene understanding information with world and plane anchors, as well as pose information with hand anchors.
In visionOS 1.0, you could access this data using ARKit.
With visionOS 2.0, we are introducing an easier way for you to use spatial tracking in your RealityKit apps. Let’s use this API to set up spatial hand tracking in our drawing app.
In RealityKit, you can use AnchorEntity to affix a RealityKit Entity to an AR anchor.
Our drawing app creates two AnchorEntities for each hand. One is anchored to the thumb tip, and one is anchored to the index finger tip.
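As a rough sketch (not the session’s exact code; the left hand works the same way), the two anchors for one hand might look like this:

import RealityKit

// One AnchorEntity per fingertip of the right hand. With an approved
// SpatialTrackingSession, each anchor's transform follows the tracked hand.
let thumbTipAnchor = AnchorEntity(.hand(.right, location: .thumbTip))
let indexTipAnchor = AnchorEntity(.hand(.right, location: .indexFingerTip))

// Add them to the scene, for example inside a RealityView's make closure:
// content.add(thumbTipAnchor)
// content.add(indexTipAnchor)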
In order to access spatial tracking data, our app needs permission from the user. In the drawing app, the relevant authorizations are requested when the user taps Start. It’s important to think about when to ask your users for permission. You should request authorization only when your app needs it, in this case, when the user is about to start drawing. To request authorization for tracking data in RealityKit, use a SpatialTrackingSession. This is a new API in visionOS 2.0.
Declare the tracking capabilities your app requires. In this case, the app needs hand data.
Next, call run on the SpatialTrackingSession. At this point, an alert is presented to authorize this tracking.
The run function returns a list of unapproved tracking capabilities. You can check this list to understand if permission has been granted.
If spatial tracking is approved, then you can access tracking data via the transform of your AnchorEntity.
If the permission is rejected, your AnchorEntity transforms will not update. However, the AnchorEntity will still update its pose visually.
So, to recap. When your app uses an ImmersiveSpace, it can anchor RealityKit content to the world.
You can use AnchorEntity to set up anchoring with your RealityKit content.
And new this year, you can use SpatialTrackingSession if your app needs to access AnchorEntity transforms.
Additionally, your AnchorEntities can interact with RealityKit’s physics system if you use a SpatialTrackingSession. Next, let’s talk about the user interface of our app. After you tap Start on the splash screen, you are brought into an immersive space that will include your drawing canvas. You can change the size of the canvas, or its location by dragging a sphere-shaped handle. When you are ready to begin drawing, the palette view appears.
This allows you to configure the shape and color of your brush.
When you are ready to draw, you can simply step inside the canvas to begin.
Let’s look deeper into how this interface can be built.
We will start with the canvas placement interface. This interface allows you to define the drawing area.
There are two elements which comprise the immersive space during canvas placement. On the floor, a 3D shape delineates the edge of the canvas, and a handle can be used to change the canvas location.
First, let’s consider the boundary mesh. This mesh is generated in real time, because the size of the boundary can be modified by dragging the slider.
The mesh is defined by two circles, as represented in this diagram on the left, an outer circle in green, and an inner circle in red.
We can define this shape as a SwiftUI path.
A circle is an arc spanning 360 degrees. So we create two of these arcs, each with a different radius.
Then, by normalizing the path with an even-odd fill mode, we have defined the shape that we want to create.
To generate our mesh in RealityKit, we can take advantage of a new API this year: MeshResource(extruding:). It’s a powerful API that helps you convert your 2D vector content into a 3D model.
All we need to do is specify the desired depth and resolution of our shape, and we’re done.
There is one important consideration to keep in mind. On visionOS, RealityKit uses a foveated renderer. Regions of the image in your peripheral vision are rendered at a lower resolution. This helps to optimize your app’s performance. If your scene contains thin geometry with high contrast, you might notice shimmering artifacts. In this example, the ring is too thin. So make sure to avoid thin geometric elements like you see here, especially in regions with high contrast.
To address this issue, we can increase the thickness of the geometry and remove the thin high-contrast edge. Notice that the shimmering artifacts are mitigated on the left.
To learn more about spatial content aliasing, I recommend the talk "Explore rendering for spatial computing", from WWDC23.
Next, let’s talk about the canvas handle. I’d like to call out one detail. When you gaze at the handle, there is a blue highlight effect.
On visionOS, the HoverEffectComponent adds a visual effect when you gaze at RealityKit content.
In visionOS 1.0, HoverEffectComponent uses a default spotlight effect.
This year, we are introducing two more types of hover effects to HoverEffectComponent. The highlight effect applies a uniform highlight color to an Entity.
And now, you can use HoverEffectComponent with ShaderGraph shaders. Shader-backed hover effects are incredibly flexible, so you can control exactly how your Entities look on hover.
The blue highlight on the handle is possible thanks to the highlight hover effect. To use the highlight effect, initialize your HoverEffectComponent with .highlight, and provide the highlight color. You can also change the strength value to increase the vibrancy of the highlight.
You might have noticed that the UI elements for canvas placement seem to glow on top of the environment behind them. This is because they have been set up with the additive blend mode. This year, RealityKit has added new support for the additive blend mode in its built-in materials, such as UnlitMaterial and PhysicallyBasedMaterial. To use it, first create a Program with the blend mode set to add in its descriptor.
Once the user has selected their drawing canvas, it’s time for the main event. The palette view appears, and the user can begin setting up their brush. The palette view is built in SwiftUI, and allows you to customize your brush type, and style. On the bottom of the palette, there is a set of preset brushes to choose from.
I’d like to give special attention to the brush preset view. Notice that each brush preset thumbnail is actually a full three-dimensional shape. This mesh is generated in the same way as actual brush strokes. SwiftUI and RealityKit integrate seamlessly together. Here, we use a RealityView for each thumbnail, which allows us to take advantage of RealityKit’s full capabilities.
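As a minimal sketch of that integration (the makePresetBrushEntity helper is hypothetical, not the app’s actual API), each thumbnail could be a small SwiftUI view hosting a RealityView:

import SwiftUI
import RealityKit

struct BrushPresetThumbnail: View {
    var body: some View {
        RealityView { content in
            // Generate the preset's brush-stroke entity with the same mesh
            // pipeline used for real strokes (hypothetical helper).
            let presetEntity = await makePresetBrushEntity()
            content.add(presetEntity)
        }
    }
}

// Hypothetical helper; the real app builds a brush-stroke mesh here.
func makePresetBrushEntity() async -> Entity {
    Entity()
}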
When you gaze at a brush preset, an eye-catching hover effect activates, sweeping a purple glow along the brush.
This is a shader-based hover effect, like I mentioned earlier. Let’s dive in and explore how this effect was achieved.
Shader-based hover effects are enabled by the Hover State node in a shader graph. This node provides useful tools for you to integrate hover effects into your shaders.
For example, Intensity is a system-provided value which animates between 0 and 1 based on gaze state. You could use the intensity value to recreate the highlight effect, like we saw earlier for the canvas handle.
For the preset view, however, we would like to achieve a more advanced effect. The glow effect should sweep along the brush mesh, from the beginning of the brush stroke to the end.
To achieve this complicated effect, the app uses a shader graph material. Let’s walk through the shader graph together.
We’ll use the Hover State node’s Time Since Hover Start property, which is the time in seconds since the hover event began.
We’ll use this to define the location along the curve of the glow highlight. When a hover event begins, the glow location will begin sweeping along the curve.
When generating meshes for the brush stroke, our app provides an attribute called CurveDistance. The app provides CurveDistance values for each vertex via the UV1 channel.
This is a visualization of curve distance on a brush stroke. This value increases along the length of the stroke.
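Conceptually, curve distance is just the accumulated length of the stroke. Here is a hypothetical sketch (not the app’s actual pipeline) of how it could be computed as points are appended:

import simd

// Accumulates the running length of a stroke; each appended point returns
// the curve distance to store on that point's vertices.
struct CurveDistanceAccumulator {
    private(set) var total: Float = 0
    private var previousPoint: SIMD3<Float>?

    mutating func append(_ point: SIMD3<Float>) -> Float {
        if let previous = previousPoint {
            total += simd_distance(previous, point)
        }
        previousPoint = point
        return total
    }
}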
The shader compares the location of the glow highlight with curve distance.
That way, the shader can understand the location of the glow relative to the current geometry.
The next step is to define the size of the glow effect. The current geometry will glow when it is within range of the glow location.
Now we can add an easing curve, which defines the intensity of the hover effect as the glow sweeps over the geometry.
The final step is to mix the hover effect color, with the original brush stroke color, depending on the intensity value we just computed.
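Putting those steps together, here is a CPU-side sketch of the math the shader graph performs. The glowSpeed and glowWidth parameters are hypothetical tuning values; the real effect is authored as ShaderGraph nodes in Reality Composer Pro:

import simd

// Approximates the glow-sweep hover effect for one point on the stroke.
func hoverColor(baseColor: SIMD3<Float>,
                glowColor: SIMD3<Float>,
                curveDistance: Float,
                timeSinceHoverStart: Float,
                glowSpeed: Float = 1.0,
                glowWidth: Float = 0.5) -> SIMD3<Float> {
    // Location of the glow along the stroke, sweeping from its start.
    let glowLocation = timeSinceHoverStart * glowSpeed

    // How far this piece of geometry is from the center of the glow.
    let distanceToGlow = abs(curveDistance - glowLocation)

    // Easing curve: full intensity at the glow center, falling to zero
    // at the edge of the glow's range.
    let t = min(max(1 - distanceToGlow / glowWidth, 0), 1)
    let intensity = t * t * (3 - 2 * t)

    // Mix the hover color with the original brush stroke color.
    return baseColor + (glowColor - baseColor) * intensity
}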
It looks great! To use shader-based hover effects, first create a HoverEffectComponent with the shader setting. Then use a ShaderGraphMaterial; it will receive updates to the Hover State node. Now that we have built a way for users to configure their brush, it’s time to talk about the core of the app: generating geometry for each brush stroke. Broadly, a mesh is a collection of vertices and primitives, like triangles, which connect them.
Each vertex is associated with a number of attributes, such as the position or texture coordinate of that vertex.
And those attributes are described by data. For example, each vertex position is a 3-dimensional vector.
The vertex data needs to be organized into buffers so that it can be submitted to the GPU. For most RealityKit meshes, data is organized in memory contiguously. So vertex position 0 is followed in memory by vertex position 1, which is followed by vertex position 2, and so on. The same goes for all other vertex attributes. The index buffer is laid out separately, and it contains the vertex indices of each triangle in the mesh.
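For comparison, here is a minimal sketch (not from the session) of the standard path, where MeshDescriptor takes one contiguous buffer per attribute plus an index buffer:

import RealityKit

// A quad built from per-attribute buffers: positions, normals, and a
// separate index buffer of triangles.
var quadDescriptor = MeshDescriptor(name: "quad")
quadDescriptor.positions = MeshBuffer([
    SIMD3<Float>(-0.5, 0, -0.5), SIMD3<Float>(0.5, 0, -0.5),
    SIMD3<Float>(0.5, 0, 0.5), SIMD3<Float>(-0.5, 0, 0.5)
])
quadDescriptor.normals = MeshBuffer(Array(repeating: SIMD3<Float>(0, 1, 0), count: 4))
quadDescriptor.primitives = .triangles([0, 1, 2, 0, 2, 3])

let quadMesh = try MeshResource.generate(from: [quadDescriptor])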
RealityKit’s standard mesh layout is versatile and fits many different use cases. But in some cases, a domain-specific approach can be more efficient. The drawing app uses a custom-built geometry processing pipeline to create the mesh of the user’s brush strokes. For example, each brush stroke is smoothed to improve the mesh curvature.
This algorithm is optimized so that appending points to the end of the brush stroke is as fast as possible. It is critical to minimize latency.
A single buffer is used to lay out the vertices for brush stroke meshes.
But unlike the standard mesh layout, each vertex is described in its entirety before the next one begins. So, the attributes are interleaved. The position of the first vertex is followed by the normal of that vertex, which is followed by the bitangent, and so on, until all of the attributes have been described. Only then does the buffer begin to describe the second vertex, and so on.
In contrast, the standard vertex buffer lays out all of the data for each attribute contiguously. The brush vertex buffer layout is particularly convenient for a drawing app.
When generating brush strokes, the app constantly appends vertices to the end of the vertex buffer. Notice that the brush vertex buffer can append new vertices without modifying the locations of older data. However, when you do this with the standard vertex buffer, most of the data needs to be moved as the buffer grows. Brush vertices also have different attributes than you would see in the standard layout. Some of the attributes, like position, normal, and bitangent, are standard, while others, like the color, material properties, and curve distance, are custom attributes.
In our app’s code, brush vertices are represented as this struct in Metal Shading Language.
Each entry of the struct corresponds with an attribute of the vertex.
So we’re faced with a problem. On the one hand, we want to retain the vertex layout of our high performance geometry engine, and avoid any unnecessary conversions or copying. But on the other hand, our geometry engine’s layout is incompatible with RealityKit’s standard layout. What we need is a way to bring our GPU buffers to RealityKit as-is, and instruct RealityKit how to read them.
And now you can, with a brand-new API called LowLevelMesh. With LowLevelMesh, you can arrange your vertex data in a wide variety of ways.
You have four distinct Metal buffers to use for vertex data. So we could use a similar layout to RealityKit’s standard layout.
But sometimes it is useful to have more than one buffer. For example, suppose you needed to update texture coordinates more frequently than any other attribute. It is more efficient to move this dynamic data, to its own buffer.
You can rearrange the vertex buffers to be interleaved. Or a combination of interleaved and non-interleaved.
You can also use any Metal primitive type, such as triangle strips.
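For instance, the split described above, with frequently updated texture coordinates in their own buffer, might be declared like this. The attribute and layout choices here are a hypothetical illustration, not the drawing app’s layout:

import RealityKit
import Metal

// Attributes pick a layout via layoutIndex; each layout names a Metal buffer.
let splitVertexAttributes: [LowLevelMesh.Attribute] = [
    // Buffer 0: positions, updated rarely.
    .init(semantic: .position, format: .float3, layoutIndex: 0, offset: 0),
    // Buffer 1: texture coordinates, updated frequently on their own.
    .init(semantic: .uv0, format: .float2, layoutIndex: 1, offset: 0)
]

let splitVertexLayouts: [LowLevelMesh.Layout] = [
    // SIMD3<Float> has a 16-byte stride; the .float3 format reads the first 12 bytes.
    LowLevelMesh.Layout(bufferIndex: 0, bufferOffset: 0,
                        bufferStride: MemoryLayout<SIMD3<Float>>.stride),
    LowLevelMesh.Layout(bufferIndex: 1, bufferOffset: 0,
                        bufferStride: MemoryLayout<SIMD2<Float>>.stride)
]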
I encourage you to think about how a LowLevelMesh and its custom buffer layouts can benefit your app.
Maybe your mesh data is sourced from a binary file, with its own custom layout. Now you can transfer that data directly into RealityKit, without any overhead for conversion. Or, perhaps you are bridging an existing mesh processing pipeline with its own pre-defined buffer layout, like one you would see in a Digital Content Creation tool, or a CAD application, to RealityKit.
You can even use LowLevelMesh as a way to efficiently bridge mesh data from a game engine into RealityKit.
LowLevelMesh expands the possibilities of how you can provide mesh data to RealityKit, and we’re excited to see what your apps can achieve! Let’s dive in and take a look at how you can create a LowLevelMesh in code.
Now, our app can provide its vertex buffer to LowLevelMesh as is, without any extra conversions or unnecessary copies.
You use LowLevelMesh attributes to describe how the vertices are laid out. I’ll set up the attribute list in a Swift extension to our SolidBrushVertex struct.
I’ll begin by declaring the attribute for position.
Let’s go into detail. The first step is to define a semantic, which instructs the LowLevelMesh on how to interpret the attribute. In this case, the attribute is a position, so I’ll use that semantic.
Next, I’ll define the Metal vertex format for this attribute. In this case, I must choose float3 to match the definition in SolidBrushVertex.
Next, I’ll provide an offset in bytes of the attribute.
Finally, I’ll provide a layout index. This indexes into the list of vertex layouts, which we will discuss later. The drawing app only uses a single layout, so I’ll use index zero.
Now, I’ll declare the other mesh attributes. The normal and bitangent attributes are similar to position, except different memory offsets and semantics are used.
The color attribute uses half-precision floating point values. New this year, you can use any Metal vertex format with LowLevelMesh, including compressed vertex formats.
For the other two attributes, I’ll use the semantics UV1 and UV3. Also new this year, up to 8 UV channels are available for you to use in LowLevelMesh. A shader graph material can access these values. Now, we can create the LowLevelMesh object itself. To do this, I’ll create a LowLevelMesh.Descriptor, which is conceptually similar to Metal’s MTLVertexDescriptor, but it also contains information that RealityKit needs to ingest your mesh.
First, I’ll declare the required capacity for the vertex and index buffers.
Next, I’ll pass along the list of vertex attributes. This is the list I put together on the previous slide.
Then, I’ll make a list of vertex layouts. Each vertex attribute uses one of the layouts.
LowLevelMesh provides up to four Metal buffers for your vertex data. The buffer index declares which of those buffers should be used.
Then you provide a buffer offset and the stride of each vertex. Most of the time, you’ll only use one buffer as we did here.
Now, we are able to initialize the LowLevelMesh.
The last step is to populate a list of parts. Each part spans a region of the index buffer.
You can assign a different RealityKit material index to each mesh part.
And our app uses a triangle strip topology for improved memory efficiency.
Finally, you can create a MeshResource from your LowLevelMesh and assign it to an Entity’s ModelComponent.
When it’s time to update the vertex data of a LowLevelMesh, you can use the withUnsafeMutableBytes API. This API gives you access to the actual buffer, which will be submitted to the GPU for rendering. So, there is minimal overhead when updating your mesh data.
For example, because we know the memory layout of our mesh up-front, we can use bindMemory to convert the provided raw pointer, to a buffer pointer.
The same can be said for index buffer data. You can update your LowLevelMesh index buffers, via withUnsafeMutableIndices.
We’ve already seen how LowLevelMesh is a powerful tool to accelerate your app’s mesh processing pipeline. LowLevelMesh additionally allows you to back vertex or index buffer updates, with GPU compute. Let’s check out an example.
This is the sparkle brush in our drawing app. It generates a particle field which follows your brush strokes. This particle field updates dynamically every frame, so it uses a different updating scheme than what we saw for solid brushes. Due to the frequency and complexity of the mesh updates, it makes sense to use the GPU.
Let’s go into detail. The sparkle brush contains a list of per-particle attributes, like position and color. As before, we include the curveDistance parameter, as well as the size of the particle.
Our GPU particle simulation uses the type SparkleBrushParticle to track the attributes and velocity of each particle. The app uses an auxiliary buffer of SparkleBrushParticles for the simulation.
The struct SparkleBrushVertex is used for the vertex data of the mesh. It contains the UV coordinates of each vertex, so that our shader can understand how to orient the particle in 3D space. A plane with four vertices is created for each particle. So our app needs to maintain two buffers to update the sparkle brush mesh, the particle simulation buffer, filled with SparkleBrushParticles, and a LowLevelMesh vertex buffer, which contains SparkleBrushVertices.
Just like with the solid brush, I’ll provide a specification of the vertex buffer with a list of LowLevelMesh Attributes. The list of attributes corresponds with the members of SparkleBrushVertex.
When it’s time to populate the LowLevelMesh on the GPU, you use a Metal command buffer and compute command encoder.
When the buffer has finished its work, RealityKit automatically applies the changes. Here’s how that looks in code. As I mentioned before, the app uses a Metal buffer for the particle simulation, and a LowLevelMesh for the vertex buffer.
I’ll set up a Metal command buffer and compute command encoder. This is what will allow our app to run a GPU compute kernel to build our mesh.
I’ll call replace on the LowLevelMesh and provide the command buffer.
This returns a Metal buffer. This vertex buffer will be used directly by RealityKit for rendering.
After dispatching the simulation to the GPU, I commit the command buffer. When the command buffer completes, RealityKit will automatically start using the updated vertex data.
Our app looks great thanks to fast and responsive brush stroke generation. Now, let’s put the finishing touches on our app with an engaging splash screen. A splash screen is a great way to welcome the user into our app’s space. It’s also an opportunity to have fun and show off the app’s visual style.
The splash screen for our app contains four visual elements.
The logotype contains 3D text reading “RealityKit Drawing App”, set in two different fonts. The logomark is also a 3D shape.
There’s a start button at the bottom, inviting the user to begin drawing. And in the background, there is a striking graphic which glows in your environment.
Let’s start by building the logotype. To begin, I’ll create an AttributedString of “RealityKit” with the default system font.
New this year, you can create a MeshResource in RealityKit, from AttributedString, using MeshResource extruding.
Since we’re using AttributedString, it’s easy to add additional lines of text with different properties. Let’s write the text “Drawing App” in a different font and with a larger font size.
Now, let's center the text with a paragraph style.
To learn more about how to style text with AttributedString, check out the talk: "What’s new in Foundation", from WWDC21.
Let’s zoom in on the text we’ve made so far. Right now the 3D model looks a bit too flat, so let’s customize it. You can do this by passing a ShapeExtrusionOptions structure to MeshResource(extruding:).
First, I will specify a larger depth, to produce a thicker 3D shape. Next I’ll add a second material to our mesh. You can specify which material index to assign to the front, back, and sides.
Last, I’ll add a subtle chamfer so that the outline material is more visible when viewing the text from the front. In this case, I specify the chamfer radius to be one tenth of a point.
The app also generates the logomark using MeshResource(extruding:). I’ll use a SwiftUI path, so there is a lot of flexibility in how the shape can be defined. The logomark is set up as a series of Bézier curves.
To learn more about SwiftUI Path, check out the SwiftUI tutorial “Drawing paths and shapes”.
Now let’s talk about the background of the splash screen. This is one of the most striking aesthetic elements of the app. To build it, I used a brand new API called LowLevelTexture. LowLevelTexture provides the same fast resource update semantics as LowLevelMesh, but for texture assets.
On the splash screen, the app uses LowLevelTexture to generate a sort of shape description for the swarm of pill shapes. This shape description is stored in the red channel of the texture. Darker regions are far inside one of the pills, whereas lighter regions are outside of the pills.
In the green channel of the texture, the app stores a description of the splash screen’s vignette.
This texture is interpreted into the final image via a shader graph shader in Reality Composer Pro.
You create a LowLevelTexture from its Descriptor. A LowLevelTexture descriptor is comparable to Metal’s MTLTextureDescriptor. Just like with LowLevelMesh, LowLevelTexture offers detailed control over the pixel format and texture usage. And now, you can use compressed pixel formats with RealityKit. For the splash screen, we only need the red and green channels, so we use the pixel format RG16Float.
You can initialize a LowLevelTexture from the descriptor. Then, create a RealityKit texture resource from the LowLevelTexture. Now, you’re ready to use this texture with a Material.
You update a LowLevelTexture on the GPU just like a LowLevelMesh. First, set up our Metal command buffer and compute command encoder.
Next, call LowLevelTexture.replace with your command buffer; this returns a Metal texture which you can write to in your compute shader. Finally, dispatch the GPU compute and commit your command buffer. When the command buffer completes, the Metal texture will automatically appear in RealityKit. Putting it all together again, I’m really happy with the look of this splash screen. The eye-catching background combined with personalized 3D geometry gives it a really distinct look. The perfect finishing touch to our experience! That about wraps it up. Today, we built an interactive spatial drawing app all in RealityKit. We used RealityKit spatial tracking APIs so that the app can detect where the user draws in their space. We used SwiftUI and advanced hover effects to build an interactive spatial UI to customize brushes and styles. We learned how resource updates work in RealityKit, and used advanced Low-Level APIs to generate meshes and textures interactively. Finally, we used new APIs to import 2D vector graphics and make them spatial.
I recommend you check out the talks “Discover RealityKit APIs for iOS, macOS and visionOS” and “Enhance your spatial computing app with RealityKit audio” to learn more about what’s new in RealityKit this year. We can’t wait to see what you build. Enjoy the rest of WWDC24!
-
4:18 - Using SpatialTrackingSession
// Retain the SpatialTrackingSession while your app needs access
let session = SpatialTrackingSession()

// Declare needed tracking capabilities
let configuration = SpatialTrackingSession.Configuration(tracking: [.hand])

// Request authorization for spatial tracking
let unapprovedCapabilities = await session.run(configuration)

if let unapprovedCapabilities, unapprovedCapabilities.anchor.contains(.hand) {
    // User has rejected hand data for your app.
    // AnchorEntities will continue to remain anchored and update visually
    // However, AnchorEntity.transform will not receive updates
} else {
    // User has approved hand data for your app.
    // AnchorEntity.transform will report hand anchor pose
}
-
7:07 - Use MeshResource extrusion
// Use MeshResource(extruding:) to generate the canvas edge
let path = SwiftUI.Path { path in
    // Generate two concentric circles as a SwiftUI.Path
    path.addArc(center: .zero, radius: outerRadius,
                startAngle: .degrees(0), endAngle: .degrees(360),
                clockwise: true)
    path.addArc(center: .zero, radius: innerRadius,
                startAngle: .degrees(0), endAngle: .degrees(360),
                clockwise: true)
}.normalized(eoFill: true)

var options = MeshResource.ShapeExtrusionOptions()
options.boundaryResolution = .uniformSegmentsPerSpan(segmentCount: 64)
options.extrusionMethod = .linear(depth: extrusionDepth)

return try MeshResource(extruding: path, extrusionOptions: options)
-
9:33 - Highlight HoverEffectComponent
// Use HoverEffectComponent with .highlight
let placementEntity: Entity = // ...

let hover = HoverEffectComponent(
    .highlight(.init(
        color: UIColor(/* ... */),
        strength: 5.0)
    )
)
placementEntity.components.set(hover)
-
9:54 - Using Blend Modes
// Create an UnlitMaterial with Additive Blend Mode
var descriptor = UnlitMaterial.Program.Descriptor()
descriptor.blendMode = .add

let prog = await UnlitMaterial.Program(descriptor: descriptor)

var material = UnlitMaterial(program: prog)
material.color = UnlitMaterial.BaseColor(tint: UIColor(/* ... */))
-
13:45 - Shader based hover effects
// Use shader-based hover effects
let hoverEffectComponent = HoverEffectComponent(.shader(.default))
entity.components.set(hoverEffectComponent)

let material = try await ShaderGraphMaterial(named: "/Root/SolidPresetBrushMaterial",
                                             from: "PresetBrushMaterial",
                                             in: realityKitContentBundle)
entity.components.set(ModelComponent(mesh: /* ... */, materials: [material]))
-
16:56 - Defining a vertex buffer struct for the solid brush
struct SolidBrushVertex {
    packed_float3 position;
    packed_float3 normal;
    packed_float3 bitangent;
    packed_float2 materialProperties;
    float curveDistance;
    packed_half3 color;
};
-
19:27 - Defining LowLevelMesh Attributes for solid brush
extension SolidBrushVertex {
    static var vertexAttributes: [LowLevelMesh.Attribute] {
        typealias Attribute = LowLevelMesh.Attribute
        return [
            Attribute(semantic: .position, format: MTLVertexFormat.float3,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.position)!),
            Attribute(semantic: .normal, format: MTLVertexFormat.float3,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.normal)!),
            Attribute(semantic: .bitangent, format: MTLVertexFormat.float3,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.bitangent)!),
            Attribute(semantic: .color, format: MTLVertexFormat.half3,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.color)!),
            Attribute(semantic: .uv1, format: MTLVertexFormat.float,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.curveDistance)!),
            Attribute(semantic: .uv3, format: MTLVertexFormat.float2,
                      layoutIndex: 0, offset: MemoryLayout.offset(of: \Self.materialProperties)!)
        ]
    }
}
-
21:14 - Make LowLevelMesh
private static func makeLowLevelMesh(vertexBufferSize: Int, indexBufferSize: Int,
                                     meshBounds: BoundingBox) throws -> LowLevelMesh {
    var descriptor = LowLevelMesh.Descriptor() // Similar to MTLVertexDescriptor

    descriptor.vertexCapacity = vertexBufferSize
    descriptor.indexCapacity = indexBufferSize
    descriptor.vertexAttributes = SolidBrushVertex.vertexAttributes

    let stride = MemoryLayout<SolidBrushVertex>.stride
    descriptor.vertexLayouts = [LowLevelMesh.Layout(bufferIndex: 0,
                                                    bufferOffset: 0,
                                                    bufferStride: stride)]

    let mesh = try LowLevelMesh(descriptor: descriptor)

    mesh.parts.append(LowLevelMesh.Part(indexOffset: 0,
                                        indexCount: indexBufferSize,
                                        topology: .triangleStrip,
                                        materialIndex: 0,
                                        bounds: meshBounds))
    return mesh
}
-
22:28 - Creating a MeshResource
let mesh: LowLevelMesh

let resource = try MeshResource(from: mesh)

entity.components[ModelComponent.self] = ModelComponent(mesh: resource, materials: [...])
-
22:37 - Updating vertex data of LowLevelMesh using withUnsafeMutableBytes API
let mesh: LowLevelMesh

mesh.withUnsafeMutableBytes(bufferIndex: 0) { buffer in
    let vertices: UnsafeMutableBufferPointer<SolidBrushVertex> =
        buffer.bindMemory(to: SolidBrushVertex.self)
    // Write to vertex buffer `vertices`
}
-
23:07 - Updating LowLevelMesh index buffers using withUnsafeMutableBytes API
let mesh: LowLevelMesh

mesh.withUnsafeMutableIndices { buffer in
    let indices: UnsafeMutableBufferPointer<UInt32> =
        buffer.bindMemory(to: UInt32.self)
    // Write to index buffer `indices`
}
-
23:58 - Creating a particle brush using LowLevelMesh
struct SparkleBrushAttributes {
    packed_float3 position;
    packed_half3 color;
    float curveDistance;
    float size;
};

// Describes a particle in the simulation
struct SparkleBrushParticle {
    struct SparkleBrushAttributes attributes;
    packed_float3 velocity;
};

// One quad (4 vertices) is created per particle
struct SparkleBrushVertex {
    struct SparkleBrushAttributes attributes;
    simd_half2 uv;
};
-
24:58 - Defining LowLevelMesh Attributes for sparkle brush
extension SparkleBrushVertex {
    static var vertexAttributes: [LowLevelMesh.Attribute] {
        typealias Attribute = LowLevelMesh.Attribute
        return [
            Attribute(semantic: .position, format: .float3, layoutIndex: 0,
                      offset: MemoryLayout.offset(of: \Self.attributes.position)!),
            Attribute(semantic: .color, format: .half3, layoutIndex: 0,
                      offset: MemoryLayout.offset(of: \Self.attributes.color)!),
            Attribute(semantic: .uv0, format: .half2, layoutIndex: 0,
                      offset: MemoryLayout.offset(of: \Self.uv)!),
            Attribute(semantic: .uv1, format: .float, layoutIndex: 0,
                      offset: MemoryLayout.offset(of: \Self.attributes.curveDistance)!),
            Attribute(semantic: .uv2, format: .float, layoutIndex: 0,
                      offset: MemoryLayout.offset(of: \Self.attributes.size)!)
        ]
    }
}
-
25:28 - Populate LowLevelMesh on GPU
let inputParticleBuffer: MTLBuffer
let lowLevelMesh: LowLevelMesh
let commandBuffer: MTLCommandBuffer
let encoder: MTLComputeCommandEncoder
let populatePipeline: MTLComputePipelineState

commandBuffer.enqueue()
encoder.setComputePipelineState(populatePipeline)

let vertexBuffer: MTLBuffer = lowLevelMesh.replace(bufferIndex: 0, using: commandBuffer)
encoder.setBuffer(inputParticleBuffer, offset: 0, index: 0)
encoder.setBuffer(vertexBuffer, offset: 0, index: 1)
encoder.dispatchThreadgroups(/* ... */)
// ...

encoder.endEncoding()
commandBuffer.commit()
-
27:01 - Use MeshResource extrusion to generate 3D text
// Use MeshResource(extruding:) to generate 3D text
var textString = AttributedString("RealityKit")
textString.font = .systemFont(ofSize: 8.0)

let secondLineFont = UIFont(name: "ArialRoundedMTBold", size: 14.0)
let attributes = AttributeContainer([.font: secondLineFont])
textString.append(AttributedString("\nDrawing App", attributes: attributes))

let paragraphStyle = NSMutableParagraphStyle()
paragraphStyle.alignment = .center
let centerAttributes = AttributeContainer([.paragraphStyle: paragraphStyle])
textString.mergeAttributes(centerAttributes)

var extrusionOptions = MeshResource.ShapeExtrusionOptions()
extrusionOptions.extrusionMethod = .linear(depth: 2)
extrusionOptions.materialAssignment = .init(front: 0, back: 0,
                                            extrusion: 1,
                                            frontChamfer: 1,
                                            backChamfer: 1)
extrusionOptions.chamferRadius = 0.1

let textMesh = try await MeshResource(extruding: textString,
                                      extrusionOptions: extrusionOptions)
-
28:25 - Use MeshResource extrusion to turn a SwiftUI Path into 3D mesh
// Use MeshResource(extruding:) to bring SwiftUI.Path to 3D
let graphic = SwiftUI.Path { path in
    path.move(to: CGPoint(x: -0.7, y: 0.135413))
    path.addCurve(to: CGPoint(x: -0.7, y: 0.042066),
                  control1: CGPoint(x: -0.85, y: 0.067707),
                  control2: CGPoint(x: -0.85, y: 0.021033))
    // ...
}

var options = MeshResource.ShapeExtrusionOptions()
// ...

let graphicMesh = try await MeshResource(extruding: graphic,
                                         extrusionOptions: options)
-
29:44 - Defining a LowLevelTexture
let descriptor = LowLevelTexture.Descriptor(pixelFormat: .rg16Float,
                                            width: textureResolution,
                                            height: textureResolution,
                                            textureUsage: [.shaderWrite, .shaderRead])
let lowLevelTexture = try LowLevelTexture(descriptor: descriptor)

var textureResource = try TextureResource(from: lowLevelTexture)

var material = UnlitMaterial()
material.color = .init(tint: .white, texture: .init(textureResource))
-
30:27 - Update a LowLevelTexture on the GPU
let lowLevelTexture: LowLevelTexture
let commandBuffer: MTLCommandBuffer
let encoder: MTLComputeCommandEncoder
let computePipeline: MTLComputePipelineState

commandBuffer.enqueue()
encoder.setComputePipelineState(computePipeline)

let writeTexture: MTLTexture = lowLevelTexture.replace(using: commandBuffer)
encoder.setTexture(writeTexture, index: 0)

// ...
encoder.dispatchThreadgroups(/* ... */)

encoder.endEncoding()
commandBuffer.commit()
-