Streaming is available in most browsers and in the Developer app.
-
Explore advanced rendering with RealityKit 2
Create stunning visuals for your augmented reality experiences with cutting-edge rendering advancements in RealityKit. Learn the art of writing custom shaders, draw real-time dynamic meshes, and explore creative post-processing effects to help you stylize your AR scene.
Resources
- Building an Immersive Experience with RealityKit
- Creating a Fog Effect Using Scene Depth
- Displaying a Point Cloud Using Scene Depth
- Explore the RealityKit Developer Forums
- RealityKit
Related Videos
WWDC22
WWDC21
WWDC20
-
Download
♪ Bass music playing ♪ ♪ Courtland Idstrom: Hello, my name is Courtland Idstrom, and I'm an engineer on the RealityKit team. In this video, I'm going to show you how to use the new rendering features in RealityKit 2. RealityKit is a framework designed to make building AR apps simple and intuitive. Rendering is a key piece of RealityKit, centered around highly realistic, physically based rendering. Since our first release in 2019, we've been working on your feedback and we're shipping a major update to RealityKit. In the "Dive into Reality Kit 2" session, we covered the evolution of RealityKit, providing many enhancements -- from updates to the ECS system, more evolved material and animation capabilities, and generating audio and texture resources at runtime. To showcase these improvements, we built an app that turns your living room into an underwater aquarium. In this talk, we'll show some of the new rendering features that went into the app. RealityKit 2 provides control and flexibility with how objects are rendered, allowing you to create even better AR experiences. This year we bring advancements to our material system, enabling you to add your own materials by authoring custom Metal shaders. Custom post effects allow you to augment RealityKit's post effects with your own. New mesh APIs allow mesh creation, inspection, and modifications at runtime. Let's start with the most requested feature in RealityKit 2, support for custom shaders. RealityKit's rendering centers around a physically based rendering model. Its built-in shaders make it easy to create models that look natural next to real objects across a range of lighting conditions. This year, we're building on these physically based shaders and exposing the ability for you to customize the geometry and surface of models using shaders. The first of our shader APIs is geometry modifier. A geometry modifier is a program, written in the Metal Shading Language, that gives you the opportunity to change the vertices of an object every frame as it's rendered on the GPU. This includes moving them and customizing their attributes, such as color, normal, or UVs. It's run inside of RealityKit's vertex shader, and is perfect for ambient animation, deformation, particle systems, and billboards. Our seaweed is a great example of ambient animation. The seaweed is moving slowly due to the movement of water around it. Let's take a closer look. Here you can see the wireframe of the seaweed as created by our artist; this shows the vertices and triangles comprising the mesh. We're going to write a shader program that executes on each vertex to create our motion. We'll use a sine wave, a simple periodic function, to create movement. We're simulating water currents so we want nearby vertices to behave similarly, regardless of their model's scale or orientation. For this reason, we use the vertex's world position as an input to the sine function. We include a time value as well, so that it moves over time. Our first sine wave is in the Y dimension to create up-down movement. To control the period of the motion, we'll add a spatial scale. And we can control the amount of its movement with an amplitude. We'll apply the same function to the X and Z dimensions so it moves in all three axes. Now, let's look at the model as a whole. One thing we haven't yet accounted for: vertices close to the base of the stalk have very little room for movement, while ones at the top have the highest freedom to move. 
To simulate this, we can use the vertex's y-coordinate relative to the object's origin as a scaling factor for all three axes, which gives us our final formula. Now that we have a plan for our shader, let's take a look at where to find these parameters. Geometry parameters are organized into a few categories. The first is uniforms, values that are the same for every vertex of the object within one frame. We need time for our seaweed. Textures contain all textures authored as part of the model, plus an additional custom slot, which you can use as you see fit. Material constants have any parameters, such as tint or opacity scale, authored with the object or set via code. Geometry contains some read-only values, such as the current vertex's model position or vertex ID. We need both model and world positions for our seaweed movement. Geometry also has read-write values, including normal, UVs, and model position offset. Once we have computed our offset, we'll store it here to move our vertices. Let's dive into Metal shader. We start out by including RealityKit.h. Now we declare a function with the visible function attribute. This instructs the compiler to make it available separately from other functions. The function takes a single parameter, which is RealityKit's geometry_parameters. We'll retrieve all values through this object. Using the geometry member of params, we'll ask for both the world position and model position. Next we calculate a phase offset, based on the world position at the vertex and time. Then we apply our formula to calculate this vertex's offset. We store the offset on geometry, which will get added to the vertex's model position. We have our geometry modifier, but it's not yet hooked up to our seaweed. Let's switch to our ARView subclass, written in Swift. We start by loading our app's default Metal library, which contains our shader. Next we construct a geometryModifier using our shader's name and library. For each material on the seaweed, we create a new custom material. We pass the existing material as the first parameter to CustomMaterial, so that it inherits the textures and material properties from the base material while adding our geometry modifier. It looks pretty nice! Since we're underwater, we've kept the animation pretty slow. By tweaking amplitude and phase, the same effect can be extended to grass, trees, or other foliage. Now that we've shown how to modify geometry, let's talk about shading. This is our octopus from the underwater scene, looking great with our built-in shader. As they do, our octopus transitions between multiple looks. The second look has a reddish color. Our artist has authored two base color textures, one for each look. In addition to the color change, the red octopus has a higher roughness value, making it less reflective. And, to make our octopus even more special, we wanted to create a nice transition between looks. Here you can see the transition in action. Mesmerizing. While each look can be described as a physically based material, for the transition itself, we need to write a surface shader. So what is a surface shader? A surface shader allows you to define the appearance of an object. It runs inside the fragment shader for every visible pixel of an object. In addition to color, this includes surface properties such as normal, specular, and roughness. You can write shaders that enhance an object's appearance or replace it entirely, creating new effects. We've seen the two base-color textures for our octopus. 
For the transition effect, our artist has encoded a special texture for us. This texture is actually a combination of three different layers. There's a noise layer on top creating localized transition patterns. We have a transition layer, which dictates the overall movement, starting at the head and moving towards the tentacles. And there's a mask layer for areas that we don't want to change color, such as the eye and underside of the tentacles. These three layers are combined into the red, green, and blue channels of our texture, which we assign to the custom texture slot. With our textures set up, let's look at how to access these from a surface shader. Similar to the geometry modifier, the surface shader has access to uniforms, textures, and material constants. Time is an input to our octopus transition. We'll sample textures authored with our model and read material constants, allowing our artist to make model-wide adjustments. Geometry -- such as position, normal, or UVs -- appear in a geometry structure. These are the interpolated outputs from the vertex shader. We'll use UV0 as our texture coordinate. A surface shader writes a surface structure. Properties start with default values, and we're free to calculate these values in any way we see fit. We'll be calculating base color and normal. Then, four surface parameters: roughness, metallic, ambient occlusion, and specular. Now that we know where our values live, let's start writing our shader. We'll do this in three steps. First calculate the transition value, where 0 is a fully purple octopus and 1 is fully red. Using the transition value, we'll calculate color and normal and then fine-tune by assigning material properties. Let's get started. First step: transition. We're building the octopus surface function, which takes a surface_parameters argument. Since we're using textures, we declare a sampler. On the right, you can see what our octopus looks like with an empty surface shader -- it's gray and a little bit shiny. RealityKit puts you in complete control of what does or does not contribute to your model's appearance. In order to compute color, there's a few things we need to do first. We'll store some convenience variables. We access our UV0, which we'll use as a texture coordinate. Metal and USD have different texture coordinate systems, so we'll invert the y-coordinate to match the textures loaded from USD. Now we'll sample our transition texture -- the three-layered texture our artist created. Our artist set up a small function that takes the mask value plus time, and returns 0 to 1 values for blend and colorBlend. Second step: color and normal. With our previously computed blend variable, we can now calculate the octopus's color and see the transition. To do this, we sample two textures: the base color and the secondary base color, which we've stored in emissive_color. Then we blend between the two colors using the previously computed colorBlend. We'll multiply by base_color_tint -- a value from the material -- and set our base color on the surface. Next we'll apply the normal map, which adds surface deviations, most noticeable on the head and tentacles. We sample the normal map texture, unpack its value, and then set on the surface object. Onto material properties. Here's our octopus so far, with color and normal. Let's see how surface properties affect its look. 
Roughness, which you'll see on the lower body; ambient occlusion, which will darken up the lower portions; and specular, which gives us a nice reflection on the eye and some additional definition on the body. Let's add these to our shader. We sample four textures on the model, one for each property. Next we scale these values with material settings. In addition, we're also increasing the roughness as we transition from purple to red. Then we set our four values on the surface. Similar to before, we need to apply the shader to our model. We assign this material to our model in our ARView subclass. First we load our two additional textures, then load our surface shader. Like before, we're constructing new materials from the object's base material, this time with a surface shader and our two additional textures. And we're done. So to recap, we've shown the seaweed animation using geometry modifiers and how to build an octopus transition with surface shaders. While we've demonstrated them separately, you can combine the two for even more interesting effects. Moving on to another highly requested feature, support for adding custom post processing effects. RealityKit comes with a rich suite of camera-matched post effects like motion blur, camera noise, and depth of field. These effects are all designed to make virtual and real objects feel like they're part of the same environment. These are available for you to customize on ARView. This year, we're also exposing the ability for you to create your own fullscreen effects. This allows you to leverage RealityKit for photo realism, and add new effects to tailor the result for your app. So what is a post process? A post process is a shader or series of shaders that execute after objects have been rendered and lit. It also occurs after any RealityKit post effects. Its inputs are two textures: color and a depth buffer. The depth buffer is displayed as greyscale here; it contains a distance value for each pixel relative to the camera. A post process writes its results to a target color texture. The simplest post effect would copy source color into target color. We can build these in a few ways. Apple's platforms come with a number of technologies that integrate well with post effects, such as Core Image, Metal Performance Shaders, and SpriteKit. You can also write your own with the Metal Shading Language. Let's start with some Core Image effects. Core Image is an Apple framework for image processing. It has hundreds of color-processing, stylization, and deformation effects that you can apply to images and video. Thermal is a neat effect -- something you might turn on for an underwater fish finder. Let's see how easy it is to integrate with RealityKit. All of our post effects will follow the same pattern. You set render callbacks, respond to prepare with device, and then post process will be called every frame. Render callbacks exist on RealityKit's ARView. We want both the prepareWithDevice and postProcess callbacks. Prepare with device will be called once with the MTLDevice. This is a good opportunity to create textures, load compute or render pipelines, and check device capabilities. This is where we create our Core Image context. The postProcess callback is invoked each frame. We'll create a CIImage, referencing our source color texture. Next we create our thermal filter. If you're using a different Core Image filter, this is where you'd configure its other parameters. 
Then we create a render destination, which targets our output color texture and utilizes the context's command buffer. We ask Core Image to preserve the image's orientation and start the task. That's it! With Core Image, we've unlocked hundreds of prebuilt effects that we can use. Now let's see how we can use Metal Performance Shaders to build new effects. Let's talk about bloom. Bloom is a screen space technique that creates a glow around brightly lit objects, simulating a real-world lens effect. Core Image contains a bloom effect, but we're going to build our own so we can control every step of the process. We'll build the effect with Metal Performance Shaders, a collection of highly optimized compute and graphics shaders. To build this shader, we're going to construct a graph of filters using color as the source. We first want to isolate the areas that are bright. To do this, we use an operation called "threshold to zero." It converts color to luminance and sets every pixel below a certain brightness level to 0. We then blur the result using a Gaussian blur, spreading light onto adjacent areas. Efficient blurs can be challenging to implement and often require multiple stages. Metal Performance Shaders handles this for us. Then we add this blurred texture to the original color, adding a glow around bright areas. Let's implement this graph as a post effect. We start by creating an intermediate bloomTexture. Then execute our ThresholdToZero operation, reading from sourceColor and writing to bloomTexture. Then we perform a gaussianBlur in place. Finally, we add our original color and this bloomed color together. That's it! Now that we've seen a couple ways to create post effects, let's talk about a way to put effects on top of our output using SpriteKit. SpriteKit is Apple's framework for high performance, battery-efficient 2D games. It's perfect for adding some effects on top of our 3D view. We'll use it to add some bubbles on the screen as a post effect, using the same prepareWithDevice and postProcess callbacks. We have the same two steps as before. In prepareWithDevice, we'll create our SpriteKit renderer and load the scene containing our bubbles. Then in our postProcess callback, we'll copy our source color to target color, update our SpriteKit scene, and render on top of the 3D content. prepareWithDevice is pretty straightforward -- we create our renderer and load our scene from a file. We'll be drawing this over our AR scene, so we need our SpriteKit background to be transparent. In postProcess, we first blit the source color to the targetColorTexture; this will be the background that SpriteKit renders in front of. Then advance our SpriteKit scene to the new time so our bubbles move upward. Set up a RenderPassDescriptor and render onto it. And that's it! We've shown how to utilize existing frameworks to make post effects, but sometimes you really do need to make one from scratch. You can also author a full-screen effect by writing a compute shader. For our underwater demo, we needed a fog effect that applies to virtual objects and camera passthrough. Fog simulates the scattering of light through a medium; its intensity is proportional to the distance. To create this effect, we needed to know how far each pixel is from the device. Fortunately, ARKit and RealityKit both provide access to depth information. For LiDAR-enabled devices, ARKit provides access to sceneDepth, containing distances in meters from the camera. 
These values are extremely accurate at a lower resolution than the full screen. We could use this depth directly but it doesn't include virtual objects, so they wouldn't fog correctly. In our postProcess, RealityKit provides access to depth for virtual content and -- when scene understanding is enabled -- approximated meshes for real-world objects. The mesh builds progressively as you move, so it contains some holes where we haven't currently scanned. These holes would show fog as if they were infinitely far away. We'll combine data from these two depth textures to resolve this discrepancy. ARKit provides depth values as a texture. Each pixel is the distance, in meters, of the sampled point. Since the sensor is at a fixed orientation on your iPhone or iPad, we'll ask ARKit to construct a conversion from the sensor's orientation to the current screen orientation, and then invert the result. To read virtual content depth, we need a little bit of info about how RealityKit packs depth. You'll notice that, unlike ARKit's sceneDepth, brighter values are nearer to the camera. Values are stored in a 0 to 1 range, using an Infinite Reverse-Z Projection. This just means that 0 means infinitely far away, and 1 is at the camera's near plane. We can easily reverse this transform by dividing the near plane depth by the sampled depth. Let's write a helper function to do this. We have a Metal function taking the sample's depth and projection matrix. Pixels with no virtual content are exactly 0. We'll clamp to a small epsilon to prevent divide by zero. To undo the perspective division, we take the last column's z value and divide by our sampled depth. Great! Now that we have our two depth values, we can use the minimum of the two as an input to our fog function. Our fog has a few parameters: a maximum distance, a maximum intensity at that distance, and a power curve exponent. The exact values were chosen experimentally. They shape our depth value to achieve our desired fog density. Now we're ready to put the pieces together. We have our depth value from ARKit, a linearized depth value from RealityKit, and a function for our fog. Let's write our compute shader. For each pixel, we start by sampling both linear depth values. Then we apply our fog function using our tuning parameters, which turns linear depth into a 0 to 1 value. Then we blend between source color and the fog color, depending on fogBlend's value, storing the result in outColor. To recap, RealityKit's new post process API enables a wide range of post effects. With Core Image, we've unlocked hundreds of ready-built effects. You can easily build new ones with Metal Performance Shaders, add screen overlays with SpriteKit, and write your own from scratch with Metal. For more information about Core Image or Metal Performance Shaders, see the sessions listed. Now that we've covered rendering effects, let's move onto our next topic, dynamic meshes. In RealityKit, mesh resources store mesh data. Previously, this opaque type allowed you to assign meshes to entities. This year, we're providing the ability to inspect meshes, create, and update meshes at runtime. Let's look at how we can add special effects to the diver. In this demo, we want to show a spiral effect where the spiral contours around the diver. You can also see how the spiral is changing its mesh over time to animate its movement. Let's have a look at how to create this using our new mesh APIs. The effect boils down into three steps. 
We use mesh inspection to measure the model by examining its vertices. We then build a spiral, using the measurements as a guide. And finally, we can update the spiral over time. Starting with mesh inspection. To explain how meshes are stored, let's look at our diver model. In RealityKit, the Diver's mesh is represented as a mesh resource. With this year's release, MeshResource now contains a member called Contents. There is where all of the processed mesh geometry lives. Contents contains a list of instances and models. Models contain the raw vertex data, while instances reference them and add a transform. Instances allow the same geometry to be displayed multiple times without copying the data. A model can have multiple parts. A part is a group of geometry with one material. Finally, each part contains the vertex data we're interested in, such as positions, normals, texture coordinates, and indices. Let's first look at how we would access this data in code. We'll make an extension on MeshResource.Contents, which calls a closure with the position of each vertex. We start by going through all of the instances. Each of these instances map to a model. For each instance, we find its transform relative to the entity. We can then go into each of the model's parts and access the part's attributes. For this function, we're only interested in position. We can then transform the vertex to the entity space position and call our callback. Now that we can visit the vertices, let's look at how we want to use this data. We'll section our diver into horizontal slices. For each slice, we'll find the bounding radius of our model, and do this for every slice. To implement this, we'll start by creating a zero-filled array with numSlices elements. We then figure out the bounds of the mesh along the y-axis to create our slices. Using the function we just created, for each vertex in the model, we figure out which slice it goes in and we update the radius with the largest radius for that slice. Finally, we return a Slices object containing the radii and bounds. Now that we've analyzed our mesh to know how big it is, let's look at how to create the spiral mesh. The spiral is a dynamically generated mesh. To create this mesh, we need to describe our data to RealityKit. We do this with a mesh descriptor. The mesh descriptor contains the positions, normals, texture coordinates, primitives, and material indices. Once you have a mesh descriptor, you can generate a mesh resource. This invokes RealityKit's mesh processor, which optimizes your mesh. It will merge duplicate vertices, triangulate your quads and polygons, and represent the mesh in the most efficient format for rendering. The result of this processing gives us a mesh resource, which we can assign to an entity. Note that normals, texture coordinates, and materials are optional. Our mesh processor will automatically generate correct normals and populate them. As part of the optimization process, RealityKit will regenerate the topology of the mesh. If you need a specific topology, you can use MeshResource.Contents directly. Now that we know how creating a mesh works, let's look at how to create the spiral. To model the spiral, let's take a closer look at a section.
A spiral is also known as a helix. We'll build this in evenly spaced segments. We can calculate each point using the mathematical definition of a helix and the radius from our analyzed mesh. Using this function for each segment on the helix, we can define four vertices. P0 and P1 are exactly the values that p() returns. To calculate P2 and P3, we can offset P0 and P1 vertically with our given thickness. We're creating triangles, so we need a diagonal. We'll make two triangles using these points. Time to put it all together. Our generateSpiral function needs to store positions and indices. Indices reference values in positions. For each segment, we'll calculate four positions and store their indices -- i0 is the index of p0 when it's added to the array. Then we add the four positions and six indices -- for two triangles -- to their arrays. Once you have your geometry, creating a mesh is simple. First, create a new MeshDescriptor. Then assign positions and primitives. We're using triangle primitives, but we could also choose quads or polygons. Once those two fields are populated, we have enough to generate a MeshResource. You can also provide other vertex attributes like normals, textureCoordinates, or material assignments. We've covered how to create the mesh. The last thing in our spiral example is mesh updates. We use mesh updates to get the spiral to move around the diver. To update the mesh, there's two ways. We could create a new MeshResource each frame using the MeshDescriptors API. But this is not an efficient route, as it will run through the mesh optimizer each frame. A more efficient route is to update the contents in the MeshResource. You can generate a new MeshContents and use it to replace the mesh. There is one caveat, however. If we created our original mesh using MeshDescriptor, RealityKit's mesh processor will have optimized the data. Topology is also reduced to triangles. As a result, make sure you know how your mesh is affected before applying any updates. Let's have a look at code for how you can update the spiral. We start by storing the contents of the existing spiral. Create a new model from the existing model. Then, for each part, we replace triangleIndices with a subset of indices. Finally, with the new contents, we can call replace on the existing MeshResource. And that's it for dynamic meshes. To summarize the key things about dynamic meshes, we've introduced a new Contents field in the MeshResource. This container allows you to inspect and modify a mesh's raw data. You can create new meshes using MeshDescriptor. This flexible route allows you to use triangles, quads, or even polygons, and RealityKit will generate an optimized mesh for rendering. Finally, to update meshes, we've provided the ability to update a MeshResource's contents, which is ideal for frequent updates. To wrap up, today we've shown off some of the new rendering features in RealityKit 2. Geometry modifiers let you move and modify vertices. Surface shaders allow you to define your model's surface appearance. You can use post effects to apply effects to the final frame, and dynamic meshes make it easy to create and modify meshes at runtime. To see more of this year's features, don't miss "Dive into RealityKit 2." And for more information about RealityKit, watch "Building Apps with RealityKit." We're very excited about this year's release, and can't wait to see the experiences you build with it. Thank you. ♪
-
-
4:52 - Seaweed Shader
#include <RealityKit/RealityKit.h>

[[visible]]
void seaweedGeometry(realitykit::geometry_parameters params)
{
    float spatialScale = 8.0;
    float amplitude = 0.05;

    float3 worldPos = params.geometry().world_position();
    float3 modelPos = params.geometry().model_position();

    float phaseOffset = 3.0 * dot(worldPos, float3(1.0, 0.5, 0.7));
    float time = 0.1 * params.uniforms().time() + phaseOffset;

    float3 maxOffset = float3(sin(spatialScale * 1.1 * (worldPos.x + time)),
                              sin(spatialScale * 1.2 * (worldPos.y + time)),
                              sin(spatialScale * 1.2 * (worldPos.z + time)));

    float3 offset = maxOffset * amplitude * max(0.0, modelPos.y);

    params.geometry().set_model_position_offset(offset);
}
-
5:43 - Assign Seaweed Shader
// Assign seaweed shader to model.
func assignSeaweedShader(to seaweed: ModelEntity) {
    let library = MTLCreateSystemDefaultDevice()!.makeDefaultLibrary()!

    let geometryModifier = CustomMaterial.GeometryModifier(named: "seaweedGeometry",
                                                           in: library)

    seaweed.model!.materials = seaweed.model!.materials.map { baseMaterial in
        try! CustomMaterial(from: baseMaterial,
                            geometryModifier: geometryModifier)
    }
}
-
9:21 - Octopus Shader
#include <RealityKit/RealityKit.h>

void transitionBlend(float time, half3 masks,
                     thread half &blend, thread half &colorBlend)
{
    half noise = masks.r;
    half gradient = masks.g;
    half mask = masks.b;

    half transition = (sin(time * 1.0) + 1) / 2;
    transition = saturate(transition);

    blend = 2 * transition - (noise + gradient) / 2;
    blend = 0.5 + 4.0 * (blend - 0.5); // more contrast
    blend = saturate(blend);
    blend = max(blend, mask);
    blend = 1 - blend;

    colorBlend = min(blend, mix(blend, 1 - transition, 0.8h));
}

[[visible]]
void octopusSurface(realitykit::surface_parameters params)
{
    constexpr sampler bilinear(filter::linear);

    auto tex = params.textures();
    auto surface = params.surface();
    auto material = params.material_constants();

    // USD textures have an inverse y orientation.
    float2 uv = params.geometry().uv0();
    uv.y = 1.0 - uv.y;

    half3 mask = tex.custom().sample(bilinear, uv).rgb;

    half blend, colorBlend;
    transitionBlend(params.uniforms().time(), mask, blend, colorBlend);

    // Sample both color textures.
    half3 baseColor1, baseColor2;
    baseColor1 = tex.base_color().sample(bilinear, uv).rgb;
    baseColor2 = tex.emissive_color().sample(bilinear, uv).rgb;

    // Blend colors and multiply by the tint.
    half3 blendedColor = mix(baseColor1, baseColor2, colorBlend);
    blendedColor *= half3(material.base_color_tint());

    // Set on the surface.
    surface.set_base_color(blendedColor);

    // Sample the normal and unpack.
    half3 texNormal = tex.normal().sample(bilinear, uv).rgb;
    half3 normal = realitykit::unpack_normal(texNormal);

    // Set on the surface.
    surface.set_normal(float3(normal));

    // Sample material textures.
    half roughness = tex.roughness().sample(bilinear, uv).r;
    half metallic = tex.metallic().sample(bilinear, uv).r;
    half ao = tex.ambient_occlusion().sample(bilinear, uv).r;
    half specular = tex.roughness().sample(bilinear, uv).r;

    // Apply material scaling factors.
    roughness *= material.roughness_scale();
    metallic *= material.metallic_scale();
    specular *= material.specular_scale();

    // Increase roughness for the red octopus.
    roughness *= (1 + blend);

    // Set material properties on the surface.
    surface.set_roughness(roughness);
    surface.set_metallic(metallic);
    surface.set_ambient_occlusion(ao);
    surface.set_specular(specular);
}
-
11:41 - Assign Octopus Shader
// Apply the surface shader to the Octopus.
func assignOctopusShader(to octopus: ModelEntity) {
    // Load additional textures.
    let color2 = try! TextureResource.load(named: "Octopus/Octopus_bc2")
    let mask = try! TextureResource.load(named: "Octopus/Octopus_mask")

    // Load the surface shader.
    let surfaceShader = CustomMaterial.SurfaceShader(named: "octopusSurface",
                                                     in: library)

    // Construct a new material with the contents of an existing material.
    octopus.model!.materials = octopus.model!.materials.map { baseMaterial in
        var material = try! CustomMaterial(from: baseMaterial,
                                           surfaceShader: surfaceShader)

        // Assign additional textures.
        material.emissiveColor.texture = .init(color2)
        material.custom.texture = .init(mask)

        return material
    }
}
-
14:13 - CoreImage PostEffect
// Needed for the CIFilter.thermal() builder API.
import CoreImage.CIFilterBuiltins

// Add RenderCallbacks to the ARView.
var ciContext: CIContext?

func initPostEffect(arView: ARView) {
    arView.renderCallbacks.prepareWithDevice = { [weak self] device in
        self?.prepareWithDevice(device)
    }
    arView.renderCallbacks.postProcess = { [weak self] context in
        self?.postProcess(context)
    }
}

func prepareWithDevice(_ device: MTLDevice) {
    self.ciContext = CIContext(mtlDevice: device)
}

// The CoreImage thermal filter.
func postProcess(_ context: ARView.PostProcessContext) {
    // Create a CIImage for the input color.
    let sourceColor = CIImage(mtlTexture: context.sourceColorTexture)!

    // Create the thermal filter.
    let thermal = CIFilter.thermal()
    thermal.inputImage = sourceColor

    // Create the CIRenderDestination.
    let destination = CIRenderDestination(mtlTexture: context.targetColorTexture,
                                          commandBuffer: context.commandBuffer)

    // Preserve the image orientation.
    destination.isFlipped = false

    // Instruct CoreImage to start our render task.
    _ = try? self.ciContext?.startTask(toRender: thermal.outputImage!,
                                       to: destination)
}
-
16:15 - Bloom Post Effect
var device: MTLDevice!
var bloomTexture: MTLTexture!

func initPostEffect(arView: ARView) {
    arView.renderCallbacks.prepareWithDevice = { [weak self] device in
        self?.prepareWithDevice(device)
    }
    arView.renderCallbacks.postProcess = { [weak self] context in
        self?.postProcess(context)
    }
}

func prepareWithDevice(_ device: MTLDevice) {
    self.device = device
}

func makeTexture(matching texture: MTLTexture) -> MTLTexture {
    let descriptor = MTLTextureDescriptor()
    descriptor.width = texture.width
    descriptor.height = texture.height
    descriptor.pixelFormat = texture.pixelFormat
    descriptor.usage = [.shaderRead, .shaderWrite]
    return device.makeTexture(descriptor: descriptor)!
}

func postProcess(_ context: ARView.PostProcessContext) {
    if self.bloomTexture == nil {
        self.bloomTexture = self.makeTexture(matching: context.sourceColorTexture)
    }

    // Reduce areas of 20% brightness or less to zero.
    let brightness = MPSImageThresholdToZero(device: context.device,
                                             thresholdValue: 0.2,
                                             linearGrayColorTransform: nil)
    brightness.encode(commandBuffer: context.commandBuffer,
                      sourceTexture: context.sourceColorTexture,
                      destinationTexture: bloomTexture!)

    // Blur the remaining areas.
    let gaussianBlur = MPSImageGaussianBlur(device: context.device, sigma: 9.0)
    gaussianBlur.encode(commandBuffer: context.commandBuffer,
                        inPlaceTexture: &bloomTexture!)

    // Add color plus bloom, writing the result to targetColorTexture.
    let add = MPSImageAdd(device: context.device)
    add.encode(commandBuffer: context.commandBuffer,
               primaryTexture: context.sourceColorTexture,
               secondaryTexture: bloomTexture!,
               destinationTexture: context.targetColorTexture)
}
-
17:15 - SpriteKit Post Effect
// Initialize the SpriteKit renderer.
var skRenderer: SKRenderer!

func initPostEffect(arView: ARView) {
    arView.renderCallbacks.prepareWithDevice = { [weak self] device in
        self?.prepareWithDevice(device)
    }
    arView.renderCallbacks.postProcess = { [weak self] context in
        self?.postProcess(context)
    }
}

func prepareWithDevice(_ device: MTLDevice) {
    self.skRenderer = SKRenderer(device: device)
    self.skRenderer.scene = SKScene(fileNamed: "GameScene")
    self.skRenderer.scene!.scaleMode = .aspectFill
    // Make the background transparent.
    self.skRenderer.scene!.backgroundColor = .clear
}

func postProcess(_ context: ARView.PostProcessContext) {
    // Blit (Copy) sourceColorTexture onto targetColorTexture.
    let blitEncoder = context.commandBuffer.makeBlitCommandEncoder()
    blitEncoder?.copy(from: context.sourceColorTexture, to: context.targetColorTexture)
    blitEncoder?.endEncoding()

    // Advance the scene to the new time.
    self.skRenderer.update(atTime: context.time)

    // Create a RenderPass writing to the targetColorTexture.
    let desc = MTLRenderPassDescriptor()
    desc.colorAttachments[0].loadAction = .load
    desc.colorAttachments[0].storeAction = .store
    desc.colorAttachments[0].texture = context.targetColorTexture

    // Render!
    self.skRenderer.render(withViewport: CGRect(x: 0, y: 0,
                                                width: context.targetColorTexture.width,
                                                height: context.targetColorTexture.height),
                           commandBuffer: context.commandBuffer,
                           renderPassDescriptor: desc)
}
-
19:08 - ARKit AR Depth
let width = context.sourceColorTexture.width
let height = context.sourceColorTexture.height

let transform = arView.session.currentFrame!.displayTransform(
    for: self.orientation,
    viewportSize: CGSize(width: width, height: height)
).inverted()
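The DepthFogParams struct in the next listing consumes this mapping as a 2x2 matrix plus an offset (arTransform and arOffset). One way to decompose the inverted CGAffineTransform into those fields is sketched below; the helper name and the exact packing are assumptions, not code from the session.

import simd
import CoreGraphics

// Sketch (assumed helper): pack the inverted display transform into the
// arTransform/arOffset pair that the depthFog shader applies as
// arTransform * coords + arOffset.
func makeARDepthMapping(from t: CGAffineTransform) -> (simd_float2x2, SIMD2<Float>) {
    // CGAffineTransform maps (x, y) to (a*x + c*y + tx, b*x + d*y + ty),
    // so (a, b) and (c, d) are the matrix columns and (tx, ty) is the offset.
    let matrix = simd_float2x2(columns: (SIMD2(Float(t.a), Float(t.b)),
                                         SIMD2(Float(t.c), Float(t.d))))
    let offset = SIMD2(Float(t.tx), Float(t.ty))
    return (matrix, offset)
}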
-
20:01 - Depth Fog Shader
typedef struct {
    simd_float4x4 viewMatrixInverse;
    simd_float4x4 viewMatrix;
    simd_float2x2 arTransform;
    simd_float2 arOffset;
    float fogMaxDistance;
    float fogMaxIntensity;
    float fogExponent;
} DepthFogParams;

float linearizeDepth(float sampleDepth, float4x4 viewMatrix)
{
    constexpr float kDepthEpsilon = 1e-5f;

    float d = max(kDepthEpsilon, sampleDepth);

    // linearize (we have reverse infinite projection)
    d = abs(-viewMatrix[3].z / d);

    return d;
}

constexpr sampler textureSampler(address::clamp_to_edge, filter::linear);

float getDepth(uint2 gid, constant DepthFogParams &args,
               texture2d<float, access::sample> inDepth,
               depth2d<float, access::sample> arDepth)
{
    // normalized coordinates
    float2 coords = float2(gid) / float2(inDepth.get_width(), inDepth.get_height());

    float2 arDepthCoords = args.arTransform * coords + args.arOffset;

    float realDepth = arDepth.sample(textureSampler, arDepthCoords);
    float virtualDepth = linearizeDepth(inDepth.sample(textureSampler, coords)[0],
                                        args.viewMatrix);
    return min(virtualDepth, realDepth);
}

[[kernel]]
void depthFog(uint2 gid [[thread_position_in_grid]],
              constant DepthFogParams& args [[buffer(0)]],
              texture2d<half, access::sample> inColor [[texture(0)]],
              texture2d<float, access::sample> inDepth [[texture(1)]],
              texture2d<half, access::write> outColor [[texture(2)]],
              depth2d<float, access::sample> arDepth [[texture(3)]])
{
    const half4 fogColor = half4(0.5, 0.5, 0.5, 1.0);

    float depth = getDepth(gid, args, inDepth, arDepth);

    // Ignore depth values greater than the maximum fog distance.
    float fogAmount = saturate(depth / args.fogMaxDistance);
    float fogBlend = pow(fogAmount, args.fogExponent) * args.fogMaxIntensity;

    half4 nearColor = inColor.read(gid);
    half4 color = mix(nearColor, fogColor, fogBlend);

    outColor.write(color, gid);
}
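The session doesn't show the Swift side that dispatches this kernel, so here is a minimal sketch of what the postProcess callback might look like. It assumes fogPipeline is an MTLComputePipelineState built in prepareWithDevice from the "depthFog" function, arDepthTexture is an MTLTexture created from ARKit's sceneDepth depth map (for example via a CVMetalTextureCache), fogParams is a DepthFogParams value shared with Swift through a bridging header, and that the texture indices match the shader above.

// Sketch: encode the depthFog kernel in the post-process callback.
func postProcess(_ context: ARView.PostProcessContext) {
    guard let encoder = context.commandBuffer.makeComputeCommandEncoder() else { return }

    var params = fogParams // DepthFogParams filled in each frame (assumption)
    encoder.setComputePipelineState(fogPipeline)
    encoder.setBytes(&params, length: MemoryLayout<DepthFogParams>.stride, index: 0)
    encoder.setTexture(context.sourceColorTexture, index: 0)
    encoder.setTexture(context.sourceDepthTexture, index: 1)
    encoder.setTexture(context.targetColorTexture, index: 2)
    encoder.setTexture(arDepthTexture, index: 3)

    // One thread per pixel of the output texture.
    let threadsPerGroup = MTLSize(width: 8, height: 8, depth: 1)
    let groups = MTLSize(width: (context.targetColorTexture.width + 7) / 8,
                         height: (context.targetColorTexture.height + 7) / 8,
                         depth: 1)
    encoder.dispatchThreadgroups(groups, threadsPerThreadgroup: threadsPerGroup)
    encoder.endEncoding()
}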
-
23:32 - MeshResource.Contents extension
// Examine each vertex in a mesh.
extension MeshResource.Contents {
    func forEachVertex(_ callback: (SIMD3<Float>) -> Void) {
        for instance in self.instances {
            guard let model = self.models[instance.model] else { continue }
            let instanceToModel = instance.transform
            for part in model.parts {
                for position in part.positions {
                    let vertex = instanceToModel * SIMD4<Float>(position, 1.0)
                    callback([vertex.x, vertex.y, vertex.z])
                }
            }
        }
    }
}
-
24:20 - Mesh Radii
struct Slices {
    var radii: [Float] = []
    var range: ClosedRange<Float> = 0...0

    var sliceHeight: Float {
        return (range.upperBound - range.lowerBound) / Float(sliceCount)
    }

    var sliceCount: Int {
        return radii.count
    }

    func heightAt(index: Int) -> Float {
        return range.lowerBound + Float(index) * self.sliceHeight + self.sliceHeight * 0.5
    }

    func radiusAt(y: Float) -> Float {
        let relativeY = y - heightAt(index: 0)
        if relativeY < 0 {
            return radii.first!
        }

        let slice = relativeY / sliceHeight
        let sliceIndex = Int(slice)
        if sliceIndex + 1 >= sliceCount {
            return radii.last!
        }

        // 0 to 1
        let t = (slice - floor(slice))

        // linearly interpolate between two closest values
        let prev = radii[sliceIndex]
        let next = radii[sliceIndex + 1]
        return mix(prev, next, t)
    }

    func radiusAtIndex(i: Float) -> Float {
        let asFloat = i * Float(radii.count)
        var prevIndex = Int(asFloat.rounded(.down))
        var nextIndex = Int(asFloat.rounded(.up))

        if prevIndex < 0 {
            prevIndex = 0
        }
        if nextIndex >= radii.count {
            nextIndex = radii.count - 1
        }

        let prev = radii[prevIndex]
        let next = radii[nextIndex]
        let remainder = asFloat - Float(prevIndex)
        let lerped = mix(prev, next, remainder)
        return lerped + 0.5
    }
}

func meshRadii(for mesh: MeshResource, numSlices: Int) -> Slices {
    var radiusForSlice: [Float] = .init(repeating: 0, count: numSlices)
    let (minY, maxY) = (mesh.bounds.min.y, mesh.bounds.max.y)

    mesh.contents.forEachVertex { modelPos in
        let normalizedY = (modelPos.y - minY) / (maxY - minY)
        let sliceY = min(Int(normalizedY * Float(numSlices)), numSlices - 1)
        let radius = length(SIMD2<Float>(modelPos.x, modelPos.z))
        radiusForSlice[sliceY] = max(radiusForSlice[sliceY], radius)
    }

    return Slices(radii: radiusForSlice, range: minY...maxY)
}
-
25:58 - Spiral Point
// The angle between two consecutive segments.
let theta = (2 * .pi) / Float(segmentsPerRevolution)

// How far to step in the y direction per segment.
let yStep = height / Float(totalSegments)

func p(_ i: Int, radius: Float = 1.0) -> SIMD3<Float> {
    let y = yStep * Float(i)
    let x = radius * cos(Float(i) * theta)
    let z = radius * sin(Float(i) * theta)
    return SIMD3<Float>(x, y, z)
}
-
26:37 - Generate Spiral
extension MeshResource {
    static func generateSpiral(
        radiusAt: (Float) -> Float,
        radiusAtIndex: (Float) -> Float,
        thickness: Float,
        height: Float,
        revolutions: Int,
        segmentsPerRevolution: Int) -> MeshResource
    {
        let totalSegments = revolutions * segmentsPerRevolution
        let totalVertices = (totalSegments + 1) * 2

        var positions: [SIMD3<Float>] = []
        var normals: [SIMD3<Float>] = []
        var indices: [UInt32] = []
        var uvs: [SIMD2<Float>] = []

        positions.reserveCapacity(totalVertices)
        normals.reserveCapacity(totalVertices)
        uvs.reserveCapacity(totalVertices)
        indices.reserveCapacity(totalSegments * 4)

        for i in 0..<totalSegments {
            let theta = Float(i) / Float(segmentsPerRevolution) * 2 * .pi
            let t = Float(i) / Float(totalSegments)
            let segmentY = t * height

            if i > 0 {
                let base = UInt32(positions.count - 2)
                let prevInner = base
                let prevOuter = base + 1
                let newInner = base + 2
                let newOuter = base + 3

                indices.append(contentsOf: [
                    prevInner, newOuter, prevOuter, // first triangle
                    prevInner, newInner, newOuter   // second triangle
                ])
            }

            let radialDirection = SIMD3<Float>(cos(theta), 0, sin(theta))
            let radius = radiusAtIndex(t)

            var position = radialDirection * radius
            position.y = segmentY

            positions.append(position)
            positions.append(position + [0, thickness, 0])

            normals.append(-radialDirection)
            normals.append(-radialDirection)

            // U = in/out
            // V = distance along spiral
            uvs.append(.init(0.0, t))
            uvs.append(.init(1.0, t))
        }

        var mesh = MeshDescriptor()
        mesh.positions = .init(positions)
        mesh.normals = .init(normals)
        mesh.primitives = .triangles(indices)
        mesh.textureCoordinates = .init(uvs)

        return try! MeshResource.generate(from: [mesh])
    }
}
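To connect the pieces, a minimal usage sketch might look like the following, assuming diver is the ModelEntity measured earlier; the slice count, thickness, revolution count, and segment count are placeholder assumptions rather than values from the session.

// Sketch: measure the diver, generate a spiral sized to it, and attach it.
let diverMesh = diver.model!.mesh
let slices = meshRadii(for: diverMesh, numSlices: 32)

let spiralMesh = MeshResource.generateSpiral(
    radiusAt: { y in slices.radiusAt(y: y) },
    radiusAtIndex: { i in slices.radiusAtIndex(i: i) },
    thickness: 0.02,
    height: diverMesh.bounds.max.y - diverMesh.bounds.min.y,
    revolutions: 4,
    segmentsPerRevolution: 64)

let spiralEntity = ModelEntity(mesh: spiralMesh,
                               materials: [SimpleMaterial()])
diver.addChild(spiralEntity)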
-
28:17 - Update Spiral
if var contents = spiralEntity?.model?.mesh.contents {
    contents.models = .init(contents.models.map { model in
        var newModel = model
        newModel.parts = .init(model.parts.map { part in
            let start = min(self.allIndices.count, max(0, numIndices - stripeSize))
            let end = max(0, min(self.allIndices.count, numIndices))

            var newPart = part
            newPart.triangleIndices = .init(self.allIndices[start..<end])
            return newPart
        })
        return newModel
    })

    try? spiralEntity?.model?.mesh.replace(with: contents)
}
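The snippet above relies on state the session doesn't show: the full index list of the generated spiral, a stripe length, and a counter that advances each frame. A hedged sketch of how that state might be driven is below; the property names, the stripe size, and the wrap-around rule are assumptions.

// Sketch of the surrounding state assumed by the update above.
var allIndices: [UInt32] = []   // full triangle indices captured from the generated spiral
let stripeSize = 600            // how many indices the visible stripe spans (assumption)
var numIndices = 0              // advances each frame so the stripe crawls along the helix

func advanceSpiral(by deltaIndices: Int) {
    // Wrap once the stripe has fully left the end of the spiral.
    numIndices = (numIndices + deltaIndices) % (allIndices.count + stripeSize)
    // ...then run the mesh-contents update shown above.
}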
-
-