What's new in RealityKit
RealityKit is Apple's rendering, animation, physics, and audio engine built from the ground up for augmented reality. It reimagines the traditional 3D renderer to make it easy for developers to prototype and produce high-quality AR experiences. Learn how to effectively implement each of the latest improvements to RealityKit in your app. Discover features like video textures, scene understanding using the LiDAR scanner on iPad Pro, Location Anchors, face tracking, and improved debugging tools. To get the most out of this session, you should understand the building blocks of developing RealityKit-based apps and games. Watch “Introducing RealityKit and Reality Composer” for a primer. For more on how you can integrate Reality Composer into your augmented reality workflow, watch "The artist's AR toolkit".
♪ Hello, and welcome to WWDC.
My name's Saad and I'm from the RealityKit team.
Last year we released RealityKit, an AR-focused 3D engine.
And since then, we've been working hard to add many new features.
Today, I'm excited to tell you about some of them.
To start off, we have video materials.
And video materials allow you to use videos as materials in RealityKit.
Second, is scene understanding using the brand new LiDAR sensor.
Thanks to ARKit processing data from this new sensor, we're able to bring the real world into your virtual one.
As a result, we have a huge set of features that allow you to make your virtual content interact with the real world.
Next is improved rendering debugging.
With the rendering being such a huge pipeline, we've added the ability to inspect various properties related to the rendering of your entity.
And the next two updates are related to ARKit 4 integration.
ARKit has extended its Face Tracking support to work on more devices.
This means that now face anchors will also work on more devices.
And ARKit has also added location anchors.
Location anchors allow you to place AR content at specific locations in the real world using RealityKit.
With these new features in mind, I want to now show you an experience that you can now create using RealityKit.
This experience is built upon a problem we face as developers every day.
When we code, what do we get? We get bugs like this one.
And this is quite a simple looking bug.
And in real life, bugs are never this simple.
So let's add some complexity to it using our new video materials feature.
The bug now has a glowing body as well as glittering in its eyes.
So what does a bug normally do? Well, it hides.
As the bug runs to hide, the tree occludes the bug.
But it can't hide forever.
You know you'll fix it.
But until then, you know it's going to try to run away from you.
And using scene understanding, we can implement this logic.
Once you finally catch it, you want to make sure you fix it for good.
So let's pick it up and crush it into the pavement.
And because of scene understanding, the bug disintegrates when it collides against the real world.
And that's it for the experience.
Now let's take a deep dive into these new features and see how you can use them.
We'll start with video materials.
We saw from the experience that video materials were used to give the bug a glowing effect as well as glittering in its eyes.
Now, let's have a look at the video.
On the left, you can see the video associated with the texture on the bug.
You can see how the eyes map on and you can also see how we're able to share one portion of the texture for both eyes.
You can also see how the body is mapped on: as the glow pulses in the texture, you can see it pulse through the body.
So in a nutshell, video materials allow you to have textures that change over time.
They get these textures from a video, and you can use this for a lot of things.
You can use it to simulate the glow effect you saw.
You can also use it to provide video instructions.
Simply create a plane and map a video onto it.
You can even combine ARKit image anchors with video materials to bring images to life.
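As a concrete example, here is a minimal sketch of an instructional video plane attached to an ARKit image anchor. The resource group "AR Resources", the image name "Poster", and the video file "instructions.mp4" are hypothetical, and the VideoMaterial initializer follows the sample shown at 4:52 below.

import AVFoundation
import RealityKit

func addInstructionalVideoPlane(to arView: ARView) {
    // Load the instructional video and create a player for it (hypothetical asset name)
    let url = Bundle.main.url(forResource: "instructions", withExtension: "mp4")!
    let player = AVPlayer(url: url)

    // Create a plane and use the video as its material
    let plane = ModelEntity(mesh: .generatePlane(width: 0.3, depth: 0.2))
    plane.model?.materials = [VideoMaterial(player: player)]

    // Anchor the plane to the detected image and start playback
    // (hypothetical AR Resource Group "AR Resources" containing an image named "Poster")
    let imageAnchor = AnchorEntity(.image(group: "AR Resources", name: "Poster"))
    imageAnchor.addChild(plane)
    arView.scene.addAnchor(imageAnchor)
    player.play()
}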
But that's not all.
Video materials also play audio, spatialized audio.
When you apply a video material onto an entity, that entity becomes a spatialized audio source for your video.
Spatialized audio is when the sound source acts as if it's emitted from a specific location.
In this case, from the entity.
This helps build a more immersive experience.
Now, because all of this is done under the hood when you apply the material, you don't have to do extra work like synchronization or your own manual audio playback.
So now that you know what they are, how do we use them? The flow is quite simple.
First, we have to load the video.
RealityKit leverages the power of AVFoundation's AVPlayer to use as a video source.
You load the video using AVFoundation and create an AVPlayer object.
Once you have an AVPlayer object, you can then use it to create a video material.
This video material is just like any other RealityKit material.
You can assign it on any entity that you want to use it on.
So let's see how these steps map to code.
First, we use AVFoundation to load the video.
Next we create a video material using an AVPlayer object and assign it to the bug entity.
Once we've assigned the material, we can then play the video.
Now, the video can be controlled as you normally would through AVPlayer.
And this is quite a simple example of video playback.
However, since RealityKit's video material uses AVPlayer you get all of the great functionality and features that AVFoundation brings.
For example, you can use AVPlayer to directly play, pause, and seek inside your media.
This allows you to use a video atlas instead of having one video per texture.
You can use AVPlayer properties to trigger state transitions in your app.
For example, when your video finishes you can use this to go to the next stage of your app.
You can use AVPlayerLooper to play one video in a loop and you can use AVQueuePlayer to sequentially play a queue of videos.
And lastly, you can even play remote media served using HTTP Live Streaming.
If you want to learn more, see the Advances in AVFoundation session from WWDC 2016 for more information.
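To make that concrete, here is a minimal sketch of the looping and streaming cases, plus a notification you could observe to drive a state transition when playback finishes. The HLS URL is hypothetical; the resulting players can be handed to a VideoMaterial exactly as in the sample at 4:52 below.

import AVFoundation

// Loop one video: AVPlayerLooper manages the items of an AVQueuePlayer for you
let loopAsset = AVURLAsset(url: Bundle.main.url(forResource: "glow", withExtension: "mp4")!)
let queuePlayer = AVQueuePlayer()
let looper = AVPlayerLooper(player: queuePlayer, templateItem: AVPlayerItem(asset: loopAsset))
queuePlayer.play()   // keep a reference to the looper for as long as looping should continue

// Play remote media served over HTTP Live Streaming (hypothetical URL)
let hlsItem = AVPlayerItem(url: URL(string: "https://example.com/stream/master.m3u8")!)
let hlsPlayer = AVPlayer(playerItem: hlsItem)
hlsPlayer.play()

// Trigger a state transition in your app when the video finishes
let endObserver = NotificationCenter.default.addObserver(forName: .AVPlayerItemDidPlayToEndTime,
                                                         object: hlsItem,
                                                         queue: .main) { _ in
    // Move to the next stage of the app
}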
This concludes video materials.
To summarize, video materials allow you to use videos as a texture source and spatial audio source.
Let's move on to our next big feature: scene understanding.
Scene understanding has one main goal.
Its goal is to make virtual content interact with the real world.
To achieve this, we want to bring everything in the real world into your virtual one.
So let's see how we can do this.
We start in the ARView.
The ARView has a list of settings related to the real world under the environment struct.
These settings configure the background image, environment based lighting, as well as spatial audio options.
And we consider the real world to be part of your environment.
As a result, we've added a new scene understanding option set, and it lets you configure how the real world interacts with your virtual content.
It has four options.
First, we have occlusion.
This means real-world objects occlude virtual objects.
Second is receives lighting.
This allows virtual objects to cast shadows on real-world surfaces.
Third is physics.
This enables virtual objects to physically interact with the real world.
And last but not least, we have collision.
This enables the generation of collision events, as well as the ability to ray-cast against the real world.
Now, one thing to know: receives lighting automatically turns on occlusion, and likewise, physics automatically turns on collision.
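Here is a minimal sketch of what turning these options on might look like, listing all four explicitly even though receives lighting and physics imply the other two.

import RealityKit

func enableSceneUnderstanding(on arView: ARView) {
    // Occlusion: real-world objects occlude virtual ones
    arView.environment.sceneUnderstanding.options.insert(.occlusion)
    // Receives lighting: virtual objects cast shadows on real-world surfaces (implies occlusion)
    arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
    // Physics: virtual objects physically interact with the real world (implies collision)
    arView.environment.sceneUnderstanding.options.insert(.physics)
    // Collision: collision events and ray-casting against the real world
    arView.environment.sceneUnderstanding.options.insert(.collision)
}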
Now let's have a look at these options in a bit more detail.
We'll start with occlusion.
So on the left without occlusion, you can see that the bug is always visible.
On the right with occlusion, the bug is hidden behind the tree.
So you can see how occlusion helps with the realism, by hiding virtual objects behind real ones.
To use occlusion, simply add .occlusion into your scene understanding options.
Our next option was receives lighting.
On the left, without receives lighting, the bug looks like it's floating because of the lack of shadows.
And now on the right with receives lighting, the bug casts a shadow and looks grounded.
To use shadows, like occlusion, simply add .receivesLighting into your option set.
These new shadows are similar to the shadows that you're used to seeing on horizontally anchored objects.
However, because we're using real-world surfaces you no longer have to anchor these objects to horizontal planes.
This means that any entity that you have in the virtual world will cast a shadow onto the real world.
It's important to note though that the shadows imitate a light shining straight down.
This means you won't see shadows on walls.
And for those shadows, you still have to anchor your entity vertically.
We can now move on to our third option: physics.
Previously, for AR experiences you were just limited to using planes or various primitives to represent your world.
This is unrealistic, because the real world doesn't conform to these shapes.
Now, because of scene understanding physics you can see that when the bug disintegrates, the small pieces bounce off the steps and scatter all around.
To use physics, you add the .physics option into your scene understanding option set.
There are a couple of specifics you should know, however.
First, we consider real-world objects to be static with infinite mass.
This means that they're not movable as you'd expect in real life.
Second, these meshes are constantly updating.
This means that you should never expect objects to stay still, especially on non-planar surfaces.
Third, the reconstructed mesh is limited to regions where the user has scanned.
This means that if the user has never scanned the floor, there will be no floor in your scene.
And as a result, objects will fall right through.
You should design your experience to make sure that the user has scanned the room before starting.
And fourth, the mesh is an approximation of the real world.
While the occlusion mask is very accurate, the mesh for physics is less so.
As a result, don't expect the mesh to have super crisp edges as you can see in the image on the right.
Lastly, if you do use physics, collaborative sessions are not supported.
Collaborative sessions will still work, just with the other scene understanding options.
Let's now look at our last option, collision.
Collision is a bigger topic.
But before we get into that, to use collision, similar to the previous three options, just insert .collision into your scene understanding options.
Now the reason collision is a bigger topic is because it has two use cases.
The first is ray-casting.
Ray-casting is a super powerful tool.
You can use it to do a lot of things such as pathfinding, initial object placement, line of sight testing, and keeping things on meshes.
The next use case is collision events.
We want to do some action when we detect a collision between a real-world object and a virtual one.
So let's recap.
Let's see how we use the two use cases in our experience.
Let's start by looking at ray-casting.
You can see how the bug stays on the tree and finds different points to visit.
Ray-casting allows us to keep the bug on the tree by ray-casting from the body towards the tree, to find the closest contact point.
And ray-casting is also used to find different points for the bug to visit.
And let's look at the next case, collision events.
By using collision events, we were able to detect when the bug is tossed and hits the real world.
When it does, we make it disintegrate.
So now that we've seen these two use cases and how they helped build the experience, there's still one more thing you need to know about using them.
Ray-casting returns a list of entities and collision events happen between two entities.
This means we need an entity, an entity that corresponds to a real-world object.
As a result, we're introducing the scene understanding entity.
A scene understanding entity is just an entity, an entity that consists of various components.
It contains a transform component, collision component, and a physics component.
And these are automatically added, based on your scene understanding options.
There's still one more component, and that is the scene understanding component.
The scene understanding component is unique to a scene understanding entity.
It is what makes a scene understanding entity, a scene understanding entity.
When looking for a real-world object in a collision event or ray-casting results, you simply need to find an entity with this component.
And similarly, with this new component we have a new HasSceneUnderstanding trait.
All scene understanding entities will conform to this trait.
It's really important to realize that these entities are created and managed by RealityKit.
You should consider these entities read-only and not modify properties on their components.
Doing so can result in undefined behavior.
So now that we have scene understanding entities in our tool box and we know how to find them, let's take a look at how we can use them in code for the collision use cases we saw.
We'll start with ray-casting.
In this sample, we'll look at implementing some simple object avoidance for a bug entity.
We start by figuring out where the bug is looking and get a corresponding ray.
Once we have this ray, we can then do a RealityKit ray-cast.
The ray-cast returns a list containing both virtual and real-world entities.
We only want the real-world entities.
Thus, we filter using the HasSceneUnderstanding trait.
And with these filtered results, we know that the first entity in the list is the closest real-world object.
Then finally, we can do some simple object avoidance by looking at the distance to the nearest object and taking an action.
Maybe we go right, maybe we go left.
So now let's move on to another code example, a code example using collision events.
In this sample, we'll look at implementing something similar to the bug disintegration we saw in the experience.
We start by subscribing to collision events between all entities.
When we get a callback for a collision, we need to figure out if it was with a real-world object.
We need to see if either entity is a scene understanding entity by using the HasSceneUnderstanding trait.
If neither of these entities conform to this trait, then, well, we know we don't have a real-world collision.
If, however, one entity does conform to this trait, then we know we have a real-world collision and that the other entity is the bug.
We can then make the bug disintegrate by doing an asset swap.
Now this leads to one question that I want to discuss.
We can react when objects collide against the real world, but what if we don't want them to collide at all? To do this, we need to use collision filters, and collision filters need collision groups.
If we want objects not to collide with the real world, we need to filter them out, using the collision group for the real world.
And as a result, we're introducing a new collision group called scene understanding for real-world objects.
To use this group, set your collision filter to include or not include scene understanding to filter the collision appropriately.
And because RealityKit manages scene understanding entities, it automatically sets the collision groups on them.
So you don't have to manually do this.
That covers all of the collision related aspects of scene understanding.
Using all of the scene understanding features I talked about, you can now create a great app.
However, once you start building your app, you might run into some issues and then you're not sure if it's the mesh that's the problem or your logic.
So to help with this, we've added the ability to visualize the mesh.
Let's take a look.
Here you can see a video with the debug mesh visualization turned on.
This shows us the raw, real-world mesh.
The mesh is color coded by the distance away from the camera.
You can also see how this mesh is constantly updating, and the reconstructed mesh is an approximation of the real world.
You can see how the mesh is not crisp at the edges of the stairs.
To enable this visualization, add the .showSceneUnderstanding option into the ARView's debug options.
The colors you saw in the video are color coded using the chart below.
You can see that the color varies by distance and past five meters, everything is colored white.
And it's important to note that this option shows you the raw, real-world mesh.
If you want to see the physics mesh, then you just turn on the regular physics debug view that's already available.
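As a quick sketch, both debug views are just options on the ARView.

import RealityKit

func enableMeshDebugging(on arView: ARView) {
    // Show the raw, real-world mesh, color coded by distance from the camera
    arView.debugOptions.insert(.showSceneUnderstanding)

    // Show the physics mesh using the regular physics debug view
    arView.debugOptions.insert(.showPhysics)
}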
And that covers everything about scene understanding.
And as you can tell, there was a lot of cool stuff you can do with scene understanding.
And so, I want to summarize some key takeaways.
The first is the goal of scene understanding.
The goal of scene understanding is to make your virtual content interact with the real world.
And one set of these interactions is through occlusion and shadows.
Real-world objects will occlude and receive shadows from virtual ones.
Another set of interactions is through physics and collision.
Physics lets objects physically interact with the real world.
Collision, on the other hand, lets you know when objects physically collide.
It also enables you to ray-cast against the real world.
Real-world objects have corresponding scene understanding entities.
These entities can be identified using the HasSceneUnderstanding trait.
It's important to remember that these entities are created and managed by RealityKit and they should be considered read-only.
And lastly, we have debug mesh visualization.
This allows you to view the raw, real-world mesh.
And that's it for scene understanding.
We can now move on to our next major feature: improved rendering debugging.
Rendering is a huge pipeline with lots of small components.
This includes model loading, material setup, scene lighting, and much more.
And to help debug rendering-related issues, we've added the ability to inspect various properties related to the rendering of your entity.
Let me show you an example of what I mean by looking at our bug.
Let's have a look at its base color texture.
How about its normal map or its texture coordinates? So we can see these properties, but how do they actually help us? They can help us because you can look at normals and texture coordinates to ensure that your model was loaded correctly.
This is especially important if you find a model off the Internet and are having issues with its rendering.
Maybe the model was just bad.
If you're using a simple material on an entity and setting the base color, roughness or metallic parameter and things don't look right, you can inspect those parameters to verify that they are set correctly.
And finally, you can use PBR-related outputs, such as diffuse lighting received or specular lighting received, to know how much to tweak your material parameters.
Now, to visualize these properties, we've added a new component: the debug model component.
To enable the visualization of a property, simply choose a property, create a debug model component using the property, and then assign it to your entity.
And you can choose from a huge list of properties.
Currently we have 16.
These can be grouped as vertex attributes, material parameters, and PBR outputs.
Finally, one last thing to note: the visualization only applies to the targeted entity and is not inherited by its children.
USDZ files may have multiple entities with a varying hierarchy.
As a result, you need to add the component to each and every entity that you want to inspect.
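Here is a minimal sketch of doing that, assuming the debug model component described here surfaces in code as ModelDebugOptionsComponent; it walks an entity's hierarchy and applies the same visualization to every entity.

import RealityKit

// Apply a debug visualization to an entity and all of its descendants,
// since the component is not inherited by children.
func visualize(_ mode: ModelDebugOptionsComponent.VisualizationMode, on entity: Entity) {
    entity.components.set(ModelDebugOptionsComponent(visualizationMode: mode))
    for child in entity.children {
        visualize(mode, on: child)
    }
}

// Example: inspect the normals of the bug model
// visualize(.normal, on: bugEntity)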
And that's it.
Hopefully with this component, you're able to iterate and debug rendering problems much faster.
That covers everything related to improved rendering debugging.
We've also covered all of the features that are RealityKit specific.
Let's move on to our next section: integration with ARKit 4.
ARKit 4 has many updates this year.
There are two updates that relate to RealityKit.
The first is Face Tracking.
Support for Face Tracking is now extended to devices without a TrueDepth camera, as long as they have an A12 processor or later.
This includes new devices, such as the iPhone SE.
As a RealityKit developer, if you were using face anchors before, whether you created them using code or Reality Composer, your applications should now work without any changes.
The second ARKit feature is location anchors.
Location anchors let you create anchors using real-world coordinates.
ARKit takes these coordinates and converts them into locations relative to your device.
This means that you can now place AR content at specific locations in the real world.
As a RealityKit developer to use location anchors, you create an anchor entity using a location anchor.
Any virtual content that you anchor under this entity will show at this specific location.
And creating an anchor entity is also super simple: since ARKit's new ARGeoAnchor class is just a subclass of ARAnchor, you can call the already existing anchor initializer to create an anchor entity.
Now, one thing to note is that ARKit also introduced a new ARGeoTrackingConfiguration for location anchors.
And since this is not a world tracking configuration, you need to manually configure and start a location anchor session in the ARView.
This also means certain features -- like scene understanding that I just talked about -- will not work.
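Here is a minimal sketch of that flow, using a hypothetical coordinate and asset name, and assuming geo tracking is supported at that location.

import ARKit
import CoreLocation
import RealityKit

func placeContentAtLocation(in arView: ARView) {
    // Manually configure and run a geo tracking session,
    // since this is not a world tracking configuration
    arView.automaticallyConfigureSession = false
    arView.session.run(ARGeoTrackingConfiguration())

    // Create a location anchor from real-world coordinates and add it to the session
    let coordinate = CLLocationCoordinate2D(latitude: 37.3349, longitude: -122.0090)
    let geoAnchor = ARGeoAnchor(coordinate: coordinate)
    arView.session.add(anchor: geoAnchor)

    // Create an anchor entity using the existing anchor initializer,
    // then attach virtual content under it (hypothetical asset name)
    let anchorEntity = AnchorEntity(anchor: geoAnchor)
    if let model = try? Entity.load(named: "bug") {
        anchorEntity.addChild(model)
    }
    arView.scene.addAnchor(anchorEntity)
}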
If you want to learn more, I suggest checking out the Introducing ARKit 4 talk.
It will go over how to set up your session, how to create anchors and how to use them with RealityKit.
And that summarizes the main things you need to know about ARKit 4 and its integration with RealityKit for iOS 14.
This also concludes all of the features that I want to talk to you about this year.
And there were a lot of them.
So I want to summarize everything we've learned so far.
We started with video materials.
Video materials allow you to use videos as a texture and spatial audio source on your entity.
This can be used for a lot of things, such as simulated lighting effects or instructional videos.
Next, thanks to scene understanding using the brand new LiDAR sensor, we're able to bring the real world into your virtual one through many ways, such as occlusion, shadows and physics.
You can now leverage ray-casting to implement smart character behavior to have it react with the real world.
And you can also use collision events to implement collision responses when your objects collide with the real world.
And then our next update was an improved ability to debug rendering problems.
With our new debug model component, you can inspect any rendering related property on your entity, such as vertex attributes, material parameters and PBR related outputs.
We then had ARKit related integrations.
ARKit extended Face Tracking support to devices without a TrueDepth camera.
This also means face anchors will work on a lot more devices like the new iPhone SE, without requiring any code changes.
And finally, ARKit added location anchors.
Location anchors allow you to place AR content at specific locations in the real world using RealityKit.
And that concludes my talk.
Thank you for your time, and I can't wait to see what incredible content you will make using these new features.
-
4:52 - Loading a video material
// Use AVFoundation to load a video
let asset = AVURLAsset(url: Bundle.main.url(forResource: "glow", withExtension: "mp4")!)
let playerItem = AVPlayerItem(asset: asset)

// Create a Material and assign it to your model entity...
let player = AVPlayer()
bugEntity.materials = [VideoMaterial(player: player)]

// Tell the player to load and play
player.replaceCurrentItem(with: playerItem)
player.play()
-
13:58 - Implementing object avoidance with scene understanding
// Get the position and forward direction of the bug in world space
let bugOrigin = bug.position(relativeTo: nil)
let bugForward = bug.convert(direction: [0, 0, 1], relativeTo: nil)

// Perform a raycast
let collisionResults = arView.scene.raycast(origin: bugOrigin, direction: bugForward)

// Get all hits against a scene understanding entity
let filteredResults = collisionResults.filter { $0.entity is HasSceneUnderstanding }

// Pick the closest one and get the collision point
guard let closestCollisionPoint = filteredResults.first?.position else { return }

if length(bugOrigin - closestCollisionPoint) < safeDistance {
    // Avoid obstacle too close to the object's forward direction
}
-
14:48 - Using collision events with a scene understanding entity
// Subscribe to all collision events
// (keep a reference to the returned Cancellable so the subscription stays alive)
let collisionSubscription = arView.scene.subscribe(to: CollisionEvents.Began.self) { event in

    // Check whether either entity conforms to HasSceneUnderstanding
    let entityAIsRealWorld = event.entityA is HasSceneUnderstanding
    let entityBIsRealWorld = event.entityB is HasSceneUnderstanding

    guard entityAIsRealWorld || entityBIsRealWorld else {
        // Did not collide with the real world
        return
    }

    // The bug entity is the one that is not the scene understanding entity
    let bugEntity = entityAIsRealWorld ? event.entityB : event.entityA

    // Disintegrate the bug entity
    // ...
}
-
16:00 - Real world collision filtering
// Only collide with the real world
entity.collision?.filter.mask = [.sceneUnderstanding]

// Never collide with the real world
entity.collision?.filter.mask = CollisionGroup.all.subtracting(.sceneUnderstanding)