Advanced Scene Understanding in AR
ARKit 3.5 and RealityKit provide new capabilities that take full advantage of the LiDAR Scanner on the new iPad Pro. Check out ARKit 3.5 and learn about Scene Geometry, enhanced raycasting, instantaneous virtual object placement, and more. See how RealityKit takes advantage of these features to enable real-world physics, object occlusion, and lighting effects that interact with real-world objects.
Hello, and welcome to Advanced Scene Understanding in AR.
In this video I'll introduce the new capabilities of ARKit and RealityKit enabled by the LiDAR Scanner on the new iPad Pro.
iOS and iPadOS provide developers with two powerful frameworks to help you build apps for AR.
ARKit combines positional tracking, scene understanding, and integration with rendering technologies to enable a variety of AR experiences using either the back-facing or front-facing camera.
And RealityKit -- a new high-level framework built specifically for AR -- provides photorealistic rendering and special effects, scalable performance, and a Swift-based API, making it easy to prototype and build great AR experiences.
Today we're excited to talk about advances in both frameworks, which are only made possible by the hardware capabilities on the new iPad Pro.
The new iPad Pro comes equipped with a LiDAR Scanner.
This is used to determine distance by measuring at nanosecond speeds how long it takes for light to reach an object in front of you and reflect back.
This is effective up to five meters away and operates both indoors and outdoors.
And coupling this capability with information captured by the Wide and Ultra Wide cameras gives you an incredible understanding of your environment.
So let's see how this improves AR experiences built with each of these frameworks, starting with ARKit.
The new update to ARKit version 3.5 delivers a number of new features and improvements that take full advantage of the LiDAR Scanner on the new iPad Pro.
Scene Geometry is a new API that provides your apps with a detailed topological map of your surrounding environment.
The LiDAR Scanner also simplifies the AR onboarding experience by more quickly and more accurately detecting surfaces.
And existing ARKit features -- including motion capture, people occlusion, and raycasting -- also benefit without requiring any additional application changes.
So let's start with Scene Geometry.
Scene Geometry provides you with a triangle mesh, representing a topological mapping of your environment.
And optionally, that mesh can include semantic information that classifies what's being seen.
This includes things like tables and chairs, floors, walls, ceilings, the windows, and so on.
All this information can be used to enable occlusion of virtual content by real-world objects, environment-dependent physics, and illumination of both real and virtual objects in your scene.
So let's see this mapping in action.
Here's an example taken from an indoor scene.
We're overlaying the AR frame image with the mesh being generated by ARKit using the LiDAR Scanner.
You can see as we sweep around the room how quickly we're able to detect the shape of the furniture and the layout of the environment.
And the colors are based on a classification of what the mesh overlays.
So let's take a look at the API.
Scene Geometry is enabled through a new sceneReconstruction property on ARWorldTrackingConfiguration.
Now, there are two options, depending on what data you'd like to have generated.
The first option is to generate just the mesh, meaning only the topological information will be surfaced.
This is intended for apps doing things like object placement, where they don't depend on a classification of the surrounding objects.
The other option is .meshWithClassification.
And as the name implies, this adds semantic classification for all the Scene Geometry.
This is helpful for apps that want different behaviors, depending on what's in the scene, such as different lighting on the floor versus on the table.
And here down below, you see the code; it's pretty simple.
We're using world tracking, and we test to verify that we're running on a device that supports scene reconstruction.
If so, then we select the mesh option here and start running the session.
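As a minimal sketch, that setup might look like this in Swift (`session` stands in for your app's ARSession, for example arView.session):

```swift
import ARKit

// Scene reconstruction requires the LiDAR Scanner, so check for support first.
let configuration = ARWorldTrackingConfiguration()
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

// `session` stands in for your app's ARSession (for example, arView.session).
session.run(configuration)
```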
Once the session is running with scene reconstruction enabled, the AR session will return AR mesh anchors.
These are just like any other anchor, and changes come through the usual ARSession delegate methods, like session(_:didAdd:), session(_:didUpdate:), and session(_:didRemove:).
And each mesh anchor represents a local area of mesh geometry.
It's described with a transform of the anchor and a mesh geometry object.
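A rough sketch of a delegate that picks those mesh anchors out of the session callbacks might look like this (the class name is just illustrative):

```swift
import ARKit

// An illustrative delegate that filters mesh anchors out of the session callbacks.
class MeshAnchorHandler: NSObject, ARSessionDelegate {
    func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
        for meshAnchor in anchors.compactMap({ $0 as? ARMeshAnchor }) {
            // Each mesh anchor carries a transform plus a local patch of mesh geometry.
            let transform = meshAnchor.transform
            let geometry = meshAnchor.geometry
            print("Added mesh patch with \(geometry.faces.count) faces at \(transform.columns.3)")
        }
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        // Existing mesh patches are refined here as more of the scene is observed.
    }
}
```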
An ARMeshGeometry object holds all the information that you'll need to represent the surrounding environment.
Each object contains a list of vertices, normals, faces, and, if semantic classification is enabled, a classification for each face.
These are all provided as MTLBuffers to allow them to be directly integrated into your renderers.
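A minimal sketch of pulling those buffers out of an ARMeshGeometry might look like this (the helper function is illustrative, not part of ARKit):

```swift
import ARKit

// A sketch of reading the raw buffers from an ARMeshGeometry so they can be
// fed to a custom Metal renderer. The function name is illustrative, not ARKit API.
func inspect(_ geometry: ARMeshGeometry) {
    let vertices = geometry.vertices        // ARGeometrySource: vertex positions
    let normals  = geometry.normals         // ARGeometrySource: one normal per vertex
    let faces    = geometry.faces           // ARGeometryElement: triangle indices
    let classes  = geometry.classification  // ARGeometrySource?: one classification per face, if enabled

    print("vertices: \(vertices.count), faces: \(faces.count), classified: \(classes != nil)")

    // Each source exposes its MTLBuffer directly, so it can be bound to a render
    // or compute pipeline without copying, for example:
    // renderEncoder.setVertexBuffer(vertices.buffer, offset: vertices.offset, index: 0)
}
```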
Now there's some interesting interplay that happens between Scene Geometry and plane detection.
When scene reconstruction and plane detection are both enabled, the mesh that's constructed will be flattened to match overlapping planes.
This is useful for object placement, where surfaces need to be consistently flat to allow for smooth object movement.
On the other hand, if you're using Scene Geometry and plane detection is not enabled, the mesh will not be flattened.
This combination preserves more detail in the meshed surfaces.
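As a sketch, enabling both features together might look like this (`session` again stands in for your app's ARSession):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

// With plane detection on, the reconstructed mesh is flattened where it overlaps
// detected planes; leave planeDetection empty to keep the more detailed mesh.
configuration.planeDetection = [.horizontal, .vertical]
if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

// `session` stands in for your app's ARSession.
session.run(configuration)
```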
OK, so that's Scene Geometry.
Moving on, there are several more improvements enabled by the LiDAR Scanner.
The first is much simpler and faster onboarding.
With the LiDAR Scanner, planar surfaces are detected almost instantly and more accurately as well.
This is true even on low-feature surfaces, like white walls.
So the result is that the plane-detection onboarding that previously took a few seconds and required some amount of user guidance can now occur completely seamlessly.
And no changes are necessary.
All ARKit apps will benefit from this when running on the new iPad Pro.
Existing apps will also benefit from improved raycasting.
The enhanced scene understanding of ARKit allows quicker and more accurate raycasting against horizontal and vertical planes.
Additionally, the new iPad Pro can raycast against a wider range of surfaces than ever before.
Just set the raycast query's allowing target to .estimatedPlane, and data from the LiDAR Scanner will provide raycasting results that match the surrounding environment.
For example, objects can now be placed on all surfaces of a large chair or a couch as seen here.
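A sketch of such a raycast from a RealityKit ARView might look like this (`arView` and `tapPoint` are assumed to come from your view and gesture handling):

```swift
import ARKit
import RealityKit

// `arView` and `tapPoint` are assumed to come from your view and gesture handling.
let results = arView.raycast(from: tapPoint,
                             allowing: .estimatedPlane,
                             alignment: .any)

if let firstResult = results.first {
    // worldTransform describes where the ray hit a real-world surface,
    // ready for anchoring virtual content.
    print("Hit surface at \(firstResult.worldTransform.columns.3)")
}
```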
Motion Capture and people occlusion are also improved due to the LiDAR Scanner providing more accurate depth information.
Apps using Motion Capture will benefit from a more accurate scale estimation, and the depth values for people occlusion are more accurate as well.
In addition, people occlusion and the Scene Geometry API can work together when both features are enabled.
The very dynamic geometry of people can be excluded from the scene reconstruction, which in turn, provides a more stable mesh of the real-world environment.
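A sketch of enabling both features together on the configuration might look like this (`session` stands in for your app's ARSession):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()

if ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh) {
    configuration.sceneReconstruction = .mesh
}

// Adding people occlusion lets ARKit exclude people from the reconstructed mesh.
if ARWorldTrackingConfiguration.supportsFrameSemantics(.personSegmentationWithDepth) {
    configuration.frameSemantics.insert(.personSegmentationWithDepth)
}

// `session` stands in for your app's ARSession.
session.run(configuration)
```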
So that's a quick look at ARKit 3.5 on the new iPad Pro.
Scene Geometry provides a topological map of the environment around you.
Planar surfaces are detected almost instantly and more accurately, which simplifies onboarding.
Raycasting is more accurate and can take Scene Geometry into account, and Motion Capture and people occlusion are improved as well.
ARKit is also tightly integrated with our higher-level AR framework called RealityKit.
RealityKit provides photo-realistic rendering, camera effects, animations, physics, and a lot more.
It was built from the ground up specifically for AR.
RealityKit takes advantage of the new ARKit 3.5 features and makes it really easy for you to integrate them into either new or existing RealityKit apps.
These capabilities can be accessed through the new scene understanding API.
It provides options for enabling LiDAR-enhanced physics, occlusion, and lighting, and it's all accessed through some simple settings on the ARView, so let's take a look.
With the new iPad Pro, RealityKit is able to determine physics interactions between your virtual objects and the Scene Geometry generated from surfaces detected in the real world.
So you can have a virtual ball bounce off of your real-world furniture.
To do this, first you'll generate collision shapes for your virtual content, the ModelEntity, and initialize its physics body.
Then just add the physics option to your ARView's sceneUnderstanding.options set, and RealityKit will take care of the rest.
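As a rough sketch, that setup might look like this (`arView` is assumed to be your app's existing ARView, and the ball itself is just an illustrative entity):

```swift
import RealityKit
import UIKit

// A virtual ball that can bounce off real-world furniture.
let ball = ModelEntity(mesh: .generateSphere(radius: 0.05),
                       materials: [SimpleMaterial(color: .red, isMetallic: false)])

// Give the entity a collision shape and a dynamic physics body.
ball.generateCollisionShapes(recursive: true)
ball.physicsBody = PhysicsBodyComponent(massProperties: .default,
                                        material: nil,
                                        mode: .dynamic)

// Place it in the scene, then tell RealityKit to use the LiDAR-derived
// Scene Geometry for physics interactions.
let anchor = AnchorEntity(plane: .horizontal)
anchor.addChild(ball)
arView.scene.addAnchor(anchor)
arView.environment.sceneUnderstanding.options.insert(.physics)
```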
Likewise with occlusion, RealityKit uses the Scene Geometry detected from real-world objects -- doorways, tables, chairs, and so on -- to occlude the virtual objects in your scene.
This is totally automatic.
All it takes to enable this is to add occlusion to the sceneUnderstanding.options set for your ARView.
RealityKit will take care of everything else under the hood, and your virtual content will be occluded by all the major objects in the environment.
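Assuming `arView` is your app's existing ARView, that single call is roughly:

```swift
import RealityKit

// `arView` is assumed to be your app's existing ARView.
arView.environment.sceneUnderstanding.options.insert(.occlusion)
```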
Here's an example of that.
Our virtual robot is walking around on the floor.
But you'll see as the camera moves behind the column, the robot is occluded.
Its geometry is disappearing from the rendering, and that helps maintain the illusion of proper depth in the scene.
Now the third piece is for lighting.
With the new RealityKit, your virtual light sources can illuminate real-world surfaces.
This is because we're able to illuminate the Scene Geometry, which is very accurately fitted to those surfaces with the help of the LiDAR Scanner.
And like before, enabling this is just as simple as adding receivesLighting to the ARView's sceneUnderstanding.options set.
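Again assuming `arView` is your app's existing ARView, that's roughly:

```swift
import RealityKit

// Let virtual light sources illuminate the LiDAR-derived Scene Geometry.
arView.environment.sceneUnderstanding.options.insert(.receivesLighting)
```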
And finally, support for these features extends to scenes built in Reality Composer as well.
Physics can be configured to collide virtual objects with the real world using the Scene Geometry mesh, and occlusion of virtual content by real-world objects can be enabled in the Inspector panel.
For more information, please visit developer.apple.com, where you'll find links to documentation, sample code, developer videos like this one, and a lot more.
Thank you for watching!