SceneKit: What's New
SceneKit is a fast and fully featured high-level 3D graphics framework that enables your apps and games to create immersive scenes and effects. See the latest advances in camera control and effects for simulating real camera optics including bokeh and motion blur. Learn about surface subdivision and tessellation to create smooth-looking surfaces right on the GPU starting from a coarser mesh. Check out new integration with ARKit and workflow improvements enabled by the Xcode Scene Editor.
Good morning, everyone, and welcome to the SceneKit, What's New session. As you already know, SceneKit is Apple's high-level API for 3D. It's built on top of Metal and it's available on all our platforms. This session will be about the new features we are adding to SceneKit. We will not cover the basics today, so if you are new to SceneKit, I encourage you to watch our previous presentations from the past years.
So here's our agenda for today.
I will first present some camera improvements, including new camera effects and new APIs to simplify the control of cameras. Then, Amaury will come on stage and talk about tessellation and subdivision, and about improvements and new APIs on the animation front. And to finish, we'll present some new developer tools and talk about some related technologies, including ARKit.
Now, to start, I would like to show you a little demo that illustrates some of the new APIs I'm going to detail later during the session.
So this demo is a simple game example with a character that I can manipulate with a virtual D-pad. I can attack, jump, and control the camera with the virtual D-pad. And the first thing I would like to highlight in the demo is the behavior of the camera. As you can see, the camera follows the character smoothly. And by smoothly, I mean it doesn't strictly reproduce the movement of the character, but it tries to keep a constant distance and elevation with the character, and always moves with a smooth acceleration and deceleration.
And you will see that the behavior of the camera will adapt, depending on the different zones of the game. For example, here as I approach this fight area, the camera moves down a little bit and adjusts the depth of field to focus on the character and the enemies.
Speaking of enemies, we have two enemies here with very basic behavior. One is chasing me while the other is moving away, and I will explain briefly how this was implemented using GameplayKit. For now, let's just kill them, and the other one.
And collect that gem here.
Here the camera smoothly transitioned to a cinematic view with a strong depth of field to emphasize the key, and you can notice also some nice bokeh in the background.
So let's collect that key. And as I get close to this new zone here, where I have to carefully jump on platforms, the camera smoothly transitions to a new behavior where it stops rotating and stays aligned with the platforms to simplify my jumps. So let's jump. That was intentional.
And let's collect the key.
Again, a new zone where the camera will automatically reorient itself and stay aligned with the platform.
And it looks like there are some friends to free here, and I have the key, so let's do it.
And to finish, the new cinematic view with many characters running around. Here we have 3,200 bones to animate, and we are able to do that in half a millisecond on this new iPad Pro. So this is really fast. We have been focusing on character animation performance in this release. And implementing all this has been much simpler with the new animation APIs I will present later. So that's all for the demo.
Anyway, we are really happy to share this demo with you as sample code, and the sample code runs on iOS, tvOS, and macOS, and is available in Swift and Objective-C.
During the demo, I insisted on the camera behavior because this is something that is really difficult to write. We are seeing many questions and related requests on the developer forums about this, and so we improved our camera API in this release to both simplify this problem and improve the overall quality. To do this, we are transitioning to a physically based camera API, and we leveraged this new API to implement physically plausible depth of field. We also improved the motion blur and added built-in support for screen space ambient occlusion. I will then talk about the new APIs we added to simplify the control of cameras. So let's start with the transition to a physically based camera API.
First, we are deprecating our legacy projection model. We will, of course, ensure backward compatibility, but be aware that we are moving away from the xFov and yFov properties to adopt properties that more closely match real photo cameras. For example, if you want to configure your perspective projection, you can now either set the fieldOfView property or configure the focalLength and sensorHeight.
These properties are linked together, so if you configure the focalLength for instance, it will update the fieldOfView accordingly, and vice versa.
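As a minimal sketch (the camera node is a placeholder), either form works and SceneKit keeps the related properties in sync:

```swift
import SceneKit

let cameraNode = SCNNode()          // hypothetical camera node for illustration
let camera = SCNCamera()
cameraNode.camera = camera

// Option 1: set the field of view directly (in degrees).
camera.fieldOfView = 60

// Option 2: describe a physical camera; fieldOfView is updated accordingly.
camera.focalLength = 50             // millimeters
camera.sensorHeight = 24            // millimeters, e.g. a full-frame sensor
```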
Next, SceneKit now models a real photo camera's depth of field. To activate depth of field, set the wantsDepthOfField property to true, and then configure it by setting the focusDistance and fStop properties.
SceneKit will approximate real photo camera depth of field, and it will blur the scene in a way that is consistent with these parameters, which come from the photography world. The new depth of field will also simulate the bokeh you would get with real photo cameras.
Bokeh appears on bright objects that are out of focus, and because bokeh is generated by pixels with very high intensity, this feature works best if you render your scene with an HDR camera. To configure this, just set the wantsHDR property to true on SCNCamera. Then, the shape of the bokeh depends on the number of blades of the aperture, and you can also configure that on SCNCamera. Here are a few examples with different values for this property.
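A minimal sketch of the depth of field setup just described (the values are illustrative):

```swift
import SceneKit

let camera = SCNCamera()

// Physically based depth of field with bokeh on an HDR camera.
camera.wantsHDR = true                // bokeh works best with HDR rendering
camera.wantsDepthOfField = true
camera.focusDistance = 2.5            // distance to the in-focus plane
camera.fStop = 1.8                    // smaller f-number = stronger blur
camera.apertureBladeCount = 6         // controls the shape of the bokeh
```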
Then, we improved our motion blur in this release. We already presented a first version of motion blur last year that was able to blur the scene based on the motion of the camera. That means that if your camera was moving fast in a scene, you would get motion blur. But if your camera was static and the objects were moving around, you would not get any motion blur.
So in this release, we are adding support for object motion blur, and you get per-object blur automatically when you activate motion blur on your camera. One more effect now provided by SCNCamera is ambient occlusion. The principle of ambient occlusion is simple. The idea is that a point on a flat surface receives all the incoming environment light, whereas points in cavities will just receive part of it, because some of this light will be occluded by the surface.
Simply put, SceneKit implements screen space ambient occlusion, which means that this occlusion factor will be computed in screen space for every pixel.
This is done by analyzing the depth buffer and normal buffer, and SceneKit will determine if a point is in a cavity or not by comparing its depth and normal with the neighboring fragments.
So here's an example of an object with no ambient occlusion, and here with a very strong ambient occlusion to make the effect obvious on the slide.
To activate screen space ambient occlusion, just set screenSpaceAmbientOcclusionIntensity to a value greater than 0, and then you have a few parameters you can tune, depending on the look you want to achieve and on the size and topology of your scene. But let's see all these effects live in a demo, and for this, please welcome Anatole on stage.
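As a minimal sketch before the demo, enabling the two remaining effects looks like this (the radius value is an illustrative assumption):

```swift
import SceneKit

let camera = SCNCamera()

// Per-object and camera motion blur.
camera.motionBlurIntensity = 1.0

// Screen space ambient occlusion; 0 disables the effect.
camera.screenSpaceAmbientOcclusionIntensity = 1.0
camera.screenSpaceAmbientOcclusionRadius = 5.0   // how far to look for occluders, in scene units
```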
Thank you, Thomas.
Good morning, everyone.
So let's come back to the first demo and bring up some debug UI to help me show you our new depth of field effect.
Here we have -- sorry.
Here we have a beautiful golden key in the foreground. The focus distance is set to a small distance because the key is close to the camera, and the fStop number is set to a small value because we want a strong blur.
As you can see, there's nice bokeh here in the background due to the luminance of the particles. Now, with the second camera, I have a far focus distance, so the objects in the background are sharp and the flowers in the foreground are blurred.
Yet another point of view, and here I can, for example, play with the fStop number to get more or less blur. And I can move the camera. Then, with the second slider, we define where we want to focus in the scene. So this new depth of field effect can be incredibly useful, for instance to produce cinematic effects in video games. Now, let's open another app to show you our new per-object motion blur.
Here we have a scene with a tower of blocks. If I press the Shoot button, I throw some spheres on the tower. And this is how it looks by default without any motion blur.
Now, I reset the scene, enable the Motion Blur, and toss in some more spheres. You can see the effect of the motion blur on those spheres because they move really fast. And you can also see the effect applied to the blocks when the tower explodes.
But now, let's take a closer look. For that, I can freeze the scene and move in a little bit closer.
Our brain interprets the blur as movement, so you can actually see the objects in motion even though the image is static.
We can even change the point of view, and we still have a good idea of the direction of each object. So this really improves the perception of motion in the scene, and the result looks more realistic. And now, let's see screen space ambient occlusion in a demo. You can see some spheres illuminated by the sky and a directional light.
With the first slider, I can add some ambient occlusion to the scene.
As you can see, some ambient shadow is added, and I can change the intensity to give the sphere more or less shadows.
The amount of occlusion depends on the curvature of the surface.
To know if a pixel is in a cavity or not, we inspect the neighboring pixels, and so we have this radius parameter that lets us define how far we look for neighboring pixels.
The visual result is sharper occlusion with a small radius and more spread-out shadows with a large radius. This is computed in real time, so it is perfect if you work with dynamic objects, where pre-baked ambient occlusion maps are not an option. It adds some detail to the perception of depth and a look of global illumination to your scene.
That's it for the demo. Let's come back to the slide, and I'll hand it back to Thomas.
Thank you, Anatole. So we have talked about some new camera effects. Now, let's talk about camera control. As I said earlier, this is a difficult problem, and we see many questions about it. We identified two main use cases: people who want to inspect a 3D object by rotating it or rotating around it, for example developers who are building a simple 3D viewer or an editor, and developers who need a more sophisticated camera behavior, for example for games or more advanced apps. So let's start with the first use case for now.
Until now, if you wanted to manipulate a 3D object, you had to implement your own management of events and move the camera position and orientation based on gestures or mouse events.
For convenience, we were providing the allowsCameraControl API on SCNView, but this just provided a default camera behavior that was not configurable and that was essentially there for debugging purposes. So in this new release, we are introducing a new class named SCNCameraController, and the SCNCameraController allows you to manipulate a camera with the most common camera behaviors you would find in 3D software.
These behaviors are built into the camera controller, and SCNView has a built-in default camera controller that you can directly configure for the needs of your application.
Now, if you need something more specific, you can still instantiate your own SCNCameraController and drive it programmatically. So the SCNCameraController provides, out of the box, most of the common camera operation tools.
And to give some examples, the Orbit Turntable allows you to orbit your camera around a 3D object and will prevent roll. That means that the horizon will always remain level, regardless of the rotations you are doing.
The Orbit Arcball will orbit the camera using the vertical and horizontal axes in screen space. So this mode can be more intuitive in some cases, but it doesn't prevent roll, so it really depends on your application.
The Fly mode is more suitable for large scenes you may want to get into. In that case, the center of rotation of the camera is the camera itself, which means that you rotate the camera to look around instead of orbiting around an object.
So again, we believe that the SCNCameraController will provide most of the common camera operation tools, and if you need something very specific, you can still drive your camera controller and your camera programmatically.
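A minimal sketch of configuring the built-in controller (the view setup is illustrative):

```swift
import SceneKit

// Hypothetical view for illustration.
let scnView = SCNView(frame: .zero)
scnView.allowsCameraControl = true

// Configure the built-in controller instead of writing event handling by hand.
let controller = scnView.defaultCameraController
controller.interactionMode = .orbitTurntable   // .orbitArcball and .fly are also available
controller.inertiaEnabled = true
controller.target = SCNVector3(0, 0, 0)        // point to orbit around
```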
Now, let's see the second class of problems: developers who need a more sophisticated camera behavior, for example for games.
And we address this problem by chaining constraints to define a camera behavior.
SceneKit was already providing a bunch of built-in constraints, and we're adding a few new ones this year that will work on any arbitrary node but work particularly well for cameras. To illustrate some of them, the SCNDistanceConstraint forces a node to keep a minimum and maximum distance from another specified target node.
The SCNReplicatorConstraint replicates another node's position and orientation with an optional offset. And the SCNAccelerationConstraint ensures that the node won't move or accelerate faster than a given maximum velocity and acceleration. These are just examples; let's see what we can do with these constraints.
So here we have a character moving around in a scene, and the camera has no constraint yet, so it's static. If I add a look-at constraint to the camera with the character as the target node, the camera now rotates to satisfy the look-at constraint and looks in the direction of the character.
If I add a replicator constraint, I now have a basic camera behavior that replicates the movement of the character with some offset and continues to look in the direction of the character. If I add on top of that an acceleration constraint, I get the same behavior, but smoothed, thanks to the acceleration constraint that is applied after the other constraints.
And if I replace the replicator constraint with a distance constraint, I now have a new camera behavior that follows the character to satisfy the distance constraint.
It continues to look in the direction of the character, of course, and it always moves smoothly, regardless of the movement of the character, thanks to the acceleration constraint. So that's how easy it is to define a camera behavior, and that's what we did in our Fox 2 example: just by defining a different set of constraints depending on the different zones of the game, we were able to define the camera behavior for the entire game.
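A minimal sketch of such a constraint chain (nodes and values are illustrative):

```swift
import SceneKit

let characterNode = SCNNode()   // hypothetical character
let cameraNode = SCNNode()
cameraNode.camera = SCNCamera()

// Look at the character.
let lookAt = SCNLookAtConstraint(target: characterNode)
lookAt.isGimbalLockEnabled = true

// Keep the camera between 6 and 10 units away from the character.
let distance = SCNDistanceConstraint(target: characterNode)
distance.minimumDistance = 6
distance.maximumDistance = 10

// Smooth everything with limited acceleration; applied after the other constraints.
let acceleration = SCNAccelerationConstraint()
acceleration.maximumLinearAcceleration = 15
acceleration.maximumLinearVelocity = 30
acceleration.damping = 0.1

cameraNode.constraints = [lookAt, distance, acceleration]
```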
One more note about camera control.
We extended SCNNode with categories that provide many utilities for you to convert and access vectors in different spaces.
But most of all, all the node transform properties — position, rotation, scale, orientation, and transform — are now directly available as SIMD properties to ease math operations. Thanks to SIMD types, operations on quaternions, vectors, and matrices are much simpler to write, and they are much more performant as well. Just be aware of a few limitations with SIMD types: they are not KVO and KVC compliant, and they cannot be boxed in an NSValue. So that's it for camera control. Now, I'll hand it over to Amaury to talk about tessellation and subdivision. Thank you, Thomas.
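A minimal sketch of the SIMD accessors (the node is hypothetical):

```swift
import SceneKit
import simd

let node = SCNNode()

// SIMD counterparts of position, orientation, and space conversions.
node.simdPosition += simd_float3(0, 1, 0)                      // move up one unit
node.simdOrientation = simd_quatf(angle: .pi / 2,              // rotate 90 degrees around Y
                                  axis: simd_float3(0, 1, 0))
let worldUp = node.simdConvertVector(simd_float3(0, 1, 0), to: nil)   // nil = world space
```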
So we know that great graphics are essential to your application to build engagement and delight your users. And there are many aspects to great graphics. For instance, there is added realism. And that's why over the years we've introduced new rendering capabilities, such as physically-based shading and this year's push for more realistic camera optics.
But high-resolution assets are another very important aspect.
In your application, you want to be able to have both very smooth surfaces as well as very rich and detailed ones. Now, the issue is that high-resolution assets require more memory, both on disk and at runtime, and they require more processing time. So in this section, we will have a look at powerful techniques that allow artists and you developers to work with low-resolution models that can become high quality when rendered on screen.
So I will start by explaining what tessellation is and how it works, then I will show you how you can leverage this in SceneKit. And finally, we will have a look at something a little different, subdivision surfaces.
So tessellation.
Tessellation is a feature added last year in Metal, and the idea behind this feature is that you provide the GPU with a low-resolution mesh, or coarse mesh, and then you let the GPU generate a model that has more vertices on the fly when the model is rendered.
So let's have a look. This is a triangle, and this is a tessellated version of it. SceneKit gets to decide how much an edge can be split: it can decide how many vertices are created on the first edge and on the second edge, and of course, it can do that for the third edge.
And what's nice is that SceneKit also gets to generate more vertices inside the triangle.
So these are called tessellation factors. SceneKit is a high-level API, and we made it super easy for you to perform tessellation.
We are adding the new SCNGeometryTessellator class as well as the tessellator property on SCNGeometry. And the tessellator exposes a few properties that allow for different modes. So let's have a look at the simplest example first.
In this mode, you provide SceneKit with constant edge and inside tessellation factors that will be used for all the triangles in the coarse mesh. So with this mode, you get a uniform tessellation, and you add the same amount of geometry everywhere across the coarse mesh. Now let's take a look at a more complex example. Here you can ask SceneKit to come up with tessellation factors so that no edge is too long; you provide a maximum edge length in local space.
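A minimal sketch of these two modes (the coarse geometry is a placeholder):

```swift
import SceneKit

let geometry = SCNSphere(radius: 1)   // hypothetical coarse geometry

let tessellator = SCNGeometryTessellator()

// Uniform mode: constant factors for every triangle.
tessellator.edgeTessellationFactor = 4
tessellator.insideTessellationFactor = 4

// Or adaptive mode: split edges until none is longer than 0.1 in local space.
tessellator.isAdaptive = true
tessellator.maximumEdgeLength = 0.1

geometry.tessellator = tessellator
```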
Even more powerful, you can ask SceneKit to constantly re-evaluate the tessellation factors at each frame, depending on the projected size of the object.
In this mode, you provide the maximum edge length in screen space, that is, in pixels. So that's it for tessellation. Now, if you have a look at the original triangle and the tessellated version, you might be a little disappointed, and that's because all the new geometry actually lies in the original triangle. For a highly detailed mesh, you want to do something with this extra geometry, and that leads us to the new tessellation-based geometry APIs. So first of all, let me remind you of shader modifiers.
Shader modifiers are completely supported with the new tessellation pipeline, and with a few lines of code, you can create really custom effects. For instance, if you have an application with water and you want to simulate an ocean with waves, or really any effect of your own, shader modifiers are the right tool and are really powerful for that. But of course, we are also adding out-of-the-box effects, such as geometry smoothing.
This is a new feature.
For instance, if you specify the pnTriangles smoothing mode, SceneKit will take into account the position and normal of each vertex, as well as the positions and normals of its neighbors, to project them onto a smooth surface. Another effect that you might have heard about: displacement maps and height maps.
So what's a height map? Well, height map is a gray-scaled image that stores the elevation or altitude of any point on the surface.
This technique is commonly used for effects such as terrain rendering, so let's take that as an example.
We start with a plane that we tessellate and that we deform using the height map.
So it's a really simple example, but it's highly effective.
And the API is really simple too. We are adding the new displacement material property on SCNMaterial, so I bet you already know how to use it: you just specify its contents, and then if you modify its intensity, you can come up with the animation I just showed. Now, let's get one step further with vector displacement maps.
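Before moving on to vector displacement, a minimal sketch of the height-map case just described (the asset name is an assumption):

```swift
import SceneKit

let material = SCNMaterial()

// Drive displacement from a grayscale height map in the app bundle (hypothetical asset name).
material.displacement.contents = "terrainHeight.png"
material.displacement.intensity = 0.5     // animate this to raise or flatten the terrain

// For a vector displacement map (covered next), use all color channels instead of just one:
// material.displacement.textureComponents = .all
```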
Vector displacement maps are an extension of height maps, but instead of only storing the elevation, you store a displacement along all three directions, and that's why you have a colored image. For instance, green is the displacement along the normal, and red and blue along the tangent and bitangent.
So can you guess what that does? Let's have a look.
Okay, so this is a silly example, but in your application, you can use vector displacement maps to add fine details to your geometry. For instance, if you have a demo with a chameleon, you can add detail to its skin. Or if you have an application where there are rocks you can get really close to, vector displacement maps are the right tool.
The API is the same, except that instead of just red, you specify "all" for the texture components to indicate that you are interested in more than one color channel of the input image. So that's it for tessellation and tessellation-based effects. Now, let's have a look at subdivision surfaces.
So you might already have heard of subdivision surfaces and Catmull-Clark subdivision. It's a standardized algorithm that starts with a coarse mesh and iteratively refines it.
And so you see how quickly we get from a coarse mesh to a very smooth and detailed one.
Now, not everything is perfectly round, so with subdivision surfaces, you can specify creases and corners to have distinct sharpnesses for your edges and vertices.
Subdivision surfaces are extensively used in the industry to easily create, store, and animate low-resolution models that can become very high quality when rendered on screen. It turns out that we added support for subdivision surfaces a few years ago in SceneKit, but we used to run the subdivision code on the CPU, and that takes time and a huge amount of memory, especially when you go to higher levels of subdivision, as the number of generated vertices grows exponentially.
So we have great news this year.
You might have heard about the OpenSubdiv project from Pixar, which is an open-source implementation for efficient evaluation of subdivision surfaces. And last year at WWDC, Apple announced that we would be contributing to this project with a Metal-based implementation, so that you can run the subdivision code on the GPU using Metal.
And so this year, you can leverage all these amazing technologies very easily. With the Metal-based implementation come many advantages. First, we leverage tessellation, and it comes with all the memory benefits I talked about earlier. And with tessellation, we have very smooth surfaces even for low subdivision levels.
Now, in addition to uniform subdivision, we support feature-adaptive subdivision, which I will explain in a minute. And last, we have an all-GPU pipeline for efficient animation of subdivided meshes. So let's take a look at this example. It's the key you saw in the demo, and as you see, it's a very coarse mesh. Not much detail.
And this is the tessellated, subdivided version.
So with this asset, you can see that you have hard edges but also nice curves. So how did we do that? Using creases.
So with subdivision surfaces, artists can really come up with great designs and they can easily create them and then tune them to have the desired look they want. Now, feature-adaptive subdivision.
Where uniform subdivision converges toward the smooth limit surface by exponentially increasing the number of polygons, feature-adaptive subdivision can isolate the irregular parts of your mesh and represent the rest as exact patches. Then, with tessellation, we can create new vertices on these exact mathematical surfaces, which leads to very smooth surfaces with a lower memory footprint. Now, the API is really simple.
You just specify a subdivision level and then you opt into tessellation.
And then for feature-adaptive subdivision, it's really easy to configure too.
Now, last, animation of subdivision surfaces.
This year, we have an all-GPU pipeline that is really efficient. You can have a coarse mesh that you deform using morphing, and then, if you want, you can add skeletal animation with skinning. Finally, as the last step, we run the refinement code on the GPU. This is very performant because we are working on the GPU with the low-resolution mesh, and the very detailed one is generated on the fly by the GPU using tessellation.
So now that you want to give subdivision surfaces a try, just remember two things. First, if you are loading assets from files, specify the preserveOriginalTopology option.
And if you are creating geometries programmatically, remember to use the polygon primitive type.
This is because with subdivision, working with triangles is not the same as working with quads.
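A minimal sketch of opting in (the asset name and node layout are assumptions):

```swift
import SceneKit

// Keep the authored polygon topology when loading, so subdivision behaves as intended.
let url = Bundle.main.url(forResource: "character", withExtension: "scn")!   // hypothetical asset
let scene = try! SCNScene(url: url, options: [.preserveOriginalTopology: true])

let geometry = scene.rootNode.childNodes.first!.geometry!   // hypothetical node layout

// GPU subdivision: pick a level, opt into tessellation, enable feature-adaptive mode.
geometry.subdivisionLevel = 2
geometry.tessellator = SCNGeometryTessellator()
geometry.wantsAdaptiveSubdivision = true
```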
And with that, let's have a quick demo.
OK. So this is a simple pottery application I'm going to be working on. So I can easily pinch to zoom, and I can drag to rotate.
It's very easy. Now, if I pinch, then pay attention to the silhouette of the object.
As you can see, it's very smooth, and you can use a normal map to add detail on the surface. And now, the purpose of this application is very simple. I can take my finger and simply sketch on the mesh.
So let's kill that and let's write something. As I draw, take a look at the silhouette of the object.
Here I'm actually modifying the geometry. I'm not just adding surface details like we do with a normal map.
OK, so how was that done? Well, with subdivision surfaces, tessellation, and height maps. So let's have a look. What we do is start with a very coarse mesh. As you can see, it's very low polygon. It doesn't come with normals, so we have flat shading here, but it has texture coordinates, so we can simply map a normal map on it, and later a height map. So now, let's subdivide it.
Let's see the smooth normals that are generated, and then let's have a look at the wireframe and see how many vertices are created when I enable subdivision. When I take my finger, I simply draw in the height map, and all the vertices are displaced.
Now, let's clear that. And just for fun, let's enable screen space tessellation. Now, I will pinch and take a look at when I get close or farther from the object.
SceneKit will come up with new tessellation factors, and it will create new vertices on the fly. And that's it for the demo.
Thank you. So as a wrap up, tessellation and features relying on tessellation are available with Metal on all Macs and available on iOS devices with the A9 chip or later, so that includes iPhone 6S and all of the iPad Pro models.
Now, let's have a look at something completely different -- our enhancements to animation APIs.
This year, we are introducing the new SCNAnimation protocol as well as the SCNAnimationPlayer class.
They make it easier to start animations and to mutate them while they are running.
So for instance, now you can easily change the speed of an animation and you can blend animations on the fly. Of course, we still fully support the Core Animation APIs; CAAnimation conforms to our new protocol. But with the new APIs, it's much easier to work dynamically on animations while they are running, and these APIs are available on all platforms, including watchOS.
So let's take a look at the old way.
So let's say you had a character that could walk and jump. You would first start by adding the walk animation, and then when you wanted the character to jump, you would add the jump animation to replace the other one. Now, with the new API, you start by creating and configuring animation players, and then when you want a character to jump, you manipulate the player instead of the animation directly. So it's a very similar API.
The difference is that now you can mutate animations while they are running. So you can change their speed and you can mix animations.
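A minimal sketch of the player-based API, including the blending discussed next (the loading helper and asset names are assumptions):

```swift
import SceneKit

// Hypothetical helper: pull the first baked-in animation player out of an
// animation-only scene file (asset names are assumptions).
func loadAnimationPlayer(fromSceneNamed name: String) -> SCNAnimationPlayer? {
    guard let scene = SCNScene(named: name) else { return nil }
    var player: SCNAnimationPlayer?
    scene.rootNode.enumerateChildNodes { node, stop in
        if let key = node.animationKeys.first {
            player = node.animationPlayer(forKey: key)
            stop.pointee = true
        }
    }
    return player
}

let characterNode = SCNNode()   // hypothetical character

let walk = loadAnimationPlayer(fromSceneNamed: "walk.scn")!
let jump = loadAnimationPlayer(fromSceneNamed: "jump.scn")!
characterNode.addAnimationPlayer(walk, forKey: "walk")
characterNode.addAnimationPlayer(jump, forKey: "jump")

// Start walking, then mutate the running animations: change speed and blend in the jump.
walk.play()
walk.speed = 1.5
jump.blendFactor = 0
jump.play()
SCNTransaction.begin()
SCNTransaction.animationDuration = 0.3
jump.blendFactor = 1      // blendFactor is animatable, so this cross-fades
walk.blendFactor = 0
SCNTransaction.commit()
```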
And animation blending is actually new this year.
So let's take the example of Max, who can walk, run, but also step. We have different animation files for each kind of motion.
With the new blending API, you can easily transition from the step animation to the walk animation, and so you can bring fluidity and be more expressive in your applications. And after you blend animations, you can also play with the speed, so Max can run in slow motion. Finally, let me mention enhancements to our animation evaluation code.
So we have a new implementation that makes it faster to start animations on arbitrary objects in the scene, and we made the evaluation of skeletal animations much more performant. So if you have a lot of characters and a lot of bones in your scene, such as in the Fox demo you just saw, this new implementation makes things much faster.
And with that, let me hand over to Sebastien for updates on our developer tools. Thank you, Amaury.
So last year, we introduced FPS Gauges.
It's a great way to get an overview of your application's SceneKit performance. The gauges are split into categories so that you can see exactly where the time is spent on the CPU and the GPU — whether it's going into rendering, physics, or particles. It's integrated into Xcode, and you can see how your app behaves at any time. And it's useful to know exactly what's taking time, so you can adjust your meshes or animations, for example.
But what happens when you skip a frame? How do you know exactly what happened and what caused the frame skip? This year, we are introducing a new instrument: a template for SceneKit that you can use to record a trace of your application and know exactly what happens frame by frame. It's very simple to use. You just create a trace from the template as you would for any other instrument, it records your application's performance, and you get this view with four lanes that detail what's happening in your application. The first one is the frame lane: it gives you the time taken to render each frame of your application, so you can see exactly how long it takes.
The second one gives you the rendering time: the time taken by SceneKit to gather all the data and send it to the GPU. The third one gives you the update stage: the time taken to update the physics, particles, and your custom delegate, if you have one. And the last one, but not least, is the time taken to upload textures to the GPU and to compile shaders.
And let's see how it looks when we have a skipped frame. This is a very simple example.
You can see that all the frames render quickly, except that at one point we have one frame that takes the time it would normally take four frames to render. When we dig down into what exactly is happening, we can see that a new shader was compiled, and that takes a lot of time. In this case, we might try to find a strategy to load the shader at the start of the application. We have also added a way to combine the SceneKit instrument with the Metal instrument trace, so you can see both at the same time and see exactly what happens behind the scenes in your application. This year, we have also added a new debug tool.
It's an enhancement to the view debugger in Xcode. It's very simple to use: you just use the regular view debugger as you would normally, and it captures the view hierarchy as well as the scene automatically. So if there is a SceneKit view in your application, it will capture the scene, and if you select the scene in the view hierarchy, it will automatically open it in the SceneKit editor, where you can inspect all your objects, move the camera around, and see exactly what is in your application and in your scene. We have also added support for this year's new features. So we have a new way to handle the cameras in Xcode; you can see that there is a new way to choose the behavior that you want.
We have added perspective and orthographic cameras so that you can inspect your scene much more easily, and you still have access to all your regular cameras. We have also added the new behaviors, so you can fly around, use the turntable, and use the arcball; it's much easier to inspect very big scenes. We have also completely revamped the Shader Modifier Editor. Now you can edit your shader modifiers as well as your material at the same time, in one screen, without going back and forth selecting objects. It's a completely new implementation, and it also supports custom material properties: if there are not enough property slots in your material, you can add colors, floats, or vectors and use them in your own shader modifier. Very easy to use. We have added support for more of the features introduced this year. First, there is a new displacement material slot that you can use for tessellation. Of course, we have tessellation. We have added support for the new constraints, so you can add them to your nodes, test them in real time in Xcode, and edit them in the inspector. We have support for cascaded shadows, which we will tell you more about later. We also have a new procedural sky that is very easy to use as a background or as the lighting environment — for example, to test your materials when you don't have a proper environment map set. And it's completely configurable, so you can get a day or night sky, for example. And last, but not least, we have added the possibility to override materials for reference nodes. What are reference nodes? Reference nodes are nodes that refer to a single scene file but are used more than once in your scene.
Until now, you could only have exactly the same rendering for all of these nodes. But now, you can override some or all of the materials used in the scene and change the look of some or all of the instances.
And with that, I hand it over to Thomas to tell you about related technologies. Thank you.
OK, so let's talk about related technologies, and start with ARKit. You have all seen the introduction of ARKit on Monday during the keynote and the State of the Union, and you may have noticed that ARKit provides an ARSCNView that offers an easy solution for AR that just works out of the box.
And in fact, ARSCNView is a subclass of SCNView. That means that you have full access to SceneKit via ARKit: you have access to the scene graph, and you can add post-processing, particles, physics, force fields, custom shaders — you can do basically everything.
And so with ARKit and SceneKit, it's really easy to set up a scene like in this example here. I would let you guess where this video is shot. There is a hint in the background, and my accent is another hint.
And so here, to set up a scene like this, we just needed to load a SceneKit object as you usually do, set up an ARSCNView, and run an AR session. Then, from the ARSCNView delegate, you just need to attach your 3D objects to the anchors detected by ARKit. It's that simple.
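A minimal sketch of that flow (the asset and class names are assumptions):

```swift
import ARKit
import SceneKit

class ARViewController: UIViewController, ARSCNViewDelegate {
    let sceneView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = view.bounds
        view.addSubview(sceneView)
        sceneView.delegate = self
        sceneView.session.run(ARWorldTrackingConfiguration())
    }

    // Attach a 3D object to each anchor ARKit detects.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let objectScene = SCNScene(named: "object.scn") else { return }  // hypothetical asset
        for child in objectScene.rootNode.childNodes {
            node.addChildNode(child)
        }
    }
}
```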
And to support ARSCNView, we extended our material properties to support AVCaptureDevice and AVPlayer as natively supported types of content.
That means it's very easy: with just one line of code, you can now directly connect the video feed of your iPhone or iPad to a texture in SceneKit, or to the scene background.
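For example, a minimal sketch (the material, scene, and movie path are hypothetical):

```swift
import SceneKit
import AVFoundation

let material = SCNMaterial()
let scene = SCNScene()

// Live camera feed as a texture, and a movie as the scene background.
material.diffuse.contents = AVCaptureDevice.default(for: .video)
scene.background.contents = AVPlayer(url: URL(fileURLWithPath: "/path/to/movie.mp4"))  // hypothetical path
```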
Now, I would like to give you a little trick regarding augmented reality and shadows. Your object will look better integrated if it casts a shadow on the ground, like in this example. To achieve this, when editing our object in Xcode, we added a directional light that casts shadows, and then we added a plane to receive the shadows. Now, the goal is to hide the plane, because we don't want to see it in our scene, but we cannot simply make the plane hidden; otherwise, the shadow would disappear. So the trick is to configure the plane to not write into the color buffer. This can be done programmatically or using the inspector in Xcode. If I do this, the plane disappears, but the shadow does as well.
But the plane still writes into the depth buffer, which means that I can now configure my light and change the shadow technique from forward to deferred shadows.
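A minimal sketch of this trick (the nodes are hypothetical):

```swift
import SceneKit

// Invisible shadow-catching plane: skip color writes but keep depth writes.
let floorNode = SCNNode(geometry: SCNPlane(width: 10, height: 10))
floorNode.geometry?.firstMaterial?.colorBufferWriteMask = []

// Deferred shadows are composited in a second pass using the depth buffer,
// so the shadow still shows up on the hidden plane.
let lightNode = SCNNode()
lightNode.light = SCNLight()
lightNode.light?.type = .directional
lightNode.light?.castsShadow = true
lightNode.light?.shadowMode = .deferred
```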
And now, the shadow comes back, because deferred shadows work as a second, full-screen pass that composites the shadows over the image based on the depth buffer of the scene and the depth buffer of the light's shadow map. With deferred shadows, the plane still exists as far as shadows are concerned, because it is still rendered into the depth buffer. So that's the trick for shadows in ARKit.

Now, I would like to talk about GameplayKit. GameplayKit entities and components now support driving SceneKit objects. The typical use case is when you want to implement character behaviors, and that's what we did in our Fox 2 example for the enemies. In our case, we have a GKScene with two entities, one for each enemy, and we implemented the two behaviors as GKComponents. Now, the main interest resides in the Xcode integration, which allows us to assign the behaviors to our enemies directly in Xcode. We can also use the Xcode inspector to directly edit the properties of our behaviors.
Next, Model I/O. Model I/O improved its support for USD. As a reminder, USD stands for Universal Scene Description, and it is a 3D file format developed by Pixar.
So SceneKit and Model I/O improved their support for USD, in particular for materials and for animations. If you want more information about USD, you can check our presentation from last year, and there will also be a session on Friday afternoon, From Art to Engine with Model I/O.
Next is UIFocus. The UIFocus engine is part of UIKit, and it allows you to select and focus objects on your Apple TV using the Siri Remote.
Now, SCNNode conforms to UIFocusItem, which means that you can decide which objects in your scene should be focusable.
Then, the focus engine will call you back and tell you which object should now be focused in response to gestures on the Siri Remote.
And then, it's up to you to decide what action you want to take and what visual feedback you want to provide. To briefly explain how this works, let's say here I configure the white pieces to be focusable.
SceneKit will automatically compute the projected area of your object and give that to the focus engine.
Then, the focus engine will take care of selecting the right object based on the gestures on the remote. And SceneKit will take care of keeping the projected areas on the screen updated if you move your objects or if you move the camera around.
So the only thing you have to do here is define which objects should be focusable, and that's all.
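A minimal sketch on tvOS (the piece node and the scaling feedback are assumptions, not the sample's approach):

```swift
import SceneKit
import UIKit

class GameViewController: UIViewController {
    let whitePieceNode = SCNNode()   // hypothetical focusable piece

    override func viewDidLoad() {
        super.viewDidLoad()
        // Opt the node into the focus engine.
        whitePieceNode.focusBehavior = .focusable
    }

    // The focus engine reports focus changes; react with your own visual feedback.
    override func didUpdateFocus(in context: UIFocusUpdateContext,
                                 with coordinator: UIFocusAnimationCoordinator) {
        super.didUpdateFocus(in: context, with: coordinator)
        if let node = context.nextFocusedItem as? SCNNode {
            node.scale = SCNVector3(1.2, 1.2, 1.2)   // simple highlight
        }
        if let node = context.previouslyFocusedItem as? SCNNode {
            node.scale = SCNVector3(1, 1, 1)
        }
    }
}
```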
Now, to conclude this presentation, I would like to mention a few rendering additions we made to our renderer. The first one is support for point cloud rendering.
We improved SCNGeometryElement and added properties that allow you to configure the appearance of point clouds — that is, geometries created from an array of points with the point primitive type. With these properties, you can configure the point size in screen space and in world space, and SceneKit will render your point cloud, texture it, and light it according to the materials attached to your geometry.

Then, we added two new transparency modes to our materials to address the problem of double-sided or concave objects that are semitransparent. If you take the object on the left, a double-sided, semitransparent sphere, you will notice artifacts because the polygons are not rendered from back to front. SceneKit renders your objects from back to front for correct transparency, but it won't sort the individual polygons of a geometry. To address that problem, we introduced two new transparency modes. The first one, single layer, renders your object in two passes so that only the front-most faces are rendered in the second pass. That fixes the artifacts, but, as you can see, the object no longer looks double sided. The typical use case for this mode is when you want to fade out an object and avoid the artifacts due to overlapping polygons during the fade-out. The second mode, dual layer, also renders your object in two passes: the first pass renders the back faces, and the second pass renders the front faces. It allows us to render this double-sided sphere correctly, and also the gem from our Fox demo — the gem that comes from the Swift Playgrounds Learn to Code curriculum.
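A minimal sketch of both additions (the tiny point cloud and the values are illustrative):

```swift
import SceneKit

// A tiny point cloud: three points rendered with the .point primitive type.
let points = [SCNVector3(0, 0, 0), SCNVector3(1, 0, 0), SCNVector3(0, 1, 0)]
let source = SCNGeometrySource(vertices: points)
let indices: [Int32] = [0, 1, 2]
let element = SCNGeometryElement(indices: indices, primitiveType: .point)

// New in this release: control how points are rendered.
element.pointSize = 0.05                        // size in world space
element.minimumPointScreenSpaceRadius = 1       // clamp the on-screen size (pixels)
element.maximumPointScreenSpaceRadius = 10

let cloud = SCNGeometry(sources: [source], elements: [element])

// New transparency modes for semitransparent, double-sided geometry.
let sphere = SCNSphere(radius: 1)
let material = SCNMaterial()
material.transparency = 0.5
material.isDoubleSided = true
material.transparencyMode = .dualLayer          // or .singleLayer for fade-outs
sphere.materials = [material]
```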
One last addition is support for cascaded shadow maps. Cascaded shadow maps are an optimization of shadow maps: the idea is to split your shadow map into multiple textures, or cascades, to allocate more precision to areas that are close to the camera and less precision to areas that are far away.
To configure cascaded shadow maps, just tell us how many cascades you want, configure the size of your shadow map, and then adjust a parameter named shadowCascadeSplittingFactor to control the distribution of your cascades depending on the distance from the point of view. For example, this is a recording from our Fox 2 example. In our case, we're using four cascades, represented by different color tints here, and here I'm playing with the splitting factor so that you can get an idea of how this impacts the distribution of the cascades depending on the distance from the point of view.
As you can see, the red area represents the first cascade; it covers a smaller area of the world, so we have higher precision there, whereas the green cascade in the back covers a much larger area, so we have less precision for the green cascade.
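A minimal sketch of the configuration (the values are illustrative):

```swift
import SceneKit

let sun = SCNLight()
sun.type = .directional
sun.castsShadow = true

// Split the shadow map into cascades; tune how they are distributed over distance.
sun.shadowCascadeCount = 4
sun.shadowMapSize = CGSize(width: 2048, height: 2048)
sun.shadowCascadeSplittingFactor = 0.5   // 0..1, biases precision toward the camera
```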
Note that you are also able to visualize the cascaded shadow maps of your own scene using Xcode.
Okay, to wrap up: this new version of SceneKit introduces new camera APIs to simplify the control of cameras, along with some great new effects like depth of field, motion blur, and screen space ambient occlusion.
There's also great GPU support for tessellation and subdivision, and new animation APIs that are much more performant and support animation blending.
There are great new tools to capture, trace, and edit your scenes. And of course, a really great story for augmented reality thanks to ARKit.
For more information, please check our developer website; you can access our sample code from there. There are also some related sessions about the graphics technologies that SceneKit integrates with, like Metal, SpriteKit, and Model I/O, of course.
There is a session about tvOS and UIFocus support, and there will be a great session tomorrow morning about using SceneKit in Swift Playgrounds. And that's about it. Thank you. [ Applause ]