Streaming is available in most browsers and in the Developer app.
Optimize your 3D assets for spatial computing
Dive into the end-to-end process of creating optimized 3D assets. Explore recommended practices for optimizing meshes, materials, and textures in your digital content creation tools. Learn how to use Shader Graph, baking, and material instances to enhance your 3D scene while improving performance. Take full advantage of native tools to work with your assets more effectively and improve your app’s performance.
Chapters
- 0:00 - Introduction
- 1:05 - Before you begin
- 2:21 - Polygon count
- 2:59 - Export from DCCs
- 6:25 - Efficient texture use
- 13:07 - Optimizing materials
- 15:07 - Sky dome setup
- 16:03 - Image-based lighting
Resources
Related Videos
WWDC24
Download
Hi, I’m Scott Wade, a technical artist working on Apple Vision Pro apps and content. Welcome to the session “Optimize your 3D assets for spatial computing”. In this session I’m going to use this sample scene to talk you through the steps you can take in order to make your content look its best. You can download the sample linked to this session to follow along. The high-resolution displays and high frame rate of Apple Vision Pro create a great experience for users, but they do make developing content for Apple Vision Pro different from other platforms you may have worked on. In this session we’ll discuss some key considerations to keep in mind before you begin creating your 3D content. Then we’ll discuss what to target for polygon count, what you need to know when exporting from your content creation tool of choice, how to make efficient use of your texture memory, optimizing your materials and shaders, tips for setting up a sky dome, and finally how you can customize image-based lighting efficiently.

Let’s get started with some preliminary aspects to keep in mind. Before you start creating assets, the most important thing to consider is how your content is going to be viewed. Will it be an immersive app with lots of assets? Or will it be in a volume that’s in your space? Will it be in the shared space, meaning that other applications may be rendered at the same time? And importantly, will the presentation of your content change depending on user input? All of these choices will have an impact on the performance of your application and may necessitate different content optimizations. The reason why the viewer’s view matters so much is that on Apple Vision Pro the GPU is only responsible for pixels you actually render, not passthrough video. So the GPU workload scales based on how large your app appears to the viewer, meaning that non-immersive scenes may be much faster to render. By contrast, an immersive scene is rendering the entire view every frame, meaning the GPU must render millions of additional pixels. With that in mind, it can be hard to give specific targets for performance without testing, so you should test early. Apps that are in the shared space may have other applications running at the same time, but immersive apps may use significantly more screen area, increasing frame times. Because of this, immersive applications will often need the most optimization work.

The polygon count of your scene, measured in triangles, is a key performance metric to be aware of. As a general guideline, we recommend no more than 500,000 triangles for an immersive scene, and 250,000 for applications in the shared space. Remember that this is highly dependent on what the viewer can see at any one time. We also recommend splitting very large objects like terrain into chunks so that they can be culled when off camera. If you want a safe target, try to keep the number of triangles in view to around 100,000, as that should give you plenty of headroom. Keeping these factors in mind before you begin building your content can help you stay in budget.

Next, you may be asking what apps you can use to create and edit content. Luckily, you have lots of options, because the most important thing is being able to save your work out in USD format. With that in mind, applications like Blender, Autodesk Maya, SideFX Houdini, and many others are great choices. Visit openusd.org to learn more about the USD ecosystem. I’m going to be using Blender 4.1, so let’s switch over to that.
For now I’ll just focus on a few individual assets within my larger scene. I’ve modeled them with a polygon count that’s appropriate for how far away they will be from the viewer. This can be hard to judge, so something I like to do is create a camera in my scene to help me visualize what my content will look like from the viewer’s perspective. Here I’ve placed the camera at average eye height, 1.5 meters, at the center of my scene. Now I can rotate the camera around to get an idea of how large or small my content will appear to the viewer.
I’m ready to export my assets as USD, but that means we need to choose a specific USD file type to use, because there are multiple. Let’s quickly discuss what each is best suited for. USDA is an ASCII format, meaning it can be read as plain text. USDA is great for collaborative files like scenes where multiple people might be making edits; that’s why Reality Composer Pro uses it as its scene format. The text format also lets you resolve merge conflicts if you have multiple people working on the same file at once. USDC is a binary format. While you can open it in any USD-compatible app, it can’t be read as plain text, but it’s much more efficient for storing large amounts of data such as geometry.
USDZ is a zipped format. If you’ve worked with AR assets in the past, you might be familiar with it. USDZ takes all the dependencies for an asset, such as textures, and packages them in an internal file structure. This makes USDZ great for publishing and distributing assets; however, they can’t be edited without unzipping.
My asset is primarily geometry so I’m going to save it out as a USDC.
Even though I already have a material set up in Blender, I’m going to uncheck materials, as I want to take advantage of some of Reality Composer Pro’s Shader Graph features. Then I’ll hit Export. Bringing this into Reality Composer Pro’s Project Browser and adding it to my scene, you’ll notice two things.
First is the magenta color: this is just Reality Composer Pro telling us that no material is assigned, which is what we expect. The other thing you’ll notice is that the object is rotated.
This is a good opportunity to talk about coordinate systems.
RealityKit uses a coordinate system that assumes -Z is forward, Y is up, and +X is to the right, whereas Blender uses Z as up and Y as forward. Some other applications may also use a different unit scale; RealityKit uses meters. If your content is importing with a scale already applied, this may be why. Be sure to test asset importing early on so that you can have a consistent plan for how to address any inconsistencies. New to macOS this year is the ability to change coordinates and scales in Preview. Check out the session “What’s new in USD and MaterialX” to find out more. I’ll just adjust the assets when they get imported into Reality Composer Pro by applying a -90 degree rotation in X.
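If you’d rather do that fix-up in code when the asset loads, here’s a minimal sketch in RealityKit; the function and entity names are placeholders, and it assumes the asset was authored Z-up:

```swift
import RealityKit
import simd

// A minimal sketch, assuming the imported asset was authored Z-up: rotate it
// -90 degrees around X so it matches RealityKit's Y-up, -Z-forward convention.
func fixUpAxis(of importedAsset: Entity) {
    importedAsset.transform.rotation = simd_quatf(angle: -.pi / 2,
                                                  axis: SIMD3<Float>(1, 0, 0))
}
```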
Now let’s move on to applying our textures and see how we can make them more efficient. We’ll use Reality Composer Pro’s Shader Graph to set up our materials and textures. If you need a primer on setting up materials in Shader Graph, I highly recommend the “Explore materials in Reality Composer Pro” session from 2023. I’ve got a basic PBR material already started and applied for this asset; I just need to add my textures. With those in the Project Browser, I can go into the material and select the file reference to add in my base color.
When you add in a color image you’ll also need to select a Color Space.
Let’s do a quick overview of color spaces as they relate to textures. In Reality Composer Pro you’ll see color spaces as two terms separated by a dash, such as sRGB - Display P3.
The first term is the Transfer Function; this defines the curve by which values are encoded. sRGB is intended for perceptual textures, ones that we directly see in the renderer, like base color or unlit color. Linear is used for data or HDR images. The next term is the Gamut. This defines the maximum extents of the color space. Apple Vision Pro uses Display P3 as its native display gamut, but textures using other gamuts will be converted into Display P3 when you build your app in Xcode. Make sure you select the color space your texture was authored in to get consistent results. My textures were authored using sRGB - Display P3, so I’ll select that.
Now I’ll add in the textures for our roughness and ambient occlusion.
You’ll notice that these grayscale textures don’t have an option to select a color space. This is because Shader Graph correctly assumes they are data, meaning we don’t want to apply any transformations to them. However, these grayscale textures won’t get compressed when our app is built, meaning they’ll take up more space than they need to. There’s a way we can solve this, called texture packing.
Texture packing is where we combine texture data from separate files into one larger file by utilizing the different channels of a color texture. For example, we could take our roughness, metallic, and ambient occlusion textures and use them as the red, green, and blue channels respectively, then combine that data into a single texture file. As an additional bonus, this RGB texture will now get compressed when we build our app. Just packing your grayscale textures together can reduce the total size of a PBR asset by up to 40%. Bringing in the packed texture, I’ll be sure to select the vector3 type for this node so that the shader knows this is a data texture.
That way, no color space transformations will be applied. To split the RGB texture into separate individual channels, I’ll use a Separate 3 node.
Then I can hook up the roughness and ambient occlusion channels to their individual inputs.
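The packing step itself is simple wherever you choose to do it. Here’s a minimal sketch of the idea in Swift, independent of any particular image library; the function name and channel assignment are just for illustration:

```swift
// Given three equally sized 8-bit grayscale buffers (however you decoded them),
// interleave them into a single RGB buffer before writing the packed texture.
func packChannels(roughness: [UInt8], metallic: [UInt8], occlusion: [UInt8]) -> [UInt8] {
    precondition(roughness.count == metallic.count && metallic.count == occlusion.count,
                 "All three maps must share the same resolution")
    var rgb = [UInt8](repeating: 0, count: roughness.count * 3)
    for i in 0..<roughness.count {
        rgb[i * 3 + 0] = roughness[i]  // Red   <- roughness
        rgb[i * 3 + 1] = metallic[i]   // Green <- metallic
        rgb[i * 3 + 2] = occlusion[i]  // Blue  <- ambient occlusion
    }
    return rgb
}
```

In practice you would usually pack textures in your content creation tool or a batch script; the part that matters is that the channel layout matches how the material splits the texture back apart.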
Now let’s add our normal map. Normal maps are treated as data, so again we’ll use the vector3 type.
But when we connect that texture, suddenly the asset looks wrong. Don’t worry: if your normal maps look strange, there are usually two main issues, incorrect format and incorrect data range. There are two commonly used formats for normal maps, OpenGL and DirectX. RealityKit uses normal maps that are in the OpenGL format. They may look similar, but in the DirectX format the green channel is inverted. In my case I know my textures are in the OpenGL format, so let’s discuss the data range issue.
Our material is expecting a normal map with a range from -1 to 1, but our image only contains values from 0 to 1. To adjust the range of our image we need to do a little bit of math: we can multiply by 2 to increase the range to 0 to 2.
Then we can subtract 1 to shift the range down to -1 to 1.
Now it looks great! Thankfully, RealityKit has a node that will do this for you in one step, called Normal Map Decode.
A quick note: you may notice there’s also a node in Shader Graph called NormalMap. Unlike the Blender node of the same name, this node does not change the range of our texture. Be sure to read the tooltips on nodes in case they differ from other applications. With that added, my asset now looks great.
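If it helps to see the range math written out, here it is as plain arithmetic on a single texel; this assumes the texture stores its components in the 0 to 1 range:

```swift
// 0...1 -> -1...1, the same remap the Normal Map Decode node performs.
func decodeNormal(_ encoded: SIMD3<Float>) -> SIMD3<Float> {
    encoded * 2 - 1
}
```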
It would be nice if I didn’t have to repeat that setup for every single asset though. Luckily, Reality Composer Pro has a feature for this called material instances. Material instances are a great way to reuse the logic of one material while being able to change exposed parameters. This can save time when setting up assets, but it’s also good for performance within your app, because the engine doesn’t need to load redundant copies of the same shader graph. Right now there aren’t any exposed parameters in our material, so every instance would be identical. In this case I’m going to select all of the file references, right-click, and promote them to inputs.
You can see that those inputs are now accessible in the panel on the right. Now I can right-click on my main material and select Create Instance, and I can see the texture inputs are editable even though I can’t edit the base graph itself. I’m going to add in my next asset.
And also rename my material instance so that I know what asset it goes with.
When I apply it, we can see it’s still using the old textures.
But now when I select that instance, I can swap in my base color, normal, and packed textures.
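The same promoted inputs can also be overridden from code at runtime. Here’s a hedged sketch using RealityKit’s ShaderGraphMaterial; the material path, parameter name, texture name, and the RealityKitContent bundle are assumptions based on the default Reality Composer Pro project template, not names from this sample:

```swift
import RealityKit
import RealityKitContent  // the Swift package Reality Composer Pro generates

// Load the base Shader Graph material and swap one of its promoted inputs.
func applyInstanceTextures(to model: ModelEntity) async throws {
    var material = try await ShaderGraphMaterial(named: "/Root/RockMaterial",
                                                 from: "Scene.usda",
                                                 in: realityKitContentBundle)
    let baseColor = try TextureResource.load(named: "Cart_BaseColor")
    try material.setParameter(name: "BaseColorTexture",
                              value: .textureResource(baseColor))
    model.model?.materials = [material]
}
```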
Swapping textures on the instance saves me from having to split out my packed texture again or change the range on my normal map each time I create an asset. Now we have a second asset set up, and we did it in a fraction of the time it took to set up our first one. Next, let’s look at our scene and discuss optimized material and shader choices. I’ve brought in the scene we saw earlier to go with my individual assets.
This entire scene uses unlit materials, with most assets only using a single texture. I’ve accomplished this by setting up lighting in an external application like Blender and baking all of the shading information down into a single texture per object. Using real-time lights in RealityKit has performance implications; we recommend using dynamic lights sparingly and baking whenever possible. You’ll also notice that my scene is split up into chunks. This is important, as it allows entities that are not visible to be culled when they’re off camera.
If we look at my scene in Blender with a checkerboard material applied, you can also see how much my texture resolution scales down with distance. In fact, more than half of my texture resolution is dedicated to the 5 to 10 meters directly around the viewer. If the viewer isn’t going to be moving much, we strongly recommend scaling your texture resolution based on distance and size on screen.
Something to consider even for unlit scenes is your use of alpha transparency, which you should try to minimize as much as possible. Transparent materials are expensive because the GPU needs to compute the same pixels multiple times, once for each layer of transparency. This is referred to as overdraw, and it’s particularly problematic if you have many layers of transparency stacked on top of one another.
My grass, for example, might look like it’s using a transparent texture, but I’ve actually used geometry for my blades of grass. Similarly, on my shrubs I’ve used an opaque core and only added transparent cards where they’ll have the most effect, along the edges. This adds to my polygon count, but in our experience it’s almost always better to trade more triangles for fewer transparent pixels, provided you stay within your overall triangle budget, of course. With our scene in place, we now need a sky dome. Otherwise, our audience will see their passthrough environment beyond the edge of our scene. In my case I’m going to use an inverted sphere I created in Blender, but you can use any geometry you like. Just make sure it’s large enough, or far enough away from the user. My sphere has a diameter of about 500 meters.
You’ll want sky domes or skyboxes to have fairly high-resolution textures. For an image like this with small details, you’ll want at least 8K horizontal resolution, or potentially even higher, to reduce blurriness.
Given the size of this texture, it’s a good idea to crop it if there are parts that won’t be seen by the user. In this case I’ll be cropping just below the horizon line. Let’s take a look at my sky dome in my scene. You can see how much of the screen is going to be taken up by just this one asset. It’s very important to use an unlit material on sky domes and skyboxes; they’re one of the first assets you should optimize.
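If you assemble the dome at runtime rather than in Reality Composer Pro, the setup is only a few lines. This is a rough sketch; "SkyDome" and "Sky_8k" are placeholder asset names, not files from this project:

```swift
import RealityKit

// Load the inverted-sphere mesh and give it an unlit material that simply
// displays the baked sky texture.
func loadSkyDome() throws -> ModelEntity {
    let dome = try ModelEntity.loadModel(named: "SkyDome")
    var material = UnlitMaterial()
    let sky = try TextureResource.load(named: "Sky_8k")
    material.color = .init(texture: .init(sky))
    dome.model?.materials = [material]
    return dome
}
```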
With our skybox in place the scene is really coming together.
But if we take a look at our PBR assets from before, you may notice they don’t quite look right. They don’t seem to have the same lighting as the rest of the scene; they have highlights that clearly aren’t coming from the sun, which should be behind them. That’s because our sky dome is just a mesh; it doesn’t actually contribute to how PBR assets are lit. For that, we need to talk about how to implement custom image-based lighting. You may be familiar with image-based lighting, or IBL; the general idea is that we use a 2D image representing the environment to add shading to our asset. In RealityKit, PBR materials use this method of shading. The main reason to use image-based lighting is so that your material can react dynamically to changes in the real-world environment, such as the user turning on a light in their room. That’s great for some experiences, but in this scene I just want my assets to look like they’re lit by the environment I’ve created, not the real-world environment. To do this we’ll need to add in a few components. First, I’ll create a new Transform and name it IBL so I know what it’s for.
Then I’ll attach an Image Based Light component.
Here I’ll add in a pre-rendered high dynamic range image of my scene.
This is one of the few textures that needs to be HDR in order to look good. IBL textures should be in the lat/long format, also called equirectangular. HDR textures are expensive to load and to render, but because I don’t have a lot of highly reflective objects in my scene, I’m going to use a very small texture. This texture will be fine at 512 pixels across, meaning it’s a fraction of the size of my sky dome texture. As long as you don’t have mirror-like surfaces, you often won’t need much more resolution than that.
You’ll notice that our image isn’t affecting our assets yet, and that’s because we need to add one more thing: an Image Based Light Receiver component. I’ll select the parent node for my PBR objects and apply my receiver here.
This will apply it to all the child entities. Now I just need to select the IBL entity I created earlier to link them, which is why I made sure to label it.
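For reference, here’s a condensed sketch of the same setup in RealityKit code; it assumes the HDR lat/long image ships as an environment resource named "SceneIBL", and all of the names are placeholders:

```swift
import RealityKit

// One entity provides the image-based light; the receiver component goes on the
// parent of the PBR assets so all of its children are lit by it.
func setUpImageBasedLighting(on pbrRoot: Entity) async throws {
    let environment = try await EnvironmentResource(named: "SceneIBL")

    let ibl = Entity()
    ibl.name = "IBL"
    ibl.components.set(ImageBasedLightComponent(source: .single(environment)))
    pbrRoot.addChild(ibl)

    pbrRoot.components.set(ImageBasedLightReceiverComponent(imageBasedLight: ibl))
}
```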
Now you can see the lighting and reflections of my scene in my assets.
To recap: your skybox should be a large, low dynamic range texture, ideally using an unlit material. Your IBL should be an HDR texture in lat/long format, ideally at a low resolution. And you’ll need to add both an Image Based Light component and a receiver in order to see an effect. Sometimes you’ll need a material that has more than unlit color but doesn’t need to be a full PBR shader. For that, we can use the EnvironmentRadiance node in Shader Graph. This wagon wheel is supposed to have a metal rim, but because it’s baked, it really doesn’t look much like metal.
At a distance it might be fine, but up close you’d be able to tell that there are no reflections. You might think we should therefore use a PBR material, like we did on the previous assets, rather than an unlit material. But there’s actually a way to achieve this while still using an unlit shader. In the material graph for this asset, which I’ve separated out, I’m going to use the EnvironmentRadiance node, which lets me use the shading information coming from our IBL inside an unlit graph. If we look at the specular output of the EnvironmentRadiance node, we can see our scene reflected based on view angle.
We could also utilize the diffuse lighting, but in our case we don’t need that because it’s already baked into our texture. Here we just want to add the specular radiance and our baked texture together.
This gives us view dependent specular reflections and helps the material look a lot more like metal.
Using the EnvironmentRadiance node is more expensive than a plain unlit material, but it’s still cheaper than using a full PBR material. Use it in situations where you just want some dynamic lighting effects but don’t want to use full PBR. With that done, our scene is looking really good!

As a final check we can take a look at the Statistics panel in Reality Composer Pro and review where we’re at in terms of budget. We’re at 108,000 triangles, so we’re right about on target for our polygon count. Even better, that’s 108,000 triangles total. Because our scene is split into chunks, our users are probably only going to see about half of that at any one time. Our total number of meshes, textures, and materials is quite low. We have 1.2 GB of textures, but remember that our textures haven’t been compressed yet. When we compile our application in Xcode, our textures will be compressed, lowering this to around 300 MB. We’ve also minimized the amount of transparent materials and are using unlit shading whenever possible.

Finally, if you want to profile the performance and rendering of your application, you can use RealityKit Trace. Or, if you just want to debug the entities in your scene or detect which parts of your code are causing problems, you can use the new RealityKit Debugger. Please check out these sessions to learn more about these tools.
We’ve talked about a lot of ways to optimize your 3D content for spatial computing. As a reminder: keep your polygon count low, focusing on how many triangles can be seen at one time. Optimize your assets based on how large or small they’ll be on screen. Pack your textures to save memory and take advantage of compression. Use unlit materials and baked textures whenever possible. Consider using the EnvironmentRadiance node when unlit materials aren’t enough. And finally, keep your skybox textures large but your IBL texture small. If you’d like to learn more about creating realistic custom environments for your immersive apps, please check out this session. With these concepts in mind, you can create performant, high-fidelity 3D content for Apple Vision Pro. We can’t wait to see what you will bring to the world of spatial computing. Thank you!