Optimisation for VR in Unity
Checked with version: 5.3
Achieving the target frame rate for your chosen platform is essential to ensuring users have a great, nausea-free VR experience, so optimisation is a critical part of VR development. Unlike on some other platforms, with VR it’s best to optimise early and often, rather than leaving it to a later stage in development. Testing regularly on the target devices is also very helpful.
VR is computationally expensive compared to non-VR projects, mainly due to having to render everything once per-eye, so make sure you’re familiar with some of the common issues when creating your VR experience. If you’re aware of these issues beforehand, you can design your project around them, saving a lot of hard work later on in your project lifecycle.
Mobile VR can be particularly demanding. Not only do you have the overhead of running a VR application, but mobile devices are far less powerful than a desktop PC, so optimisation is of critical importance in your project.
As hitting the target framerate is crucial, all optimisation counts. Don’t forget to optimise your code as much as possible. See the Unity guide on optimising code for more detail.
A great deal of information on optimising for VR can be found on the Oculus website, and it is worth familiarising yourself with these resources first.
Unity Editor optimisation tools
There are a number of useful tools and techniques in Unity that will help you optimise your content for VR.
The profiler will help you understand how much time is spent rendering each frame of your game, and splits this into CPU, Rendering, Memory, Audio, Physics and Network. Understanding how to use the Profiler is critical to examining performance, and identifying areas that need optimisation.
To find out more about using the Profiler, please see these links:
The Frame Debugger allows you to freeze playback on a frame and step through the individual draw calls to see how your scene is constructed, and to identify where it could be optimised. You may find you’re rendering objects you don’t need to; removing these will greatly help in reducing the draw calls per frame.
More information on using the Frame Debugger can be found here:
VR Optimisation Fundamentals
As optimisation is a huge field, and requirements vary per platform and per project, we have provided areas of interest for further reading.
In general, existing optimisation techniques carry over well into VR development, so existing knowledge can often be applied.
Remove any faces from your geometry that will never be seen in VR; we don’t want to render something that will never be seen. For example, if the user will never see the back of a cupboard because it’s against the wall, remove those faces from the model.
Simplify your meshes as much as possible. Depending on your target platform, you may want to look into adding detail via textures, and potentially parallax mapping and tessellation, though this can impact performance, and may not be suitable or available for your target platform.
The Overdraw view allows you to see which objects are drawn on top of one another, which is a waste of GPU time. Look at reducing overdraw as much as possible. You can view overdraw in the Scene View using the Scene View Control Bar.
Normal shaded view:
Level of Detail
Level Of Detail (LOD) rendering reduces the number of triangles rendered for an object as its distance from the camera increases. By adding an LOD Group component and providing lower-detail meshes for the distance groups further from the camera, you reduce the load on the hardware and improve rendering performance, as long as your objects aren’t all close to the camera at the same time.
Using Simplygon can automate much of the asset preparation process for LOD.
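LOD groups can also be configured from script. The sketch below assumes a GameObject with three child renderers for high, medium and low detail; the field names and transition heights are illustrative, not part of the sample project.

```csharp
using UnityEngine;

// A minimal sketch of setting up an LOD Group from script. The three
// renderer fields are hypothetical placeholders for your own meshes.
public class LODSetup : MonoBehaviour
{
    public Renderer highDetail;
    public Renderer mediumDetail;
    public Renderer lowDetail;

    void Start()
    {
        LODGroup lodGroup = gameObject.AddComponent<LODGroup>();

        // Each LOD entry pairs a screen-relative transition height
        // (the fraction of screen height below which the next level
        // takes over) with the renderers used at that level.
        LOD[] lods = new LOD[]
        {
            new LOD(0.6f, new Renderer[] { highDetail }),
            new LOD(0.3f, new Renderer[] { mediumDetail }),
            new LOD(0.1f, new Renderer[] { lowDetail })
        };

        lodGroup.SetLODs(lods);
        lodGroup.RecalculateBounds();
    }
}
```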
Draw Call batching
Batch Draw Calls wherever possible, using Static Batching and Dynamic Batching. This can greatly increase performance. See the Unity guide to Draw Call Batching.
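Static batching is usually enabled by ticking the Static checkbox on objects in the editor, but scenery instantiated at runtime can be combined from script. A minimal sketch, assuming a hypothetical "environmentRoot" object whose children share materials and will never move:

```csharp
using UnityEngine;

// A minimal sketch of combining runtime-spawned static scenery into
// static batches. Children of the root must share materials to batch
// together, and must not move after combining.
public class BatchEnvironment : MonoBehaviour
{
    public GameObject environmentRoot; // hypothetical parent of static scenery

    void Start()
    {
        // Combines the meshes of all active children into static batches.
        StaticBatchingUtility.Combine(environmentRoot);
    }
}
```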
Eliminate dynamic lighting wherever possible: bake lighting where you can, and avoid realtime shadows.
See the Unity guide to Lighting and Rendering for more information.
Light probes allow you to sample the lighting at points in the scene and apply this to dynamic objects. This is relatively fast, and frequently has great visual results.
Reflection probes store cubemaps of their surroundings to allow for realistic reflections, and can impact performance. Please be aware that real-time mode reflection probes are too slow to use in VR at this time.
Occlusion Culling stops objects from being rendered if they cannot be seen. For example, we don’t want to render another room if a door is closed and it cannot be seen.
Depending on your project and target platform, you may be able to implement Occlusion Culling, which can significantly increase performance.
An example of frustum culling:
An example of Occlusion Culling:
Anti-Aliasing is highly desirable in VR, as it helps to smooth the image and reduce jagged edges. If you are using Forward Rendering, you may be able to enable MSAA in Quality Settings, and you should always look to enable this for Gear VR where possible.
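MSAA can also be set from script, for example to pick a level appropriate to the current device. This sketch mirrors the Anti Aliasing setting in Edit > Project Settings > Quality, and only takes effect with Forward Rendering:

```csharp
using UnityEngine;

// A minimal sketch of enabling 4x MSAA at startup. Valid values for
// antiAliasing are 0 (off), 2, 4 and 8.
public class EnableMSAA : MonoBehaviour
{
    void Awake()
    {
        QualitySettings.antiAliasing = 4;
    }
}
```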
In general, you’ll want to use Texture Atlasing as much as possible in your project, and reduce the amount of separate textures and materials used.
To simplify and speed up this process, MeshBaker can be used to bake textures, meshes, and materials to increase performance in your game.
Holden from Turbo Button Inc talks about optimisation in general, and using MeshBaker at Oculus Connect 2.
Please note that normal maps generally don’t look good in VR, so you may wish to avoid these. Please see the Oculus documentation on Rendering for more information on textures.
Where appropriate, try to use the most basic shaders. On Gear VR, you might want to make use of the inexpensive Mobile > Unlit (Supports Lightmap) shader, and to lightmap your scenes.
Fullscreen effects are expensive, and you will probably want to avoid these entirely on Gear VR.
The Quality Settings determine various aspects of the visual quality of your project. Altering these properties may help you increase performance at the expense of visual quality.
You may want to split your project into separate scenes to help performance. If you do this, be aware that when loading the next scene you will want to avoid freezing head tracking, as this will lead to nausea.
To help avoid this, you might want to implement a loading screen with head tracking while asynchronously loading the next scene using SceneManager.LoadSceneAsync.
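A minimal sketch of this pattern, assuming a placeholder scene name of your own ("NextScene" here is hypothetical): the scene loads in the background while your loading environment keeps rendering, so head tracking never freezes.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// A minimal sketch of asynchronous scene loading behind a tracked
// loading environment.
public class AsyncSceneLoader : MonoBehaviour
{
    public string sceneToLoad = "NextScene"; // placeholder scene name

    public void LoadNextScene()
    {
        StartCoroutine(LoadRoutine());
    }

    private System.Collections.IEnumerator LoadRoutine()
    {
        AsyncOperation load = SceneManager.LoadSceneAsync(sceneToLoad);

        // Hold activation until your loading visuals are ready;
        // progress stalls at 0.9 until activation is allowed.
        load.allowSceneActivation = false;

        while (load.progress < 0.9f)
            yield return null;

        load.allowSceneActivation = true;
    }
}
```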
Optimisation Techniques in Sample Scenes
We have implemented a number of optimisation techniques in the sample scenes to ensure good performance on Gear VR and DK2.
As we wanted to target both platforms with the same project, we catered for the lowest-end hardware; in this case Gear VR. We chose a low-poly art style, with a few basic colours to help items stand out in the environment.
As we are using Forward Rendering, we enabled 4x MSAA in Edit > Project Settings > Quality Settings for better visual quality:
Let’s take a quick look at the techniques used in the scenes:
Menu scene optimisations
Like all of our scenes, this uses low-poly assets, and does not use realtime lighting.
We are using a custom shader called SeparableAlpha on the menu panels, which allows a separate alpha channel to be defined for the sequence of images. This means that not every frame needs its own alpha channel, which saves on file size and removes some aliasing.
Flyer scene optimisations
We enable fog at runtime in Flyer to ensure that spawned objects don’t pop into view. This also allows a shortened view distance, meaning fewer objects need to be rendered.
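Enabling fog and matching the camera's view distance to it can be sketched as follows; the distance values here are illustrative, not the ones used in Flyer:

```csharp
using UnityEngine;

// A minimal sketch of enabling linear fog at runtime and shortening
// the camera's far clip plane to match, so distant objects fade out
// before they stop being rendered. Distances are example values.
public class RuntimeFog : MonoBehaviour
{
    void Start()
    {
        RenderSettings.fog = true;
        RenderSettings.fogMode = FogMode.Linear;
        RenderSettings.fogStartDistance = 40f;
        RenderSettings.fogEndDistance = 100f;

        // Geometry beyond the fog's end distance need not be drawn at all.
        Camera.main.farClipPlane = 100f;
    }
}
```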
The asteroids have a low vertex count, meaning Dynamic Batching can be used, reducing draw calls.
Of note here is the Flyer Vehicle albedo texture, which has been optimised to use a small colour swatch using a secondary UV channel in the Detail Map slot. This allows us to save on overall texture size.
Maze scene optimisations
The Maze environment is lightmapped, giving it better performance at runtime, especially on Gear VR with its lower-powered mobile hardware. Otherwise it is a simple scene with no realtime lighting and minimal effects.
Shooter180 (Target Gallery) and Shooter360 (Target Arena) scene optimisations
You should now have an understanding of some aspects of optimisation, how you can use built-in Unity tools to help analyse your performance, and some tips on how to get a better framerate.
The Oculus website has many useful resources that go into detail on this:
Finally, our Reading List article has links to other useful resources.