
XR graphics: best practices and workarounds

Last updated: January 2019

What you will get from this page: tons of tips on optimizing your shaders and on using the post-processing stack efficiently. This is a useful article to read if you are an experienced Unity programmer and are, or will be, developing XR (VR and/or AR) content.

Currently, Unity supports four rendering methods for XR: multi-pass, single-pass, the recently released single-pass instancing, and Android single-pass (which is similar to single-pass instancing).

Multi-pass
  • Requires going through the scene graph twice: for each of the two textures you have that represent the eyes, you render your objects/scenes separately.
  • Does not share GPU work across the textures, which makes it the least efficient rendering path; but it works on most devices by default and requires few changes.
  • Does share culling and part of the shadow generation rendering.
Single-pass
  • With this method, two textures are packed into one bigger texture (referred to as a double-wide texture).
  • For each object to be drawn, the viewport is set to the left half and the object is drawn; then the viewport switches to the right half and the object is drawn again.
  • Only goes through the scene graph once, so it’s much faster on the CPU. However, it requires a lot of extra GPU state changes to accomplish this.

[Figure: A simple example of single-pass stereo rendering.]

Single-pass instancing
  • Available in 2017.2 and up.
  • The two textures that represent the eyes are both contained in one texture array (two slices of the same texture array).
  • When the viewport is set, it gets applied to both slices automatically. When you issue a draw, Unity issues an instanced draw with twice the normal instance count; based on the instance ID, the shader determines which slice to render to.
  • Involves the same amount of GPU work, but fewer draw calls and less CPU work, so it’s significantly more performant.
  • Available for:
    • Windows 10 with D3D11 and recent graphics drivers.
    • HoloLens.
    • PS4.
  • Will revert to multi-pass if the extension is not supported on the target device.
  • DrawProceduralIndirect(...) requires manual changes.

[Figure: An example of single-pass instancing.]

Add these macros in your shaders if you use single-pass instancing

If you choose single-pass instancing, we highly recommend that you add the following macros to your vertex and fragment shaders. They’re very simple to add, and all of Unity’s built-in shaders have been updated to support them.

For vertex shaders:

[Figure series: the same vertex shader shown without the macros, then with the first, second, third, and fourth macros added in turn.]
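Collapsed into one listing, the fully macro-ized vertex shader looks like this (a minimal sketch matching Unity’s documented single-pass instancing setup; _MainTex and the surrounding ShaderLab boilerplate are assumed):

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
        UNITY_VERTEX_INPUT_INSTANCE_ID      // 1: lets the vertex stage read the instance ID
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        UNITY_VERTEX_OUTPUT_STEREO          // 2: adds the stereo output (eye/slice) data
    };

    v2f vert (appdata v)
    {
        v2f o;
        UNITY_SETUP_INSTANCE_ID(v);              // 3: derives unity_StereoEyeIndex from the instance ID
        UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(o); // 4: routes the draw to the correct texture array slice
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = TRANSFORM_TEX(v.uv, _MainTex);
        return o;
    }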

If you happen to need to use unity_StereoEyeIndex in the fragment shader, this is what you need to do:

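Per Unity’s single-pass instancing documentation, you add one more macro at the top of the fragment shader:

    fixed4 frag (v2f i) : SV_Target
    {
        UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i); // makes unity_StereoEyeIndex usable in the fragment stage
        // ... logic that reads unity_StereoEyeIndex goes here ...
        return fixed4(0, 0, 0, 1);
    }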

Some differences when using RenderScale vs. RenderViewportScale

Let’s say that you want to resize a texture from 1 to 0.5. If you do this using RenderScale, Unity deletes the original texture and creates a new texture that’s a quarter of the size of the original, since the 0.5 scale applies to each dimension.

This is an expensive operation to do dynamically while your game is running, which is why RenderViewportScale is a more efficient choice. It sets the viewport to a smaller portion of the same texture and renders only to that region.


This is much more efficient, but presents some issues since you are only using a portion of the texture (see the section further down with more tips on RenderViewportScale).

So, RenderScale actually deletes then creates new textures, while RenderViewportScale modifies the viewport. Some other differences to be aware of:

  • XRSettings.eyeTextureResolutionScale is for actual texture size while XRSettings.renderViewportScale applies to the viewport used for rendering.
  • XRSettings.renderViewportScale isn’t supported when using deferred rendering.
  • Scale is applied to both dimensions, so a value of 0.5 will lead to an image that is 25% of the original area.
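A minimal script sketch contrasting the two (both properties live in UnityEngine.XR.XRSettings; the class name here is ours):

    using UnityEngine;
    using UnityEngine.XR;

    public class ScaleExample : MonoBehaviour // hypothetical helper, not part of Unity
    {
        void Start()
        {
            // Expensive: destroys and reallocates the eye textures at half
            // width and half height (a quarter of the pixels). Avoid doing
            // this every frame.
            XRSettings.eyeTextureResolutionScale = 0.5f;
        }

        void Update()
        {
            // Cheap: keeps the textures and just renders to a smaller
            // viewport inside them; safe to vary dynamically.
            XRSettings.renderViewportScale = 0.5f;
        }
    }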
The post-processing stack in XR: challenges and solutions

The latest version of the excellent post-processing stack is here. Using the stack in XR content presents some challenges; this next section covers those, as well as the key solutions available to you now.

Challenge

To use many of the latest post-processing effects, you’ll need to render to an intermediate render target that’s based on the source target size and properties:

  • In the case of double-wide, the render texture is at least twice the width of the individual eye portion.
  • With single-pass instancing and the similar Android single-pass, we need to make a texture array, with one slice per eye.

Solution

  • To reduce guesswork and time in trying to generate the correct screen space texture format, Unity exposes XRSettings.eyeTextureDesc (in 2017.2 and up).
  • This returns a RenderTextureDescriptor that is sourced from the engine XRTexture manager, meaning it’s configured to match the engine-managed textures.
  • You can also query RenderTextureDescriptor from a source texture if that’s available. If you’re using the legacy MonoBehaviour.OnRenderImage infrastructure, for example, you’ll have that source texture and you can grab the descriptor right out of it.
  • By using eyeTextureDesc, you can replace all your RenderTexture allocations with the supplied RenderTextureDescriptor, which is more efficient than manually generating parameters.
  • RenderTextureDescriptor is a simpler API; it matches what the underlying APIs do. If you use explicit arguments, Unity packages them in a RenderTextureDescriptor, and passes that into the core engine (into RenderTexture.GetTemporary or CommandBuffer.GetTemporaryRT). So, you’re managing this on the script side instead of having the intermediate layer doing all the managing for you.
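For example (a sketch assuming the legacy OnRenderImage path; the class name and the extra passes are placeholders):

    using UnityEngine;
    using UnityEngine.XR;

    public class EyeTextureAllocator : MonoBehaviour // hypothetical name
    {
        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // Descriptor sourced from the engine's XR texture manager, so it
            // matches the engine-managed eye textures (double-wide, or a
            // texture array when instancing).
            RenderTextureDescriptor desc = XRSettings.eyeTextureDesc;
            RenderTexture temp = RenderTexture.GetTemporary(desc);

            Graphics.Blit(source, temp);
            // ... additional effect passes into temp would go here ...
            Graphics.Blit(temp, destination);

            RenderTexture.ReleaseTemporary(temp);
        }
    }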

Challenge

In XR, there’s a difference between the physical render texture size and logical screen/eye size.

Solution

  • For intermediate render texture allocation, you want to use eyeTextureDesc as this will allocate those physical textures.
  • If you have script or shader logic based on screen size (e.g. if you’re using the screen size parameters in a shader, or building a texture pyramid), you want to base this on the logical size. For that you can use XRSettings.eyeTextureHeight and XRSettings.eyeTextureWidth.
  • These represent “per eye” texture sizes, which are needed to know the size of the screen you are operating on.
  • Be aware that in the case of double-wide, eyeTextureWidth isn’t exactly half the width of eyeTextureDesc. This is because the descriptor, for the purposes of texture allocation, doubles the width and then pads it a bit to make sure it’s set up for mip mapping.

Before eyeTextureWidth was available, you had to determine the screen width yourself, for example by deriving it from the source render texture (and halving it in the double-wide case).

Now, you can simply source the screen width from XRSettings.eyeTextureWidth and use it for something like a screen ratio, as in the script sketch below:

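A minimal sketch (the class and the _ScreenRatio shader property are ours, purely for illustration):

    using UnityEngine;
    using UnityEngine.XR;

    public class ScreenSizeExample : MonoBehaviour // hypothetical name
    {
        void Update()
        {
            // Logical per-eye size; fall back to Screen when XR is off.
            int width  = XRSettings.enabled ? XRSettings.eyeTextureWidth  : Screen.width;
            int height = XRSettings.enabled ? XRSettings.eyeTextureHeight : Screen.height;

            // Use it for screen-size-based logic, e.g. a screen ratio.
            float screenRatio = (float)width / height;
            Shader.SetGlobalFloat("_ScreenRatio", screenRatio); // hypothetical property
        }
    }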

Challenge

How do you ensure your texture coordinates are correct for stereo, and are sampling from, and outputting to, the correct eye?

If you are sampling from a double-wide texture, you need to make sure your texture coordinates are sampling from the correct half of the texture (each half corresponds to an eye).

Solution

You’ll need a texture coordinate corrector per eye, and for this, Unity provides two solutions:

1. TRANSFORM_TEX macro

  • This is a macro that works with _ST properties in ShaderLab. It’s most commonly used with _MainTex_ST.
  • The _ST property is updated before each eye in single-pass double-wide mode, transforming your texture coordinates to the correct range for the eye. It also handles RenderViewportScale automatically for you.

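Usage looks like standard texture-coordinate transformation in the vertex shader (TRANSFORM_TEX is defined in UnityCG.cginc):

    // TRANSFORM_TEX applies _MainTex_ST, which Unity rewrites per eye in
    // single-pass double-wide, and which also accounts for RenderViewportScale.
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);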

  • Unity will populate _ST shader properties if you’re using Graphics.Blit or CommandBuffer.Blit:
    • _MainTex_ST is always populated.
    • Other _ST properties will be filled in if the texture is an XR texture (VRTextureUsage!=None).
  • This macro won’t work with custom blit procedures, such as BlitFullScreenTriangle in the latest post-processing stack.

2. unity_StereoScaleOffset and UnityStereoTransformScreenSpaceTex/TransformStereoScreenSpaceTex helper functions

unity_StereoScaleOffset is an array of float4s, declared as part of the single-pass constant block (UnityStereoGlobals) and indexed with unity_StereoEyeIndex. Two helper functions apply it for you:

  • UnityStereoTransformScreenSpaceTex
  • TransformStereoScreenSpaceTex

In the case of double-wide, unity_StereoEyeIndex is bound in a constant buffer that is updated before each eye’s draws: the left-eye value is put in the constant buffer and the left eye is drawn; then the right-eye value is put in the constant buffer and the right eye is drawn. For every draw, the shader can look in the constant buffer and find the correct value of unity_StereoEyeIndex.


Once we know that unity_StereoEyeIndex is populated correctly, we can use it with confidence that it selects the right element out of unity_StereoScaleOffset.

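A simplified sketch of what the helper does (the shipped version in UnityCG.cginc also saturates the input coordinate):

    float2 UnityStereoTransformScreenSpaceTex(float2 uv)
    {
        // Index this eye's scale/offset and remap the [0,1] screen-space
        // coordinate into its half of the double-wide texture.
        float4 scaleOffset = unity_StereoScaleOffset[unity_StereoEyeIndex];
        return uv * scaleOffset.xy + scaleOffset.zw;
    }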

There are a few drawbacks to using unity_StereoScaleOffset directly:

  • It is only declared for single-pass (double-wide), so referencing it directly will cause shader compile errors in multi-pass and monoscopic rendering; use the helper methods, which compile in every mode.
  • It does not take RenderViewportScale into account.

Challenge

How do you make sure that your texture arrays are declared correctly, and that you’re sampling from the correct slice?

Solution

Use UNITY_DECLARE_SCREENSPACE_TEXTURE. Shader languages treat texture arrays differently than ‘normal’ textures, and single-pass instancing and Android single-pass use texture arrays, so we need to make sure that we declare our textures correctly.

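For example, declaring the source texture with the macro:

    // Expands to a texture array declaration when single-pass instancing
    // (or Android single-pass) is active, and to a plain 2D texture otherwise.
    UNITY_DECLARE_SCREENSPACE_TEXTURE(_MainTex);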

Similar to double-wide, we need to sample from the correct eye portion; here, the eye portions are the slices of the texture array. We source the correct slice from unity_StereoEyeIndex by using the macro UNITY_SAMPLE_SCREENSPACE_TEXTURE.

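A minimal fragment shader sketch putting both macros together:

    fixed4 frag (v2f i) : SV_Target
    {
        UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(i);
        // Under instancing this samples the slice selected by
        // unity_StereoEyeIndex; otherwise it is a plain 2D sample.
        return UNITY_SAMPLE_SCREENSPACE_TEXTURE(_MainTex, i.uv);
    }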

Managing RenderViewportScale: a few points to keep in mind

  • unity_StereoScaleOffset offers no RenderViewportScale support.
  • Most post-processing effects use CommandBuffer.Blit/Graphics.Blit, meaning they can use _MainTex_ST (and other _ST properties) to support RenderViewportScale.
  • But with v2 of Unity’s post-processing stack, you cannot use these methods. To work around this you’ll need to roll your own support, which is pretty straightforward because v2 of the stack uses its own version of the shader infrastructure, making it easy to override. Here’s what you do:

    On the shader side:

    • Create a new constant: Declare "float rvsGlobal" inside xrLib.hlsl
    • Re-work TransformStereoScreenSpaceTex to utilize rvsGlobal

    On the Script side:

    • Bind RenderViewportScale value as a global property from script
    • Use Shader.SetGlobalFloat

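    A sketch of the script side (the class name is ours; "rvsGlobal" matches the constant declared in the shader-side step above):

    using UnityEngine;
    using UnityEngine.XR;

    public class BindRenderViewportScale : MonoBehaviour // hypothetical name
    {
        void Update()
        {
            // Bind the current RenderViewportScale where all shaders can see it.
            Shader.SetGlobalFloat("rvsGlobal", XRSettings.renderViewportScale);
        }
    }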

Stereo clamping

If you are doing neighborhood screen-space samples in a shader, you want to make sure you don’t accidentally sample the wrong eye in single-pass double-wide, or outside the valid RenderViewportScale area. That’s where UnityStereoClamp comes in: it simply clamps the coordinate samples to the correct portion of the texture.

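A usage sketch, assuming single-pass double-wide (check the exact UnityStereoClamp signature in your version of UnityCG.cginc; _MainTex_TexelSize stands in for whatever offset your effect uses):

    // Neighborhood sample: offset from the interpolated coordinate, then
    // clamp back into this eye's valid region so we never read the other
    // eye (or outside the RenderViewportScale area).
    float2 neighborUV = i.uv + float2(_MainTex_TexelSize.x, 0.0);
    neighborUV = UnityStereoClamp(neighborUV, unity_StereoScaleOffset[unity_StereoEyeIndex]);
    half4 neighbor = tex2D(_MainTex, neighborUV);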

Usage is very simple, though manual inspection is required to find neighborhood sampling.

Wherever you are offsetting from the interpolator-generated texture coordinate, you probably need to use UnityStereoClamp().

  • Be careful of artifacts caused by clamping making neighboring samples land on the same coordinate.
  • Don’t use this in the vertex shader.
A few more tips before we go:
  • Make sure you think about whether the post-processing effect you are using makes sense for XR.
    • Most temporal effects add blurriness and latency.
    • Depth of field is a property of physical cameras; simulating it in a headset, which stands in for the human eye, will most likely cause nausea.
  • If you are using multi-pass and keep history textures, you need one set of history textures per eye.
    • A single history set could (and will) lead to sharing history between the eyes!
    • This works automatically with single-pass.
  • If you’re running into issues, try the different stereo rendering modes.
    • The nature of the artifact can provide hints about where things are going wrong.