Interaction in VR

Checked with version: 5.3

Difficulty: Basic

This content has been deprecated: the steps that follow may be out of date, but some of the principles are still valid. Please view our documentation and our getting started resources.

Overview

In VR, we frequently need to activate an object that a user is looking at. For the VRSamples, we have built a simple, extendable, lightweight system allowing users to interact with objects. This consists of three main scripts: VREyeRaycaster, VRInput, and VRInteractiveItem - a short description of these classes is below. The source code is also commented.

VREyeRaycaster

This script needs to be placed on the Main Camera. Every Update(), the script casts a ray forwards using Physics.Raycast to see if it hits any colliders. The raycast can also exclude certain layers - depending on your scene, you may wish to move all interactive objects onto a separate layer for performance reasons.

If a collider has been hit by the raycast, this script attempts to find a VRInteractiveItem component on the GameObject:

Code snippet

VRInteractiveItem interactible = hit.collider.GetComponent<VRInteractiveItem>();    //attempt to get the VRInteractiveItem on the hit object

From this we can determine whether the user is looking at an object, or has stopped looking at an object. If the user has started or stopped looking at an object, we can do something with it, such as call a method.
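Putting this together, a condensed gaze raycaster might look something like the following. This is a simplified sketch rather than the full VREyeRaycaster source, and it assumes that VRInteractiveItem exposes Over() and Out() methods that raise the corresponding events:

Code snippet

using UnityEngine;
using VRStandardAssets.Utils;

// Simplified gaze raycaster, loosely based on VREyeRaycaster.
// Assumes VRInteractiveItem exposes Over() and Out() methods that raise its events.
public class SimpleGazeRaycaster : MonoBehaviour
{
    [SerializeField] private Transform m_Camera;            // Usually the Main Camera's transform.
    [SerializeField] private LayerMask m_ExclusionLayers;   // Layers to ignore when raycasting.
    [SerializeField] private float m_RayLength = 500f;      // How far to cast the ray.

    private VRInteractiveItem m_CurrentInteractible;        // The item the user is currently looking at.

    private void Update()
    {
        Ray ray = new Ray(m_Camera.position, m_Camera.forward);
        RaycastHit hit;

        if (Physics.Raycast(ray, out hit, m_RayLength, ~m_ExclusionLayers))
        {
            VRInteractiveItem interactible = hit.collider.GetComponent<VRInteractiveItem>();

            // If the gaze has moved to a different item, notify the old and new items.
            if (interactible != m_CurrentInteractible)
            {
                if (m_CurrentInteractible != null)
                    m_CurrentInteractible.Out();

                if (interactible != null)
                    interactible.Over();

                m_CurrentInteractible = interactible;
            }
        }
        else if (m_CurrentInteractible != null)
        {
            // Nothing was hit: the user has stopped looking at the previous item.
            m_CurrentInteractible.Out();
            m_CurrentInteractible = null;
        }
    }
}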

VRInput

VRInput is a simple class that determines whether swipes, taps, or double-taps have occurred on the Gear VR - or the equivalent control setup for PC input when using a DK2.

You can subscribe to events on VRInput directly:

Code snippet

public event Action<SwipeDirection> OnSwipe;                // Called every frame passing in the swipe, including if there is no swipe.
public event Action OnClick;                                // Called when Fire1 is released and it's not a double click.
public event Action OnDown;                                 // Called when Fire1 is pressed.
public event Action OnUp;                                   // Called when Fire1 is released.
public event Action OnDoubleClick;                          // Called when a double click is detected.
public event Action OnCancel;                               // Called when Cancel is pressed.

For more information on subscribing to events, please see our Events tutorial.
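As a quick illustration, a component could subscribe to some of these events like so. This is a minimal sketch; the m_VRInput reference would be assigned in the Inspector:

Code snippet

using UnityEngine;
using VRStandardAssets.Utils;

// Minimal example of reacting to VRInput events.
public class InputLogger : MonoBehaviour
{
    [SerializeField] private VRInput m_VRInput;     // Assigned in the Inspector.

    private void OnEnable()
    {
        m_VRInput.OnClick += HandleClick;
        m_VRInput.OnSwipe += HandleSwipe;
    }

    private void OnDisable()
    {
        m_VRInput.OnClick -= HandleClick;
        m_VRInput.OnSwipe -= HandleSwipe;
    }

    private void HandleClick()
    {
        Debug.Log("Fire1 clicked");
    }

    private void HandleSwipe(VRInput.SwipeDirection direction)
    {
        // OnSwipe is called every frame, so ignore frames with no swipe.
        if (direction != VRInput.SwipeDirection.NONE)
            Debug.Log("Swipe: " + direction);
    }
}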

VRInteractiveItem

This is a component you can add to any GameObject that you would like the user to interact with in VR. It requires a collider on the object it is attached to.

There are six events you can subscribe to:

Code snippet

public event Action OnOver;             // Called when the gaze moves over this object
public event Action OnOut;              // Called when the gaze leaves this object
public event Action OnClick;            // Called when click input is detected whilst the gaze is over this object.
public event Action OnDoubleClick;      // Called when double click input is detected whilst the gaze is over this object.
public event Action OnUp;               // Called when Fire1 is released whilst the gaze is over this object.
public event Action OnDown;             // Called when Fire1 is pressed whilst the gaze is over this object.

There is also one boolean that can be used to check whether the gaze is currently over this object:

Code snippet

public bool IsOver
{
    get { return m_IsOver; }              // Is the gaze currently over this object?
}
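For example, another script could poll this property instead of subscribing to the events. This is a hypothetical fragment; m_InteractiveItem is assumed to be a VRInteractiveItem reference assigned in the Inspector:

Code snippet

// Hypothetical fragment: polling IsOver instead of subscribing to the events.
// m_InteractiveItem is a VRInteractiveItem reference assigned in the Inspector.
private void Update()
{
    if (m_InteractiveItem.IsOver && Input.GetButtonDown("Fire1"))
    {
        Debug.Log("Fire1 pressed while gazing at this item");
    }
}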

You can then create your own script that will react to these events. Here’s a very simple example, using some of these events:

Code snippet

using UnityEngine;
using VRStandardAssets.Utils;

namespace VRStandardAssets.Examples
{
    // This script is a simple example of how an interactive item can
    // be used to change things on gameobjects by handling events.
    public class ExampleInteractiveItem : MonoBehaviour
    {
        [SerializeField] private Material m_NormalMaterial;                
        [SerializeField] private Material m_OverMaterial;                  
        [SerializeField] private Material m_ClickedMaterial;               
        [SerializeField] private Material m_DoubleClickedMaterial;         
        [SerializeField] private VRInteractiveItem m_InteractiveItem;
        [SerializeField] private Renderer m_Renderer;


        private void Awake ()
        {
            m_Renderer.material = m_NormalMaterial;
        }


        private void OnEnable()
        {
            m_InteractiveItem.OnOver += HandleOver;
            m_InteractiveItem.OnOut += HandleOut;
            m_InteractiveItem.OnClick += HandleClick;
            m_InteractiveItem.OnDoubleClick += HandleDoubleClick;
        }


        private void OnDisable()
        {
            m_InteractiveItem.OnOver -= HandleOver;
            m_InteractiveItem.OnOut -= HandleOut;
            m_InteractiveItem.OnClick -= HandleClick;
            m_InteractiveItem.OnDoubleClick -= HandleDoubleClick;
        }


        //Handle the Over event
        private void HandleOver()
        {
            Debug.Log("Show over state");
            m_Renderer.material = m_OverMaterial;
        }


        //Handle the Out event
        private void HandleOut()
        {
            Debug.Log("Show out state");
            m_Renderer.material = m_NormalMaterial;
        }


        //Handle the Click event
        private void HandleClick()
        {
            Debug.Log("Show click state");
            m_Renderer.material = m_ClickedMaterial;
        }


        //Handle the DoubleClick event
        private void HandleDoubleClick()
        {
            Debug.Log("Show double click");
            m_Renderer.material = m_DoubleClickedMaterial;
        }
    }
}

To see a basic example of this, take a look at the InteractiveItem scene in VRSampleScenes/Scenes/Examples/

SelectionRadial and SelectionSlider

We make use of both a radial selection bar (SelectionRadial) and a selection slider (SelectionSlider) to allow the user to hold Fire1 down to confirm an interaction:

SelectionRadial

SelectionSlider

As the input is held down, the selection bar fills, and dispatches the OnSelectionComplete or OnBarFilled event when full. The code for this can be found in SelectionRadial.cs and SelectionSlider.cs, and is thoroughly commented. From a UX standpoint, we want the user to know what they are doing and to feel in control at all times; while a timed 'held input' style of confirmation compromises on immediacy, it ensures that nothing can be selected by mistake.
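In outline, the 'fill while held' behaviour can be driven by a coroutine. The following is a simplified fragment rather than the sample's exact code; it assumes a UnityEngine.UI Image whose Image Type is set to Filled, plus the usual using directives for UnityEngine.UI, System, and System.Collections:

Code snippet

// Simplified sketch of a 'fill while held' confirmation, assuming a UI Image set to a filled Image Type.
[SerializeField] private Image m_SelectionImage;
[SerializeField] private float m_SelectionDuration = 2f;

public event Action OnSelectionComplete;

private IEnumerator FillSelection()
{
    float timer = 0f;

    // Keep filling while the input is held and the duration has not elapsed.
    while (timer < m_SelectionDuration && Input.GetButton("Fire1"))
    {
        m_SelectionImage.fillAmount = timer / m_SelectionDuration;
        timer += Time.deltaTime;
        yield return null;
    }

    // If the input was held for the full duration, the selection is complete.
    if (timer >= m_SelectionDuration && OnSelectionComplete != null)
        OnSelectionComplete();

    // Reset the visual fill.
    m_SelectionImage.fillAmount = 0f;
}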

Interaction Examples in VRSampleScenes

Now let’s look at some examples of interaction that we have included in our VR Samples project. We will discuss some of the interactions each scene uses, and how they are implemented.

Interactions in the Menu scene

Each of the menu screens has several components. Of particular interest here are the MenuButton, VRInteractiveItem, and Mesh Collider.

Menu

The MenuButton component subscribes to the OnOver and OnOut events on the VRInteractiveItem component, so that when the reticle is over the menu screen, the selection radial appears, and when looking away from the menu item, the selection radial disappears. When the selection radial is visible, and Fire1 is held down, the radial fills:

SelectionRadial

This class also subscribes to the OnSelectionComplete event on SelectionRadial, so that when the radial is full, HandleSelectionComplete is called, which fades the camera out and loads the selected level.

Code snippet

private void OnEnable ()
{
    m_InteractiveItem.OnOver += HandleOver;
    m_InteractiveItem.OnOut += HandleOut;
    m_SelectionRadial.OnSelectionComplete += HandleSelectionComplete;
}


private void OnDisable ()
{
    m_InteractiveItem.OnOver -= HandleOver;
    m_InteractiveItem.OnOut -= HandleOut;
    m_SelectionRadial.OnSelectionComplete -= HandleSelectionComplete;
}
      

private void HandleOver()
{
    // When the user looks at the rendering of the scene, show the radial.
    m_SelectionRadial.Show();

    m_GazeOver = true;
}


private void HandleOut()
{
    // When the user looks away from the rendering of the scene, hide the radial.
    m_SelectionRadial.Hide();

    m_GazeOver = false;
}


private void HandleSelectionComplete()
{
    // If the user is looking at the rendering of the scene when the radial's selection finishes, activate the button.
    if(m_GazeOver)
        StartCoroutine (ActivateButton());
}


private IEnumerator ActivateButton()
{
    // If the camera is already fading, ignore.
    if (m_CameraFade.IsFading)
        yield break;

    // If anything is subscribed to the OnButtonSelected event, call it.
    if (OnButtonSelected != null)
        OnButtonSelected(this);

    // Wait for the camera to fade out.
    yield return StartCoroutine(m_CameraFade.BeginFadeOut(true));

    // Load the level.
    SceneManager.LoadScene(m_SceneToLoad, LoadSceneMode.Single);
}

Let’s look at some examples of the Selection Radial; note the pink elements in the center of the screenshots below:

Reticle only

Radial Off

Empty Selection Radial visible, as the user is looking at the menu screen

Radial On

Selection Radial filling (the user is looking at the menu screen, and holding Fire1 input)

Radial Filling

We attempt to maintain this style throughout the sample project with bars and radials that fill in a consistent amount of time. We recommend that you think about this when working on your own VR projects - as consistency in UX is reassuring to the user, and will help them acclimatise to what is likely to be a new medium for them.

Interactions in the Maze scene

The Maze game gives an example of table-top style interaction, in which you guide a character to the exit, avoiding a turret which can be deactivated via a switch (spoilers!).

When selecting a destination for the character, a destination marker will appear, along with a trail to show you the route they will take. You can rotate the view by swiping on the touchpad, pressing the arrow keys, or by using the left stick on a gamepad.

Maze Overview

The MazeFloor object has a MeshCollider and VRInteractiveItem attached, to allow interaction in VR:

Maze Floor

The MazeCourse GameObject is the parent which contains the MazeFloor and MazeWalls GameObjects, which in turn contain the geometry used for the layout of the maze.

Maze Course

MazeCourse has the MazeTargetSetting script attached, which has a reference to the VRInteractiveItem component on MazeFloor:

MazeTargetSetting

MazeTargetSetting subscribes to the OnDoubleClick event on VRInteractiveItem, which then dispatches the OnTargetSet event. This event passes the Transform of the reticle as a parameter:

Code snippet

public event Action<Transform> OnTargetSet;                     // This is triggered when a destination is set.

   
private void OnEnable()
{
    m_InteractiveItem.OnDoubleClick += HandleDoubleClick;
}


private void OnDisable()
{
    m_InteractiveItem.OnDoubleClick -= HandleDoubleClick;
}


private void HandleDoubleClick()
{
    // If target setting is active and there are subscribers to OnTargetSet, call it.
    if (m_Active && OnTargetSet != null)
        OnTargetSet (m_Reticle.ReticleTransform);
}

Both the Player component on the MazeCharacter GameObject and the DestinationMarker component on the MazeDestinationMarkerGUI GameObject subscribe to this event, and react accordingly.

The character uses the Nav Mesh systems to pathfind around the Maze. The Player component features the HandleSetTarget function which sets the destination of the Nav Mesh Agent to the Reticle’s Transform position, and updates the Agent trail - the visual rendering of the character’s path:

Code snippet

private void HandleSetTarget(Transform target)
{
    // If the game isn't over set the destination of the AI controlling the character and the trail showing its path.
    if (m_IsGameOver)
        return;
            
    m_AiCharacter.SetTarget(target.position);
    m_AgentTrail.SetDestination();
}

DestinationMarker moves the marker to the Reticle’s Transform position:

Code snippet

private void HandleTargetSet(Transform target)
{
    // When the target is set show the marker.
    Show();

    // Set the marker's position to the target position.
    transform.position = target.position;

    // Play the audio.
    m_MarkerMoveAudio.Play();

    // Play the animation on whichever layer it is on, with no time offset.
    m_Animator.Play(m_HashMazeNavMarkerAnimState, -1, 0.0f);
}

You can see the reticle, the destination marker, the player, and the trail in action here:

Maze

The switch in the Maze is also an example of interacting with objects in VR. This uses a Collider, as well as the VRInteractiveItem, and SelectionSlider classes.

SelectionSlider

As shown above with other interactive objects, the SelectionSlider script listens to the events dispatched by VRInteractiveItem and VRInput:

Code snippet

private void OnEnable ()
{
    m_VRInput.OnDown += HandleDown;
    m_VRInput.OnUp += HandleUp;

    m_InteractiveItem.OnOver += HandleOver;
    m_InteractiveItem.OnOut += HandleOut;
}
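The corresponding handlers then combine the two sources of input, so the bar only fills while the gaze is over the slider and the button is held. A simplified sketch of that logic (not the exact sample code) might look like this:

Code snippet

// Simplified sketch (not the exact sample code): the handlers record the state of
// both inputs, so the bar only fills while the gaze is over the slider AND Fire1 is held.
private bool m_GazeOver;
private bool m_ButtonHeld;
private float m_Timer;

private void HandleOver() { m_GazeOver = true; }
private void HandleOut()  { m_GazeOver = false; }
private void HandleDown() { m_ButtonHeld = true; }
private void HandleUp()   { m_ButtonHeld = false; }

private void Update()
{
    // Advance the fill timer only while both conditions hold; otherwise reset it.
    m_Timer = (m_GazeOver && m_ButtonHeld) ? m_Timer + Time.deltaTime : 0f;
}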

Interactions in the Flyer scene

The Flyer scene is a timed ‘endless flyer’ where the player guides the ship by looking around, shoots with the Fire1 input, and scores points by hitting asteroids and by guiding the ship through gates in the sky - akin to the gameplay of Pilotwings or Star Fox, for example.

In terms of interaction, the Flyer is a simpler use case, with FlyerLaserController subscribing to the OnDown event of VRInput to fire lasers:

Code snippet

private void OnEnable()
{
    m_VRInput.OnDown += HandleDown;
}


private void HandleDown()
{
    // If the game isn't running return.
    if (!m_GameController.IsGameRunning)
        return;

    // Fire laser from each position.
    SpawnLaser(m_LaserSpawnPosLeft);
    SpawnLaser(m_LaserSpawnPosRight);
}

Interaction in the Shooter180 and Shooter360 scenes (Target Gallery / Target Arena)

VR Samples features two target-shooting games: the Target Gallery, a 180-degree view down a corridor of potential targets, and the Target Arena, a 360-degree shooter where the player is surrounded by potential targets in an X-Men Cerebro-style environment.

Each of the spawning targets in the Shooter games has a Collider, VRInteractiveItem and ShootingTarget.

Shooting Target

ShootingTarget

The ShootingTarget component subscribes to the OnDown event on VRInteractiveItem to determine whether the target has been shot. This method is suitable for instantaneous shots; for a game using projectiles we’d need to implement a different solution.
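In outline, the subscription follows the same pattern as the earlier examples. The handler body below is only a placeholder sketch; in the sample, this is roughly where the target's hit reaction and scoring would be triggered:

Code snippet

private void OnEnable()
{
    m_InteractiveItem.OnDown += HandleDown;
}


private void OnDisable()
{
    m_InteractiveItem.OnDown -= HandleDown;
}


private void HandleDown()
{
    // Fire1 was pressed while the gaze was over this target, so treat it as a hit.
    // (Placeholder: in the sample this is where the hit reaction and scoring would happen.)
    Debug.Log("Target hit");
}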

You should now have an overview of the basic VR interaction components and how they are used within the VR Samples project. Let’s now discuss how gaze and reticles work in our VR Samples.

Gaze

Detecting where the user is looking is very important in VR - whether that’s allowing a user to interact with an object, triggering an animation, or firing a bullet at a target for example. We refer to the act of looking in VR as the ‘gaze’, and we’ll be using this term a lot in our articles on VR.

As most HMDs do not yet support eye tracking, we can only estimate the user’s gaze. Since the distortion of the lenses means that users generally look roughly straight ahead, there’s a simple solution: as covered in the overview, we cast a ray forward from the center of the camera and find what it collides with. Of course, this means that anything that can be collided with (or interacted with via gaze) must have a Collider component attached.

The Reticle

A reticle can help to indicate the center of the user’s vision. The style of the reticle could be a simple dot, or perhaps a crosshair, depending on your project.

In traditional 3D games a reticle is often set at a fixed point in space, usually the center of the screen. Positioning a reticle in VR is more complex: as a user looks around a VR environment, the eyes converge on objects closer to the camera. If the reticle is at a fixed position, the user will see double. You can simulate this by holding a finger in front of your eyes and focussing on objects closer and further away - if you focus on your finger, the background will double, and vice versa. This is known as voluntary diplopia.

To avoid the user seeing two reticles as they look around the environment and focus on objects at various distances, we need to position the reticle at the same point in 3D space as the surface of the object they’re currently looking at.

Simply positioning the reticle at that point in space will mean that the reticle will be tiny at large distances, and large when up close. To ensure the reticle stays the same size regardless of distance, we need to scale it with distance to the camera.

To illustrate this, here are some examples of the reticle at different distances and scales, taken from the Examples/Reticle scene.

Reticle positioned on an object close to the camera:

Reticle close

Reticle positioned on an object further away:

Reticle far

Reticle positioned in the far distance:

Reticle distance

Due to the scaling and positioning we use, the reticle appears to be the same size to the user, regardless of distance.

When no object is hit, we simply place the reticle at a predefined distance. In an outdoor environment this could be just in front of the camera’s Far clip plane, or in an indoor scene this could be much closer.
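A sketch of that default positioning might look like the following; m_CameraTransform, m_DefaultDistance, m_OriginalScale, and m_OriginalRotation are assumed fields, similar to those used by the sample's Reticle script:

Code snippet

// Sketch of positioning the reticle when the gaze ray hits nothing: place it at a
// default distance along the camera's forward vector and scale it to match.
// m_CameraTransform and m_DefaultDistance are assumed fields, similar to the sample's Reticle.
public void SetPosition ()
{
    m_ReticleTransform.position = m_CameraTransform.position + m_CameraTransform.forward * m_DefaultDistance;
    m_ReticleTransform.localScale = m_OriginalScale * m_DefaultDistance;
    m_ReticleTransform.localRotation = m_OriginalRotation;
}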

Rendering the reticle over other GameObjects

If the reticle is placed at the same position as an object, then the reticle may clip into nearby objects:

Reticle clipped

To solve this, we need to ensure that the reticle is rendered on top of everything else in the scene. With VR Samples, we have supplied a shader called UIOverlay.shader, based on the existing Unity “UI/Unlit/Text” shader. When selecting a shader for a material, it can be found under “UI/Overlay”.

This works with both UI elements and text, and will draw on top of other objects in the scene:

Reticle not clipped
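If you prefer to assign the shader from code rather than in the Inspector, a minimal sketch could look like this (assuming the shader’s name matches the “UI/Overlay” menu path above):

Code snippet

// Sketch: assign the overlay shader to the reticle's material from code so it draws on
// top of scene geometry. Assumes the shader's name is "UI/Overlay", matching the menu path above.
Renderer reticleRenderer = GetComponent<Renderer>();
reticleRenderer.material.shader = Shader.Find("UI/Overlay");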

Aligning the Reticle to GameObjects in the scene

Finally, we might want the rotation of the reticle to match the normal of the object it’s hit. We can do this by using RaycastHit.normal. Here’s how it’s set in Reticle:

Code snippet

public void SetPosition (RaycastHit hit)
{
    m_ReticleTransform.position = hit.point;
    m_ReticleTransform.localScale = m_OriginalScale * hit.distance;

    // If the reticle should use the normal of what has been hit...
    if (m_UseNormal)
        // ... set its rotation so its forward vector faces along the normal.
        m_ReticleTransform.rotation = Quaternion.FromToRotation (Vector3.forward, hit.normal);
    else
        // However if it isn't using the normal then its local rotation should be as it was originally.
        m_ReticleTransform.localRotation = m_OriginalRotation;
}

You can see this in action in the Maze scene.

Here’s the reticle matching the normal of the wall:

Reticle on wall

Here's the reticle matching the normal of the floor:

Reticle on floor

We’ve also included a sample Reticle script. This works with the VREyeRaycaster to position the reticle at the correct point in the scene, and optionally conforms to the normal of the object that’s hit.

Reticle

All of the above can be viewed in the Reticle scene in VRSampleScenes/Scenes/Examples/

Rotation and Position of the head in VR

The obvious use of rotation and position of the HMD is for looking around the environment, but it can be useful to make objects react to these values.

To access these values we need to use the VR.InputTracking class, and specify which VRNode we need to access. To get the rotation of the head, we will want to use VRNode.Head, rather than the individual eyes. For more information, see Camera Nodes in the Getting Started with VR Development article.

An example of using rotation as a type of input might be to subtly rotate a menu or other object based on the head rotation. You can see an example of this in the VRSampleScenes/Examples/Rotation scene, and the ExampleRotation script:

Code snippet

// Store the Euler rotation of the gameobject.
var eulerRotation = transform.rotation.eulerAngles;

// Set the rotation to be the same as the user's in the y axis.
eulerRotation.x = 0;
eulerRotation.z = 0;
eulerRotation.y = InputTracking.GetLocalRotation(VRNode.Head).eulerAngles.y;

// Apply the modified rotation back to the gameobject.
transform.rotation = Quaternion.Euler(eulerRotation);

You can see how the object rotates depending on where the user is looking:

Rotation left

Rotation right

In our example Flyer game, you can see the spaceship change position based on the rotation of the head in FlyerMovementController:

Code snippet

Quaternion headRotation = InputTracking.GetLocalRotation (VRNode.Head);
m_TargetMarker.position = m_Camera.position + (headRotation * Vector3.forward) * m_DistanceFromCamera;

Flyer
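The ship itself can then ease towards that marker each frame. The following is only a plausible sketch of such a follow behaviour, not the sample's exact FlyerMovementController code, and m_Flyer, m_TargetMarker, and m_Damping are assumed fields:

Code snippet

// Plausible sketch only (not the sample's exact code): ease the ship towards the
// target marker each frame so it lags slightly behind the head movement.
[SerializeField] private Transform m_Flyer;             // The ship's transform.
[SerializeField] private Transform m_TargetMarker;      // The marker positioned from head rotation above.
[SerializeField] private float m_Damping = 0.5f;        // Assumed follow speed.

private void Update()
{
    m_Flyer.position = Vector3.Lerp(m_Flyer.position, m_TargetMarker.position, m_Damping * Time.deltaTime);
}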

Touchpad and Keyboard Interactions during VR gameplay

The Gear VR has a touchpad on the side of the HMD. This appears as a mouse to Unity, and so we can use the following:

Input.mousePosition

Input.GetMouseButtonDown

Input.GetMouseButtonUp
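As a rough illustration of how far these mouse APIs can take you, a naive tap/swipe check could compare the cursor position when the button is pressed with its position on release. This is a simplified fragment with an assumed pixel threshold; VRInput's implementation is more robust:

Code snippet

// Very rough sketch of detecting a horizontal swipe or a tap from mouse input
// (the Gear VR touchpad shows up as a mouse). VRInput's implementation is more robust.
private Vector2 m_MouseDownPosition;
private const float SwipeThreshold = 100f;      // In pixels; an assumed value.

private void Update()
{
    if (Input.GetMouseButtonDown(0))
        m_MouseDownPosition = Input.mousePosition;

    if (Input.GetMouseButtonUp(0))
    {
        float deltaX = Input.mousePosition.x - m_MouseDownPosition.x;

        if (deltaX > SwipeThreshold)
            Debug.Log("Swipe right");
        else if (deltaX < -SwipeThreshold)
            Debug.Log("Swipe left");
        else
            Debug.Log("Tap");
    }
}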

With Gear VR we also need to get swipe data from the touchpad. We’ve included a sample script called VRInput which deals with swipes, taps, and double-taps. It also allows directional arrows and Left-Ctrl (Fire1 in Unity default input terms) on a keyboard (or Left Mouse button) to trigger swipes and taps.

While in the Unity Editor, we may be testing Gear VR content using the DK2, as there is currently no way to test content from Unity directly on the Gear VR. As the Gear VR touchpad is effectively a mouse, we can simply use the mouse to simulate the input. Since the keyboard can be easier to locate than the mouse while wearing an HMD, for convenience VRInput also handles directional arrow presses as swipes, and Left-Ctrl (Fire1) as taps.

To allow basic functionality with gamepads, the left stick acts as swipes, and one of the buttons acts as a tap.

An example of subscribing to swipes can be found in VRSampleScenes/Scenes/Examples/Touchpad

Below is the ExampleTouchpad script, which applies AddTorque to a Rigidbody depending on the swipe direction, allowing the object to spin.

Code snippet

using UnityEngine;
using VRStandardAssets.Utils;

namespace VRStandardAssets.Examples
{
    // This script shows a simple example of how
    // swipe controls can be handled.
    public class ExampleTouchpad : MonoBehaviour
    {
        [SerializeField] private float m_Torque = 10f;
        [SerializeField] private VRInput m_VRInput;                                        
        [SerializeField] private Rigidbody m_Rigidbody;                                    


        private void OnEnable()
        {
            m_VRInput.OnSwipe += HandleSwipe;
        }


        private void OnDisable()
        {
            m_VRInput.OnSwipe -= HandleSwipe;
        }


        //Handle the swipe events by applying AddTorque to the Rigidbody
        private void HandleSwipe(VRInput.SwipeDirection swipeDirection)
        {
            switch (swipeDirection)
            {
                case VRInput.SwipeDirection.NONE:
                    break;
                case VRInput.SwipeDirection.UP:
                    m_Rigidbody.AddTorque(Vector3.right * m_Torque);
                    break;
                case VRInput.SwipeDirection.DOWN:
                    m_Rigidbody.AddTorque(-Vector3.right * m_Torque);
                    break;
                case VRInput.SwipeDirection.LEFT:
                    m_Rigidbody.AddTorque(Vector3.up * m_Torque);
                    break;
                case VRInput.SwipeDirection.RIGHT:
                    m_Rigidbody.AddTorque(-Vector3.up * m_Torque);
                    break;
            }
        }
    }
}

Examples of VRInput within VR Samples

As described above, all of our sample games use VRInput to handle the touchpad and keyboard. The camera in Maze also reacts to swipes:

Maze

In this scene, CameraOrbit listens for swipes, allowing the viewpoint to be changed:

Code snippet

private void OnEnable ()
{
    m_VrInput.OnSwipe += HandleSwipe;
}


private void HandleSwipe(VRInput.SwipeDirection swipeDirection)
{
    // If the game isn't playing or the camera is fading, return and don't handle the swipe.
    if (!m_MazeGameController.Playing)
        return;

    if (m_CameraFade.IsFading)
        return;

    // Otherwise start rotating the camera with either a positive or negative increment.
    switch (swipeDirection)
    {
        case VRInput.SwipeDirection.LEFT:
            StartCoroutine(RotateCamera(m_RotationIncrement));
            break;

        case VRInput.SwipeDirection.RIGHT:
            StartCoroutine(RotateCamera(-m_RotationIncrement));
            break;
    }
}

For more information on why the camera orbits in this scene rather than rotating the Maze itself, see the Movement article.

You should now have a good understanding of how basic interaction in VR works within the VR Sample Scenes. There are many ways of accomplishing this, but this method is quick and easy to get started with. In the next article, we will discuss different types of User Interfaces in VR. Remember that you can also ask questions about VR and chat with fellow Unity users on the Unity VR forum.