UI for Recumbent VR

When a VR user is lying down (recumbent), their default orientation is orthogonal to the world’s, which introduces a few challenges. The one I am currently tackling is how to present an intuitive but non-invasive menu.

In most Cardboard apps, the standard is to place the menu at the “feet” of the user, or directly downward from their head. Clearly, a downward menu will not work for someone who is lying on their back, as it would be beneath them. Placing the menu at their “feet” doesn’t work either, because one of the most important uses of the in-game menu is to recenter the world’s orientation to that of the user.

The gyroscopes of most mobile phones introduce drift to the user’s orientation, causing it to slowly change over time. The user needs to be able to set up the initial orientation of the virtual world to match their own, and to occasionally re-synchronize the two. To this end, I need to be able to have the user bring up the menu from any point around the world’s vertical axis.
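To make the recentering itself concrete, the basic idea is to spin the scene’s content around the vertical axis until its forward direction matches the camera’s current yaw. Here is a minimal sketch, assuming everything in the scene is parented under a single worldRoot transform (the class and field names are placeholders for illustration, not the project’s actual code):

using UnityEngine;

public class RecenterWorld : MonoBehaviour {
    //	All of the scene's content is assumed to be parented under worldRoot.
    public Transform worldRoot;

    //	Called from the menu's "recenter" option.
    public void Recenter () {
      //	Rotate the world around the vertical axis so that its forward
      //	direction matches wherever the user is currently facing.
      float cameraYaw = Camera.main.transform.eulerAngles.y;
      float worldYaw = worldRoot.eulerAngles.y;
      worldRoot.Rotate( 0f, cameraYaw - worldYaw, 0f, Space.World );
    }
}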

Since the entirety of this VR experience’s action takes place in the sky, I consider the hemisphere above the user to be the “game space”. Firing fireworks into the ground, after all, does not produce a desirable effect (and it would mean adding collision detection to the terrain). So, when the user looks down past a certain angle, the UI menu appears and tracks their orientation around the vertical axis, allowing them to re-orient the world to their physical position with ease. Once the user looks back up into the sky, the menu slides underground again, out of view.
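Roughly, the logic looks something like the sketch below, run every frame. The class name, field names, and the 30-degree threshold are placeholders I’ve chosen for illustration rather than the exact values in the project:

using UnityEngine;

public class RecumbentMenu : MonoBehaviour {
    public Transform menu;              //	The menu panel near the horizon.
    public float showBelowPitch = 30f;  //	Degrees below the horizon before the menu appears.
    public float menuDistance = 3f;
    public float hiddenHeight = -2f;    //	Resting position underground.
    public float shownHeight = 0.5f;

    void Update () {
      Transform head = Camera.main.transform;

      //	Convert the 0-360 euler pitch into a signed angle, where
      //	positive means the user is looking down toward the ground.
      float pitch = head.eulerAngles.x;
      if (pitch > 180f) pitch -= 360f;
      bool visible = pitch > showBelowPitch;

      //	Keep the menu in front of the user around the vertical axis,
      //	so it can be reached from any yaw.
      Vector3 forward = Quaternion.Euler( 0f, head.eulerAngles.y, 0f ) * Vector3.forward;
      Vector3 target = head.position + forward * menuDistance;
      target.y = visible ? shownHeight : hiddenHeight;

      //	Ease toward the target so the menu slides in and out of the ground.
      menu.position = Vector3.Lerp( menu.position, target, Time.deltaTime * 5f );
    }
}

Easing the menu’s position rather than snapping it is what gives the slide-underground effect.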

I find it fairly easy to use, although I’ll leave it to user testing to see if it’s an effective and intuitive method of presenting a menu to a recumbent user. The fact that the user is lying down looking at the sky also means that synchronizing their orientation with their real self is not particularly important, but maybe I just like an interesting problem to solve.

Ambient Audio

Because the user should inhabit their virtual self as much as possible, and that virtual self is also lying down, I thought to emulate the effect of muffling the sound in one ear when it is pressed down against the ground. Of course, the actual user is wearing earbuds or headphones, so the sound will not be muted, but since I can track the angle of their head, I can fake it by panning the ambience away from the ear that is pressed down.

Ideally, the master sound channel would be muted on the muffled side, but panning an audio mixer channel does not appear to be possible. Unless there’s something I’m missing, this is the first time I’ve run up against a limitation of Unity, rather than my own ability or budget.

Anyhow, individual 2D audio sources can be panned, so I can at least control the ambient sound. First, I needed to figure out how to determine whether the user’s head is tilted to one side. By reading the camera’s roll angle, I could determine the range of values that represent the user tilting their head all the way to the right or to the left. Once I had those values, I used Mathf.InverseLerp to turn the range of angles into a float from 0 to 1. The following is called from the Update() function:

void MuffleAmbience () {
    //	ambience is the 2D AudioSource playing the ambient loop.
    float panningAmount = 0.0f;
    float roll = Camera.main.transform.eulerAngles.z;

    if (roll > 60 && roll < 86) {
      //	The user's head is tilted to the left; pan the ambience toward
      //	the right channel, away from the ear pressed into the pillow.
      panningAmount = Mathf.InverseLerp( 60, 86, roll );
    } else if (roll > 274 && roll < 300) {
      //	The user's head is tilted to the right; pan toward the left channel.
      panningAmount = Mathf.InverseLerp( 300, 274, roll ) * -1;
    }

    ambience.panStereo = panningAmount;
}

Mathf.InverseLerp returns a value from 0 to 1.0 representing how far between two numbers a third number is. For example, 8 is halfway between 6 and 10, so Mathf.InverseLerp(6, 10, 8) would return 0.5. I do two checks, one each for left and right, and then set the ambience’s stereo panning accordingly. This only kicks in at the edge of the range of movement, so the sound is not muffled in either channel until the user’s head is just about resting on the pillow. I also cut the range off a bit short so it never goes fully silent in either channel.

Trying it out, it feels quite natural, and it’s the sort of feature that isn’t really noticed by the user unless it’s pointed out. The 3D audio of the fireworks’ whistles, booms, and cracks, though, still plays in 3D space regardless of the user’s head orientation. Perhaps I can come up with another hack for that.

Skyboxing

Using a Skybox in VR was easier than I imagined. I had concerns that the ability to perceive depth would allow the user to see the box as a cube, but it all appears infinitely distant, as far as I can tell.

As for the image, there are plenty of starry night skyboxes available on the Asset Store. My first try was a free fantasy skybox with gorgeous clouds. It looked great, but static clouds on a skybox look worse in VR than they usually do. Not only is it apparent that the clouds and stars are at the same distance, but I can’t have static clouds if the user is going to be staring primarily at the sky!

I have now purchased a skybox starter pack with a variety of plain day and night skyboxes, and went with a clear starry night with a supermoon. So, I need to figure out a method for making dynamic clouds that works on mobile. Scrolling UVs should do the trick if I can get that to work with a transparent shader; I’ll cover it in another entry.
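As a rough sketch of the scrolling-UV approach, assuming the clouds are a tiling texture on an unlit, transparent material stretched over a large quad or dome (the ScrollClouds name and the speed values here are made up for illustration):

using UnityEngine;

public class ScrollClouds : MonoBehaviour {
    //	An unlit, transparent cloud material on a large quad or dome.
    public Material cloudMaterial;
    public float scrollSpeedX = 0.005f;
    public float scrollSpeedY = 0.001f;

    void Update () {
      //	Offsetting the main texture's UVs each frame makes the static
      //	cloud texture appear to drift slowly across the sky.
      cloudMaterial.mainTextureOffset = new Vector2( Time.time * scrollSpeedX, Time.time * scrollSpeedY );
    }
}

Whether a transparent material like that stays cheap enough on mobile is the part I still need to work out.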

Modelling Horn

Because I’m really trying to get in the spirit of the user’s head as their primary controller, I’ve gone ahead and started modeling a small variety of “launchers” to place on the user’s virtual forehead. I think I may allow them to switch between a few options or remove it entirely if desired.

The obvious first choice is a unicorn horn, and here is my first attempt at modeling a low-poly one, or using Blender pretty much at all. Not bad for someone whose last shot at actual modeling took place on an SGI Indigo2.

First attempt at modeling a unicorn horn.

Texturing and figuring out the UVs will have to come later. Doing all this stuff myself really makes me once again appreciate all the artists who made far more complex things than this on a daily basis.

Got a lot of help on how to create the shape I was looking for from this discussion on modelling onion domes: http://blender.stackexchange.com/questions/2297/how-would-i-go-about-creating-a-spiralled-dome/2318

Looking Up

While more luxurious VR units have head straps, headphones, and the like, Google Cardboard is often meant to simply be held up to the eyes for short, controlled bursts of VR. Since it has no positional tracking, a recumbent user is actually ideal for Cardboard: the range of movement is limited and unlikely to get out of sync with the VR experience due to users shifting in their seats, leaning their heads closer to objects, and so on.

For this reason, and because I like to be a bit weird, I’ve started my first VR projects with the sky as the canvas. One of VR’s strengths is its ability to convey scale, and there are few environments as vast as the sky. One of its drawbacks is that small details are often lost, a problem that focusing on the sky rather than on nearby objects and environments also helps to avoid.

My first project has a working title of “Forehead Fireworks” and is a VR experience of the user lying on a bed in the middle of a grassy field at night. The user is able to fire colorful fireworks out of their forehead into the starry sky.

The first priority for this project will be to create as convincing and engaging a skyscape as I am able given the technical limitations of the mobile platform, which I will expand upon in a later post.