How VR Is Overcoming Visual Shortcomings

As virtual reality has matured, applications for the technology have multiplied. Many of these have been in training scenarios, where VR is used to place people in 'realistic' virtual settings to test their skills.

The challenge, of course, is whether these environments are realistic enough to provide worthwhile training. A recent study from the University of Wisconsin-Madison examined a particular problem with VR: getting users to perceive images the same way they do in real life.

“The companies leading the virtual reality revolution have solved major engineering challenges: how do you build a small headset that does a good job presenting images of a virtual world,” the researchers say. “But they have not thought as much about how the brain processes these images. How do people perceive a virtual world?”

Trained to see

The researchers discovered that we don't tend to perceive virtual objects in the same way we do real ones, at least not without a significant amount of training. In previous research, they found that people were poor at discerning the direction a target was moving, especially if it was moving toward or away from them.

The team set out to improve matters, testing a range of tools to help VR convey motion more realistically across all three dimensions.

“We thought it was as easy as taking the same object-tracking task, putting it in the virtual environment, and having people do it the same way,” they explain. “And they did do it the same way. They made the same mistakes.”

Performance improved once subjects received both audio and visual feedback: for instance, they were shown the full path of an object, with tones signaling success or failure. This simple level of feedback doubled participants' success rate.
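The article doesn't describe the study's actual software, but the feedback loop it sketches — judge the user's guess of a target's 3D motion direction, replay the full path, and play a success or failure tone — can be illustrated with a minimal, purely hypothetical routine (all names and the 20° tolerance are assumptions, not from the study):

```python
# Illustrative sketch only -- NOT the study's actual code. Scores a guessed
# 3D motion direction against the true one and picks the trial feedback
# described in the article: replay the full path, play a success/failure tone.
import math


def angular_error(guess, actual):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(g * a for g, a in zip(guess, actual))
    norm_g = math.sqrt(sum(g * g for g in guess))
    norm_a = math.sqrt(sum(a * a for a in actual))
    cos = max(-1.0, min(1.0, dot / (norm_g * norm_a)))  # clamp for safety
    return math.degrees(math.acos(cos))


def trial_feedback(guess, actual, tolerance_deg=20.0):
    """Cues for one trial: always show the object's full path visually,
    and choose the audio cue from the angular error of the guess."""
    success = angular_error(guess, actual) <= tolerance_deg
    return {"show_full_path": True, "tone": "success" if success else "failure"}
```

For example, a guess of `(1, 0, 0)` against a true direction of `(-1, 0, 0)` is 180° off and would trigger the failure tone.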

The results suggest that participants perform best when they both receive small motion cues and are permitted small head movements. The team hope the findings will help developers create more intuitive and practical virtual environments.

“Google packages a virtual reality YouTube viewer with their headset. That’s a passive experience, and not the best thing to do,” they say. “What they should be doing is packaging action games with their headset, something that forces users to interact with the environment. That teaches them to use the information available in virtual reality, and treat it more like the real world and less like a computer screen.”

Fixing ‘lazy eye’

Interestingly, the team also believe their work can help refine treatment for conditions such as amblyopia, or lazy eye. It's a condition in which the signals between the brain and one eye go awry, leaving the other eye dominant. It's usually treated by forcing the weaker eye to adapt, either via lab training or by wearing a patch. A recent study, however, has tested an augmented-reality alternative.

“With this altered-reality system, participants interact with the natural world that is changed through real-time image processing. The system delivers altered but complementary video to each eye in real time, forcing participants to make use of the visual inputs to both eyes cooperatively,” the authors explain.

The system is a form of augmented reality in which parts of the scene are altered before the video is displayed to the user, though it doesn't introduce any unnatural or nonexistent objects. The team believe this approach overcomes the limitations of lab-based training because it allows training to take place every day.
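The study's actual image processing isn't specified in the article, but one plausible way to make the two eyes' video streams "altered but complementary" is to send the blurred (low-frequency) component of each frame to one eye and the residual (high-frequency) detail to the other, so that only the two eyes together recover the full scene. The sketch below is purely illustrative under that assumption:

```python
# Illustrative sketch only -- one hypothetical way to produce "altered but
# complementary" per-eye views; the real system's processing is not described
# in the article. Works on a 2D grayscale frame.
import numpy as np


def box_blur(frame, k=5):
    """Cheap separable box blur: running mean along rows, then columns."""
    kernel = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, frame)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)


def complementary_views(frame):
    """Split one video frame into two complementary per-eye images.
    The low-pass view goes to one eye, the high-pass residual to the other;
    their sum reconstructs the original frame exactly, so the visual system
    must combine inputs from both eyes to see the whole scene."""
    low = box_blur(frame)
    high = frame - low
    return low, high
```

Because the high-pass view is defined as the residual, `low + high` reconstructs the input frame exactly, which is what makes the two streams complementary rather than merely degraded.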

When the results of wearing the device were analyzed, it did indeed produce improvements in ocular balance that persisted up to two months after users stopped wearing it.

“Several 3-hour adaptation sessions produced effects that strengthened when people returned to their normal visual environment after the training ended,” the authors explain.

The team believe their work has important implications for a wide range of areas, including clinical ophthalmology and product development. They hope to continue researching this phenomenon to pin down the exact mechanisms underlying the improvements seen in this work.