
Stanford, NVIDIA Cooperate on More Immersive VR Display

Published: Aug 11, 2015

Sandwiching two displays together creates a more natural, immersive image.

NVIDIA announced that the company is collaborating with Stanford University to demonstrate a new technology at this week’s SIGGRAPH graphics conference that makes VR more natural and comfortable.


According to NVIDIA, the basic principle of VR remains the same as when Sir Charles Wheatstone invented the first stereoscopic viewer in 1838. Sir Charles put two images of the same scene — drawn from slightly offset angles — inside a box attached to a viewer’s head. Your brain combines what each eye is seeing into something it interprets as three-dimensional.

“The only thing that’s really changed is that today we have computers,” explains NVIDIA researcher Fu-Chung Huang.

Working with the Stanford Computational Imaging Group, Huang is using GPUs to generate not two but 50 different images of the same scene many times each second, producing a sharper, more natural VR experience.

According to NVIDIA, the human brain directs the eyes to move in unison and to focus at the same time. When an object is far away, the eyes change focus so that it appears sharp. At the same time, they slightly diverge, rotating to shift the pupils of each eye a little further apart.

VR that relies on just two images, one for each eye, breaks that relationship. It lacks what researchers call “focus cues.” When human eyes rotate to look at a part of a VR scene that appears closer, the eye changes focus as well. But the actual image remains at the same distance. That disconnect can result in blurred vision, fatigue or even queasiness.

Stanford and NVIDIA provided a new solution: sandwich two transparent screens together to create a kind of hologram — or light field — that shows each eye 25 slightly offset versions of the same scene.

NVIDIA explained how it works. To create a scene, GPUs generate a different pattern for each display. Sandwich the two transparent displays together with these patterns, and anything you see is a combination of the two. As your eye moves from one part of the display to another, the patterns line up differently to present a slightly different image, one that accounts for the change in the eye’s focus as it moves.
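The view-dependent combination of two stacked patterns can be sketched in a few lines of Python with NumPy. This is a minimal toy model, not NVIDIA's actual rendering pipeline: it assumes light transmittances simply multiply as light passes through both layers, and approximates a change in eye position as a horizontal shift of the front pattern relative to the back one. The function name and shift parameter are illustrative.

```python
import numpy as np

def perceived_image(back, front, shift):
    """Toy model of viewing two stacked transmissive patterns.

    Light passes through the back layer, then the front layer, so the
    two transmittances multiply. Moving the eye changes how the front
    pattern lines up with the back one (modeled here as a horizontal
    shift), so each viewing position sees a different combined image.
    """
    shifted = np.roll(front, shift, axis=1)  # parallax between layers
    return back * shifted  # transmittances in [0, 1] multiply

# Two illustrative 4x8 patterns; values are transmittances in [0.2, 1.0].
rng = np.random.default_rng(0)
back = rng.uniform(0.2, 1.0, (4, 8))
front = rng.uniform(0.2, 1.0, (4, 8))

# Several nearby eye positions each see a slightly different combination,
# which is what gives the stacked display its light-field character.
views = [perceived_image(back, front, s) for s in range(-2, 3)]
```

In the real system the GPU would solve the inverse problem: given the 25 target views per eye, compute the two layer patterns whose combinations best reproduce them.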

The idea is simple enough for Huang to demonstrate with a cardboard box and a pair of transparencies. But turning that into a moving image requires generating 25 different images of a scene, for each eye, many times each second.
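A back-of-envelope calculation shows why GPUs are needed. The article states 25 images per eye, many times each second; the 60 Hz refresh rate below is an assumption for illustration, not a figure from the source.

```python
# Rough render load for the light-field display.
views_per_eye = 25   # from the article
eyes = 2             # from the article
refresh_hz = 60      # assumed refresh rate, not stated in the source

images_per_second = views_per_eye * eyes * refresh_hz
print(images_per_second)  # 3000 distinct scene renders per second
```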

The result is dramatic. Strap on a headset and it’s much easier to shift focus to different parts of a 3D scene.

