How do we do VR?
- A conventional desktop monitor provides a field of view of roughly 30°.
- This is not very realistic.
How much harder is VR display?
- It must be "Immersive"
- Human visual field 240°H × 120°V
- about 32 times more difficult.
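The 32× figure follows from simple arithmetic, treating both fields of view as flat angular rectangles (a rough simplification; the 30°×30° monitor figure is an assumption consistent with the numbers given):

```python
# Ratio of the human visual field to a conventional monitor's field of
# view, both treated as flat angular rectangles (a simplification).
human_field = 240 * 120    # degrees, horizontal x vertical
monitor_field = 30 * 30    # assumed ~30 degrees each way
print(human_field / monitor_field)  # -> 32.0
```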
What about 3D, Real Time and True Colour?
- Requires two distinct images, each from a different viewpoint. This
roughly doubles the complexity again.
Goggles (Head Mounted Display (HMD))
- "Looking At" paradigm
- Objects in the immediate visual range are simulated.
- Requires two small screens
- CRT, Active Matrix
- A typical screen is about 0.7" with 120,000 pixels in total,
arranged in a "delta" pattern--one pixel each for red, green and blue.
- The apparent resolution of the screen is some 40,000 pixels, or 200x200.
- Image warping/depixelizing
- This is usually done with optics rather than by digital transformation.
- Increases the apparent resolution of the screens. We can get away with
this because only the objects we are focusing on need to be sharp--peripheral
areas are rendered at very low resolution.
- "Inside Out" paradigm
- This is similar to the Star Trek "Holodeck." The VR user appears
in a room, and one's hand is really one's hand.
- The Cave Automatic Virtual Environment (CAVE) is the most famous
implementation. It is a 10'x10' cube-shaped room with projectors for
the walls and floor.
Where's the 3D?
- Stereo Shutter Glasses
- Since you need two images to generate 3D, the user wears
LCD shutter glasses synchronized with the scene generator, which
produces alternate images for each eye. These operate at 96 or 120 Hz.
- BOOM (Binocular Omni-Orientation Monitor)
- This is essentially two monitors bolted to a Luxo lamp. It is
intended for passive rather than interactive use of VR. It would be
difficult to climb into the full VR gear every time a developer needed
to test the application. The BOOM makes desktop testing a possibility.
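The shutter-glass scheme above halves the effective refresh rate seen by each eye, since every other frame goes to the other eye:

```python
# Each eye sees only every other frame, so the effective per-eye rate
# is half the display rate.
for display_hz in (96, 120):
    print(display_hz, "Hz display ->", display_hz // 2, "Hz per eye")
```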
There are essentially two ways to incorporate realistic, immersive audio
into the VR environment:
The sound must appear to come from some arbitrary point in space.
- Multiple sources
- There are multiple speaker sources. For example, one could mount a speaker in each of the eight corners of a CAVE environment.
- Normally only two speakers are mounted inside an HMD, so the 3D effect must be simulated by some means other than physical placement.
Cocktail party effect
- How hard is 3D sound to do with only two headphone speakers?
Very hard. How the brain processes sound is not well understood.
- One can only "tune in" to a single conversation if one hears it in stereo.
White noise location
- A fixed white noise source is only locatable if the listener
is able to move their head.
How to modify sounds appropriately was discovered by trial and error: by placing
microphones in people's ears and determining exactly how the source
sound was modified from the original, depending on the source sound's
position around the head of the subject.
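The per-ear modifications measured this way can then be applied to any source by convolution. The sketch below uses made-up two-tap impulse responses rather than real measurements; only the convolution step is the point:

```python
# A mono source is convolved with a separate impulse response per ear,
# each response having been measured with a microphone in that ear.
# The impulse responses here are invented toy values.

def convolve(signal, impulse):
    """Direct-form convolution (O(n*m), fine for a sketch)."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

mono = [1.0, 0.5, 0.25]
left_ir = [0.9, 0.1]    # hypothetical: source slightly to the left
right_ir = [0.4, 0.3]   # hypothetical: attenuated and delayed

left = convolve(mono, left_ir)
right = convolve(mono, right_ir)
```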
- The literal interpretation is "stereo," but in this context
it means that the audio is spatially pinpointable in three dimensions.
- This is a desktop-sized unit which costs around $25,000. It was the
state of the art for some years.
- DSP Based systems
- Recently, the development of DSP technology has reduced the cost
of such systems to about $10 and has reduced their size enough to make
them practical.
- Reverberation, Echo and Realism
- Special effects must be computed relative to the VR user's apparent
position. This is very difficult, comparable to doing radiosity in the
visual domain.
- Feedback to user
- Also known as "Haptics"
Digression: Much of the pre-commercial VR technology
was developed by the military. Of particular interest was military
aircraft simulation, which protected not only the (somewhat expendable)
inexperienced pilot, but (primarily) the U.S. military's multi-million-dollar
aircraft.
- Simulates "crash and burn"
Wand solenoids and motors
Pneumatic glove bladders
- Provides feedback with pneumatic bladders that inflate inside the glove
when simulated contact is made in the virtual world. Generally too
slow to be effective.
- Force feedback
- When modeling molecules in VR, an exoskeleton can simulate
repulsion and attraction.
- Prevents movement.
- Normally in a virtual world, there is nothing
physically to stop you from moving through walls--unless
there is an exoskeleton with sufficiently powerful motors
to control movement.
Largely an unsolved problem
There are two basic movement paradigms:
- One-to-one movement
- The user's movements provide feedback to the machine, with a
one-to-one mapping between "real-world" movement and the
corresponding virtual-world movement.
- In some circumstances, this can be rather tiring and/or
slow. It can also severely limit the travel distance
without more hardware such as treadmills used in
some architectural walk-throughs.
- Constant motion
- The user only indicates motion and speed changes. Usually
found in aircraft or land vehicle simulators, or games where
"flying" is required.
The idea here is to provide as many degrees of freedom as possible for
realistic motion response. The ultimate is 6 DOF: Yaw, Pitch, Roll in
addition to the X, Y, and Z coordinates in 3-space.
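A 6-DOF pose can be sketched as three translations plus three rotation angles applied in a fixed order. The rotation order below (yaw about Z, then pitch about Y, then roll about X) is an assumption for illustration; real trackers document their own convention:

```python
import math

# A 6-DOF pose: position (x, y, z) plus orientation (yaw, pitch, roll).
# Applying it to a point: rotate, then translate.

def apply_pose(point, pose):
    x, y, z = point
    px, py, pz, yaw, pitch, roll = pose
    # yaw about Z
    x, y = (x * math.cos(yaw) - y * math.sin(yaw),
            x * math.sin(yaw) + y * math.cos(yaw))
    # pitch about Y
    x, z = (x * math.cos(pitch) + z * math.sin(pitch),
            -x * math.sin(pitch) + z * math.cos(pitch))
    # roll about X
    y, z = (y * math.cos(roll) - z * math.sin(roll),
            y * math.sin(roll) + z * math.cos(roll))
    return (x + px, y + py, z + pz)
```

For example, a pure translation leaves the point's shape of motion intact, while a 90° yaw swings a point on the X axis onto the Y axis.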
- Magnetic
- Three magnetic field generators are placed in proximity to a sensor
located on the wand. The generators fire sequentially, and a computer
triangulates, based on the magnetic field strength, where the sensor is in 3-space.
- Ultrasonic
- This works much like the magnetic sensors, only using sound. Instead of
strength, these sensors measure the time it takes for the sound to travel
from the generator to the sensor.
- Mechanical
- Sensors at each joint or rotation axis are able to keep track
of user movement very accurately. The trade-off is that this technology
is quite cumbersome.
- Inertial
- On HMDs, inertial sensors are often used for keeping track of head
rotation. Unfortunately, these types of sensors are particularly
unreliable for the precise measurement required for VR work.
They are more frequently used to keep track of how far a missile has
traveled.
- Optical
- Sensors in the wand or HMD "count" light emissions from the
environment to determine position.
- Robot Vision
- Sufficiently sophisticated HMD-mounted pattern matching that is
able to determine position and orientation by taking visual cues from the
surrounding environment.
- This is not currently feasible (although highly desirable).
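The magnetic and ultrasonic trackers above both reduce to the same geometric step: convert each sensor reading into a distance (field strength for magnetic, flight time for ultrasonic), then triangulate. A 2D toy version, with made-up generator positions:

```python
import math

# An ultrasonic sensor turns flight time into distance; a magnetic one
# would infer distance from field strength instead.
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def distance_from_flight_time(seconds):
    return SPEED_OF_SOUND * seconds

def trilaterate(r1, r2, r3, d, ex, ey):
    """Position from distances to generators at (0,0), (d,0), (ex,ey)."""
    x = (r1**2 - r2**2 + d**2) / (2 * d)
    y = (r1**2 - r3**2 + ex**2 + ey**2 - 2 * ex * x) / (2 * ey)
    return x, y

# Generators at (0,0), (4,0), (0,4); sensor actually at (1, 2).
r1, r2, r3 = math.sqrt(5), math.sqrt(13), math.sqrt(5)
print(trilaterate(r1, r2, r3, d=4, ex=0, ey=4))  # -> approximately (1.0, 2.0)
```

A real system solves the same equations in 3D with a fourth reading (or a field model) to break the remaining ambiguity.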
- Detect finger flexion
- Fiber optics
- Glass filaments located in the fingers of the glove,
appropriately "slashed," leak light in proportion to the
amount that the fiber has been flexed. This is detected by
optoelectronics in the glove.
- Chordgloves--Pinch Sensors
- Determines how much pressure one is using to
grab a virtual object. It imparts the potential to "crush" or deform
virtual objects.
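The glove's optoelectronics ultimately map "how much light survived the fiber" to a joint angle. The linear calibration below is an assumption for illustration; real gloves are calibrated per user:

```python
# Toy calibration for a fiber-optic flex sensor: more bend means more
# light leaks out, so less light reaches the detector.  A linear
# response is an assumption.

def flexion_degrees(light_out, light_straight, light_full_bend,
                    full_bend_degrees=90.0):
    lost = light_straight - light_out
    span = light_straight - light_full_bend
    return full_bend_degrees * lost / span

print(flexion_degrees(0.75, 1.0, 0.5))  # -> 45.0
```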
You can never get enough processing power.
The basic goal is to sufficiently simulate reality. Of course, this
depends on how demanding one's definition of VR is. If "Doom" is
sufficient, then you will need significantly less power.
- Ray tracing
- Models objects
- Point light sources
- View dependent
- Rays of light are traced from the eye out into the scene.
- Radiosity
- Models light
- Diffuse, area lighting; realistic shadows.
- View independent (conditionally)
- Surface patches ("emitters") bounce light from one to the other.
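The "patches bouncing light" idea can be sketched as a tiny radiosity solve: each patch's brightness is its own emission plus reflected light gathered from every other patch, iterated until stable. The form factors and reflectances below are invented toy values:

```python
# Minimal two-patch radiosity: B_i = E_i + rho_i * sum_j F[i][j] * B_j,
# iterated (Jacobi style) until it settles.  The result is view
# independent: it depends only on the patches, not on the camera.

emission = [1.0, 0.0]          # patch 0 is the light source
reflectance = [0.5, 0.8]
form_factor = [[0.0, 0.3],     # F[i][j]: fraction of i's view filled by j
               [0.3, 0.0]]

b = emission[:]
for _ in range(50):
    b = [emission[i] + reflectance[i] *
         sum(form_factor[i][j] * b[j] for j in range(2))
         for i in range(2)]
```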
If attempting to do any kind of heavy-duty VR, life begins and ends at
the hardware.
VRML (Virtual Reality Modeling Language)
- Platform Independent
- Integrate with the WWW
- Large shared space
Instead of providing the VRML browser with network addresses, one
simply gives it a set of three-dimensional coordinates. Servers which
potentially have objects in this space return them.
- Objects
- These are the actual "things" available in the VRML space.
- Layout
- Where the objects are, relative to one another.
- Binding mechanism
- Matches the 3D space to VRML servers on the network.
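A minimal sketch of the coordinate-based binding, assuming servers register axis-aligned regions of the shared space (the server names and regions below are made up):

```python
# The browser hands over 3D coordinates; any server whose registered
# region contains them may hold objects there.

servers = {
    "alpha.example": ((0, 0, 0), (100, 100, 100)),   # (min, max) corners
    "beta.example":  ((50, 0, 0), (200, 100, 100)),
}

def servers_for(point):
    return [name for name, (lo, hi) in servers.items()
            if all(l <= p <= h for p, l, h in zip(point, lo, hi))]

print(servers_for((75, 10, 10)))  # -> ['alpha.example', 'beta.example']
```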
No pictures yet! It is largely theoretical; however, the claim is that
a browser, Labyrinth, is currently being developed.
Cube {
    width  2    # SFFloat
    height 2    # SFFloat
    depth  2    # SFFloat
}
Wired's VRML page
- Some of the things that VR promises are practically
impossible without surgery.
- A system such as n-Vision cost about $65,000. Some U.S. military
simulation systems cost $50 million. Amortized over 10 years, this
works out to be about $3000/hour. In comparison, instructor assisted
flight (actual reality) at the Calgary airport is $100/hour.
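The $3000/hour figure implies an assumed utilization, which back-of-envelope arithmetic can recover:

```python
# Recovering the utilization implied by the amortization above.
total_cost = 50_000_000   # dollars
years = 10
per_hour = 3000           # dollars/hour
hours_per_year = total_cost / years / per_hour
print(round(hours_per_year))  # -> 1667 hours/year, i.e. about 4.6 hours/day
```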
- There are currently no hard standards for interfacing
much of the VR peripheral equipment. For example, some
of the CAVE transducer output needed to be "massaged"
in an i486 machine before it was sent on to the rendering host.
Low resolution feedback
- In VR with an HMD display, one tends to adapt by rotating
one's head rather than simply moving one's eyes.
- Eye strain: the eye focuses at the screen's fixed focal
distance (usually a constant of about 3 m), which rarely matches
the apparent distance of the object.
- Visual/Audio mismatches can cause headaches.
- Much of the sensor technology today is not particularly
accurate. Even when the user is completely still there
is often some "jitter" associated with many types of
sensors (e.g. magnetic and ultrasonic)
- Processing the transducer input can take quite a bit of
time, generating some lag between movement and response.
- The rendering process consumes a huge piece of the processing
power pie. Some compromise is usually reached, such as
reducing the LOD (level of detail) or partial scene rendering.
- VR users tend to be encumbered with a veritable
plethora of heavy and inconvenient appendages such as suits,
gloves and goggles, many attached with wires and/or arms
affixed to the ceiling.
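The LOD compromise mentioned above amounts to dropping detail until the estimated frame time fits the budget. A toy version with invented per-level costs:

```python
# Pick the highest level of detail whose estimated render time still
# fits the frame budget.  Costs are invented numbers.

FRAME_BUDGET = 1 / 30        # seconds, assuming a 30 fps target
lod_cost = {3: 0.050, 2: 0.040, 1: 0.030, 0: 0.015}  # est. seconds/frame

def pick_lod():
    for level in sorted(lod_cost, reverse=True):   # try highest detail first
        if lod_cost[level] <= FRAME_BUDGET:
            return level
    return 0

print(pick_lod())  # -> 1 (0.030 s fits the ~0.033 s budget)
```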