The CAVE (Cave Automatic Virtual Environment) studio is used for visualizing computer-generated three-dimensional structures, objects and worlds in a virtual space where the viewer is an active operator surrounded by five display walls of computer-generated virtual reality.

The CAVE is run by a PC Visualization Cluster, a multi-display graphics computing system. There is one computer per display wall; the total number of computers is six, of which one is the master computer and five are graphics slave computers.
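
The division of work between the master and the per-wall slaves can be pictured with a short sketch. This is only an illustration of the one-computer-per-wall architecture under assumed names and a made-up UDP message format (the host names wall-1 ... wall-5, the port and the JSON packet are not the actual cluster software).

    # Minimal sketch: the master sends the tracked state to each graphics slave
    # once per frame; every slave then renders its own display wall.
    import json
    import socket

    RENDER_NODES = [("wall-%d" % i, 5005) for i in range(1, 6)]  # one slave per wall (hypothetical hosts)

    def broadcast_frame_state(head_pose, wand_state):
        """Master node: push the per-frame tracking data to all five slaves."""
        packet = json.dumps({"head": head_pose, "wand": wand_state}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        for host, port in RENDER_NODES:
            sock.sendto(packet, (host, port))
        sock.close()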

To display a 3D world around the user in a cubical display, a tracking system is required to locate the exact position and orientation of the viewer inside the CAVE. Optical tracking provides the exact coordinates of the user's eyes and the wand through small retroreflective balls attached to the viewer's stereo glasses and the navigation wand.
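
How a position and an orientation can be recovered from a cluster of tracked balls can be sketched as follows. The sketch assumes three non-collinear markers in a known rigid arrangement on the glasses; marker labelling and the camera triangulation that produces the 3D points are outside its scope, and real tracking systems use more refined solvers.

    import numpy as np

    def pose_from_markers(m0, m1, m2):
        """Return (position, 3x3 rotation) of a rigid three-marker cluster."""
        m0, m1, m2 = (np.asarray(m, dtype=float) for m in (m0, m1, m2))
        position = (m0 + m1 + m2) / 3.0           # centroid of the marker cluster
        x_axis = m1 - m0                          # along the first marker pair
        x_axis /= np.linalg.norm(x_axis)
        z_axis = np.cross(x_axis, m2 - m0)        # normal of the marker plane
        z_axis /= np.linalg.norm(z_axis)
        y_axis = np.cross(z_axis, x_axis)         # completes the right-handed frame
        rotation = np.column_stack((x_axis, y_axis, z_axis))
        return position, rotation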

The image on each display wall is stereoscopic. Slightly different images are drawn for the viewer's left and right eye, providing depth in the image. This way the objects shown on the display walls are not restricted to the display wall plane; instead they seem to float in front of the viewer.
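
The two eye images follow from the tracked head pose: each eye is offset sideways from the head position, and each wall is rendered with an off-axis (asymmetric) frustum from that eye. The sketch below illustrates this for a single front wall; the interpupillary distance, wall extents and coordinate conventions are assumptions, not the studio's actual rendering code.

    import numpy as np

    IPD = 0.065  # assumed interpupillary distance in metres

    def eye_positions(head_pos, head_rot):
        """Offset the tracked head position along the head's right axis for each eye."""
        right = head_rot[:, 0]                        # assume first column = right vector
        return head_pos - right * IPD / 2, head_pos + right * IPD / 2

    def front_wall_frustum(eye, near=0.1, wall_z=-1.5, half_w=1.5, half_h=1.2):
        """glFrustum-style (l, r, b, t) for a front wall parallel to the XY plane."""
        dist = eye[2] - wall_z                        # eye-to-wall distance along -Z
        scale = near / dist
        left   = (-half_w - eye[0]) * scale
        right  = ( half_w - eye[0]) * scale
        bottom = (-half_h - eye[1]) * scale
        top    = ( half_h - eye[1]) * scale
        return left, right, bottom, top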

It is also possible to navigate in the virtual environment by using a "magic wand", which gives the user a way to move in the direction pointed with the wand. The wand is a controller tracked with the same reflective balls; it has a thumb-driven analog stick for moving forward and backward and for turning left and right, and several buttons to control the behavior of the active 3D model.
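
The stick input and the tracked wand pose can be combined into navigation roughly as sketched below. The axis conventions, speeds and parameter names are assumptions for illustration, not the actual wand driver.

    import numpy as np

    MOVE_SPEED = 1.5   # metres per second, assumed
    TURN_SPEED = 45.0  # degrees per second, assumed

    def navigate(position, heading_deg, wand_forward, stick_x, stick_y, dt):
        """Advance the viewpoint along the wand's pointing direction.

        stick_y drives forward/backward motion along wand_forward (a unit vector
        from tracking); stick_x turns left/right. heading_deg is the extra yaw
        the caller applies to the virtual scene.
        """
        heading_deg -= stick_x * TURN_SPEED * dt                       # left/right turn
        position = position + np.asarray(wand_forward) * stick_y * MOVE_SPEED * dt
        return position, heading_deg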

Full Body Motion Capture, or MoCap, is also possible in the CAVE. The user wears a special motion capture suit with dozens of reflective balls attached to it, and the 12-camera system captures the CAVE volume continuously at 100 Hz. The computer system calculates the position of each reflective ball and is able to create a living point cloud of the operator. This point cloud is then refined into a skeleton that may be used as the skeleton of any 3D character within animation software.
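
The refinement from a labelled point cloud into a skeleton can be pictured as estimating each joint from the markers placed around it. The marker labels and the joint-to-marker mapping below are illustrative assumptions; production MoCap pipelines use calibrated skeleton solvers rather than simple centroids.

    import numpy as np

    # Assumed joint definitions: each joint is taken as the mean of the labelled
    # markers placed around it on the suit.
    JOINT_MARKERS = {
        "left_elbow":  ["LELB_OUT", "LELB_IN"],
        "right_elbow": ["RELB_OUT", "RELB_IN"],
        "pelvis":      ["LASI", "RASI", "LPSI", "RPSI"],
    }

    def solve_skeleton(marker_positions):
        """marker_positions: dict of marker label -> (x, y, z) at one 100 Hz frame."""
        skeleton = {}
        for joint, labels in JOINT_MARKERS.items():
            pts = np.array([marker_positions[m] for m in labels])
            skeleton[joint] = pts.mean(axis=0)    # joint centre = marker centroid
        return skeleton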

Head Mounted Display (HMD) devices - such as the Oculus Rift Development Kit 2 (DK2), the HTC Vive and the OSVR (Open Source Virtual Reality) Razer HDK (Hacker Development Kit) - provide a stereoscopic high-definition image for the viewer's eyes and an accurate, low-latency tracking system that enables the user to view the 3D model from any angle - 360 degrees in every orientation.

A haptic device applies force to the user's hand through a pen-like stylus that the user holds and uses to control the application (usually by moving the pointer). These forces may be static or dynamic: an impulse, or a simulation of the weight of an object, inertia, centrifugal force, the viscosity of a liquid, gravitation, spring force, and so on.
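
The spring-type forces mentioned above are often rendered with a simple penalty model: when the stylus tip penetrates a virtual surface, a force proportional to the penetration depth (plus damping) pushes it back out. The sketch below shows this for a horizontal virtual floor; the stiffness, damping and surface are assumed values for illustration, not the device's API.

    import numpy as np

    STIFFNESS = 500.0   # N/m, assumed virtual wall stiffness
    DAMPING   = 2.0     # N*s/m, assumed damping coefficient

    def haptic_force(stylus_pos, stylus_vel, floor_height=0.0):
        """Penalty force for a horizontal virtual floor at y = floor_height."""
        penetration = floor_height - stylus_pos[1]      # how far below the floor
        if penetration <= 0.0:
            return np.zeros(3)                          # no contact, no force
        normal = np.array([0.0, 1.0, 0.0])              # floor normal points up
        spring = STIFFNESS * penetration * normal       # Hooke's-law push-out
        damper = -DAMPING * stylus_vel[1] * normal      # resist motion into the floor
        return spring + damper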

With a 3D motion capture sensor such as the Microsoft Xbox 360 Kinect, you can scan a 3D model, do motion capture or gesture recognition, hold a video conference, use voice command control, do image mapping, and more.
