Reconfigurable, Tiled Multi-display Environment (TiDE)
In many virtual reality installations and applications, synthetic imagery must be projected on multiple surfaces, or on non-planar or discontinuous surfaces, as is the case with a CAVE system or a dome. In such cases, not only must multiple graphics cards be used, but the viewing and projection transformations of each renderer view must also be configured individually to produce a seamless final result.
With this in mind, a display library named TiDE* (Tiled Display Environment) was designed and implemented, which operates as a projection-matrix configuration mediator between the actual rendering procedure and the graphics outputs of a system. TiDE is configured through a simple yet effective XML script, which allows multiple configurations to reside in a single file and share common features where necessary. The XML configuration file provides a list of all scripted configurations a computer (or cluster node) may use, as well as global data such as the center of projection and global transformations of the projection setup (needed, for instance, for easy global tilting of the virtual views at various angles, as was required when building the system for the FHW dome theater). Each setup section has a unique user-specified name, and TiDE can switch from one setup to another in real time, as well as re-calculate the projection matrices according to tracked or simulated head input. Setup switching is important because it enables fast camera switching, on-the-fly VR display reconfiguration or, when used in a computing cluster, flexible node switching and redundancy.
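The structure described above could be sketched as follows. The element and attribute names here are purely illustrative assumptions (the actual TiDE schema is not reproduced in this text); the sketch only shows the layout of global data plus multiple named setups in a single file:

```xml
<!-- Hypothetical layout; the real TiDE element names may differ. -->
<tide>
  <global>
    <centerOfProjection x="0" y="1.5" z="0"/>
    <transform tiltX="-25"/>  <!-- global tilt of the virtual views -->
  </global>
  <setup name="dome_front">
    <tile name="front_panel"> ... </tile>
  </setup>
  <setup name="desktop_mono">
    <tile name="screen"> ... </tile>
  </setup>
</tide>
```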
TiDE uses the notion of a tile, which represents a physical projection surface or medium. A tile is the final destination of the generated image; it exists in a real coordinate system and is therefore measured in physical units. We can consider it a "window" onto reality, through which the spectator glimpses the virtual environment: when declaring a tile, we are referring to a real surface in the hardware setup of the projection system. For instance, in a four-wall cubical surround-screen projection environment (like the CAVE), each of the rear- or front-projection panels comprising the immersive environment is a TiDE tile. For a blended curved-screen projection system, a tile corresponds to the area an image projector covers on the screen. For a desktop system, the video display screen is the counterpart of a tile, and so on.
Instead of adhering to the OpenGL (or other SDK) frustum setup procedure, where arbitrary projections are explicitly configured in an unintuitive manner, TiDE buries all the necessary calculations inside the library's code and provides a more natural configuration interface targeted at virtual reality installations. TiDE is designed for single- or multiple-channel setups in monoscopic, passive stereo, active stereo or individual left/right eye operation, and for the generation of symmetrical or oblique frusta. It is suitable for configuring an application displaying OpenGL graphics to run on a CAVE-like environment, a Reality Center, a dome theatre, a power wall or any other tiled or blended display configuration, as well as on single-screen desktop VR systems, HMDs or reconfigurable projection spaces with disjoint panels (see figure below). A tile can be declared in various ways, using arbitrary axes in space, angular offsets or frusta in relative or absolute coordinates, making TiDE compatible with virtually any installation environment.
A TiDE tile may consist of multiple viewports (e.g. for compositing or image superposition).
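Since a tile is an arbitrary rectangle in physical space, the oblique frustum for a given eye position follows from the tile's corner positions alone. The sketch below uses the well-known generalized off-axis projection formulation; the function name and conventions are illustrative, not TiDE's actual API:

```python
import math

def tile_frustum(pa, pb, pc, pe, near):
    """Off-axis frustum extents for a tile given in physical units.
    pa, pb, pc: lower-left, lower-right, upper-left tile corners (world space).
    pe: eye position (center of projection).
    Returns (left, right, bottom, top) measured at the 'near' plane.
    """
    def sub(a, b): return tuple(a[i] - b[i] for i in range(3))
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    def norm(a):
        s = math.sqrt(dot(a, a)); return tuple(x / s for x in a)

    vr = norm(sub(pb, pa))        # tile right axis
    vu = norm(sub(pc, pa))        # tile up axis
    vn = cross(vr, vu)            # tile normal, pointing toward the eye

    va, vb, vc = sub(pa, pe), sub(pb, pe), sub(pc, pe)
    d = -dot(vn, va)              # distance from eye to the tile plane
    left = dot(vr, va) * near / d
    right = dot(vr, vb) * near / d
    bottom = dot(vu, va) * near / d
    top = dot(vu, vc) * near / d
    return left, right, bottom, top
```

For an eye centered in front of the tile this yields a symmetric frustum; moving the eye (e.g. under head tracking) makes it oblique, which is exactly the per-tile recalculation the text describes.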
Solving Stereoscopic Display Issues with Unidirectional Eye Separation Setup
In contrast to pre-rendered graphics, where spherical or cylindrical projection surfaces can easily be mapped to image space by radial ray casting, real-time rendering performs perspective projection onto planar windows in space. The visible area through the extents of the window defines, in general, an oblique frustum centered at the user location and passing through the edges of an arbitrary rectangular viewport. Mapping the planar projections onto a curved screen is not a trivial problem and results in multiple intersecting frusta at odd angles. The projectors that display the image perform a projective mapping of screen image coordinates to points on the curved display surface. This would rectify the inconsistency between the perspective distortion introduced by the rendering projection and the arc-length linearity of the curved surface, were it not for the fact that the physical frustum of the projector differs from the displayed virtual frustum. To counter this, the image is warped before being driven to the projector.
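The core of such a warp is a per-pixel lookup: for each projector pixel, take the viewing direction of the corresponding point on the curved surface and find where that direction lands in the planar virtual-frustum image. A minimal sketch of this lookup (names and conventions are illustrative; eye at the origin looking down -z):

```python
def frustum_lookup(direction, left, right, bottom, top, near):
    """Map a world-space viewing direction to (u, v) texture coordinates
    in a planar rendered image defined by an off-axis frustum.
    Returns None if the direction misses the virtual window."""
    x, y, z = direction
    if z >= 0:
        return None                      # behind or parallel to the window
    s = -near / z                        # intersect the ray with z = -near
    px, py = x * s, y * s
    if not (left <= px <= right and bottom <= py <= top):
        return None
    u = (px - left) / (right - left)
    v = (py - bottom) / (top - bottom)
    return u, v
```

Evaluating this once per projector pixel (with the projector-to-surface projective mapping supplying the directions) produces the warp table that resamples the rendered frame before it is driven to the projector.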
The most common and easiest practice when calculating the left/right eye frusta is to assume that the eye disparity vector is parallel to the projection plane. This is a realistic assumption, as spectators tend to face a display surface when viewing three-dimensional content, so this frustum setup works well for tiled flat panels, CAVE environments or cylindrical panoramas with a small bend angle. The problem is that when the planar projections intersect at a considerable angle (as in the case of domes), the corresponding disparity vectors also cross each other at that angle. Even for small inter-ocular distances, the generated frames cannot match at the seams, and the problem can only be partially alleviated by image blending. Another problem associated with the particular case of a spherical projection surface is inverted stereo, or cross-eye viewing: when the disparity vector is kept parallel to each projection plane, the left/right offsets of the top and rear tiles are inverted. One solution to the crossing of the vectors is to remove the stereo effect from the rear tiles completely, although even this needs to be done gradually, by progressively diminishing the inter-ocular distance from the front tiles toward the back of the spherical projection surface.
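The gradual attenuation mentioned last could be sketched as a per-tile scaling of the inter-ocular distance by the tile's deviation from the front direction. The cosine falloff below is an illustrative choice of schedule; the text only requires that the decrease be gradual:

```python
def scaled_iod(iod_front, tile_direction, front_direction):
    """Diminish the inter-ocular distance for tiles facing away from the
    primary (front) direction, reaching zero at and beyond 90 degrees.
    Both directions must be unit vectors."""
    c = sum(a * b for a, b in zip(tile_direction, front_direction))
    return iod_front * max(0.0, c)       # full IOD at front, zero at the rear
```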
Generating a pair of frusta for stereoscopic rendering by moving each eye point parallel to the monoscopic viewing plane is quite trivial, and it also has the advantage of keeping the rigid camera transformation disassociated from the projection transformation, a computational formulation graphics programmers are accustomed to. To overcome the problems stated above, however, an eye-pair configuration is required that does not depend on the projection plane direction and remains fixed for all display tiles.
Given a primary looking direction, for example the expected front direction (direction toward the main point of interest) in a dome virtual reality theater, the center of projection for the left and right eyes can be defined on a line perpendicular to that direction in the world coordinate system (WCS) for all frusta. This ensures that the disparity vector is the same for all projections and therefore there is no mismatch between projected viewports. Furthermore, the traditional eye-space stereo separation technique could produce inappropriate frusta in the case where the display tile was tilted. The resulting disparity vector when transformed to the world coordinate system (the spectators’ center of projection coordinate system) would not be parallel to the horizon. Using a unidirectional configuration relative to the WCS, we avoid this phenomenon. Another useful property is that as the eye-pair vector remains fixed, its projection on the display tiles varies according to the relative angle between the respective viewing direction and the primary one (front). This provides a natural transition from the maximum apparent inter-ocular distance to a zero stereo effect.
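A minimal sketch of this unidirectional setup, assuming illustrative names and a right-handed coordinate system: the disparity axis is taken perpendicular to the primary (front) direction in the WCS and is shared by all tiles, so each tile's frustum is built from the same two eye positions.

```python
import math

def unidirectional_eyes(cop, front, up, iod):
    """Left/right centers of projection with a fixed, world-space disparity
    vector perpendicular to the primary (front) viewing direction.
    cop: monoscopic center of projection; front, up: WCS direction vectors."""
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    right = cross(front, up)                       # shared disparity axis
    s = math.sqrt(sum(x * x for x in right))
    right = tuple(x / s for x in right)
    half = iod / 2.0
    left_eye = tuple(c - half * r for c, r in zip(cop, right))
    right_eye = tuple(c + half * r for c, r in zip(cop, right))
    return left_eye, right_eye
```

The projection of this fixed eye-pair vector onto a tile shrinks with the cosine of the angle between the tile's viewing direction and the front direction, which is exactly the natural transition from maximum apparent inter-ocular distance to zero stereo effect described above.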
- D. Christopoulos, A. Gaitatzes, G. Papaioannou, G. Zyba, "Designing a Real-time Playback System for a Dome Theater", Proc. 7th Eurographics International Symposium on Virtual Reality, Archaeology and Cultural Heritage (VAST), pp. 35-40, 2006.