Acquisition of omnidirectional stereoscopic images and videos of dynamic scenes: a review
Different camera configurations to capture panoramic images and videos are commercially available today. However, capturing omnistereoscopic snapshots and videos of dynamic scenes is still an open problem. Several methods to produce stereoscopic panoramas have been proposed in the last decade, some of which were conceived in the realm of robot navigation and three-dimensional (3-D) structure acquisition. Even though some of these methods can estimate omnidirectional depth in real time, they were not conceived to render panoramic images for binocular human viewing. Alternatively, sequential acquisition methods, such as rotating image sensors, can produce remarkable stereoscopic panoramas, but they are unable to capture real-time events. Hence, there is a need for a panoramic camera to enable the consistent and correct stereoscopic rendering of the scene in every direction. Potential uses for a stereo panoramic camera with such characteristics are free-viewpoint 3-D TV and image-based stereoscopic telepresence, among others. A comparative study of the different cameras and methods to create stereoscopic panoramas of a scene, highlighting those that can be used for the real-time acquisition of imagery and video, is presented.
Stereoscopic cameras for the real-time acquisition of panoramic 3D images and videos
There are different panoramic techniques that produce outstanding stereoscopic panoramas of static scenes. However, a camera configuration capable of capturing omnidirectional stereoscopic snapshots and videos of dynamic scenes is still a subject of research. In this paper, two multiple-camera configurations capable of producing high-quality stereoscopic panoramas in real time are presented. Unlike existing methods, the proposed multiple-camera systems acquire all the information necessary to render stereoscopic panoramas at once. The first configuration exploits the micro-stereopsis arising from a narrow baseline to produce omnistereoscopic images. The second panoramic camera uses an extended baseline to produce polycentric panoramas and to extract additional depth information, e.g., disparity and occlusion maps, which are used to synthesize stereoscopic views in arbitrary viewing directions. The results of emulating both cameras and the pros and cons of each set-up are presented in this paper.
Depth consistency and vertical disparities in stereoscopic panoramas
In recent years, the problem of acquiring omnidirectional stereoscopic imagery of dynamic scenes has gained commercial interest, and consequently, new techniques have been proposed to address it. The goal of many of these new panoramic methods is to provide practical solutions for acquiring real-time omnidirectional stereoscopic imagery for human viewing. However, there are problems related to mosaicking partially overlapped stereoscopic snapshots of the scene that need to be addressed. Among these issues are the conditions for providing a consistent depth illusion over the whole scene and the appearance of undesired vertical disparities. We develop an acquisition model capable of describing a variety of omnistereoscopic imaging systems and suitable for studying the design constraints of these systems. Using this acquisition model, we compare acquisition approaches that mosaic partial stereoscopic views of the scene in terms of their depth-continuity constraints and the appearance of vertical disparities. This work complements and extends our previous work on omnistereoscopic imaging systems by proposing a mathematical framework to contrast different acquisition strategies for creating stereoscopic panoramas from a small number of stereoscopic images.
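As a minimal illustration of why vertical disparities matter, drawn from standard pinhole stereo geometry rather than from the paper's full acquisition model: for a point $(X, Y, Z)$ seen by two row-aligned pinhole cameras with focal length $f$ separated along the horizontal axis, the vertical image coordinates agree,

$$v_L = v_R = \frac{fY}{Z}, \qquad \delta_v = v_L - v_R = 0.$$

If, however, one camera is elevated by a small amount $\varepsilon_y$, then

$$v_R = \frac{f(Y - \varepsilon_y)}{Z}, \qquad \delta_v = \frac{f\,\varepsilon_y}{Z},$$

so the resulting vertical disparity depends on the depth $Z$ of each scene point and therefore cannot be removed by uniformly shifting one image.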
Improvements in the Visualization of Stereoscopic 3D Imagery
A pleasant visualization of stereoscopic imagery must take into account factors that may produce eye strain and fatigue. Fortunately, our binocular vision system has built-in mechanisms that let us perceive depth for extended periods of time without eye fatigue; however, stereoscopic imagery may still induce visual discomfort in certain viewing scenarios. An important source of eye fatigue is the conflict between vergence eye movements and the focusing (accommodation) mechanism. Today's eye-tracking technology makes it possible to know the viewer's gaze direction; hence, 3D imagery can be corrected dynamically based on this information. In this paper, I introduce a method to improve the visualization of stereoscopic imagery on planar displays by emulating the vergence and accommodation mechanisms of binocular human vision. Unlike other methods to improve visual comfort, which introduce depth distortions in the stereoscopic visual media, this technique aims to produce a gentler and more natural binocular viewing experience without distorting the original depth of the scene.
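One plausible way to emulate accommodation on a planar display, sketched below, is gaze-contingent blur: pixels whose disparity differs from the disparity at the viewer's fixation point receive more blur. This is a hedged illustration of the general idea, not the paper's algorithm; `blur_gain` and `max_radius` are hypothetical tuning parameters.

```python
# Illustrative sketch (not the paper's method): gaze-contingent blur.
# The fixated depth stays sharp; other depths are blurred in proportion
# to their disparity difference, emulating natural accommodation blur.

def blur_radius(pixel_disparity, gaze_disparity, blur_gain=0.5, max_radius=8.0):
    """Blur radius (in pixels) proportional to the disparity difference
    between a pixel and the current fixation point, clamped."""
    return min(blur_gain * abs(pixel_disparity - gaze_disparity), max_radius)

def blur_map(disparity_map, gaze_rc, blur_gain=0.5):
    """Per-pixel blur radii for a 2-D disparity map (list of lists),
    given the gaze position as a (row, col) pair."""
    r, c = gaze_rc
    d_gaze = disparity_map[r][c]
    return [[blur_radius(d, d_gaze, blur_gain) for d in row]
            for row in disparity_map]
```

The fixated region keeps a zero blur radius while regions at other depths are progressively softened, which is one way to ease the vergence-accommodation conflict without altering the scene's disparities.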
A Model for the Omnidirectional Acquisition and Rendering of Stereoscopic Images for Human Viewing
Interactive visual media enable the visualization and navigation of remote-world locations in all gaze directions. A large segment of such media is created using pictures from the remote sites, thanks to advances in panoramic cameras. A desirable enhancement is to facilitate the stereoscopic visualization of remote scenes in all gaze directions. In this context, a model for the signal to be acquired by an omnistereoscopic sensor is needed in order to design better acquisition strategies. This omnistereoscopic viewing model must take into account the geometric constraints imposed by our binocular vision system, since we want to produce stereoscopic imagery capable of inducing stereopsis consistently in any gaze direction; in this paper, we present such a model. In addition, we discuss different approaches to sampling or approximating this function, and we propose a general acquisition model for sampling the omnistereoscopic light signal. Based on this model, we propose that acquiring and mosaicking sparse sets of partially overlapped stereoscopic snapshots can evoke a satisfactory illusion of depth. Finally, we show an example of the rendering pipeline used to create the omnistereoscopic imagery.
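The abstract does not spell out the viewing model; a common formulation in the omnistereo literature, given here as a hedged sketch rather than the paper's exact model, places the two eyes on a "viewing circle" whose diameter equals the interocular distance $b$. For a head rotation angle $\theta$ about a centre $c$, with gaze direction $d(\theta) = (\cos\theta, \sin\theta)$, the eye positions are

$$x_L(\theta) = c + \tfrac{b}{2}(-\sin\theta, \cos\theta), \qquad x_R(\theta) = c - \tfrac{b}{2}(-\sin\theta, \cos\theta),$$

so both eyes traverse a circle of radius $b/2$ as the gaze sweeps $360^\circ$. An omnistereoscopic acquisition system must then capture, or approximate, the light rays entering these two moving viewpoints for every $\theta$.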
Efficient panoramic sampling of real-world environments for image-based stereoscopic telepresence
A key problem in telepresence systems is how to effectively emulate the subjective experience of being there delivered by our visual system. A step toward visual realism can be achieved by using high-quality panoramic snapshots instead of computer-based models of the scene. Furthermore, a better immersive illusion can be created by enabling free-viewpoint stereoscopic navigation of the scene, i.e., using omnistereoscopic imaging. However, common implementation constraints of telepresence systems, such as acquisition time, rendering complexity, and storage capacity, make the idea of using stereoscopic panoramas challenging. With these constraints in mind, we developed a technique for the efficient acquisition and rendering of omnistereoscopic images based on sampling the scene with clusters of three panoramic images arranged in a controlled geometric pattern. Our technique can be implemented with any off-the-shelf panoramic camera. Furthermore, it requires neither the acquisition of additional depth information about the scene nor the estimation of camera parameters. The low computational complexity and reduced data overhead of our rendering process make it attractive for large-scale stereoscopic sampling in a variety of scenarios.
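To make the cluster idea concrete, the sketch below assumes a hypothetical arrangement of three panoramas at the vertices of an equilateral triangle (the paper's exact geometric pattern is not specified here). For a given gaze azimuth, it selects the camera pair whose baseline is most nearly perpendicular to the gaze direction, which maximizes the effective horizontal stereo baseline for that view.

```python
import math

# Hedged sketch: stereo-pair selection within a cluster of three
# panoramas.  The triangle layout and the selection criterion are
# illustrative assumptions, not the paper's published procedure.

def camera_positions(radius=1.0):
    """Vertices of an equilateral triangle centred at the origin."""
    angles = (math.pi / 2,
              math.pi / 2 + 2 * math.pi / 3,
              math.pi / 2 + 4 * math.pi / 3)
    return [(radius * math.cos(a), radius * math.sin(a)) for a in angles]

def best_stereo_pair(gaze_azimuth, positions):
    """Return (left, right) camera indices whose baseline is most
    nearly perpendicular to the gaze direction."""
    gx, gy = math.cos(gaze_azimuth), math.sin(gaze_azimuth)
    best, best_score = (0, 1), -1.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            bx = positions[j][0] - positions[i][0]
            by = positions[j][1] - positions[i][1]
            # |cross(baseline, gaze)| / |baseline| = |sin(angle between them)|
            score = abs(bx * gy - by * gx) / math.hypot(bx, by)
            if score > best_score:
                best, best_score = (i, j), score
    i, j = best
    # order the pair so the first camera lies to the viewer's left
    if gx * positions[i][1] - gy * positions[i][0] < gx * positions[j][1] - gy * positions[j][0]:
        i, j = j, i
    return i, j
```

Looking "up" the page (azimuth 90 degrees), the two bottom cameras form the widest usable baseline, so they are chosen as the left and right views.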
Optimum alignment of panoramic images for stereoscopic navigation in image-based telepresence systems
The addition of stereoscopic navigation to an image-based virtual environment is a desirable enhancement. This can be implemented by sampling the scene with a number of stereoscopic panoramas. In this regard, clusters of panoramas in a known spatial arrangement can be used to render omnistereoscopic views. However, slight misalignments between panoramas introduced by single-shot panoramic cameras must be corrected in order to capture the depth of the scene consistently in every direction. To address this, a novel alignment-correction method based on the dense disparity map between panoramas is proposed herein. This technique was successfully tested in the rendering of omnistereoscopic images in different scenarios and is applicable to any off-the-shelf panoramic camera. Unlike other panorama-alignment methods, this is a featureless and uncalibrated solution to the alignment of closely taken panoramic snapshots. Furthermore, the omnistereoscopic rendering and alignment techniques proposed herein are computationally inexpensive alternatives for creating stereoscopic image-based virtual environments.
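The paper's correction relies on a dense disparity map; as a much simpler stand-in illustrating the featureless, uncalibrated spirit of such alignment, the sketch below estimates a global vertical offset between two closely spaced panoramas by correlating their row-mean intensity profiles, with no feature detection and no camera parameters.

```python
# Hedged sketch, not the paper's method: featureless estimation of a
# global vertical misalignment between two overlapping panoramas.

def row_profile(image):
    """Mean intensity of each row of a 2-D image (list of lists)."""
    return [sum(row) / len(row) for row in image]

def vertical_offset(img_a, img_b, max_shift=5):
    """Integer vertical shift (in rows) that best aligns img_b to
    img_a, found by minimizing the mean squared difference between
    the two row-intensity profiles over the overlapping rows."""
    pa, pb = row_profile(img_a), row_profile(img_b)
    n = len(pa)
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        pairs = [(pa[i], pb[i + s]) for i in range(n) if 0 <= i + s < n]
        err = sum((a - b) ** 2 for a, b in pairs) / len(pairs)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift
```

Given two synthetic images where the second is a copy of the first shifted down by two rows, the estimator recovers that two-row offset, which could then be removed before stereoscopic rendering.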