Research - Sampling and Compression
-
Sampling Clear Sky Models using Truncated Gaussian Mixtures
N. Vitsas, K. Vardis, G. Papaioannou, Proc. Eurographics Symposium on Rendering, 2021.
Abstract. Parametric clear sky models are often represented by simple analytic expressions that can efficiently generate plausible, natural radiance maps of the sky, taking into account expensive and hard-to-simulate atmospheric phenomena. In this work, we show how such models can be complemented by an equally simple, elegant and generic analytic continuous probability density function (PDF) that provides a very good approximation to the radiance-based distribution of the sky. We describe a fitting process that is used to properly parameterise a truncated Gaussian mixture model, which allows for exact, constant-time and minimal-memory sampling and evaluation of this PDF, without rejection sampling, an important property for practical applications in offline and real-time rendering. We present experiments in a standard importance sampling framework that showcase variance reduction approaching that of a more expensive inversion sampling method using summed area tables.
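The exact, rejection-free sampling mentioned above can be illustrated in one dimension. Below is a minimal Python sketch of inverse-CDF sampling from a truncated Gaussian mixture; the weights, means, sigmas and bounds are illustrative placeholders, not the paper's fitted sky-model parameters.

    import random
    from statistics import NormalDist

    def sample_truncated_gmm(weights, means, sigmas, lo, hi, rng=random):
        comps = [NormalDist(m, s) for m, s in zip(means, sigmas)]
        # Probability mass of each component inside the truncation bounds.
        masses = [n.cdf(hi) - n.cdf(lo) for n in comps]
        # Pick a component proportionally to weight * truncated mass, so the
        # sample follows the truncated mixture exactly, with no rejection.
        i = rng.choices(range(len(comps)),
                        weights=[w * m for w, m in zip(weights, masses)])[0]
        n = comps[i]
        # Invert the component CDF restricted to [lo, hi]: constant time.
        u = n.cdf(lo) + rng.random() * masses[i]
        return n.inv_cdf(u)

    # Example: a two-component mixture truncated to [0, 1].
    x = sample_truncated_gmm([0.7, 0.3], [0.2, 0.8], [0.1, 0.05], 0.0, 1.0)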
Downloads: author-prepared version of the paper, supplemental material, GitHub source code and data repository
-
A GPU-Based Real-time Video Compression Method for Video Conferencing
S. Katsigiannis, G. Papaioannou, D. Maroulis, Proc. 18th IEEE/EURASIP Int. Conf. on Digital Signal Processing (DSP2013), 2013.
Abstract. Recent years have seen a great increase in the everyday use of real-time video communication over the internet through video conferencing applications. Limitations on computational resources and network bandwidth require video encoding algorithms that provide acceptable quality at low bitrates and can support various resolutions inside the same stream. In this work, we present a scalable video coding algorithm based on the contourlet transform that incorporates both lossy and lossless methods, as well as variable-bitrate encoding schemes, in order to achieve compression. Furthermore, due to the transform utilized, it does not suffer from the blocking artifacts that occur with many widely adopted compression algorithms. The proposed algorithm is designed to achieve real-time performance by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make the method suitable for applications like video conferencing that demand real-time performance. Performance and quality evaluation shows that the algorithm achieves a satisfactory quality and compression ratio.
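As a rough illustration of the transform stage only (not the authors' GPU implementation), the Python sketch below computes one Laplacian-pyramid level, the first of the contourlet transform's two stages; the directional filter bank, the entropy coding and the GPU mapping are omitted, and a box filter stands in for proper pyramid kernels.

    import numpy as np

    def laplacian_level(img):
        # Low-pass and downsample by 2 (assumes even image dimensions).
        h, w = img.shape
        low = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        # Upsample back and take the residual: the band-pass detail image
        # that a directional filter bank would further decompose.
        up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
        return low, img - up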
Downloads: author-prepared paper version
-
Progressive Screen-space Multi-channel Surface Voxelization
A. Gaitatzes, G. Papaioannou, In GPU Pro 4 (Ed.: W. Engel), CRC Press, 2013.
Abstract. To alleviate the problems of screen-space voxelization techniques, while maintaining their benefit of predictable, controllable and bounded execution time relative to full-scene volume generation methods, we introduce the concept of Progressive Voxelization. The volume representation is incrementally updated to include newly discovered voxels and discard invalid voxels, which are not present in any of the current image buffers. Using the already available camera and light source buffers, a combined volume-injection and voxel-to-depth-buffer re-projection scheme continuously updates the volume buffer and discards invalid voxels, progressively constructing the final voxelization. The algorithm is lightweight and operates on complex dynamic environments where geometry, materials and lighting can change arbitrarily. The method provides improved volume coverage (completeness) over single-frame, non-progressive screen-space voxelization, while maintaining its high-performance merits.
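A schematic sketch of the invalidation test is shown below, assuming hypothetical project/inside helpers and an eps depth tolerance; in the actual method this test runs per frame on the GPU against the available camera and light depth buffers.

    def voxel_is_valid(voxel_center, views, eps):
        disproved = False
        for view in views:                          # camera and light buffers
            uv, depth = view.project(voxel_center)  # hypothetical helper
            if not view.inside(uv):                 # hypothetical helper
                continue
            if abs(view.depth_buffer[uv] - depth) < eps:
                return True       # some buffer still sees a surface here
            if depth < view.depth_buffer[uv] - eps:
                disproved = True  # this view sees past the voxel
        return not disproved      # voxels seen by no buffer stay unverified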
-
Texture Compression using Wavelet Decomposition
P. Mavridis, G. Papaioannou, Computer Graphics Forum (Proc. Pacific Graphics 2012), 31(7), 2012.
Also presented at: P. Mavridis, G. Papaioannou, Proc. I3D 2012: ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, p. 218 (poster).
Abstract. In this paper we introduce a new fixed-rate texture compression scheme based on the energy compaction properties of a modified Haar transform. The coefficients of this transform are quantized and stored using standard block compression methods, such as DXTC and BC7, ensuring simple implementation and very fast decoding speeds. Furthermore, coefficients with the highest contribution to the final image are quantized with higher accuracy, improving the overall compression quality. The proposed modifications to the standard Haar transform, along with a number of additional optimizations, improve the coefficient quantization and reduce the compression error. The resulting method offers more flexibility than the currently available texture compression formats, providing a variety of additional low bitrate encoding modes for the compression of grayscale and color textures.
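For intuition, the sketch below performs one standard 2x2 Haar analysis step in Python (the paper's modifications to the transform are not reproduced); the resulting coefficient planes would then be quantized, with the low-pass plane kept at higher accuracy, and packed into DXTC/BC7 blocks.

    import numpy as np

    def haar_2x2(img):
        # The four texels of each 2x2 block (assumes even dimensions).
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        ll = (a + b + c + d) / 4.0  # low-pass average: highest contribution
        lh = (a - b + c - d) / 4.0  # detail across columns
        hl = (a + b - c - d) / 4.0  # detail across rows
        hh = (a - b - c + d) / 4.0  # diagonal detail
        return ll, lh, hl, hh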
Downloads: author-prepared paper version, supplemental material, presentation slides, I3D poster abstract
Reference: BibTeX
-
A Contourlet Transform based algorithm for real-time video encoding
S. Katsigiannis, G. Papaioannou, D. Maroulis, Proc. Real-Time Image and Video Processing Conference, SPIE Photonics Europe, 2012.
(best student paper award)
Abstract. In recent years, real-time video communication over the internet has been widely utilized for applications like video conferencing. Streaming live video over heterogeneous IP networks, including wireless networks, requires video coding algorithms that can support various levels of quality in order to adapt to the network end-to-end bandwidth and transmitter/receiver resources. In this work, a scalable video coding and compression algorithm based on the contourlet transform is proposed. The algorithm allows for multiple levels of detail, without re-encoding the video frames, by simply dropping the encoded information that refers to a higher resolution than needed. Compression is achieved by means of lossy and lossless methods, as well as variable-bitrate encoding schemes. Furthermore, due to the transform utilized, it does not suffer from the blocking artifacts that occur with many widely adopted compression algorithms. Another highly advantageous characteristic of the algorithm is the suppression of noise induced by the low-quality sensors usually encountered in web cameras, owing to the manipulation of the transform coefficients at the compression stage. The proposed algorithm is designed to introduce minimal coding delay, thus achieving real-time performance. Performance is enhanced by utilizing the vast computational capabilities of modern GPUs, providing satisfactory encoding and decoding times at relatively low cost. These characteristics make this method suitable for applications like video conferencing that demand real-time performance along with the highest visual quality possible for each user. The presented performance and quality evaluation shows that the proposed algorithm achieves better or comparable visual quality relative to the other compression and encoding methods tested, while maintaining a satisfactory compression ratio. Especially at low bitrates, it produces images that are friendlier to the human eye than those of block-based coders, like the MPEG family, as it introduces fuzziness and blurring instead of artificial block artifacts.
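The no-re-encoding scalability can be sketched as follows: given a pyramid decomposition such as the laplacian_level() example earlier (a coarse base plus a fine-to-coarse list of detail bands), a lower-resolution stream is obtained by truncating the detail list, and the decoder simply stops reconstruction early. This is purely illustrative and not the paper's decoder.

    import numpy as np

    def reconstruct(base, details, levels_wanted):
        # details[0] is the finest band; apply only the coarsest
        # levels_wanted bands to decode at a reduced resolution.
        img = base
        for detail in details[::-1][:levels_wanted]:
            img = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1) + detail
        return img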
Downloads: author-prepared version of the paper
Reference: BibTeX
-
High Quality Elliptical Texture Filtering
P. Mavridis, G. Papaioannou, In GPU Pro 3 (Ed.: W. Engel), AK Peters/CRC Press, 2012.
Abstract. In this chapter, we present a series of simple and effective methods to perform high quality texture filtering on modern GPUs. We base our methods on the theory behind the elliptical weighted average (EWA) filter. We first present an exact implementation of the EWA filter that smartly uses the underlying bilinear filtering hardware to gain a significant speedup. We then proceed with an approximation of the EWA filter that uses the underlying anisotropic filtering hardware of the GPU to construct a filter that closely matches the shape and the properties of the EWA filter, offering vast improvements in the quality of the texture mapping. To further accelerate the method, we also introduce a sample distribution scheme that spreads samples across space and time, permitting the human eye to perceive a higher image quality while using fewer samples on each frame.
-
Two Simple Single-pass GPU methods for Multi-channel Surface Voxelization of Dynamic Scenes
A. Gaitatzes, P. Mavridis, G. Papaioannou, Proc. Pacific Graphics 2011 (short paper).
Abstract. An increasing number of rendering and geometry processing algorithms rely on volume data to calculate anything from smoke/fluid simulations to visibility information and global illumination effects. We present two novel, real-time, simple-to-implement surface voxelization algorithms and a volume data caching structure, the Volume Buffer, which encapsulates functionality, storage and access similar to a frame buffer object, but for three-dimensional scalar data. The Volume Buffer can rasterize primitives in 3D space and accumulate up to 1024 bits of arbitrary data per voxel, as required by the specific application. The strength of our methods is the simplicity of the implementation, resulting in fast computation times and very easy integration with existing frameworks and rendering engines.
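As a rough CPU-side stand-in for rasterizing a primitive into such a Volume Buffer (illustrative only; the names, the sampling density and the point-sampling strategy are assumptions, not the paper's single-pass GPU method):

    import numpy as np

    def voxelize_triangle(volume, origin, voxel_size, v0, v1, v2, bits):
        # volume: integer 3D array, e.g. np.zeros((N, N, N), np.uint64).
        # Dense barycentric point sampling at sub-voxel spacing approximates
        # the coverage a proper 3D rasterizer would compute.
        n = max(2, int(max(np.linalg.norm(v1 - v0),
                           np.linalg.norm(v2 - v0)) / (0.5 * voxel_size)))
        for i in range(n + 1):
            for j in range(n + 1 - i):
                p = v0 + (i / n) * (v1 - v0) + (j / n) * (v2 - v0)
                idx = tuple(np.floor((p - origin) / voxel_size).astype(int))
                if all(0 <= idx[k] < volume.shape[k] for k in range(3)):
                    volume[idx] |= bits  # accumulate per-voxel channel bits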
Downloads: author-prepared version of the paper
Reference: BibTeX
-
High Quality Elliptical Texture Filtering on GPU
P. Mavridis, G. Papaioannou, Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games 2011 (I3D), San Francisco, CA, USA, pp. 23-30, 2011.
Abstract. The quality of the available hardware texture filtering, even on state-of-the-art graphics hardware, suffers from several aliasing artifacts, in both the spatial and temporal domains. These artifacts are mostly evident in extreme conditions, such as grazing viewing angles, highly warped texture coordinates, or extreme perspective, and become especially annoying when animation is involved. In this paper we introduce a method to perform high quality texture filtering on the GPU, based on the theory behind the Elliptical Weighted Average (EWA) filter. Our method uses the underlying anisotropic filtering hardware of the GPU to construct a filter that closely matches the shape and the properties of the EWA filter, offering vast improvements in the quality of texture mapping while maintaining high performance. Targeting real-time applications, we also introduce a novel sample distribution scheme that spreads samples across space and time, permitting the human eye to perceive a higher image quality while using fewer samples on each frame. These characteristics make our method practical for use in games and other interactive applications. For cases where quality is more important than speed, like GPU renderers and image manipulation programs, we also present an exact implementation of the EWA filter that smartly uses the underlying bilinear filtering hardware to gain a significant speedup.
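For reference, a direct software EWA convolution looks roughly like the sketch below, using Heckbert's implicit-ellipse coefficients derived from the screen-space texture-coordinate derivatives; texel() is a hypothetical texel fetch, and the paper's actual contribution, approximating this filter with hardware anisotropic and bilinear probes, is not shown.

    import math

    def ewa_sample(texel, u, v, dudx, dvdx, dudy, dvdy, alpha=2.0):
        # Implicit ellipse A*du^2 + B*du*dv + C*dv^2 = F in texel space.
        A = dvdx * dvdx + dvdy * dvdy
        B = -2.0 * (dudx * dvdx + dudy * dvdy)
        C = dudx * dudx + dudy * dudy
        F = (dudx * dvdy - dudy * dvdx) ** 2
        det = A * C - 0.25 * B * B
        if det <= 0.0 or F <= 0.0:
            return texel(round(u), round(v))  # degenerate: point sample
        hu = math.sqrt(C * F / det)           # ellipse bounding half-width
        hv = math.sqrt(A * F / det)           # ellipse bounding half-height
        acc = wsum = 0.0
        for tv in range(math.floor(v - hv), math.ceil(v + hv) + 1):
            for tu in range(math.floor(u - hu), math.ceil(u + hu) + 1):
                du, dv = tu - u, tv - v
                q = A * du * du + B * du * dv + C * dv * dv
                if q < F:                         # texel inside the ellipse
                    w = math.exp(-alpha * q / F)  # Gaussian falloff
                    acc += w * texel(tu, tv)
                    wsum += w
        return acc / wsum if wsum > 0.0 else texel(round(u), round(v))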
Downloads: author-prepared version of the paper, demo video, conference presentation
Reference: BibTeX