Research - Interactive Rendering Algorithms
-
A Neural Builder for Spatial Subdivision Hierarchies,
I. Evangelou, G. Papaioannou, K. Vardis, A. Gkaravelis, The Visual Computer, 39, pp. 3797–3809, 2023
Abstract. Spatial data structures, such as k-d trees and bounding volume hierarchies, are extensively used in computer graphics for the acceleration of spatial queries in ray tracing, nearest neighbour searches and other tasks. Typically, the splitting strategy employed during the construction of such structures is based on the greedy evaluation of a predefined objective function, resulting in a less than optimal subdivision scheme. In this work, for the first time, we propose the use of unsupervised deep learning to infer the structure of a fixed-depth k-d tree from a constant, subsampled set of the input primitives, based on the recursive evaluation of the cost function at hand. This results in a high-quality upper spatial hierarchy, inferred in constant time and without paying the intractable price of a fully recursive tree optimisation. The resulting fixed-depth tree can then be further expanded, in parallel, into either a full k-d tree or transformed into a bounding volume hierarchy, with any known conventional tree builder. The approach is generic enough to accommodate different cost functions, such as the popular surface area and volume heuristics. We experimentally validate that the resulting hierarchies have competitive traversal performance with respect to established tree builders, while maintaining minimal overhead in construction times.
Downloads: Official paper version
Github page: https://github.com/cgaueb/nss
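As context for the greedy builders the paper improves on, the surface area heuristic (SAH) evaluation of a single candidate split can be sketched as follows; the cost constants and the centroid-based primitive classification are illustrative choices, not taken from the paper:

```python
# Illustrative sketch of the greedy SAH evaluation a conventional k-d tree
# builder performs at each node. Constants c_trav / c_isect are made up.

def surface_area(bmin, bmax):
    """Surface area of an axis-aligned box given its min/max corners."""
    dx, dy, dz = (bmax[i] - bmin[i] for i in range(3))
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def sah_cost(bmin, bmax, axis, split, centroids, c_trav=1.0, c_isect=2.0):
    """Expected cost of splitting the node at 'split' along 'axis'."""
    n_left = sum(1 for c in centroids if c[axis] <= split)
    n_right = len(centroids) - n_left
    # Child boxes obtained by clipping the parent box at the split plane.
    lmax = list(bmax); lmax[axis] = split
    rmin = list(bmin); rmin[axis] = split
    sa = surface_area(bmin, bmax)
    p_left = surface_area(bmin, lmax) / sa
    p_right = surface_area(rmin, bmax) / sa
    return c_trav + c_isect * (p_left * n_left + p_right * n_right)
```

A conventional builder scans candidate planes per node and greedily keeps the cheapest one; the paper's network instead infers the splits of the upper hierarchy in a single inference pass.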
-
Parallel Transformation of Bounding Volume Hierarchies into Oriented Bounding Box Trees
N. Vitsas, I. Evangelou, G. Papaioannou, A. Gkaravelis, Computer Graphics Forum (Proc. Eurographics), 42(2), pp. 245-254, 2023
Abstract. Oriented bounding box (OBB) hierarchies can be used instead of hierarchies based on axis-aligned bounding boxes (AABB), providing tighter fitting to the underlying geometric structures and resulting in improved interference tests, such as ray-geometry intersections. In this paper, we present a method for the fast, parallel transformation of an existing bounding volume hierarchy (BVH), based on AABBs, into a hierarchy based on oriented bounding boxes. To this end, we parallelise a high-quality OBB extraction algorithm from the literature to operate as a standalone OBB estimator and further extend it to efficiently build an OBB hierarchy in a bottom up manner. This agglomerative approach allows for fast parallel execution and the formation of arbitrary, high-quality OBBs in bounding volume hierarchies. The method is fully implemented on the GPU and extensively evaluated with ray intersections.
Downloads: Official paper version
Github page: https://github.com/cgaueb/obvh
Short promotional video: https://youtu.be/zCg31NUkZGs
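The tighter fit that motivates the paper can be illustrated with a toy experiment: fix a candidate orthonormal frame, project the points onto its axes, and compare the resulting box against the axis-aligned one. The paper parallelises a DiTO-style algorithm that derives candidate frames from extremal points; the hard-coded 45-degree frame below is purely illustrative:

```python
# Toy comparison of an axis-aligned fit versus an oriented fit.
import math

def project(p, axis):
    """Scalar projection of point p onto a unit axis."""
    return sum(pi * ai for pi, ai in zip(p, axis))

def fit_box(points, frame):
    """Extents of the tightest box aligned to an orthonormal frame."""
    extents = []
    for axis in frame:
        d = [project(p, axis) for p in points]
        extents.append(max(d) - min(d))
    return extents

def surface_area(extents):
    a, b, c = extents
    return 2.0 * (a * b + b * c + c * a)

# Points along a diagonal segment: the axis-aligned box is loose, while a
# frame rotated 45 degrees about the z axis fits tightly.
points = [(t, t, 0.0) for t in (0.0, 0.25, 0.5, 0.75, 1.0)]
aabb_frame = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
s = 1.0 / math.sqrt(2.0)
obb_frame = [(s, s, 0.0), (-s, s, 0.0), (0.0, 0.0, 1.0)]
```

The oriented frame yields a much smaller box surface area for such elongated, off-axis geometry, which is exactly what improves interference tests in an OBB hierarchy.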
-
Remote Teaching Advanced Rendering Topics Using the Rayground Platform
A. A. Vasilakis, G. Papaioannou, N. Vitsas, A. Gkaravelis, IEEE Computer Graphics and Applications, 41(5), pp. 99-103, 2021.
Abstract. Rayground is a novel online framework for fast prototyping and interactive demonstration of ray tracing algorithms. It aims to set the ground for the online development of ray-traced visualization algorithms in an accessible manner for everyone, stripping off the mechanics that get in the way of creativity and the understanding of the core concepts. Due to the COVID-19 pandemic, remote teaching and online coursework have taken center stage. In this work, we demonstrate how Rayground can incorporate advanced instructive rendering media during online lectures as well as offer attractive student assignments in an engaging, hands-on manner. We cover things to consider when building or porting methods to this new development platform, best practices in remote teaching and learning activities, and time-tested assessment and grading strategies suitable for fully online university courses.
Downloads: author-prepared version of the paper
Github page: https://cgaueb.github.io/publications/remote_teaching_rg/
-
WebRays: Ray Tracing on the Web
N. Vitsas, A. Gkaravelis, A. A. Vasilakis, G. Papaioannou, Ray Tracing Gems II, A. Marrs (Ed.), P. Shirley (Ed.), I. Wald (Ed.), ISBN 978-1-4842-4427-2, pp. 281-299, 2021.
Abstract. This chapter introduces WebRays, a GPU-accelerated ray intersection engine for the World Wide Web. It aims to offer a flexible and easy-to-use programming interface for robust and high-performance ray intersection tests on modern browsers. We cover design considerations, best practices, and usage examples for several ray tracing tasks.
Downloads: the open-access book chapter, the open-access book.
Github project page: https://cgaueb.github.io/publications/webrays/
-
Fast Radius Search Exploiting Ray Tracing Frameworks
I. Evangelou, G. Papaioannou, K. Vardis, A. A. Vasilakis, Journal of Computer Graphics Techniques (JCGT), vol. 10, no. 1, 25-48, 2021.
Abstract. Spatial queries to infer information from the neighborhood of a set of points are very frequently performed in rendering and geometry processing algorithms. Traditionally, these are accomplished using radius and k-nearest neighbors search operations, which utilize kd-trees and other specialized spatial data structures that fall short of delivering high performance. Recently, advances in ray tracing performance, with respect to both acceleration data structure construction and ray traversal times, have resulted in a wide adoption of the ray tracing paradigm for graphics-related tasks that spread beyond typical image synthesis. In this work, we propose an alternative formulation of the radius search operation that maps the problem to the ray tracing paradigm, in order to take advantage of the available GPU-accelerated solutions for it. We demonstrate the performance gain relative to traditional spatial search methods, especially on dynamically updated sample sets, using two representative applications: geometry processing point-wise operations on scanned point clouds and global illumination via progressive photon mapping.
Online paper: http://jcgt.org/published/0010/01/02/
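The key mapping in the paper is that a radius query can be phrased as a ray intersection problem: each sample is wrapped in a sphere of radius r, and the query point is cast as a degenerate, epsilon-length ray, so every sphere the ray starts inside corresponds to a neighbour within r. A CPU sketch of this mapping, with the GPU BVH traversal replaced by a plain loop for illustration, might look like:

```python
# CPU sketch of radius search phrased as ray tracing: on the GPU the sphere
# set would be placed in a hardware-traversed BVH and the query issued as a
# tiny any-hit ray; here the 'traversal' is a simple loop.

def radius_search(samples, q, r):
    """Indices of all samples within distance r of query point q."""
    r2 = r * r
    hits = []
    for i, s in enumerate(samples):
        # 'Intersection' of the degenerate ray with sphere (s, r): the ray
        # origin q lies inside the sphere iff |q - s|^2 <= r^2.
        d2 = sum((qi - si) ** 2 for qi, si in zip(q, s))
        if d2 <= r2:
            hits.append(i)
    return hits
```

Because the sphere BVH is rebuilt by the fast ray tracing framework, dynamically updated sample sets become cheap, which is the case the paper highlights.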
-
Rasterisation-based progressive photon mapping
I. Evangelou, G. Papaioannou, K. Vardis, A. A. Vasilakis, The Visual Computer, 2020.
Abstract. Ray tracing on the GPU has been synergistically operating alongside rasterisation in interactive rendering engines for some time now, in order to accurately capture certain illumination effects. In the same spirit, in this paper, we propose an implementation of Progressive Photon Mapping entirely on the rasterisation pipeline, which is agnostic to the specific GPU architecture, in order to synthesise images at interactive rates. While any GPU ray tracing architecture can be used for photon mapping, performing ray traversal in image space minimises acceleration data structure construction time and supports arbitrarily complex and fully dynamic geometry. Furthermore, this strategy maximises data structure reuse by encompassing rasterisation, ray tracing and photon gathering tasks in a single data structure. Both eye and light paths of arbitrary depth are traced on multi-view deep G-buffers and photon flux is gathered by a properly adapted multi-view photon splatting. In contrast to previous methods exploiting rasterisation to some extent, due to our novel indirect photon splatting approach, any event combination present in photon mapping is captured. We evaluate our method using typical test scenes and scenarios for photon mapping methods and show how our approach outperforms typical GPU-based progressive photon mapping.
Downloads: author-prepared version of the paper. The final publication is available at link.springer.com.
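The "progressive" part of the method follows the standard progressive photon mapping statistics of Hachisuka et al.; the sketch below shows the usual per-measurement-point radius and photon-count update (this is the generic rule, not anything specific to the paper's rasterisation pipeline):

```python
# Standard progressive photon mapping update: after each photon pass, the
# squared gather radius shrinks and the accumulated photon count grows, with
# alpha in (0, 1) controlling how aggressively the radius contracts.

def ppm_update(r2, n, m, alpha=0.7):
    """One progressive update.

    r2: current squared gather radius
    n:  photons accumulated so far at this measurement point
    m:  new photons found inside the current radius this pass
    """
    if m == 0:
        return r2, n                      # nothing gathered, nothing changes
    ratio = (n + alpha * m) / (n + m)     # fraction of photons kept
    return r2 * ratio, n + alpha * m
```

Iterating this update drives the radius (and thus the bias) to zero while the estimate converges, regardless of whether photons are traced by rays or, as in the paper, in image space.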
-
Rayground: An Online Educational Tool for Ray Tracing
N. Vitsas, A. Gkaravelis, A. A. Vasilakis, K. Vardis, G. Papaioannou, Proc. Eurographics (education papers), 2020.
Abstract. In this paper, we present Rayground; an online, interactive education tool for richer in-class teaching and gradual self-study, which provides a convenient introduction into practical ray tracing through a standard shader-based programming interface. Setting up a basic ray tracing framework via modern graphics APIs, such as DirectX 12 and Vulkan, results in complex and verbose code that can be intimidating even for very competent students. On the other hand, Rayground aims to demystify ray tracing fundamentals, by providing a well-defined WebGL-based programmable graphics pipeline of configurable distinct ray tracing stages coupled with a simple scene description format. An extensive discussion is further offered describing how both undergraduate and postgraduate computer graphics theoretical lectures and laboratory sessions can be enhanced by our work, to achieve a broad understanding of the underlying concepts. Rayground is open, cross-platform, and available to everyone.
Downloads: the paper
Link: Rayground web site
Media: teaser (Eurographics 2020 fast forward video), Eurographics 2020 presentation
-
DIRT: Deferred Image-based Ray Tracing
K. Vardis, A. Vasilakis, G. Papaioannou, Proc. High Performance Graphics, 2016.
Abstract. We introduce a novel approach to image-space ray tracing ideally suited for the photorealistic synthesis of fully dynamic environments at interactive frame rates. Our method, designed entirely on the rasterization pipeline, alters the acceleration data structure construction from a per-fragment to a per-primitive basis in order to simultaneously support three important, generally conflicting in prior art, objectives: fast construction times, analytic intersection tests and reduced memory requirements. In every frame, our algorithm operates in two stages: A compact representation of the scene geometry is built based on primitive linked-lists, followed by a traversal step that decouples the ray-primitive intersection tests from the illumination calculations; a process inspired by deferred rendering and the path integral formulation of light transport. Efficient empty space skipping is achieved by exploiting several culling optimizations both in xy- and z-space, such as pixel frustum clipping, depth subdivision and lossless buffer down-scaling. An extensive experimental study is finally offered showing that our method advances the area of image-based ray tracing under the constraints posed by arbitrarily complex and animated scenarios.
Downloads: author-prepared version of the paper, demo with shader source code
-
A Multiview and Multilayer Approach for Interactive Ray Tracing
K. Vardis, A. Vasilakis, G. Papaioannou, Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (i3D 2016), pp. 171-178, 2016.
Abstract. We introduce a generic method for interactive ray tracing, able to support complex and dynamic environments, without the need for precomputations or the maintenance of additional spatial data structures. Our method, which relies entirely on the rasterization pipeline, stores fragment information for the entire scene on a multiview and multilayer structure and marches through depth layers to capture both near and distant information for illumination computations. Ray tracing is efficiently achieved by concurrently traversing a novel cube-mapped A-buffer variant in image space that exploits GPU-accelerated double linked lists, decoupled storage, uniform depth subdivision and empty space skipping on a per-fragment basis. We illustrate the effectiveness and quality of our approach on path tracing and ambient occlusion implementations in scenarios, where full scene coverage is of major importance. Finally, we report on the performance and memory usage of our pipeline and compare it against GPGPU ray tracing approaches.
Downloads: author-prepared version of the paper, video, YouTube video, demo with shader source code, presentation
-
Real-time Radiance Caching using Chrominance Compression
Kostas Vardis, Georgios Papaioannou, and Anastasios Gkaravelis, Journal of Computer Graphics Techniques (JCGT), 3(4), pp. 111-131, 2014
Abstract. This paper introduces the idea of expressing the radiance field in luminance/chrominance values and encoding the directional chrominance in lower detail. We exploit this alternative radiance representation in a low-cost real-time volume-based radiance caching method. Reducing the spherical harmonics coefficients for the chrominance components allows the finer representation of luminance transitions, stored in higher order spherical harmonics and the support for arbitrary light bounces and view-independent indirect occlusion. We combine the radiance field chrominance compression with an optimized cache population scheme, where cache points are generated only at locations, which are guaranteed to contribute to the reconstructed surface irradiance. These computation and storage savings allow the use of third-order spherical harmonics representation to sufficiently capture and reconstruct the directionality of diffuse irradiance, while maintaining fast and customizable performance. Our method performs well in highly complex and dynamic environments and is mainly aimed at real-time applications, although our general qualitative evaluation indicates benefits for offline rendering as well.
Online paper: http://jcgt.org/published/0003/04/06/
Downloads: video, demo and shader source code
Reference: BibTex
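The storage saving behind the luminance/chrominance split can be made concrete with a back-of-the-envelope sketch: one luminance channel kept at third-order SH plus two chrominance channels at a reduced order needs far fewer coefficients than three full-order RGB channels. The YCoCg transform and the illustrative chrominance order below are common choices, not necessarily the paper's exact ones:

```python
# Back-of-the-envelope coefficient counting for luma/chroma radiance caching.

def rgb_to_ycocg(r, g, b):
    """A common luminance/chrominance transform (illustrative choice)."""
    y  = 0.25 * r + 0.5 * g + 0.25 * b
    co = 0.5 * r - 0.5 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return y, co, cg

def sh_coeff_count(order):
    # An SH expansion up to band (order - 1) has order^2 coefficients.
    return order * order

def cache_floats(luma_order=3, chroma_order=1):
    """Floats per cache point: 1 high-order luma channel + 2 low-order
    chroma channels, versus 3 full-order RGB channels."""
    compressed = sh_coeff_count(luma_order) + 2 * sh_coeff_count(chroma_order)
    full_rgb = 3 * sh_coeff_count(luma_order)
    return compressed, full_rgb
```

Under these illustrative orders, each cache point stores 11 floats instead of 27, freeing budget for the finer third-order luminance representation the paper emphasises.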
-
Multi-view Ambient Occlusion with Importance Sampling
K. Vardis, G. Papaioannou, A. Gaitatzes, Proc. ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (i3D 2013), pp. 111-118.
Abstract. Screen-space ambient occlusion and obscurance (AO) techniques have become de-facto methods for ambient light attenuation and contact shadows in real-time rendering. Although extensive research has been conducted to improve the quality and performance of AO techniques, view-dependent artifacts remain a major issue. This paper introduces Multi-view Ambient Occlusion, a generic per-fragment view weighting scheme for evaluating screen-space occlusion or obscurance using multiple, arbitrary views, such as the readily available shadow maps. Additionally, it exploits the resulting weights to perform adaptive sampling, based on the importance of each view to reduce the total number of samples, while maintaining the image quality. Multi-view Ambient Occlusion improves and stabilizes the screen-space AO estimation without overestimating the results and can be combined with a variety of existing screen-space AO techniques. We demonstrate the results of our sampling method with both open volume- and solid angle-based AO algorithms.
Downloads: paper (author-prepared version), video, supplemental material, shader source code
Reference: BibTex
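One plausible reading of the per-fragment view weighting is sketched below: views that face the fragment more frontally receive higher weights, and the AO sample budget is distributed in proportion to them. The cosine-based weight is a stand-in for illustration, not the paper's exact formula:

```python
# Illustrative per-fragment view weighting and importance-based sample
# allocation; the specific weight function is an assumption.

def view_weights(normal, view_dirs):
    """Clamped-cosine weight per view, normalised to sum to 1.

    view_dirs point from each camera toward the fragment, so a view that
    looks straight at the surface has direction opposite to the normal.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    w = [max(0.0, -dot(normal, d)) for d in view_dirs]
    total = sum(w)
    return [wi / total for wi in w] if total > 0 else w

def allocate_samples(weights, budget):
    """Give each view a share of the sample budget proportional to its weight."""
    return [round(wi * budget) for wi in weights]
```

Grazing or back-facing views thus contribute few or no samples, which matches the paper's goal of reducing the total sample count without degrading the AO estimate.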
-
Real-Time Diffuse Global Illumination Using Radiance Hints
G. Papaioannou, Proc. High Performance Graphics 2011, pp. 15-24, 2011.
Abstract. GPU-based interactive global illumination techniques are receiving an increasing interest from both the research and the industrial community as real-time graphics applications strive for visually rich and realistic dynamic three-dimensional environments. This paper presents a fast new diffuse global illumination method that generates a sparse set of low-cost radiance field evaluation points (radiance hints) and computes an arbitrary number of diffuse inter-reflections within a given volume. The proposed approximate technique combines ideas from existing grid-based radiance caching techniques with reflective shadow maps as well as a stochastic scheme for visibility calculations, in order to achieve high frame rates for multiple light bounces.
Downloads: paper (author-prepared version), demo, video 1, video 2, shader code, presentation
Reference: BibTex
-
Global Illumination Using Imperfect Volumes
P. Mavridis, G. Papaioannou, Proc. GRAPP 2011 (Int. Conf. on Computer Graphics Theory and Applications), pp. 160-165.
Abstract. This paper introduces the concept of imperfect volumes, a fast one-pass point-based voxelization algorithm, and presents its applications to the global illumination problem. As often noted, diffuse indirect illumination has the characteristics of a low frequency function, consisting of smooth gradations. We exploit this by performing the indirect lighting computations on a rough approximation of the scene, the imperfect volume. The scene is converted on the fly to a dense point cloud, and each point is directly rendered to a volume texture, marking the corresponding voxel as occupied. A framebuffer reprojection scheme ensures that voxels visible to the main camera will get more points. Ray-marching is then used to compute the ambient occlusion or the indirect illumination of each voxel, and the results are stored using spherical harmonics. We demonstrate that the errors introduced by the imperfections in the volume are small and that our method maintains a high frame rate on scenes with high geometric complexity.
Downloads: the paper, performance comparison, extreme case comparison
-
Real-Time Volume-Based Ambient Occlusion
G. Papaioannou, M. L. Menexi, C. Papadopoulos, IEEE Transactions on Visualization and Computer Graphics, 16(5), pp. 752-762, September/October 2010.
Abstract. Real-time rendering can benefit from global illumination methods to make the 3D environments look more convincing and lifelike. On the other hand, the conventional global illumination algorithms for the estimation of the diffuse surface interreflection make heavy usage of intra- and interobject visibility calculations, so they are time-consuming, and using them in real-time graphics applications can be prohibitive for complex scenes. Modern illumination approximations, such as ambient occlusion variants, use precalculated or frame-dependent data to reduce the problem to a local shading one. This paper presents a fast real-time method for visibility sampling using volumetric data in order to produce accurate inter- and intraobject ambient occlusion. The proposed volume sampling technique disassociates surface representation data from the visibility calculations, and therefore, makes the method suitable for both primitive-order or screen-order rendering, such as deferred rendering. The sampling mechanism can be used in any application that performs visibility queries or ray marching.
Downloads: author-prepared version of the paper, video 1, video 2
Reference: BibTex
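The volume sampling idea can be sketched on the CPU as ray marching through a boolean occupancy grid: for each direction, step through voxels and record whether solid space is hit, then average over the sampled directions. Grid layout, step size and ray budget here are illustrative:

```python
# Minimal CPU sketch of volume-based ambient occlusion via ray marching in a
# voxel occupancy grid; unit-sized voxels and a fixed step are assumed.

def occluded(grid, origin, direction, max_steps=16, step=0.5):
    """March from origin along direction; True if a solid voxel is hit."""
    x, y, z = origin
    for _ in range(max_steps):
        x += direction[0] * step
        y += direction[1] * step
        z += direction[2] * step
        i, j, k = int(x), int(y), int(z)
        if not (0 <= i < len(grid) and 0 <= j < len(grid[0])
                and 0 <= k < len(grid[0][0])):
            return False          # left the volume: unoccluded
        if grid[i][j][k]:
            return True           # hit a solid voxel
    return False

def ambient_occlusion(grid, p, directions):
    """Fraction of sampled directions that escape the volume unblocked."""
    hits = sum(occluded(grid, p, d) for d in directions)
    return 1.0 - hits / len(directions)   # 1 = fully open, 0 = fully blocked
```

Because only the occupancy volume is queried, the surface representation is decoupled from visibility, which is what lets the technique plug into either primitive-order or deferred rendering.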
-
Volume-based Diffuse Global Illumination
P. Mavridis, A. Gaitatzes, G. Papaioannou, Proc. CGVCVIP ’10: Proceedings of Computer Graphics, Visualization, Computer Vision and Image Processing 2010.
Extended version: A. Gaitatzes, P. Mavridis, G. Papaioannou, Interactive Volume-based Indirect Illumination of Dynamic Scenes, Proc. 3IA ’10: Proceedings of the 2010 International Conference on Computer Graphics and Artificial Intelligence, May 2010 (Studies in Computational Intelligence, Vol. 321, D. Plemenos and G. Miaoulis (Eds.)).
Abstract. In this paper we present a novel real-time algorithm to compute the global illumination of scenes with dynamic geometry and arbitrarily complex dynamic illumination. We use a virtual point light (VPL) illumination model on the volume representation of the scene. Light is propagated in void space using an iterative diffusion approach. Unlike other dynamic VPL-based real-time approaches, our method handles occlusion (shadowing and masking) caused by the interference of geometry and is able to estimate diffuse inter-reflections from multiple light bounces.
Downloads: the CGVCVIP paper, the 3IA paper
-
Fast Approximate Visibility on the GPU Using Precomputed 4D Visibility Fields
A. Gaitatzes, A. Andreadis, G. Papaioannou, Y. Chrysanthou, Proc. WSCG 2010, pp. 131-138, 2010.
Abstract. We present a novel GPU-based method for accelerating the visibility function computation of the lighting equation in dynamic scenes composed of rigid objects. The method pre-computes, for each object in the scene, the visibility and normal information, as seen from the environment, onto the bounding sphere surrounding the object and encodes it into maps. The visibility function is encoded by a four-dimensional visibility field that describes the distance of the object in each direction for all positional samples on a sphere around the object. In addition, the normal vectors of each object are computed and stored in corresponding fields for the same positional samples for use in the computation of reflection in ray-tracing. Thus we are able to speed up the calculation of most algorithms that trace rays to real-time frame rates. The pre-computation time of our method is relatively small. The space requirements amount to 1 byte per ray direction for the computation of ambient occlusion and soft shadows and 4 bytes per ray direction for the computation of reflection in ray-tracing. We present the acceleration results of our method and show its application to two different intersection intensive domains, ambient occlusion computation and stochastic ray tracing on the GPU.
Downloads: the paper
Reference: BibTex
-
Realistic Real-time Underwater Caustics and Godrays
C. Papadopoulos, G. Papaioannou, Proc. GraphiCon '09, pp. 89-95, 2009.
Abstract. Realistic rendering of underwater scenes has been a subject of increasing importance in modern real-time 3D applications, such as open-world 3D games, which constantly present the user with opportunities to submerge oneself in an underwater environment. Crucial to the accurate recreation of these environments are the effects of caustics and godrays. In this paper, we shall present a novel algorithm, for physically inspired real-time simulation of these phenomena, on commodity 3D graphics hardware, which can easily be integrated in a modern 3D engine.
Downloads: the paper, demo video, additional images, conference presentation
Reference: BibTex
-
Presampled Visibility for Ambient Occlusion
A. Gaitatzes, Y. Chrysanthou, G. Papaioannou, Proc. WSCG 2008, Journal of WSCG, 16(1-3), pp. 17-24, 2008.
Abstract. We present a novel method to accelerate the computation of the visibility function of the lighting equation in dynamic scenes composed of rigid, non-penetrating objects. The main idea of the technique is to pre-compute, for each object in the scene, an associated four-dimensional field that describes the visibility in each direction for all positional samples on a sphere around the object; we call this a displacement field. We are able to speed up the calculation of algorithms that trace visibility rays to near real-time frame rates. The storage requirements of the technique amount to between one bit and one byte per ray direction, making it particularly attractive for scenes with multiple instances of the same object, as the same cached data can be reused regardless of the geometric transformation applied to each instance. We suggest an acceleration technique and identify the sampling method that gives the best results based on experimentation.
Downloads: the paper, demo video 1, demo video 2
Reference: BibTex