2016

Bas Dado, Timothy R. Kol, Pablo Bauszat, Jean-Marc Thiery, and Elmar Eisemann
Computer Graphics Forum, 2016
Hyuntae Joo, Soonhyeon Kwon, Sangmin Lee, Elmar Eisemann, and Sungkil Lee
Computer Graphics Forum, 2016
Niels de Hoon, A.C. Jalba, Elmar Eisemann, and Anna Vilanova
In Proceedings of Visual Computing in Biology and Medicine, 2016
Thomas Höllt, Nicola Pezzotti, Vincent van Unen, Frits Koning, Elmar Eisemann, et al.
Computer Graphics Forum, 2016
Leonardo Scandolo, Pablo Bauszat, and Elmar Eisemann
Computer Graphics Forum, 2016
Gerard Simons, Marco Ament, Sebastian Herholz, Carsten Dachsbacher, Martin Eisemann, and Elmar Eisemann
In Proceedings of Vision, Modeling, and Visualization, 2016
Leonardo Scandolo, Pablo Bauszat, and Elmar Eisemann
Computer Graphics Forum, 2016

2015

We simulate the change of image appearance between photopic conditions (left) and scotopic conditions close to the absolute threshold (right), where consistent vision fades into temporally varying noise (not reproducible in print).
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Elmar Eisemann, and Hans-Peter Seidel
Computer Graphics Forum, 2015
Depth Layering: While a single layer results in a single average over all geometry (a), we can slice the scene into multiple depth layers to obtain averages for each layer separately (b).
Quintijn Hendrickx, Leonardo Scandolo, Martin Eisemann, and Elmar Eisemann
In Proceedings of High Performance Graphics, 2015
Prototype Visualization
Michael Stengel, Steve Grogorick, Martin Eisemann, Elmar Eisemann, and Marcus Magnor
In Proceedings of ACM Multimedia, 2015
A complex scene with fine details and global illumination. Left: Image rendered with PBRT [PH10] using 32 samples per pixel in 2.5 minutes. Middle: Image reconstructed by our algorithm in 2.6 minutes, including rendering and filtering. Right: Equal-error image with 200 samples per pixel, rendered in 12.7 minutes.
Pablo Bauszat, Martin Eisemann, Elmar Eisemann, and Marcus Magnor
Computer Graphics Forum, 2015
Petr Kellnhofer, Tobias Ritschel, Karol Myszkowski, Elmar Eisemann, and Hans-Peter Seidel
Computer Graphics Forum, 2015
Left: full exposure; pins can be deployed anywhere on the bone/cartilage. Right: limited exposure; the orthopedic surgeon paints the areas on the bone that are deemed accessible during surgery, thus limiting where pins can be deployed.
Thomas Kroes, Edward R. Valstar, and Elmar Eisemann
International Journal of Computer Assisted Radiology and Surgery, 2015
Manipulation of different entity types within the same scene. A street entity led to the generation of several buildings, represented as shapes. Each building was given a number, which was then used to produce a custom texture placed next to each door. Light sources were then instanced at the location of each street lamp, just above the doors. The generation of each entity type therefore benefits greatly from being processed in sequence, in a single pipeline, rather than in separate environments.
Pedro Silva, Elmar Eisemann, Rafael Bidarra, and Antonio Coelho
International Journal of Computer Games Technology, 2015
Left: single scattering using the original scene geometry. The leaves of the tree block most of the light, causing only a subtle scattering effect. Right: scattering created by occluder manipulation. Using our system, an artist can easily add holes to the shadow map of the tree, producing stronger and more interesting scattering effects. While physically incorrect, the fake occlusion information used in the right image is not noticeable to the viewer. Insets show the scattering only; surface shadows are created from the unmodified shadow map.
Oliver Klehm, Timothy R. Kol, Hans-Peter Seidel, and Elmar Eisemann
In Proceedings of Graphics Interface, 2015
Exposure comparison
Michael Stengel, Pablo Bauszat, Martin Eisemann, Elmar Eisemann, and Marcus Magnor
IEEE Transactions on Visualization and Computer Graphics, 2015
We compute the product of approximated visibility and environment map lighting in a stochastic Monte Carlo volume renderer to steer a joint importance sampling of the direct lighting. Our proposed two-step approach is well suited to dynamic changes in the visibility and lighting functions, thanks to a fast sweeping-plane algorithm that estimates visibility. The insets show how our technique (blue) achieves faster convergence with fewer samples than uniform sampling (red) and importance sampling of the environment map (yellow). Here, 64 samples per pixel have been used. The Manix data set consists of 512×512×460 voxels.
Thomas Kroes, Martin Eisemann, and Elmar Eisemann
In Proceedings of Graphics Interface, 2015
Comparison with Mean Shift on 2D data
Daniel van der Ende, Jean-Marc Thiery, and Elmar Eisemann
In Proceedings of DATA ANALYTICS 2015, the Fourth International Conference on Data Analytics, 2015