2019

From an input mesh representing the surface of a vessel, our algorithm detects and segments the aneurysms in the vessel. Afterwards, a report is generated that includes meta-information about the patient as well as summaries of aneurysm characteristics, e.g., their widths and heights.
Kai Lawonn, Monique Meuschke, Ralph Wickenhoefer, Bernhard Preim, and Klaus Hildebrandt
Computer Graphics Forum, 2019
Pixel-perfect hard shadows produced by our method in the Citadel scene from different viewpoints.
Baran Usta, Leonardo Scandolo, Markus Billeter, Ricardo Marroquim, and Elmar Eisemann
In Proceedings of High Performance Graphics (Short Papers), 2019
Our novel interactive approach for shape detection in point clouds allows for sophisticated interactions. Left: a lasso selection selects only points that lie on the support shape, as shown in the top image; points in front of and behind the support shape are not selected (bottom). Middle: a volumetric brush selection is performed on the selected support shape (top); points are only selected if they belong to the support shape and lie inside the brush (bottom). Right: interactive LoD refinement along the selected support shape (drawn in red). The top image shows the original rendering model of the point cloud; the bottom image shows the point cloud with the additional points.
Harald Steinlechner, Bernhard Rainer, Michael Schwärzler, and Georg Haaser
In Proceedings of I3D, 2019
Comparison between gradient-domain reconstruction and Monte Carlo denoising. For surface rendering, gradient-domain rendering is less efficient than Monte Carlo denoisers that use auxiliary buffers (NFOR [BRM∗16]) or histograms of path samples (BCD [BB17]). NFOR can also be applied to address the noisy regions remaining in gradient-domain path tracing, using the reconstructed image as a guiding feature, which improves image quality (see G-PT + NFOR in the KITCHEN scene). For volume rendering, gradient-domain rendering is comparable to Monte Carlo denoisers, particularly with photon density estimation.
Binh-Son Hua, Adrien Gruson, Victor Petitjean, Matthias Zwicker, Derek Nowrouzezahrai, et al.
Computer Graphics Forum, 2019
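To make the comparison above concrete, the following is a minimal sketch of the L2 (screened-Poisson) reconstruction that gradient-domain renderers typically use to merge a noisy primal image with its sampled finite-difference gradients. It is an illustration under assumptions only, not the reconstruction code of the surveyed methods; the function name, the plain gradient-descent solver, and all parameters are illustrative.

```python
# Hedged sketch: an L2 (screened-Poisson) reconstruction, the classic way a
# gradient-domain renderer merges a noisy primal image with sampled gradients.
# Not the reconstruction of the surveyed methods; `primal`, `gx`, `gy` are
# assumed to come from a gradient-domain renderer (grayscale, same shape).
import numpy as np

def l2_reconstruct(primal, gx, gy, alpha=0.2, iters=500, step=0.2):
    """Minimize alpha*||I - primal||^2 + ||DxI - gx||^2 + ||DyI - gy||^2
    by plain gradient descent (forward differences, replicated boundary)."""
    I = primal.astype(np.float64).copy()
    for _ in range(iters):
        dx = np.diff(I, axis=1, append=I[:, -1:])   # forward difference in x
        dy = np.diff(I, axis=0, append=I[-1:, :])   # forward difference in y
        rx = dx - gx                                # residuals w.r.t. sampled gradients
        ry = dy - gy
        adj = np.zeros_like(I)                      # adjoint D^T of the differences
        adj[:, :-1] -= rx[:, :-1]
        adj[:, 1:]  += rx[:, :-1]
        adj[:-1, :] -= ry[:-1, :]
        adj[1:, :]  += ry[:-1, :]
        I -= step * (alpha * (I - primal) + adj)    # descend the energy gradient
    return I

# Toy usage: recover a smooth image from a noisy primal and zero gradients.
primal = 0.5 + 0.1 * np.random.randn(64, 64)
recon = l2_reconstruct(primal, np.zeros((64, 64)), np.zeros((64, 64)))
```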
CyTOFmerge pipeline: Split the sample, stain each partial sample with a different marker panel, and apply CyTOF to obtain the panels’ measurements. Panels A and B share a set of markers m (green); L1 (red) are the markers unique to panel A, and L2 (blue) are the markers unique to panel B. Both panel measurements are combined to obtain extended marker measurements per cell, which serve as input to downstream computational analysis, for example, the clustering in the t-SNE-mapped domain shown here.
Tamim Abdelaal, Thomas Höllt, Vincent van Unen, Boudewijn P. F. Lelieveldt, Frits Koning, et al.
Bioinformatics, 2019
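As a rough illustration of the panel-combination step described above: the shared markers m let one look up, for every cell measured with panel A, similar cells measured with panel B and impute B's exclusive markers from those neighbours. The sketch below assumes a simple k-nearest-neighbour median imputation; the function name, neighbourhood size, and aggregation are assumptions and may differ from the actual CyTOFmerge procedure.

```python
# Hedged sketch of combining two CyTOF panels that share markers m.
# For each panel-A cell, find its nearest panel-B cells in the shared-marker
# space and impute panel B's exclusive markers (L2) by their median.
# Names, k, and the median aggregation are assumptions for illustration.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def merge_panels(A_shared, A_unique, B_shared, B_unique, k=50):
    """Return extended profiles for panel-A cells: [shared | L1 | imputed L2]."""
    nn = NearestNeighbors(n_neighbors=k).fit(B_shared)
    _, idx = nn.kneighbors(A_shared)               # k nearest panel-B cells per A cell
    imputed_L2 = np.median(B_unique[idx], axis=1)  # aggregate neighbours' unique markers
    return np.hstack([A_shared, A_unique, imputed_L2])

# Toy usage with random data: 1000 A cells, 1200 B cells, 10 shared markers,
# 5 markers unique to A (L1) and 7 unique to B (L2).
rng = np.random.default_rng(0)
ext = merge_panels(rng.normal(size=(1000, 10)), rng.normal(size=(1000, 5)),
                   rng.normal(size=(1200, 10)), rng.normal(size=(1200, 7)))
```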
Thomas Höllt, Anna Vilanova, Nicola Pezzotti, Boudewijn P. F. Lelieveldt, and Helwig Hauser
Computer Graphics Forum, 2019
Left: input shapes X1 and X2 (taken from [PRMB15]) and the reconstruction of the linear average (Z(X1) + Z(X2))/2, with the local violations of the integrability condition shown as a color map. Rightmost shapes: reconstructions using various spanning trees, color-coded with respect to the order of traversal.
Josua Sassen, Behrend Heeren, Klaus Hildebrandt, and Martin Rumpf
CoRR, 2019
Four highly glossy spheres moving in different directions, rendered with 64 samples per pixel. In each subfigure: the corresponding render, the difference to the reference, and highlighted regions.
Jerry Guo and Elmar Eisemann
CoRR, 2019
Streak visualization showing the formation, shedding, and breakdown of a vortex in a patient with an aortic dissection in the aortic arch and regurgitation in the ascending aorta. The corresponding video can be found in the supporting material.
Niels de Hoon, Kai Lawonn, A.C. Jalba, Elmar Eisemann, and Anna Vilanova
In Proceedings of Visual Computing in Biology and Medicine, 2019
The components of LightGuider: (a) a 3D modeling view to place and modify luminaires, augmented with (b) a provenance tree that depicts several sequential modeling steps and parallel modeling branches, integrates information on the quality of the individual solutions, and provides guidance by pre-simulating and suggesting possible next steps to improve the design. A film-strip-like visualization (c) of screenshots depicts the evolution up to the currently selected state. A quality view (d) uses bullet charts to show how well the illumination constraints that need to be met are fulfilled. Changing the weights of these constraints (e), and therefore the lighting designer’s focus, triggers an update of the provenance-tree node visualizations (the constraint weights are reflected in the distribution of the treemap space). Moreover, the defined weights are also considered when generating new suggestions, which are tailored towards satisfying constraints with higher weights.
Andreas Walch, Michael Schwärzler, Christian Luksch, Elmar Eisemann, and Theresia Gschwandtner
CoRR, 2019
HotPipe: screen capture of an early game level.
Liam Mac an Bhaird, Mohammed Al Owayyed, Ronald van Driel, Huinan Jiang, Runar A. Johannessen, et al.
In Proceedings of GALA, 2019
The setup used to play Loud and Clear.
Berend Baas, Dennis van Peer, Jan Gerling, Matthias Tavasszy, Nathan Buskulic, et al.
In Proceedings of GALA, 2019
OMiCroN overview. A renderable hierarchy is maintained while incoming nodes are inserted in parallel. This cycle is repeated until the whole hierarchy is constructed.
Vinicius da Silva, Claudio Esperança, and Ricardo Marroquim
Computers & Graphics, 2019
A portal through which a laser can be teleported.
Bob Dorland, Lennard van Hal, Stanley Lageweg, Jurgen Mulder, Rinke Schreuder, et al.
In Proceedings of GALA, 2019
Left: a single frame from a 240 Hz short-exposure video and a simulated long exposure at 30 Hz obtained by averaging 8 frames. Middle: using the 240 Hz input, our method mixes a long exposure in the periphery with a short exposure for the details on the pendulum. Via user annotations in the video, different shutter functions can be defined (top right); annotations and shutter functions can be keyframed over time. Based on the annotations, our method derives an interpolated shutter function for each pixel (bottom right).
Nestor Z. Salamon, Markus Billeter, and Elmar Eisemann
Computer Graphics Forum, 2019
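To illustrate the per-pixel shutter idea in the caption above: a long exposure can be simulated by averaging consecutive short-exposure frames, and a per-pixel weight map (derived from the interpolated shutter functions) blends it with a single short-exposure frame. The sketch below is a simplification under assumptions; the paper's actual shutter functions are richer than a plain box average.

```python
# Hedged sketch of per-pixel exposure mixing: average 8 frames of 240 Hz footage
# to emulate a 30 Hz long exposure, then blend with one short-exposure frame
# using a per-pixel weight map. Names and the box-average shutter are assumptions.
import numpy as np

def simulate_long_exposure(frames):
    """frames: (N, H, W, C) stack of short exposures -> (H, W, C) long exposure."""
    return frames.mean(axis=0)

def per_pixel_shutter(frames, weight):
    """Blend short (weight=0) and simulated long (weight=1) exposure per pixel."""
    long_exp = simulate_long_exposure(frames)
    short_exp = frames[len(frames) // 2]      # a representative short-exposure frame
    w = weight[..., None]                     # broadcast the weight over color channels
    return (1.0 - w) * short_exp + w * long_exp

# Toy usage: keep a centered detail region sharp, blur the periphery.
frames = np.random.rand(8, 120, 160, 3)       # 8 consecutive 240 Hz frames
weight = np.ones((120, 160))
weight[40:80, 60:100] = 0.0                   # short exposure for the detail region
mixed = per_pixel_shutter(frames, weight)
```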
The valuables (magenta) have to be dragged to the dropzone (green circle). If the junk (brown) ends up in the dropzone, the score decreases. The space snot (large green blob) is an obstacle in which ships and valuables can get stuck.
Shaad Alaka, Max Lopes Cunha, Jop Vermeer, Nestor Z. Salamon, J. Timothy Balint, and Rafael Bidarra
International Journal of Serious Games, 2019
Facet orchestration in Angelina, where different online sources are used to combine visuals and audio based on the mood and keywords of a Guardian article acting as (external) narrative. The level generator, however, was not connected to the remaining facets. The in-game screenshot is from [56].
Antonios Liapis, Georgios N. Yannakakis, Mark J. Nelson, Mike Preuss, and Rafael Bidarra
IEEE Transactions on Games, 2019
Two armadillos (274k tetrahedra) in a pool of water (633k particles) simulated at 60 FPS with a time step of 1/60 s. Fluid-deformable interaction and (self-)collisions are handled. The user can interact with the scene by clicking and dragging the meshes.
Christopher Brandt, Leonardo Scandolo, Elmar Eisemann, and Klaus Hildebrandt
ACM Transactions on Graphics, 2019
Saskia J Santegoets, Vanessa J van Ham, Ilina Ehsan, Pornpimol Charoentong, Chantal L Duurland, et al.
Clinical Cancer Research, 2019