2012

Voorspellen van visueel functieverlies door epilepsiechirurgie (Predicting visual function loss caused by epilepsy surgery)
B.M. ter Haar Romeny, C. Tax, R. Duits, Anna Vilanova, C. Jacobs, et al.
Epilepsie: Periodiek voor Professionals, 2012
A distance query returns all information related to the structures inside the sphere.
Noeska Natasja Smit, Anne C. Kraima, Daniel Jansma, Marco C. DeRuiter, and Charl P. Botha
In Proceedings of 3D Physiological Human Workshop, 2012
Real-time rendering applications exhibit a considerable amount of spatio-temporal coherence. This is true for camera motion, as in the Parthenon sequence (left), as well as animated scenes such as the Heroine (middle) and Ninja (right) sequences. Diagrams to the right of each rendering show disoccluded points in red, in contrast to points that were visible in the previous frame, which are shown in green (i.e. green points are available for reuse). [Images courtesy of Advanced Micro Devices, Inc., Sunnyvale, California, USA]
Daniel Scherzer, Lei Yang, Oliver Mattausch, Diego Nehab, Pedro V. Sander, et al.
Computer Graphics Forum, 2012
An example of our randomly generated search space, consisting of interconnected “chambers.” The high-level graph is represented in blue.
Sandy Brand and Rafael Bidarra
Computer Animation and Virtual Worlds, 2012
Overview of the visualization framework, based on spatiotemporal hierarchical clustering. The gray dashed arrows depict pre-processing steps. (1) A tMIP volume is generated, and (2) an iso-threshold captures the voxels that are clustered. (3) Next, the cluster hierarchy is constructed. (4) Using the cluster tree, labels are generated per cardiac phase. After preprocessing, the real-time visualization is generated using the available data structures, as depicted by the solid blue arrow.
R.F.P. van Pelt, S.S.A.M. Jacobs, B.M. ter Haar Romeny, and Anna Vilanova
Computer Graphics Forum, 2012
FU maps for multichannel EEG coherence visualization. Brain responses were collected from three subjects using an EEG cap with 119 scalp electrodes. During a so-called P300 experiment, each participant was instructed to count target tones of 2000 Hz (probability 0.15), alternated with standard tones of 1000 Hz (probability 0.85), which were to be ignored. After the experiment, the participant had to report the number of perceived target tones. Shown are FU maps for target stimuli data, with FUs larger than 5 cells, for the 1–3 Hz EEG frequency band (top row) and for 13–20 Hz (bottom row), for three datasets.
Hanspeter Pfister, Verena Kaynig, Charl P. Botha, Stefan Bruckner, V. J. Dercksen, et al.
CoRR, 2012
a) Input: kaleidoscope image. b) User-drawn mask (green checkerboard pattern). c) User-drawn mask isolated. d) Approximate visual hull generated from (c) using image-based shading. e) Resulting labeling via visual hull.
Oliver Klehm, I. Reshetouski, Elmar Eisemann, Hans-Peter Seidel, and Ivo Ihrke
In Proceedings of Vision, Modeling, and Visualization, 2012
A 3D model of the relevant vascular anatomy is surrounded by map views that display scalar flow features from five sides (features on the left, right, bottom, and top sides are shown at the corresponding ring portions). Scalar features of the back side are shown in the rightmost display. The lines pointing from the map portions to the 3D view indicate correspondences where scalar features are shown in both views. If the user drags a point representing an interesting feature from a map view to the center, the anatomical model is rotated to make that region visible. All map views change accordingly.
Anna Vilanova, Bernhard Preim, R.F.P. van Pelt, R. Gasteiger, M. Neugebauer, and Thomas Wischgoll
CoRR, 2012
Snapshots of our 3D framework, visualizing an example simulation
Rafael Hocevar, Fernando Marson, Vinicius Cassol, Henry Braun, Rafael Bidarra, and Soraia R. Musse
In Proceedings of IVA, 2012
Two examples of interactive visualizations made with the volume renderer of Kroes et al.
Charl P. Botha, Bernhard Preim, Arie Kaufman, Shigeo Takahashi, and Anders Ynnerman
CoRR, 2012
Left: Brute-force method. Center: Hierarchical template matching. Right: Floating Textures' optical flow implementation
Matteo Dellepiane, Ricardo Marroquim, Marco Callieri, Paolo Cignoni, and Roberto Scopigno
IEEE Transactions on Visualization and Computer Graphics, 2012
Example of a transformation into a multi-component target
Marcelo Renhe, Antonio Oliveira, Claudio Esperança, and Ricardo Marroquim
In Proceedings of SIBGRAPI, 2012

2011

Fernando V. Paulovich, D. M. Eler, J. Poco, Charl P. Botha, Rosane Minghim, and L. G. Nonato
2011
Julian Togelius, Jim Whitehead, and Rafael Bidarra
IEEE Transactions on Computational Intelligence and AI in Games, 2011
Tim Tutenel, Roland van der Linden, Martin Kraus, Bart Bollen, and Rafael Bidarra
In Proceedings of PCG Workshop, 2011
Possible game scenario with lighting using bent normals: 2048×1024 pixels, 60.0 fps, including direct light and DOF on an Nvidia GF 560Ti. Environment mapping produces natural illumination, while bent normals cause colored shadows.
Oliver Klehm, Tobias Ritschel, Elmar Eisemann, and Hans-Peter Seidel
In Proceedings of Vision, Modeling, and Visualization, 2011
This figure shows glyph visualizations of HARDI and DTI images of a 2D slice in the brain, where neural fibers in the corona radiata cross neural fibers in the corpus callosum. Here, DTI and HARDI are visualized differently: HARDI is visualized according to Def. 1, whereas DTI is visualized using Eq. (1).
R. Duits, T.C.J. Dela Haije, A. Ghosh, E.J. Creusen, Anna Vilanova, and B.M. ter Haar Romeny
In Proceedings of SSVM, 2011
Ruben M. Smelik, Tim Tutenel, Klaas Jan de Kraker, and Rafael Bidarra
Computers & Graphics, 2011