2012

Standard 4,096² shadow map with perspective warping, rendering at 64 FPS (left). QVSM with a maximum refinement level of 32 × 32 and 1,024² tiles, rendering at 32 FPS (right). Adaptive subdivision effectively removes aliasing in cases that are difficult for reparametrization and global partitioning methods.
Elmar Eisemann, Ulf Assarsson, Michael Schwarz, Michal Valient, and Michael Wimmer
In Proceedings of SIGGRAPH Courses, 2012
An overview of the application window with two datasets. Each dataset is opened in its own child window. Each child window consists of three views: the pixmap view on the top left, the slice views on the top right, and the anatomical view on the bottom.
Andre F. van Dixhoorn, Julien Milles, Baldur van Lew, and Charl P. Botha
In Proceedings of Visual Computing in Biology and Medicine, 2012
Example of a transformation into a multi-component target
Marcelo Renhe, Antonio Oliveira, Claudio Esperança, and Ricardo Marroquim
In Proceedings of SIBGRAPI, 2012
Left: Brute-force method. Center: Hierarchical template matching. Right: Floating Textures' optical-flow implementation.
Matteo Dellepiane, Ricardo Marroquim, Marco Callieri, Paolo Cignoni, and Roberto Scopigno
IEEE Transactions on Visualization and Computer Graphics, 2012
Two examples of interactive visualizations made with the volume renderer of Kroes et al.
Charl P. Botha, Bernhard Preim, Arie Kaufman, Shigeo Takahashi, and Anders Ynnerman
CoRR, 2012
Snapshots of our 3D framework, visualizing an example simulation
Rafael Hocevar, Fernando Marson, Vinicius Cassol, Henry Braun, Rafael Bidarra, and Soraia R. Musse
In Proceedings of IVA, 2012
A screenshot of an interactive simulation with GALES. The 3D cloud field visualization is shown using volume rendering. During the simulation, the visualization can be actively zoomed and rotated to directly obtain insight into the simulation process.
Jerome Schalkwijk, Eric Griffith, Frits H. Post, and H.J.J. Jonker
Bulletin of the American Meteorological Society (BAMS), 2012
Sagittal view of DT ellipsoids generated for a healthy human brain
N. Sepasian, J.H.M. ten Thije Boonkkamp, and Anna Vilanova
SIAM J Imaging Sci, 2012
a) Input: Kaleidoscope image. b) User-drawn mask (green checkerboard pattern). c) User-drawn mask isolated. d) Approximate visual hull generated from (c) using image-based shading. e) Resulting labeling via visual hull
Oliver Klehm, I. Reshetouski, Elmar Eisemann, Hans-Peter Seidel, and Ivo Ihrke
In Proceedings of Vision, Modeling, and Visualization, 2012
An example of our randomly generated search space, consisting of interconnected “chambers.” The high-level graph is represented in blue.
Sandy Brand and Rafael Bidarra
Comput Animat Virtual Worlds, 2012
Extended Serious Games Multidimensional Interoperability Framework (SG-MIF)
Ioana Stanescu, Antoniu Stefan, Milos Kravcik, Theo Lim, and Rafael Bidarra
In Proceedings of eLSE 2012 - 8th International Conference on eLearning and Software for Education, 2012
Fiber-tracking results for the corpus callosum, using streamlines (a) or geodesics via the ray-tracing method (b).
N. Sepasian, J.H.M. ten Thije Boonkkamp, B.M. ter Haar Romeny, and Anna Vilanova
SIAM J Imaging Sci, 2012
Potential blockers (orange) for rays going from s to T are found in the shaft-like shape (green).
Lionel Baboud, Elmar Eisemann, and Hans-Peter Seidel
IEEE Transactions on Visualization and Computer Graphics, 2012
Heatmaps for Case 2. The left map uses the same color scheme as the heatmaps in Figure 5; in the right map, blue traces are males and pink traces are females.
Nick Kraayenbrink, Jassin Kessing, Tim Tutenel, Gerwin de Haan, Fernando Marson, et al.
In Proceedings of VS-GAMES, 2012
R.F.P. van Pelt, H. Nguyen, B.M. ter Haar Romeny, and Anna Vilanova
International Journal of Computer Assisted Radiology and Surgery, 2012
Ray-traced colored models: bottle (left), vase (middle), coffee mug (right top), and the same model registered in inverse ordering (right bottom)
Ricardo Marroquim, Gustavo Pfeiffer, Felipe de Carvalho, and Antonio Oliveira
Vis Comput, 2012
Antonio Brisson, G. Pereira, Rui Prada, Ana Paiva, Sandy Louchart, et al.
In Proceedings of AAAI Workshop on Human Computation in Digital Entertainment and Artificial Intelligence for Serious Games, co-located with AIIDE 2012 - 8th Conference on Artificial Intelligence and Interactive Digital Entertainment, 2012
Stunt arenas generated for players modeled as high (a) and medium (b) Sunday Drivers.
Ricardo Lopes, Tim Tutenel, and Rafael Bidarra
In Proceedings of PCG Workshop, 2012
Real-time rendering applications exhibit a considerable amount of spatio-temporal coherence. This is true for camera motion, as in the Parthenon sequence (left), as well as for animated scenes such as the Heroine (middle) and Ninja (right) sequences. Diagrams to the right of each rendering show disoccluded points in red, in contrast to points that were visible in the previous frame, which are shown in green (i.e., green points are available for reuse). [Images courtesy of Advanced Micro Devices, Inc., Sunnyvale, California, USA]
Daniel Scherzer, Lei Yang, Oliver Mattausch, Diego Nehab, Pedro V. Sander, et al.
Computer Graphics Forum, 2012
FU maps for multichannel EEG coherence visualization. Brain responses were collected from three subjects using an EEG cap with 119 scalp electrodes. During a so-called P300 experiment, each participant was instructed to count target tones of 2000 Hz (probability 0.15), alternated with standard tones of 1000 Hz (probability 0.85), which were to be ignored. After the experiment, the participant had to report the number of perceived target tones. Shown are FU maps for target-stimuli data, with FUs larger than 5 cells, for the 1–3 Hz EEG frequency band (top row) and for 13–20 Hz (bottom row), for three datasets.
Hanspeter Pfister, Verena Kaynig, Charl P. Botha, Stefan Bruckner, V. J. Dercksen, et al.
CoRR, 2012