GRC Science Viz: Day Four

Day four of GRC SciViz 2011 brought us the theme “Revealing Unseen Complexity”.  The speakers were Wilmot Li (Adobe Inc.), Randy Sargent (Carnegie Mellon University), and Jayanne English (University of Manitoba). The evening session featured Susana Martinez-Conde and Stephen Macknik (Barrow Neurological Institute) with a talk on neuroscience, attention, and magic.

Wilmot Li

Wilmot Li (Adobe Inc.) started the day with his talk, “Challenges, Occlusions, Visual Clutter, and Simultaneous Motions”.  He began with an overview of historical illustration techniques: for instance, the cut-away (Frank Netter), the exploded view, and the style perfected by David Macaulay in his The Way Things Work books (motion arrows, framed sequences, etc.).  Li then showed us experimental illustration tools, developed at Adobe and based on these traditional techniques, that are designed to convey the spatial and working relationships in complex 3D objects interactively.

The first tool was the Automatic Diagram Creation technique: the user chooses, from a palette, the part of a complex structure they want to see, and the illustration system moves to the optimal view (with cut-aways) that reveals the selection in situ and exposes the parts of interest. As an adaptation of the exploded view approach, he showed a sample where the user can create and tweak an “explosion axis” to expand and collapse an entire assembly (car parts, hydraulic pumps), aided by intelligent tracing tools.
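To make the exploded-view idea concrete, here is a minimal sketch (my own illustration, not Adobe's implementation) of the core geometry: each part is slid outward along the explosion axis so the assembly separates while preserving part order. The part names and dimensions are hypothetical.

```python
# Minimal sketch of an exploded view: slide each part outward along a
# chosen "explosion axis" so that parts separate but keep their order.
# Part geometry is reduced to an interval [lo, hi] along that axis.

def explode(parts, spread):
    """parts: list of (name, lo, hi) along the axis; spread: gap size (0 = collapsed)."""
    offsets = {}
    shift = 0.0
    for name, lo, hi in sorted(parts, key=lambda p: p[1]):
        offsets[name] = shift          # translate this part by `shift` along the axis
        shift += spread                # open a gap before the next part
    return offsets

# Toy assembly: three stacked components of a hypothetical pump.
assembly = [("housing", 0.0, 2.0), ("piston", 2.0, 3.5), ("cap", 3.5, 4.0)]
print(explode(assembly, spread=1.5))   # {'housing': 0.0, 'piston': 1.5, 'cap': 3.0}
```

Varying `spread` continuously between zero and some maximum is what produces the smooth expand/collapse interaction he demonstrated.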

Inspired by the David Macaulay approach, they built a system that works from an input model.  With it, you can animate the moving parts, generate static illustrations (with motion arrows to indicate how the motion happens), and produce framed sequences.  In the end, you have a complete animation that the user can step through to see each stage.  They also combined this method with the exploded view technique in order to extract and examine parts of the animation on demand.

His take-away summary:  there is a lot of visualization technology out there, but he is concerned that it is not getting into the hands of the scientific and research community.  So he concluded his talk with a discussion of ways that authors can work with these tools.  For instance, using an authoring tool (like the ones he showed) to apply rigging to your diagram (something you drew or found), then rendering it through Flash and sharing it on a web page.

Randy Sargent

Next up was Randy Sargent with a talk called “Exploring Gigapixel-Scale Images and Video”.  He started with two observations:  1) we are very good at remembering places that we explore, and 2) there is an important distinction between being a driver and a passenger (people like to drive).  When you drive, you make the decisions, so you have a better sense of situatedness.  These observations inform his work.

Using Google Earth technology, he took us to Mars, following the path of the Mars rover Opportunity and examining the high-resolution images taken by the cameras on the rover’s stalk-like mast as it maneuvered inside a crater.  Because the rover’s cameras were mounted at eye level, scientists could look around as if they were hiking through the planet’s craters themselves, rather than merely inspecting detached images on a computer screen.

Mars Rover, Opportunity

What’s more, the images taken by the Mars rover and imported into Google Earth were of such high resolution that you could see the stratigraphy of the various crater cliffs, including concretions formed in the bedrock, most likely due to the action of water.

GigaPan Camera

After their success with the Mars rover, Sargent and his colleague Illah Nourbakhsh (CMU) decided to bring the technology back down to the home planet and get it into the hands of other scientists and photographers.  Using the same concept, they designed and produced the GigaPan camera device, pictured on the right: you mount your camera on the device and it systematically takes images across, up, and down over a given area.  The device can be configured to sample images over any desired area under investigation.  The images are then stitched together, with overlapping geographic and spatial data used to align them properly.

You may remember the stunning panoramic photograph taken by David Bergman at Barack Obama’s inauguration?  Same technology.  You can pan and zoom around that photo to see the color of Michelle Obama’s gloves or who was sitting behind George Bush in the second row.  The image, actually made up of 220 images taken over a 15-minute period and stitched together, is huge: 1,474 megapixels, to be exact (for comparison, an ordinary digital image is typically about 10 megapixels).  This GigaPan technology is now available to the average person: low-end consumer devices for standard point-and-shoot cameras cost about $300, while devices for more sophisticated SLR cameras are closer to $900.
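For a sense of the capture pattern, here is a rough back-of-the-envelope sketch in Python (my own estimate, not GigaPan's actual firmware) of how many overlapping frames such a device needs to cover a scene; all of the numbers are hypothetical.

```python
import math

# Rough sketch (not GigaPan's actual firmware): estimate the grid of
# pan/tilt positions needed to cover a panorama with overlapping frames.

def capture_grid(pan_deg, tilt_deg, fov_h_deg, fov_v_deg, overlap=0.3):
    """Return (columns, rows, total shots) to cover pan_deg x tilt_deg
    with a camera whose frame spans fov_h_deg x fov_v_deg, keeping
    `overlap` (as a fraction) of each frame shared with its neighbor."""
    step_h = fov_h_deg * (1 - overlap)   # new horizontal coverage per shot
    step_v = fov_v_deg * (1 - overlap)   # new vertical coverage per shot
    cols = math.ceil(pan_deg / step_h)
    rows = math.ceil(tilt_deg / step_v)
    return cols, rows, cols * rows

# Hypothetical numbers: a 120 x 40 degree scene, a lens with a 10 x 7
# degree field of view, and 30% overlap so the stitcher can match features.
print(capture_grid(120, 40, 10, 7, overlap=0.3))  # -> (18, 9, 162)
```

As a sanity check on the inauguration numbers: 1,474 megapixels stitched from 220 frames works out to roughly 6.7 megapixels of final output per frame, which is plausible once the overlapping regions are trimmed away.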

Sargent also demonstrated GigaPan Time Machine: mount a camera on a GigaPan time-lapse device so that panoramas are captured repeatedly, letting viewers explore changes over both space and time. He showed a couple of wonderful samples: plant growth (from seedlings, to flowering, to seed propagation), the Jane Goodall Institute monitoring habitats in Tanzania, and NOVA documenting a nuclear control room.

Sargent also told us about the Global Connection Project, a joint effort between Carnegie Mellon, NASA, Google, and National Geographic to connect people through this affordable and accessible technology, bridging physical and cultural distances through the exploration of rich, dynamically viewable online images.  There is some really interesting stuff on the Global Connection Project web site; it is most definitely worth a wander. For instance, you can download a dynamic Google Earth overlay to view the landslides and structural damage resulting from the 2005 Pakistan earthquake. This technology offers useful ways to assess and respond to such large-scale environmental disasters.

Sargent wrapped up his talk by highlighting the tension between leading the user to something interesting and letting the user discover it themselves, and the need to find more effective ways to annotate these images.

Jayanne English

The third talk was given by Jayanne English (University of Manitoba), entitled “Cosmos and Canvas:  Art Revealing Science in Astronomy Images”. English talked about the tension between art and science in astronomical image making and research.  She gave us very useful color lessons (primary and complementary colors) and guidance on how scientists can iterate until the colors support the intended message.  On her website she gives astronomers guidance on how to render their images most effectively.  Here is a podcast of English being interviewed.  And here are a few of her stunning images.
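As one hedged illustration of the kind of iteration English described (my own sketch, not her workflow), the snippet below maps three wavelength bands onto the red, green, and blue channels using astropy's Lupton asinh stretch; the `stretch` and `Q` knobs are what you tune until the colors support the message. The band data here is synthetic.

```python
import numpy as np
from astropy.visualization import make_lupton_rgb  # Lupton et al. asinh stretch

# Synthetic stand-ins for three wavelength bands of the same field;
# in practice these would be calibrated images of a real object.
rng = np.random.default_rng(0)
shape = (256, 256)
long_band  = rng.gamma(2.0, 1.0, shape)   # e.g. red-ward emission
mid_band   = rng.gamma(2.0, 0.8, shape)
short_band = rng.gamma(2.0, 0.6, shape)   # e.g. blue-ward emission

# Map long/mid/short wavelengths to R/G/B, then iterate on `stretch`
# and `Q` until faint structure is visible without blowing out bright cores.
rgb = make_lupton_rgb(long_band, mid_band, short_band, stretch=3, Q=8)
print(rgb.shape, rgb.dtype)   # (256, 256, 3) uint8, ready to save or display
```

Swapping which band feeds which channel, or adjusting the stretch, changes the story the image tells, which is precisely the kind of deliberate choice English urged astronomers to make.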
