We started the day with a bang – an absolutely fabulous talk from Jeffrey Schnapp (Director of Stanford University's Humanities Lab), titled "The Augmented Museum." For me, this one made the whole conference worth the price of admission. Schnapp and his team have put together an online virtual exhibit by linking three independent, geographically separate real-life exhibits on Sirikata, an open-source platform for creating virtual worlds. It is a WebGL-based environment that runs right in the browser.
First, Schnapp gave us a good rundown of what immersive environments mean for museums. He talked about a shift away from the museum's traditional guardian notion of collecting ("we are the vault") – what Schnapp referred to as a "scarcity-based model" – toward a new model of access and abundance.
Schnapp described the new "augmented museum" this way:
– it has porous walls
– it is built on a public square
– it is dedicated to exploration
– it multipurposes and multichannels everything that it does
– it is a theater of research where the show is work in progress, as the museum opens up its own design and planning processes to the public
– it creates perpetually visitable archives that complement traditional supports like catalogs, photographs and web sites
– it embraces interactive models of programming, with an emphasis on bottom-up counterparts to more traditional top-down approaches
Schnapp showed us a movie of Sirikata being used to deliver a museum exhibit called "Speed Limits" (which has been staged in the real world, and will be in Miami Beach in September 2010). You click an email link and immediately enter (without plug-ins!) a 3D, navigable gallery of the exhibit. You can walk around, move the paintings, add a picture to the wall (from Flickr), resize it, and add comments. Then, just for fun, the participant in the movie released a ball into the museum that knocked down all the paintings and created havoc. What fun for children. Just imagine the possibilities here – what a way to extend a museum collection for teaching, mashing-up, playing, exploration and creation.
On top of these fascinating insights into the future of museums was the notion of a 3D world delivered seamlessly in a browser. Gone are the barriers of downloading clients, apps, or plug-ins – you just click and arrive. It is clear to me that the current virtual-world entry barriers are just too high for the average participant (not to mention fleets of teachers and students), but the technology that Schnapp introduced suggests something much easier, more accessible, and much more inviting.
Next up were Charles Morris (Virtual Helping Hands), Janyth Ussery (Texas State Technical College), and Denise Wood (University of South Australia), talking about the ways they've used Second Life in teaching, with a particular emphasis on working with the handicapped. Charles was there in person; Janyth and Denise were skyped in from their homes. Here were a few highlights… dance in SL (dancing with an avatar), presentations with an invited international speaker, students creating games (each team given their own skybox in which they create their own immersive game), theater/opera staged in-world, as well as business and financial modeling. Janyth teaches two classes in SL – one on critical thinking and the other on business etiquette. In the etiquette class, they take the students out for a business dinner (in SL) to model appropriate business dining – a moderated role-playing experience.
Morris and Virtual Helping Hands work to improve the accessibility of SL for the handicapped. To that end, they are building their own SL viewer, called Access Globe. Its features include text-to-speech applications, automated captioning, enhanced accessibility of menus, audio notification of on-screen events, visual notification of sound events, a text list of avatars in the area for the hearing impaired, and the ability to function without a mouse. Morris also talked about Max, the virtual guide dog – an interesting project, and an assistive device to aid the handicapped in an immersive environment. Here's a video of Access Globe in use.
Next up: Nicholas Nagle from the Immersive Education Initiative, presenting the Transcoder Project. Nagle's pitch was to establish standards and build code that allow content to be deployed independently of any one immersive education platform. He points out that 3D worlds on the internet today are walled gardens (as online communities like AOL, Prodigy & CompuServe were before the WWW). The Immersive Education Initiative's Transcoder Project will let you create content that exists independently of any particular 3D immersive platform, so that objects look and behave the same way regardless of where they are deployed. So, for instance, with the Transcoder you could take objects from Google's 3D Warehouse (objects developed in Collada) and deploy them in your virtual world of choice.
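The Collada-to-anywhere idea is easy to picture in code. Here's a minimal, hypothetical sketch – not the actual Transcoder, which I haven't seen the source for – of the first step such a tool might take: since COLLADA (.dae) is just XML, it parses the vertex positions out of a tiny COLLADA document with Python's standard library, producing the kind of platform-neutral geometry data a transcoder could then hand to any target world. The `extract_positions` function and the embedded sample document are my own illustrations.

```python
# Illustrative sketch only: pull geometry out of a COLLADA (.dae) document
# into a neutral form that any target world could consume. Not the
# Immersive Education Initiative's actual Transcoder.
import xml.etree.ElementTree as ET

# COLLADA files live in this XML namespace (per the COLLADA 1.4.1 schema).
NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

# A minimal, hand-written COLLADA fragment: one triangle's vertex positions.
DAE = """<?xml version="1.0"?>
<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema" version="1.4.1">
  <library_geometries>
    <geometry id="tri">
      <mesh>
        <source id="tri-positions">
          <float_array id="tri-array" count="9">0 0 0  1 0 0  0 1 0</float_array>
        </source>
      </mesh>
    </geometry>
  </library_geometries>
</COLLADA>"""

def extract_positions(dae_text):
    """Return a list of (x, y, z) tuples for every float_array in the file."""
    root = ET.fromstring(dae_text)
    vertices = []
    for arr in root.findall(".//c:float_array", NS):
        floats = [float(v) for v in arr.text.split()]
        # Group the flat list of floats into 3-component vertices.
        vertices.extend(zip(floats[0::3], floats[1::3], floats[2::3]))
    return vertices

print(extract_positions(DAE))
# -> [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
```

From a neutral list of vertices like this, a real transcoder would then emit whatever mesh format the destination platform expects – which is exactly the "write once, deploy anywhere" promise Nagle was describing.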
The morning ended with a rather unsatisfying workshop-style "conversation" entitled "Towards an Immersive Education Core Curriculum in Higher Education." Aaron Walsh did a heroic job of trying to lead and guide, but it was too much like herding cats – everyone talking, thinking, and going in different directions, with a few strong-minded and confident people taking over. Some useful gems, though: the importance of shifting focus from objects/plumbing to designing/creating experience, as well as a few interesting thoughts on authentication/credentialing/certification.
That brought us to lunch. And I think I’ll continue the afternoon’s reporting in the next blog post.