We now know pretty much what we want to happen inside the space from an audience perspective. It's becoming much easier now to get deeper into the participation and interactivity levels of the work, on paper at least. This is a good sign, as it means we all have a collective understanding of the thoughts and feelings we are trying to provoke in the participant, as well as what could potentially be ambiguous and unique to the individual.
The last week has essentially been devoted to the 'how'... with the biggest questions concerning which platform we are going to run the experience on, and where the visual and audio elements could be designed.
Firstly, our criteria for the platform. The 'must haves':
- It must allow us to build a static 3D environment
- It must support object building and texturing
- It must be audio compatible in some form or another
- It must in some way allow us to integrate 3 different areas into a single space so that we can express 3 distinct time periods
The 'essentials for artistic content/interactivity':
- It should allow us to animate portions
- It should support a first person perspective or support a controllable avatar
- It should allow sound to be triggered appropriately, including by proximity
- It should allow us to model the spaces accurately with a range of textures
The 'would be nice ifs':
- It could support a timeline of events to give us additional content control
- It could be fully flexible with animation
- It could support video content
- It could allow us to give the illusion of time travel (teleporting etc.)
Second Life (SL) was our first option. This initially seemed a fairly good solution, as it was hosted as a virtual world, allowing for a wider audience, and was also cross-platform between Mac and PC. It was possible to build objects and buildings, but the tools were a little poor in their execution and quality. This, however, wasn't such a problem, as you could import models from another platform such as 3D Studio Max. Sound and media could be distributed as well.
We decided to move away from this for a number of reasons. Firstly, its usability was poor and did not allow for full control of the environment we were in. This restricted our content flow and our ability to direct the participants' thinking. Texturing was also quite limited, as it required fairly complicated modelling for alpha layering and UV mapping. The sound elements would also have been a complete compromise due to SL's 10-second maximum audio clip length, which would have made ambient sound and timeline control very limiting.
Moving away from game platforms, we started to explore other 3D modelling software. This would, however, mean that the work we produced would only be accessible through an installation environment. Knowing we might have to compromise, we continued to explore. We looked at several modelling programs, including Maya, 3D Studio Max, and Illustrator. We worked a little in each program and found that design was easier in 3D Studio Max and Maya. Maya, however, was a little difficult to navigate, and there was only a limited amount of online help available.
We were still stumped for a platform, so we started looking at Flash. Jovi has had some experience with Flash and could produce good working animation. It offered full avatar control, although from a third-person perspective. This was great, but we would still have been compromising with a 2D environment. Flash has recently offered some 3D rendering solutions, but these looked very tricky and possibly unachievable within our time constraints. Another solution was needed.
This continued for a couple of days until Iona, with a little help from a friend, suggested Half-Life 2 as a platform. Having never really looked at editing a map in Half-Life, we were keen to explore what was possible. It supports a first-person gaming perspective as well as theoretically suitable solutions for 3D modelling and UV mapping. Coding looks achievable, and there appears to be good support for sound. Half-Life is designed to be modified by users, and the level of online help and support mirrors this intention.
Animation and user participation are a big part of Half-Life. The physics engine is good and would allow us to move objects around if required. There is also functionality for all sorts of interactivity, such as pushing buttons to trigger animation, which is a fundamental element of user participation. We do, however, need to explore animation in Half-Life in some detail, as we don't yet know its complexity. In addition, we haven't discovered any way of using video media in the platform.
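From what we've read so far, Half-Life 2's Source engine handles this sort of interactivity through its entity input/output system. The names below are our own placeholders and this is only an untested sketch, but a proximity-triggered ambient sound for one of our time periods might look something like this when set up in the Hammer editor:

```
// Hypothetical entity setup (placeholder names, untested sketch):

ambient_generic:
  targetname:  snd_room_1960            // looping ambience for one time period
  message:     ambient/room_1960.wav    // placeholder sound file of our own
  spawnflags:  Start Silent             // wait for a trigger before playing

trigger_multiple:                        // invisible brush volume covering the area
  OnStartTouch -> snd_room_1960, PlaySound   // start ambience when the player enters
  OnEndTouch   -> snd_room_1960, StopSound   // stop it when they leave
```

If this works as described, the same trigger mechanism should also help with the 'time travel' illusion, since the engine appears to provide teleport triggers as well.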
We will attempt to create a single-user map in Half-Life, then test its functionality with some simple object rendering and texture assignment. If it behaves as we want it to, then it's definitely worth going down this route.