Thanks to everyone who helped us throughout this project - the presentation went well and we learnt a lot from the experience
Here follows a short video we put together - enjoy
For a larger version please visit the following link VRU Watertower Video
Sunday, 17 January 2010
Today was a trial run with all the equipment in place, to see whether all the technologies worked together properly and the sound and visuals were coherent and sat well in the space. Firstly we set up the internet connection without a hitch. 10/10 :D - that was my biggest concern.
We sourced all the speakers and leads we needed and set up the HUD and keypad. All communicated and worked well. The quad speaker arrangement and samples worked OK after a little tweaking of my audio interface (as I'd set some of the DAC outputs incorrectly). A small Behringer desk served as a line mixer for the separate audio samples, while the hall's main speaker pair routed the in-game sound very well.
There are a couple of things we need to sort out before tomorrow. Firstly, we are having trouble displaying the tower animation on an external Mac screen driven from a laptop. We think this is probably just a broken adapter, so hopefully it will be a quick fix tomorrow. If not, we'll have to run off a desktop Mac. Secondly, there are a couple of models still to go in the piece: the tank and the sewing machine. That's going to be tight!
We only had from 2-5 today, so really only had around 10-15 minutes to have a go with the HUD etc. to see how interactive it felt. The surround speaker arrangement seems really effective, and when performing you really feel immersed in the experience, so on that front I think the sound will be a success. I'll be very interested to see how an audience reacts to the interface, and the piece as a whole.
**drafted pre-presentation** - presentation on 12th January
We are pretty much ready now. All samples are in place and the work is responding pretty much how it should. There are a few bugs here and there though. Firstly, the engine still seems to crash between the hospital and living room scenes. This only happens very rarely but, of course, if the work were in the public domain this would have to be fully addressed. For now though it's stable enough to present. Why does it crash? Most likely because the audio manager in Half-Life can't handle multiple instances of sound triggers. Unfortunately there is not much that can be done about this except to build a 'soundscape'; the problem there, of course, is that there would be no real control over where the narratives would start to sound.
We are using a head-up display (HUD) and keypad to control visual orientation and movement respectively. The HUD does its job fairly well, in that you really do have to turn your head and body to orient yourself properly around the scene. Its downfall, I think, is its accuracy in the vertical direction. The device seems to be velocity sensitive, in the same way as a mouse. I wonder if that can be turned off in the same way... we'll come back to that.
Due to internet connectivity and access restrictions, we really only have the recital hall (Conservatoire) and the space at Eastside to work with. Essentially, the space needs enough room to set up a quadraphonic speaker arrangement, enough room for a separate stereo pair for 'in-game' sounds, and somewhere to witness the performance. The recital hall offers much more room and allows space for an audience to sit. Eastside may be a little informal.
The recital hall also has a large projection screen directly above a lot of empty floor space. Working in this space would mean an audience could see the participant using the interface while being able to witness their interactions on the screen very clearly. The in-built speakers in the recital hall are also already calibrated for that audience perspective.
Wednesday, 6 January 2010
As mentioned previously, we are using a lot of sampled sounds, narratives, and ambient sounds in the watertower work. Triggering them individually in Half-Life proved uneconomical, causing the engine to stutter and freeze. This could be solved using soundscape scripts - a sort of automated sample recall system with predefined parameters - but that hands over some creative control to the engine. We found it would be far more performative to trigger samples ourselves in an installation environment.
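For reference, a Source-engine soundscape script takes roughly this shape - the soundscape name and sound paths below are hypothetical, not our actual assets. The `playrandom` block illustrates the loss of control: the engine chooses when and which sample fires.

```
// scripts/soundscapes_watertower.txt (hypothetical file)
"watertower.hospital"
{
	"dsp"	"1"

	// a constant room tone, started automatically
	"playlooping"
	{
		"volume"	"0.5"
		"pitch"		"100"
		"wave"		"ambient/watertower/ward_tone.wav"
	}

	// occasional one-shots at random intervals - the engine
	// decides the timing and selection, not the performer
	"playrandom"
	{
		"time"		"10,30"
		"volume"	"0.3,0.6"
		"pitch"		"95,105"
		"rndwave"
		{
			"wave"	"ambient/watertower/radio1.wav"
			"wave"	"ambient/watertower/radio2.wav"
		}
	}
}
```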
The advantage of such a system is that we can spatially position a sound where we want, manipulate volume, and add interesting effects such as delays and dynamic sample playback rate. With a large number of samples to choose from, it means we can create a unique sonic experience for every instance of gameplay. Indeed, it's even possible to 'play' the samples like a musical instrument, creating interesting textures and modulative interplay between samples.
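The dynamic playback rate mentioned above amounts to resampling: reading through the sample buffer faster or slower than normal. A minimal Python sketch of the idea (this is illustrative only - the actual manipulation happens inside the Max MSP patch):

```python
def resample(samples, rate):
    """Read through `samples` at `rate` times normal speed using
    linear interpolation. rate > 1 plays faster (higher pitch),
    rate < 1 plays slower (lower pitch)."""
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # interpolate between the two neighbouring sample values
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += rate
    return out
```

At half speed the output is roughly twice as long as the input, which is why varispeed playback changes pitch and duration together.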
List of samples:
chopping sound 1
chopping sound 2
radio sound 1
radio sound 2
radio sound 3
grenade far away
I am using Max MSP to manipulate the samples in real time during gameplay. The patch is designed to allow each sample to be looped, panned left to right, panned front to back (in a quadraphonic speaker arrangement to create pseudo surround sound), slowed down, sped up, and sent to a delay line.
*All patches are original and feature bespoke sub-patches for a front to back panner (including links to the delay line) and a linear volume panner that also acts as a limiter. Design by Lee Scott
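The panning sub-patches distribute each sample across the four speakers. One common way to do this - and an assumption on my part about the patch internals - is an equal-power law applied on two axes, sketched here in Python:

```python
import math

def quad_gains(x, y):
    """Equal-power gains for a quadraphonic (4-speaker) layout.
    x: left/right position in [0, 1] (0 = hard left)
    y: front/back position in [0, 1] (0 = fully front)
    Returns gains for (front-L, front-R, rear-L, rear-R)."""
    # equal-power law on each axis keeps perceived loudness
    # constant as the sound moves around the room
    gl = math.cos(x * math.pi / 2)
    gr = math.sin(x * math.pi / 2)
    gf = math.cos(y * math.pi / 2)
    gb = math.sin(y * math.pi / 2)
    return (gl * gf, gr * gf, gl * gb, gr * gb)
```

At the centre of the room all four gains are equal and the squared gains sum to 1, so total power stays constant wherever the sound is placed.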
Max Patch for bottom floor of Watertower simulation
Max MSP presentation mode for bottom floor of Watertower simulation
The purpose of the sitting room was to introduce the idea of new beginnings. The surrealist placement of the hospital setting directly next to the sitting room is intended to forge a link between relationships that may have come to fruition in wartime - perhaps consultants and nurses, soldiers and hospital staff. The idyllic setting of a middle-class house with comfy sofas, carpet and pictures on the wall is a far cry from the dirty, blood-stained, hectic pace of the hospital before.
The large poppy picture on the wall is mounted on simple poppy wallpaper symbolising the end of war and the intent for remembrance. This reflects the aesthetic for the piece as a whole which attempts to give the same impressions about the hospital move.
The ambient sound of children playing and birds chirping again plays directly after the sound of a busy hospital waiting room. This attempts to merge the two spaces: sonically they are very similar - indiscernible talking, a busy nature - but they have very different social contexts, one playful, one panicked.
Once inside the room, tied to one of the chairs is a love story from a wife at home to her husband at war. The response is tied to a bed - a reference to a hospital bed or a service bunk.
This post will address our use of sound within the Watertower Half Life Mod.
We are treating sound as an essential part of the watertower project, not only in terms of describing the visible situation, but in adding value to the experience by describing what cannot be seen. This form of acousmatic sound will play upon the ambiguity of space and time to provide an ebb and flow of past and present. Sound will be acoustically and spatially manipulated as well as time modulated. There is a particular focus on fragmenting verbal accounts of the hospital's past and present, both to further describe the space and to promote individual interpretation through ambiguity.
We are using the following sound sources:
- Aural accounts from patients and staff at Selly Oak Hospital
- Ambient sounds
- Individual sounds to describe environments e.g. grenade sounds, weaving loom sounds
Sound in Half-Life
Initially we intended all sound to be buffered within Half-Life and triggered wherever we wished. Sound could then be placed where desired, made to play everywhere in the map, or fixed to a location and heard in proximity to a predefined area.
In reality Half Life couldn't sustain all of the media that we wanted to trigger. The complexity of multiple sample recall and high quality audio caused stuttering during gameplay.
We solved this by placing only ambient sounds into the game so that transitions between areas were smooth and realistic.
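The in-game ambient sounds are placed with `ambient_generic` entities in the map. A rough sketch of the keyvalues involved - the sound path and target name here are hypothetical, and the exact keys should be checked against the engine's entity documentation:

```
// a point entity in the map - sound path hypothetical
{
	"classname"	"ambient_generic"
	"targetname"	"ambience_birds"
	"message"	"ambient/watertower/birds_children.wav"
	"health"	"7"	// volume, on a 0-10 scale
	"radius"	"512"	// audible within this distance of the entity
}
```

Keeping only these lightweight looping ambiences in-engine, and moving all triggered samples out to Max MSP, is what smoothed the transitions between areas.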
The third and final floor of the piece is dedicated to ideas of remembrance for SE hospital, the anxiety of change, and a reach out to new beginnings. This naturally meant that most of the narrative we collected ended up being assigned to the top floor. The top floor was therefore kept free of any other triggered sounds to accommodate this.
Treatment of ambiguous narrative
Other fragments of aural history are peppered throughout the workhouse and wartime themes. Ghost stories feature in the workhouse areas, and short wartime memories appear throughout the air raid shelter and surgery room scenes. We have tried to associate accounts that discuss different blocks of the hospital ('N, S and K block' etc.) with areas of the workhouse. The 'stink' of conditions in K block is referenced by the sleeping conditions in the bunk-bed corridor on the bottom floor, and quotes like 'different parts pack up', where Lilian (one of our interviewed patients) describes her hospital treatments, appear in work areas where links to equipment failure can be drawn.
Treatment of narrative material on the top floor
The top floor features a set of still pictures of the Queen Elizabeth and Selly Oak hospitals. Longer stories and anecdotes are tied to the pictures, just as the staff and patients' thoughts and feelings are tied to the hospitals themselves. The narrative works on a proximity basis, so there will likely be times when you can hear several stories intermingling at once.
The top floor also houses an interactive light installation (shown below). Each light is turned off or on by passing over it, so a user can draw patterns in the matrix of lights. Shorter fragments of narrative are assigned to several of these lights, and by passing over them the participant creates an individual series of sound clips. Each time the space is accessed, we hope different areas of the room will be explored, uncovering different stories.
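The toggling logic behind such a light matrix is simple. A minimal sketch of the idea, assuming a grid where stepping on a cell flips its light and may fire an assigned narrative fragment (the clip name below is hypothetical, and our actual implementation may differ):

```python
class LightMatrix:
    """A grid of floor lights; walking over a cell toggles it.
    Some cells carry a narrative fragment that fires on toggle."""

    def __init__(self, rows, cols, fragments=None):
        self.state = [[False] * cols for _ in range(rows)]
        self.fragments = fragments or {}  # (row, col) -> clip name

    def step_on(self, row, col):
        """Toggle the light at (row, col); return any clip to play."""
        self.state[row][col] = not self.state[row][col]
        return self.fragments.get((row, col))


# usage: stepping on an assigned cell lights it and returns its clip;
# stepping again darkens it but the fragment can fire once more
matrix = LightMatrix(4, 4, {(1, 2): "lilian_clip_03.wav"})
clip = matrix.step_on(1, 2)  # returns "lilian_clip_03.wav"
```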
Out of 6 interviewees, 2 were staff and 4 were patients. They each harbour different views about the move and, of course, different perspectives on what the new hospital can bring. Some views can be described as broad, some as tunnel-visioned, some from an individual perspective, and some from that of the wider community. We do not know which narratives will be accessed on the top floor at any one time, and we hope this leaves differing messages and opens debate between people who experience the work.
We are featuring several ghost stories in a section of the workhouse. One of three ghost stories will be triggered depending on whether the user enters the room on the left-hand side, the right-hand side, or centrally. Our intent is that a different ghost story is heard every few attempts, so viewers of the piece may hear one or two different ones. What is interesting here is that the patients and staff are talking about the same set of ghost stories with varied levels of exaggeration (one of which reads more as tricks being played upon staff members). We hope this variation comes across to the audience.
Tuesday, 8 December 2009
The compiling procedure in Half-Life requires a QC file. The QC file is a list of commands that tell studiomdl where the model's various SMDs are located, and where the compiled model should be written.
The various SMDs include:
A reference mesh - holds the UV mapping information and the rendered geometry of the model
A collision mesh - holds the physical properties; this should have a low polygon count so collisions are cheap to compute
Skeletal animation - information about joints etc. used in the animation process. This is required even for idle objects
Texture information is also referenced in the QC file.
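Putting the pieces together, a QC file for one of our props might look roughly like this - the model, path, and SMD names here are hypothetical placeholders, and the exact commands should be checked against the studiomdl documentation:

```
// loom.qc - hypothetical file and path names
$modelname	"watertower/loom.mdl"
$cdmaterials	"models/watertower"
$scale		1.0

// reference mesh: UV mapping + rendered geometry
$body loom	"loom_ref.smd"

// skeletal animation: required even for an idle object
$sequence idle	"loom_idle.smd" fps 1

// collision mesh: low polygon count, carries the physical properties
$collisionmodel	"loom_phys.smd"
{
	$concave
}
$surfaceprop	"wood"
```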
Issues - Compilation
To date we have had issues with the exporting process.
Exporting can be done from 3ds Max, Cinema 4D or LightWave, but all require plug-ins and additional pieces of software to correctly assemble the SMDs.
We had trouble compiling from 3ds Max for these reasons, and eventually settled on LightWave due to its more straightforward interface design and export capability.
Iona is currently tackling compiling.
Issues - Model production
The main problem we have encountered here is getting the model properties correct. Initially, Selma created a set of fantastic models but used smoothing properties and NURBS (non-uniform rational B-splines). Half-Life only allows compilation of standard polygonal models. Although the models were redesigned, some still exceeded a polygon count of 8,000 per model. These were redesigned to be less complicated, and many are now usable. Some, however, are too simple and become unrecognisable when the MDL file is compiled.
This is really a trial-and-error process that Selma and Iona are deep into!
Some examples of rendered models by Selma Wong