We are pretty much ready now. All the samples are in place and the work is responding largely as it should, though a few bugs remain. Firstly, the engine still seems to crash between the hospital and living-room scenes. This happens only very rarely but, of course, if the work were presented publicly it would have to be fully addressed. For now, though, it's stable enough to present. Why does it crash? Most likely because the audio manager in HL can't handle multiple instances of sound triggers. Unfortunately there is not much that can be done about this except to build a 'soundscape'; the problem there, of course, is that there would be no real control over where the narratives would start to sound.
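As a rough illustration of the suspected failure mode, and one possible guard if we ever got access to the trigger logic, here is a minimal sketch. All names here are hypothetical; this is not the actual HL audio manager API, just the idea of refusing to start a second instance of a trigger that is still playing.

```python
# Hypothetical sketch: serialise overlapping sound triggers so only one
# instance per trigger can be active at a time. Names are illustrative,
# not the real HL audio manager interface.

class SoundGuard:
    def __init__(self):
        self.active = set()  # trigger ids currently playing

    def fire(self, trigger_id):
        # Ignore re-triggers while the same sound is still playing --
        # the overlap suspected of crashing the engine.
        if trigger_id in self.active:
            return False
        self.active.add(trigger_id)
        return True

    def finished(self, trigger_id):
        self.active.discard(trigger_id)

guard = SoundGuard()
first = guard.fire("hospital_ambience")   # starts playing
second = guard.fire("hospital_ambience")  # ignored: already active
guard.finished("hospital_ambience")
third = guard.fire("hospital_ambience")   # allowed once the first has ended
```

The trade-off is the same one noted above: suppressing overlaps keeps things stable, but a soundscape-style approach gives up fine control over exactly where each narrative begins to sound.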
We are using a head-up display (HUD) and keypad to control visual orientation and movement respectively. The HUD does its job fairly well, in that you really do have to turn your head and body to orient yourself properly around the scene. Its downfall, I think, is its accuracy in the vertical direction. The device seems to be velocity-sensitive, in the same way as a mouse; I wonder whether that can be turned off in the same way. We'll come back to that...
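To make the vertical-accuracy complaint concrete, here is a small sketch contrasting the two mappings. A velocity-sensitive (mouse-like) device integrates scaled deltas, so the on-screen angle drifts away from the physical head angle; an absolute mapping would just read the sensed orientation directly. The function names and the gain value are illustrative assumptions, not the real device's API.

```python
# Hypothetical sketch: relative (velocity-sensitive) vs absolute
# orientation tracking. Not the actual device interface.

def relative_update(current_angle, delta, gain=1.5):
    # Mouse-style: reported deltas are scaled by a speed-dependent gain,
    # so quick movements overshoot the true head angle.
    return current_angle + delta * gain

def absolute_update(_current_angle, measured_angle):
    # Absolute: the on-screen angle always equals the sensed head angle.
    return measured_angle

angle = 0.0
for delta in (10.0, 10.0):          # two quick 10-degree head movements
    angle = relative_update(angle, delta)
print(angle)                         # 30.0: overshoots the true 20 degrees

print(absolute_update(angle, 20.0))  # 20.0: tracks the head exactly
```

If the device (or its driver) exposes a way to disable the velocity scaling, the mapping would effectively collapse to the absolute case, which is what accurate vertical orientation needs.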
Due to internet connectivity restrictions, access and so on, we really only have the recital hall (Conservatoire) and the space at Eastside to work with. Essentially we require enough room to set up a quadraphonic speaker arrangement, enough room for a separate stereo pair for 'in-game' sounds, and somewhere for an audience to witness the performance. The recital hall offers much more room and allows space for an audience to sit; Eastside may be a little informal.
The recital hall also has a large projection screen directly above a good deal of empty floor space. Working in this space would mean an audience could see the participant using the interface while witnessing their interactions on the screen very clearly. The built-in speakers in the recital hall are also already calibrated for that audience perspective.