Tuesday, 8 December 2009

Modelling and compiling

The compiling procedure in Half-Life requires a QC file. The .QC file is a list of commands that tells studiomdl where the model's various SMDs are located and where the compiled object should be written.
The various SMD's include:

A reference mesh - holds the UV mapping information and the rendered geometry of the model
A collision mesh - holds the physical properties - this should have a low polygon count (cheap for the physics engine to process)
Skeletal animation - information about joints etc. used in the animation process. This is required even for idle objects

Texture information is also referenced from the QC file.
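For reference, a minimal QC file for one of our props might look something like this (a sketch only - the paths, names and values here are placeholders, not our actual project files):

```
// Hypothetical QC for a prop - all paths/names are placeholders
$modelname "props_workhouse/sewing_machine.mdl"  // where the compiled MDL is written
$cdmaterials "models/props_workhouse"            // folder studiomdl searches for textures
$scale 1.0

$body "Body" "sewing_machine_ref.smd"     // reference mesh: geometry + UV mapping
$sequence idle "sewing_machine_idle.smd"  // skeletal animation: at least one sequence is required

$collisionmodel "sewing_machine_phys.smd" {  // low-polygon collision mesh
    $concave
    $mass 25
}
```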

Issues - Compilation
To date we have had issues with the exporting process.
This can be done in 3ds Max, Cinema 4D or LightWave, but all require plug-ins and different pieces of software to correctly assemble the SMDs.
We had trouble compiling in 3ds Max for these reasons and eventually settled on LightWave due to its more straightforward interface design and export capability.
Iona is currently tackling compiling.

Issues - Model production
The main problem we have encountered here is getting the model properties correct. Initially Selma created a list of fantastic models but used smoothing properties and NURBS (non-uniform rational B-splines). Half-Life only allows compilation of models designed with standard polygons. Although the models were redesigned, some still exceeded a polygon count of 8000 per model. These were redesigned to be less complicated and many are now usable. Some however are too simple and become unrecognisable when the MDL file is compiled.
This is really a trial-and-error process that Selma and Iona are deep into!

Some examples of rendered models by Selma Wong

sewing machine render
Sewing Machine

wheelchair render

More Modding in Orange Box -skyboxes

The space directly before the war bunker was simply a brick room (seen previously in another blog post called 'war room skeleton') before a skybox was added. A skybox, in conjunction with a sky camera, allows a panoramic sky-scape to be mapped onto any area to which a 'skybox' texture is applied. The sky camera takes its reference from the origin of the map, allowing larger buildings and objects to be framed in a space further away from the main map. The skybox geometry is built at 1/16 scale, which reduces the polygon count and means less skybox needs to be rendered.
Fire entities and smoke entities also add to the war-like effect. Here some entity parenting work needed to be done to tie the gas to the canisters and effects like the canisters exploding when picked up.
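As a rough sketch, the entity setup behind this looks something like the following (keyvalues only, not a full VMF, and the exact values are illustrative):

```
// Hypothetical 3D-skybox setup - values are illustrative
sky_camera
{
    "origin" "0 0 0"   // sits at the reference origin of the 1/16-scale skybox area
    "scale"  "16"      // skybox geometry is built at 1/16 scale
}
// Sky-facing brush faces in the main map are textured with tools/toolsskybox;
// the engine renders the 1/16-scale geometry behind them at apparent full size.
```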

Sunday, 22 November 2009

Not too happy!!!

Ok so it's currently 6am, 6 hours after i decided just to 'have a go' at looking at colour correction. I was trying to produce a colour desaturation when you walked into a specific area...

What i have worked out...

1. Everything that i have read online is wrong!! Using a color_correction_volume entity linked to a trigger brush gives you the opposite effect to what is required! You step into the triggered area and colour is RESTORED, when in fact you want the correction to happen when you enter that environment.

2. Making it work by putting a volume of trigger brushes around EVERYTHING BUT the environment has its problems unless you put it all in a func_detail before applying the linkages

3. When the thing does eventually work the effect bleeds into the rest of the map - the whole map desaturates slightly so it looks less vibrant than before. Not only does it make the map less lively but it nullifies the whole point of a drastic colour desaturation!

4. Throughout typing this whole blog entry ive had to correct every instance that i've written 'colour' because ive programmed myself to type 'color'

5. Im worried that sound and (potential) animation triggering is going to lead to just as many problems.

I can see myself having another go tomorrow although i dont think there is a single setting in the colorcorrectionUI or the func_detail properties i havent tried. If i cant get it to work id rather lose the desaturation effect and try and produce something that swells shadow or lighting or something, i really dont know. It just seems a shame to lose the vibrancy of the piece for one small effect.
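For the record, one alternative i may try is the point-entity version, color_correction, enabled and disabled through trigger I/O rather than wrapping the map in volumes. A sketch of the idea (the entity and input names are real Source ones as far as i can tell, but the values are guesses):

```
// Hypothetical setup: a color_correction point entity toggled by a trigger
color_correction
{
    "targetname"    "cc_desaturate"
    "filename"      "materials/correction/desaturate.raw"  // LUT saved from colorcorrectionui
    "maxweight"     "1.0"   // full strength when enabled
    "fadeIn"        "2.0"   // seconds to blend the effect in
    "fadeOut"       "2.0"
    "StartDisabled" "1"
}
trigger_multiple   // brush covering ONLY the area to desaturate
{
    "OnStartTouch" "cc_desaturate,Enable,,0,-1"   // entering: turn correction on
    "OnEndTouch"   "cc_desaturate,Disable,,0,-1"  // leaving: turn it off again
}
```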

One thing is for sure though, i gotta get out of this 3am, 4am, 3am, 6am OCD fueled work pattern. Sometimes i think if i left it alone a while and even slept on it id have a clearer picture or at least the will to try a few more times. Lets see what tomorrow brings


Sunday, 15 November 2009

Internal architecture stills

Internal space drafts by Lee in Half Life

Water Tower External Design

Implemented by Jovi using 3DS Max and photo textures -

Work in Situ

**Post deleted wednesday 6th January**

Time Management..

After doing a rough draft of the floor plans we've divided work up a little bit as follows:

• Building the external tower
• Build objects in 3Dstudio Max
• Scripting animation/sound elements into HL
• Research how to do animation in HL


• Building ECG machine, piping, and wires (blood trans.)
• Build objects in 3Dstudio Max
• Scripting animation/sound elements into HL

• Gathering textures
• Historical research and oral history
• Sound recording/foley
• Studio 6 conservatoire mix
• Scripting animation/sound elements into HL
• Particle generators and skyboxes (external environment)

• Building the maps/architecture in Half Life
• Sound recording/Foley
• Rough mixes then Studio 6 conservatoire mix
• Scripting animation/sound elements into HL
• Research triggering in HL (animation and sound)
• Particle generators and skyboxes (external environment)

Working with 5 weeks left to finish should give us plenty of time for unexpected problems and a week to fully test and gain feedback from people.

Week 1 (as of 16th Nov)
  • Complete water tower amendments
  • Gather relevant textures for existing tower internals (photography on site)
  • Building of maps/architecture in HL (Half Life)
  • Object research and plans on realisation in 3DS Max
  • Build upon ideas of how the blood transfusion idea could be achieved (Selma)
  • Begin oral history research

Week 2
  • Continue with 3DS Max object building including compilation tests and UV mapping
  • Research into object animation particularly in reference to the ECG machine idea (more to follow)
  • Continue internal architecture and design
  • Import real world 256x256 textures into HL and implement
  • Oral history script and how it will be experienced through the space

Week 3
  • Complete 3DS Max object building and begin to import into HL, orientate and UV match to existing HL entities (objects/prefabs/lighting)
  • Research into triggering sound and animation within HL
  • Begin sound sample gathering using predefined sound script throughout the space

Week 4
  • Continue sample gathering and begin compiling atmospheric sounds/triggered sounds/oral history work (fragmented narrative)
  • Contextualisation check - Meeting to discuss if the media created is sufficient to describe the space in the way we want it to
  • Animations produced and put into HL - tested and developed

Week 5
  • Import sound elements and set up proximity triggers within HL
  • Any remaining skybox work (picture of the new hospital through the windows etc)
  • Work with particle generators to create dusty environments

Week 6
  • Test/troubleshoot and resolution
  • Feedback


Wednesday, 4 November 2009

Second Floor Plan

Here are the Second Floor plans for the Wartime Scene


This floor is where it starts to get a little surreal.
As you enter the second floor you are met with gun fire, explosive noises, and generally a very busy and intrusive soundscape. The imagery itself aims to describe a war scene very graphically by actually bringing the outdoor elements of the scene inside. For all intents and purposes this is a beach landing. The walls are replaced with perspective images of beaches, ships, the sea, and on the other side a steep, high-banked hill topped with barracks and sea defences. The 'room' is littered with war artefacts. There is a segment of a tank breaking through the wall and anti-tank objects are scattered around. Short fragmented oral accounts from the landings can be heard as you negotiate the space. The space however is fairly void of colour.
Animation elements: Being able to shoot a machine gun?

As you move through the scene the anti-tank objects begin to lay side by side with wheelchairs, parts of hospital beds, and general hospital equipment. This merging also happens with sound as hospital equipment starts to bleep around you. Colour begins to saturate as you move towards the hospital doors.

You push through the doors which immediately triggers the halt of the battle sounds. We move into a pre-op corridor where the mood is a lot quieter and the oral accounts become more frequent but still ambiguous and fragmented. You can however hear the crying of a baby.
The much brighter corridor leads to an operating theatre. The walls here resemble metal framed fabric screens and house an operating table, surgical equipment and blood bags. The overriding hospital sounds become a little more busy. You can hear pumps, bleeps, scissors etc. These merge into a rhythmic pattern so the longer you spend in the hospital environment the more routine like it becomes. This could represent the routine the existing hospital staff/patients must put aside when moving to a new environment. The idea of change, a new environment and hope is depicted with an implied love story between a nurse and a soldier.

The final scene is a room with a single cot inside and the sound of a baby softly crying. The crying sound has been redefined by its environment and now signifies new life.
The cot and surrounding wallpaper is covered in poppies.
Animation elements: Being able to rock the cot

Ground floor plan

We intend to control the flow of content through each scene by using physical walls and barriers which guide the viewer along the correct pathway. Sound, animation elements and visuals will support the experience particularly through oral history components. Touches of interactivity will allow a more immersive environment and engage the participant long enough to allow us to trigger sound elements through the act of moving in proximity of a space or along a triggered timeline.

Here are the first floor plans for the Victorian Workhouse Scene


We decided to situate it on the ground floor so that it would be the first scene the viewer saw when entering the tower. Firstly, this immediately establishes a dialogue between the modern day and 200 years ago, as the tower's realisation is similar to how the building looks today. It also allowed us to introduce the sound, descriptive, and animation elements from the beginning.

As you pass through the double doors you are met with dark walls, minimal light, and manual machine noise. The turning of cogs and fast tapping describes sewing machines. You can sit at a machine, press a button and the machine begins to work. This triggers a further machine noise for added value.
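One way this could be wired up in Hammer is a func_button firing outputs at a sound entity and an animated prop - a sketch only, with placeholder names and sound paths:

```
// Hypothetical I/O chain for the sewing machine - names/paths are placeholders
func_button
{
    "OnPressed" "sewing_sound,PlaySound,,0,-1"          // start the machine noise
    "OnPressed" "sewing_machine,SetAnimation,work,0,-1" // play a 'work' sequence
}
ambient_generic
{
    "targetname" "sewing_sound"
    "message"    "workhouse/sewing_loop.wav"  // placeholder sound file
    "spawnflags" "16"                         // start silent until triggered
}
prop_dynamic
{
    "targetname" "sewing_machine"
    "model"      "models/props_workhouse/sewing_machine.mdl"
}
```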

You continue through the close cramped space turning left to reveal a dimly lit blackboard on the wall. Another look left shows a row of pews, a crucifix and a bible. This represents a closely bonded school/church situation. Potentially here religion based classroom readings can be simulated and brought to life through sound. In the distance the dull machine noise still prevails.

Continuing forward you are met with a kitchen/washroom scene. Pots and pans hang from the ceiling arching over a cooking range. You have to walk through the cooking equipment before moving on, knocking it to the floor with an inevitable clang and bang. Throughout the scene a bubbling noise and a scraping noise can be heard, describing the everyday workhouse kitchen duties.

You follow the corridor around to where some light from the window palely illuminates a corridor of small bunk beds. 6 in total here. Bed creaking helps describe the cramped conditions. The infirmary is our last port of call. Empty buckets, cloths and blood-stained bedding describe vividly the utility of this space. There could possibly be sound elements depending on how ugly we want to make it...

You then move up the stairs to wartime.

Friday, 30 October 2009

360? - choose your platform!

We now know pretty much what we want to happen inside the space from an audience perspective. It's becoming much easier now to get deeper into the participation and interactivity levels of the work, on paper at least. This is a good sign as it means we all have a collective understanding of what thoughts and feelings we are trying to provoke in the participant, as well as what could potentially be ambiguous and unique to the individual.

The last week has essentially been devoted to the 'how'... with the biggest questions concerning what platform we are going to run the experience on, and where the visual and audio elements could be designed.

Firstly our criteria for the platform: The 'must haves'
  1. It must allow us to build a static 3 dimensional environment
  2. It must support object building and texturing
  3. It must be audio compatible in some form or another
  4. It must in some way allow us to integrate 3 different areas into a single space so that we can express 3 distinct time periods
The 'essential for artistic content/interactivity'
  1. It should allow us to animate portions
  2. It should support a first person perspective or support a controllable avatar
  3. It should allow for appropriate triggering of sound and also be proximity controlled
  4. It should allow us to model the spaces accurately with a range of textures
The 'would be nice if's'
  1. It could support a timeline of events to give us additional content control
  2. Be fully flexible with animation
  3. Support video content
  4. Allow us to give the illusion of time travel (teleporting etc)
Second Life (SL) was our first option. This initially seemed a fairly good solution as it is hosted in a virtual world, allowing for a wider audience, and is also cross-platform between Mac and PC. It was possible to build objects and buildings, but the execution and quality were a little poor. This however wasn't such a problem as you could import from another platform such as 3D Studio Max. Sound and media could be distributed as well.
We decided to move away from this for a number of reasons. Firstly, its usability was poor and did not allow for full control of the environment we were in. This restricted our content flow and the ability to direct the participant's thinking. Texturing was also quite limited as it required fairly complicated modelling for alpha layering and UV mapping. The sound elements would also have been a complete compromise due to SL's 10-second maximum audio clip length. This would have made ambient sounds and timeline control very limiting.

Moving away from game platforms we started to explore other 3D modelling software. This would however mean that the work we produced would only be accessible through an installation environment. Knowing we might have to compromise, we continued to explore. We looked at several modelling programs including Maya, 3D Studio Max, and Illustrator. We worked a little in each program and found design was easier in 3D Studio Max and Maya. Maya however was a little difficult to navigate and there was only a limited amount of online help available.

We were still stumped for a platform so we started looking at Flash. Jovi has had some experience with Flash and could produce good working animation situations. It had full avatar control, although in a third-person perspective. This was great but we were still having to compromise with a 2D environment. Flash has recently offered some 3D rendering solutions but this was very tricky and possibly unachievable within our time constraints. Another solution was needed.

This continued for a couple of days until Iona, with a little help from a friend, suggested Half Life 2 as a platform. Having never really looked at editing a map in Half Life we were keen to explore what was possible. It supports a first-person gaming perspective as well as theoretically suitable solutions for 3D modelling and UV mapping. Coding looks achievable and there appears to be good support for sound. Half Life is designed to be modified by users and the level of online help and support mirrors this intention.

Animation and user participation is a big thing in Half Life. The physics engine is good and would allow us to move objects around if required. There is also functionality for all sorts of interactivity, such as pushing buttons to trigger animation, which is a fundamental element of user participation. We do however need to explore animation in Half Life in some detail as we don't yet know its complexity. In addition we haven't discovered any way of using video media in the platform.

We will attempt to create a single-user map in Half Life then test its functionality with some simple object rendering and texture assignment. If it behaves as we want it to then it's definitely worth going down this route.

building site

Hl interior

Friday, 23 October 2009

Descriptive walk-through..

Idea 1 – preferably experienced through headphones in the dark

The audience enter a world of dust/smoke and a dull glow, as if the world has been silenced. The sky is dark and grey and seemingly lifeless – the terrain is bare, the world is still and unnerving. They turn to see the Water Tower before them. It is not how it looks today. It is old and battered, but still holds a past that shows it has experienced so much. It gives the impression of untold stories.

As they enter they are provided with a dust-filled scene of wood, masonry, old hospital beds and stained walls (possibly use of ‘Dump the Junk’ artefacts to model). Sound here is a low drone accompanied with sounds of a naturally cold and dark environment. The participant should experience that environment through recordings of similar real-world sounds. They begin to explore. Their movements trigger events. A beam falls from the roof, crashing to the floor, a hanging light sparks and flickers. One final movement introduces the distant sound of explosions and planes. The room begins to move, objects re-orientate, and in a moment they find themselves in wartime Britain.

This next scene is louder, and more dynamic, eventful and more abrasive. The sounds in this space feel more real world. The drone has gone, replaced with mechanical sounds. The audience is drawn to a hospital bed at the far side of the room lit from above. As they get close they trigger human sounds of pain. Further exploration around the room provides fragmented narrative of patient stories, archived recordings and even speeches from both Axis and allied leadership including notes of propaganda. The sound begins to fade after a short time leaving only a final narrative. This narrative is an important one. It needs to reflect the fact that everything will be ok. Maybe from ‘For the Fallen’ – Laurence Binyon:

“As the stars that shall be bright when we are dust,

Moving in marches upon the heavenly plain;

As the stars that are starry in the time of our darkness,

To the end, to the end, they remain.”

This scenario feels more like a directed performance. The end triggers another room shift into the final scene, the Workhouse.
The room strangely appears the newest and cleanest but slightly drained of colour to give the feeling of early colour photography. (Although too early for such mediums, the film look gives an old but fresh tone.) Most sounds here are internal and less acousmatic. The soundscape is fairly bare with only the occasional clink of pots and pans or the odd sewing machine being turned. There are 3 distinct parts to this room. The first is a chair supporting an open book. A click of a button and the participant is in the chair, rocking back and forth. They have the option to take and hold the book while a narrative begins that describes an aged and infirm resident teaching the younger children how to count.
They get up and move on to a sewing machine. Again a click and they are sitting at the stool. The machine is animated and working and a narrative has begun describing work in the environment. Finally, a few more steps reveal a small cooking station. A click starts cooking sounds and a small mock-up film of lunchtime at the workhouse. It depicts a scene of bread and cheese, soup and potatoes eaten in silence without utensils. After all areas have been explored the participant can exit the building. One final movement out the door triggers the final piece of time travel. The room once again is a building site.

Notes on restrictions

  • Only one participant should occupy the building at any one time. This gives reasonable control of the flow of content we want them to experience.
  • The control architecture restricts movement to one direction: modern time to wartime to workhouse and back to modern time.
  • Internal objects and structures may be limited in complexity depending on the way the movements are scripted in the platform chosen

An important Focus

This week really hit home about how we need to focus on what we want the audience to experience from the piece. We explored some ideas:

  • The movement of time - how can we make this free flowing?
  • Do we want to use SL at all? Are there advantages to building in an exhibition environment and having more control over the experience?
  • We feel strongly that sound should be an essential part of the interactive experience. The participant should be able to trigger and interact with fragmented narrative describing historic opinion. Visual content could be used minimally?
  • A possible physical change when moving between time periods. E.g. if a person/avatar speaks, their sound is manipulated and changed by the environment. Maybe they can choose to change their age and clothing to suit the environment.
  • Interaction between the participant and period objects. Sewing machines, tools etc
  • Potentially limiting the amount of time people can interact with the work before it is completely deteriorated (achievable through temporary objects or scripted movements over time)
  • Atmospheres: making dirty air in the workhouse, clean air in the hospital
  • How to get the audience to enjoy the performance?
  • Can we allow people to become part of the work, not just experiencing but interacting, changing things, and becoming an integral element of the piece?

Ideas from Rose our Selly Oak collaborator
Rose was kind enough to send us back our very fragmented project proposal with some very positive feedback and a number of very interesting opinions and resources.
The first piece of interest is a site-specific piece proposed by the Birmingham Rep called 'Shell Shock'. The work will tie together experiences from the war in Afghanistan and scenarios during the Second World War. This seems really interesting, particularly when thinking about how we wish to treat time in the water tower project. This may be worth looking into.

Rose liked the ideas of an immersive environment and especially the use of a fragmented narrative to allow individual interpretation. She has a good point and i think it's important to be true to opinionated narrative and archive accounts rather than to pull the participants' thinking one way or another. We also have to consider how the visuals and other sound elements guide you through the space, taking care not to over-suggest thoughts and views or be too overwhelming.

I find Rose's ideas of change very interesting. She believes the unsettling and fast-paced nature of change requires a grieving process. In this sense the Selly Oak projects help to facilitate this. The water tower project therefore needs to address such issues and provide a vehicle to reassure the participant that change is healthy and that it's OK to feel as if you don't have complete control. Referencing elements of Selly Oak's history within the piece may help to reassure the participant (which in an exhibition environment would probably be someone with real connections to the hospital and therefore completely affected by change) and deal with what is happening to the hospital, 'allow the mind to begin the adaption process' to quote Rose.

OK, what next?
I think the next logical step is to combine all our experience and ideas over the last few weeks into some very descriptive 'walk-throughs' of the space. Between us we need to solidify the best way to respond to the site, and from talking, our collective opinion wholeheartedly falls towards viewer participation. Personally i feel that we need to add that focus of reassurance. Through the piece we at least need to suggest that change is an important process of improvement and that anxiety is natural in such a time.

Saturday, 17 October 2009

Side track - Computer Music

Nice discussion with Jonathan Green and the Jazz group/DAP group yesterday.
Ideas within elektronische Musik, musique concrète, and stochastic music were explored and, within this, how computer-based music moved through analogue, to natural sounds, and then to probability-based generative structures.

I've always been interested in ideas of spatialisation and diffusion within live electronic music. Electroacoustic composition, and particularly music performed acousmatically, is a difficult concept to engage with initially without actually having worked with sound organisation. I found a lot of people in the workshop switched off a little when listening back to most examples. I suppose it takes a while to connect with music without conventional musical events and definable attributes.


Jonathan's video examples reminded me of ideas that i want to explore during the DAP course. The following example sums it all up neatly for me. WYSYG (What you see you get) uses sensor-based technology to directly translate light to sound 'phrases'. The whole idea of audiovisual performance, and particularly using light as a variable, greatly interests me.


Thursday, 15 October 2009

Initial Collaboration Ideas

Tuesday's session with Second Life caused a few ideas to be voiced directly afterwards. We decided to meet early the next day to pitch our thoughts and attempt to come up with something more concrete.

Present at meeting yesterday:

The Queen Elizabeth hospital - Selly Oak site
During 2010-2013 all Selly Oak hospital services will move to the new QE hospital site. This leaves a two week period where several decommissioned buildings will be available for artistic and performance response.
One of the buildings is an old water tower, which holds some nice architectural interest.

Old A+E entrance to the hospital - UHB historical gallery

Moving on
The Selly Oak site itself is only available from June next year, meaning any work that could potentially be produced there would have to be realised graphically for now.
  1. Is it worth looking at a physical site if realistically there was no way of presenting work there in our time frame (10 weeks)?
  2. Would it be a good idea to represent an idea in a virtual environment with the intention of moving a work to the physical site when possible?
  3. Should we just take inspiration from the architecture/culture to produce work in a different environment?
Collectively as a group we are keen to represent the external water tower in a virtual environment such as Second Life. Along with this we want the internal space to be bespoke to whatever function we wish. This may mean we scale the tower to some degree to accommodate such functions.

Purpose before process
We feel it's important to pay respect to the history of the hospital and draw something contextually from its past. A little research showed that the hospital played a major part in receiving air raid casualties during World War 2, and regularly doubled its intended intake. A little more research suggested some of the Solihull hospital units were formerly workhouses for the homeless, sick, and aged. This of course added another dimension to what we could potentially work with.

I initially discussed ideas of pulling together audio and visual media to suggest the feeling of a wartime hospital scene. The intensity and representation of that time could be described more in an audio context than simply through the visual scenes that lay in the internal space. Iona added that a morphing of this idea with a workhouse scene could work well. Between us we decided there was no reason why we couldn't use sound and visual cues to describe both spaces. A sewing machine, for example, has a similar sound quality to a machine gun; the look of a hospital bed compares to a dormitory bed. These comparisons could be explored much further to blur the borders between the two spaces, to imply an ambiguity.

How can we make this more interesting? Several ideas of embedded video and even narrative were discussed, but personally i think it's important we don't make the project feel like a heritage build or too informative. Our intention is to use new technologies to explore the difficulties and holistic responses of the space and to draw comparisons between the two different settings.

Jovi put forward that a teleport system may be a good way of moving between the different spaces with the original tower condition as a starting state. From here you could portal to the respective scenes. Nice idea! The conversation then moved onto creating 3 different towers. I thought this would be a little inefficient and would mean you would see 3 towers as you entered the external scene.
Instead i put forward the following idea..


Here the three floors of the water-tower provide the teleport points for each scene. Using proximity thresholds it may also be possible to allow certain sounds to bleed between floors aiding the morphing nature.

The next step
Audio research - what's possible in Second Life (compromises?) - ambient/placement
The build - how accurately can we represent the tower
Mapping - external and internal proportions
Textures - what's possible to produce and implement in Second Life: building textures/fabric/lighting
Contextual research - relevant audio and visual cues in context with the period, including potential items we can use from 'X-street' (online store for items)
Second Life architecture - permissions, buying land if needed, teleport architecture, required land size/prim allocation, embedding media

Once completed we can move onto design stages and start to put together exactly what we want to get out of the internal spaces in the building.
Lots of ifs and buts to be ironed out and put forward before we commit pen to paper

Wednesday, 14 October 2009

Notes on Second Life and a Virtual Environment

Yesterday yielded an interesting session with Drew and Mike on a virtual world sim - 'Second Life' (SL). Here you're a 'version 2.0' of yourself, an alter ego, an avatar for creation, and within this realm you have seemingly limitless potential.
To be honest at first glance this seemed too 'dungeons and dragons' for me. A little too immersive for my liking, maybe even megalomanic.


Firstly, i dont want to be one of those people who pass judgement before embracing someone's passion, and secondly there is a reason why SL has such a strong online community. It didn't take Drew long to demonstrate its potential as an environment where you can build and test new installations, recreate existing buildings, and interpret ruined/destroyed/lost landmarks as they may have looked in their period. For me, things got interesting where you can use media to enhance the virtual environment to the point where you can interpret pretty much any art-based event/activity in that space.

Here's an example of a concert performed by the Liverpool Philharmonic Orchestra streamed through Second Life. This played out to 80 avatars that subscribed to the performance..

The experience allowed a remote audience to experience the concert and even share conversation with the composer and conductor after the performance, in a way only possible through interactive media. It also showcased the hall in a cultural heritage light through a modestly accurate representation of the building inside and out.

So What Next?
This environment seems like a good starting point when looking at producing a collaborative work.
The virtual world is simply a place that i personally have not explored in any way when working with technologies, and it's a good opportunity to push myself right out of my comfort zone and start working with new forms of interactive media. Creating some sort of build in Second Life (or another virtual environment) seems far enough away from what we know as a group to be a steep and rewarding learning process (scripting/broadcasting/3D modelling). However within it there are opportunities to utilise the skills held collectively in terms of audio/visual media creation and manipulation.

Digital artworks explore relationships between reality, in what is a physical space, and what is virtual. Should we be concerned with the translation or mapping of a physical environment into a virtual one, or with exploiting the qualities and freedom of the virtual space to produce work that is unique to that environment? Architecturally a virtual location may be more responsive in terms of interaction between you and that space, but will it evoke the same emotive responses as similar art in a physical space?

food for thought....

Thursday, 1 October 2009

Collaboration thoughts

What do I want to achieve?
On an academic level it's to produce a piece of work in collaboration with someone with a different range of skills, bounce ideas off each other, and produce a work that is effectively greater than the sum of its parts. On a personal level it's a great opportunity to learn new skills, refine existing ones, and most importantly for me, to be put out of my comfort zone and do something that otherwise would have been left alone.

My background is music production, which forces me to slant away from a project built solely around audio. This doesn't however rule out the use of audio in an installation or piece of work. What is interesting to me is the effect audio/sonics have upon us in a physiological/psychological sense. I'm just going to list a few 'key' words i feel are important to this; it might help!

  • sensory
  • cyclic feedback
  • tactile
  • interactivity
  • creative input devices

Works/Installations of interest

Steve Reich's 'Pendulum music'

Reich here uses sounds that are usually seen as 'unwanted/undesirable' in the audio world to produce generative composition. What's interesting to me is that when viewed in the context of an art installation, the resultant sounds feel musical; turning undesirability into creativity.

Nathaniel stern/Greg Shaker collaboration - 'Undertoe'

I've chosen this example because of its interactive nature. Although an unfinished concept, it pulls together the sensory, feedback, and interactivity elements of where i want to go with my work. 'Undertoe' also touches on the therapeutic side of art, which is an exciting avenue i wish to explore.
More details here, nathanielstern.com

Public intervention... I also found this of interest but won't dwell on it too much at the moment. Nevertheless have a look at the following slide show: SpYInterventions (under the 'interventions' tab)

Monday, 28 September 2009

All set!

Blog and Calendar is set up, modules picked, facilities explored!
Ready to get creative!