Target Audiences and Limited Resources

Both units studied this trimester (Studio 3 and CIU Major Project Development) had a ‘future focus’; that is, they are designed to get us thinking about future, perhaps commercial, productions after we graduate.  A strong message from both units has been the importance of clearly identifying a target audience and using this information in the planning and pre-production phases of a project.  Another message has been the need to work creatively in less-than-perfect environments and with limited resources. This blog addresses both of these Studio 3 learning outcomes.

Although our project was not intended as a potential commercial venture, thinking about the Game of Thrones audience was a valuable exercise.  This knowledge informed many of the decisions we made during the project.  As mentioned in my earlier blog about the aesthetics of the production, we didn’t want to deviate too much from what fans of the franchise might expect.

And who are those fans? Game of Thrones is a hugely popular series.  The primary audience can be identified as males aged 18 to 45, but it is clear that the series also appeals to women. In the above-mentioned blog I talk about what sets Game of Thrones apart from other fantasy series and where its appeal lies.

We wanted to make the production polished and highlight the elements that make the show so popular.  Probably the biggest departure from the original GOT production is our use of music.  We introduced a modern instrument to the mix (electric guitar) and blended it with instruments generally found in the series (e.g. cellos, violins, flutes).  This was a creative decision that came about relatively late in the trimester, when our semi-professional composer, Brad, became unable to continue collaborating with us on the project.  Unfortunately, as we were no longer able to contact him, we felt we could not use the material we had worked on together.

Even though this was a setback, not to mention a huge disappointment, we were determined to deliver our product as pitched, with an original score.  It was way too late to engage the help of another composer, so Lisa and I decided to take on the challenge of composing ourselves.  Neither of us has any experience in this area, so of course we weren’t expecting the results to match what we had originally intended. We had to think creatively.  How could we compose and record ten minutes of appropriate music, scored to the scene, in three weeks?

We hit on an idea that Akshay had actually mentioned in week 2 (it must have stayed at the back of our minds) – introducing electric guitar to the score.  Lisa enlisted the help of Corey D’Angelis to work on some guitar parts that would suit the carnage scenes in the middle of the episode.  She then set about composing strings and drums to go with the guitar, as well as composing the first section of music before the guitar comes in.

My role was to write the music for all the dragon parts: basically from where the dragon enters the Colosseum to where he flies off at the end.  This was always going to be a challenge.  I read everything I could on music composition, read up on basic music theory and analysed the music in the original scene.  More importantly, I just played around with the MIDI keyboard at uni for hours until I stumbled on a few chord progressions that I liked.  I don’t have a MIDI keyboard at home (it’s been added to my wishlist) and so had to access the computers with Kontakt 5 software at uni, at night when rooms were available.  I practically lived in there for three weeks, staying till midnight each night and working across the weekends.  But I’m certainly not complaining, as it was one of the most enjoyable experiences I have had working on a project.

Once I had the chords more or less sorted I set about adding instruments.  The more I worked on the music the more I realised I didn’t know.  Are the notes supposed to overlap when playing legato?  Can a flute be played like that?  What is the range of this instrument?  How are the parts written to play chords?  Again, more research on the basics of orchestral instruments and who plays what.  I experimented with different instruments playing different notes of a chord and in different octaves and tried to add a little counter melody where I could.

Then I discovered you could control other parameters in Kontakt 5 such as the velocity curve.  Three weeks was not nearly enough time but I’m actually pretty happy with my results considering my lack of knowledge in this area.

Below is a segment of the music I wrote.  It picks up where Daenerys and her dragon, Drogon, have a touching moment before he is again speared.  Daenerys then slowly moves toward him and carefully climbs aboard.  Drogon then charges across the Colosseum floor, gaining momentum before taking off into the sky and flying into the distance with Dany aboard.  The instruments sound quite ‘MIDI’ but you get the idea of where I was going.

Lisa and I worked together where we could to ensure some consistency and flow between the various musical segments. For example, we decided to start and end our video with just timpani drums. Lisa did a really good job working out how to transition in and out of the guitar so that it would fit.  We had originally planned to record a number of instruments (when our composer was on the project) but now we could only record the electric guitar.  Lisa and I recorded Corey’s playing one Sunday afternoon in the Neve.  As Corey was playing to the video and not a click, Lisa did some painstaking editing on the guitar after the session and re-performed her MIDI input to match the timing.

I’m thrilled with what we achieved with our music given the time constraints and limited resources, not to mention our inexperience in music composition.

Experiences with Mastering

Mastering was one of the key areas looked at in our Studio 3 unit this trimester.  Until this unit, it was an area of audio I had put in the ‘too hard’ basket and knew very little about.  We were very lucky to have Guy Gray taking these classes and sharing his expertise in both lecture format and practical demonstrations.  We touched on mastering early in the trimester and then revisited it several times later in the unit.  I thought this was a good approach because we were able to revise what we had already learnt and then build on this knowledge to gain a more thorough understanding. We were also lucky to be able to look at mastering in different contexts and genres, e.g. stereo and 5.1, orchestral and pop.

We learned that mastering is all about achieving optimal sound quality as well as ensuring consistency in sound quality and dynamic range between audio files e.g. individual songs on an EP or chapters in an audio book.  Oh, and let’s not forget it’s about getting things loud.

Guy pointed out that getting fantastic sound quality in the mastering phase of production required getting a great recording and mix in the first place.  Mastering just adds the final polish.

In the mastering lectures, Guy explained the basics of mastering, gave us information about what mastering engineers actually do and talked us through pre-master mix requirements.  The aim of these classes was two-fold: to give us the knowledge to apply basic mastering to our productions when a professional mastering engineer is not an option, and to have a better understanding of what a mastering engineer does with your audio so that you can mix with that in mind if your tracks are being professionally mastered.

Once in the studio, Guy gave us practical demonstrations of the various mastering principles and techniques discussed in the lectures.  He also showed us the benefits of mastering with stems (full mix, vocal-only version, music-only version).

In this blog I will give a bit of a summary of the main messages I took out of these classes.

  • Start with a good mix in the first place, with enough headroom
  • Mastering signal flow: audio source track/s bussed to a Source Master (stereo auxiliary track), then bussed to a Print track.  An auxiliary for reverb can also be created if required. Set the aux tracks to solo isolate.
  • Tracks are always recorded/printed, not bounced
  • Brickwall limiter (e.g. Maxim) inserted on the Source Master – always last in the chain.  Ceiling set to -0.3dBFS (least noticeable difference). Release set to around 60ms. Threshold set to what you think sounds good.  Be mindful that limiting brings up the noise floor.
  • Change meters to peak
  • Apply a multiband compressor to the Source Master
  • EQ the source track/s
  • Always revisit the limiter after making adjustments earlier in the chain
  • It’s all about balance
  • The best mastering is transparent
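To put a number like the -0.3dBFS ceiling in context, here’s a minimal Python sketch of the conversion between dBFS and linear amplitude, with a naive hard-clip standing in for the limiter (a real limiter like Maxim applies smoothed gain reduction with a release time, so this is only an illustration):

```python
def dbfs_to_linear(dbfs):
    """Convert dBFS to linear amplitude (0 dBFS = full scale = 1.0)."""
    return 10 ** (dbfs / 20)

def brickwall_limit(samples, ceiling_dbfs=-0.3):
    """Naive 'brickwall': clamp every sample to the ceiling.
    A real limiter (e.g. Maxim) applies gain reduction with a
    release time (~60ms above) rather than hard-clipping like this."""
    ceiling = dbfs_to_linear(ceiling_dbfs)
    return [max(-ceiling, min(ceiling, s)) for s in samples]
```

So a -0.3dBFS ceiling corresponds to a peak of roughly 0.966 of full scale – just below clipping, which is why the difference is barely noticeable.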

The practical demonstrations were hugely beneficial.  Seeing and more importantly hearing the mastering process in action made it very clear what was going on and why.  We also got to learn about more advanced mastering techniques such as mid-side processing.  Akshay demonstrated how to decode a stereo file in order to apply mid-side processing and Guy talked us through the magic of using this technique.  A real eye-opener.

Seeing and hearing mastering in action, however, is never enough to improve skills in this area.  You really need to give it a go.  I had several experiences applying basic mastering techniques across the trimester.  My first attempt was when I produced a revisit of my horror sonic experience.  This was a very basic attempt, and I realise now I did not have the release setting right so it wasn’t that effective.  Later in the trimester I had the opportunity (along with my group) to master our 5.1 version of the Game of Thrones episode as well as the stereo version.  The surround version turned out quite well, but we were a bit rushed when working on the stereo version and I think we missed the mark a bit with that one.  We had set up a stereo master with sends set to FMP (follow main pan) to this master (essentially like a headphone mix).  Guy had shown us this technique.  It certainly is very effective, but time is required to get the balance right when downmixing.  I’m hoping we get an opportunity to redo the mastered stereo version in the near future, as we’d put an incredible amount of time and energy into the production up to that point.

My most recent attempt at mastering is on the pilot of our audio book for next trimester.  From the research I’ve done so far, it appears the ceiling is supposed to be set at -3dB peak but I need to look at this a bit more.  From this experience I can see that mastering an audio book is going to be tricky.

Signal processing and repair

This blog addresses the following Learning Outcomes of our Studio 3 unit:

LO 09  Implement performance correction techniques in music productions with specialised tools.

LO 11  Adapt recorded audio performances into new audio assets.

LO 12  Repair audio signals to improve sound quality.

I’ll start with the last of these as repairing audio signals recorded in challenging environments was often one of the first things we had to do during our project after recording.  An example of this was the work we had to do on the recordings for our Game of Thrones project, captured on location at the Abbey Medieval Festival.  You can check out my blog about these location recordings here.

As mentioned in my blog, there were several challenges faced on the day with these recordings, including the windy weather, noisy generators, galloping horses and sound from PA systems across the festival site.

To clean up the recordings I used a range of techniques and processes, including close editing, filtering and equalisation.  I found applying a high-pass filter to signals affected by the intrusion of the generator to be very effective.  Likewise, applying a high-cut filter to signals affected by wind worked well.  I had to keep in mind, though, that there needed to be consistency in the sonic quality of the crowd samples for the project.
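For illustration, here’s a minimal Python sketch of the idea behind high-pass filtering, using a simple first-order difference equation (the plug-ins we actually used are far more sophisticated, with adjustable slopes):

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=48000.0):
    """Simple first-order high-pass filter:
    y[n] = a * (y[n-1] + x[n] - x[n-1]).
    Attenuates low-frequency rumble (generators, wind) while
    leaving content above the cutoff largely intact."""
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    a = rc / (rc + dt)
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(a * (out[-1] + samples[n] - samples[n - 1]))
    return out
```

Feeding a constant (0 Hz) signal through this decays it toward zero, which is exactly the behaviour you want against steady low-frequency hum.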

On other signals there were certain undesirable frequencies that needed to be identified and removed or reduced with a notch filter.  Using a narrow Q, these frequencies could be drastically reduced without too much impact on the remaining audio.  These annoying frequencies weren’t always immediately apparent during initial clean up of signals but became noticeable when looping small sections for our crowd ambience.  They also popped out in the later stages of the project when the limiter was applied during the mixing/mastering phase.

We had further flexibility with the recordings that were done with the Zoom H6. These were recorded in MS-RAW mode, where the front-facing centre mic is recorded on the left channel and the signal from the bi-directional side mic is recorded on the right channel.  This RAW stereo file can be decoded in a DAW by splitting the file into mono files (left is the centre, right is the sides), duplicating the ‘sides’ mono file, panning the centre to the middle and the two side copies hard left and right, and inverting the polarity on the new ‘right’ side with a plug-in. Spectral signal processing can then be applied to these mono tracks individually as desired, and the stereo width can be adjusted to suit.
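The duplicate/pan/polarity-invert routine above is the standard mid-side decode, L = M + S and R = M − S; a small Python sketch of the maths (the function name and `width` parameter are mine, for illustration):

```python
def ms_decode(mid, side, width=1.0):
    """Decode mid-side sample pairs to left/right.
    width scales the side signal: 0.0 collapses to mono,
    1.0 is the natural width, >1.0 widens the image.
    This is what the duplicate/pan/polarity-invert routine
    achieves with tracks in the DAW."""
    left = [m + width * s for m, s in zip(mid, side)]
    right = [m - width * s for m, s in zip(mid, side)]
    return left, right
```

Scaling `width` before decoding is how the stereo width of the MS-RAW recording can be adjusted after the fact.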

In earlier projects we used audio repair software – namely iZotope RX – to reduce unwanted noise, with good results.  On reflection, making more use of this software for this project would have been hugely beneficial, as it is far more advanced than the basic tools I primarily used and is far better at identifying problem frequencies than I am. Once a final limiter is applied in the later stages of a project, any problems that haven’t been addressed become evident.

For my next project (an enhanced audio book for children) where sound quality is critical (and very exposed),  I predict I will be using this software extensively.  I plan to incorporate its use into my editing routine and manage studio bookings carefully so that I have access to this brilliant software.

As well as signal repair, performance correction is an important part of the production process. It is also important to be able to adapt recorded audio performances into new audio assets. To illustrate my experiences in these areas across the trimester I will focus on the ADR for our Game of Thrones project and the narration recorded for the pilot of our audio book (next trimester’s major project).

Our original plan was to use voice actors for the ADR but this didn’t happen.  Fellow students were kind enough to help us out and Bran and Abbey set about completing this recording task.  They did a great job with this but when it came time to bring the dialogue into our master session we noticed a few things that needed to be worked on.  The synchronisation was just a little off.

In order to achieve a better match between the ADR and the screen we used elastic audio.  Elastic audio allows you to stretch or compress audio in real time to achieve the perfect synch.  You can change the timing of words or even syllables within a region to get a better match. Once elastic audio is enabled you can identify the syllables in the dialogue in analysis view (this shows the Event Transient Markers).  Then, in warp view, you can use warp markers to adjust the audio within a region as desired. By using warp markers as anchors you can stretch or compress audio in a specific region without affecting the surrounding audio.
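Conceptually, warp markers define a piecewise-linear remapping of time: audio between two anchors is uniformly stretched or compressed while audio outside them is untouched. A hypothetical Python sketch of just the timing maths (real elastic audio also preserves pitch, which this ignores):

```python
def warp_time(t, anchors):
    """Map an original time t to its warped position.
    anchors: sorted list of (original_time, warped_time) pairs
    acting like warp markers; times between two anchors are
    linearly interpolated.  Times outside the anchored span
    are returned unchanged in this simplified sketch."""
    for (o0, w0), (o1, w1) in zip(anchors, anchors[1:]):
        if o0 <= t <= o1:
            frac = (t - o0) / (o1 - o0)
            return w0 + frac * (w1 - w0)
    return t

# e.g. stretch the audio between 1.0s and 2.0s to end at 2.5s,
# leaving everything before 1.0s in place
markers = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.5)]
```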

I’ve found with ADR it’s not just the timing of the line delivery that needs to be spot on, but also the articulation of each word. One of Tyrion’s lines, “Yes, she can”, was well matched for timing but still appeared ‘out of synch’.  After close inspection, I noticed it was the delivery of the final ‘n’ of the word ‘can’ that wasn’t working.  In the original film this word is articulated so that the final ‘n’ sound is emphasised – “Yes, she caN” (you can see this in the way Tyrion’s lips and beard move). I located this ‘n’ in the waveform by zooming in and used clip gain to raise this portion of the file. This worked well.
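Clip gain like this is simply a static gain applied to a selected sample range; a minimal Python sketch (the range and gain values are illustrative, not the ones I used):

```python
def clip_gain(samples, start, end, gain_db):
    """Apply gain_db to samples[start:end] only, leaving the
    rest of the clip untouched (like clip gain on a zoomed-in
    selection, e.g. the final 'n' of a word)."""
    factor = 10 ** (gain_db / 20)
    return [s * factor if start <= i < end else s
            for i, s in enumerate(samples)]
```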

Clip gain was also used extensively to improve performance in my second project – a children’s audio book (Kumiko and the Dragon).  To produce a pilot (first chapter) of the audio book we recorded an 8-year-old girl in the studio (a challenge in itself). When it came time to edit her narration I found a great difference in level between the beginning and end of each sentence.  This was particularly a problem where we had to record multiple takes of ‘part sentences’.  Clip gain had to be used extensively to even out the performance.  Compression was also used, but this had to be gentle and would not achieve the desired results on its own.

Editing spoken word was quite a learning experience. By zooming in and examining waveforms you start to learn how to ‘read’ them.  For example, in a waveform you can see the little burst of air at the beginning of words starting with stop consonants (plosives).  These can then be reduced with clip gain or removed (being careful to keep the consonant sound) to improve the performance.  This technique can also be used to reduce sibilance.  This is fortunate for me because I am finding I’m having trouble using a de-esser effectively.

As the young girl is not a trained voice actor, her performance required a good deal of editing.  I used a range of techniques including close editing, adjustment of clip gain, the application of gentle compression and de-essing, and equalisation.  I also used a little ‘small room’ reverb on her voice.  I didn’t have a lot of time to really get into it as the GOT project was so intense but at least it gave me some ideas for next trimester.  You can hear the results in the before and after examples below.

Back to our ADR…  Once the synchronisation issues were dealt with, we needed to work on the actual sonic quality of the recordings.  The male characters portrayed in the scene are all clearly in their thirties and have older-sounding voices than our student voice actors.  We found pitching the audio down a couple of semitones was quite effective in achieving a more mature-sounding voice.

We used Time Shift in AudioSuite to adjust the pitch of each character to suit.  We used AudioSuite extensively in our post-production as it allowed us to process individual clips (e.g. individual spot effects) separately to the rest of the audio on a track.  In AudioSuite you can listen to the effect being applied prior to rendering the clip, which was very useful.

We also used pitch shifting on many of our ambience clips, such as various crowd parts.  For example, we would duplicate a crowd section, pitch it down a few semitones and layer it with the original track.  This improved the overall quality of the crowd sounds and added depth.
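Pitching down by n semitones corresponds to a frequency ratio of 2^(n/12); a small Python sketch of the ratio maths (Time Shift itself also preserves duration, which isn’t modelled here):

```python
def semitone_ratio(semitones):
    """Frequency ratio for a pitch shift in semitones.
    Negative values pitch down, e.g. -2 semitones is roughly 0.89."""
    return 2 ** (semitones / 12)

def shift_frequency(freq_hz, semitones):
    """Where a given frequency lands after the shift."""
    return freq_hz * semitone_ratio(semitones)
```

So pitching a voice down two semitones lowers every frequency to about 89% of its original value – enough to sound noticeably more mature without becoming obviously processed.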

The original crowd recordings were also adapted in other ways, including reversing audio.  This worked well when we needed to join repeated sections to add length.
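Joining a clip to a reversed copy of itself doubles its length with a seam that matches by construction; a minimal Python sketch:

```python
def extend_with_reverse(samples):
    """Append a reversed copy so the clip is twice as long.
    The two boundary sample values are identical, avoiding the
    click you can get butting two forward copies together."""
    return samples + samples[::-1]
```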

I’ve learnt a lot about post-production working on my projects across this trimester and intend to work on and improve my skills in this area next tri as this is an area I’m very interested in.

Horror Soundscape Take Two

Earlier in the trimester we were asked to create a sci-fi horror soundscape inspired by the ’90s film Event Horizon. During class we were asked to evaluate our peers’ soundscapes and provide feedback as to how they might be improved. Our next task was to reflect on the aesthetic outcomes of our soundscape and make improvements based on the feedback provided in the peer review.  Part of this reflection was to include a judgment on how our soundscape compared to a commercial production, i.e. the soundscape of Event Horizon.  In addition to this improvement on the original stereo version of the soundscape, I decided to attempt a 5.1 surround version to meet another learning outcome of the trimester. This blog aims to address these learning outcomes.

The primary feedback from my peers was to do with levels – “needs to be louder”.  The overall level was an issue, as well as the level and impact of the final moments of carnage.  As mentioned in my original blog, there were also a number of things I wanted to do to the piece to improve its effectiveness.  I decided to make these adjustments first and then incorporate the peer feedback as this was probably something that would need to be done towards the end of the mixing process.

I added some footsteps to make it more obvious that someone was moving through the ‘spaceship’.  It was challenging adding footsteps that would sound like those of an astronaut moving through a metal ship.  The sounds I came up with were a combination of ‘metal’ samples from freesound.org.  Unfortunately they aren’t great, so I used them very sparingly, just to give the idea of movement.  I also added more ‘banging metal’ and ‘alien’ sounds to keep the piece moving.

Most of my adjustments, however, were to do with the mixing of the soundscape.  I adjusted the relative levels and automation on the piece and put a lot more thought into the processing of the individual elements.  I added quite a bit more reverb in an effort to replicate a big empty metal spaceship and used filtering to place sounds within the space.

Taking on the peer feedback, I next looked at the final ‘carnage’ segment and explored ways to give this more impact.  I adjusted the scream first as this was specifically mentioned in the feedback.  I kept the same samples but balanced them differently, applied slightly different EQ, added compression and adjusted the fade out.  Then I applied similar processes to the various ‘alien’ sounds in this segment.

Finally, I added a limiter to the master bus in an effort to “make it louder”.  I’ve not used a brickwall limiter before and so I’m a little unsure as to the results.  It certainly sounds louder but there’s probably a lot more I could have done to improve other factors such as balance of frequencies and levels.  I look forward to learning more in our mastering classes.

When comparing my soundscape to the film’s soundtrack I can see some similarities in overall aesthetic e.g. the reverberant metal, clunky environment and ‘industrial’ feel.  I included persistent banging throughout the soundscape to give the idea of a presence within the ship as this was a strategy employed in the film soundtrack.

Throughout the film you can hear a ‘thunderstorm’ of sorts outside the ship.  I wasn’t keen to add such sounds as, without visuals, a thunderstorm would sound very out of place in a sci-fi film set in space.  In an effort to achieve a similar effect I instead added some wind-like rumble to give a similar feel. I’ve also included segments of eerie music similar to that of the film and a low drone throughout to gel it together.

I also drew inspiration from another sci-fi horror film – Alien.  Many of the ‘alien creature’ sounds in my soundscape are of the screeching type found in the 1979 film.  On reflection, this aesthetic doesn’t really match that of Event Horizon, as the horror in that film isn’t really based on aliens.  Anyway, you can hear the final stereo version of my soundscape above.  I’m pretty happy with the overall results.

Next I had a go at mixing in surround.  The first step was to calibrate the monitors.  This involves placing an SPL meter at ear height in the ‘sweet spot’ and positioning the speakers (and adjusting settings) so that they each measure 75dB on the meter.  To do this I taped the SPL meter (set to C weighting) to a mic stand and placed it in the sweet spot where I would be mixing. Then I set up an aux track in the session and inserted a signal generator (set to pink noise).

The monitors need to be angled correctly and placed equidistant from the sweet spot. My mate Alex helped me check these measurements with a tape measure.  I started with the centre monitor and increased the C/R monitoring level on the console until I had a reading of 75dB. After this we selected each subsequent monitor individually and checked its SPL reading, making adjustments where necessary.
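The per-speaker adjustment boils down to comparing each measured SPL against the 75dB target and trimming by the difference; a hypothetical Python sketch (the speaker names and readings are illustrative):

```python
def calibration_trims(measured_spl, target_db=75.0):
    """For each monitor's measured SPL (pink noise, C-weighted,
    meter at the sweet spot), return the gain change in dB
    needed to hit the target level."""
    return {name: round(target_db - spl, 1)
            for name, spl in measured_spl.items()}
```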

It had been a few weeks since our surround mixing classes, so it’s fair to say it took me a while going through my notes to get up and running.  Once I did get going (sorting out I/Os etc. and inserting a 5.1 master fader) I quickly realised how much fun you could have with surround! I spent most of my time just moving things around, seeing how they sat in the mix.  I tried to keep the general surround mixing principles in mind as I did this, e.g. dry in the front and wet in the back.  The effect of having reverb in the rears was amazing.  The sound is so much more spacious than my stereo version.

I also played around with the divergence setting to see what effect this would have. The task of mixing in surround was somewhat easier than it could have been, given there was no dialogue to contend with.  That said, I only scratched the surface and have so much to learn about surround.  This experience was good practice for our final studio Game of Thrones project, where we will be mixing in surround.

Location recording at the Abbey

Location recording complete!  As mentioned in an earlier blog, recording sounds at the Abbey Medieval Festival forms a major part of our Game of Thrones sound replacement project. We’d put a lot of effort into the planning and preparation for the two days of recording (which can be checked out here), and were really looking forward to it.

When recording outdoors you are really at the mercy of the elements. As the weekend approached we became a tad concerned about the weather forecast, which seemed to get worse day by day: from the “possibility of showers” early in the week to a “high chance of rain and possible thunderstorm” by the night before. The forecast for the Sunday was consistently “sunny”.  When it appeared the night before that things weren’t going to go our way on the Saturday, we made the decision to cancel that day’s recording, happy in the knowledge that we still had the Sunday to capture the sounds we were after. This decision turned out to be a mistake!

First lesson learned: “remain positive and be prepared to head out on location no matter what!”  Saturday turned out to be glorious but, due to the group decision not to go, we had already made other plans.  The day was not a total waste, though: after several hours of sunshine, and with a forecast for Sunday that now said “windy” (where did that come from?), Bran and I decided we had to seize the day and capture whatever we could in the time we had left.  In hindsight we should have been ready to go as planned, regardless of the weather prediction.

The afternoon’s recording was valuable not only for the sounds we actually captured but for what we learnt about location recording that we hadn’t considered.  For example, when capturing the great crowd sounds in the jousting arena we noticed some extraneous noise in the headphones that turned out to be a generator providing power to the PA system. Later an even worse sound began impacting on our recordings.  This turned out to be a truck emptying some porta-loos at the back of the arena!

Bran and I also discovered how challenging it can be managing gear on site. Even though we had packed light, it seemed at times we each needed another pair of hands to effectively manage mics, cables and other recording equipment and to keep it safe. I made a mental note of some adjustments that could be made for the following day.  All in all, Saturday afternoon was a successful mission.

As I lay in bed early Sunday morning, woken by the howling wind storm outside, I recalled the lesson learned from the previous day: “remain positive and be prepared to head out on location no matter what!”  I repeated this like a mantra, trying to convince myself that it would work out.  Well, it was looking pretty dodgy when we arrived at 8am, and not much better at around 9am when we started recording – no amount of high-pass filtering or ‘dead wombat’ was going to combat this wind! Fortunately, though, the wind died down pretty quickly and we had near-perfect conditions for the rest of the day. Who would have thought?

We worked in teams across the day, capturing a variety of ambient and specific sounds.  Lisa and I are responsible for the various ambient sounds in the GOT clip including the ‘walla’ (crowd sounds).  So, our goal for the day was to record as much crowd noise as we could in order to capture all the different crowd sounds we needed – cheers, boos, call outs, claps, laughter and general crowd hum.  We recorded at three different joust sessions as well as in front of the ‘castle crowd’ and other gatherings of people across the festival site. At times we also captured some other interesting ‘medieval’ sounds between walla recordings.

To achieve recordings with different sonic qualities we used different mics and configurations.  For example, we used the mid-side capsule of the Zoom H6, two NT5s in an ORTF configuration on a stereo bar, and an NTG2 on a boom to capture more specific sounds from the crowd.  We recorded in MS-RAW mode on the H6 for the mid-side mic.  This essentially records the unidirectional centre mic on the left channel and the signal from the bi-directional mic on the right channel. The recorded stereo file can then be split into mono files and manipulated in Pro Tools to adjust the stereo width as required.  More on this in a later blog.

We also recorded the various crowds from different distances and from different perspectives (e.g. in front of, behind and among the crowd).  Having our media passes assisted us greatly, giving us access to recording positions around the festival not accessible to the general public.

In addition to the crowd sounds, we captured some great drumming in the Romani camp, which we think would sound great down low in the mix to give that ‘big event’ feel. Our composer Brad spent some time with us during the location recording and was quite inspired by the various rhythms and patterns he heard in this music.

It was an extremely long day of recording, but well worth the effort. We are very happy with the recordings we captured across the weekend, particularly the crowd sounds.  It was more difficult for Bran and Abbey to record spot effects at the festival due to the excessive noise of the festival goers, PAs etc.  We knew this would be the case, and so they have been busy since the festival recording the required sounds at other locations in Brisbane and in the studio.

Reflections on a Jazz Recording

Last Wednesday and Thursday we had the extraordinary privilege of working with Bart Stenhouse and his assembled team of ultra-talented jazz musicians.  Akshay had worked his magic and secured this exciting two-day recording session with the band after a previous successful session earlier in the year.

We’ve been asked to reflect on our experiences across the day, in particular our observations of group dynamics and our personal role and contributions to the session.  I’d also like to reflect on the things I took away from the project in terms of my learning.

Our audio cohort was split into two teams, ‘red’ and ‘blue’; the red team to record on Wednesday and the blue on Thursday. These two recording days were, of course, not the beginning of the project.  We had spent a good deal of time in pre-production, planning the execution of the day so that things would run smoothly and the desired outcomes of the session would be achieved.  This pre-production process is one of the key things I will take away from the project.

Guy Gray facilitated a ‘test run’ in the Neve studio in the lead-up week where we ran through the critical process of line check. We methodically worked through each of the available inputs, testing signal and checking for any issues.  This exercise was valuable on so many levels. It was a return to the basics of signal flow which, I think it’s fair to say, for our group was greatly needed and appreciated.  I know personally my understanding of signal flow, while much improved lately, is not as robust as I would like it to be.

This line check also identified some modules that weren’t operating as they should – a problem with the monitor path on channel 4, no phantom on channel 9 and some sort of issue with channel 19 where the signal was attenuated by about 4dB.  Needless to say, having this information in advance of the actual recording session avoided potential time wastage.

Guy continually tested our thinking during this process.  Where some of us would have been satisfied in the knowledge that “Channel 4 is not working”, Guy wanted to know where exactly the problem was.  This immediately exposed some shortcomings and uncertainty with aspects of basic signal flow.  With a couple of fader moves we should have instantly been able to diagnose the issue as being somewhere on the monitor path, rather than the recording path, which could be patched around if necessary.

This session was also a much needed return to the basics in terms of the console itself.  Akshay stepped in at this point to run us through the optimal recording setup for the Neve.  He demonstrated how he would always swap the faders (different to a flip of the desk), assign the monitor bus to the main mix, take all the channels off ‘main mix’, select the global pan and switch on the auxiliaries in preparation for headphone mixes.  Akshay also reminded us of some things to look out for, such as how the pans remain in their original position despite the fader swap.

Another thing we ran through during this session was the fundamentals of a Pro Tools session set up. Guy explained that a preliminary check of some key aspects of Pro Tools and its associated hardware could save untold headaches later in the recording session.  This basic checklist included things like checking clock synchronisation, disk allocation, I/Os, buffer size and even space on the hard drive. The check also included confirming auto backup was enabled in Pro Tools preferences. We saved the Pro Tools session used for line check as a template to use the following week.
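The spirit of that checklist can be sketched in code. Here is a small, purely illustrative Python sketch — the setting names, expected values and free-space threshold are my own assumptions, not anything from the actual session template:

```python
import shutil

# Hypothetical pre-session checklist, inspired by the checks described above.
# The expected settings and the free-space threshold are illustrative assumptions.
EXPECTED = {
    "sample_rate": 48000,   # Hz
    "bit_depth": 24,        # bits
    "buffer_size": 256,     # samples; low enough for comfortable tracking
    "auto_backup": True,
}

MIN_FREE_GB = 50  # arbitrary safety margin for a long multitrack session


def check_session(settings, path="/"):
    """Return a list of problems found; an empty list means all clear."""
    problems = []
    for key, expected in EXPECTED.items():
        actual = settings.get(key)
        if actual != expected:
            problems.append(f"{key}: expected {expected}, got {actual}")
    free_gb = shutil.disk_usage(path).free / 1e9
    if free_gb < MIN_FREE_GB:
        problems.append(f"only {free_gb:.0f} GB free on {path}")
    return problems


# Example: a session with the wrong buffer size and auto backup disabled
issues = check_session({"sample_rate": 48000, "bit_depth": 24,
                        "buffer_size": 1024, "auto_backup": False})
```

Running the check before the musicians arrive surfaces exactly the kind of problems (wrong buffer size, backup off, a nearly full drive) that would otherwise eat into recording time.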

On the day of recording my assigned role was as Pro Tools operator, along with my mate Guy Dixon. I’ll be honest, I was more than a little nervous at being placed in this role as I’d observed the recording session earlier in the year, and knew that certain aspects such as ‘punching in’ during overdubs would be a challenge, particularly with this style of music.

Being the overly-cautious person I am (trying to beat this out of myself) I steadied myself early in the session by double and triple checking all the Pro Tools parameters (e.g. sample rate, bit depth) and labelling tracks etc.  The preparation that had gone into the day really paid off at this stage.  The input list had been sorted, which meant the microphone team knew exactly what mic was going into what channel, as did the console and tape operators. Guy D and I worked closely with Dan and Ben on the desk to ensure everything was ready to hit record when the musicians turned up.

My observations of the group dynamics at this stage of the day were somewhat limited to the team in the control room.  I was so focused on what we were doing that I wasn’t able to directly observe the other team members working in the studio, but by all accounts the team worked quietly and efficiently getting the job done.  What I could tell was that the vibe on the day was great.  Everyone arrived early, ready to do their job.

In the control room, the group dynamics were very positive.  I felt confident in the abilities of those around me and appreciated everyone’s calm demeanour. Dan, Ben, Abbey, Guy D and I were all working co-operatively to achieve our set up goals.  We were probably a little too focused at one point: when Bart arrived in the room we didn’t really stop to welcome him (though I’m very confident the project managers had already greeted him positively).  I think, for me, I felt a little intimidated by his presence; another thing I’m working on.

The project managers were doing a great job facilitating the set up and ensuring things ran smoothly, including the arrival of the musicians.  They were regularly checking with us to see that things were going well.

After Guy facilitated the microphone set up in the studio he entered the control room to start the sound check. It was a real thrill to watch a professional in action.  Great mics and mic placement meant little needed to be done in the way of EQ’ing or the like.  A few tweaks here and there and the band was sounding great. At one point Guy made some adjustments to the bass – pulling back the channel path send and cranking the preamp gain. This was probably the first time I’d really seen the power of the Neve preamps giving colour and character to a signal. Guy also showed us how to set the levels in such a way that the faders were relatively level in Pro Tools.

The massive learning curve continued once we actually started recording.  I was able to see the benefits of recording in ‘punch’ mode as well as recording the click.  We had also set up a ‘Work in Progress’ 2-track.  This is something I will certainly use in future recording sessions.  I also learned other standard recording workflow processes, such as muting the click track as the final sustains played out.

These early stages of the recording went relatively smoothly from my perspective as tape operator, but things certainly ramped up when it was time to do the overdub ‘punch ins’.  I tensed up at this point, knowing what was ahead.  As predicted, my musical knowledge (or lack thereof) was to prove somewhat of a hindrance during this process.  I’m unfamiliar with the form of jazz music and certainly not well equipped to keep up with the unusual time signatures of the genre.  I found it challenging to confidently locate the required segment of music for ‘punch in’ and was acutely aware of the need to not interrupt the musicians’ creative flow with unnecessary delays.

I also found it a challenge to identify the appropriate transient/note on which to edit the punch in and do an effective cross-fade on the fly.  My lack of proficiency with Pro Tools didn’t help at this stage. Being put in the hot seat in such a recording situation quickly makes you realise how necessary it is to not only have a working knowledge of the DAW but to be able to quickly navigate your way around it to get yourself out of trouble.  I’m really wishing now I had purchased my copy of Pro Tools at the beginning of this course.

On the upside, I felt I handled the pressure positively.  I’ve become stronger mentally as I’ve become older and feel I’m able to cope with challenging situations better than I might have in the past.  I also feel I have cognitive stamina, that is I’m able to remain focussed on something important for long periods of time.  I also felt supported positively by most of the team around me which really helped.  I love working as part of a team and look forward to future recordings with my peers.

A huge thank you to Bart Stenhouse and his band (Trent Bryson-Dean, David Galea and James Whiting) for the privilege of working with such talented musicians, to Akshay Kalawar and Guy Gray for the opportunity to take part in such an experience and learn so much, and to the rest of the red team for such a positive team work experience on the day!

Location Recording at Abbeystowe

A large part of the sound for our A Game of Thrones sound design project will be recorded on location at the Abbey Medieval Festival, Caboolture on the weekend of July 11 and 12.  Having attended this festival last year I’m confident we will be able to capture authentic sounds that will be extremely valuable for our project.


At the festival we will be capturing sounds that will be used as spot effects and ambience or atmos (including walla tracks). Capturing quality ambience will be critical to our project as we will be using it not just to provide the setting for our scene but also to create drama.  In the video ‘Australia’ – Behind the Scenes – Location (from 7:46 min), Baz Luhrmann explains the importance of ambience in a film soundtrack and gives a great example of how ambience can be employed to create drama in a narrative.   Of course, for our final production, many of the spot sound effects will need to be supplemented with sounds created back in the studio, but the atmos tracks will need to be spot on as they cannot be recreated in post.

In addition to these sounds we will also be attempting a type of ‘on location Foley’, mainly for the ‘on gravel’ footsteps.  We found while doing the Monty Python sound replacement last trimester that recording footsteps on the Foley stage in the post-production suite was problematic.  The reverberation in the small space was difficult to avoid and could not really be removed after the fact.  As a result it was difficult to place the footsteps in the space of the scene.  There’s a gravel road behind the jousting arena where we’re planning to record this Foley.  To sync the footsteps to the video we’ll be using a tablet loaded with the clip.  We’ll see how this goes…

As this recording process will play such an important role in the success of our project, I have been spending a great deal of time researching best practice in location recording to ensure we get the most out of the weekend.  A common message across all the sources I’ve consulted is that of PLANNING and PREPARATION.  This makes sense, as we’ve certainly learnt the hard way in previous experiences that even the smallest things can derail a studio recording.  As well as all the usual things that can go wrong in the recording process, location recording presents many additional challenges and these will have to be planned for.

This careful planning and preparation will include things such as obtaining permission to record at the festival, defining the objectives of the recording, identifying and defining the sounds required, selection and booking of equipment (and backups), researching location recording techniques, storing audio data and scheduling/time management.  We will also need to consider other less obvious factors such as our own comfort on the day to get us through a long day of recording.

I thought I’d look at each of these things in turn.

Permission to record

We certainly didn’t want to rock up to the festival with all our gear only to be denied entry.  Early in our pre-production I contacted the festival organisers requesting permission to record and was pleasantly surprised that not only were we able to record, but we would be given media passes for the weekend.  This will assist us with access to the entire festival and give us some legitimacy to approach re-enactors to produce particular sounds (e.g. sword hits).

(Image: Crowd)

Defining objectives of the recording and identifying sounds to be recorded

As mentioned in my previous blog, we have spent a good deal of time planning the aesthetics of the production and so we have a clear idea of not only the sounds we’re after, but the quality of those recordings.  We’ve carefully identified all the sounds, including ambience we need for the scene and have mapped out potential locations within the festival where these sounds could be obtained.  Each team will have a printed copy of the sound inventory as well as digital access to the video clip to confirm the sound.

Selection and booking of equipment

Knowing what equipment is best suited to the recording task at hand, and how to use it effectively, was one of the tips that featured in most of the articles and books I’ve consulted.  For our recording assignment, we’ve decided to work in pairs, so we’ll be using two portable digital recorders – the Sound Devices 552 production mixer and the Zoom H6.  Each pair of location recordists will have a boom and blimp with RODE NTG2s (lightweight condenser shotgun mics).  The booms will mainly be used for stability (in conjunction with the pistol grip shock mount attached to the blimp).  These digital recorders both require AA batteries for power, so we will be taking along an excess of spares to get us through the weekend.  Losing power will not be an option.

In addition to the shotgun mics we will be taking a small selection of condenser and dynamic microphones. Of course, the selection of mic is dependent on the source material.  The shotgun mics will be used when we require a good deal of directionality as they are super-cardioid. We’re keen to capture some quality stereo recordings, so we’ll take along a stereo bar as well. The H6 has two of its own options for stereo recording – attachable X/Y and mid-side capsules – as well as four further inputs for mics.

The selection of mics needs to consider factors such as robustness for location recording, frequency response, maximum SPL and directionality.  We’ll need to consider not only what sounds a mic will capture but what it will reject.  We’re taking along my SM58 to capture louder sounds with a heavy attack – most notably the cannons at the festival.  There are no cannons in our video clip, but when else would you get the chance to record a cannon!?

Other gear will include headphones, an ample supply of short XLR cables, a lightweight tripod, a DSLR, tape, headphone splitters and ‘Dead Wombats’.

We booked the gear out for a test run last week to make sure we’re confident using each piece of equipment. I was really impressed with what the Zoom and mixer could do.  We learnt how to use pre-record (so we don’t miss a magic moment), the backup recording at 12 dB below the main (in case the first recording clips) and how to effectively set levels, which is of critical importance.  We also picked up a few potential problems.  For example, we found a lot of crackling with even the slightest movement when the shotgun mic was connected via the XLR attached to the shock mount/blimp.  Bypassing this and connecting our own XLR directly to the mic solved the problem.

This test run also highlighted the importance of portability with the field gear. We need to balance our desire to ‘cover all bases’ with the need to be able to move comfortably around the festival with the gear over two long days. Size and weight have to be important considerations.
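The 12 dB safety margin on the backup track is easy to quantify. As a worked example (using the standard 20·log10 amplitude convention; the peak values are just illustrative numbers, not measurements from our test run):

```python
import math


def db_to_amplitude(db):
    """Convert a decibel offset to a linear amplitude ratio (20*log10 convention)."""
    return 10 ** (db / 20)


def amplitude_to_db(ratio):
    """Convert a linear amplitude ratio back to decibels."""
    return 20 * math.log10(ratio)


# The backup track, recorded 12 dB below the main track, sits at roughly
# a quarter of the main track's amplitude:
backup_ratio = db_to_amplitude(-12)   # ~0.251

# If the main track clips at full scale (1.0), the backup peak is still
# well below clipping, leaving ~12 dB of recoverable headroom:
main_peak = 1.0
backup_peak = main_peak * backup_ratio
headroom_db = amplitude_to_db(main_peak / backup_peak)
```

In other words, a crowd surge or cannon blast that slams the main track into clipping should still leave the backup track clean, which can then be gained back up in post.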

(Image: Cannon)

Location recording techniques

We plan on using a variety of recording techniques to capture our sound.  We’ll be experimenting with different stereo configurations to capture 2-channel left/right recordings.  We’re also keen to record in dual mono. This way we can get two different colourations for the same raw sound.  This could also be useful for adjusting distance in the mix. For example, we could use a directional mic to capture detail and an omni-directional mic to control distance (blending the two in post).
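As a rough illustration of that blend-in-post idea, here is a minimal Python sketch. The function name, the single mix parameter and the toy sample values are my own assumptions for the sake of the example; in practice this blend would happen on faders in the DAW:

```python
# Minimal sketch: mix a directional ("detail") recording with an omni
# ("distance") recording of the same source, sample by sample.

def blend(detail, distance, mix=0.5):
    """Blend two equal-length mono signals.

    mix=1.0 is all detail (sounds close), mix=0.0 is all distance (sounds far).
    """
    if len(detail) != len(distance):
        raise ValueError("signals must be the same length")
    return [mix * d + (1.0 - mix) * a for d, a in zip(detail, distance)]


# Moving the source 'further away' in the mix is then just a lower mix value:
close = blend([1.0, 0.5], [0.2, 0.1], mix=0.8)   # mostly the shotgun mic
far   = blend([1.0, 0.5], [0.2, 0.1], mix=0.2)   # mostly the omni
```

The design point is that distance is adjusted by a single parameter at mix time, rather than being baked into the recording.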

We’ll also have the option of recording each sound source at different levels in each channel and from different perspectives. Our technique for each sound recording will be dependent on the sound source and what exactly we are trying to capture about that sound. For example, does it need a sense of space?  With the recorders we have, there is also the option to record multi-channel.

Storing audio data

Both recording devices record to an SD card.  We will need to ensure the cards are clear before heading out.  We’ve checked the size of the available cards and believe there will be more than enough room for our audio recording.  Managing the audio data will be important as there’s going to be an awful lot of it. We’ll be using a sound log on the day and organising recordings into particular folders depending on the type.  This will make the process of locating, categorizing and editing sounds a lot easier when we get back to the studio.
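To illustrate the kind of sorting step we have in mind back at the studio, here is a hypothetical Python sketch. The category prefixes (e.g. `AMB_`, `SPOT_`) are an invented filename convention for illustration only, not our actual sound log format:

```python
from pathlib import Path
import shutil

# Hypothetical mapping from a filename prefix (from the sound log) to a
# destination folder. The prefixes and folder names are assumptions.
CATEGORIES = {"AMB": "ambience", "SPOT": "spot_effects", "FOLEY": "foley"}


def sort_recordings(source_dir, dest_dir):
    """Move WAV files into category folders based on their filename prefix.

    Files with an unrecognised prefix go into an 'unsorted' folder for
    manual review. Returns a list of (filename, folder) pairs for logging.
    """
    source, dest = Path(source_dir), Path(dest_dir)
    moved = []
    for wav in sorted(source.glob("*.wav")):
        prefix = wav.name.split("_", 1)[0].upper()
        folder = dest / CATEGORIES.get(prefix, "unsorted")
        folder.mkdir(parents=True, exist_ok=True)
        shutil.move(str(wav), str(folder / wav.name))
        moved.append((wav.name, folder.name))
    return moved
```

The returned list doubles as a transfer log, which can be checked against the handwritten sound log from the day to confirm nothing went missing between the SD cards and the edit drive.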

Scheduling/ Time management

Managing our time to ensure we capture all of our desired sounds will be critical.  We’ve had to plan for travel time, set up, liaising with festival officials, possible equipment malfunctions, actual recording time, sound logging and reviewing, moving between locations, programming of festival events, lunch breaks and comfort stops.  As mentioned above, we’re working in two teams, which will allow us to cover more ground and record the same sound sources at different times of the day to achieve different effects.

Personal comfort

Lugging recording gear around all day is most likely going to be more taxing than we expect.  We’ve come up with a comfort checklist that will hopefully manage this.  The list includes things you’d expect, such as applying sunscreen, bringing layers for changes in weather conditions, comfortable shoes, a hat, water etc.  We’re going to purchase food at the event to lighten our load a little.


Challenges with location recording

Location recording can often be unpredictable.  Obviously the weather will have a significant effect on the day’s proceedings.  Being winter, we’re hopeful the conditions will be comfortable.  Wind is always a factor, even on a calm day, so we’ll be using dead wombats and other wind shields as well as high pass filters where appropriate.

At certain parts of the day, the cannons start firing regularly.  This will need to be managed as will the crowds across the day.

We’ll also need to look after the gear, making sure it’s protected, including from prolonged exposure to direct sunlight.

I’m beyond excited about our upcoming recording adventure.  Hopefully things go to plan and if not I trust our planning and preparation will ensure any hiccups will not derail the day!

References:

Cvrgoje Sound. (n.d.) A practical guide to field recording Pt 1.  Retrieved from http://www.cvrgoje.com/blog/practical-guide-field-recording-part-1

Latta, W. (13 July, 2009). A Beginner’s Guide to Field Recording, Pt 1. Retrieved from http://music.tutsplus.com/tutorials/a-beginners-guide-to-field-recording-pt-1–audio-1785

RODE Microphones (n.d.). RODE University: Location Sound Effects Recording. Retrieved from https://www.youtube.com/watch?v=Eb_7SHmMy_E

Viers, R. (2008). The Sound Effects Bible: How to Create and Record Hollywood Style Sound Effects. Studio City, California: Michael Wiese Productions.

Images:

http://abbeymedievalfestival.com/image-galleries/