Let the Blogging Continue

Chris's Blog Archive: March 2016

In March I mused on what I'd learnt as a result of taking part in February Album Writing Month once again, I contemplated the neurological reasons why people respond to some songs more than others, and pondered whether my guitar effects signal chain could be improved (it could). And there was talk of typefaces, and of AI deer.


Edgar, in case you weren't aware of him yet, is a (virtual) deer who wanders around the game world of Grand Theft Auto V wreaking havoc. Seattle artist Brent Watanabe has modded the game to make Edgar indestructible, and the deer teleports between predetermined sites in GTA V's fictional state of San Andreas, where he barges into pedestrians, blunders into gunfights between cops and gangsters, causes traffic chaos, and occasionally wanders into the sea (spoiler: he doesn't float).

Even though the points at which Edgar appears are predetermined and he follows a set path every time, the rest of the game is complex enough for unpredictable things to happen. His interactions with traffic are fascinating and I was surprised how infrequently he gets hit by cars. They will usually brake and avoid him, although he has less luck with trains. His interactions with other non-player characters range from garnering passive-aggressive shouts of "well EXCUSE me!" to opening up with automatic weapons. I've seen him end up in a gunfight with police at the airport, with fuel tanks exploding; he's ended up with three stars on the police "Wanted" display at the top of the game screen more than once and I hear he's even made all five stars light up before now (as well as managing to somehow blow up a jet airliner).

Yet at other times the programming is frustratingly simple. He gets stuck in the boarded-up entrance to a mine and stands there, running on the spot, until he teleports somewhere else. The same thing happens when he arrives at the game's version of the Playboy Mansion as he will always end up trapped in the pool. Collision detection is barely implemented and Edgar regularly gets stuck against walls or trapped in corners.

Despite this, the proceedings are hypnotic to watch and it will all be streamed online until April 20th. You can watch his exploits as they happen on Twitch TV.


When you start writing songs, you imagine that people will respond to your work with as much enthusiasm and joy as you do when you play back the finished track. The reality of the process is very different, of course. Not only are your early efforts terrible, but you realise that it's very hard to get people to listen to your work and it's even harder to get them to respond favourably to it. Interestingly, it's the first of those two problems that's the most important and, equally, it's the hardest one to crack. The music industry's response to that challenge has shaped the sound of popular music over the last couple of decades, and I'd argue that it hasn't been a change for the better.

It's all down to a discovery psychologists made years ago: a listener's familiarity with a piece of music is what governs their response to it. As long ago as 1906, Heinrich Schenker noted the importance of repetition in triggering a response in his treatise Harmony, asserting that "repetition [...] is the basis of music as an art." Repetition is a reliable way of generating an emotional response in the listener, although repeated listening beyond a certain point produces boredom. Crucially, the listener responds through mere exposure to a piece - they don't have to actively listen to a song to be affected by it. Songwriters have known this for much longer than psychologists, though - which is why a song's hook is considered such an important feature. These days, we even know which parts of the brain respond to those hooks.

In a 2014 essay for Aeon, Elizabeth Margulis explored more unusual aspects of how the brain responds to repetition, including the "speech to song" illusion. It's an interesting read, but again, musicians and composers knew about this long before the psychologists cottoned on - while Diana Deutsch first wrote up the speech-to-song illusion in 1995, Steve Reich was using it in his work back in the 1960s, most notably in his 1965 composition It's Gonna Rain, where a tape recording of a single sentence is broken up and repeated until the words lose their meaning and become simple, musical sounds. Have a listen for yourself.

The music industry has not been slow to cotton on to the use of repetition as a way to increase sales. Modern pop songs rely on repetition to generate the required level of familiarity in a low number of listens. Why? Because familiarity with a song is one of the biggest reasons why people buy music - crazily enough, it's a bigger influencing factor than whether or not they actually like the song. Labels wasted no time putting into practice the lesson that repetitive lyrics drive a song's market success. It should come as no surprise, then, to encounter hugely repetitive lyrics in songs by successful artists. And we do, whether it's a song by Beyoncé:

Cause we like to party, hey!
Hey! Hey! Hey! Hey! Hey!
Cause we like to party, hey!
Hey! Hey! Hey! Hey! Hey!
Cause we like to party!

or Nicki Minaj:

You a stupid hoe, you a you a stupid hoe
You a stupid hoe, you a you a stupid hoe
You a stupid hoe, you a you a stupid hoe
You a stupid hoe, yeah you a you a stupid hoe
You a stupid hoe, you a you a stupid hoe
You a stupid hoe, you a you a stupid hoe
You a stupid hoe, you a you a stupid hoe
You a stupid hoe, yeah you a you a stupid hoe

or Rihanna, who takes things to quite frankly ludicrous levels:


Yes, I know songwriters have always used repetition as a way of hooking the listener. But back in the day, at least people were moderately subtle about things. They used to make it look like they'd made an effort to make something beautiful. These days, the gloves are off.

But as depressing as this may seem, it's only one part of the story.

Music companies have known the value of getting airplay for their artists for a very long time. Dodgy dealing to ensure repeated plays has been a part of life since radio stations first started playing music. Bribing disc jockeys to play your artist's song became known as Payola back in the 1950s, but the practice was well established even back then. Despite a number of high-profile prosecutions in the 50s and 60s, the problem hasn't gone away. It probably never will - there's just too much money to be made from doing it.

There's another dimension to the problem as well.

Most radio stations playing popular music use computer-generated playlists these days. Known as Music Scheduling Systems, they use algorithms to predict which songs will maximise their share of the available audience. The first example was Selector, developed by RCS almost thirty years ago. Repeating songs means that listeners tune in, because they're expecting to hear their current favourite song. As we've just seen, the songs that become their favourites are the ones that are played a lot on the radio, and lo and behold a vicious circle appears. Repeating those popular songs takes up a lot of airtime. Add in the spaces for advertising and the places where the DJ does his or her thing (that is, if the station even bothers with DJs any more; some, like Jack FM, have largely dispensed with them - songs and ad breaks are cued and played by computer) and suddenly there isn't much room for new music by unfamiliar artists, no matter how much they might deserve a place to be heard. There certainly isn't time to play those old tracks that you love so much but which never got airplay, even when they were first released (have you seen my record collection?)
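The core idea behind these systems is simple enough to sketch in a few lines of code. Here's a toy rotation scheduler - entirely my own illustration, not how Selector or any real music scheduling system actually works - which shows how weighting a playlist by popularity squeezes everything else out of the rotation:

```python
import heapq

def make_playlist(songs, hours, per_hour=12):
    """Toy 'music scheduling' sketch: each song is (title, score),
    where a higher audience score means a shorter rest between
    plays - so familiar hits dominate the rotation."""
    # Min-heap of (slot when the song is next allowed, title)
    heap = [(0, title) for title, score in songs]
    heapq.heapify(heap)
    # Popular songs get a short rest period; obscure ones a long one
    rest = {title: max(1, per_hour - score) for title, score in songs}
    playlist = []
    for slot in range(hours * per_hour):
        ready, title = heapq.heappop(heap)
        playlist.append(title)
        # Schedule the song's next allowed appearance
        heapq.heappush(heap, (max(slot, ready) + rest[title], title))
    return playlist
```

Run it over a day's airtime and the handful of high-scoring songs fill almost every slot, while the low scorers surface once every few hours - the vicious circle in miniature.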

The vast majority of recorded music has effectively disappeared from the airwaves. You won't hear it on the radio, because that's not how radio works.

Even shows that play "requests" tend to pick what they play from a small catalogue of tracks that are approved by playlist compilers, whether they're humans or computers. It's all about maintaining that audience by focusing on the core artists that keep people listening. Diversity doesn't make money, because it doesn't draw in the audiences.

This approach has reached its ultimate expression on TV. Talent shows have become a prime way to get listeners familiar with an artist or a song. They're not about talent at all, really; they're about repetition. Getting a song in front of millions of people every week means that the music will make it into the charts based solely on familiarity. That approach has made Simon Cowell a very rich man; the artists who get fed into the hit-making sausage machine, less so. But at this point, bear this in mind: as psychologists discover more details about what features of music make our brains light up and click on that "buy" button on iTunes, things are only going to get worse.


This year's FAWM may have drawn to a close, but I'm still doing stuff in the studio and over the past week or so I've been tweaking my setup, not just to make things a bit more efficient but also to make them more flexible.

Up until now I've been using the Korg D3200 digital multitrack recorder as the studio's mixer. With 12 inputs, it has enabled me to keep most of my instruments connected up permanently. That's incredibly useful when you're switching between synths and guitars as often as I do. The downside, of course, is that the D3200 has to be booted up for me to hear anything at all, let alone feed signals into my DAW. After splitting my guitar effects chain into an A and a B side, I realised I'd run out of channels, particularly as I wanted to be able to record the "B" chain in its native stereo. So I sat down and had a think about what I ought to do, browsed the web for a suitable solution, and came up with this:

Current pedal setup

I've just realised that a lot of the gear in that photo is at least thirty years old, and some of it, which was second-hand when I bought it, will be considerably older than that. But if it works, don't fix it, right? The "spot the difference" part of the photo is that I now have a dedicated mixer: an eight-channel Mackie Mix 8. I've liked the robustness of Mackie gear ever since getting their "Big Knob" studio controller a few years back. It's built like a tank and has withstood all sorts of abuse, up to and including having a mic stand dropped on it. The Mix 8 feels equally robust, and the three-band EQ on each input is a nice bonus.

Now that I have a separate mixer for the guitar, I can listen to my guitar playing in the monitors without having to switch all the rest of the studio on. The Main outs go into the D3200 for recording but the Control Room outs go into the Big Knob for routing to the monitors. At the same time, the mixer's Aux feed goes into the Blackstar amp so I can still get the apocalyptic sounds I mentioned below. What this means is that I no longer have to faff around switching connections over depending on what it is I'm trying to do. Everything is ready to use without the need to unplug one thing so I can plug in something else. The red cables leading from the floor are the guitar feeds. The guitar signal is fed from the guitar into my old Boss volume pedal and from there into a Mooer A/B switch. The two chains are as follows:

The "A" side (mono):

  • Electro Harmonix Big Muff Pi
  • Boss CE-2 Chorus
  • Boss BF-2 Flanger
  • Digitech PDS-1000 digital delay

The "B" side (stereo):

  • Digitech Bad Monkey Overdrive
  • Zoom G3 Multi-effects box (with expression pedal)
  • Digitech Jam Man Express XT looper

The setup means that I can now record bass through the A signal chain direct, without having to use the Vox bass amp (under the bench, with an SM57 that's normally pointed on axis at the speaker). That lets me use the BF-2's lovely sound again; the thing has been in a box for the past five years or so.

The G3 is an odd beast. I've had mine for over a year now, and while it does some things really well, I couldn't recommend it to anyone else. Even after updating it to the latest firmware, it emits a horrid high-pitched whine whenever you use any of the amp models, which takes lots of editing and EQ to remedy before recorded tracks sound acceptable. There's a single footswitch socket that serves for either the expression pedal or a bypass switch - you can't have both, and if you have an expression pedal connected, as I do, you're left with no way to set the pedal into true bypass mode. But for clean sounds, with a little compression and buckets of reverb and delay, it produces lovely results.

I've become more than a little bit obsessed with the guitar in recent years. Being able to sit down, switch on and play means I do quite a lot of it at the moment. Maybe it's because my playing has got to the point where I don't completely suck, but I get a lot of satisfaction out of playing these days.


Much as it pains me to admit it, my eyesight is not what it used to be. Since I switched the website over to CSS and the Raleway font, I've gradually realised that the font is not well suited to body text or to display at small sizes. So this morning I switched the body text over to a slightly heavier typeface called Noto Sans. It's easier for me to read, and while it's not as aesthetically pleasing as Raleway, it's a lot clearer in large chunks.

There may be one or two idiosyncrasies on the odd page that I haven't picked up yet. Please let me know if you spot anything that looks messy. Nevertheless, the fact that I can roll out such a huge change to the entire site in the time it takes to drink a cup of coffee is a welcome change to the old way of doing things.


As I mentioned in my last blog entry, I crossed the finish line for February Album Writing Month after writing 24 songs on my own and collaborating on a further 7 pieces. That's ten more songs than my previous record, and eleven more than I managed the last time FAWM happened on a leap year back in 2012. But boy, don't I know it. Yesterday I felt like I'd been hit by a truck. So, a day later than usual, here's my review of the month's endeavours.

One thing that happened early on in the month is that I tried singing an entire octave lower than normal. When I played back the track I was stunned to hear a rich bass voice that reminded me - and just about everyone else - of Sir Christopher Lee. I was very tempted to use that singing voice for everything else I did for FAWM, but I decided that while it's the sort of voice that is great in short doses, I wouldn't want to listen to a whole album sung like that. I bought both of the Charlemagne albums, after all; that was more than enough. I've tried hard to improve my vocals this year. I haven't always been successful and the results have sometimes made me wince when I listened back to them the next day, but on the whole I've made some improvement, I think. I can always go back and redo the weakest takes if I decide it's worth doing.

By focusing much more carefully on how I used my voice, and by making sure that I drank plenty of fluids during the day, I also managed to avoid coming down with my traditional FAWM cold. That's the first time I've stayed healthy for the whole month in at least three years.

As always during February I've heard some amazing music from other FAWMers. Mel, in her new persona as RYAKO, has made giant leaps forward in what she produces and has now added guitars to the mix; the results have been amazingly good. Jacqui, known to FAWM as Expendablefriend, continued to knock tracks out of the park. My friend Tina delivered a string of jaw-dropping guitar numbers, including a track that will go down in FAWM history as having the best title ever: Nikki, Don't Lose That Rhumba. It was great to hear new music from Paul, a.k.a. Dragondreams, drifting out of my speakers once again and reinforcing the plank-spanking contingent with some seriously tasty guitar chops. Alongside the mighty Sapient, the metal side of things was well represented this year, and once again one Finn after another romped home with a track full of blistering guitars and tight riffage; jvallius, torniojaws, Arkka and Elias L deserve special mentions. Elsewhere in Scandinavia, w1n, bithprod and kristian kept things ticking over nicely. It was a good month for me too, in that I finally got to do collaborations with some of my favourite FAWMers. Aside from the two songs I recorded with Mel (she did amazing things on both Swing! and Together, and you should totally go and listen to them), I'm also ridiculously proud of the tracks I recorded with Dunwich, Popmythology, Sapient, and Wobbie Wobbit.

Of all the songs I heard this year, though, it was Sapient's utterly demented ode to the baddies in Dora the Explorer which went in, stuck, and has steadfastly refused to budge from the space between my ears. Ladies and gentlemen, I give you Raccoon Fhtagn!

To wind things up for another twelve months (or until Fifty/Ninety starts in July, at least) here is my list of the top five things I learnt this year.

5. Melodyne can help your vocals not suck without turning your voice into a robot

Yes, this year I actually bought a copy of Melodyne Assistant and after a tentative start that involved the VST plugin making Ableton crash every single time I used it, I switched to the 64-bit standalone program and ran my worst-sounding vocals through it. I realised that it actually does a very good job and the effect can be remarkably subtle - I don't end up sounding like Cher, it just makes the parts where I was pitchy sound like I was hitting every note bang on.

After watching some of the instructional videos on Celemony's site I also had a go at using Melodyne on a guitar solo, with some very interesting results (listen to the solo during the playout here...)

4. Take your time; spot when your ears are getting tired

This year I became very aware that my chops in mixing - such as they are - evaporated if I kept going for too long, particularly if the track was a loud one (and some of them most definitely were). After a break of at least an hour I found myself coming back to something I thought sounded okay when I left it and realising that it sounded awful. As in most other things, fatigue really can cloud your judgement.

You'll read on some sites that switching between headphones and monitors will "wake up" your ears, and while it's true that you'll hear things differently, it's not a real fix. For one thing, mixing on headphones will really exaggerate the bass frequencies in your mix. I've heard a lot of ridiculously bass-heavy tracks in FAWM this year, and in every single case, when I asked whether the song had been mixed using headphones, the answer was yes. So switching like this can make things worse. If you can feel yourself getting tired, take a break for at least an hour.

3. If it's not working, take a break

And this follows on from the last point, in a more general context. This year I've tried to push my guitar playing, and I know I overdid it. I haven't had such thick calluses on my fingers since I first started playing as a kid, and in trying to play very fast, metal-style riffs and runs I found myself messing up take after take. I pressed on when I should have stopped, and ended up with a mild case of tennis elbow that stopped me playing completely for a couple of days.

I learned my lesson; later on in the month when I wanted to do another riff but absolutely couldn't get my hands to do what I wanted them to, I went downstairs, made myself a cup of tea, ate a pork pie, and had a break for an hour. When I picked up the guitar again, I got the part finished at the first attempt.

2. Chaining the G3 box with a Blackstar's inbuilt effects chain

Chain the two together and you either end up with a screamy, histrionic whine that will set your teeth on edge faster than a report on American politics, or you get a guitar sound that can level entire city blocks. Like this:

See what I mean?

1. Use Parallel Compression to add that extra sparkle

Early on in the month I watched a video over at SonicScoop about using parallel compression to add a bit of extra sizzle to tracks, and while I was driven to distraction by the autofocus on the main camera used in the shoot (which should have been switched off) the points being made were very helpful. I tried them out on many of the tracks I recorded last month and they really did help to add punch to my kick drums, sizzle to my cymbals, and grit to my bass sound. You can hear the results on this track, which was the 29th I uploaded to FAWM this year.
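If you're curious about what parallel compression actually does, here's a bare-bones numerical sketch - my own toy illustration, nothing to do with the SonicScoop video or any particular plugin: squash a copy of the signal hard, then blend it back in with the untouched original.

```python
import numpy as np

def parallel_compress(dry, threshold=0.1, ratio=4.0, blend=0.5):
    """Crude parallel ('New York') compression sketch.
    dry: float samples in the range -1..1.
    A heavily compressed copy is mixed with the dry signal."""
    # Hard-knee compression on the sample magnitude
    # (no attack/release smoothing in this toy version)
    mag = np.abs(dry)
    over = mag > threshold
    gain = np.ones_like(mag)
    gain[over] = (threshold + (mag[over] - threshold) / ratio) / mag[over]
    wet = dry * gain
    # Blend: the dry path keeps the transients, while the wet
    # path lifts the quiet detail relative to the peaks
    return (1.0 - blend) * dry + blend * wet
```

In a real mix you'd do this with a compressor on an aux bus and a fader rather than a blend parameter, but the principle - loud parts stay put, quiet detail gets lifted - is the same.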

I used those parallel compression techniques when I remixed a track about Red Sonja that Sapient and Leslie did together, called I have a name. I was very flattered when they swapped the original version out for my remix. Peter asked me, "How did you do that?" - so for my FAWM homework this year, I wrote him a quick essay on how I did it, which you can download as an Adobe Acrobat file.

So that's me done with FAWM for another year. One other thing that I learned this year was very simple: do NOT try for a double FAWM or you will end up in a crumpled heap by March 1st. In particular, don't do what I did on February 29th and decide that you're going to write and record four fully-produced tracks, all with vocals, in under twelve hours and then go and do it in under eight, just to see if you could.

Because while it turned out that yes, I can, I was toast by the time I'd finished. I think I might go and have a nap, now.


How windy has it been this morning? Well, next door's gazebo is currently in a heap on the other side of their garden from usual, that's how windy it's been.