These rock stars don’t shred, they solder

Should Musicians Go the Way of the Horse?

Cecil Stehelin

--

In a drafty cave in Toronto, bar patrons stand in front of an empty stage. They stamp their feet against the cold and nurse their beers; a domestic costs $9 here. “Work Mix 4”, the bartender’s playlist, booms over their heads. They can have a conversation only by shouting, so they tend to stay quiet and nod their heads halfheartedly to the beat. The loud music is supposed to make them drink quicker. Mostly they just stand there.

A hesitant young man steps out onto the stage. The crowd cheers, glad to have something to break up the monotony. The bartender turns off her iPod; the artist plugs his in. He introduces himself as Trophy Kill before hitting play. A driving beat breaks over the speakers, held aloft by brutal dub bass. Synthy stabs shimmer over top as he unleashes a growling verbal assault on the audience. They respond enthusiastically, using their bodies like whips.

But his playlist has accidentally been switched to shuffle. The intro to the second song begins normally, but just as he opens his mouth to scream, he is interrupted by his own voice in the recording.

I stand apart from the crowd, so I am one of the few to notice the gaffe at first. His face squirms and wrinkles as he stares out at the dancing crowd and contemplates what to do next; visions of Ashlee Simpson dancing a jig run through my head.

But this audience doesn’t much care that he has stopped singing. He takes charge of the floor and begins directing the audience, yelling out instructions like an aerobics teacher. No one cares that the live element of the show has been lost. It sounds the same, doesn’t it?

The Post-Live Revolution

Watching this scene inspired thoughts about the post-live world we are entering. If an audience could accept this lip-synching punk singer, they’d surely accept an entire concert performed by a hologram. Perhaps instead of artists having to fly around the world to visit their audience, they could simply deliver one concert in a controlled environment and project it to stadiums everywhere. The spirit of the performance is the only thing that matters, isn’t it?

And this would just be the first step. Computers aren’t just developing their live chops, they’re also becoming inspired composers. YouTube personality and former American Idol contestant Taryn Southern is on the cutting edge of this trend. Her recently released single “Break Free” was created within the A.I. platform Amper Music. The process is simple: select a genre, your preferred instruments, and the beats per minute. The A.I. collates the inputs and begins testing thousands of combinations, then spits out the best verses for the user to arrange into a song.

The result? Passable. No worse than whatever dribbles out of the speakers at the supermarket. And of course, the machine is always learning.

Soon, the ability to arrange complex and powerful pieces of music will be in the hands of the general public. Anyone will be able to write a song, without even needing a day of practice. Very soon, the need to play an instrument will start to seem like a quaint idea.

BACH-2

Suppose a company creates a new songwriting bot and names it “BACH-2”. They decide to fill its sound library with every possible noise that can be made with every physical instrument. Maybe they would undertake a massive recording project, inviting musicians from around the world to converge on their studio, capturing everything from a tuning erhu to a smashing guitar. They exhaustively collate this information and feed it into BACH-2’s database. From these building blocks, it could theoretically produce every possible song in a matter of years!

But BACH-2 won’t just have quantity. Thanks to amazing strides in translational brain mapping, we can now map how music interacts with the human brain more accurately than ever before. At the University of Rochester, neuroscientists used extensive MRI scanning to map the zones of saxophonist Dan Fabbio’s brain imperative to understanding music, in preparation for the removal of a nearby tumor. They even had him play saxophone while it was being removed, to ensure that no permanent damage was caused.

This data would be invaluable to BACH-2; with further research we might even map out musical preferences. Not only would BACH-2 be able to compose a piece of music attuned to your personal preferences, it could even tailor the piece to your current mood. Soon, you might literally have your own soundtrack playing whenever you want from the inside of your head.

All of this is not to say that musicians will go completely extinct. The novelty of watching your favorite band live will linger, like opera and draft horses, as a cultural artifact: fervently maintained by a dedicated few, with the occasional indulgence of the many. We musicians may try to protest the authenticity of auto-generated art, and we may have occasional success playing up our common humanity, but ultimately what really matters to the listener is whether or not the music is good. If BACH-2 delivers a perfect song every time, would people really protest?

The Perfect Concert

Think back to that drafty cave in Toronto, this time in the near future. Everyone in the crowd has a BACH-2 implant in the back of their neck. Loud music keeps the crowd silent; beer is now $12. Trophy Kill steps out onstage and the audience cheers. They switch on headphones implanted behind their ears and connect to the artist’s phone via Bluetooth. He hits play.

Sneaker squeaks and grunts are the only music in the hushed bar; only the rhythm of one enthusiastic listener’s claps betrays the nature of the private concert unfolding within the minds of the concert-goers, each individual receiving their own idealized version of the same song. Trophy Kill delivers an inspired performance, directing the gyrations of the audience and focusing their energy.

His music is more than post-instrument, it is post-artist.

--