Digital strategist Bas Grasmayer kicks off 2017 with a look at how technological disruption will affect employment in the music industry.
One of 2016’s hot topics was the rise of artificial intelligence and technological unemployment. Where it previously took hundreds of people to run a factory, that job can nowadays be done by a dozen. With the continuing improvement of artificial intelligence, even jobs in the service sector may now be disrupted. In this Projecting Trends edition we take a look at which jobs in the music business ecosystem are set to disappear, diminish, or radically change.
Here’s how technological disruption will affect employment in the music industry.
Last October, Uber’s self-driving truck made its first delivery. Unlike Tesla’s, Uber’s technology allows for fully autonomous driving, except in certain situations such as severe weather conditions. A driver must be present, but the vehicle can essentially drive itself, like an autopilot. Beyond this so-called ‘Level 4’ autonomy lies fully driverless transportation.
Live performance in particular requires a lot of transportation: crew, equipment, lights, facilities for festivals, and the bands themselves. Factoring out the human has more benefits than just cutting salary costs. It helps with efficiency: tour bus drivers, for instance, have to take statutory breaks, which constrains touring schedules. Transportation will also be safer, thanks to a mix of evolving technologies like smart sensors, machine learning, and advanced hardware and software. This will result in autonomous vehicles acquiring ‘superhuman driving skills’.
Car manufacturers are racing to have autonomous cars market-ready by 2020, so in the coming decade the use of autonomous transportation for festivals, shows, and tours will become so common that it will be hard to imagine the world differently.
The reason there have been such big advances in machine learning in recent years has a lot to do with the availability of very large datasets. Companies like Music Xray have built datasets out of past hit songs, plus algorithms to analyze and compare song qualities such as base melody, beat, tempo, rhythm, octave, pitch, chords, progression, and sonic brilliance. This way they can predict whether songs have hit potential. By 2012, Music Xray had already connected more than 5,000 artists with recording deals.
A relatively recent technique in machine learning is called ‘deep learning’, in which artificial intelligence (AI) is trained by iteratively presenting data along with the correct answers to a ‘neural network’, which then optimizes its parameters so that it gets better at predicting correct answers. An example of this is Google’s Quick, Draw! game, where you can play Pictionary with an AI engine.
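To make the training loop described above concrete, here is a minimal, illustrative sketch — not Music Xray’s actual method or a real deep network. A tiny one-layer model (logistic regression) is repeatedly shown synthetic song features alongside made-up ‘hit’ labels, and its parameters are nudged toward better predictions. All feature names and data here are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic songs with 3 made-up, normalised features:
# tempo, loudness, chord variety
X = rng.random((200, 3))
# Invented rule for the labels: faster, louder songs count as 'hits'
y = (X[:, 0] + X[:, 1] > 1.0).astype(float)

w = np.zeros(3)   # the model's parameters (weights)
b = 0.0           # bias term
lr = 0.5          # learning rate

for _ in range(2000):                        # iteratively present the data
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted hit probability
    grad_w = X.T @ (p - y) / len(y)          # gradient of the loss
    grad_b = float(np.mean(p - y))
    w -= lr * grad_w                         # optimise the parameters
    b -= lr * grad_b

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(np.mean((p > 0.5) == (y == 1.0)))
print(f"training accuracy: {accuracy:.2f}")
```

A real hit-prediction system would of course use far richer features and a much deeper network, but the principle — present data with correct answers, adjust parameters, repeat — is the same.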
Now imagine these learning processes going on for 10 or 20 years, with infinitely more data than our own brains could parse. Meanwhile, this learning process is improving itself, so computers will not just understand more, but get better at increasing their understanding.
While it remains to be seen whether humans will ever be completely removed from the process, it’s clear that a lot of the work will be done by computers, making it possible for 1 person to do the work of 10 people, or maybe a lot more.
The music industry is a complex ecosystem of interoperating businesses and legal frameworks. The type of administrative work done by Collective Management Organisations, particularly in royalty distribution but also in negotiating licenses, can already be automated to a great extent, and will be even more so in the future.
You could train machines by letting them monitor the large datasets managed by collecting societies and any activity related to them, from payouts to disputes to changes in ownership. Over time, they should be able to flag issues before they arise, identify likely bad actors, and more accurately identify the owners of works, so that money doesn’t end up in the black box.
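The simplest version of this kind of monitoring is statistical anomaly flagging. The sketch below is purely illustrative — the account names and figures are invented, and a real collecting-society system would use far richer signals — but it shows the basic idea: payouts that deviate sharply from an account’s historical pattern get flagged for review before the money disappears into the black box.

```python
import statistics

# Invented payout histories for two hypothetical rightsholders
payout_history = {
    "songwriter_a": [120.0, 115.0, 130.0, 118.0, 125.0, 122.0],
    "songwriter_b": [40.0, 38.0, 42.0, 41.0, 39.0, 400.0],  # suspicious spike
}

def flag_anomalies(history, threshold=2.0):
    """Return (account, amount) pairs more than `threshold`
    standard deviations away from that account's mean payout."""
    flagged = []
    for account, payouts in history.items():
        mean = statistics.mean(payouts)
        stdev = statistics.stdev(payouts)
        for amount in payouts:
            if stdev > 0 and abs(amount - mean) / stdev > threshold:
                flagged.append((account, amount))
    return flagged

print(flag_anomalies(payout_history))
```

Here only songwriter_b’s spike of 400.0 is flagged; the steady payouts are left alone. A production system would layer on dispute histories, ownership changes, and cross-society data in exactly the way the paragraph above describes.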
Other technologies like blockchain may be able to decentralize content registries, which would mean that the workload of adding data to such registries would also be decentralized, not unlike how encyclopedia writers have been put out of work by the (hundreds of) thousands of Wikipedia editors. In such a scenario, collecting societies can stay relevant by adopting a sort of arbiter role, as imagined in Music Tech Fest’s blockchain whitepaper, but when such systems are used to automate payments, licensing, or even dispute resolution, then once more the work of many can be done by few.
If algorithms can detect great music, perhaps they could also start writing it. The BPI, Britain’s recorded music trade association, recently released a thorough report on AI’s effects on the music industry. One of the highlighted startups is Jukedeck, which auto-generates music to fit users’ needs, e.g. a soundtrack for a short video. Sony’s Computer Science Lab is even planning to release albums with songs completely generated by artificial intelligence, as part of research in a field called pop music automation.
Algorithms will increasingly guide music creation, which will help creators in some cases but make musicians redundant in others. Thirty or forty years from now, would we hire a musician to record a jingle, or have something perfect generated for us by hitting a few buttons? If our interfaces are even button-based by then… As the BPI report explains:
“If AI composers prove to be cheaper than their human equivalents, and their output good enough to use as background music for online videos; for games and apps; for corporate videos and for some public spaces – to name but four uses – then AI composition may carve out a role for itself that will in some cases compete with the existing music industry and its creators.”