
The Roaring Twenties Welcome Artificial Intelligence: How Will It All Play Out?


The following is a guest post by Juris Klavins, spring intern in the Office of Policy and International Affairs.

The music industry of the 1920s was forever changed by the introduction of radio, which disseminated music at an unprecedented rate and allowed live performers to reach millions of listeners at home, fundamentally altering existing business models. One hundred years later, in the 2020s, the industry again faces a potentially transformative technology. This time it is artificial intelligence (AI) that stands to reshape business models and the music creation process.

To consider how copyright policy could best address AI, the U.S. Copyright Office, together with the World Intellectual Property Organization (WIPO), held a symposium on Copyright in the Age of Artificial Intelligence on February 5, 2020, in Washington, DC. As a spring intern with the Office of Policy and International Affairs, I had the opportunity to help the Office prepare for the event as well as attend. During the course of several engaging panel discussions, U.S. and international speakers from across the creative industries, business, academia, government, and policy-making bodies grappled with the issues AI has brought to the table.

“AI and Creating Music” panel at the Copyright in the Age of Artificial Intelligence symposium.

As a professional musician, I was especially interested to hear remarks from the symposium’s “AI and Creating Music” panel, which was composed of David Hughes, chief technology officer of RIAA; Joel Douek, composer and cofounder of EccoVR; Alex Mitchell, founder and CEO of Boomy; and Dr. E. Michael Harrington, professor in music copyright and intellectual property matters at Berklee Online and the country’s leading expert witness in music copyright infringement disputes. Before this event, I had occasionally thought about some of the questions AI raises, such as whether AI should be able to own a song or whether the musician using AI should instead own the AI-generated work. I was also curious to see whether the panelists would address these questions, as well as the Copyright Office’s position that copyrightability requires human authorship. While the speakers touched on some of these issues, the questions I found most interesting were the less theoretical ones. Will AI make music better or worse? Can AI open music composition to people with no background in music theory? Are there mechanisms, in copyright law or elsewhere, that can more efficiently incentivize AI development?

As David Hughes pointed out, instead of cautiously pondering these questions, players in the music business—labels, composers, and performers—have already embraced the possibilities AI offers. For example, AI is already being used to generate personalized audio tracks matched to a listener’s mood and to separate individual tracks (vocals, bass, accompaniment) from a single master file to produce alternative versions of a piece. AI can even aid composers struggling with writer’s block by giving them an impetus for melodies and lyrics. Just imagine what classical composers like Beethoven or Mozart—who would store melody ideas in chests and pull them out at random when running dry of material for the next symphony—would think of this!

Despite all these innovations, one legitimate concern on my mind is whether AI threatens our jobs. The short answer appears to be maybe. Think of sound engineers, for example, whose expertise in mastering is challenged by faster and cheaper AI. While perhaps not sophisticated enough to generate highly nuanced musical tracks, AI arguably produces music acceptable enough that the cost-benefit calculation may favor it. In my opinion, this threat is real, especially in genres such as pop that do not demand a high degree of subtlety.

Dr. Harrington put some of those concerns to rest, however, by recognizing that, while AI is great at generating volumes of musical works at an unprecedented rate, it currently lacks the ability to make bolder, less likely choices because of the limitations of its training data. Think of Jimi Hendrix’s use of the Viennese waltz in “Manic Depression,” or Garth Brooks’ use of the doumbek (a Middle Eastern drum) in his hit song “Standing Outside the Fire,” or even Mozart’s String Quartet No. 4, K. 157, which opens with a simple motif and a predictable transposition, then takes a jarring turn in melody and harmony. At this time, AI would not have made those judgments and choices, and I don’t think it will ever be able to make such nuanced decisions.

One question that really caught my attention at the symposium was why we need AI at all when we already have creative people producing great music. Maybe just because we can. If we accept that there will be music created by AI alongside music created by humans, I agree with the panel that we should consider labeling human-generated music as “produced by a human,” the way some coffees are identified as “fair trade.” Going forward, I think AI is here to stay and will become increasingly ubiquitous. It is therefore time to view AI as a tool, or as an extension of our spirit, and to find ways to use it to our creative advantage. And we should not be scared, because I believe there is something AI will never grasp: an almost ethereal quality, that imperfection that makes music feel human. I look forward to seeing how it all plays out.
