If AI Starts Making Music on Its Own, What Happens to Musicians?

Music made with artificial intelligence could upend the music industry. Here’s what that might look like.


This is the last part of a three-part Science, Quickly Fascination. Listen to Episode One. Listen to Episode Two.

Transcript


Allison Parshall: So a few months ago I was at a dinner party with some friends from college. A friend of a friend mentioned that she’s now an editor at a pretty prominent news organization.

That was both baffling and impressive because she had only graduated a few years ago.

How did she climb the ranks that quickly? Could she let me in on her secret? I asked what types of stories she edits.

“Oh,” she said, “AI stories.”

AI stories! That was exciting! I loved writing about AI! Maybe I could write for her!

No, she corrected–stories written by AI: an algorithm would scrape press releases and information from the web and refashion it into new stories, and it was my friend of a friend’s job to catch any mistakes, called “hallucinations,” in the text.

That was my first encounter with the impending obsolescence of my creative craft. But visual artists have been waving their arms in warning about this for a while, especially since decent AI image generators like DALL·E 2 became publicly available in the past year.

Now AI appears to be coming for musicians, too.

Rujing Huang: You can’t stop it; it’s happening. But I think what’s important is: we have to talk about it and have to talk about issues that now rise when it happens.

Christine McLeavey: I don’t think anyone knows for sure, like, “How do we do this well?”

Parshall: You’re listening to Science, Quickly. I’m Allison Parshall.

Together, we’ve been exploring music in the brave new world of artificial intelligence.

Last episode, we were serenaded with AI-generated music from Google’s new model MusicLM.

And today, in our final episode, we’re going to grapple with the consequences.

So machines can make music. What’s next?

[CLIP: Music ends]

Parshall: But before we look toward the disrupted musical future, let’s just say tech has been messing with musicians for a long time.

The phonograph was the disruptive bogeyman of the early 1900s. It threatened the livelihoods of instrumentalists who played in silent movie theaters and cabarets.

Early recording technology led to a legendary strike in the 1940s, as musicians fought for better compensation from recording companies.

Eventually, the industry adapted. By the 1960s a lot of musicians were making a living as session artists, making the recordings that once disrupted their lives. Then the synthesizer struck.

[CLIP: Synth notes]

Shelly Palmer: When I started in music, I was 12 years old. I brought my synthesizer to a recording session … that was in, like, 1970. 

Parshall: That’s Shelly Palmer, the composer we heard from last episode.

Palmer: Then I played, like, six lines of, like, what you would call violin sounds. There were three different violins, a viola pass, a cello pass and a bass pass. And the union shop steward made the producer pay me for every single musician I replaced.

Parshall: Of course, that didn’t last too long.

Palmer: The producers were having none of the idea that they’d have to pay 15 times for one guy. They were not doing it, and the union sort of acquiesced. But it’s—like, instead of hiring 600 musicians, they’re probably hiring 50 musicians a year, [the] same guys over and over, but not the 600 from five years earlier. 

This is going to piss people off. Like, this is about what’s going to happen again.

Parshall: Even musicians who work with AI are feeling the angst.

Sofía Oriana Infante is a member of PAMP!, the team whose second-place entry to the 2022 AI Song Contest was voted an audience favorite. She’s a composer, not a computer scientist or engineer.

Oriana Infante: Just think about this, no? If a company wants to record a video, and they need to put music to this video. What are they going to choose–to buy one only license for the year that gives them the company a lot of compositions for a very low, low budget? 

Or they are gonna choose the composer that takes weeks or even a month to compose a piece they have to wait [for] and maybe for something they don’t like? Obviously you’re gonna think about your benefits as a company.

Parshall: Sofía got interested in AI while studying music composition for film. Even back in 2015 these algorithms were already starting to invade the classrooms around her.

Oriana Infante: My classmates were using software that could compose by itself, almost, no? You didn’t have to do that much, just, like, press some keys on the keyboard, and it worked very automatically. That was my first ... click? I think that it started there, with some questions.

Parshall: Those questions were pressing enough that she pursued a Ph.D. in musicology to study AI as a compositional tool. She knew this was going to be important, and she wanted answers.

Oriana Infante: At the beginning, I was like very angry with the AI, with these guys, these companies–what they are doing with the music. But then, I realized that in a percentage, I was right, in another percentage, I was wrong because you can choose to use it to replace composers to earn more money. Or you can say, “Okay, I’m gonna do something in symbiosis.”

Parshall: That symbiosis is what she set out to achieve with her team’s AI Song Contest entry. Like Hanoi Hantrakul’s piece grounded in traditional Thai music, Sofía’s AI composition brought traditional music from her native Galicia in Spain into an electronic setting.

[CLIP: Sample of “AI-LALELO,” by PAMP!]

Parshall: She and her team rewrote traditional Galician cantigas with new verses and melodies. Her team also used AI tools to transform vocals to sound like a hurdy-gurdy, a traditional cranked string instrument.

Oriana Infante: For me—and this is really, really important—the most important thing of using artificial intelligence in compositions is doing things that the human cannot do. What if the AI can tell you the changes of the weather and turn that into music? You are in a sunset with your girlfriend or boyfriend, and then you ask the AI to write a song for that moment. That is interesting, where the AI can take part giving you new ideas that you cannot have.

Parshall: I personally might find an AI composing music for my life creepy. But still, when musicians approach music AI as a partner in composition, these models can certainly be a source of artistic inspiration.

Christine McLeavey, the pianist and AI engineer from OpenAI who spoke with us last episode, has thoughts.

McLeavey: It sort of opened up in my mind so many more creative possibilities. Like, especially with MuseNet, I would feed into it the beginning of a piano piece that I’ve played a million times. So in my brain, there was only one way that that piano piece went. 

But, of course, MuseNet would just be like, “Well, why don’t we go this other way?” And it would suddenly veer off into something just totally crazy and different. There’s no embarrassment, right? Like, the model just generates tons of samples, and some of them are great, and some of them are really terrible. 

And I think, as humans, if I compose a piece, and it’s terrible, then I’m just like, “Oh, I’m, I’m no good at this.” And I, you know—whereas to think, like, “Okay, I just need to, like, I need to write 32 pieces, and maybe one of them will be okay and then I could write another 32 pieces….” And to kind of bring that mentality to sort of the human creative process for me was, that, that was really exciting.

Parshall: But all this is looking at the bright side. Even if these tools yield cool new music and a creative sidekick for tech-savvy composers, they could still make it harder for composers to find work if potential clients can just click a button and get what they want for cheap.

And here we veer off from the idealized idea of music as art to the practical reality for many composers: music as a product.

Shelly Palmer, who spent a lot of his career composing commercial music for brands such as Meow Mix and Bojangles Fried Chicken, is pretty blunt about how AI will affect this sector of the music industry.

Palmer: If you’re the buyer of commercial music, you’re going to be paying less. If you are the seller of commercial music, you’ll be earning less—unless you use this in a way that I think it’s going to be used by the best of the composer-producers in the world, which is to just make them 10 times more powerful and 10 times more productive. I see these as productivity enhancers first and foremost. 

I’ve been [in], already, bar fights over it, if you will, with my friends who are like, “How could you be all over this? And how could you care about it?” And it’s like, I think it’s awesome. To me, I love this. And maybe I’m allowed to love it because, you know, I’m at the end of my writing career, not the beginning of it.

Parshall: Yeah, as someone at the beginning of a creative career, even if it’s not music, I can’t say I’m psyched. There’s a part of me that will always go to bat for the idea that there’s something special about human creativity, even in cat food commercials.

If music AI really takes off and starts sucking up metric tons of music and learning to reproduce patterns, won’t the results be samey, flat, boring?

Brad Garton: You’ve kind of poked at it with this idea of, you know, this is gonna be music that kind of sounds like all the other music ...

Parshall: That’s Brad Garton, a computer musician at Columbia University. We heard from him in the last episode.

Garton: One of the things that keeps me alive and kicking in music is discovering new stuff. This is a sound I’ve never really heard before. And I hope we don’t lose that by relying so much on the preexisting bulk of stuff that’s there, that we wind up just kind of producing something that sounds again like another Bruno Mars hit. I don’t think that’ll happen, because I think people are going to become hungry for new stuff. 

Parshall: And all of this is setting aside the legal copyright issues. Visual artists are grappling with these legal questions now. But music has a particularly complicated relationship with copyright law.

Rujing Stacy Huang: Music AI, it’s actually a lot of questions that are ... not new, right? We have a lot of similar questions that already happened with sampling.

Parshall: That’s Rujing Stacy Huang. She’s a musician and one of the six organizers of the AI Song Contest. She’s also a musicologist by training who has spent the past few years trying to unpack the ethics of music AI.

Huang: A lot of the issues are not new. It’s [the] same issue but in a whole new scale and velocity—because, all of a sudden, you can have a machine learn the songs of 2,000 musicians, and it feels more severe than having one artist sample the sound of another artist.

Parshall: Rujing has been thinking and writing about these issues for years and is still hesitant to come down either way on the “Music AI: good or bad?” question.

Huang: It’s really so polarized these days. When you look at that public sentiment about music AI, creative AI. It’s literally the end of art versus that golden moment of everyone being artists. It’s so polarized, and people seem to just really be falling on those two extremes of fear and celebration. I don’t really know what I think. Maybe I know what I think is that I don’t really care about this question.

Parshall: As an ethnomusicologist, someone who studies music and the culture that exists around it, she sees it as her job to dig deeper.

Traditional ethnomusicologists might have shown up to a remote village with a recorder, but the field is changing, and she sees organizing the AI Song Contest as a type of fieldwork—an opportunity to observe, to listen and to figure out how to steer this ship now that it’s set sail.

Huang: I do not celebrate it like many are, and I do not think it will end art or it will end music like many are saying, so I guess, personally, I’m actually neutral. I think it’s happening. So you can’t stop it; it’s happening. But I think what’s important is: we have to talk about it. And we have to talk about issues that now rise when it happens.

Parshall: Christine McLeavey at OpenAI agrees.

McLeavey: There were a few years where it worried me that we weren’t talking about this more as a society. I’m glad now that people are involved. I think this is the point where we need to be hearing from, from as many voices as possible. I don’t think anyone knows for sure, like, “How do we do this well?”

Parshall: And as far as the artistry question goes—that perennial fear “Are we going to be replaced?”—basically everyone I talked to said, “Not really.” But I think Hanoi, the AI Song Contest winner, said it best.

Hantrakul: I think a lot of the conversations that we’ve been having about technology upending creativity were the same exact conversations that happened when, like, photography was becoming a thing. People were genuinely worried that photography would kill painting. 

But, like, lo and behold, you have this explosion of creativity that came as a result of photography. And then it, you know, caused this whole paradigm shift in how people painted now, to give you, like, more abstract things that would’ve never happened if the camera was never invented. So I, what I’m really excited about, and I think where AI should go, is enabling us to write music that was never possible before.

Parshall: If you’re a musician or an AI engineer and you want to try your hand at AI composing, the 2023 AI Song Contest is on and will be announced soon, according to the organizers. If you aren’t a composer, you can still participate: the finalists typically go up for a public vote, like American Idol or Eurovision, except cooler.

I, for one, can’t wait to hear them.

You’ve just listened to the final episode of our three-part podcast Fascination on AI that makes music. MusicLM, buddy, play us out.

[CLIP: MusicLM sample: The Starry Night (Dutch: De sterrennacht)]

Science, Quickly is produced by Jeff DelViscio, Tulika Bose and Kelso Harper. Our theme music was composed by Dominic Smith.

Don’t forget to subscribe to Science, Quickly wherever you get your podcasts. For more in-depth science news and features, go to ScientificAmerican.com.

For Scientific American’s Science, Quickly, I’m Allison Parshall.

[CLIP: Theme music]

