It’s logical for humans to feel anxious about artificial intelligence. After all, the news is constantly reeling off job after job at which the technology seems to outperform us. But humans aren’t yet headed for all-out replacement. And if you do suffer from so-called AI anxiety, there are ways to alleviate your fears and even reframe them into a motivating force for good.
In one recent example of generative AI’s achievements, AI programs outscored the average human in tasks requiring originality, as judged by human reviewers. For a study published this month in Scientific Reports, researchers gave 256 online participants 30 seconds to come up with imaginative uses for four commonplace objects: a box, a rope, a pencil and a candle. For example, a box might serve as a cat playhouse, a miniature theater or a time capsule. The researchers then gave the same task to three different large language models. To assess the creativity of these responses, the team used two methods: an automated program that assessed “semantic distance,” or the relatedness between words and concepts, and six human reviewers who were trained to rank responses on their originality.
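The study does not publish its scoring code, but the general idea behind an automated semantic-distance measure can be sketched in a few lines of Python: represent the object and the proposed use as embedding vectors and treat low similarity between them as high originality. The library and model named below are illustrative assumptions for this sketch, not the researchers’ actual pipeline.

    # A minimal sketch of a semantic-distance score, assuming the
    # sentence-transformers library and an off-the-shelf embedding model.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

    def semantic_distance(obj: str, use: str) -> float:
        # Embed the object and the proposed use, then return one minus
        # their cosine similarity: less related pairs score higher.
        a, b = model.encode([obj, use])
        similarity = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
        return 1.0 - similarity

    print(semantic_distance("box", "storage container"))   # low: an obvious use
    print(semantic_distance("box", "cat amusement park"))  # higher: a less expected use

Under this kind of measure, an answer that strays further from the object’s everyday associations earns a higher originality score, which is why a response such as “cat amusement park” can outrank “cat playhouse.”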
In both assessments, the highest-rated human ideas edged out the best of the AI responses—but the middle ground told a different story. The mean AI scores were significantly higher than the mean human scores. For instance, both the automated and human assessments ranked the response “cat playhouse” as less creative than a similar AI-generated response from GPT-4, “cat amusement park.” And people graded the lowest-scoring human answers as far less creative than the worst of the AI generations.
Headlines ensued, proclaiming that “AI chatbots already surpass average human in creativity” and “AI is already more creative than YOU.” The new study is the latest in a growing body of research that seems to portend generative AI outpacing the average human in many artistic and analytical realms—from photography competitions to scientific hypotheses.
It’s news such as this that has fed Kat Lyons’s fears about AI. Lyons is a Los Angeles–based background artist who works in animation and creates immersive settings for TV shows including Futurama and Disenchantment. In many ways, it’s their dream job—a paid outlet for their passion and skill in visual art, which they’ve been cultivating since age four. But some aspects of the dream have begun to sour: the rise of visual generative AI tools such as Midjourney and Stable Diffusion (and the entertainment industry’s eagerness to use them) has left Lyons discouraged, frustrated and anxious about their future in animation—and about artistic work in general. For instance, they were disheartened when Marvel and Disney decided to use an AI-generated, animated intro sequence made by the visual effects company Method Studios for the show Secret Invasion, which premiered in June. “It feels really scary,” Lyons says. “I honestly hate it.” Disney, which owns Marvel Studios, and Method Studios did not immediately respond to a request for comment.
Like many professional creatives, Lyons now worries about AI models—which are trained on vast swaths of Internet content—stealing and rehashing their artistic work for others’ profit. And then there’s the corresponding loss of employment opportunities. More broadly, Lyons fears for the future of art itself in an era when honing a craft and a personal voice are no longer prerequisites for producing seemingly original and appealing projects. “I worked so hard for my artistic dreams. I’ve been drawing since I was in preschool,” they say. “This is always what I’ve wanted to do, but we might be entering a world where I have to give that up as my full-time job—where I have to go back to waiting tables or serving coffee.”
Lyons isn’t alone. Many people have found themselves newly anxious about the rapid rise of generative AI, says Mary Alvord, a practicing psychologist in the Washington, D.C., area. Alvord says her clients of all ages express concerns about artificial intelligence. Specific worries include a lack of protection for online data privacy, the prospect of job loss, the opportunity for students to cheat and even the possibility of overall human obsolescence. AI’s advance has triggered a vague but pervasive sense of general public unease, and for some individuals, it has become a significant source of stress.
As with any anxiety, it’s important to manage the emotion and avoid becoming overwhelmed. “A certain amount of anxiety helps motivate, but then too much anxiety paralyzes,” Alvord says. “There’s a balance to strike.” Here’s how some psychologists and other experts suggest tackling our AI fears.
First off, context is key, says Sanae Okamoto, a psychologist and behavioral scientist at the United Nations University–Maastricht Economic and Social Research Institute on Innovation and Technology in the Netherlands. She suggests keeping in mind that the present moment is far from the first time people have feared the rise of an unfamiliar technology. “Computer anxiety” and “technostress” date back decades, Okamoto notes. Before that, there was rampant worry over industrial automation. Past technological advances have led to big societal and economic shifts. Some fears materialized, and some jobs did disappear, but many of the worst sci-fi predictions did not come true.
“It’s natural and historical that we are afraid of any new technology,” says Jerri Lynn Hogg, a media psychologist and former president of the American Psychological Association’s Society for Media Psychology and Technology. But understanding the benefits of a new tech, learning how it works and getting training in how to use it productively can help—and that means going beyond the headlines.
Simone Grassini, one of the researchers behind the new study and a psychologist at Norway’s University of Bergen, is quick to point out that “performing one specific task that is related to creative behavior doesn’t automatically translate to ‘AI can do creative jobs.’” The current technology is not truly producing new things but rather imitating or simulating what people can do, Grassini says. AI’s “cognitive architecture and our cognitive architecture are substantially different.” In the study, it’s possible the AI won high creativity ratings because its answers simply copied verbatim parts of a human creation contained somewhere in its training set, he explains. The AI was also competing against human volunteers who had no particular motivation to excel at their creative task and had never necessarily completed such an assignment before. Participants were recruited online and paid only about $2.50 for an estimated 13 minutes of work.
Confronting fears of generative AI by actually trying out the tools, seeing where and how they can be useful, reading up on how they work and understanding their limitations can turn the tech from a boogeyman into a potential asset, Hogg says. A deeper understanding can empower someone to advocate for meaningful job protections or policies that rein in potential downsides.
Alvord also emphasizes the importance of addressing the problem directly. “We talk about what actions you can take instead of sticking your head in the sand,” she says. Maybe that means gaining new skills to prepare for a career change or learning about ongoing efforts to regulate AI. Or maybe it means building a coalition with colleagues at work. Lyons says being involved with their union, the Animation Guild, has been crucial to helping them feel more secure and hopeful about the future. In this way, remedies for AI anxiety may be akin to ones for another major, burgeoning societal fear: climate anxiety.
Though there are obvious differences between the two phenomena (AI clearly offers some significant possible benefits), there are also apparent similarities. In tackling the biggest concerns about AI and in confronting the climate crisis, “we’re all in this challenge together,” Okamoto says. Just as with climate activism, she explains, meaningfully confronting fears over AI might begin with building solidarity, finding community and coming up with collective solutions.
Another way to feel better about AI is to avoid overly fixating on it, Okamoto adds. There is more to life than algorithms and screens. Taking breaks from technology to reconnect with nature or loved ones in the physical world is critical for mental health, she notes. Stepping away from tech can also provide a reminder of all the ways that humans are distinct from the chatbots or image generators that might threaten a person’s career or self-image. Humans, unlike AI, can experience the world directly and connect with one another about it.
When people create something, it’s often in response to their environment. Each word or brushstroke can carry meaning. For Lyons, human creativity is a “feral, primitive drive to make something because you can’t not make it.” So far, all AI can do is mimic that ability and creative motivation, says Sean Kelly, a Harvard University philosophy professor who has been examining the relationship between human creativity and AI for years. When an AI model generates something, Kelly says, “it’s not doing what the original artist did, which was trying to say something that they felt needed to be said.”
To Kelly, the real societal fear shouldn’t be that AI will get better or produce ever more interesting content. Instead he’s afraid “that we’ll give up on ourselves” and “just become satisfied” with what AI generators can provide.
Perhaps the better, and more characteristically human, response is to use our AI anxiety to propel us forward. Mastering a craft—be it drawing, writing, programming, translating, playing an instrument or composing mathematical proofs—and using that skill to create something new is “the most rewarding thing that we can possibly do,” Kelly says. So why not let AI motivate more creation rather than replace it? If the technology spits out something compelling, we can build on it. And if it doesn’t, then why worry about it at all?