AI Doesn’t Threaten Humanity. Its Owners Do

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology

In April a lawsuit revealed that Google Chrome’s private browsing mode, known as “Incognito,” was not as private as we might think. Google was still collecting data, which it has now agreed to destroy, and “private” browsing does not stop websites or your Internet service provider, such as Comcast or AT&T, from tracking your activities.

In fact, that information harvesting is the whole business model of our digital and smart-device-enabled world. All of our habits and behaviors are monitored, reduced to “data” for machine learning, and the resulting findings are used to manipulate us for other people’s gain.

It doesn’t have to be this way. AI could be used more ethically, for everyone’s benefit. We shouldn’t fear AI as a technology. We should instead worry about who owns it and how those owners wield it to invade our privacy and erode democracy.


It’s no surprise, then, that tech companies, state entities, corporations and other private interests increasingly invade our privacy and spy on us. Insurance companies monitor their clients’ sleep apnea machines to deny coverage for improper use. Children’s toys spy on playtime and collect data about our kids. Period-tracker apps share with Facebook and other third parties (including state authorities in abortion-restricted states) when their users last had sex, along with their contraceptive practices, menstrual details and even their moods. Home security cameras surveil customers and are susceptible to hackers. Medical apps share personal information with lawyers. Data brokers, companies that track people across platforms and technologies, amplify these trespasses by selling bundled user profiles to anyone willing to pay.

This explicit spying is obvious and feels wrong at a visceral level. What’s even more sinister, however, is how the resulting data are used: not only sold to advertisers or any private interest that seeks to influence our behavior but also deployed to train machine-learning systems. Potentially, this could be a good thing. Humanity could learn more about itself, discovering our shortcomings and how we might address them. That could help individuals find support and meet their needs.

Instead, machine learning is used to predict and prescribe: to estimate who we are and what would most likely influence us and change our behavior. One such target behavior is getting us to “engage” more with technology and so generate more data. AI is being used to try to know us better than we know ourselves, to get us addicted to technology and to influence us without our awareness or consent and without our best interests in mind. In other words, AI is not helping humanity address our shortcomings; it’s exploiting our vulnerabilities so that private interests can guide how we think, act and feel.

A Facebook whistleblower made all of this clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracy theories, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate during an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, invite us to be the most insecure versions of ourselves and push our buttons to achieve their own desired ends.

So when we use our phones, computers, home security systems, health devices, smart watches, cars, toys, home assistants, apps, gadgets and what have you, they are also using us. As we search, we are searched. As we narrate our lives on social media, our stories and scrolling are captured. Despite feeling free and in control, we are subtly being guided (or “nudged,” in benevolent tech-speak) toward constrained ideas and outcomes. Based on our previous behaviors, we are offered a flattering, hyperindividualized world that amplifies and confirms our biases, using our own interests and personalities against us to keep us coming back for more. Employing AI in this manner might be good for business, but it’s disastrous for the empathy and informed deliberation that democracy requires.

Even as tech companies ask us to accept cookies or belatedly seek our consent, these efforts are not made in good faith. They give us an illusion of privacy even as “improving” the companies’ services relies on machines learning more about us than we know about ourselves and finding patterns in our behavior that no one knew to look for. Even AI’s developers don’t know exactly how it works and therefore can’t meaningfully tell us what we’re consenting to.

Under the current business model, advances in AI and robotics will enrich the few while making life more difficult for the many. Sure, you could argue that people will benefit from the potential advances in health, design and whatever efficiencies AI might bring (and tech-industry-enthralled economists undoubtedly will so argue). But those benefits mean less when people have been robbed of their dignity, blamed for not keeping up, and continuously spied on and manipulated for someone else’s gain.

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology. Instead of enabling a world where we work less and live more, billionaires have designed a system that rewards the few at the expense of the many. While AI has done and will continue to do great things, it has also been used to make people more anxious, precarious and self-centered, as well as less free. Until we truly learn to care about one another and consider the good of all, technology will continue to ensnare rather than emancipate us. There’s no such thing as artificial ethics: human principles must guide our technology, not the other way around. That starts with asking who owns AI and how it might be employed in everyone’s best interest. The future belongs to us all.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Joseph Jones is an assistant professor of media at West Virginia University, where he teaches courses about media ethics, law, history, sociology, philosophy and power. His research focuses on AI, the political economy of our mediascape, and how media create meaning in our everyday lives and invite us to understand the world and our place in it.