Please Be Polite to ChatGPT

The benefits of being polite to AI may include prompting better chatbot replies—and nurturing our humanity

If you’ve ever caught yourself saying “please” and “thank you” to ChatGPT, you’re in good company. In an informal online survey by Ethan Mollick, an associate professor at the University of Pennsylvania, nearly half of the respondents said they are often polite to the artificially intelligent chatbot, and only about 16 percent said they “just give orders.” Developers’ comments, posted in a forum hosted by OpenAI, the company that created ChatGPT, also reflect this tendency: “I find myself using please and thanks with [ChatGPT] because it’s how I would talk to a real person who was helping me,” one user wrote.

This might, at first, seem a bit baffling. Why be kind to an unfeeling machine? Before ChatGPT, most of us regularly interacted with automated systems without giving a second thought to our tone. (If you overheard someone being obsequious to a bank’s customer service robo representative, for instance, you might give that person a wide berth.) But the sophistication of recent artificial intelligence chatbots—including ChatGPT, Claude, Gemini and others—marks a major leap in human-computer interaction: their ability to communicate in a natural-sounding way, sometimes with humanlike voices, makes them seem less like cold, calculating machines and more like conscious entities. And these chatbots are increasingly being woven into the fabric of everyday life. In June Apple announced a new partnership with OpenAI to integrate ChatGPT with Siri and other on-device features. Like it or not, engaging with conversational AI could soon become as routine as checking e-mail. Questions about how we interact with AI, therefore, are more pressing than ever.

Since the release of ChatGPT, a typical running gag goes something like this: be nice to the chatbot, or else you’ll be toast in the inevitable AI uprising. “If you’re not saying please and thank you in your ChatGPT conversations, then you’ve clearly never seen a sci-fi movie,” one user posted on X (formerly Twitter) in December 2022. But all jokes (and anxieties) aside, are there any legitimate reasons we should be polite to AI?


The answer is yes, at least according to one recent study posted on the preprint server arXiv.org by a team at Waseda University and the RIKEN Center for Advanced Intelligence Project, both in Tokyo. Using polite prompts, the authors found, can produce higher-quality responses from a large language model (LLM)—the technology powering AI chatbots. But there’s a point of diminishing returns; excessive flattery can cause a model’s performance to deteriorate, according to the paper. Ultimately, the authors recommend using prompts that tread a middle path of “moderate politeness,” not unlike the norm in most human social interactions. “LLMs reflect the human desire to be respected to a certain extent,” they write.

Nathan Bos, a senior research associate at Johns Hopkins University studying relational patterns between humans and AI—who was not associated with the recent preprint study—often uses “please” with chatbots “because it’s a good prompting practice,” he says. “‘Please’ indicates that what follows is a request, making it easier for the LLM to know how to respond.” (Bos also says “thank you,” which he attributes to his well-mannered Michigan upbringing.)

The tone of a prompt, Bos adds, could also lead an LLM to draw on linguistically correlated sources when formulating its reply; in other words, AI gives out what you put in. Polite prompts may direct the system to retrieve information from more courteous, and therefore probably more credible, corners of the Internet. A snarky prompt could have the opposite effect, directing the system to arguments on, say, Reddit. “LLMs could pick that up in training without ever even registering the concept of ‘positivity’ or ‘negativity’ and give better or more detailed responses because they’re associated with positive language,” Bos explains.

A little support of the sort that a patient teacher might show toward a struggling pupil may also influence chatbot performance. A preprint paper posted last year by a team of researchers at Google DeepMind found that supportive prompts, such as “Take a deep breath and work on this problem step-by-step,” boost an LLM’s ability to solve grade school math problems that require basic reasoning skills. Here, too, the explanation can be traced back to the model’s training data: such language may trigger the system to refer to online tutoring sources, which often encourage students to break a problem into parts—thereby causing the algorithm to do the same.
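For readers who tinker with chatbots directly, the idea is simple to try out. The short sketch below wraps one question at three tone levels; the template names and most of the wording are illustrative assumptions rather than the researchers’ exact phrasings, apart from the DeepMind team’s “take a deep breath” line quoted above.

```python
# Illustrative tone templates -- the labels and phrasings (other than the
# quoted "take a deep breath" line) are assumptions for this sketch.
POLITENESS_TEMPLATES = {
    "blunt": "{task}",
    "moderate": "Could you please help with the following? {task} Thank you.",
    "supportive": "Take a deep breath and work on this problem step-by-step. {task}",
}

def build_prompt(task: str, tone: str = "moderate") -> str:
    """Wrap a task in one of the illustrative tone templates."""
    return POLITENESS_TEMPLATES[tone].format(task=task)

# The same grade school math question at each tone level:
question = "A farmer has 12 eggs and sells 5. How many are left?"
for tone in POLITENESS_TEMPLATES:
    print(f"[{tone}] {build_prompt(question, tone)}")
```

Sending each variant to the same model and comparing the answers is, in miniature, the kind of comparison these studies ran at scale.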

It does appear, then, that being polite toward AI can improve its technical performance. But like all communication, exchanges with chatbots are two-way streets. As we train AI to behave in certain ways, the interactions may also be training us. The real reason we should be polite with our AI assistants, according to this view, is because it can simply help keep us in the habit of being civil toward our fellow humans.

Politeness toward AI is “a sign of respect”—not to a machine but to oneself, says Sherry Turkle, a clinical psychologist and founding director of the Massachusetts Institute of Technology Initiative on Technology and Self. “It’s about you,” she says. The danger, in her view, is that we might become habituated to using crass, disrespectful and dictatorial language with AI and then unwittingly act the same way with other human beings. The world may have already glimpsed this kind of desensitization at work: some parents have begun to complain in recent years that their children, used to barking commands at automated virtual assistants such as Siri and Alexa, have become less respectful. In 2018 Google sought to remedy this by introducing a feature called “Pretty Please” for Google Assistant as a way of encouraging kids to use polite language when making requests. “We have to protect ourselves,” Turkle says, “because we’re the ones that have to form relationships with real people.”

Our interactions with AI will affect the evolution of human social norms as the technology seeps ever more steadily into our daily lives, says Autumn P. Edwards, a professor of communication at Western Michigan University and an expert in human-computer interaction. “This is a moment where we can either completely adapt to command-based and machinelike interaction and change what it means to engage in human communication on a broad scale,” she says, “or we can preserve [our humanity] and try to integrate the best of human communication into our dialogues with these entities.”

Edwards also points out that companies sometimes hire human beings to pose as chatbots, because customer expectations for quality of service can be lowered if people believe they’re interacting with AI as opposed to a living worker, which can decrease demands for a business’s time and accountability. “Those people have reported some pretty horrific psychological effects from the sorts of things that are said to them,” she says. “We think we’re hurling abuse at a system ... without realizing there’s somebody on the other end who may have to see it.”

Advancements in generative AI are happening at breakneck speed. GPT-4o, which OpenAI released in May, can communicate in a variety of automated voices that sound strikingly human. (OpenAI representatives declined to be interviewed for this story.) Innovations in AI-generated video, meanwhile, make it likely that some chatbots will soon come with virtual, highly expressive faces, pushing us closer to the far side of the uncanny valley. What Turkle calls our “Darwinian buttons”—the evolutionary impulse to ascribe agency to anything that seems human—are about to be manipulated more deftly than ever. Tech companies have clear financial incentives to drive such advancements. The psychological consequences for individuals and societies remain to be seen. For the time being, however, it appears that showing some civility to AI chatbots is worth the effort—for your own well-being and that of those around you.