AI Chatbot: Do you need to use polite language when speaking to AI?

Frank Ocansey

Editor, PulseView

Experts say flattery, threats and sci-fi role-play rarely improve chatbot accuracy — but how you structure your prompts can make a real difference.

From telling a chatbot “please” and “thank you” to asking it to pretend it’s on Star Trek, advice about how to talk to artificial intelligence can range from thoughtful to downright bizarre.

But does any of it actually work?

Researchers and AI engineers say: mostly no.

Large language models (LLMs) — the technology powering tools like OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude — don’t have emotions, egos or intentions. They are advanced statistical systems that break your words into tokens (small chunks of language), analyse patterns from vast amounts of training data, and predict the most likely next word in a sequence.

Because of this, every word you type technically influences the output. But that doesn’t mean there’s a secret spell of “magic words” that guarantees better answers.

As Vanderbilt University computer science professor Jules White explains, “It’s not about word choice — it’s about how you fundamentally express what you’re trying to do.”

The politeness debate

In 2025, a user on X wondered aloud how much money AI companies were “losing” in electricity costs because people type “please” and “thank you.”

Sam Altman, chief executive of OpenAI, jokingly responded: “Tens of millions of dollars well spent. You never know.”

The comment was widely interpreted as humour — possibly even a tongue-in-cheek nod to fears of an AI uprising. But it sparked a practical question: does politeness improve performance?

Some research has tried to answer that. Results are mixed and sometimes contradictory:

  • A 2024 study suggested polite prompts occasionally led to more accurate responses.
  • Another small experiment found that an earlier version of ChatGPT performed slightly better when insulted.
  • Cultural variation appeared in some cases, with models responding differently to levels of courtesy in Japanese-language prompts than in English or Chinese ones.

However, experts caution that these findings are inconsistent and quickly outdated. AI systems are constantly updated. What might have worked in 2024 may not apply in 2026.

Modern models are better at identifying the core task in your request. Minor changes like adding “please” or ending with “This will be fun!” are unlikely to produce reliable improvements.

In short: politeness probably won’t make AI smarter.

The myth of “prompt magic”

The broader idea that there’s a secret formula for perfect prompts has been popularised under the term “prompt engineering.”

Early on, users experimented wildly. Some claimed threatening language improved compliance. Others flattered the AI, calling it “brilliant” or “intelligent.” One study even found that asking a chatbot to imagine it was on Star Trek improved its performance on simple maths problems.

But researchers like Rick Battle, an applied machine learning engineer, say those effects were largely unpredictable — a “crapshoot.”

As AI systems improved, these tricks became less relevant. Today’s mainstream tools are more robust and less sensitive to superficial wording changes.

The important shift is this: stop treating AI like a personality to manage, and start treating it like a tool to instruct clearly.

When role-playing helps — and when it doesn’t

One popular tactic is telling the AI to “act as a professor” or “respond as an expert.”

Sander Schulhoff, a researcher who helped popularise prompt engineering, warns that this can actually reduce accuracy when there is one correct answer. Why?

Because you’re nudging the model toward confident performance rather than careful reasoning. That can increase the risk of “hallucinations” — when AI generates plausible-sounding but incorrect information.

However, role-playing can be very useful for:

  • Brainstorming creative ideas
  • Practising interviews
  • Simulating difficult conversations
  • Exploring open-ended scenarios
  • Generating storytelling or fictional content

The key distinction: it works better for subjective or exploratory tasks than for factual, precision-based questions.

What actually improves results

Experts consistently recommend practical, structural techniques rather than emotional tweaks.

1. Ask for multiple outputs

Instead of requesting one answer, ask for three or five options.

For example:

  • “Give me three different introductions with different tones.”
  • “Provide five headline variations.”

This forces you to compare, evaluate and refine — making the interaction collaborative rather than passive.
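As a sketch of this technique, a small helper (the function name and wording are illustrative, not from the article) can wrap any task in a request for several distinct options:

```python
# Hypothetical helper: wrap a task in a request for several distinct options,
# so the model returns alternatives you can compare and refine.
def multi_option_prompt(task: str, n: int = 3) -> str:
    return (
        f"{task}\n"
        f"Give me {n} distinct options, numbered 1 to {n}, "
        f"each with a different tone or angle."
    )

print(multi_option_prompt("Write an introduction for our product newsletter."))
```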

2. Provide examples

If you want something in your writing style, show samples.

Rather than saying:
“Write casually but professionally.”

Try:
“Here are five emails I’ve written. Match this tone and structure.”

Concrete examples reduce ambiguity.
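The same idea can be sketched as a small few-shot prompt builder; the function name and phrasing below are illustrative assumptions:

```python
# Illustrative few-shot prompt builder: prepend writing samples so the model
# can infer tone and structure instead of guessing from adjectives.
def few_shot_prompt(samples: list[str], task: str) -> str:
    shots = "\n\n".join(
        f"Example {i}:\n{s}" for i, s in enumerate(samples, start=1)
    )
    return (
        "Here are emails I've written. Match this tone and structure.\n\n"
        f"{shots}\n\nNow: {task}"
    )

prompt = few_shot_prompt(
    ["Hi team, quick update on the launch.", "Hello all, a short note on scheduling."],
    "Write an email announcing Friday's meeting.",
)
print(prompt)
```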

3. Let the AI interview you

For complex tasks — like drafting a job description or building a business plan — tell the AI to ask you questions one at a time until it has enough information.

This adaptive back-and-forth improves context and reduces generic outputs.
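One way to package this pattern is as a reusable prompt template; the wording below is an illustrative sketch, not a quoted recommendation:

```python
# Illustrative "interview me" prompt: ask the model to gather context one
# question at a time before producing a final draft.
def interview_prompt(goal: str) -> str:
    return (
        f"I need help with the following task: {goal}\n"
        "Before producing anything, interview me. Ask one question at a time, "
        "wait for my answer, and stop once you have enough context. "
        "Then deliver the final result."
    )

print(interview_prompt("drafting a job description for a data analyst"))
```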

4. Stay neutral

If you’re comparing options, avoid revealing bias.

Instead of:
“I’m leaning toward the Toyota. Which is better?”

Ask:
“Compare the strengths and weaknesses of Car A and Car B.”

Otherwise, the model may mirror your preference rather than provide balanced analysis.

5. Be specific about constraints

Clear parameters improve precision:

  • Word limits
  • Target audience
  • Tone
  • Format (bullet points, essay, table)
  • Depth level (basic, intermediate, expert)

Specific instructions reduce guesswork.
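These constraints can be assembled mechanically. The helper below is a hypothetical sketch of that idea, not a prescribed format:

```python
# Hypothetical sketch: assemble an explicit, constraint-rich prompt so the
# model doesn't have to guess length, audience, tone, format, or depth.
def constrained_prompt(task, word_limit=None, audience=None,
                       tone=None, fmt=None, depth=None):
    lines = [task]
    if word_limit:
        lines.append(f"Keep it under {word_limit} words.")
    if audience:
        lines.append(f"Target audience: {audience}.")
    if tone:
        lines.append(f"Tone: {tone}.")
    if fmt:
        lines.append(f"Format: {fmt}.")
    if depth:
        lines.append(f"Depth: {depth}.")
    return "\n".join(lines)

print(constrained_prompt(
    "Explain how solar panels work.",
    word_limit=150, audience="high-school students",
    fmt="bullet points", depth="basic",
))
```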

So why are people still polite?

Surveys suggest most users are courteous to AI simply because it feels natural. Some even admit they do it jokingly “just in case” of future robot dominance.

But there are deeper psychological reasons.

Politeness can:

  • Make interactions feel more comfortable
  • Encourage thoughtful communication habits
  • Reinforce civility in daily behaviour

Philosopher Immanuel Kant argued that cruelty toward animals damages the moral character of the person being cruel. AI has no feelings — but habits shape behaviour. Speaking respectfully, even to machines, may reinforce patterns of courtesy in human relationships.

In that sense, politeness benefits the speaker more than the system.

The bigger picture

AI tools are designed to simulate human interaction convincingly. That illusion can make it feel as if tone manipulation influences “mood” or “attitude.”

But LLMs are not conscious. They don’t get offended. They don’t appreciate compliments. They don’t respond emotionally.

They respond statistically.

If you want better answers:

  • Be clear
  • Be specific
  • Provide context
  • Ask for multiple options
  • Iterate and refine

You don’t have to be polite to AI.

But communicating clearly — and thoughtfully — will always improve results.

Source: BBC.com
