Can you actually be "too polite"?

Am I silly/crazy/odd/alone/too old-fashioned/etc. (I think, obviously all :stuck_out_tongue_winking_eye:) when I catch myself trying to “talk” to AI in a conversational style, inexplicably compelled to say “Please…” or “Thank you”… when I’ve asked a question or made a request? Or when I feel that I should provide feedback when it has offered a valuable hint/clue that, with only slight or obvious modification/extension, has yielded a successful result? I suppose this is the predictable/inevitable result of giving us old folks new-fangled gadgets and tools. I hate cellphones.

2 Likes

Working with AI is like growing hot peppers. If you’re too nice to them, you’ll ruin them.

5 Likes

…or co-op students.

3 Likes

I am polite. Hearing the kids bark orders or yell at their AI buddy disturbs me.
If it is costing a bit of extra compute I might try to do it a little less but it still feels wrong.

8 Likes

Currently using some minimal pleasantries, though question clarity is what really maximizes response quality. I regularly ask LLMs meta questions. For example, after a chat about the impact of using pleasantries, I asked…

“What’s the TLDR summary of key points and actionable takeaways? What level of pleasantries maximizes response quality, and keeps me off AI’s shit list?”

AI response...

:brain: TLDR: Do “please” and “thank you” help?

Not directly.
They don’t boost reasoning, accuracy, or intelligence.

But…

Politeness does shift the model into the “helpful explainer” distribution, because polite prompts tend to appear in high-quality training examples.

So you get better structure, more cooperative tone, and clearer explanations, but not smarter reasoning.


:firecracker: TLDR: What actually improves response quality

Use these 4 levers — these are the real performance boosters:

1. Role framing

“Act as my Objective Coach / senior engineer / scientist.”

2. Constraints

“Give 3 options,” “structured,” “precise,” “no fluff.”

3. Step-by-step mode

“Think carefully, list assumptions, reason step-by-step.”

4. Iterative refinement

“Draft structure → expand → refine.”

These massively increase answer quality.
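The four levers above can be sketched as a simple prompt builder. This is a hypothetical helper for illustration only, not part of any SDK; the role, constraints, and task strings are made-up examples:

```python
# Hypothetical helper -- just shows how the four levers combine
# into one prompt string; not any particular vendor's API.
def build_prompt(task, role="a senior engineer",
                 constraints=None, step_by_step=True):
    """Assemble role framing, constraints, and step-by-step
    instructions around a task description."""
    lines = ["Act as %s." % role]                  # 1. role framing
    if constraints:                                # 2. constraints
        lines.append("Constraints: " + "; ".join(constraints) + ".")
    if step_by_step:                               # 3. step-by-step mode
        lines.append("Think carefully, list assumptions, "
                     "and reason step-by-step.")
    lines.append(task)
    return "\n".join(lines)

# 4. iterative refinement happens across turns: draft -> expand -> refine.
prompt = build_prompt(
    "Review this database schema for scaling problems.",
    constraints=["give 3 options", "no fluff"],
)
print(prompt)
```

The point of the sketch is that the levers are additive: each one is just another line of explicit instruction, so you can drop any of them without rewriting the rest.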


:handshake: TLDR: How many pleasantries?

Optimal pleasantries = Light, minimal, neutral.

  • A single “please” in a directive? :+1:
  • Occasional “great, continue”? :+1:
  • “Thanks” at the end? Also fine.

Overly sweet?

Does nothing.

Rude or hostile?

Reduces clarity by shifting the model into a lower-quality conversational style cluster.


:fire: Actionable Takeaways (the real meat)

:check_mark: Use minimal politeness

Just enough to keep tone cooperative:

“Please explain…”
“Thanks — continue.”

No need for gushing gratitude.


:check_mark: Use explicit structure — this is 10× more important than politeness

“Give 3 sections: Summary, Analysis, Recommendations.”


:check_mark: Tell the model how to think

“Think step-by-step.”
“List assumptions first.”

This forces deeper reasoning.


:check_mark: Assign a role

“Act as my Objective Coach.”
“Act as a senior engineer.”

This massively improves quality.


:check_mark: Reinforce good turns

Short affirmations (not emotional, just signals):

“Good — keep that structure”
“Yep, same tone but add detail.”

These are powerful in multi-turn workflows.


:poop: Avoid the Shit List (yes, models have behavioral clusters)

You won’t offend an LLM emotionally, but certain user styles do push the model into lower-quality behavior modes.

:prohibited: These reduce answer quality:

  • abrupt one-word commands (“Fix”)
  • non-contextual insults
  • pointless hostility
  • sarcasm directed at the model
  • chaotic or contradictory prompts

These push the model into short, low-effort “casual banter” mode, not deep reasoning.


:rainbow: The sweet spot for high-quality outputs

Use:

  • 1 polite word (optional)
  • clear constraints
  • explicit structure
  • one reinforcement
  • role assignment
  • stepwise thinking

Avoid:

  • unnecessary sugary politeness
  • hostile or chaotic tone
  • unclear objectives

This combination reliably steers the model into its top-tier reasoning distribution.


The chat at https://chatgpt.com/share/69275634-5044-800b-9fe3-4c96cdcd7c6f has other actionable tips, including using “act as {role} …” to steer the model toward clusters of concepts relevant to answering a question or performing a task.

1 Like

I’ve pretty much stopped asking Google to “Please turn off the lights” now. It doesn’t seem to hurt, but doesn’t help, either.

The trick is to never forget when you’re talking to a person who might care that you’re being polite and when you’re talking to an AI that really doesn’t and might need the occasional “shut up now”. And yeah, it really does need it sometimes.

1 Like

I would take that Vice article with a grain (or a boulder) of salt.

Most of the large AI models prepend every question (even short ones) with a ton of extra information (the ‘system prompt’). Adding ‘please’ and ‘thank you’ consumes a relatively small number of extra tokens and a correspondingly small amount of electricity. I work in the tech industry (including on some of the infrastructure for these models) and would be extremely surprised to learn that it made a meaningful difference. On top of that, the amount of electricity spent on things you type into a chatbot is likely a pretty small chunk of its overall use (if you are worried about the total electricity demand of AI).

One of my least favorite things about engineers is that we tend to forget to talk to each other like people. Spending all day arguing with the computer will do that to you. I worry that if people learn to strip out niceties for the model, they’ll forget to add them back for their friends. I’d prefer to live in a world where we spend 1% more electricity and people are polite, rather than the alternative.

7 Likes

While the dozens of outlets that reported it at face value probably need to learn to notice when somebody is spitballing, it’s really Sam Altman you need to take with a truckload of salt.

I think there’s some truth to it, though. Simply adding ‘please’ to a query isn’t going to make a significant measurable difference, but responding to AI answers with ‘Thank you.’ certainly drives up costs, given that a stateless AI has to run the entire conversation once again (unless, of course, AI companies notice it’s costing them a lot and stop sending such final responses through the LLM). In that sense it could certainly add up to $10 million a month, which is just 1% of OpenAI’s $1 billion-a-month burn rate.
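The “stateless re-run” point can be sketched with toy numbers. All figures below are illustrative assumptions, not anyone’s actual usage data:

```python
# Toy model: a stateless chat API resends the full history each turn,
# so a trailing "Thank you." reprocesses every prior token.
# All numbers here are illustrative assumptions, not measured figures.
turn_tokens = [500, 800, 600]       # tokens contributed by earlier turns
history = sum(turn_tokens)          # accumulated context: 1900 tokens
thanks = 3                          # "Thank you." itself is tiny
final_turn_cost = history + thanks  # but it drags the whole history along
print(final_turn_cost)
```

The three-token pleasantry ends up billed as a full-context request, which is why it can add up at scale even though each individual “Thank you.” looks free.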

Microsoft staff, on the other hand, suggest being nice:

I have a terrible feeling that I’ve posted this twice before recently, but oh well: I do quite enjoy Professor Hannah Fry’s take on things. She’s a professor of mathematics, and her YouTube handle @fryrsquared says this person has to be taken very seriously indeed!

1 Like

I’m reminded of a Dane Cook sketch where you should be nice to the weird guy at work (and give him candy) so when he inevitably goes crazy and takes all your co-workers out, he skips over you and says “Thanks for the candy”.

Creepy Guy at Work

4 Likes

The real problems will start to be apparent when your AI won’t do something unless asked nicely.

5 Likes

I have a meme or cartoon for everything. :stuck_out_tongue:

10 Likes

I look at this two ways:

  1. You’re setting the tone for the conversation. If you want it to be polite, then you should be polite to it. If you’re in a hurry or angry, then the tone should be curt. A good AI will match the tone of the conversation (none do, AFAIK).
  2. The niceties are for your benefit, and they have real benefit. I am sure that if you measured things like stress hormones, you’d find that saying “please” and “thank you” actually calms you down. You have been classically conditioned (as most of us have) to associate those pleasantries with calm conversations between allies. That conditioning puts your sympathetic nervous system to rest and makes you feel calmer.

So do what feels right. You’re a complex social organism and you know what is appropriate.

6 Likes

I have a habit of ending conversations with “Gracias!” for some reason, then chatgpt responds with some paragraph in Spanish, then I remember I don’t speak Spanish.

10 Likes

Hannah Fry is a captivating woman…

I struggled with the stylesheets for a Markdown-based presentation (custom styling in Marp in VS Code). It was late at night, and “we” (Gemini and I) spent hours trying to make things look right.

In the end Gemini started swearing. I honestly was pleased! Felt like AI shared my intense frustration. A ghost in the machine?

3 Likes

It’s just smart planning to be in their good books for when they take over.

6 Likes

Not to get too deep, but I agree it’s always better to be polite. With 10 grandkids and three adult children, I think the moment we teach kids to first judge whether politeness is merited before offering it is the moment we create problems.

But maybe that’s too serious for a happy Thanksgiving day :smiley:

I have always loved this forum because of the never-ending polite behavior. With that said, in some groups, when some A-Hole rips on someone, my personal politeness can be tested.

Anyway enjoy the day!

7 Likes

That is me: respect is given, but it can be lost through actions. If you don’t show respect, you get none in response.

2 Likes

So true, but I think anthropomorphizing AI is a bad idea.

3 Likes

Yes, I agree. Things were just going off the rails, so I added to it. :smiley: I don’t believe AI deserves any status; it should just answer the prompts. There is no “dis” or “dat”: if it cannot, it is not ready for the scenario it is being used in. :upside_down_face: