Please, Thank You, and the Ghost in the Machine

The hidden habits behind our interactions with AI.

Speaking politely to AI reveals deep human instincts — from cognitive shortcuts to seeing minds where none exist. Courtesy shapes better interactions, but the “ghost” in the machine is only a reflection of ourselves.

KEY POINTS

  • Speaking kindly to AI trains it — and reminds us who we want to be.
  • Humans instinctively see agency in objects, from statues to chatbots.
  • Don’t mistake awe for agency: the real mind behind AI is still our own.

Last week OpenAI’s CEO Sam Altman shared an interesting statistic: the polite inclusion of “please” and “thank you” in many users’ ChatGPT prompts costs the company millions of dollars in compute expenses every year. On the face of it, that sounds absurd. Why are we wasting huge amounts of electricity on courtesies that mean nothing to a large language model?

Fifty-five percent of Americans say they speak politely to chatbots because “it’s the nice thing to do,” while another 12 percent say they do it because they want to keep their future AI overlords happy. These answers sound pretty straightforward, although we might wonder how serious some of those in the second group are. But if we dig a little deeper, we can see that the way we talk to chatbots offers a window onto some fascinating features of the human mind.

So, why do we instinctively treat non‑conscious software as though it possesses an inner life?

Convenience beats cognitive friction

The simplest answer is convenience. Polite speech is the default setting we practise all our lives with other humans. Abandoning it when we talk to bots forces the brain to switch to a new conversational rulebook. That mental gear‑shift is tiny but constant, so most people let the old habits run. Life is just easier that way.

As we discuss in our book Transcend: Unlocking Humanity in the Age of AI, choosing linguistic convenience can sometimes lead us down dangerous paths when it comes to letting AI make choices for us. But in this case, the path of least resistance actually yields some important benefits.

Politeness trains the mirror

Large language models function by making predictions: they learn to assemble sentences that look statistically plausible in context. Every time we choose to interact with an LLM in a polite way, we provide another data point that nudges the model toward reflecting a better version of ourselves. This isn’t just a moral issue. There are practical upsides here too.

As Kurtis Beavers, a senior designer at Microsoft, points out, if you speak to an AI model in a polite, collaborative, and professional tone, you increase the chances of getting polite, collaborative, and professional responses. Being nice can have workplace payoffs in a chat window just as it can when chatting to colleagues in the office. So, in this sense, politeness isn’t wasted – it is a kind of prompt engineering that mirrors the social engineering that goes on all the time in human-to-human interactions.
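To make that “prompt engineering” point concrete, here is a small, purely illustrative Python sketch. The prompts and the crude word-based token estimate are invented for demonstration and are not taken from the article or from any real usage data; they simply contrast a curt request with a polite, collaborative version of the same request and show the handful of extra tokens the courtesy costs, set against the tonal framing the model will tend to mirror.

    # Purely illustrative: these prompts and the whitespace-based token
    # estimate are invented for demonstration, not taken from the article.
    curt_prompt = "Summarize this report in three bullet points."
    polite_prompt = (
        "Hi! Could you please summarize this report in three bullet points? "
        "Thank you."
    )

    def rough_token_count(text: str) -> int:
        # Crude stand-in for a real tokenizer: count whitespace-separated words.
        return len(text.split())

    extra = rough_token_count(polite_prompt) - rough_token_count(curt_prompt)
    print(f"Curt request:   ~{rough_token_count(curt_prompt)} tokens")
    print(f"Polite request: ~{rough_token_count(polite_prompt)} tokens")
    print(f"Cost of courtesy: ~{extra} extra tokens per message")

    # Either string could be sent to any chat-style model; the polite version
    # costs a few extra tokens but sets the collaborative tone the model's
    # next-word predictions tend to reflect back.

The trade-off is the whole argument in miniature: a few extra tokens of compute on one side, a nudge toward a politer, more useful exchange on the other.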

One of the oldest habits in the world

Our impulse to treat objects as if they have minds long predates our interactions with silicon. We have been interacting with inanimate objects, and even attributing agency to them, for millennia. As Dr. Georgia Petridou, a scholar of ancient religion, reminded me recently, the Greeks and the Romans would dress statues of the gods, talk to them, and even attribute physical and social events in the world to them. And it wasn’t just representations of gods that had a kind of agency: the paintings on the walls of Pompeii and the layout of shrines steered the way people looked, walked, and felt as they moved around the city and lived in their homes. Even places, like city squares, shaped the types of interactions people had and took on a ‘character’ of their own.

Objects that command – and how we talk back

These kinds of habits aren’t just relics of ancient cultures. We still think the same way today. We take orders from road signs and we cajole or curse at stop lights that turn red at the wrong moment. We find winding old streets “beguiling” or “charming” while a dilapidated building can feel “oppressive.” Places and objects still exert a pull that seems to make us feel or act in certain ways. Some modern management thinkers even see leadership as something that emerges from the way humans and physical environments interact.

Awe, agency, and AI

We might be particularly vulnerable to treating AI models as beings that act of their own accord. Research into the origins of religious beliefs has found that humans have a built-in tendency to detect agency – intentional, purpose-driven action – under certain conditions, regardless of whether any purposeful agent is really present. In particular, a feeling of awe can make people disregard uncertainty about the existence of an agent and commit to the belief that one exists. The grander the phenomenon, the more likely we are to imagine an underlying agent that is responsible for it. Large language models may not be conscious, but they are undeniably awe‑inspiring in their potential, and that alone primes the mind to conjure a ghost into the machine.

So, should we keep saying please?

The answer is … probably. So long as we remember what is really happening, there is little harm in courteous interaction, and this behavior comes with some direct upsides. It lowers cognitive load, sets helpful norms inside the models, reinforces a habit of respect that leaks into the rest of life, and reminds us of the values we hope technology will amplify.

At the same time, we need to keep our anthropomorphizing tendencies on a leash. The “mind” behind the screen is a lattice of probabilities owned and optimised by whoever pays the electric bill. Good manners are a fine habit; blind deference is not. If we start treating a polite exchange as implicit evidence of personhood, we risk surrendering moral authority to systems that cannot suffer the consequences of their advice. So go ahead and type “thanks” if it makes the conversation flow. Just remember who is doing the real thinking – and who will be held responsible for following the advice of the reflection in the mirror.

[Source Photo: Shutterstock]

A version of this article appeared at Psychology Today.
