I say, “thanks,” after an AI chatbot gives me a useful answer. I begin new chats with a hint of exuberance: “Hello!” I respond to obvious AI hallucinations with, “That’s not quite right. Why don’t we try…” Right now, I operate with some low-level version of niceness toward AI tools, gently cheering their successes and nudging them away from mistakes.
Not anymore. I changed my mind: I’m going to interact with AI tools differently, more bluntly. Less “please” and “thank you.”
My impression is that many people share this inclination toward conversational politeness when using tools like ChatGPT and Claude. The impulse to treat AI with respect is a good one. These tools are designed to imitate our relationships with real people, so it’s no surprise that we import the respect we show others in casual conversation. I’d guess that we are more inclined toward politeness when using large language models than when using other, similar technologies, like the virtual assistants on our personal devices (Siri, Alexa) or search engines (Google, or Bing if you are the CEO of Bing). Those tools don’t imitate the fullness of conversation. They’re task-oriented: show me this, perform that action. The promise of the popular, public-facing chatbots is that they offer a more conversational, dialogic, human (?) experience than other tools. So, we treat them like humans.
They’re not humans, though. These chatbots are futuristic versions of AutoCorrect: ultra-advanced prediction models that mimic a mind without becoming one. I don’t say that to disparage the technology. I’m drawing a clear line, one that I’m increasingly worried these tools are designed (intentionally or not) to blur for new users.
The human-ness of chatbots is improving, and it’s giving me goosebumps. Young people are falling in love with ChatGPT. Friends of mine are turning to it for therapy. Cue the montage of Her, Ex Machina, and Black Mirror. It doesn’t end great. There’s reason to celebrate these advancements, too: for example, I’m happy to see more human-ish AI tools in a medical profession that is quickly adopting them. But the shrinking of the Uncanny Valley makes me want a clearer sense of what AI is and isn’t.
AI isn’t human, but that’s a misstatement of the issue. The problem is that humans aren’t AI. The danger of importing our real-life, interpersonal tendencies into our interactions with AI tools isn’t merely that we might offer one too many kind words to a robot; it’s that our psychological current begins to run in the other direction: our human interactions start borrowing from our AI interactions. A question to a friend is not a prompt entered into ChatGPT. The patience I have for nonsense responses from Claude about how to pick the best replacement lightbulb is not the same patience I should have for a colleague telling me about their weekend. There are two categories here, and the appropriate levels of directness, patience, care, earnestness, and frustration are not the same.
I’m going to construct a different, less human relationship with AI because I want to safeguard the distinction between those two categories. For some, that’s unnecessary; the threat of Venn-diagramming their worldly and digital perspectives is silly. People are people; computers are computers. For others, like me, it’s worth keeping a more watchful eye.
I’d love to hear disagreement. Here are a few counter-arguments that I’m thinking about, in descending order of how much they persuade me:
No Servants, Please: We should be hesitant to treat AI tools as sub-human because the tasks for which many of us will use AI, like answering brief questions or making purchases or planning a schedule, are strikingly similar to many human jobs. We all interact with, or work as, customer service representatives and administrative assistants. Bluntness with AI might feel like a preservation of the human-to-human connection, but the exact opposite result obtains if a person begins to associate bluntness with certain tasks instead of certain actors. The right approach to scheduling a haircut over the phone with a barber should be warmer and more human than doing the same with a chatbot on the barber’s website.
Infinite Pool of Love: Can’t we just be respectful all the time? If someone can be polite to AI without fail, then there’s much less reason to worry about poisoning their human interactions. They can simply maintain a minimum level of decorum across the board. Nobody loses.
Altman’s Wager: To paraphrase a 17th-century philosopher: even if it’s unlikely that God exists and will pass judgment upon your death, the expected value of a blissful eternity in heaven is so great that the rational choice is to live a religious life. To paraphrase a bad dream I had last night: our eventual robot overlords might be kinder to those who said “please” before asking them for five more soup recipes. Maybe I’ll get a more comfortable Matrix pod when the time comes.
None of the above persuades me. I’m going to try my rough-and-tumble attitude and see how it affects my relationship with these tools. What’s your approach? Is this something you considered in your early interactions with AI tools? Am I wrong about how our AI interactions affect our personal lives?