Opinion

Hey Siri, How Are You?

We must become more mindful about how we talk to our technology
By Sara Tillinger Wolkenfeld
A man yells at his technology in an illustration created by ChatGPT.

All of a sudden we are like God in Genesis: We speak, and there is light.

From Claude, the chatbot that obliged me by drafting an email yesterday, to the Amazon Echo I address every night to conjure bedtime music for my youngest child, language functions as the magical formula of our current technological moment.

But along with great power comes great responsibility. In recent years, ethicists have debated numerous implications of artificial intelligence, yet one matter they have left relatively untouched is AI etiquette. Given our world of newly empowered speech acts—in which the very utterance of words to a computer mind can summon cars, pay tuition, and unlock doors (literal and metaphorical)—we should be paying attention to how we speak to artificial intelligence.

The issue is increasingly pressing. No longer are we just leaving messages on answering machines or programming a VCR; we are actually talking to the computers themselves. More and more of the conversations that used to involve speaking with a human are now handled by chatbots. Calling an airline? A computer will answer. Want to turn on some music? Just tell Alexa. And as we know from years spent on Twitter, the more anonymous an interaction and the less we know the other participant, the crueler we can be. It stands to reason that talking to computers doesn’t always bring out the best in us.

So, to put a finer point on the question: When we address AIs, should we be nice, kind, polite? Is it important to thank Echo, applaud Claude, say “please” to Alexa?

The ethics of this are not straightforward. When asked, ChatGPT offered the following thoughts about treating AI with kindness and civility: “As an AI language model, I do not have feelings, emotions, or the ability to be offended, so you don’t need to say ‘please’ and ‘thank you’ to me.”

That’s all well and good, but should we trust ChatGPT to answer questions about ethical behavior? ChatGPT will not be harmed by our speech, but, like all other forms of AI, it lacks the empathy to understand how we ourselves might be harmed by it. Certainly, the mother in me bridles at the idea that my children would think it okay to speak rudely to anyone or anything. Imagine a child speaking rudely to a family pet: it’s hard to say why it is wrong, but we feel intuitively that it is. So while it may not be important to ChatGPT that we express gratitude, it should be important to us.

We use our words, and our actions, to train our hearts. This is a central insight of virtue ethics, familiar to anyone who has read Aristotle, or his more recent exponents, like philosopher Alasdair MacIntyre and theologian Stanley Hauerwas. As a scholar of ancient Jewish texts, I am especially sensitive to the tremendous significance of speech. Generations of Jewish literature have emphasized the need to attend to the words we speak, not just for how the words affect others, but for how they change the speaker. We hear what we say even when no one else does, and the words we use establish patterns of behavior within our own psyches even when there is no corresponding intellect on the other end.

That’s what Jewish tradition tells us: that how we deploy our words is crucial because they are central to our humanity. Even though modern science tells us that other creatures also have the power to communicate with one another, Jewish tradition asserts that the power of language is definitional to what it means to be a human being. Genesis 2:7 describes how the Divine blows nishmat chayim, the breath of life, into the newly created Adam, rendering the human a living being. Onkelos, the first-century Aramaic translator of the Five Books of Moses, explains the transition to becoming fully human as being endowed with “a soul that speaks.” Becoming a human being means having the ability to use language, the very mechanism by which God made all of creation.

Our ability to speak defines us as human beings, and so how we use that power is what forms our character. An influential thirteenth-century Jewish legal work, authored anonymously by a father writing for his son, famously states: “A person’s heart and thoughts follow from their actions.” Behave with cruelty, argues the author, and you will become cruel. The same is true of speech: a person’s heart and thoughts follow from the words they express. Speech is a way for humans to shape the world around us, but the words we utter leave marks on our own souls as well. If we practice spoken derision or cruelty on AIs, we arrive honed and ready for it when we turn to human interaction. Our speech acts have the power to form our character, and to harm our own moral development.

That’s true even if your words don’t actually hurt others. Consider the following story from the Talmud, the compendium of rabbinic legal material and conversations redacted in about the sixth century. It’s a kind of moralistic fairytale, in which a frustrated husband is forced to confront the consequences of the falsehoods deployed to get him the food he actually wants. The wife of Rav, a third-century rabbi, always prepares precisely the food he did not request. If he asks for lentils, she makes peas, and vice versa. Their son perceives this dynamic and decides to solve the problem by conveying to his mother the opposite of his father’s requests, thereby ensuring that the father gets what he actually wants. Rav is impressed with his son’s ingenuity but ultimately condemns the behavior. Quoting the prophet Jeremiah, he says, “They have trained their tongues to speak falsely.” Although Rav appreciates that his son’s behavior is effective, the cost—becoming deceitful, becoming a liar—is too high.

The words our mouths shape are also the words that shape us, and we must ensure that the increasing intensity of our encounters with digital technology does not corrupt our ability to speak with humanity to those around us. We cannot depend on tech companies to program these bots to push back against unkind human interlocutors. The responsibility lies with us. In the age of artificial intelligence, we need rules of speech that ground us in the emotional intelligence humans have developed over the course of millennia. We don’t need to treat machines as though they were human, but we do need to ensure that we remain humane in all of our interactions.

There is a special Hebrew name for speech that does not hurt the other, but nonetheless taints the mouth of the speaker. In the Talmud, this category is defined by lewd comments, but the name, which translates to “tainting one’s mouth,” has broader implications. My power, as a human made in the divine image, is in my ability to speak. To abuse that power corrupts this divine gift.

As it becomes harder to distinguish between the chatbot providing customer service and the human customer-service representative, cultivating digital empathy will be ever more important. If we respond to emerging AIs with rudeness, anger, and cruelty, not only will we teach AIs the kind of rudeness we would not tolerate in our friends or children, we will ourselves become more callous people at our core.

Governments are beginning to regulate artificial intelligence, but without considering how human character is implicated in our interactions with AI. On August 1, 2024, the European Union’s Artificial Intelligence Act (AI Act), the most sweeping AI regulation to date, officially came into force. Meanwhile, with Peter Thiel’s protégé JD Vance running for vice president, Big Tech’s anti-regulation checklist is firmly on the ticket in the United States, poised to counter last year’s executive order on the safe development of AI. But government interventions tend to focus on shaping the uses of AI rather than on how AI affects our capacity to act as our best selves. We can’t fully control the gods of Silicon Valley, but we can hold ourselves to standards of speech that we would want to live with in our society.

Ethics of the Sages, a third-century compendium of rabbinic wisdom, teaches: “In a place where there are no people, strive to be a person.” In its ancient context, this was a call to exert one’s own humanity when no one else was willing to do so, but it also resonates deeply with our modern moment. The proliferation of non-human personas around us should inspire us to invest in that which is most deeply human about ourselves.

The same basic guidelines that we teach to kindergarten students should suffice: Say “please” and “thank you,” don’t yell unless it is an emergency, and don’t be obnoxious. Sometimes, we need to be reminded about these rules even when we are speaking with other humans. We can use the rapidly growing number of machines around us as a spur to reinvigorate our commitment to speaking with care. I’m not opposed to advocating for legislation that controls how we treat machines, or laws that require hard-coded fixes, but I’m more committed to, and more optimistic about, investing in our mindfulness about the ethics of speech.

As ChatGPT is quick to reassure us, current AI models don’t need us to treat them as we would human beings. The danger lies not in hurting the inanimate other, but in failing to preserve what is unique about the human soul. In a world in which the power of our speech sets machines in motion and has far-reaching impact, we need to exert that power to express the best of humanity. Refining our own humanity allows us to teach the artificial-intelligence models of the future, and the humans of the present who still surround us, what it means to interact humanely. Start by saying “please” to Alexa, and train your heart to be grateful.

Sara Tillinger Wolkenfeld is chief learning officer at Sefaria, an online database and interface for Jewish texts, and scholar in residence at Ohev Sholom Congregation in Washington, DC.
