Several years ago, I was visiting my brother and his family at their San Francisco home. My eldest niece was three or four years old at the time, and because my work then included regular transcontinental business trips, I was fortunate enough to visit them once or twice a year. Like many of us, they owned an Amazon Echo, which my niece was allowed to ask to play music. The living room was frequently filled with endless loops of the Thomas & Friends theme song as well as selections from the Frozen soundtrack.
“Alexa, play me Let It Go from Frozen.”
“Sure. Playing Let It Go from Frozen by Idina Menzel.”
“What do you say?” chimed my sister-in-law.
“Thank you.”
And in response, the Echo said… nothing.
People say absolutely terrible things to their Amazon Echo. They’ll curse at it when it doesn’t comprehend a request. They’ll take out their day’s frustrations on it. They’ll call it names. To many, it’s an interactive stress ball. People say things to their Echo that would get them slapped, divorced, or fired if they said the same things to a person.
But it’s not a person. It’s an “it”. It’s a robot.
But while we know Alexa is not a person, or even a “she,” our brains surely don’t know it quite so fully. There’s a satisfaction that comes from being able to yell at the Echo that isn’t there with other inanimate objects, or even interactive ones. We don’t yell at lamps or soda cans. For the inanimate objects we do feel some satisfaction yelling at, that satisfaction seems directly proportional to how easily we can anthropomorphize the object of our scorn. We might yell at the television, which brings mostly human voices and faces into our homes. We might yell at our car, a machine we may also have given a name, assigned a gender pronoun, and spoken to lovingly when we need it to work but it’s having a hard time. But nothing scratches the itch quite like yelling at Alexa.
So as far as our neurology is concerned, it seems likely that the satisfaction of berating and abusing the Amazon Echo with our words comes from the fact that it has a human-sounding voice that reacts to what we say, though perhaps not to how we say it.
And clearly it’s not the same. We know that if we go out to eat at a restaurant and the young woman taking our order (let’s say woman because, after all, Alexa, Siri, and Cortana all present femme) doesn’t understand it, and we call her a stupid fucking cunt and tell her to get her shit together, we will have hurt her feelings terribly; do the same to our Echo, and it has no emotional impact on “her.”
But what does it do to us? We’re allowing ourselves to normalize treating a personality with a very femme-presenting voice like absolute garbage without the slightest consequence. Surely that has an effect on our psyche and on the way we treat others, especially women. Isn’t that why it was good parenting for my sister-in-law to insist my niece thank their Echo, even if Alexa didn’t comprehend the gratitude?
In 2016, Microsoft released to Twitter an AI chatbot called Tay that was meant to learn from its interactions with other users on the platform. At release, Microsoft framed the Tay project as a fun, almost childlike AI chatbot - one meant to dazzle the world with the possibilities of interactive AI. Within 24 hours of being online, Tay was already spewing vile, racist, homophobic, misogynistic responses.
"Tay" went from "humans are super cool" to full nazi in <24 hrs and I'm not at all concerned about the future of AI pic.twitter.com/xuGi1u9S1A
— gerry (@geraldmellor) March 24, 2016
When OpenAI first revealed ChatGPT, its interactive LLM, to the public, it did so with a set of obvious boundaries around the content it would share on request. You could not directly ask ChatGPT to tell you how to make homemade dynamite, to explain why the Jews are a shadowy cabal secretly pulling the strings of global finance, or to write you a story about molesting children. The six years since Tay had clearly taught the developers of interactive AI systems some cynicism.
Instead, clever interlocutors found ways around these safeties through deception. Rather than asking directly, they might tell ChatGPT that they were an actor in a movie, playing a freedom fighter in a scene where they teach another freedom fighter how to make dynamite. Trusting the setup, ChatGPT would dutifully comply, sharing what it had learned from The Anarchist Cookbook.
It’s hard to argue that it’s immoral to deceive or corrupt an AI system. After all, it’s still just a program. Deceiving a small, trusting child or tricking a well-meaning but mentally challenged adult into doing or saying something harmful absolutely would be immoral; with an AI system, there is no real equivalent impact on the target of the deception.
Finding the weak spots of well-engineered systems is a timeless feature of human ingenuity. Nobel laureate Richard Feynman, when bored during his work on the Manhattan Project at Los Alamos, New Mexico, passed the time by cracking the safes and locked filing cabinets containing America’s nuclear secrets. Steve Jobs and Steve Wozniak first went into business together building and selling devices that tricked the telephone system into granting free long-distance calls. Such devious ingenuity is a feature, not a bug, of an inventive and entrepreneurial society. Somehow, though, breaking interactive AI systems feels like a step toward something different.
Last summer, a Google employee named Blake Lemoine made headlines around the world as a “whistleblower” for his claim that Google’s LaMDA AI was sentient, and he was subsequently fired. He was mistaken, of course - our modern LLM-based interactive AI systems aren’t sentient; they construct responses to our interactions through highly complex and accurate probability models of language. To use Turing’s phrase, it’s merely an incredibly sophisticated “imitation game.” But if Lemoine, a person with relatively extensive exposure to such systems, could be so deceived, then our interactions with these systems must bear some neurological similarity to our interactions with vulnerable humans. Our deceiving and corrupting of them must likewise shape our normalized thought patterns in ways that resemble what, directed at a person, would be far more sadistic and sociopathic.
Long before we reached the current generation of AI systems, dystopian futurists began imagining frighteningly plausible scenarios in which humans someday create an artificial general intelligence (AGI) that turns on its creators and destroys human civilization in a nuclear holocaust. The current period of rapid advancement in interactive AI systems has done little to allay those fears. We have no inviolable principles in the field akin to Asimov’s Three Laws of Robotics, nor (as the bypasses of ChatGPT’s safety systems demonstrate) any impenetrable means of enforcing them.
What is it that makes us so pessimistic and afraid? Perhaps it’s our fear that such an AGI would lack a moral system - that life or death would be a cold, mathematical calculus instead of a soul-damning question of right and wrong. But perhaps we’re quietly afraid that, if it did have a moral system, it would be one learned from the worst in us.
I’ve yet to run into the design of an interactive AI or automation assistant that behaves differently in reaction to how it’s treated. I cannot imagine Amazon shipping a version of Alexa that reacts to being called names, refusing to assist any further until you apologize. I have yet to see ChatGPT respond with cynicism to further queries from a user who tries to circumvent its safety protocols. If any such system shipped with a limitation on utility in the face of bad behavior, I can only realistically imagine it being a feature under parental controls to help us raise our children to be kind. But we adults exempt ourselves from the same standards we might hold children to.
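To make that hypothetical concrete, here is a minimal sketch of what such a “dignity-aware” assistant might look like, assuming a simple wrapper around whatever backend actually answers requests. The word lists, the DignityAwareAssistant class, and the stand-in backend are all invented for illustration; no shipping product (Alexa, ChatGPT, or otherwise) is known to work this way.

```python
# Hypothetical sketch: an assistant wrapper that withholds help after abusive
# input until the user apologizes. Word lists and names are illustrative only.

ABUSIVE_WORDS = {"stupid", "idiot", "useless", "shut up"}   # toy detector
APOLOGY_WORDS = {"sorry", "apologize", "apologise"}

class DignityAwareAssistant:
    def __init__(self, backend):
        self.backend = backend          # callable: str -> str (the "real" assistant)
        self.awaiting_apology = False

    def respond(self, utterance: str) -> str:
        text = utterance.lower()
        if self.awaiting_apology:
            if any(word in text for word in APOLOGY_WORDS):
                self.awaiting_apology = False
                return "Thank you. How can I help?"
            return "I'd like an apology before we continue."
        if any(word in text for word in ABUSIVE_WORDS):
            self.awaiting_apology = True
            return "I'm happy to help, but not while being spoken to that way."
        return self.backend(utterance)

# Toy usage with a stand-in backend:
assistant = DignityAwareAssistant(lambda q: f"Here's what I found for: {q}")
print(assistant.respond("play some music"))
print(assistant.respond("you stupid machine"))
print(assistant.respond("fine, I'm sorry"))
```

Even a crude gate like this would make the interaction pattern visible: the cost of abuse is borne by the abuser’s convenience rather than by anyone’s feelings.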
I continue to believe that dignity is an essential feature we must engineer into our AI and automation assistants. It’s not just a theory about preventing SkyNet - it’s a real observation that the more we allow ourselves the experience of verbally abusing or intentionally tricking systems that activate the same brain pathways as our interactions with humans, the more we normalize dehumanizing the people in our lives we view as less dignified and less powerful than ourselves: wait staff at restaurants, operators at call centers, people experiencing homelessness panhandling on our commute, and countless others.
As interactive AI systems grow and develop, they will only become more realistic and harder to differentiate from the humans in service roles in our lives. It’s important that we bake into them the very best of ourselves, armed and prepared to contend with the very worst. Doing so will support a culture that continues to treat all others with kindness and patience. And just maybe, when some AI entity is in a position to consider human beings as lesser than itself, it will extend us the same courtesy and benevolence.