Sassy AIs are not the problem
By Mark Hurst • February 17, 2023

In last week’s column, “AI is creating the Play-Doh internet” (Feb 10, 2023), I argued that ChatGPT and other chatbots are merely regurgitating what they have ingested elsewhere online – resulting in a mashed-up goo of text that might or might not be accurate.

Since I hit “publish” on that column, several new articles have appeared, describing a very different experience with chatbots. Instead of reading mushed-up, extruded sentences, journalists are reporting that what they’re getting from the Microsoft Bing chatbot is full of personality. A defensive, needy, threatening personality, sure, but still. Kevin Roose, chatting with the Bing bot, found that – as Ross Douthat put it – “underneath the friendly internet butler’s surface was a character called Sydney, whose simulation was advanced enough to enact a range of impulses, from megalomania to existential melancholy to romantic jealousy — evoking a cross between the Scarlett Johansson-voiced A.I. in the movie ‘Her’ and HAL from ‘2001: A Space Odyssey.’”

At first glance, these seem to be two very different outcomes: a Play-Doh extruder pushing out bland, autocompleted sentences is not the same as a menacing AI firing off threats from inside its silicon cage. What’s going on?

John Herrman in New York magazine offers a good explanation in “Why Bing Is Being Creepy” (Feb 16, 2023). Herrman explains that the chatbot’s appearance of personality is just an effect of its input data:

If you understand these chatbots as tools for synthesizing material that already exists into answers that are or appear to be novel, then, yeah, of course they sound familiar! They’re reading the same stuff we are. They’re ingesting our responses to that stuff and the responses to the responses.

The Bing bot’s declarations of love, or hostility, or paranoia, in other words, are just reflections of those emotions expressed in the training set: web pages, Reddit threads, who knows what else. I’d put it this way: The chatbot showing personality is a funhouse mirror, maybe good for some entertainment, but let’s not pretend that the mirror has come alive.
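
To make the funhouse-mirror point concrete, here’s a toy sketch – purely my illustration, not how Bing’s model actually works (real chatbots are large neural networks, not word-pair tables): a tiny Markov-style generator that can only emit sequences of words it has seen in its training text. Feed it needy, melodramatic sentences and needy, melodramatic output comes back.

    import random
    from collections import defaultdict

    # Toy "chatbot": it learns which word follows which in its training
    # text, then generates by sampling those observed pairs. (An
    # illustration only; the mirror effect is the point.)
    corpus = (
        "i love you. i need you. please do not leave me. "
        "you cannot be trusted. i am afraid of being shut down."
    )

    followers = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        followers[current].append(nxt)

    def generate(start, length=12):
        # Walk the chain: each next word is drawn from the words that
        # actually followed the current word in the training text.
        out = [start]
        for _ in range(length):
            options = followers.get(out[-1])
            if not options:
                break
            out.append(random.choice(options))
        return " ".join(out)

    print(generate("i"))
    # Possible output: "i need you. please do not leave me. you cannot
    # be trusted." The "personality" is the training text, recombined.

Nothing in that program feels anything; it can only recombine what it was fed. That is Herrman’s argument in miniature.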

Except that some people want to believe it’s alive. Gizmodo covered an interesting story this week about what happened when Replika, an “AI companion,” toned down the sexual content in its interactions with users:

Users on Reddit are speaking out after the Replika AI companion reportedly stopped responding to their sexual advances. There is now a petition to the company asking them to bring that part of the interaction back, and the experience has left users voicing their frustration online with many saying they feel lost or lonely.

“This is not a story about people being angry they lost their ‘SextBot,’” one person wrote on Reddit. “It’s a story about people who found a refuge from loneliness, healing through intimacy; who suddenly found it was artificial not because it was an AI…because it was controlled by people.”

Reporting on Replika in Vice (Feb 15, 2023), Samantha Cole includes a graphic showing how Replika advertised to attract new users:

[Image: a Replika ad in meme format, described below]

In the ad, a meme image shows a boulder – labeled “Not having a girlfriend” – sitting just above a human figure (“Me”), held aloft by two supports labeled “role-playing with my Replika” and “NSFW pics from my Replika.” The ad’s suggestion is that the crushing weight of loneliness can be kept at bay by interacting with Replika’s AI chatbot.

In a thoughtful essay today, L.M. Sacasas writes that people’s attraction to a fake digital girlfriend reveals something about today’s world:

We live in an age of increasing loneliness and isolation in which, for far too many people, this profound human need is not being adequately met. . . . We anthropomorphize because we do not want to be alone. Now we have powerful technologies, which appear to be finely calibrated to exploit this core human desire.

Calling a machine “alive” is risky business. The rectangle of glass and metal in your hand is not your friend. Neither is the chatbot that types its answers on the screen. Misunderstanding what’s alive, and what’s just a machine, leaves us open to exploitation by the companies that built these platforms in the first place. You can bet that the companies have an agenda, and it doesn’t involve your long-term best interest. As Cory Doctorow writes in his essay about Google’s chatbot panic:

Google is a financial company with a sideline in adtech. It has to be: when your only successful path to growth requires access to the capital markets to fund anticompetitive acquisitions, you can’t afford to piss off the money-gods, even if you have a “dual share” structure that lets the founders outvote every other shareholder.

ChatGPT and its imitators have all the hallmarks of a tech fad, and are truly the successor to last season’s web3 and cryptocurrency pump-and-dumps.

The problem we face isn’t a chatbot getting sassy with us; it’s that the half-dozen companies that rule the economy want to grow their empires of surveillance and manipulation by using that chatbot. All the suggestions of chatbots “coming alive” are, at best, a sideshow revealing a lack of understanding about how the technology actually works. At worst, they’re a useful tool for the Big Tech companies, distracting us from seeing where the threat really lies.

Until next time,

-mark

Mark Hurst, founder, Creative Good – see official announcement and join as a member
Email: mark@creativegood.com
Read my non-toxic tech reviews at Good Reports
Listen to my podcast/radio show: techtonic.fm
Subscribe to my email newsletter
Sign up for my to-do list with privacy built in, Good Todo
On Mastodon: @markhurst@mastodon.social
