Before we begin: You'll notice below that I'm asking for more support of my efforts here. Currently, less than 2% of subscribers are chipping in. The newsletter takes many hours a week to put together. If you want it to continue, join Creative Good.
Thanks. Now, on to the column. -Mark
Voice surveillance must die
By Mark Hurst • September 30, 2021
I have something to tell you that may both terrify and enrage you. It has to do with Amazon Echo, Google Nest, and other voice-controlled devices that seem to be sprouting up everywhere. The tech industry and the tech press haven’t told you the full story. But I’ll try to.
To put it politely, smart speakers are listening to you in ways that you might not expect. A recent book explains that they are “aggressively judging people by their voices . . . few of us realize that we are turning over biometric data to companies when we give voice commands to our smartphones and smart speakers, or when we call contact center representatives.”
The book is The Voice Catchers: How Marketers Listen In to Exploit Your Feelings, Your Privacy, and Your Wallet, by Joseph Turow, a professor at the Annenberg School for Communication at the University of Pennsylvania. This is the definitive account of how voice surveillance works today, and of the much, much bigger threat it will pose in the future.
I spoke with Turow this week on Techtonic, my radio show:
• Listen to our interview
• Download the whole show as a podcast
• See links and comments
What Turow explained, much like what he wrote in his recent NYT opinion piece, is that voice surveillance has been quietly growing, in scope and ambition, for years.
It all started with call centers. “Your call may be recorded for quality purposes.” How many times have we heard that phrase when calling into a 1-800 customer service line? The “quality purposes” are the recording of your voiceprint and the analysis of your on-call interaction, thus enabling the company to better manipulate you in the future (to buy more or just get lost). Turow writes, “Biometric initiation takes place without callers even knowing that the contact center is incorporating their voiceprint into a database.”
Did you know that Google, Amazon, Microsoft, and IBM have all entered the call-center business? Funny how that doesn’t come up in the nonstop announcements of cute surveillance robots and wearable tracking devices. (Just last week Amazon announced a flying spy-copter, a rolling spy-robot, and a service that gives outsourced gig workers access to your Ring spy-doorbell. No mention of call centers.) Outside the public eye, though, Big Tech is racing to catch up with call-center companies that have been working on audio surveillance for years, capturing your voice and extracting profitable intelligence from it.
One early feature that’s being rolled out is voice authentication, allowing callers to sign in to their accounts merely by speaking. Some consumer banks already encourage customers to identify themselves this way. Fraud is a risk, of course – imagine a criminal playing a recording of your voice to log into your bank account – though we can hope for nominal consumer protection there. The much greater threat comes from Big Tech itself, because of what it’s planning next.
Sorry for the interruption.
This newsletter isn’t free. If you’d like it to continue, please join Creative Good. As a member, you’ll gain access to all posts and comments on the members-only Forum.
Forget authentication. Big Tech wants extraction. Turow writes about “a catalog of other personal details that people unknowingly offer up about themselves when they talk,” like general physical health, specific disease conditions (mental health in particular), and even – I’m not making this up – whether someone is using birth control pills. (The pills cause “changes in quantitative measurements of the voice’s range and quality,” Turow writes, citing research by CMU’s Rita Singh.) Companies conducting voice surveillance are also making “inferences about people’s emotions, personalities, gender, social status (read income), ethnicity, weight, [and] height.”
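To give a sense of how low the technical bar is: the “range and quality” measurements Turow describes begin with basic acoustic features that a few lines of open-source code can compute. Here’s a minimal sketch in Python, assuming the freely available librosa audio library and a hypothetical recording called sample.wav (this is not any company’s actual pipeline, just the kind of raw measurement such systems build on):

    import numpy as np
    import librosa

    # Load a hypothetical voice recording at its native sample rate
    y, sr = librosa.load("sample.wav", sr=None)

    # Estimate the fundamental frequency (pitch) of each frame with the
    # pYIN algorithm, bounded to a plausible human vocal range
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr)

    # Keep only the frames where voiced speech was detected
    f0 = f0[voiced_flag]

    # Summarize the speaker's pitch range, one of the "quantitative
    # measurements of the voice's range and quality" cited above
    print(f"Pitch range: {np.nanmin(f0):.0f} to {np.nanmax(f0):.0f} Hz; "
          f"median {np.nanmedian(f0):.0f} Hz")

Real voice-profiling systems extract far more than pitch, but the point stands: once your voice is recorded, turning it into biometric data is cheap and automatic.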
Let’s review. Even if you avoid calling 1-800 numbers, if you’re anywhere in the audible range of any smart device, your voice is liable to be recorded and analyzed, without your knowledge or consent. This goes well beyond tracking you: just by listening, Big Tech can quietly build a dossier on you, your health, your ethnicity, and your day-to-day activities. That intelligence will then be weaponized against you.
Imagine an insurance company contacting Amazon to find out whether it should raise your rates, or perhaps deny your health coverage, based on what the smart speakers have heard. Imagine a bank contacting Google to feed your ethnicity into its mortgage-approval algorithm, for purposes of digital redlining. Do you have any doubt that Amazon and Google would sell off that data to the highest bidder?
What’s especially insidious is the secrecy of the whole affair. The devices (which, soon enough, we may not even be able to see) are recording us without our knowledge or consent – after which the analysis is fed into an opaque algorithm. If regulators complain about the unjust outcomes of the algorithm, Big Tech will dodge any responsibility. After all, these sophisticated AI platforms are so complicated that they defy all human understanding. (There’s the implied condescension, too: the algorithms are certainly too complicated for the puny understanding of people outside Big Tech.)
Resistance isn’t futile, but it’s difficult
Anyone who’s ready to actively oppose voice surveillance will face a number of obstacles. First, the fine-tuned user experience – sleek devices with friendly Alexa and Siri voices – makes it hard for people to detect that they’re being exploited. Turow calls this “seductive surveillance,” as in: “Creating a persona [like Alexa or Siri] perceived as a friendly and credible personality is the key to seductive bonding.” UX professionals working on voice surveillance should be clear on what “UX” stands for. As I wrote in January, UX is now “user exploitation.”
Another headwind is the fawning, shameful behavior of (not all, but much of) the press. Have you noticed the number of stories – in major publications – trumpeting the “news” of the next Amazon Prime Day sale? (Turow gives several examples taken from the New York Times, Wall Street Journal, and Bezos-owned Washington Post.) Even National Public Radio, which many Americans count on as a trusted news source, takes frequent breaks to remind listeners to use Amazon surveillance devices to listen to NPR shows.
(The tech press is compromised well beyond voice-related stories: see Components, a new analysis of consumer tech review media. Well worth reading.)
One more challenge comes from the spread of these devices into other areas of life. New home construction today increasingly includes pre-installed Amazon Alexa devices. New cars often roll off the line with microphones hidden inside. Hotels, too: the next room you stay in may have an Amazon device already activated, ready to record any audio in the room.
All of these expansions of audio surveillance into society, Turow writes, are part of a process of habituation. The more we see our neighbors, friends, and coworkers accepting the devices, the more likely we are to accept – with a sigh – that there’s nothing we can do about it. We might as well go along with the capture and analysis of our voiceprints for the purposes of corporate enrichment. As Turow explains, the risk is that
‘freedom’ may well become the corporate presentation of choices . . . We risk ending up with a society that is habituated, or resigned, to equating freedom with its opposite – biometrically driven predestination – in all areas of life.
Turow is right: voice surveillance is a fundamental threat to a free and open society. And it’s totally unacceptable. Hidden surveillance, secret analysis, and opaque algorithms that manipulate citizens for the benefit of the most powerful corporations in history? To quote the David Bowie song, this is not America. (Or perhaps, to quote Donald Glover, this is America.) Either way, we have to put a stop to it. Voice surveillance must die.
Facial recognition is under legal pressure in a number of American cities – in 2019, San Francisco, Berkeley, and Somerville, MA all enacted bans. Then last year, Portland, OR banned facial recognition as well.
Where is the audio surveillance ban?
Joe Turow writes with urgency, arguing that while voice surveillance is in its infancy, we still have a chance to do something about it:
Acting now is crucial. Companies are trying to make voice-first devices into a habit that people perceive as fun, emotionally satisfying, natural, and safe enough. But the climate created by companies and the press around voice-intelligence technologies seduces – and then habituates – many to give up a part of their bodies for analysis by companies that may then manipulate them in ways they may not know, understand, or approve. We shouldn’t bequeath to our heirs a twenty-first century that allows marketers, political campaigners, and governments to erode people’s freedom to make choices based on the claim that bio-profiling tells who they really are, what they really believe, and what they really want.
Well said, Joe. Let’s hope people are listening.
(Embedded audio devices used to be OK.)
Post a comment on this column (for Creative Good members)
This newsletter is funded by members who have chipped in to support Creative Good, and are connecting on our members-only Forum. To support this newsletter, click here to join Creative Good.
Until next time,
-mark
Mark Hurst, founder, Creative Good – see official announcement and join as a member
Email: mark@creativegood.com
Read my non-toxic tech reviews at Good Reports
Listen to my podcast/radio show: techtonic.fm
Subscribe to my email newsletter
Sign up for my to-do list with privacy built in, Good Todo
Twitter: @markhurst
- - -