The giant brain suck of 2023
By Mark Hurst • May 12, 2023
This week on Techtonic I spoke with Nita Farahany, Duke law professor and author of “The Battle for Your Brain: Defending the Right to Think Freely in the Age of Neurotechnology.” One of the main messages of the book is that brain surveillance is here, and it’s spreading. Companies like Facebook/Meta, Elon Musk’s Neuralink, and many others are developing products to surveil your mental state, use AI to infer your private reactions, and even – if their plans succeed – read your actual thoughts and turn them into text.
I recommend listening to the Techtonic interview:
• Stream the show (interview starts at 4:29)
• See links and listener comments
• Download the show as a podcast
I know, I know. This sounds like some kind of sci-fi conspiracy, a fictional dystopia that we have no reason to be concerned about. To answer that skepticism, I’ll just say again: It’s for real. Here’s a clip from NBC News (May 10, 2023) quoting a UT Austin professor who reports a successful experiment in mind-reading. Actual thoughts, turned into a stream of text, in real time. Nita Farahany, my Techtonic guest above, appears later in the clip.
This is not an entirely new development. As far back as 2006, Nature Reviews Neuroscience published “Decoding mental states from brain activity in humans,” about how fMRI could be used for a primitive sort of mind-reading. Seventeen years later, researchers have more accurate sensors, much larger volumes of data, and far more powerful AI algorithms to make sense of it all. This enables all sorts of possibilities for mental surveillance. For example, Nita’s book mentions a study from UC Berkeley in which researchers were able to extract private information – a PIN code and a home address – from a test subject’s brain, without the subject’s awareness of the intrusion. (While the subject was engrossed in playing a video game, images of numbers and addresses were briefly flashed onto the screen, allowing researchers to pick up unconscious signals of recognition from the subject’s brain.)
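To make the mechanism concrete, here’s a minimal Python simulation of the general idea behind that experiment: flash stimuli repeatedly, average the EEG time-locked to each flash, and look for the larger “recognition” response. To be clear, this is my own illustrative sketch, not the researchers’ actual pipeline – the sampling rate, the signal shape, and every function name here are invented for the simulation.

```python
import numpy as np

rng = np.random.default_rng(0)
FS = 250      # sampling rate in Hz (typical for consumer EEG; assumed)
EPOCH = FS    # analyze one second of samples after each stimulus

def synth_epoch(recognized: bool) -> np.ndarray:
    """Simulate one second of single-channel EEG after a flashed stimulus.
    Recognized items get a small P300-like bump ~300 ms post-stimulus."""
    eeg = rng.normal(0.0, 1.0, EPOCH)  # background noise
    if recognized:
        t = np.arange(EPOCH) / FS
        eeg += 0.8 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return eeg

def mean_erp(epochs: list) -> np.ndarray:
    """Average time-locked epochs: random noise cancels out, while the
    stimulus-locked response (the event-related potential) remains."""
    return np.mean(epochs, axis=0)

# Flash each candidate digit many times; the digit the subject recognizes
# shows the largest averaged response in the 250-350 ms window.
secret = 7  # the digit the subject "knows" (simulation only)
scores = {}
for digit in range(10):
    epochs = [synth_epoch(digit == secret) for _ in range(40)]
    window = slice(int(0.25 * FS), int(0.35 * FS))
    scores[digit] = mean_erp(epochs)[window].mean()

print("inferred digit:", max(scores, key=scores.get))  # -> 7
```

The unsettling part is how little this asks of the subject: no cooperation, no awareness, just repeated exposure and a sensor.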
The motivation for the tech industry here is pretty clear: more surveillance data means more profit. Even better if the data is private, and not yet harvested by other companies. You might feel protective of the thoughts, feelings, reactions, and everything else in your mind – but to the tech companies, it’s all an open field ripe for colonization and harvesting. Welcome to the giant brain suck of 2023.
I can hear the skeptic: “OK, maybe this tech is for real. But if you don’t like it, then don’t put on the brain-surveillance device in the first place.” I wish it were that simple, as though people could freely choose their tech. It’s like telling people to “just not use” whatever poorly designed platform – Windows, Gmail, Facebook, etc. – they’re already forced to use. Most Facebook users I talk to would leave it in a heartbeat, except for friends and family who are also locked into the sludge factory. Other people are locked into platforms required for their job. And this is where brain surveillance gets really dicey: once employment becomes contingent on sharing your mental data with the boss, the choice will be either to submit to the brain suck or to lose your job.
The skeptic, again: “Sorry, but there’s no way employers would require employees to wear a device that surveils their thoughts.” How I wish the skeptic were right. Here’s the reality: as I wrote almost three years ago in Skin care in a dystopia (Nov 19, 2020), I came across an actual Twitter post promoting Citrix’s vision of implanted workplace tech (the arrow for emphasis was mine).
Citrix wants us to imagine a future where implanted tech is a job qualification. So it should be even easier to imagine the workplace requiring non-implanted tech. These devices are already in the marketplace or on their way. Here are some of the common forms:
• a headband, as in the notorious case of a Chinese elementary school that outfitted its students with the tech (the headbands were later removed)
• wireless earbuds with EEG (brainwave) sensors attached – Farahany suggests in her book that Apple may be considering putting such sensors into AirPods – or headphones with EEG sensors in the padded ear cups
• smartwatches, trained to infer brain state from other biometric data (see the sketch after this list) – the Apple Watch already includes an ECG sensor
• other wearables, like a sleep mask with built-in EEG sensors, which Apple has patented (see US Patent #11,290,818)
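To show how little “brain” hardware the smartwatch case requires, here’s a hypothetical sketch of the kind of inference involved: a classifier that guesses a stress state from heart-rate-variability features. This is not Apple’s method or any shipping product’s – the features, the numbers, and the training data are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def hrv_features(rr_ms: np.ndarray) -> np.ndarray:
    """Two classic heart-rate-variability features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    return np.array([rr_ms.std(), np.sqrt(np.mean(np.diff(rr_ms) ** 2))])

def sample(stressed: bool) -> np.ndarray:
    """Synthetic wearer: stress tends to lower HRV (illustrative numbers)."""
    spread = 60.0 if stressed else 80.0
    return hrv_features(rng.normal(800, spread, size=120))  # ~2 min of beats

# Train on synthetic labeled data, then score a new reading.
X = np.array([sample(s) for s in [True] * 200 + [False] * 200])
y = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression().fit(X, y)
print("P(stressed):", round(clf.predict_proba([sample(True)])[0, 1], 2))
```

The point isn’t this toy model’s accuracy; it’s that ordinary biometric streams, which wearables already collect, are enough raw material for inferences about your inner state.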
My point is that there’s likely to be a profusion of brain-reading devices coming at us. Saying “just don’t use it” is an insufficient response, especially considering the sheer amount of money and power being deployed to spread this stuff.
The companies are the problem
The problem with mind-reading devices isn’t the technology itself. After all, BCIs (brain-computer interfaces) are already being used to deliver much-needed interventions to people suffering from stroke, ALS, paralysis, or other conditions. I’m all in favor of these medical uses – who wouldn’t be? – and hope the technology develops and spreads swiftly for the benefit of those patients.
The challenge, as with so many other new technologies, is that there aren’t many guardrails limiting this stuff to medical usage. We live in a largely unregulated environment where predatory tech companies can deploy any device, algorithm, or platform they want – and enjoy huge profits while harming millions of users – with no real consequences. (Did I mention that Big Tech spends more on lobbyists in Washington than any other industry?) So to properly understand the risk from brain surveillance, we must frame the discussion around the Big Tech companies that are getting involved. We can’t trust them. Whether it’s brain-reading tech, generative AI, or self-preferencing algorithms, all of their efforts point us toward a future of concentrated monopoly power and disenfranchised citizens.
Naomi Klein, writing in the Guardian about generative AI, makes just this point. From her May 8 column, a real barnburner that is worth reading in full:
[W]e trained the machines. All of us. But we never gave our consent. They fed on humanity’s collective ingenuity, inspiration and revelations (along with our more venal traits). These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.
Or read Ted Chiang, in his New Yorker essay (May 4), Will A.I. Become the New McKinsey?:
Just as A.I. promises to offer managers a cheap replacement for human workers, so McKinsey and similar firms helped normalize the practice of mass layoffs as a way of increasing stock prices and executive compensation, contributing to the destruction of the middle class in America. . . .
If we cannot come up with ways for A.I. to reduce the concentration of wealth, then I’d say it’s hard to argue that A.I. is a neutral technology, let alone a beneficial one. . . . We need to be able to criticize harmful uses of technology – and those include uses that benefit shareholders over workers – without being described as opponents of technology.
Substitute “brain-reading devices” for the mentions of A.I., and the points above still apply. The problem isn’t the tech itself, but rather the inevitable trajectory of the tech companies, and their financial partners, to continually disempower the rest of us – all of us – who aren’t senior executives or owners of those companies.
One more quote, this time from Brian Merchant in Your boss wants AI to replace you. The writers’ strike shows how to fight back (LA Times, May 11, 2023):
A big reason that the AI hype machine has been in overdrive, issuing apocalyptic claims about its vast power, is that the companies selling the tools want to make it all feel inevitable — to feel like the future — and have you believe that resisting it is both futile and stupid. Conveniently, most of these discussions eschew questions such as: Whose future? Whose future does AI really serve?
The answer to that is “Big Tech” and, to a lesser degree, “your boss.”
In the giant brain suck of 2023, the danger we face is from the companies. If we want to maintain our mental privacy, if we want to support the writers on strike, if we want to preserve any semblance of a creative or free society, we have to resist. This stuff isn’t inevitable, but we’ll need to address the source of the problem. We have to do something about Big Tech.
- - -
For more
• Once again, I highly recommend my interview with Nita Farahany on the May 8, 2023 Techtonic.
• The Guardian has more details about the mind-reading technology that turns thoughts into text.
• A review of implantable BCIs (brain-computer interfaces): How brain chips can change you (Insider, February 15, 2023).
• Our members-only Creative Good Forum has multiple threads on neurotech, including Neuralink and brain-computer interfaces are coming. I hope you’ll join Creative Good to get access and support my writing here.
Until next time,
-mark
Mark Hurst, founder, Creative Good – see official announcement and join as a member
Email: mark@creativegood.com
Read my non-toxic tech reviews at Good Reports
Listen to my podcast/radio show: techtonic.fm
Subscribe to my email newsletter
Sign up for my to-do list with privacy built in, Good Todo
On Mastodon: @markhurst@mastodon.social
- - -