ChatGPT’s drawbacks, and how to respond
By Mark Hurst • January 13, 2023

The latest tech-hype cycle is off to a roaring start. ChatGPT, a chatbot, has enraptured the media and the market, enjoying a level of attention unmatched since, I suppose, the hype around Bitcoin really got into gear. It’s called a hype cycle because eventually the wheel turns and everyone realizes that the latest tool isn’t going to revolutionize everything after all – and that most people might have been better off avoiding it. (Again, see Bitcoin.)

For now, though, Microsoft is jumping at the chance to buy into the trend, as it plans to invest $10 billion in OpenAI, the company behind ChatGPT. This will be a massive transfer of wealth from the Windows-tax collector to a chatbot company that will in turn spend the money as quickly as possible, for “growth at any cost,” on Super Bowl commercials and such. (My prediction for the TV spot: celebrities interacting with ChatGPT, pretending to be charmed and delighted when it outsmarts them.) Get ready for lots of media stories about ChatGPT and... sports! and... school! and... the weather! and on and on.

To be fair, AI text generators have been in use for some time. Futurism reports that CNET is already publishing stories written by AI (Jan 11, 2023). But that’s nothing new: the Associated Press has been using AI to write stories since 2015. ChatGPT and other AI text generators are likely to find some long-term use, which means the downsides might be with us for a long time, too.

One of the major problems with ChatGPT is that it answers definitively: you ask a question, it responds with the answer. The very design of a chatbot is to give just one answer – as opposed to, say, the existing model of a search engine. For example, not that I do this, but if you search Google, you get back multiple sites, helpfully arranged by how much they paid Google to be in the listings. (Kidding! Sometimes Google lists its own sites on top.) Seriously, whatever the countless flaws of surveillance-based search, you can at least scroll through multiple sources.

ChatGPT, on the other hand, in giving a single answer, can be confidently wrong. A Twitter user posted this example from a ChatGPT session:

Asked for a Python function showing what makes a good scientist, ChatGPT writes code checking whether the scientist is ‘white’ and ‘male’.
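
To make the bias concrete, here’s a minimal Python sketch of the kind of code being described – a paraphrase reconstructed from the description above, not a verbatim transcript of ChatGPT’s output (the function name and parameters are illustrative):

    # Paraphrase of the reported output, reconstructed from the
    # description above; not ChatGPT's verbatim code.
    def is_good_scientist(race, gender):
        # Judges "good scientist" on demographics alone, reproducing
        # biases baked into the model's training data.
        return race == "white" and gender == "male"

    print(is_good_scientist("white", "male"))    # True
    print(is_good_scientist("Black", "female"))  # False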

No doubt OpenAI was “shocked, shocked” by this answer and is making corrections. But the systemic problem remains: building a “knowledge base” [sic] on historical data encodes the biases of the past into today’s chatbot answers. And this is just one of the problems with AI-based answers. There are many, many more drawbacks listed in these posts:

Large Language Models like ChatGPT say The Darnedest Things, by Gary Marcus and Ernest Davis (Jan 9, 2023) logs the errors they’ve discovered in ChatGPT, including the one shown above.

Chatbots are not a good replacement for search engines, writes Emily Bender (Dec 24, 2022) in a Mastodon thread pointing to several research studies.

These are all known and documented problems. But there’s another post detailing what could happen as ChatGPT spreads throughout the internet, and what we’ll need to do as a result. From Mastodon user KeirFox, AI and the future of the internet (Jan 9, 2023), emphasis mine:

Forming tight, robust online communities will be extremely important in the not-so-distant future as AI begins to learn from and start to post to social media.

We will, in our lifetimes, experience “the information death” of the internet: when the majority of social media/forum profiles are AI bots created for a specific corporate or political purpose. . . . Trusted information stores will be lost. [AI bots] will upvote and downvote/dislike content, silencing any dissent and amplifying ideas/opinions as directed by their owners through sheer numbers greater than any human collective will ever be able to overcome. . . .

The future will be locked networks of trusted accounts – communities that we will have to establish to filter out the noise. We will very much have to know each other and stick together.

ChatGPT’s dangers will only be amplified by Microsoft’s $10 billion infusion. Once the effects fully take hold, draining the public internet of trustworthy content, people will naturally look for an oasis of community and trusted material.

And that, friends, is what we’re already building here at Creative Good. A group of us have joined up on the Creative Good Forum to post ideas and pointers, and discuss how to survive the tech age – maybe even to make tech better. It’s behind a paywall to defray expenses and “filter out the noise.” You should join us, before ChatGPT eats the internet alive. (And if you’re struggling with the cost, drop me a note and we can work something out.)

Finally, to give a sense of what we’re discussing on the Forum, here are a few recent posts:


Recent Forum post: About that Apple prediction...

An update on my prediction from December 2021, in My predictions for Apple’s smart glasses:

Within the next 18 months, Apple will launch surveillance glasses, upending all expectations of personal privacy.

This week the Wall Street Journal posted an article on Apple’s progress, with this detail (emphasis mine):

Apple’s long-rumored augmented reality headset may finally see the light of day soon. Ming-Chi Kuo, an influential analyst with TF International Securities who is well sourced in Apple’s supply chain, reported last week that the company is likely to announce the device this spring or during its developer conference in June, with a launch in the latter half of the year. Bloomberg News also reported over the weekend that the company is planning to unveil the product this spring.

If the product announcement counts as a “launch” of the privacy-destroying product, Apple will be right on time with my prediction. Nice job, Tim.

Creative Good members, post a comment on the Forum.


Recent Forum post: Surveillance update in New York and beyond

I did a “surveillance roundup” on Techtonic this week. See the playlist for a bunch of links to stories I featured.

Afterward, a listener sent me this article: Fighting ‘Big Brother’: Anti-surveillance advocates in New York seek to halt creeping proliferation of cameras and tracking software (AM New York, Jan 8, 2023). There’s just a ton of surveillance news, and it’s getting relatively little attention.

It doesn’t help that the governor of New York, Kathy Hochul, makes flip statements about the surveillance systems she’s installing:

“You think Big Brother’s watching you on the subways? You’re absolutely right,” Hochul said in September.

The mayor of New York City, Eric Adams, has done the same. With elected officials working in concert with the most powerful corporations in the world, it’s up to the rest of us to mount the resistance. Thanks to the Surveillance Technology Oversight Project (S.T.O.P., featured on Techtonic in the past) for leading the charge in New York.

Then from Wired, Iran Says Face Recognition Will ID Women Breaking Hijab Laws (Jan 10, 2023).

Creative Good members, post a comment on the Forum.


The tech bros try geoengineering

Here’s a thread about Silicon Valley startups attempting to “solve climate change” with their tried-and-(not)-true methods of holding government and civil society in contempt while they launch their own products with a growth-at-any-cost mindset.

This firm is working to control the climate. Should the world let it? (Washington Post, Jan 9, 2023) describes how a Y Combinator-alum tech bro has started a company to dim the sun, for profit, by releasing chemicals into the air.

From a Futurism article (Jan 11, 2023) summarizing the nonsense:

The goal was to have the balloons release sulfur dioxide particles at high altitudes, reflecting the Sun’s heating rays back into space, a process commonly referred to as solar geoengineering.

According to MIT Technology Review, the stunt — despite its tiny scale and unsophisticated methodology — likely marked the first time anyone has actually attempted such a feat.

“We joke slash not joke that this is partly a company and partly a cult,” [CEO and founder Luke] Iseman told MIT Tech late last year.

Ha ha, so funny, those tech bros joking about starting a cult as they sell access to unregulated chemical releases that will further erode what norms remain around protecting the environment.

Creative Good members, post a comment on the Forum.


Forum posts of note

• Yet another video clip of a Tesla in “Full Self-Driving Mode,” this time causing an 8-car pileup on San Francisco’s Bay Bridge. From the Intercept (Jan 10, 2023): here’s the surveillance footage. See the Forum post for a photo of the pileup. Fortunately, no one died.

• In This Is Broken, “Disability Dongles” takes on “elegant, yet useless solutions to problems we never knew we had” – written from the perspective of the disabled community. (Reminiscent of the PlayPump in Customers Included.) Here’s the Forum post.

• In our AI-in-the-workplace thread, an update: Workers are using “mouse-jigglers” to fool workplace surveillance software, reports the WSJ.

• In the Kids and Addictive Tech thread, two lawsuits worth noting: first, Seattle schools sue the tech giants for harming kids’ mental health. And second, Facebook/Meta, Snap, TikTok, and Google/YouTube are being sued for similar harms.


Fun Stuff

• Two fun items in Explainers: map of physics, and a crazy-detailed map of New York City’s trees.

• And another fun item in Fun Stuff: Smac McCreanor as Hydraulic Press Girl.

Here at Creative Good, we’re building a community of people working to make tech better.
Click here to join us and get access to the Creative Good Forum.

Until next time,

-mark

Mark Hurst, founder, Creative Good – see official announcement and join as a member
Email: mark@creativegood.com
Read my non-toxic tech reviews at Good Reports
Listen to my podcast/radio show: techtonic.fm
Subscribe to my email newsletter
Sign up for my to-do list with privacy built in, Good Todo
On Mastodon: @markhurst@mastodon.social

- - -