A new year to make tech better
By Mark Hurst • January 6, 2023

My company, Creative Good, turns 26 years old tomorrow. (I’m celebrating by guest-hosting the WFMU Saturday morning show, Double Dip Recess – tune in on wfmu.org at 9am Eastern.) It’s not quite as momentous as the 25th, but there’s something to be said for persisting for another year. Thriving, really, compared to what’s happening elsewhere in tech. We’ve turned a corner: the Big Tech monopolies are getting some comeuppance at last. “Vindication” is how WFMU station manager Ken Freedman put it on Techtonic this week: after five years of raising the alarm about tech’s wrong direction, I can finally say that things seem to be improving.

So I feel guarded optimism going into 2023. Maybe things will get better, as monstrous tech firms are punished, regulators enact long-overdue reforms, and the rest of us can start building, and using, more ethical tools. Our little Creative Good community is still at it, too, discussing news and answering questions on the Creative Good Forum (you should join us) – all with the belief that there is an ethical path forward today.

Supporting this view is what’s happening at Barnes & Noble, a story nicely told by Ted Gioia in What Can We Learn from Barnes & Noble’s Surprising Turnaround? (Dec 28, 2022). B&N got a new CEO who actually – get this – loves books, and has been making sweeping changes to how books are sold in-store. For example: instead of accepting payola from giant publishers to push mediocre books onto the front table, store workers get to choose which books they are passionate about selling. Gioia suggests that success comes from authentic passion in one’s work – and for companies, from treating customers and workers well.

I certainly hope that Gioia is right, as it’s more or less the point of my book Customers Included and, for that matter, all those years of the Gel conference. It’s a pretty simple ethic: Do right by people. Believe in what you’re doing. Use your creativity to improve the world.

But it’s easier said than done. These past five years have shown that trillion-dollar companies can grow – and still, to this day, survive as money machines – with the exact opposite ethic: Exploit users and their data. Be cynical and opportunistic in your strategy. Pervert your creative powers to extract every last cent, and push the harm onto others.

Meantime, for every Barnes & Noble enjoying a measure of success, a hundred independent bookstores scrape by on goodwill and luck. My conversation with Jeff Deutsch a few months ago about his book In Praise of Good Bookstores showed how hard – and, yes, how rewarding – it is to run a bookstore with integrity. (Deutsch also praises Barnes & Noble.) If you compare Amazon’s unethical financial growth against the countless bookstores struggling to survive, it’s hard to conclude that ethical, passion-fueled business wins in the end.

Tech news offers daily examples of this sort of pressure. Consider the case of Adobe, which was found this week to be analyzing customers’ photos by default. As Mastodon user Baldur Bjarnason writes (Jan 4, 2023):

Turns out that Adobe is collecting all of its customers’ pictures into a machine learning training set.

This is opt-out, not opt-in so if you use Lightroom, for example, it defaults to adding all of your photos to the set. If these are unpublished pictures, work-in-progress, etc. they’ll still be analysed as soon as they’re synced.

The news traveled fast on Mastodon, eliciting surprise and indignation: “What fresh hell is this?” wrote one user. “Adobe has opted me in to letting them train their algorithms on my photos? That’s the end of me using their cloud storage for anything, ever.” I spotted a couple of replies from Adobe employees, reassuring everyone that the company is trustworthy and explaining how to change the relevant setting, but no one seemed convinced. There are surely a few UX teams inside Adobe today asking the key question: What are we doing here?
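To make the pattern concrete, here is a minimal, entirely hypothetical sketch in TypeScript – the setting name is made up, and this is not Adobe’s actual code or API – showing the difference between an opt-out default like the one described above and an honest opt-in:

```typescript
// Hypothetical sketch only – not Adobe's actual code or settings API.
// It illustrates the difference between an opt-out default (the user is
// enrolled until they discover the setting) and an opt-in default.

interface ContentAnalysisSettings {
  // Whether the user's files may be used to train machine-learning models.
  allowMachineLearningAnalysis: boolean;
}

// Opt-out default: the "dark pattern" version. Everyone is enrolled
// unless they find the setting and switch it off.
const optOutDefaults: ContentAnalysisSettings = {
  allowMachineLearningAnalysis: true,
};

// Opt-in default: nothing is analyzed until the user explicitly agrees.
const optInDefaults: ContentAnalysisSettings = {
  allowMachineLearningAnalysis: false,
};

// The honest approach asks first and records the answer.
function settingsFromConsent(userConsented: boolean): ContentAnalysisSettings {
  return { allowMachineLearningAnalysis: userConsented };
}

console.log(optOutDefaults, optInDefaults, settingsFromConsent(false));
```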

UX and purpose

A similar question comes up in a recent essay by Stephen Farrugia (Dec 31, 2022) about the ultimate purpose of user experience. Farrugia suggests that the very act of calling out a discipline as “human-centered” is a problem, as it suggests that doing the right thing is somehow exceptional:

If someone needs to be told to think and work in a human centered way when they are designing something, it should be a clear indication of how separated the discipline of design has become from what it is the design is being applied to.

Is there Dog-Centered Design in the dog product industry? Do the designers there need to be reminded of the purpose of the things they are designing? Do dog toy companies practice rubber-centrism, where they search for dog related problems that strong, but malleable, rubber can satisfy the purpose?

Farrugia is right. If we need to call out “human-centered design” as an aspiration of UX, it probably wasn’t the basis of the work in the first place. Indeed, the reality is that UX is more often practiced as a legitimizing force: at best, making tactical improvements while trying to ignore the company’s bad behavior; at worst, actively helping to hide the exploitation in the business model. I’ve written about this before: see my column Why I’m losing faith in UX (Jan 28, 2021).

In the end, I come back to the simple tech ethic I laid out back in July 2020. Whether you’re a product manager, a UX researcher, a developer, or even the CEO, I’d offer the same exhortation. Here it is:

Build something that acts in people’s long-term best interest. That means creating something good (and yes, I’m making reference to my company name, as I named it with this very concept in mind). Don’t overcomplicate the philosophical implications of the word “good”: I mean something that’s useful or helpful, as determined by the outcomes it has on the lives of users and their communities.

Don’t cheat. Don’t use deceptive “dark patterns” to nudge someone into acting in a way that harms their long-term interest. Don’t bury hidden “gotchas” in the Terms and Conditions. Don’t promise one thing but deliver another: that’s cheating and lying. Don’t take something (users’ personal data, say) and use it in a way they didn’t expect and would never have approved (selling to data brokers, say). That counts as cheating, lying, and stealing, and incidentally is also the profit engine of Facebook and Google.

Build a product that you would recommend to your own family and friends. This isn’t the full set of people to be concerned with (see next point), but it’s a good baseline as an empathy check. If you’re building something that you’d be embarrassed to show to the people closest to you, that’s a red flag.

Widen your scope beyond the features themselves. Consider how the product acts in the world, what effects it has on people’s lives, what secondary effects or consequences it might bring about. Widen your scope...
- in time (beyond the immediate future to the long term)
- in causality (beyond first-order effects to other consequences)
- in geography (beyond your local region, and beyond the urban or suburban environment you might be accustomed to)
- in the social graph (beyond people or types of people you already know – that is, beyond your own family, neighbors, and social cohort)
- in the organization (beyond your individual product team or business unit to consider the whole enterprise, which on a global level may be dependent on a toxic business model – see Facebook and Google above)

Listen to users to find out what they want BEFORE you build it. This suggestion flies in the face of current Silicon Valley thinking, which says to build it first (as per the wishes of investors, who prefer to play roulette by launching a million things at once) and then use the launch itself as the first opportunity for real customer input. I sincerely hope we’re moving past that phase now. As I wrote in Customers Included, there are tons of examples of companies that “built first, asked later” and wasted hundreds of millions of dollars. I can’t emphasize this enough: you should be building what will benefit users in the long term, and you can find that out by listening to users before you build it. (As a reminder, I advise teams on how to do just this.)

If your company doesn’t listen, quit your job – if you can. Tim Bray, a distinguished engineer at Amazon, quit his job there in protest: here’s the NYT profile from 2020. Stacy Mitchell quit her job at Yale after her director was found to have been paid (off) by Big Tech. I once did a whole show on whistleblowers in tech, featuring people like Jack Poulson, who quit his job at Google when he saw unethical behavior at the highest levels of the company. Many people can’t afford to quit their jobs – and others prefer to organize within the company – but whatever you choose to do, at least consider your options.

Overall, the approach is simple: Work for the long-term benefit of users and communities. Consider the outcomes. And don’t cheat. If we had a tech industry that followed that ethical approach, just imagine how different the world would be.

Here at Creative Good, we’re building a community of people working to make tech better. Join us.

Happy new year, everyone. Until next time,

-mark

Mark Hurst, founder, Creative Good – see official announcement and join as a member
Email: mark@creativegood.com
Read my non-toxic tech reviews at Good Reports
Listen to my podcast/radio show: techtonic.fm
Subscribe to my email newsletter
Sign up for my to-do list with privacy built in, Good Todo
Twitter: @markhurst

- – -