The Problem With Building What People Want
We think we’re serving the rider, but we’re really feeding the elephant
Thank you for being here. As always, these essays are free and publicly available without a paywall. If my writing is valuable to you, please share it with a friend or support me with a paid subscription.
Mark Zuckerberg’s recent interview with Dwarkesh Patel has been making the rounds lately. There are a number of segments worth commenting on (like the part where Mark talks about using AI to satisfy our demand for friends), but one in particular has been grabbing my attention over the last week. The video segment is here, but here’s the transcript (emphasis mine).
Dwarkesh Patel
On this point about AI-generated content and AI interactions, already people have meaningful relationships with AI therapists, AI friends, maybe more. This is just going to get more intense as these AIs become more unique, more personable, more intelligent, more spontaneous, more funny, and so forth. People are going to have relationships with AI. How do we make sure these are healthy relationships?
Mark Zuckerberg
There are a lot of questions that you only can really answer as you start seeing the behaviors. Probably the most important upfront thing is just to ask that question and care about it at each step along the way. But I also think being too prescriptive upfront and saying, "We think these things are not good" often cuts off value.
People use stuff that's valuable for them. One of my core guiding principles in designing products is that people are smart. They know what's valuable in their lives. Every once in a while, something bad happens in a product and you want to make sure you design your product well to minimize that.
But if you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong. You just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life. That's the main way I think about it.
Mark's guiding principle here is an idea that drives much of Silicon Valley culture: build something people want. Perhaps this is a good business strategy, but is it really what's good for us? To begin answering that question, we need to reflect on how our desires shape our decisions.
Elephants and their riders
My favorite metaphor to consider when thinking about how we make decisions is from social psychologist and NYU professor Jonathan Haidt. You may be familiar with Haidt from his recent book, The Anxious Generation, which has been a catalyst for many conversations around technology use, especially related to smartphones in schools. Reasonable people can disagree about the degree to which smartphones are the source of our current issues in schools, but today I want to talk about the foundation that Haidt is building on. To find that, we need to trace the thread back to his 2006 book, The Happiness Hypothesis.
In The Happiness Hypothesis, Haidt formulates a metaphor to help us understand our divided selves. At the risk of oversimplifying, the basic idea is that two main processes drive our behavior: conscious processes, which rely on reason, and unconscious processes, which rely on intuition and emotion. To illustrate this, he pictures a rider (reason) atop an elephant (intuition and emotion).
When we think about the decisions we make, we tend to assume (incorrectly) that the rider is steering the elephant. This is often self-deception. Haidt highlights this by describing a series of clever experiments designed to test the influence of our emotional versus rational decision-making processes. They show that the elephant is in control far more frequently than we might like. Much of the time, the rational rider is simply justifying the path the elephant is already taking rather than intentionally guiding it in a certain direction.
When we make what people want, are we appealing to their riders or elephants?
Using technology or being used by it
When we use technology, we all too often think only of the rider. We assume that we can engage with our tools rationally, carefully weighing the tradeoffs so that we use them where we benefit and avoid them where we don't.
But if you buy Haidt's metaphor and the theory of intuition versus reason it represents, the more accurate picture must account for the way technology shapes our intuitions and emotions. This often happens at a subconscious level, leading us to rationalize after the fact and conclude that it is something we want. If you're anything like me, you've had many experiences that suggest there might be something to this hypothesis.
Even as we, the users of these technologies, remain mostly unaware of the elephant's role in shaping our engagement, the same is not true of the engineers, teams, and companies designing them. It's an uphill battle for us from the start.
The people building these tools are well aware that the elephant dominates. Much of the digital world we interact with is specifically designed to steer it: infinite scroll, auto-playing videos, slot-machine-like variable rewards on intermittent reinforcement schedules, algorithmically personalized content, social validation features like the “like” button, gamification and streaks. These are only a few of the design patterns consistently and pervasively incorporated to make the devices and platforms we use stickier.
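To make the slot-machine comparison concrete, here is a minimal sketch of a variable-ratio reinforcement schedule, the conditioning pattern behind pull-to-refresh and infinite scroll. The function name and the 30% hit rate are illustrative assumptions of mine, not a description of any real platform's internals.

```python
import random

def variable_ratio_feed(pulls: int, hit_rate: float = 0.3) -> list[bool]:
    """Simulate a variable-ratio reinforcement schedule.

    Each 'pull' (a refresh, a swipe, an app open) pays off
    unpredictably. Behavioral research on intermittent reinforcement
    suggests this unpredictability is what makes the habit sticky.
    """
    return [random.random() < hit_rate for _ in range(pulls)]

if __name__ == "__main__":
    for i, rewarded in enumerate(variable_ratio_feed(pulls=20), start=1):
        print(f"refresh {i:2d}: {'novel post!' if rewarded else 'nothing new'}")
```

Run it a few times: the payoffs cluster and dry up unpredictably, which is precisely what keeps the elephant pulling.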
These engagement tactics are often, by design, invisible to the average user. Better to lull the rider to sleep than to tip them off that the elephant is heading in a direction they might take issue with. We may pick up on some of these habits over time if we happen to take time for self-reflection, but most of us rarely consider the particulars of the platforms we engage with.
If, as Mark argues, we trust that the people using the tools we create are smart and should be the sole arbiters of whether a technology is useful, then we should be upfront about not only the benefits but the risks as well. We would do well to learn from the medical and pharmaceutical industries, which have a much more nuanced understanding of the technologies they develop. In medicine, we have language for side effects and potential complications. What would it look like to have a similar conversation around our digital tools?
A nutrition label for elephants
As we enter the age of generative AI, we need a different set of design principles and guidelines. Our digital devices and the software that runs on them already cater to our elephants, and much of this is intentionally done to escape our notice. The digital world we interact with is designed with a veneer of trusting the user; the truth is that all sorts of mechanisms are baked in that short-circuit our riders and tempt our elephants to walk in a certain direction.
As a first step, we will need to hone our self-awareness. I'm preaching to my smartphone-addicted self as much as anyone else here. We all could benefit from a regular, close examination of our relationships with our devices and platforms. How are we using them? In what ways are they shaping us? Where are the undercurrents of formation escaping our detection?
To do this, I’d like to propose a nutrition label for the food our elephants are being fed. Here are a few ideas for categories (a rough sketch of what such a label might look like in code follows the list):
Emotional Impact: What emotions are being evoked?
Habit Formation: Is my use becoming habitual?
Attention Capture: Is my interaction intentional and active or mindless and passive?
Social Validation: How does it cause me to compare myself with others?
Dopamine Release: How are my interactions with this technology rewarded?
Distraction Potential: How frictionless is the experience?
Mood Regulation: How does my use of this technology make me feel?
Behavioral Consequences: How does my use of this technology shape my connections with myself and others?
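As a thought experiment, here is one way such a label could be encoded. This is a hypothetical schema of my own devising: the field names mirror the categories above, and the example values and 1-to-5 scales are invented for illustration, not measured from any real product.

```python
from dataclasses import dataclass

@dataclass
class ElephantNutritionLabel:
    """A self-reported 'nutrition label' for a digital product."""
    product: str
    emotional_impact: str          # what emotions are being evoked?
    habit_formation: int           # 1 (deliberate) to 5 (compulsive)
    attention_capture: int         # 1 (intentional) to 5 (mindless)
    social_validation: str         # how does it invite comparison?
    dopamine_release: str          # how are interactions rewarded?
    distraction_potential: int     # 1 (high friction) to 5 (frictionless)
    mood_regulation: str           # how does using it make me feel?
    behavioral_consequences: str   # effect on connections with self and others

# Example: filling out the label for a hypothetical short-video feed.
label = ElephantNutritionLabel(
    product="short-video feed",
    emotional_impact="amusement, envy, restlessness",
    habit_formation=4,
    attention_capture=5,
    social_validation="like counts and follower tallies invite comparison",
    dopamine_release="unpredictable novelty on every swipe",
    distraction_potential=5,
    mood_regulation="I reach for it when bored or anxious",
    behavioral_consequences="less unstructured conversation at home",
)
print(label)
```

The point is not the code itself but the discipline: forcing ourselves to fill in every field makes the elephant's diet visible.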
Building what people want is not necessarily a bad thing, but what desires are we catering to? Are we appealing to people's aspirations and goals? Or are we feeding junk food to their elephants?
Got a thought? Leave a comment below.
Reading Recommendations
A reflection from
that ability is a temporary state.
For me, the reasoning is a bit simpler, and a bit more tautological. Ability is, more or less, a temporary state. Each of us comes into the world helpless, and we leave it helpless. Many of us—so many more than you think, more than you know—become temporarily or permanently in need of help along the way. It could happen to you tomorrow. It could happen to your loved one tomorrow. Ability is not a stable state, nor is it an indication of moral worth.
A symposium from the MIT Media Lab titled “Can we design AI to support human flourishing?” I haven’t yet been able to watch it, but it’s on my list.
A thoughtful and nuanced post from
on the energy use of AI in context. This is one of only a handful of posts I’ve seen from folks looking to quantitatively compare the environmental impact of their AI work with other activities we take for granted. It’s a helpful exercise.
A great post from
on the hidden cost of convenience.
And a post on the role of expertise in the age of AI:
Universities are not helpless bystanders to the datafication of expertise, but key stakeholders. A university education is supposed to empower students to shape their futures, including how they relate to different techno-social configurations of work. And universities have responsibilities beyond the development of a new generation of experts – for example, responsibilities to justice, equity, the pursuit of knowledge, and a viable planet on which to live – that require a critical assessment of how these technologies might reshape expertise and what harm they might do in the process.
The Book Nook
With a lot of travel and the busyness of the end of the semester, my reading time has taken a significant hit. Hoping for more reading time once summer is in full swing.
The Professor Is In
Had a great lunch at Daddyji in the Village with a few of my students last week.
Leisure Line
Made some pretty delicious lemon blueberry scones this weekend.
Still Life
The kids and I had a Saturday morning adventure this week while Mom got away for some much-deserved rest and relaxation. As part of our adventure, we stopped by Home Depot and picked up some seeds. They picked carrots and mini pumpkins. We’ll see how they grow.