Why We Build It Matters
A world with Artificial General Intelligence might be possible, but I'm pretty sure it's not the one we want
When we look at technology we should be thankful not only for its strengths but also for its weaknesses. These limitations are often blessings in disguise, guiding us toward uses that extend our humanity instead of bypassing it. The truth of the matter is, as I explained last week using the framework of the Innovation Bargain, if you don’t see the weaknesses and tradeoffs, you’re just not looking hard enough.
When we look at what we’re building, we should ask why and what for. It’s not enough to think about the impact on the world out there. We need to ask what it says about what’s in here.
Human nature is the common denominator
Even though our specific political, economic, and societal setting differs from that of the past, there is still much to be learned. We’re kidding ourselves if we think that the high-level dynamics that shaped the impact of the automobile, electricity, or the transistor are so fundamentally different from those surrounding technologies like gene editing, generative AI, or quantum computing that we can’t draw out high-level lessons that still apply.
Even if there is a case to be made that the forces surrounding the technologies of today are uniquely different than those in the past, there is one factor that remains a common denominator in the stories of all technologies: human nature. As we think about the future we want to create and the technology that we’re told will enable it, we should look carefully at the people who are driving these futures and what motivates them.
I still think a lot about this note that Audrey Watters shared a few months ago. In it, she remarks on how stories form and deform our world through the visions of their founders.

The reasons we build technology matter. They tell us not only about the motivations of the people who build them but also about the world that they envision and wish to create.
You may be an optimist, pessimist, or pragmatist. Regardless, we must be aware that every utopia has a dystopian twist, every fairytale has a corresponding nightmare, and the more polished the promised utopia, the more suspicious we should be that something is lurking beneath the surface.
Why do we want to build artificial general intelligence?
All of this brings me to the question that I’ve been wrestling with lately: why do we want to build artificial general intelligence?
First, a bit of background in case you’re not familiar with the idea of artificial general intelligence (AGI). At its core, AGI (also known as strong AI) is the quest to create a technology that emulates the capabilities we expect from humans: traits like the ability to learn, know, sense, reason, and act autonomously. Whether such a thing is possible in the fullest interpretation of its definition remains an open question, but the race is on to find out.
You’ll notice that I used the word emulate. This reveals my position on whether such a technology is truly achievable in its fullest sense. But the real point is that whether you believe AGI is possible says something about what you think it means to learn, reason, or act autonomously.
Belief in the possibility of AGI is fundamentally a statement about what it means to be alive, and more specifically about what it means to be human. There’s a bright red line between the people who believe it is possible to create a device that can actually do these things and the people who believe such a device can only ever appear to do them.
If it’s not clear, I’m pretty firmly in the second camp. The belief that AGI is possible is challenging (at best) to square with what I understand about the world, the nature of reality, and what it means to be human.
AGI is inherently cast as giving us superpowers. That ought to worry us.
But I’d like to set aside the concerns about whether or not AGI is possible and what we should do with it if or when it is invented. In the meantime, I’d like to focus on what the quest to build AGI says about us and ask whether or not it is aligned with creating the world that we want.
The biggest strength and the fatal flaw of AGI are the same: it is intrinsically designed to bypass human agency. It is the ultimate version of a technology that, as Andy Crouch writes in his book The Life We’re Looking For, is designed to give us superpowers. It’s a motorcycle to a bike. A computer to a typewriter. An autonomously operated drone to a sword. A recording of Lang Lang to a piano. These technologies are designed to create an impact with minimal input or effort from a human. They’re explicitly designed to bypass or replace human involvement instead of extending it.
In this way, AGI is part of a distinct branch of technological innovation that separates it from technologies of the past. To borrow again from Andy Crouch, we can see technologies in two main branches: those that extend human agency and those that bypass it. Andy highlights the ways that instruments—musical, scientific, or otherwise—are a model for the technologies we should hope for.
Much of what I read about generative AI seems to be envisioned as a precursor to AGI. Even though it is still full of weaknesses, technical and otherwise, we’re told to just wait, because AGI, in the form of GPT-5 or something else, is on the horizon.
But is this really what we want? Every technology forces us to make tradeoffs. Do we want to develop a tool to replace us or one to help us to more fully express our humanity? It’s a quest for superpowers. And it’s not good for us.
The weakness of weak AI is a gift
Ultimately, the weakness of weak AI is a gift. The fact that these tools do not have superpowers reminds us of our limitations and of something very real about what it means to be human.
To be human is to be limited, embodied, and fragile. We ignore these limitations at our peril. You needn’t look far to see that this is the case. History is full of examples where our desire for superpowers has gone wrong: Adam and Eve eat the forbidden fruit, Icarus flies too close to the sun, the Titanic crashes into an iceberg.
When we look at technology we should be thankful not only for its strengths but for its weaknesses. These limitations are often blessings in disguise, helping us to use them in a way that extends our humanity instead of bypassing it.
When we look at what we’re building, we should ask why and what for. It’s not enough to think about the impact on the world out there; we should also ask what it says about what’s in here.
Recommended Reading
Here are a few pieces that I enjoyed reading this week:
Synthetic Selves
A piece on the temptation to replace ourselves with machines.

Our religious impulse and deception of language
A piece on the way that LLMs deceive us.

It’s only inevitable if we let it be so
An argument that technological innovation like AGI is not a foregone conclusion. In the meantime, we need to ask the people building these tools to explain why the future they are trying to build is desirable.

The Book Nook
This week I just started reading Unreasonable Hospitality by Will Guidara. It’s been great so far. Lots of parallels with excellent teaching as well: holding high standards while providing the support students need to meet them.
Beneath all of it is a clear focus on the importance of people and their experiences.
A few quotes that resonated with me:
Danny’s partner Richard Coraine would often tell us, “All it takes for something extraordinary to happen is one person with enthusiasm.”
“People will forget what you do; they’ll forget what you said. But they’ll never forget how you made them feel.” This quote, often (but probably incorrectly) attributed to the great American writer Maya Angelou, may be the wisest statement about hospitality ever made.
The Professor Is In
This week is the first week of Clinic spring presentations. It just so happens that my team is the first to present this spring. To help them prepare, I shared ten tips I think about when making slides. These are not the ten tips on how to give a great talk or even the top ten tips. Just ten things I think about when preparing my own presentations.
Show, Don't Tell
One Slide, One Idea
Titles as Concise Thesis Statements
Make It Legible
Make It Yummy
Tell a Story
Practice Enables Creativity
Get Rid of The Noise
Be Yourself
Have Fun
Check out the video above for a quick explanation of why I think these are important and let me know if any of these resonate with you!
Leisure Line
This weekend we took a quick trip away to Lake Arrowhead with a few friends of ours. We had a great time, and the highlight of the weekend was probably taking the kids tubing at Snowdrift Snow Tubing Park. Great fun was had by all.
Still Life
We also had a chance to stop by Lake Arrowhead Village during our time in the mountains and enjoy some beautiful views of the lake.
Wonderfully thoughtful; thank you.
Maybe it sounds overly simplistic, but if we didn't cause our own births, doesn't that suggest something about what our primal stance towards the world ought to be -- something something gratitude humility....
I wonder what techno-capitalism would be like if the people running things incorporated some genuine humility into their work?
Wonderful post, Josh. I'm with you that emulation is what we will have, if we ever get something like AGI. I still think we're quite far away from AGI, and it is an open question whether machines could ever have consciousness in any way similar to human consciousness.
The essay of Dave Karpf's you linked to has a great account of how close human cloning seemed to many of the technologists and journalists talking about it in 1998. Those predictions seem hilariously wrong today. Predictions about AGI floating around now seem likely to read similarly in fifteen years.
For a longer take on the point in the Audrey Watters note, I recommend the essay "We're sorry we created the Torment Nexus" by Charlie Stross, which talks about dangers of taking entertaining stories written for the SF market as a blueprint for the future.