“Technology’s unavoidable challenge is that its makers quickly lose control over the path their inventions take once introduced to the world.”
Mustafa Suleyman, The Coming Wave
We’re in the midst of a technological revolution. Unless you’ve been living under a rock for the past few years, this is probably not news to you. While generative AI tools like ChatGPT, DALL·E, and Midjourney are dominating media headlines, the technologies driving the revolution go beyond AI and encompass synthetic biology as well.
The wave is coming and is gaining steam day by day. The question is, how do we surf it instead of being crushed underneath it?
There’s been a lot of conversation lately about AI alignment—the process of ensuring that the output of AI programs is in sync with our morals and ethics—but alignment is only one layer of what we need to think about. What’s even more important is containment.
Simply put, containment means that we remain in control of our technologies. This is a classic problem with technology: the example I shared last week about the vaping company Juul, whose e-cigarette escaped the narrow path the company had charted for it, is just one case among many. Technological artifacts, as Ursula Franklin, Neil Postman, Lewis Mumford, and Langdon Winner, among others, remind us, are not neutral.
“Contained technology is technology whose modes of failure are known, managed, and mitigated, a situation where the means to shape and govern technology escalate in parallel with its capabilities.” - Mustafa Suleyman
The reason containment is so critical is that it creates space to experiment with new technologies and explore their potential to support human flourishing while protecting us from their worst possible outcomes. Unfortunately, not all technologies are equally easy to contain. And the characteristics of the technologies driving the present revolution, AI and synthetic biology in particular, make containment especially challenging.
When it comes to the cutting edge of technological development, it can be challenging to separate the signal from the noise. There’s a lot of hype around new technology, especially given the influence of the Silicon Valley ecosystem, our broader economic structures, and the academic publishing industry. And yet, I’m convinced there is a strong and important signal there.
Today I want to share some lessons from someone I think has his finger on the pulse. In his recent book, The Coming Wave, Mustafa Suleyman shares his view of what we should consider as we attempt to move through the coming wave of technological revolution. As someone deeply experienced in artificial intelligence, he has the technical chops to cleanly filter signal from noise. But his technological aptitude is coupled with a critical and insightful perspective on what these tools mean for society more broadly.
The Coming Wave is a book filled with some answers and even more questions. The right questions are extremely important, even if we don’t have complete answers to them.
What is the coming wave and how should we respond to it? Join me as I think out loud about how I’m processing these lessons and how I hope to integrate them into my teaching and scholarship.
A kindred spirit thinking about technology’s impact on society
The emergence of generative AI over the past year has motivated me to become a better student of the history of technology, its impacts on society, and the ways we need to interact with technology to foster human flourishing. It unfortunately hasn’t been an integral part of my own technical education and isn’t a core pillar of many of the technical training programs I’m aware of.
I’m always on the lookout for like-minded individuals who share concerns about technology and its impact on society. Earlier this year when I heard Mustafa Suleyman in a conversation on one of the podcasts in my rotation, I knew I had found another important voice in the conversation.
Mustafa is one of the co-founders of DeepMind, the AI company that created AlphaGo, and recently co-founded a new AI company called Inflection AI after DeepMind was acquired by Google. AlphaGo made headlines in 2016 after beating the world Go champion Lee Sedol, but that was only the beginning. In terms of scientific contributions, the subsequent DeepMind product AlphaFold was even more significant. AlphaFold is an AI program that predicts a protein’s three-dimensional physical structure from its sequence of amino acids. While AlphaFold hasn’t completely solved the protein-folding problem, it has certainly powered a huge leap forward in the field. If you have a spare ten minutes, this video from DeepMind explains more about AlphaFold and is well worth your time.
What I appreciate most about Mustafa is not his technical acumen, as impressive as it is. What really excites me is the way, as a designer and builder of technology, he strives to think critically about the impact that the technology he and his teams build will have on society. He grapples with the responsibility that we all have as designers and users of these technologies to protect against their potential negative unintended consequences in real and concrete ways.
It’s clear that Mustafa has thought deeply about these issues, and I’m grateful that he shared them with the world in The Coming Wave. In it, he makes a case for what he sees as the key forces behind the technological revolution in AI and synthetic biology and offers some suggestions for what we should do about it.
The Coming Wave is a must-read for anyone connected to technology, especially those interested in artificial intelligence. It presents a broad and balanced perspective on where we are and what we should do about it.
In the rest of the post today I want to briefly summarize the main lessons and questions I took away from reading his book and share how I’m planning to begin to address these questions in my teaching.
The complicated story of technology
Questions about technology and what it means for us as individuals and communities have become increasingly important to me during my development as an engineer. They’ve come even more to the forefront of my mind as I’ve moved from the seats of the classroom to the front of the room. Technology is complicated and even the most well-intentioned technological artifacts are inextricably tied to potential negative consequences. As Mustafa writes:
Technology is the best and worst of us. There isn’t a neat one-sided approach that does it justice. The only coherent approach to technology is to see both sides at the same time.
The problems of our world require technical excellence with a solid foundation of problem-solving and critical-thinking skills, and yet all too often we lose sight of the end goal as we get into the weeds of designing or using the next cool tool. Our goal to create a new technology that reshapes the world for the better is often tainted by our desire for money, power, or fame. We’ve been down this road before with the smartphone and social media, and now we’re walking it again with AI. It’s easy to get caught up in the hype and constant barrage of new developments and lose track of what we’re doing it all for in the first place.
Even if we assume the best of intentions, we still need a way to think through what might go wrong and why. You know the saying about what paves the road to hell. As we move forward, we need to keep looking for the 50,000-foot view and studying history for patterns and wise ways forward.
In this respect, one chapter of The Coming Wave that I particularly appreciated was the one where Suleyman addresses the characteristics of the coming wave of technology, breaking it down into four elements that highlight why we should be especially concerned about this next wave of technologies. The four features are:
Power asymmetry
Hyper-evolution
Generality
Autonomy
The components of the wave
Much of the analysis I’ve read in the press about the potential upside or downside of AI has been handwavy at best. Mustafa makes his case concrete and explains how these four characteristics combine in especially concerning ways. Next, I want to briefly break down what each of these features is and why it is important.
Asymmetry of power
The technologies of our current day are remarkable in that even cutting-edge versions of them are cheap and very accessible. Suleyman uses the example of the Ukrainian military’s use of hobby-grade drones that provide excellent performance for only a little more than a thousand dollars. The accessibility of these technologies shifts power from more centralized and well-resourced institutions like governments down to the individual.
The democratization of computational power feeds this asymmetry in artificial intelligence. While it takes a significant amount of computational horsepower to train these systems, GPT-4 can be accessed for little to no monetary cost. Furthermore, the growing availability of open-source LLMs means that you can download and run capable models on your personal computer with full knowledge of the architecture, training sources, and model weights.
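To make that concrete, here is a minimal sketch of what running a language model locally can look like, assuming the Hugging Face transformers library; the small open model named below is just an illustrative stand-in for the larger open LLMs people actually run:

```python
# Minimal sketch: running a small open-weights language model locally.
# Assumes the Hugging Face `transformers` library is installed; the model
# name is just an example of an openly available model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Containment means that we remain in control of",
    max_new_tokens=40,
)
print(result[0]["generated_text"])
```

The point is less the specific library than the fact that the full stack, weights included, can sit on a laptop.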
Hyper-evolution
Hyper-evolution refers to the ability of a technology to iterate quickly. This has long been the advantage of simulation over experiment, and as simulation capabilities become more powerful, more and more questions can be answered through cheap and efficient simulations rather than costly and time-intensive experiments.
Suleyman highlights the way that this has already begun to have a significant impact in synthetic biology where computational tools can help to eliminate or better focus experimental efforts. Improvements in robotics for automating these experiments will make this hyper-evolution even more powerful and terrifying.
If you think this is still off on the horizon, think again. Only a few months ago, a group from Carnegie Mellon University published a paper in Nature about a system they developed called Coscientist, which they describe as:
Coscientist, an artificial intelligence system driven by GPT-4 that autonomously designs, plans and performs complex experiments by incorporating large language models empowered by tools such as internet and documentation search, code execution and experimental automation.
What could possibly go wrong?
The power of generality
The power of generality lies in the fact that technologies like AI and synthetic biology are omni-use. These types of technologies are much more difficult to contain than technologies with more restricted applications. This is in many ways a hallmark feature of the transformer architecture that powers many of the generative AI tools we see today (transformer is the T in GPT). It’s been a big part of making the AI advances of the last decade translatable across different application areas. Suleyman writes:
Technologies of the coming wave are highly powerful, precisely because they are fundamentally general. If you’re building a nuclear warhead, it’s obvious what it’s for. But a deep learning system might be designed for playing games yet capable of flying a fleet of bombers. The difference is not a priori obvious.
The versatility of a class of technology like AI, which performs tasks across domains ranging from playing video games and beating humans at Go to generating hyper-realistic multimedia, figuring out how proteins fold, and acting as a coding co-pilot, highlights the breadth of what we are dealing with here. Figuring out how to contain, or even predict, the possible impacts of a technology with so many potential use cases is much more challenging than doing the same for a technology with a narrower intended use.
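As a small illustration of this generality, here is a sketch assuming the Hugging Face transformers library; each pipeline downloads a default demonstration model on first run, and the models themselves are just examples. The same transformer-based toolkit handles entirely different tasks through one interface:

```python
# Sketch: one transformer-based library, three unrelated tasks.
# Assumes `transformers` is installed; each pipeline fetches a default
# demonstration model the first time it runs.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
print(classifier("This book asks exactly the right questions."))

translator = pipeline("translation_en_to_fr")
print(translator("Technology is the best and worst of us."))

summarizer = pipeline("summarization")
print(summarizer(
    "The coming wave of AI and synthetic biology is powerful because it "
    "is general: the same underlying architecture can be pointed at "
    "games, language, images, and protein structures alike.",
    min_length=5,
    max_length=40,
))
```

One architecture family, swapped weights, wholly different applications: that is exactly what makes prediction and containment hard.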
Autonomy
Autonomy, the last of Suleyman’s four features, highlights the way that the coming wave of technology has the potential to elude our grasp. I’m not about to go full-on hype cycle here and claim that we’re a step away from these systems jumping into an iterative improvement cycle where they keep improving themselves, morph into a modern Frankenstein, and turn us all into paper clips. And yet, we’ve got to be mindful of how and where we keep humans in the loop of what we create. The more autonomy we cede, the less confidently we’ll be able to predict what happens next.
A technology containment scorecard
As I think about a way forward, the challenge is how to make these questions practical for scientists and engineers to apply as part of their design processes. One way to do that is a framework, or list of questions, that can help us interrogate and assess the difficulty of containing a particular technology.
As a first step, those of us in technology-adjacent disciplines can begin to be more curious about what we are using and developing. Mustafa’s four features are a good place to start but can be accompanied by additional follow-on questions. For example:
Asymmetry: To what degree does this technology promote asymmetry of power? Does it require centralized or scarce resources? If so, for what part of its development and use? (For example, AI systems are very expensive to train but orders of magnitude cheaper to run in production.)
Hyper-evolution: Does the technology have the potential for exponential growth or expansion? If so, what resources limit this evolution? If not, are there potential future developments that might unlock a hyper-evolutionary cycle?
Generality: How widely can the technology be applied across different domains? Is it limited to a particular problem or does it solve a more general problem at a deeper level that might be applied across various disciplines?
Autonomy: Does the technology require human intervention to operate? If so, how much and when? Do humans need to be involved as co-pilots, or does the technology run independently of human involvement after an initial setup phase?
Democratization: What happens when this technology is used not only by a select few early adopters but more broadly by a majority of users in a specific demographic, industry, or society? How does the impact of the technology shift throughout this adoption cycle?
Embedded philosophy: What ethical imperatives are embedded in the technology in question? What vision of the good life does this technology cast? If this technology achieves its intended impact, how does it change the world for good and for ill? If the vices embedded in this technology take hold, what impact might they have?
Stewardship: How well does this technology steward our limited natural resources? What negative environmental impacts might be associated with the proliferation of this technology? How are those costs factored into the overall economic picture of the technology?
Relationships: How does this technology impact human relationships with self, others, and the world? What practices does it encourage or discourage?
Of course, this list could be almost infinite and there are more dimensions to consider than I’ve listed here. But, the important part is to begin to ask the questions in the first place. Oftentimes the problem is that we have blinders on, whether or not we realize it, and don’t want to stop and reflect.
To be sure, we can’t come into work each day and ask this whole list of questions as part of our boot-up sequence. However, a reflective process like this would be a helpful practice to engage with regularly, as individuals and teams, on a quarterly or semi-annual basis.
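To make this reflective practice a bit more tangible, here is a minimal sketch, my own invention rather than anything Suleyman proposes, of how a team might encode such a scorecard so it can live alongside a project and be revisited each quarter. The dimensions, rating scale, and names are all illustrative:

```python
# Hypothetical containment scorecard: dimensions, names, and the 1-5
# scale are illustrative inventions, not a standard framework.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dimension:
    name: str
    question: str
    rating: int = 0   # 1 = easy to contain ... 5 = very hard to contain
    notes: str = ""

def default_dimensions() -> List[Dimension]:
    return [
        Dimension("Asymmetry", "Does it shift power away from centralized institutions?"),
        Dimension("Hyper-evolution", "Can it iterate or improve at an exponential pace?"),
        Dimension("Generality", "How widely does it apply across different domains?"),
        Dimension("Autonomy", "How much human oversight does it need to keep operating?"),
    ]

@dataclass
class ContainmentScorecard:
    technology: str
    dimensions: List[Dimension] = field(default_factory=default_dimensions)

    def overall_difficulty(self) -> float:
        """Average rating across dimensions; higher means harder to contain."""
        return sum(d.rating for d in self.dimensions) / len(self.dimensions)

# Example usage during a quarterly review of a hypothetical project.
card = ContainmentScorecard("open-weights LLM assistant")
for dim in card.dimensions:
    dim.rating = 4  # placeholder; in practice, set each rating after discussion
print(f"{card.technology}: containment difficulty {card.overall_difficulty():.1f}/5")
```

Extending it with the democratization, embedded-philosophy, stewardship, and relationship questions above is straightforward; the value is in forcing the conversation, not in the number it produces.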
The coming curriculum
The earlier we get into the habit of asking these questions, the better. Part of my goal in the next season of my own teaching and scholarship is to explore how to organically embed these questions alongside the technical content that I explore with my students in our courses.
The wave is coming, and the next generation of scientists and engineers will be a big part of how we address and respond to it. I hope to help foster these conversations among that generation, sparking the curious questions that will help us develop redemptive technologies that serve humankind and carefully consider how these tools will impact the flourishing of ourselves, our neighbors, and our world.
Suggested Reading
In addition to Mustafa’s book, here are a few pieces that I’ve enjoyed reading this last week and would encourage you to check out:
The 3Rs of Unmachining: Guideposts for an Age of Technological Upheaval lays out its authors’ guideposts for navigating technological upheaval. As I shared in a note earlier this week, I appreciate their call to recognize, remove, and return, but I would add a call to reenvision the role that technology might play in our flourishing as well.

How Philosophy Makes Technology Better was a great read in a series of essays about the intersections between philosophy and technology. I particularly appreciated the call to clarify our terminology and cultivate technological wisdom.

Another essay shares how William James can help us understand a different way to approach our future with technology that is neither optimistic nor pessimistic. Its author also highlights an excellent essay from earlier this year, On technological optimism and technological pragmatism.
And if you want to read some more thoughts on the conversation around techno-optimism from yours truly, here’s my essay from a few weeks ago about an idea I’ve labeled Redemptive Technological Humanism.
The Book Nook
I enjoyed The Coming Wave and found it very helpful in filtering out the noise around the technologies that are shaping our world. I’d encourage you to pick it up. It’s not a short or light read, but it’s very accessible.
If you’re looking for an easier introduction, I’d recommend this podcast conversation between Mustafa, Tristan Harris, and Aza Raskin from The Center for Humane Technology on their podcast, Your Undivided Attention.
The Professor Is In
This is the last week before classes start up again next Tuesday at Harvey Mudd. I’m looking forward to being on the teaching team once again for our sophomore engineering core course E80: Experimental Engineering. Lots of work going on behind the scenes this year which I’m hoping to share soon!
Leisure Line
The regular season of the NFL wrapped up last week. My team, the Tennessee Titans, didn’t make the playoffs this year and I’m a bit torn up about the fact that I may have seen our running back Derrick Henry in the two-tone blue for the last time ever.
The Titans have had a history of great running backs, from Earl Campbell and Eddie George to Chris Johnson, but none of them will ever rise to the level of #22 in my book. What I will always appreciate about Henry, even more than his on-field dominance, was the way he was a class act off the field, even when he had a bad game or things weren’t going his way. This video from his postgame press conference on Sunday, where he thanks a whole slew of the support staff by name, from cooks to janitors to uniform staff, is just the latest example.
I’m hopeful that the Titans will bring him back again next year after beefing up the offensive line, but I know that’s likely wishful thinking. Thanks, 22, for all the memories.
Still Life



There’s nothing quite like baking to get me in a flow state. This week, it was trying out a recipe for from-scratch cinnamon rolls. Very delicious and highly recommended.