A Better Future Starts with Better Dreams
A vision to rekindle Harvey Mudd's mission for the world
Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please support my writing by becoming a patron via a paid subscription.
One of the things I want to be known for when my career is over is helping people build better dreams. The trajectories of our lives, as individuals and in community together, are deeply shaped by our visions of what a better future looks like. And those visions, in turn, are strongly influenced by our dreams.
The power of dreams has been on my mind recently because of a new one of my own: a Center for Science, Technology, and Societal Impact at Harvey Mudd. The idea for the Center is in response to a call for proposals from a recent initiative at Harvey Mudd, the Innovation Accelerator. Spearheaded by our President and the Board of Trustees, the Innovation Accelerator is a pretty exciting idea, especially within the walls of academia, where we're not exactly well known for rapid change. The accelerator is a venue for the campus community to pitch ideas for year-long seed grants of $20k each; the top ideas can then receive three additional years of funding as they grow and partner with the Office of Advancement to find long-term support. The proposal is in as of last Friday. Now the waiting game begins.
The thesis for the Center is anchored by two main observations. The first is that technology plays an outsized role in shaping our world. The second is that we are in desperate need of new dreams.
It's almost impossible to overstate the degree to which technology has shaped, and is continually reshaping, our world. The things we create alter our relationships: with ourselves, with one another, and with the natural world around us. We've all been dramatically shaped by technological innovations such as the Internet, the smartphone, social media, and artificial intelligence, not to mention even larger-scale innovations like agriculture and electricity that so fundamentally shape our existence that we cannot imagine life without them. We're living in David Foster Wallace's technological waters. "Morning, boys. How's the water?"
We are grappling with the wide-ranging consequences of technological innovation every day. Technology has undoubtedly enriched lives and led to greater material prosperity for many communities. And yet it has had many negative impacts: the degradation of the natural world around us, the breakdown of our social fabric fueled by social media feeds curated to stoke our worst impulses, and the deployment of algorithms that encode, systematize, and obscure bias. Discerning what a better world looks like is increasingly urgent. The coming years will only add to this challenge given the pace of innovation in artificial intelligence and biotechnology.
To address this challenge, we're going to need to start with a DFW-level exposé on the water. Revealing the depth of our immersion will require a breadth of education and learning that extends well beyond the typical STEM curriculum. A broad education of this sort, in the humanities, social sciences, and the arts, is particularly critical for builders. The builders, entrepreneurs, innovators, and engineers needed for a better future will depend on the wisdom that only the study of history and human systems can bring.
To what problem is our technology the solution?
As we seek to understand technology’s impact, we've got to start with a question: to what problem is <insert-your-favorite-technology-here> the solution? This framing turns the typical engineering mindset on its head, starting with the end in mind and working backward.
For much of the technological innovation happening today, the answer to that question seems to be material scarcity and the limitations of our frail bodies. We see the problems framed fundamentally through the lens of a lack of supply. The Silicon Valley ethos is almost totally wrapped up in it. At its root, they say, our problems can and will be solved by more money, energy, food, intelligence, data, etc. The answer is always more.1
I remain unconvinced that a better future is driven primarily by material abundance. To be sure, conditions of material poverty have a significant impact on human flourishing. And yet, the answer to the human problems we're wrestling with is almost surely not to add more, or at least not more of the things big tech thinks we need. We've been rumbling down this road for quite a while now, hearing that the devices in our pockets will help us to stay more connected than ever. Indeed they have.
While there are certainly many ways technology does help us to stay connected across distance, we rarely consider the downsides of these new opportunities. We're now less likely to be connected with our neighbors, preferring to interact with our digitally-mediated network. In the dystopian science fiction future that is increasingly our current reality, we'll find ourselves connecting through digital interfaces with humans and simulations of them alike, unable to tell the difference in any meaningful way.
If not the pursuit of material abundance, then what is the path toward increasing human flourishing? To start, instead of trying to move on to a post-human age, perhaps we should get more in touch with the things at the core of the human experience: our relationships with friends and family, which despite friction and frustration are deeply meaningful; our frail, creaturely bodies that so often remind us of our limitations and lack of power; our experience of the world, with all of its glimmers of beauty amidst deep brokenness.
Despite its many ills, technology is undoubtedly a part of our better future. However, technological innovation is not guaranteed to be aligned with our flourishing. As we build, we need to remain focused on the problems technology can solve as well as the deeper, more significant parts of life that it can only tangentially address. A better future starts with a contemplation of our past.
My hope for the future of Harvey Mudd is that we can rekindle our mission to train the next generation of leaders to understand not only the technical merits of a design but its ethical and societal implications as well. As an institution founded in the wake of World War II, the question of what a better world looks like has been at the core of Harvey Mudd’s mission from its outset. Our founding documents talk about helping students cultivate not only technical expertise but the responsibility "to anticipate the social effects of engineering activity and to design systems and advise clients in ways likely to maximize the social utility of engineering development" as well2. As the first Harvey Mudd curriculum was developing in the late 1950s, there was an urgent sense that technical potential was growing exponentially, forcing our students to “decide to what uses it is to be put and what values it is to serve." 3
Technical education tightly coupled with an understanding of societal impact is just as needed 70 years later. The way forward requires partnership across campus: enriching the existing curriculum with an integrated discussion of societal impact, not only in the Core Impact Course (Core099) and Clinic but throughout the curriculum, alongside the development of new courses. It also means creating space for a continuing conversation by bringing speakers to campus. Finally, it is an effort that will require collaboration across the college and the consortium, engaging existing offices like Career Services, the Hixon Center for Climate and the Environment, the Hixon-Riggs Forum for Responsive Science and Engineering, the Office of Civic and Community Engagement, the Division of Student Affairs, and the Claremont Colleges' intercollegiate program in Science, Technology, and Society.
Building a better future starts with better dreams. My mission is to help the next generation of our STEM leaders build good ones.
Reading Recommendations
Tyler Austin Harper, writing in The Atlantic about ChatGPT and honor codes. An interesting pairing with my piece last week on the state of the honor code at Harvey Mudd.
It is cultural because stemming the use of Chat—as nearly every student I interviewed referred to ChatGPT—requires an atmosphere in which a credible case is made, on a daily basis, that writing and reading have a value that transcends the vagaries of this or that particular assignment or résumé line item or career milestone. And it is economic because this cultural infrastructure isn’t free: Academic honor and intellectual curiosity do not spring from some inner well of rectitude we call “character,” or at least they do not spring only from that. Honor and curiosity can be nurtured, or crushed, by circumstance.
I told Benston that I had struggled with whether to continue assigning traditional essays—and risk the possibility of students using ChatGPT—or resort to using in-class, pen-and-paper exams. I’d decided that literature classes without longer, take-home essays are not literature classes. He nodded. The impulse to surveil students, to view all course activity through a paranoid lens, and to resort to cheating-proof assignments was not only about the students or their work, he suggested. These measures were also about nervous humanities professors proving to themselves that they’re still necessary.
I'm late to the party here, but "AI Is a False God" is well worth your time.

A common understanding of technology is that it is a tool. You have a task you need to do, and tech helps you accomplish it. But there are some significant technologies—shelter, the printing press, the nuclear bomb or the rocket, the internet—that almost “re-render” the world and thus change something about how we conceive of both ourselves and reality. It’s not a mere evolution. After the arrival of the book, and with it the capacity to document complex knowledge and disseminate information outside of the established gatekeepers, the ground of reality itself changed.
That isn’t to say AI is some benevolent good, however. An AI model can be trained on billions of data points, but it can’t tell you if any of those things is good, or if it has value to us, and there’s no reason to believe it will. We arrive at moral evaluations not through logical puzzles but through consideration of what is irreducible in us: subjectivity, dignity, interiority, desire—all the things AI doesn’t have.
And if we had any questions about the societal impact of generative AI, this heartbreaking story of Sewell Setzer III leaves no doubt. A sobering reminder of the harm that our creations can cause.
There is now a booming, largely unregulated industry of A.I. companionship apps. For a monthly subscription fee (usually around $10), users of these apps can create their own A.I. companions, or pick from a menu of prebuilt personas, and chat with them in a variety of ways, including text messages and voice chats. Many of these apps are designed to simulate girlfriends, boyfriends and other intimate relationships, and some market themselves as a way of combating the so-called loneliness epidemic.
But claims about the mental health effects of these tools are largely unproven, and experts say there may be a dark side. For some users, A.I. companions may actually worsen isolation, by replacing human relationships with artificial ones. Struggling teens could use them in place of therapy or asking a parent or trusted adult for support. And when users are experiencing a mental health crisis, their A.I. companions may not be able to get them the help they need.

Finally, a reflection on the role of human-created content in a world where AI-generated writing is becoming increasingly hard to distinguish, and on the eerie experience of listening to a NotebookLM podcast generated from the author's own work.
Language is what makes us human. Once you abdicate part of the writing process to a soulless machine you compromise your voice. As a reader I want to read words and ideas that have been woven and crafted by a human. If I know that AI was used in the process, not only do I lose all interest, but the writer loses credibility in my eyes.
The Book Nook
Finally cracked the virtual cover of AI Snake Oil by Arvind Narayanan and Sayash Kapoor this last week on my Kindle. What I appreciate thus far is the way that they help to make sense of the many different types of technology that exist under the umbrella terminology of AI. Distinguishing between predictive and generative AI helps to highlight why not all AI tools or applications are created equal. If you need any additional convincing to pick it up, check out the great review published earlier this month.

The Professor Is In
We’re getting to the end of the labs portion of my embedded systems class, which means that it’s time for one of my favorite labs in the class: an Advanced Encryption Standard (AES) hardware accelerator implemented on an FPGA. AES is used all over the place, and it’s fun to give students the chance to build hardware that performs a real-world task.
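Part of what makes AES such a satisfying hardware target is that its core arithmetic, multiplication in the finite field GF(2^8) used by the MixColumns step, reduces to shift-and-XOR logic. As a rough software sketch of that arithmetic (not the lab's actual starter code), here's the idea in Python:

```python
# Multiplication in GF(2^8) as used by AES's MixColumns step.
# Bytes are treated as polynomials over GF(2); products are reduced
# modulo the AES polynomial x^8 + x^4 + x^3 + x + 1 (0x11B).

def xtime(b: int) -> int:
    """Multiply a field element by x (i.e., by 0x02)."""
    b <<= 1
    if b & 0x100:          # overflowed past bit 7: reduce mod 0x11B
        b ^= 0x11B
    return b

def gmul(a: int, b: int) -> int:
    """Multiply two field elements by shift-and-XOR (no lookup tables)."""
    result = 0
    while b:
        if b & 1:          # low bit of b set: accumulate this multiple of a
            result ^= a
        a = xtime(a)       # a <- a * x
        b >>= 1
    return result

# Worked example from FIPS 197, Section 4.2.1: {57} * {13} = {fe}
print(hex(gmul(0x57, 0x13)))  # -> 0xfe
```

In hardware this becomes a handful of XOR gates and a conditional reduction per bit, which is exactly why AES accelerates so well on an FPGA.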
For a good video overview of AES, check out the video below.
And if you’re curious about encryption in general, this video on public key cryptography is a good introduction.
Leisure Line
We went to Carved at Descanso Gardens on Friday night and had a great time. Lots of cool art!
Still Life
With two family birthdays within a week, the end of October is full of kitchen science. This week was #1’s birthday. The cake was supposed to be Sally’s Baking Addiction Red Velvet Cake but ended up a bit light on the red food coloring. Nevertheless, it still tasted great!
1. Of course, this framing is fundamentally tied up in our capitalistic economic model. Whatever your thoughts on that may be, I would argue that it is futile to make our attempts for better dreams dependent on an entirely new economic model.

2. Harvey Mudd College: The First Twenty Years, p. 23

3. Harvey Mudd College Curriculum Study 1958, p. 15