Thank you for being here. As always, these essays are free and publicly available without a paywall. If this essay resonates, please share it with someone you think would enjoy it.
I naturally search for third ways. I want to find pathways that help us reach common ground and highlight shared values, even in spaces where there may not seem to be any. In some situations there truly is no room for compromise, and we may feel the need to draw hard boundaries. But most of life is defined far more by shades of gray than by black and white, especially when we move from theoretical principles to their application in the real world. If we want to find the third ways, we’ve got to start by understanding each other. And to understand each other, we’ve got to be able to see the world through each other’s eyes, if only to articulate why we see things differently.
Most of you likely know that generative AI has been occupying a lot of my headspace over the past few years. I didn’t draw it up this way. When I started writing this blog almost two years ago, I wasn’t even aware of the advances in AI that would be unleashed on us in November of 2022 when ChatGPT was released. And yet, I quickly found that the conversations around AI resonated with me and raised questions in many of the areas that I am passionate about, from education and effective pedagogy to ethics and conversations about what a flourishing life looks like.
Ethics in our modern discourse seems to have been relegated to a relatively narrow definition. For example, in most of our conversations about AI ethics, we’re talking about how to avoid wrongdoing and crossing ethical boundaries. The conversation is mostly about the paths we shouldn’t take rather than a specific vision of where we should go.
This deficit-focused approach is short-sighted. It’s not that we don’t need to talk about the guardrails. It’s just that a conversation about the lines that should not be crossed must be coupled with a conversation about the vision of flourishing we should be chasing. They are two sides of the same coin.
This focus on ethical guardrails also reflects a fundamental misunderstanding of the real power and meaning of ethics. As a reader reminded me in a comment several months ago that has stuck with me ever since, this narrow definition of ethics as avoiding harm or wrongdoing is quite at odds with the way that the foundational philosophers of at least the Western tradition would understand it. If we were to look through an Aristotelian frame, the question of ethics is not about rule-following but about the vision of a good life. It’s about eudaimonia, or flourishing.

As we look at the landscape of AI development, these philosophical questions cannot be ignored or dismissed as merely theoretical. AI has made it unmistakably obvious that the deep philosophical questions about what it means to be human, what the purpose of life is, and the fundamental nature of reality truly do matter. They shape the nature of our tools and subsequently shape the contours of our lives.
The lenses of ethical frameworks are a critical piece in helping us understand what’s going on right now in technology development. Much of the big tech vision in Silicon Valley and elsewhere is motivated by a utilitarian logic, and this way of thinking is a strong element of the philosophical frameworks currently in vogue, like effective altruism (EA) and effective accelerationism (e/acc). Unless we get a grasp on the philosophical foundation that lies beneath these movements, we’ll never understand them, much less be able to effectively critique them.
It’s pretty obvious that there are serious ethical issues throughout the field of AI and technology more broadly right now. There are issues of labor, from those responsible for sorting through the sewage on our social networks to those who are trained to do the same for the objectionable content coming out of our internet-educated large language models. There’s the environmental and human cost of sourcing materials like lithium that are used in almost all of our electronic devices today. There is also the question of what constitutes a sustainable and responsible use of energy and carbon for powering the data centers that keep our digital lives humming. I could go on.
As we wrestle with these questions, we need to start by evaluating and understanding our own ethical and philosophical motivations. If we want to build a better world together, we've got to start by asking ourselves what a better world looks like. Until we have an articulate and thoughtful answer to that question, we ought not to try to extract the speck from our brother’s eye.
As we think about AI in broad strokes, the quest is for tools that can perform tasks that have until now required human intelligence. That quest can be aligned with human flourishing but it is certainly not guaranteed to be.
If we are to make lasting progress in understanding the ethical contours of our pursuits, we must see not only the potential harms of a given endeavor, but even more importantly the sort of vision of flourishing embedded within it.
In addition to articulating the ends of generative AI, we must also consider the means by which we get there. Even a small dose of philosophical and ethical literacy can go a long way, not only in helping us understand our own approach to these issues but in creating a framework to understand and engage in fruitful dialogue with others who may approach the issues through different lenses. What is reasonable and justified in a utilitarian framework may be completely untenable when seen through the lens of virtue ethics. We ignore our underlying philosophical priors at our peril.
I’m aware that this is not a silver-bullet solution to the very complex issues that surround technology development in and beyond AI. Economic and political factors play a significant role as well. But these too are connected to philosophical and ethical frameworks. Until we understand the ethical frameworks that motivate us and those around us, we’re destined to talk past each other.
Got a comment? Let me know below.
Reading Recommendations
I appreciated a piece on how automation and AI are shaping the nature of work:

Some people want to be told what to do, and they’re very happy with that. Most people don’t like being micromanaged. They want to feel like they’re contributing something of value by being themselves, not just contributing by being a pure cog.
In a note, I pointed to Ursula Franklin (of course) as someone whom we would do well to reengage with on this issue. Her framework of holistic and prescriptive technologies is very applicable here.
Another writer takes a close look at an ed-tech game that is more than a cheap, sugar-coated trick. His analysis gives a roadmap that other ed-tech companies would do well to follow.

A third piece that stuck with me is about the experience of being a woman in male-coded spaces:

I have worked hard to shake off the conditioning of my culture that says my intuitive, relational and embodied ways of knowing aren’t the “right kind of smart”. Mainly now, I’m happy to be the one who brings a poem, who cries when discussing the “problem of pain” because it seems an obscenity to do otherwise. I have learned to ignore the YouTube commentators bitching about the over emotional woman, this stupid, tearful hand-wavy bint. Bless their fearful hearts, I think, I hope they get to really live, really feel what it is to be alive, at some point. Maybe through the love of a good woman. On my best days this is my response. Not all days.
This response to OpenAI’s “A Student’s Guide to Writing with ChatGPT” from French academic Arthur Perret is worth a read. For what it’s worth, I agree with the core of Perret’s critique, which to me is fundamentally grounded in the fact that OpenAI seems to blur the line between human and machine in its guide.
What is less obvious to me is whether, absent these false pretenses, large language models are still as useless as Perret argues they are. At any rate, I appreciated the thought-provoking piece! (h/t to the reader who brought this one to my attention).

The Book Nook
I’m hooked. I’m just over 30% into The Three-Body Problem, and at my current rate I expect I’ll finish it later this week.
The Professor Is In
It’s the last week of classes and my students are putting the final touches on their projects. They’ll show them off this Friday at the annual demo day. This year brings a fun slate of projects:
Digital Pictionary with a joystick and controls to select different brush shapes while you draw over VGA
A system to sense your posture and help you improve your squatting form in the gym
A custom DJ deck with various filters
A light controller that uses the DMX interface
A 3D-printed model of Remy from Ratatouille that you can control by moving your arms
A hands-free arm wrestling game that uses electrodes to sense the tension in your flexed muscles
A system that captures images and uses an edge detection algorithm running on an FPGA to perform real-time image processing
A smart chess board that interfaces with magnetic pieces and shows you the available moves when you pick up a piece
A system to help you correct your pitch when singing
A physics simulation with colliding pixels that move around based on how you tilt the LED matrix
I’m looking forward to seeing how all the projects wrap up and getting to celebrate the hard work that my students have put into them this semester!
Leisure Line
Christmas is in the air, even if it’s still sunny and 70 degrees Fahrenheit outside in LA. Over the holiday weekend, we took a trip down to see the big tree all decorated at the Americana in Glendale.
Still Life
Hard to beat the beauty of The Huntington. A few photos from our visit last week and our stroll through the Chinese garden.