Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription.
Their idols are silver and gold, the work of human hands. They have mouths, but do not speak; eyes, but do not see. They have ears, but do not hear; noses, but do not smell. They have hands, but do not feel; feet, but do not walk; and they do not make a sound in their throat. Those who make them become like them; so do all who trust in them.
One of the most powerful and seductive myths of the modern world is that technology is neutral and easily aligned with our intended uses. The truth is much more complicated.
This week was a significant one in the arena of generative AI. Last Monday, OpenAI held an event where they announced the release of GPT-4o (the "o" is for omni). This shiny new model is like GPT-4 but better: faster, cheaper, and more functional. (Un)coincidentally, this was the day before Google's own I/O event, where Google unleashed a slew of genAI-infused products. Sam Altman says that he "tries not to think of competitors too much." I guess OpenAI's launches just happen to land on the day before their competitor's events.
At any rate, amidst all the fanfare one thing became clear last week: the metaphorical eye of Sauron is on education. From the video of Sal Khan sitting beside his son Imran while GPT-4o guides him through a trigonometry problem to Google's 86-page paper describing their approach to “responsibly developing” generative AI for education, it's clear that big tech sees education as the first major market for these tools.
Education makes a good first target for many reasons: it has the potential for significant impact, serves a massive market, and involves fewer regulatory hurdles than other areas like healthcare (although perhaps not as few as some of these companies might realize).
But in case there is any doubt in your mind, I'll make it clear: I have concerns.
The Absent-Minded Professor is a reader-supported guide to human flourishing in a technology-saturated world. The best way to support my work is by becoming a paid subscriber and sharing it with others.
We're in danger of putting a fresh coat of paint on a sinking ship. What we need right now is a re-invigorated conversation about what education is for. Only then can we begin to think about how to embrace (or reject) generative AI in educational spaces. Not only that, but we need to have a bigger conversation about the potential impacts of embedding generative AI into educational contexts and the values inherently supported by these tools.
That conversation is critically important and one I've been tackling in various ways for quite some time now. But it's also a long conversation and one that will require us to carefully tease apart the many different roles that education plays in our lives.
My central thesis is this: education is about persons. Yes, vocational training is one part of that. To be sure, having a meaningful and economically viable career is an important outcome. And yet, if the students emerging from our courses are equipped to think about making an impact out there without recognizing the importance of cultivating their lives in here, we've done them a disservice. Education is about building economically valuable skills, but even more importantly it's about forming character and cultivating virtue.
Before I get too carried away preaching that sermon again, I want to return to a more pressing point: do not underestimate the intrinsic properties of generative AI and the way those properties will shape its impact.
I'm all for experimenting with generative AI. In fact, I believe this is imperative for educators today. The devil you know and all that. But, we must experiment with a clear view of the nature of this beast.
To borrow an analogy from Jonathan Haidt's book The Righteous Mind, we're at risk of being the small rider on top of the large elephant, deluding ourselves that we are directing the elephant when, in fact, we are doing nothing other than convincing ourselves that we want to go where the elephant is already heading.

To understand the direction AI is taking us, there is no more important person and idea in my mind than the singular genius of Marshall McLuhan and his phrase: "the medium is the message." Today, I want to unpack the message embedded in the medium of AI and urge us to take seriously the values embedded in this technology.
Marshall McLuhan on AI
“The medium is the message" is a reminder that the channel through which information is conveyed has a significant influence on the information itself. It's natural to think that the way information reaches us is relatively insignificant—that reading a book or listening to an audiobook is essentially the same. But the channel through which the information flows unavoidably shapes it.
Not only does the channel shape the information, but the way it shapes it is laden with value. AI is like any other technology, and in the words of Melvin Kranzberg, "technology is neither good nor bad; nor is it neutral." The medium not only is the message; the medium has a message.
Putting Kranzberg together with McLuhan gives us an interesting perspective on the world we're about to be living in: on the one hand, AI is malleable and like any technology can be put to good or bad uses. On the other hand, it unavoidably shapes our intended uses.
This has particular relevance as we think about how to engage with AI and how to integrate it into our lives. Before we do anything, we must first clearly understand McLuhan and be aware of the specific messages in the medium of AI. If we want to apply AI in ways that support the flourishing of humanity, we've got to understand and explicitly mitigate its potentially exploitative aspects. If we want to use this technology to support rather than destroy relationships, we need to be aware that generative AI as a class of technologies is not neutral on this point.
Unfortunately, the message of the medium is often hard to notice. The way technology shapes information is almost always invisible unless we are explicitly looking for it. We don’t stop to think about how the same underlying idea is shaped differently when it’s presented through text, image, or video.
But once you see it, you can’t unsee it. Communication is a good example. The way you interact with your best friend is very different whether you’re having coffee in person, talking on the phone, or texting back and forth. Each of these mediums acts as a filter, highlighting or downplaying aspects of the information we are trying to share. Text removes any information conveyed using body language. Speaking conveys emotion through our tone. If you want more examples, McLuhan’s book Understanding Media is full of them—numbers, housing, comics, the car, the telephone, the phonograph, and even games.
As we think about trying to bend AI toward positive ends, we need to build a clear understanding of the inherent orientations embedded in generative AI. In a recent podcast conversation with Nilay Patel, the editor-in-chief of the technology news site The Verge, journalist Ezra Klein talked about McLuhan applied to AI. The relevant part of the conversation starts about 45 minutes in. Klein muses:
I’ve been trying to think about what is the message of the medium of A.I. What is a message of the medium of ChatGPT, of Claude 3, et cetera. One of the chilling thoughts that I have about it is that its fundamental message is that you are derivative, you are replaceable.
Chilling indeed. The message of the medium of AI is pointed in directions that are not aligned with the flourishing of humanity. In addition to implying that humans are derivative, I see other messages embedded in generative AI and applications like ChatGPT and Midjourney that are built around it. Messages like:
Faster is better.
The product matters more than the process.
A polished product indicates coherent thought.
Work that can be done by a machine is not worth doing.
Friction is bad.
If we don't keep these embedded messages in mind as we are trying to use AI, we're at risk of making some significant mistakes. This doesn't even begin to touch on the many complex ethical issues related to the physical needs of AI like the amount of energy these systems require, the many questions around the acquisition of training data, or the protection of user data. It seems like there is a new lawsuit every day, whether from the New York Times or Scarlett Johansson.
Be aware you are watching theater
But let’s return now to Exhibit A: education. While OpenAI and Google are clearly shooting for education, if they’re not thinking about McLuhan, I’m concerned. Show me all the impressive benchmarks and anecdotes you want, but if the list of embedded messages above isn’t explicitly addressed in your application of AI in an educational space, watch out.
You don’t need to be an eagle-eyed observer to note that the video of Sal Khan sitting next to his son is theater, from the placement of the OpenAI mug on down. The only reason Sal is there is to give the initial prompt to GPT-4o before he sits back and watches his son interact directly with the AI.
The future of the AI tutor is decidedly not a father sitting next to his son: it's his son sitting alone in his room talking to an artificial intelligence that sounds and acts as if it is a human one—with a voice like Scarlett Johansson, no less!
The vision behind this sort of application is the death of the communal activity of the study group or of working on homework together at the kitchen table after school. The very fact that the AI tutor is interacting using an audio interface means that it disrupts working in community altogether. The vision of the classroom of the future looks more like sound-isolating cubicles than a room with desks.
Although it may not seem like it, I do believe AI has the potential to support human flourishing in education. I believe there are pathways for redemptive uses of AI. But we won't get there without carefully considering and mitigating the risks and thinking carefully about the many unintended consequences that are downstream of the messages embedded in the medium of AI.
In addition to his most famous phrase, Marshall McLuhan has a second line that has been making the rounds recently.
There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.
The less positive reading: unless we are willing to contemplate what is happening, inevitability is exactly what we get.
I will close here with a quote from one of my favorite thinkers in this space, Canadian author, educator, and physicist Ursula Franklin. This passage is taken from her 1989 CBC Massey Lectures: The Real World of Technology.
The early phase of technology often occurs in a take-it-or-leave-it atmosphere. Users are involved and have a feeling of control that gives them the impression that they are entirely free to accept or reject a particular technology and its products. But when a technology, together with the supporting infrastructures, becomes institutionalized, users often become captive supporters of both the technology and the infrastructures. (At this point, the technology itself may stagnate, improvements may become cosmetic or marginal, and competition becomes ritualized.) In the case of the automobile, the railways are gone — the choice of taking the car or leaving it at home no longer exists.
The New York Times ran a piece last week about the library at the center of OpenAI's headquarters. I'd like to suggest that a few copies of Ms. Franklin's lectures be added.
Recommended Reading
Why A.I. won’t solve loneliness from Jessica Grose in The New York Times.
Why I worry about chatting with bots as a potential solution to loneliness is that it could be an approach that blunts the feeling just enough that it discourages or even prevents people from taking that step off the couch toward making connections with others.
A piece from one of my must-reads on Substack, reflecting on Apple’s latest ad and on the power of thinking small.

These appear to be the two paths presented to us: one in which the device paradigm colonizes more and more swaths of our experience and we are increasingly reduced to swiping along a glassy surface of endless content, or one in which we refuse the lure of limitless and meaningless consumption and reclaim focal things and practices along with the skills, satisfactions, and community they generate.
Stop worrying about what happens in some gatekeeper’s office in New York. The action right now is happening at the grass roots level. And even if you make that decision to get down and dirty in the grass right now, you’re still getting in early.
A close watching and critical commentary on the Khan GPT-4o video.

My point instead is simply that, on its own terms and as deployed in the most favorable educational environment we can possibly imagine – namely, instructing the son of one of the country’s most well-known educators on a relatively straightforward math concept – GPT-4o’s pedagogy is far from perfect.
Jerry Seinfeld’s commencement speech delivered to the Duke class of 2024 is worth a watch.
The Book Nook
I’m re-recommending a book I’ve recommended before: Understanding Media by Marshall McLuhan. If you want to dive deeper into some of the ideas I wrestled with today, this is the place to go.
The Professor Is In
My first student started last week and five more started yesterday. I’m excited to be working with them and looking forward to a fun and productive ten weeks of research together. I plan to share some more detailed updates of what we’re working on this summer soon, but you can get a quick taste on the lab webpage.
Leisure Line
Walking to lunch last week I stumbled on a friend on the sidewalk. Snapped this quick picture before moving him safely to the grass nearby.
Still Life
Our bougainvillea on the front porch is in full flower these days and a vibrant red. Enjoying the pop of color it brings to the front yard!