The Conviction That In The End Nothing Matters Except People
Examining generative AI for education through the lens of Ursula Franklin
Thank you for being here. Please consider supporting my work by sharing my writing with a friend or taking out a paid subscription.
“We have to teach technology not as if but because people matter. We cannot use technology as a way to teach that people do not matter.” - Ursula Franklin
This is the first of what I am planning to be a set of posts exploring the potential for technology in general, and generative AI in particular, to be good for education. While I lean optimistic about technology’s potential to improve learning when applied wisely, there are many potential missteps. This week, I’m attempting a sober-minded articulation of some of those missteps and some ways forward.
The question I am posing to myself is something like: “What exactly does good educational technology look like, and can such a thing exist?” This feels like an important question for all educators to grapple with in this moment. I’m certainly under no illusions that I’ll be able to answer this satisfactorily in a couple of blog posts, but hopefully there is something helpful for you if you’re wrestling with similar questions.
What a time to be an educator. Although I’ve only been officially teaching full-time since I started my faculty position in 2019, it’s hard for me to imagine a more disruptive moment in education since the invention of the personal computer and the widespread availability of the Internet. Looking back even further, there may be an argument to be made that the disruption of our current moment rivals that of the printing press.
The connection with these past moments of disruption is less about the specific mechanisms of disruption than the ecological impacts of new ways of accessing and repackaging information. The impact of the particular use cases of generative AI is part of the story, but in the end, they are only a small part. The much bigger conversation is about the second and third order effects.
In trying to understand these impacts, the nature of the technology itself matters. For example, generative AI is built on the stochasticity of Large Language Models and surrounds that central engine with an array of other, more deterministic software tools. Another important feature that has been widely discussed is the vastness of the information embedded in and accessible through these models. They have, for better and for worse, been trained on almost any piece of information that could be made legible to them—in essence, the entirety of the World Wide Web, both public and otherwise. Even though tools like search engines already provide ways to shortcut learning, the specific features of generative AI make those shortcuts even more frictionless, and therefore more accessible and tempting.
A big question I am wrestling with—and the one where I sense a growing divide amongst both my colleagues in education and the general public—is the question of whether or not technology can actually help humans to learn more effectively. Perhaps the most significant movement here in recent years has been the push led by Jonathan Haidt and his collaborators to get smartphones out of schools.
While smartphones are an easy boogeyman, and often rightfully targeted as a primary source of harmful impacts on our ability to concentrate and do the kind of focused work that is required for learning, it would be a mistake to paint with too broad a brush here. It’s not quite so easy to say that technology, as a whole, is harmful to learning. So much of it has to do not only with the technology itself, but with the way that it is implemented and used, both in theory and in practice.
After all, technology, in the words of Ursula Franklin, “is just the way that we do things.” Understood properly, our classrooms are already full of technology, even if there are no devices in sight—tables and chairs, pens and pencils, paper and whiteboard, syllabi, curricula, readings, books, and worksheets—all of these are technological artifacts that shape the way we learn.
But the question of our moment isn’t about whether technology in this broader sense can help students learn. The answer to that question is yes. It’s clear that tools like pens, pencils, paper, and books can help us learn. The question today is more pointedly about whether digital technologies like a generative AI-powered chatbot-as-tutor can actually be helpful. The answer there is much more nuanced.
Get the problem right first
The recent history of digital technologies for education isn’t all that encouraging. One of the most popular arguments against the potential for digital technology to help learning is the comparison of generative AI chatbot tutors to Massive Open Online Courses (MOOCs). It’s a comparison worth looking at in more detail, not because MOOCs and AI chatbots are the same from a technological perspective, but because they may both fail by misidentifying the core problem to be solved.
The thing with MOOCs is that they were based on a certain understanding of the problem that was fundamentally flawed. The problem solved by MOOCs is making high-quality lectures and course materials widely available. The theory was to get the best educators to create their courses in a format that could be easily distributed to anyone with an Internet connection.
Insofar as this was the actual problem, the solution worked. There are now tons of courses, created by some of the most renowned educators in the world, available on demand, wherever and whenever you may want to watch them. But there was a fatal flaw, one that is commonplace. It was a mistaken diagnosis of the problem.
The core challenge of education is fundamentally motivational. It’s about helping students to cultivate the curiosity and grit they need to repeatedly engage with new information and to build the skills required to understand and flexibly use that information in a variety of contexts. I use the word cultivation intentionally. Learning is more like growing a garden than it is like manufacturing a widget. It requires time, patience, regular tending, and occasional pruning. It’s about learning how to build consistency, to keep going and keep improving, and to attend to the conditions that will enable you to grow.
MOOCs did not attend to this challenge. They provided the content assuming that students would be interested and able to engage with it over sustained periods of time. But it didn’t pan out that way. It is pretty clear in the data: if you finish a MOOC, you are in the very, very small fraction of the population of people who signed up. Hardly anyone (~3% of participants) ever finishes them. I, for one, am one of the 97%. You probably are too and have most likely started a dozen of them, watched a few lectures, maybe even done a few homework problems, and then inevitably fizzled out and moved on to something else.
We’re on track to make a similar misdiagnosis as we rush to use generative AI, the latest shiny new thing in education.
The survey says....
While the rapidly shifting landscape of generative AI tools is making it particularly hard to finish a study so that the technology used is still even remotely current, there are some examples beginning to appear in the literature. Just this last weekend, I stumbled across a 2025 paper published in PNAS titled “Generative AI Can Harm Learning” from a group of researchers at the University of Pennsylvania.
In this study, the authors performed a randomized controlled trial with roughly one thousand high school math students in Turkey, separating the students into three groups: a control group that received no intervention, a group provided with an unmodified version of GPT-4 (“GPT Base”), and a third group provided with a version of GPT-4 specially tuned to walk students through solutions step by step without giving the answer directly (“GPT Tutor”). The headline making the rounds on X—with a figure that doesn’t seem to actually be in the paper itself, I might add—is that while the generative AI tutor helped students perform better in the practice sessions, both groups that used generative AI performed worse than the control group on the actual exams, where they didn’t have access to the AI.
It’s easy to see why the post has blown up. It seems like this is yet another example of the best intentions of an educational technology gone wrong. Not only that, but there is evidence here that the intervention is actively harmful, causing students to develop a false sense of their own learning, as evidenced both by the huge discrepancy between their performance on the practice problems compared to the actual exam questions and also by some qualitative responses that the authors discuss in their paper.
However, the reality here is a bit more complex than it might appear at first blush. What this study demonstrates is not that generative AI cannot be used to enhance learning, but that when used in this way, it did not enhance learning. The authors found that many users were shortcutting their own learning by simply prompting the AI to give them the answer.
In a more recent paper, “Effective Personalized AI Tutors via LLM-Guided Reinforcement Learning,” published just earlier this month, a subset of the same authors from UPenn performed a different randomized controlled trial with a group of high school students in Taiwan working on learning how to code in Python.
This time, they found that the intervention improved performance by 0.15 standard deviations. The difference is that they focused not on improving the generative AI chatbot tutor itself, but used genAI to personalize the sequence of practice problems to the individual student, based on context from that individual student’s responses.
Here is a relevant paragraph from the paper:
A key novelty of our system is how it selects the sequence of problems for students to solve. A solution requires two components: (i) a method for estimating the student’s current knowledge state—which summarizes their mastery of the skills in the current module—based on observations of their performance (e.g., the rate of correct solutions or the time taken to solve problems), and (ii) a policy for selecting the next optimal problem to give the student, based on the estimated knowledge state.
In other words, they adjusted their strategy here from trying to improve the chatbot itself to redesigning the role that generative AI played in the learning experience altogether. Instead of relying on the chatbot to do the instruction itself in a free-flowing dialogue (which didn’t work in the first study), they used generative AI to analyze student responses and appropriately sequence the next set of questions for the student to engage with.
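To make the two-component design from the quoted paragraph concrete, here is a minimal sketch of what a knowledge-state estimate plus a problem-selection policy could look like. All of the names, the correctness-rate estimator, and the closest-difficulty policy are my own illustrative assumptions, not the actual system from the paper (which uses LLM-guided reinforcement learning).

```python
from dataclasses import dataclass

@dataclass
class KnowledgeState:
    """Hypothetical knowledge state: a running tally of a student's attempts."""
    attempts: int = 0
    correct: int = 0

    @property
    def mastery(self) -> float:
        # Estimated mastery: the rate of correct solutions so far
        # (use a neutral 0.5 prior before any observations).
        return self.correct / self.attempts if self.attempts else 0.5

def update_state(state: KnowledgeState, was_correct: bool) -> None:
    """Component (i): update the knowledge state from an observed attempt."""
    state.attempts += 1
    state.correct += int(was_correct)

def select_next_problem(state: KnowledgeState, problems: list[dict]) -> dict:
    """Component (ii): a toy policy — pick the problem whose difficulty
    (on a 0-1 scale) is closest to the estimated mastery, so practice
    stays challenging but doable."""
    return min(problems, key=lambda p: abs(p["difficulty"] - state.mastery))
```

In a real system, the policy would be learned rather than hand-coded, but the loop is the same: observe an attempt, update the state, select the next problem.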
The wisdom of Dr. Franklin continues to echo through history
To begin to try and wrap our heads around what is going on here, I’d like to turn once again to the words of one of my favorite thinkers and writers on technology, Ursula Franklin. In a talk that she gave in 1996 titled “Using Technology as if People Matter,” Franklin diagnoses much of what I think is going on here. It’s so good and relevant that I strongly considered scrapping this whole post and just telling you to go read her talk instead. You can find it as one of the chapters in the collection Ursula Franklin Speaks.
Franklin begins by echoing a theme that permeates much of her writing by broadening our understanding of what technology is. Technology, she says, is simply “the way we do things.” While we often think of technology as a set of devices or machinery, technology is also about organization, the division of labor, and presupposed knowledge. It is as much about the things themselves as the system that surrounds them.
Building on this, she divides technology into two different categories, what she calls “work-related” and “control-related” technologies.
Many technologies, especially early technologies, are related to what I call “work-related technologies”: ways of doing things so they become easier for those who do the work. Shovels, buckets, pumps, steam engines – these all have very profound work-related components. However, there are also what I call “control-related technologies,” which do not really make the work easier, although they are sometimes advertised as doing so; instead, they make the control of the work easier...I’ll leave it to you to look at your own administration to see whether the technologies into which you feed – including computer records, list-making, and recordkeeping – are really there to improve education or to enhance control.
When we think about this through the lens of education, the challenge is that “the work” of education is often not well served either by work- or control-related technologies. While work-related technologies are very helpful in allowing us to efficiently do a certain task, they are often actively opposed to helping us learn the fundamental skills and knowledge needed to do the task itself. Work-related technology is useful, but only after the learning has happened.
The control-related technologies should also seem familiar to us in education. The list of examples that Franklin gives—computer records, lists, records—rhymes with many of the tools we use in our classrooms—curriculum, syllabi, rubrics, grades, and the like. These are technologies that are not exactly about making the work itself easier, but enabling us to better control the work, and in classroom settings, to control the behavior and compliance of our students with a set regimen of activities that we believe to be valuable.
The warning to us as we think about generative AI is that both work- and control-related technologies are counterproductive in many cases to the things we want our students to be doing. We want them to do the work, not to outsource it, and we want them to build the desire—especially at the collegiate level—to do the work for its own sake without needing to be compelled with a set of control-related technologies.
The two studies I highlighted before illustrate this point well. In the first one, the genAI intervention allowed students to outsource their learning, using the chatbot to do the work for them. They performed better in the practice sessions, but those gains evaporated when they took the real test.
In comparison, the second study showed how generative AI can be used in the background to coordinate a sequence of problems. This helped to scaffold the work, not to enable students to shortcut it. You still had to do the problems; the main variable was just what order you were doing them in. The second study shows what technology-as-scaffold looks like institutionally, but what does it look like at the level of individual student formation?
Connecting the dots to generative AI
As we think about technology in our classrooms, we should think primarily not about how our students should directly use a given technology, but how that technology can be used to coordinate and facilitate the set of things that they are doing. We want to prevent ways for students to shortcut their learning (the first study) while continuing to design our curricula and learning experiences to optimize the way that students encounter and practice materials as they grow.
To do this, we need to think about our technology differently. What we want our students to do in our classes is not primarily to build a set of skills, but to cultivate a certain set of character traits that enables them to push themselves to grow. The skills matter, but they pale in comparison to the deeper care and motivation that we should be helping them to cultivate.
As Franklin writes near the end of her talk,
Those of us who teach with the help of sophisticated devices need to give time to those things that, in spite of the devices, remain the human task: the assessment, the estimate, the general overview. Precisely because the technology does, in fact, take on more and more of the intellectual human tasks, it really is essential to set aside time and exercises to do what the devices do. The goal is not to substitute for devices but to keep those parts of the task functional so one can do the back-of-the-envelope calculations, can express what needs to be said even if the spellcheck or the grammar check fails, and can read somebody else’s handwriting....We cannot allow ourselves, because of the richness of devices, to leave students’ hands and minds so undeveloped that they become dependent on devices to an extent that their own ingenuity and resourcefulness is undermined to the point of being dysfunctional.
Let’s not allow our students’ hands and minds to become undeveloped such that their own ingenuity and resourcefulness become dysfunctional.
What then?
Given all this, the question remains: what exactly should we do instead? On the one hand, the second study gives an indication that generative AI can be helpful if applied thoughtfully. But I think the first study also illustrates an important point by counterexample. Even if we thoughtfully design a particular system, the biggest determining factor is the attitude that students bring to it. This is where we should focus our efforts.
College as a monastery
In recent months, I’ve been drawn to the idea of college as a monastery. Monasteries are places that provide an environment to live a certain kind of life. They are focused not only on a specific set of practices but on creating the environment required to allow their inhabitants to cultivate those practices.
As we think about the role that technology is playing in education, how might we learn from monasteries to create an educational environment that will allow us to thrive in a world where we are pressed on every side by technology? If we want our students to thrive in a world where they are using technology, we need to help them to become formed so that they can use it wisely and take care that they are not deformed by it.
As I’ve been pondering this, I’ve been considering a set of practices, what the monastics would call a Rule of Life. In a traditional monastic environment, the Rule of Life would be focused on practices to help you draw nearer to God. Practices as part of the Rule might include rhythms of prayer and solitude, fasting, Sabbath, sleep, and regular exercise. These practices and rhythms are designed to help you remain centered, focused, and aligned with your deepest values.
Perhaps instead of focusing so single-mindedly on delivering information to our students, we should help them build their own Rules of Life. Here are a few suggestions of elements to consider:
Structured Rhythms: Help students learn to reflect on where they are spending their time, building sustainable schedules with calendars and time blocking that honor realistic pacing, avoid cramming, and maintain balance.
Input Restriction: Consider building a practice of digital device fasting, intentionally limiting access to devices and the Internet, and using these limits to embrace solitude and contemplation as a way to cultivate attention.
Contemplative Reading: Schedule times to engage slowly with ideas, creating the space to annotate, reflect, and journal about things you are engaging with.
Learning in Community: Design curricular and co-curricular programming around dialogue and peer learning, avoiding the tendency to see class time as a means for information download.
Regular Examination: Schedule times for regular reflection on whether these practices are actually serving your formation. How are you being shaped and formed? How do you see your attention to the world changing and deepening?
The Rule of Life is itself a technology. It is a way of doing things. In our modern world, where we are so often overwhelmed by an always-everywhere way of life, perhaps this set of practices might help us not only to live more fully in the moment and make the most of our education, but also to find our way toward a flourishing life in all its dimensions.
I’ll close here with some last words from Ursula Franklin.
In conclusion, if you ask me, “How should we think about technology?” I think we should think about it carefully, we should think about it analytically, and we should think about it with the conviction that in the end nothing matters except people.
Got a thought? Leave a comment below.
Reading Recommendations
An essay from my HMC colleague and friend Jon Jacobsen articulating an idea that he calls “acoustic mathematics.” Good lessons for all of us.
I have recently found myself returning to moments like this, searching for language to describe a way of doing mathematics that feels increasingly out of step with the current moment. I am tempted to call it acoustic mathematics. The term is intended to distinguish this mode of practice from the increasingly electric nature of modern mathematical tools. Much as an acoustic instrument relies on natural resonance rather than electronic amplification, acoustic mathematics names a form of mathematical activity where intuition is formed through direct personal engagement, before it is extended or amplified by external systems. Often this work happens without electricity at all, with nothing more than pencil, paper, and time, but the term is meant to be more expansive than that. What matters are the unhurried moments of mindful meandering and rummaging about, often without success, through which mathematical intuition takes shape. Acoustic mathematics names a genre of mathematical activity in which sense-making remains, first and foremost, the work of the human mathematician.
Crazy days in the internet security world these days. Almost every day, we’re hearing about a new massive supply chain breach. Last week it was LiteLLM, this week Axios. All of it is very not good. This post is a good breakdown of how it is happening.
The Book Nook
It feels fitting to recommend Ursula Franklin Speaks as my suggested read this week. It really is a wonderful book for any of you interested in these things, and I find Dr. Franklin’s way of engaging with these ideas to be very thought-provoking.
If you’re looking for more on Ursula, I wrote a post last year about my appreciation for her and her work.
Ursula Speaks
Thank you for being here. As always, these essays are free and publicly available without a paywall. If my writing has been valuable to you, please consider sharing it with a friend or supporting me as a patron with a paid subscription. Your support helps keep my coffee cup full.
The Professor Is In
Hard to believe we are less than two months away from our AI conference at Harvey Mudd! Registration is open now, free for 7Cs community members and $250 for external registrants. We also have an open call for poster and discussion session abstracts.
I’m really thrilled to have an excellent set of speakers to spark the conversation for us, including Alexander Hartemink, Dylan Baker, Marc Watkins, and Isabelle Hau. If you are interested, I hope you’ll consider joining us and sharing with others in your sphere who may find it valuable.
Leisure Line
Had a fun time at the Empire Strykers indoor soccer game on Sunday at the Toyota Arena in Ontario, CA. The Strykers won 5-4 in sudden-death overtime!
Still Life
A shot of the NYC skyline looking south from Little Island by Pier 57 on a chilly Manhattan evening last Tuesday night. Can you find the Statue of Liberty?