We're The Lab Rats Now
The only sustainable approach to AI and education is to recognize that our technological world is an operant conditioning chamber
As 2024 draws to a close, I’ve been thinking again about what I’m trying to do with my writing here. In many ways, I use this space as a place to work out what I think and invite you all along for the ride. Much of that thinking has been focused on AI and how we should understand its influence on our vision of human flourishing and, perhaps more practically, on the work we are doing as learners and teachers. Today’s musing is a variation on a familiar theme: to clearly understand what is going on with AI, we need to widen our aperture to see AI in the broader social and historical context of technology. As we enter the new year, we need strategies to stay grounded and sane in the midst of an ever-evolving landscape. I hope this piece can help you do that.
As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription. If the standard $50/year subscription is more than you can afford, I’ve got a special offer set up now through the middle of January for $35/year.
It’s hard to overstate the degree to which the work of Harvard psychologist B. F. Skinner has shaped our current reality. Skinner is best known as a pioneer of modern behaviorism, a field of study geared toward understanding why humans act the way they do. His work has proven very useful (and lucrative) for our modern technological infrastructure, significantly influencing user interface and experience design. But more than any specific research finding, Skinner’s most significant influence on our modern world is the paradigm of operant conditioning: using rewards and punishments to shape our voluntary behavior. It’s everywhere.
Operant conditioning was a central part of Skinner’s work, which involved manipulating and measuring the behavior of lab animals, mainly rats and pigeons. To study these animals, Skinner used what is known as an operant conditioning chamber or, as it became colloquially known, a Skinner box.
The chamber had a variety of sensors and actuators for interacting with the animal inside it, providing ways to dispense rewards and mete out punishment. When used with rodents, for example, the Skinner box was equipped with buttons and a lever for inputs, lights, a loudspeaker, a food dispenser for rewards, and an electrified grid on the floor to deliver small shocks as punishment. The box was used to study behavior change in response to certain stimuli. Of course, it was also used to encourage certain patterns of behavior by doling out positive or negative consequences to reinforce a particular set of actions.
AI, Education, and Indoctrination
So…what does the Skinner box have to do with us? Quite a lot, I think, especially when it comes to AI and education. This is most evident in two areas: the values toward which education is pointed and the way it is pursued. In other words, the ends and the means. First, let’s talk about the ends.
It’s no secret that education is designed to foster certain kinds of behavior. At their core, educational institutions are all built around a set of values. We want students to think critically, to embrace curiosity, and to question the status quo. Every institution, classroom, and assignment is designed with a particular goal in mind, motivated by either explicit or implicit values.
As we think about the work we do in our classrooms, we must be clear about the distinction between education and indoctrination. While both are interested in cultivating certain behaviors, education protects and respects human agency. Indoctrination, on the other hand, is designed to instill a certain set of beliefs without concern for the individual humans involved.
When it comes to AI and education, and personalized education in general, we’ve got to keep this distinction between education and indoctrination squarely in focus. Any time we delegate some part of the educational process to a machine, we put that respect for human agency at risk.
As we think about AI, we must ask: to what degree does AI help us achieve our educational goals? And not just the learning goals of a specific assignment, but the overarching goals of our programs: the character traits and virtues we want to instill in our students. Are the major selling points of enhanced “efficiency” and “productivity” offered by AI assistance really the values we want to cultivate? I think not.
Critical questions like this help us keep a steady course through troubled waters. As we try to stay aware of new developments in AI, asking such questions can help us filter the glut of information that threatens to overwhelm us. And if it doesn’t help, that may be a wake-up call that we don’t understand our values as clearly as we ought.
Using the means to shape the ends
If it were as simple as understanding our values, that would be one thing. But the truth is that our values are always changing, even if that change is just a deeper and better understanding of what actually matters to us.
In light of this, we must also be mindful of how external forces may influence and manipulate our values. Every new AI tool is looking for a user base, so it should come as no surprise that many of these tools are marketed to suggest that you must incorporate AI into your classes in order to stay relevant.
Let me be clear: you do not need to incorporate AI into your classes to stay relevant. Choosing a certain tool is never the right place to start. Instead, we must first establish what it is we care about. Then, and only then, can we make wise choices about how a particular tool might help us achieve that goal.
Here’s my advice to educators in 2025 about AI: re-examine, re-evaluate, and re-establish your values and learning goals. Then, and only then, get curious about the new tools being developed. Once you have clarity on your values and goals, you’ll have a foundation from which to make wise choices, whether you integrate or reject AI tools. With that set of criteria, you’ll be much better equipped to make the right choice for you and your students. More importantly, you’ll have the thoughtful analysis you need to communicate those decisions and their rationale to your students and colleagues.
The Skinner Box we're all living in
As we try to wisely navigate the path ahead, the image of a Skinner box can help us. We may not be living in a literal box like a lab rat, but there are so many ways in which we are unwitting subjects in experiments we never opted into.
The AI playbook is straight out of Skinner. Dole out some treats or punishments to elicit a certain behavior, and once that behavior is sufficiently ingrained, you have the power to shape and mold it.
Skinner’s insights went beyond those of Pavlov and his salivating dog. Where Pavlov focused on conditioning involuntary responses (e.g., salivating in response to a ringing bell), Skinner designed tools to shape and form voluntary behaviors. You tell me: which sounds more lucrative if you are a technology company looking to maximize engagement?
If you look around, you’ll start to see this pattern everywhere. Every so often a new feature or product arrives, like OpenAI’s recent o3 “reasoning” model, unleashing a new wave of write-ups and reviews about how the world has once again been forever changed. Once the hype dies below a certain threshold, the big players release a new model, product, or application to kick up the dust once more. Rinse and repeat.
This cycle is exhausting. On the one hand, it’s obvious that the innovation in AI over the past few years is significant. Much ink has been spilled about how the transformer architecture was the linchpin that unlocked the explosion of new tools and the mainstreaming of generative AI, but even as pre-training seems to be reaching its limits, the attention behind AI development is turning to other ideas.
The big tech playbook has always combined technological innovation with a carefully curated and measured packaging and release of that innovation to the public. We need look no further than the most successful physical product of our lifetimes, the iPhone. Apple has always run a masterclass in marketing, carefully designing each annual iPhone release to be just enough better than last year’s model to convince a large enough fraction of the user base to upgrade.
But just as with the year-over-year advances in our iPhones, we risk focusing on the wrong thing if we are not careful. What we are sold with any new product is a vision of what our life could be. Just as the newest iPhone may help you do some things you already do even better, AI might help you do some of your work better as well. But as with any innovation, there is a price to pay. There is no free lunch. For every new “now you can...” there’s a “you can no longer...” and a “now you must...”.
In 2025, let's pay attention. Yes, with curiosity and criticism about the new developments in AI. But more importantly, to the humans around us—our students and colleagues who make the work worth doing in the first place. It's their agency and humanity that we must respect as we seek to help them learn how to pursue truth, knowledge, wisdom, and a flourishing life.
Reading Recommendations
If you’re interested in learning more about B. F. Skinner and his influence on personalized education, there’s no better resource than Audrey Watters’s Teaching Machines. I would highly recommend you add Audrey and her weekly newsletter Second Breakfast to your feed on AI. Her critical takes on educational technology are always worth grappling with. Marc and his newsletter are another valuable source of thoughtful commentary on the AI and education conversation. Marc’s recent year-end summary post is a good introduction to his work over this past year.

There are not many Substacks whose every post I manage to read, but one of that small number recently published a post riffing on a line from a Lewis Mumford essay:

Mumford’s claim is a provocation for us to consider what might be essential to a life that is full and whole, one in which we might find meaning, purpose, satisfaction, and an experience of personal integrity. This form of life cannot be delegated because by its very nature it requires our whole-person involvement. And by delegation, I take Mumford to mean the outsourcing of such involvement to a technological device or system, or, alternatively, the embrace of technologically mediated distraction and escapism in the place of such involvement.
This newsletter post on the power of moving fast is genius. My favorite paragraph is this one about the power of a deadline to help you outrun the inner censor:

“There’s one way of defeating the inner censor that’s reliable,” [Sasha] explains. “Just outrun it. If you’re writing 500 words in an hour, there’s just no time for paralysing cogitation.” It doesn’t only apply to writing, though. Virtually any idea, for any kind of useful or worthwhile action to take, proves easier to accomplish, and goes better, when I can let myself take the first steps right there and then.
The Book Nook
It’s a classic, but I’d never read it until now: I’ve been reading and enjoying Jim Collins’s Good to Great. There are lots of good things in here, but the Hedgehog Concept is worth everyone’s attention.
The Professor Is In
I’ve been enjoying winter break and some time away from campus. Classes at Mudd don’t resume until the Tuesday after MLK Jr. Day, January 21, which means I still have a few weeks to get things shipshape before we’re off.
This upcoming semester I’ll be teaching two courses that I’ve taught before: our sophomore-level experimental engineering course (E80) and digital design and computer architecture (E85). I really enjoy both of these classes, and I’m excited that I’ll be team-teaching each of them with colleagues.
Leisure Line
When I’m not on campus running experiments or building things, I get a little restless. Many times, I turn my creative energies toward some kitchen science. Here’s one of my favorites: homemade cinnamon rolls. They’re way easier than you might think, and you can make them the night before. The morning of, just pull them out of the fridge for an hour and pop them in the oven for warm, gooey goodness.
It is not too late to make some of these for you and yours this New Year’s morning!
Still Life
One of the treasures of living where we do is the number of wonderful botanical gardens within a fifteen-minute drive. One of our favorites is Descanso Gardens in La Cañada Flintridge. The photo above is of a rose I spotted on a recent trip.