AI Didn’t Make Homework Ineffective
It has just made clear what has always been true: our engagement is the most important ingredient
Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription.
There is perhaps no voice as well known in the AI × Education space, at least on Substack, as Ethan Mollick’s. A professor at the Wharton School of the University of Pennsylvania, Mollick caught the wave of generative AI early, picking up on its potential shortly after ChatGPT’s public release in the fall of 2022. I imagine that nearly all of you reading this have at least heard of him, if not read several of his pieces.

I’m a fan of Mollick and his work. He’s insightful. Even better, he’s a prototyper. Whatever you think about his takes on AI and its impact on education, Mollick puts his money where his mouth is: he’s not just talking about AI and education, he’s trying things out and sharing reflections on how his experiments turn out.
His willingness to share his takes in public is valuable because it creates opportunities for dialogue. Earlier this year I argued that Mollick’s centaur and cyborg models of working with AI are missing a third, less attractive mode of interaction: minotaur mode. In this mode, working with AI tempts us to cede our agency to the machine in ways that are ultimately unhelpful. Today, he’s created the spark for another reflection, this time about AI’s impact on homework.
In his post last week, Mollick revisited an essay he originally published in July 2023 heralding the arrival of The Homework Apocalypse, which he describes as “the coming reality where AI could complete most traditional homework assignments, rendering them ineffective as learning tools and assessment measures. My prophecy,” he says, “has come true, and AI can now ace most tests.”
I agree that AI has caused a Homework Apocalypse. But I don’t think it’s an apocalypse in the way that Mollick describes it.
Apocalypse means revelation
Despite its ties to scenes of the end times, destruction, and the world ablaze, apocalypse means "revelation." The Homework Apocalypse is here not because AI has eliminated homework's usefulness but because it has revealed something that has always been true but is rarely forthrightly articulated: engaging with what homework asks of us is a choice.
One could argue, as Mollick does, that AI has made the traditional essay, problem set, and assigned reading ineffective, but this presupposes that we will choose to outsource the work to AI. Cheating existed long before AI came on the scene. Generative AI is new. Trying to navigate around the work of learning to claim the credential without acquiring the skills is not.
An alternate perspective on the homework apocalypse is to replace our vision of fire and brimstone with one of a fog lifting. Generative AI has had an impact on education, but what it is revealing is something we’ve known for as long as homework has existed: if we aren’t willing to engage in the hard work of learning, then there are plenty of ways to get around it. In this light, the real impact of generative AI in education is not making homework ineffective, but making unavoidably clear what has always been true.
The conversation I’m having with my students
As I talked through my syllabus on the first day of class last week, my message to my students was clear: let’s find ways to build trust with each other. Instead of trying to build more barriers to make it harder for my students to work around the intentions of my assignments, I was honest with them. I told them that it's likely that generative AI will play a role in their future as engineers—but the exact contours of that role are still a big question mark. There are certain things that LLMs can do very well. Some of the things we do in this class are included in that list. For example, as the most sophisticated autocomplete engine we've ever built, LLMs are extremely helpful coding assistants. There's no argument—a programmer with an LLM will be significantly more productive than one without.
But does the fact that an LLM can “do” a particular assignment mean that the assignment is, as Mollick argues, ineffective? Perhaps, but only if students are unwilling to engage or if the skills that the work helps to build are useless in the world of the future. I find myself doubtful about both, at least for many of the core learning objectives of the assignments in my classes.
If we want to, we can find a way around our homework. This is not a new problem. In my embedded systems class, students can easily find solutions from previous years floating around on campus, or similar examples online.
Generative AI has given us a gift. It has freed us from playing cat and mouse with our students. It no longer makes much sense to try to stay ahead of them by devising essentially unsearchable problems. And the draconian policies we would need in order to police our students’ behavior closely enough for them to prove their integrity to us are untenable.
What generative AI has revealed to us is what has always been true in education: our time is much better spent breaking down walls between us and our students rather than trying to erect new ones. Trying to defend our existing pedagogy by making it harder for our students to cheat themselves out of learning is a battle that's lost before we start to fight.
Instead of making our assignments harder to game, what if we took a different tack? What if we modeled vulnerability and honesty with our students to help them understand the reasons for the work we're asking them to do? What if we helped them to understand that they are the primary victim of a decision to sidestep the work of learning? What if instead of acting in a way that shows our students that we don’t trust them, we instead spent time building trust with them as we ask them to engage with the work in our classrooms?
I’m not arguing that we don’t need to adapt to AI. In many situations, the capabilities of AI will require us to rethink our practices and pedagogy. We'll need to spend a lot more time articulating the value of the work we are doing rather than simply articulating the work itself. We'll need to make ourselves vulnerable and realize the limits of our control. But driving in this lane will ultimately free us—allowing us to do more of the work that education is really about—helping students to grow and realize their potential in the world.
The homework apocalypse has arrived. Something is up in flames, but it’s not the homework itself. It’s the illusion that we can force students to engage with their work by controlling them. That has always been a losing battle; AI has simply made it obvious. Homework is only as effective a tool for learning or assessment as we, as teachers and students, are willing to make it. If we aren’t convinced that the effort of learning will be worth it and that the feedback from our failures will help us grow, that is the true apocalypse.
Reading Recommendations
’s post last week on LLMs and cheating was a great read.

What’s the path forward with assignments and evaluations in the age of LLMs? I, like all of my colleagues, remain unsure. I guess my answer is something like this: Doing the exercises is more like learning guitar than it is taking a standardized aptitude test. When I assign problems in a math class, I’m asking you to do them for skill acquisition. The goal of my graduate courses is to help you learn how to be a good researcher. I am giving you a path to master a subject that’s along the lines of how I did it. Part of that is being able to search for existing solutions. Part of that is being able to reason through when those solutions are correct. And part of that is being able to synthesize new problems and solutions entirely. They are all important and connected, and LLMs so far only give you tools for one of the three.
I’m finally digging into some Jacques Ellul and Ivan Illich (
 will be so proud!). As an intro to Ellul, last week I read his essay “Ideas of Technology”. I appreciated the abbreviated introduction to his concept of Technique. Ellul has a decidedly dour outlook on technology, but he is an important voice to consider regardless of how you think about technology and its impact.

The Book Nook
Coupled with his shorter essay, I started digging into Ellul’s longer work, The Technological Society. I’m only a chapter or two in, but I already feel put on notice by Ellul as an engineer and technology builder. I’ll be interested to unpack more of his critique and its application to our current technological moment.
The Professor Is In
One of my responses to AI this fall is to prototype new ways for my students not only to share the work that they are doing but also to reflect regularly on their learning.
One of these ideas is to have every student in my embedded systems class make a portfolio website where they will publish the work that they do in the class. They’ll also have a blog where they’ll be encouraged to share casual reflections on the work they are doing and to explore ideas related to the course material that may not fit within the assignments themselves.
With their permission, I’m hoping to share a few of their sites later this semester with all of you. In the meantime, if you’re interested in how you can easily build your own personal portfolio website and blog with Quarto, feel free to check out the tutorial that I developed.
You can also check out the source code or a working demo site.
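If you want a sense of what that setup involves, here is a minimal sketch of the two pieces at the heart of a Quarto website with a blog. The titles, file names, and theme below are illustrative placeholders rather than excerpts from my tutorial.

```yaml
# _quarto.yml -- top-level configuration for a Quarto website (placeholder values)
project:
  type: website            # build a multi-page website rather than a single document

website:
  title: "My Portfolio"    # placeholder site title
  navbar:
    left:
      - href: index.qmd    # landing page
        text: Home
      - href: blog.qmd     # blog listing page
        text: Blog

format:
  html:
    theme: cosmo           # any of Quarto's built-in Bootswatch themes works here
```

```yaml
---
# blog.qmd front matter: turns this page into a reverse-chronological listing
title: "Blog"
listing:
  contents: posts          # every document under posts/ becomes an entry
  sort: "date desc"        # newest posts first
  type: default
---
```

From there, `quarto preview` renders the site locally with live reload, and `quarto publish` pushes it to targets like GitHub Pages or Quarto Pub.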
HMC Nelson Lecture Series: Learning in the Age of AI
A quick reminder that the first of our 2024 Nelson Lecture Series talks at Harvey Mudd will be held one week from today, on September 10th.
I’m excited to have Sal Khan on campus and am particularly looking forward to sitting with him on stage during the fireside chat portion of his visit in the evening. I’m eager for the chance to pull on some threads from his book and TED talk, with a particular focus on how he sees tools like Khanmigo impacting higher ed and how he thinks about the tensions and tradeoffs involved in building an AI chatbot tutor. It promises to be a thought-provoking conversation!
As a reminder, the lectures are open to the public in person and will also be live-streamed online. You can register for either here. I hope that you’ll join us!
Leisure Line
Labor Day was hot in LA. September always catches you by surprise with the heat. This upcoming weekend it’s supposed to break 100 °F! So, we spent the day down at Huntington Beach with the family. The kids love the beach.
Still Life
With the end of summer, the yellow jackets are again out in full force, much to the chagrin of the missus and the children. Luckily, these yellow jacket traps work like gangbusters. I put one out around 5 pm, and here you can see how many it had captured by the next morning.