Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription. I don’t write for the money, but it sure does mean a lot to have folks be willing to part with it to say thanks and to show that they get something out of it!
T-minus seven days for me. School is upon us and, perhaps for many of you, it’s already here. As of last Friday, my tenure packet is in, so now I can turn my focus more fully to the fall semester. This week I'm in renovation mode, making all the last-minute edits and upgrades to my syllabi and courses. It's that fun time when nothing in your class has gone wrong, you haven't given any bad lectures yet, and you still don't have any grading to do.
As I revise my courses this fall, I'm also in the middle of prepping a few workshops to facilitate for my faculty colleagues at Harvey Mudd. On Wednesday I'll spend a couple of hours with my friends in the Math department and on Thursday I'll do the same with my own department in Engineering.
As I shared last week, I've got AI on the brain for this fall, but not exactly in the ways you might expect. Ever since this whole thing got kicked off when ChatGPT went live to the public in late 2022, I've tried to keep a realistic perspective with a hint of optimistic "what if?" in the mix. I still believe the biggest silver lining of all of this conversation around generative AI is that it will force us to return to meaningful conversations about values and purpose.
LLMs expose existing pedagogical weak points
The power of LLMs has made a certain type of pedagogy untenable. But the good news is that it doesn't kill good pedagogy. There are certainly exceptions, but the pedagogical strategies that the availability of LLMs obliterates are those fundamentally based on distrust of students. LLMs have not killed writing, but they have killed a certain way of motivating writing. If your students didn't already see the value of writing as a process by which you think, then of course they will be curious about farming the labor out to an LLM. And if writing (or any other task, for that matter) is truly only about the product, then why not? If the means to the end are unimportant, why not outsource them?
It’s easy to get the ends and the means mixed up if we’re not paying attention. The means to the end often is the end itself. When we ask students to do any cognitive labor, that labor should be worthwhile and edifying. Just as a healthy diet full of vegetables might not always taste as appetizing as a handful of candy—Swedish fish and sour watermelon being my particular Achilles heel—the process of learning is not enjoyable for every moment along the way. But this is true of almost anything in life that's worth doing at all.
The way that generative AI has exposed the weaknesses of many of our assignments, and of the way we communicate them, is a blessing in disguise. I remain hopeful. It will create more work for us, for sure, but that was work we should have been doing all along. It's the work of building rapport with our students. It's the "trust me, and let me show you why this is the way." It's the way of being honest with our students and showing them what a flourishing life of the mind looks like.
That life is not one of floating along without expending effort. It's a life full of productive struggle, friction, and a striving towards a goal that will never be reached. It's a life where the products matter, but not nearly as much as the process. It's a life where we find joy in the work of our hands and satisfaction in the artifacts that we create.
Respect the process
Thinking, whether through writing or otherwise, is in a real sense a sacred human activity. We ought to treat it with reverence and respect. We ought to, quite literally, set it aside in our minds as a beautiful part of what it means to be human as we exercise our capacities to reflect and create new things.
If our students are looking to take our classes to shortcut the process to get the product, be that a grade or anything else, then they've missed the point. As one of my colleagues was fond of saying, "If you want the A, I'll give you the A. Is that all you want here?"
If our students are coming in looking for the easiest way to get through the class with whatever grade they want, then the game is already lost. Trying to make your assignments more AI-proof is a waste of time. What you should do instead is help your students engage with the work they are doing. Tell them why it will be valuable and why it's worth their time. Tell them that your goal in giving them feedback, and yes, grades when you give them, is to help them grow and reflect on how they might further develop themselves to create the type of life that will help them flourish along with the people around them.
I wrote last week about what to do in the classroom instead of integrating AI. I've been fighting back against the language of integration because it subtly buys into the rhetoric of inevitability. Integrating AI (or any other tool or approach for that matter) without first understanding how it works and why it will help us to achieve our goals in the classroom is backwards. Integrating in this way is just a way of keeping up with the latest developments for the sake of keeping things fresh. It does no one any favors.
Instead of integrating, we should be engaging with AI. As I will share later in this post, I do think there are some interesting use cases for generative AI and some truly new opportunities that it enables, even if AI can't think, read, hear, speak, reason, or do any of the things that we humans do—and let’s be clear, in its current state it cannot do those things, at least not in the way that we do.
In response we ought to embrace a posture of curiosity and question-asking; an adventure that we invite our students to join us in where we first articulate guiding values, a mission, and a vision for our quest and then, through that lens, decide how we might thoughtfully experiment with AI.
On their face, the most obvious (and popular) ideas for using generative AI are foolish. Nobody in their right mind thinks sending ChatGPT a screenshot of your math problem and asking it to solve it, or asking an LLM to write an essay for you, is the way to go. These are clearly applications motivated by a desire to shortcut the learning process. And yet, with even a little more thought, those surface-level foolish ideas can become intriguing learning exercises.
That foolish uses exist is no reason to conclude that there aren't ways AI can help us learn more effectively. We've just got to be clear about what the point is and get on the same page with our students.
How to engage AI
My goal in the remainder of the piece is twofold:
To articulate principles.
To propose prototypes.
First, I want to highlight a few fundamental principles that we should be aiming toward as we engage AI. Think of these as the why behind the what. If our use of AI is undermining these, then we should consider abandoning or modifying that particular use of AI.
After the principles, I'll share a few ideas that I think are productive ways that students and faculty alike can engage with AI this fall. I would welcome your thoughts on these regardless of your affiliation with either group. Students should have the chance to weigh in on their instructors' use of AI in the same way that instructors should be able to speak into how they want their students to use it. Teaching and learning are two sides of the same coin, and the more we can recognize that the work we are doing in the classroom is a shared enterprise, the better.
Without further ado, here are some principles I've been considering.
Principles
1. Build Agency
If our use of AI is undermining rather than unlocking our agency, then we've got a problem. To borrow the centaur and cyborg analogy, we need to be wary of the different modes of working with AI. Working in centaur mode, where we create strict boundaries between the work we delegate to the AI and the work that we do ourselves, may not be the most frictionless way to work with AI, but it is the most fruitful one, especially when we're trying to learn a new skill. If we're not paying attention, centaur mode becomes cyborg mode becomes minotaur mode, and now the AI is driving and we're not really sure where we're going or why. I wrote about why this concerns me a few weeks ago.

To exercise and build agency when working with AI, we should be clear about what we want to do with AI before we start working with it and be reflective about our experience of the interaction. Approaching AI this way helps to ensure that we are keeping our hands firmly on the wheel as we experiment.
2. Encourage reflection
The theme of encouraging reflection is one worth digging into a bit more. If you or your students are experimenting with AI this fall, why not create a template for reporting out your experience? In my classes, I encourage students to experiment with AI. My only requirement is that they provide a short after-action report with three pieces of information:
What you plan to do with AI
Why you think the use of AI is supporting and not undermining the goals you are trying to achieve
How it went and what you learned
Embracing this sort of template, and creating a venue for students to submit their reports, is an excellent exercise for building community around experimenting with AI. In doing this we also help to build a norm of disclosing the use of AI. Especially as we begin to see AI seep into more and more places, disclosure is a first step to building literacy and fostering communication. For more on this, see the recent piece in the Chronicle.

3. Explain what learning looks like
Do the students in your class understand how learning happens? Most of the time, this sort of metacognitive exercise is hidden from them. They may have some idea that their instructors intentionally designed their courses to leverage learnings from cognitive science about the ways that we learn, but they are likely unaware of most of those design decisions.
There's no reason to hide this from them! You don't need to spend tons of time telling them how the sausage is made, but every once in a while it's well worth your time to explain the principles behind the design of your assignments and why you've chosen a particular set of assignments or assessments to help them learn. You might even consider sharing a document with them about the educational hazards of AI.

4. Make values and learning goals explicit
In this same vein, make your values and learning goals explicit. This is another great opportunity to build a shared understanding with your students. Invite them to participate in creating classroom norms or learning goals for the course. You could create the norms using a framework like Brené Brown's BRAVING inventory, which I recently learned about.

Another thing I'm making explicit this fall is asking my students to develop their own learning goals for the course. I come into the class with the things I am aiming to teach, but every student has their own motivations for taking the class on top of those. Encourage them to write those down and share them with you so that you can incorporate those themes into your course content and follow up with them as you have the opportunity throughout the course.
5. Find ways to show your students you trust them
All of these are ways to build trust. They are ways for you as the instructor and the one who holds the most power in the classroom to share that power with your students. In doing so, you offer them an opportunity to join you in the work of learning. This move also helps students understand that they bear the ultimate responsibility for their learning and is an explicit reminder and demonstration of the ways that you want to help them embrace the challenge of learning new things.
6. Build socio-technical thinking
Whatever your take on generative AI, it's also a great onramp to more wide-ranging conversations about technology and its influence on us and our world. One framing that I like is to think about what technology says about our relationships in several different spheres. Applied to generative AI, we can think about how generative AI impacts our relationship with...
Ourselves: Technology shapes the way we think about ourselves. There are countless examples of the ways that technologies have changed the way we live in ways good and bad and often both at the same time.
Our neighbors: We've seen visions of how builders are attempting to use generative AI to replace human relationships. The loneliness crisis is real. But to think that AI will solve it is a gross misunderstanding of what it is to be human and the root of the problem.
The natural world: Generative AI and the computational horsepower needed to make it go consume a great deal of electricity and use substantial amounts of water for cooling. Every technology we use imposes on our resources. As we consider the potential for new tools to improve our lives, we should carefully consider the downstream consequences of our choices.
Truth, virtue, and our vision of the good life: We are still in the early stages, but the combination of generative AI with our existing channels for online communication is bound to create significant challenges in the years to come. We are slowly but surely moving toward the prospect of algorithmically-curated, personalized worlds that increasingly bear little resemblance to those of our neighbors.
Prototypes for Students
In this section, I want to share a few ideas about how students may use generative AI this fall. In most cases, I'm focusing on LLM interfaces like ChatGPT or Claude, but the principles here can apply to other more bespoke, targeted apps as well. In all cases, we need to be carefully asking how our engagement with these tools is impacting not only our experience of learning and building knowledge and wisdom but how it is impacting our learning community as well.
1. Homework Helper
If you weren't aware, all you need to get generative AI to give you an answer to a question is to send it a screenshot. In prepping for a few workshops this week, I tried this out. I was impressed—not necessarily with the quality or correctness of the answers—but with the syntax and form of the response.
This is perhaps the first use that most students will think of. With this in mind, we should make sure that we do everything we can to build a shared understanding and buy-in about the purpose of homework.
We should also clearly communicate the ways that our students’ work will be assessed. Will a human be reading and responding to their work? If so, what responsibility do you as a student have to communicate how you've done the work? On a fundamental level, this is about being respectful to the person who is engaged in grading your work. But beyond that, if you chose to use AI to help you solve the problem, how might you disclose and explain your use in a way that would help the grader give you targeted feedback with that in mind?
2. Interactive Encyclopedia
A second popular use in addition to homework helper is an interactive encyclopedia. Many of us will increasingly turn to LLMs rather than other people, search engines, or physical resources to explore ideas.
But once again, as we do this, we must ask ourselves questions:
What are the tradeoffs that I make in replacing a peer or instructor with an LLM? What is the relational cost incurred?
What trust can I put in LLMs knowing their propensity to generate incorrect information and often in a way that is hard to detect?
What is the value of a trusted textbook that has been vetted over many years and by many people? How might I understand and appreciate the trust the text has earned due to its careful construction?
3. Ideation Partner
This is one of my personal favorites. LLMs can be great ideation tools since they can easily generate dozens and dozens of suggestions that can be subsequently filtered. But even though I consider this a relatively good use of LLMs, there are still questions to be asked.
What is the impact of having an interaction with a machine instead of another human?
How might my interactions with an LLM further obscure blind spots in my design that another human being might detect?
Prototypes for Instructors
Lastly, here are a few ideas about how I'm engaging with AI as an instructor.
1. Pedagogy Coach
Upload your syllabus to your favorite LLM and ask it to review your course through the lens of Bloom's taxonomy, identifying how various aspects of your course support various types of learning and thinking. In reviewing the results, consider whether or not you agree and how the information you've gathered might spark new ideas. This is one place where explicitly asking the LLM to format the output in a specific way, like in a table, can be particularly effective, making the results quick to scan.
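For instance, a prompt along these lines can work; the wording here is entirely illustrative, so adapt it to your own course and goals:

```text
Here is my course syllabus: [paste syllabus].

Review the course through the lens of Bloom's taxonomy. For each major
assignment or assessment, output a row in a table with three columns:
the assignment, the Bloom's levels it primarily targets, and a
one-sentence rationale. End with any levels that seem underrepresented.
```

Asking for the underrepresented levels at the end is a small addition that tends to surface the most actionable observations first when you skim.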
2. Active Learning Activity Generator
Want to figure out how to make your class more engaging? A lot of active learning looks surprisingly like play. Once again, feed your syllabus to an LLM and ask it to suggest 3-4 active learning activities for each lecture. I've found that when I do this, I get a lot of duds, but with the seeds of a number of great ideas mixed in.
Again, the point here is not that the LLM is creating anything for you—you are the expert who knows your students and class. But the LLM can help you to think more creatively about the work you do and give you ideas that you might experiment with.
3. Feedback Summarization and Analysis
It's great to get feedback from students, but it can be time-consuming to try and filter through the feedback to identify and quantify themes. One way to get more quantitative feedback is to ask students to respond with a numeric answer on a scale of 1 to 6 (pro tip: don't give them a neutral option!). But we all know that the written responses that students give us can communicate so much more than a single number.
What if after reading through the responses of an after-class survey you dropped them into an LLM and asked it to summarize key themes and takeaways? You could even consider building a full course evaluation platform in a tool like NotebookLM by creating a single notebook for all the course feedback and then querying it across different surveys.
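Here's a minimal sketch of what that workflow might look like in Python. The survey fields, sample responses, and prompt wording are all illustrative, and you'd paste or send the assembled prompt to whatever LLM you use:

```python
# Hypothetical after-class survey responses: a numeric rating (1-6, no
# neutral midpoint) plus a free-text comment from each student.
responses = [
    {"rating": 5, "comment": "The group worksheet helped the lecture click."},
    {"rating": 2, "comment": "The pacing in the second half felt rushed."},
    {"rating": 4, "comment": "More worked examples before the problem set, please."},
]

def average_rating(responses):
    """Quick quantitative pulse: the mean of the 1-6 ratings."""
    return sum(r["rating"] for r in responses) / len(responses)

def build_summary_prompt(responses):
    """Bundle the written comments into a single summarization prompt."""
    comments = "\n".join(f"- {r['comment']}" for r in responses)
    return (
        "Here are anonymous after-class survey comments from my students.\n"
        "Summarize the key themes, note any actionable suggestions, and "
        "flag anything that seems urgent.\n\n" + comments
    )

print(f"Average rating: {average_rating(responses):.2f}")
print(build_summary_prompt(responses))
```

The numeric average gives you the quantitative pulse, while the assembled prompt hands the thematic heavy lifting to the LLM without you losing sight of the raw comments, which you've already read yourself.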
Prototyping Is Your Friend
It will come as no surprise to the regulars here that I'm a big fan of prototyping. You don't need to have it all figured out. If you think you do, you probably don't.
What a perfect way to engage with AI. Get clear on your core values as a community and then explore with curiosity and a reflective mindset. Try things out, reflect on how they went, adapt to your learnings, and then try again.
I would love to hear if anything on this list resonated with you. If so, please leave a comment below or shoot me a note. And if you decide to try any of these ideas in your class—please let me know how it goes!
Reading Recommendations
Here are a few great pieces I read this week.
“Why We Need Amistics for AI”
If there’s one piece you need to read on AI this week, this is it. In his article “Why We Need Amistics for AI” for The New Atlantis, Brian J. A. Boyd argues that we should be more intentional about how we integrate technology into our lives.
Our tech debates do not begin by deliberating about what kind of future we want and then reasoning about which paths lead to where we want to go. Instead they go backward: we let technology drive where it may, and then after the fact we develop an “ethics of” this or that, as if the technology is the main event and how we want to live is the sideshow. When we do wander to the sideshow, we hear principles like “bias,” “misinformation,” “mental health,” “privacy,” “innovation,” “justice,” “equity,” and “global competitiveness” used as if we all share an understanding of why we’re focused on them and what they even mean.
An L. M. Sacasas Interview
You all know that I love L. M. Sacasas and his work. So I was delighted to see another podcast conversation with him hit my feed this week (with a Harvey Mudd alum, no less!). This is a great conversation that has got me thinking quite deeply and is well worth a listen. If you’re not already subscribed, go over to Michael’s Substack, The Convivial Society, and sign up, and check out Daniel’s work as well.

A Trojan Horse Critique
Lastly, I read a great piece that ostensibly serves as an open letter/critique of AI pioneer Andrej Karpathy’s new venture. But if you look a bit more closely, you’ll see it’s a roadmap for how we might consider better defining the real problems we ought to be solving.

A large reason why these edtech startups do not work well for the majority of students is that the majority of students are not particularly interested in reading academic text or watching sequences of explanatory videos. If they were, we would have solved mass education many centuries ago with the printing press. Thomas Edison would have been correct in 1922 that “the motion picture is destined to revolutionize our educational system and that in a few years it will supplant largely, if not entirely, the use of textbooks.”
The Book Nook
The Last Murder At The End of the World by Stuart Turton is the next book for our murder mystery book club. I’ve got a lot of catch-up to do before we meet on 8/30, so wish me luck. I loved the previous Turton book we read, The 7 1/2 Deaths of Evelyn Hardcastle, so I’ve got high expectations for this one.
The Professor Is In
The AI workshops are in progress and I guess they had better be getting close to done since I’m running the first one tomorrow! Anyway, if you want to see what I’m presenting, feel free to take a peek over here. As is my practice, I’m putting this stuff up on the web for free for whoever wants it.
I’m particularly having fun with this page where I’m playing around with how ChatGPT fares when trying to solve math problems. I have a hunch that I’m gonna blow some socks off when I show my colleagues what this stuff can do, but we’ll have to see. Stay tuned for a playback of the reactions next week.
Leisure Line
My other Substack has been the unloved second child recently. But that doesn’t mean I haven’t been making pizza. Just that I’m slacking on posting about it!

On Saturday I got back in the saddle and cranked out a batch of pies that I was really proud of. I went with a 63% hydration, bread flour dough and a simple tomato and oregano sauce with a pinch of salt. This was some good pizza, my friends. Just trying to make Frank Pepe proud out here on the west coast, many miles from New Haven.
Still Life
Caught this little guy in the backyard last night and boy was he cute. So small!
I’m also realizing I’m officially old now that hummingbirds at the feeder outside our window are the highlight of my breakfast.
Excellent piece, Josh. I am sure I was channeling you talking about prototyping as I developed a plan to have an LLM augment an online discussion board in my history of higher education class. The experiment is designed to see if narrowly tailored responses by an LLM encourage students to write more in their discussion posts, which will give their classmates more to respond to, which will help everyone, including me, prepare for class. I kept reminding myself, this is just a first try...doesn't need to be perfect and it may not work. And, since the point is for the class to learn something about LLMs, it will succeed even if it fails to confirm the hypothesis.
I especially appreciate the trust angle on this — I feel like there’s always more excitement to take academic/pedagogical freedom more seriously when it’s clear professors trust you and are excited about seeing what you learn. My friend Sheon wrote this great profile of Manuel Blum which has a similar read on why he was so effective as a mentor: besides being incredibly encouraging, he seriously wanted to learn something new from his students! https://www.technologyreview.com/2023/10/24/1081478/manuel-blum-theoretical-computer-science-turing-award-academic-advisor/
Also, I don’t know if you’ve ever spoken with James Kreines from the CMC philosophy department, but he’s been thinking about some of these pedagogical questions and playing around with the tools for a while.