AI × Education: Some Thoughts on How I'm Approaching My Courses This Fall
Spend your energy building relationships and explaining how AI tools work rather than trying to prevent students from using them.
The last few months have had me scratching my head at some of the approaches that folks are using to address ChatGPT and other AI tools in their classrooms. Some schools decided to block ChatGPT on their networks (although they’ve since reversed course). Others modified their assessments to make it more onerous or difficult for students to use the tool, like moving to handwritten exams. Most egregious of all, I’ve read about teachers dumping student submissions into ChatGPT and holding the chatbot at metaphorical gunpoint, demanding: “Did you write this?!?”, leading to harmful accusations. I can only imagine this torched whatever tenuous trust existed between these educators and their students.
As I see it, there are a few main issues at the root of all of these situations:
Technological Literacy: What is a large language model (e.g., GPT-4/ChatGPT, Bard, Claude, etc.) and how does it work?
Relational Capital: What level of trust exists between teachers and their students?
Effective Assessment: What is the goal of the assignment in question? Is the assessment well aligned with the learning outcome?
As I think about my own classes this fall and how the new crop of AI tools might impact them, I’m thinking along these three tracks. My main takeaway: spend your time building relationships rather than trying to prevent students from using these tools by fiat. Here are a few thoughts on how I hope to prototype that this fall.
1. Learn how these tools work with your students.
When we return to school this fall, generative AI will be embedded everywhere. The most popular word processors already have features in beta that let you seamlessly interact with a generative AI tool to help you write. It’s hard to imagine that these features won’t be shipped to the masses by this fall.
Unfortunately, in many cases, it’s going to be unclear where and how generative AI is being used. If Google’s approach so far is any indication, the invitation will simply be to “help me write” without acknowledging what’s going on under the hood. I’m also not particularly hopeful that these magic buttons will be accompanied by information about how to use these tools effectively or warnings that will help users avoid potential pitfalls like the tendency of these tools to provide incorrect information.
Some summer homework
This summer all teachers should do some homework on what generative AI is and how it works. It’s a bit challenging because of the speed at which things are developing, but there are best practices that will translate even as the underlying technology changes. Some examples include
the resources Anna Mills is compiling and the insightful takes that others are cranking out in their newsletters and published work.

After spending some time understanding the basic parameters of the technology, hopefully we can avoid making silly mistakes like feeding student work into ChatGPT and asking it, “Did you write this?” This idea sounds clever until you know something about how ChatGPT works. Then you can see why using ChatGPT like this has no chance of giving you reliable answers.
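To make this concrete, here’s a toy sketch in Python of the core loop every large language model runs: turn scores over a vocabulary into probabilities, then sample the next token. The vocabulary and scores here are made up for illustration, but the structure is the point: nothing in this loop stores or consults a record of past outputs.

```python
import math
import random

# A toy next-token sampler. At their core, large language models do the
# same thing: given a context, produce a probability distribution over
# the next token and sample from it.

def softmax(logits):
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["the", "student", "wrote", "this", "essay"]   # made-up vocabulary
logits = [1.2, 0.4, 2.1, 0.3, 1.7]                     # made-up model scores

probs = softmax(logits)
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(next_token)

# Two consequences follow directly from this design:
# 1. Sampling is random, so the same prompt can yield different outputs;
#    there is no single canonical "ChatGPT version" of an essay.
# 2. The model keeps no log of what it has generated. Asked "did you
#    write this?", it can only generate a plausible-sounding reply,
#    not consult a record, because no record exists.
```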
Given the speed at which things are moving, we’ll need to approach this issue with humility alongside our colleagues and students. It’s unlikely that most of us will be able to teach with a great deal of authority on generative AI. But we can invite our students to try out the tools and help them reflect on their experiences. And we can partner with our colleagues to experiment and report back on what we’ve learned.
2. We have a fixed amount of time to interact with students. Let’s use it to build trust, not traps.
Even if we had a tool to perfectly detect AI use, should we use it? Perhaps it’s worth removing the hypothetical and using the current case of plagiarism detectors. These tools, unlike the ones for detecting AI-assisted content, are much more accurate since they are just looking for direct copies of pre-existing content.
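To illustrate the difference, here’s a minimal sketch of the kind of verbatim-overlap check at the heart of plagiarism detection (the example texts and the 5-gram window are my own arbitrary choices). Every match it finds is concrete, checkable evidence; an AI detector has no source text to point to and can only guess from statistical style.

```python
# Verbatim overlap: what fraction of a submission's 5-grams appear
# word-for-word in a known source?

def ngrams(text, n=5):
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=5):
    sub = ngrams(submission, n)
    if not sub:
        return 0.0
    return len(sub & ngrams(source, n)) / len(sub)

source = "the quick brown fox jumps over the lazy dog near the river bank"
copied = "as we know the quick brown fox jumps over the lazy dog today"
original = "a swift auburn fox leaps across a sleepy hound by the water"

print(overlap_score(copied, source))    # ~0.56: shared 5-grams are direct evidence
print(overlap_score(original, source))  # 0.0: no verbatim overlap, no flag
```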
I understand the motivation to use tools to prevent cheating. It’s important for fairness and for protecting the integrity of grades; doing well in a course should indicate that you have developed a certain level of competency with a topic. I also realize that my perspective on this is strongly influenced by the culture and setting at my institution: small class sizes, reasonable teaching loads, and a culture of close teacher-student interactions. My classes are small enough that I can get to know each of my students pretty well, which makes it easier to build relationships with them.
I realize that this doesn’t apply to many others. However, I still wonder whether time is better spent establishing policies and building relationships that discourage cheating rather than ones that try to prevent it.
One example of a structure that helps to discourage rather than prevent cheating is an honor code. I am glad that Harvey Mudd has an honor code and want to support it however I can. An effective honor code lets both students and faculty focus their attention on learning and opens up a degree of freedom and flexibility that would be much more challenging to implement without it. That’s not to say that honor codes are without their issues; they certainly have them, and for an honor code to be effective it must be respected by the whole community. But I do think it is an example of the type of approach that can help lay the foundation for trust and an environment in which learning can thrive.
There’s a lot more to say here, but I will end by saying that my main concern is that as we add more and more digital layers between teachers and students, we’re bound to weaken trust. Much better to spend the time motivating the course content to promote student buy-in than setting up hoops for students to jump through just to make it harder to cheat. I think this essay from last summer does an excellent job of making a case for building relationships with students.

3. Maybe AI is just exposing preexisting weaknesses in our assessments.
If ChatGPT breaks some of our assessments, is that a blessing in disguise? Oftentimes we default to a certain assessment because it is easy, not because it is closely aligned with the learning objectives we actually care about.
Again, I’m revisiting previous examples of technological disruption in education, like the impact of the calculator on math education. The adoption of the calculator didn’t make the concepts of trigonometry and calculus any less important, but it did change how they were taught and assessed. I think we may need to consider a similar re-envisioning of how we teach topics that are affected by AI.
It may be that this is just the prompt we need to more fully embrace, or at least experiment with, alternative grading strategies. It’s easy for students to be so immersed in the framework of numeric and letter grades that they forget what the point is. GPA and grades carry information, but they’re like describing a whole statistical distribution with a single summary measure (e.g., the mean or standard deviation). And while it feels like grades have been around forever, they’re a relatively recent invention.
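Here’s a quick numerical illustration of that point, with made-up scores: two students can have identical course averages while their underlying performance looks nothing alike.

```python
import statistics

consistent = [78, 80, 79, 81, 82, 80]   # steady performance throughout
uneven = [98, 55, 97, 60, 95, 75]       # mastery in places, gaps in others

print(statistics.mean(consistent), statistics.mean(uneven))    # 80 and 80
print(statistics.stdev(consistent), statistics.stdev(uneven))  # ~1.4 vs ~19.4

# Both students "earn a B", but the single number hides the difference.
# Alternative grading schemes try to preserve more of this structure.
```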
If you want to learn more about alternative grading, I highly recommend checking out Grading for Growth, a Substack newsletter by Robert Talbert and David Clark that unpacks some of the basics of alternative grading techniques and shares ideas on how to build them into your own courses.

The bottom line
AI is here and is going to influence education. Whether we like it or not, we’ll see large language models like GPT-4 continue to seep into the places we write. The question is not if we should address it, but how and when.
The Book Nook
This last week I picked up The Good Enough Job by Simone Stolzoff. I really enjoyed this one. So many great questions about what we look for in a job.

The book is structured as a series of stories from interviews that Simone conducted with a variety of folks from different careers and backgrounds. Each chapter addresses one of the things you’ve been told you’re supposed to find in a job.
What I appreciated most was the way that Simone asks questions instead of suggesting answers. Each story that he tells encourages us to embrace curiosity and think with an open mind about what we are hearing.
The Professor Is In
I’m excited to start work with my summer research students this week! They’ll be working on three different projects: leveraging scattered light to investigate plant root structures, building 3D-printed microscopes for teaching optics, and building a new laser speckle module for the openIVIS system.
Glad to have Zoe, Ellie, Jose, Fred, and Ashrit around for the summer!
Leisure Line
This last weekend we were in Pittsburgh, PA visiting my brother and sister-in-law. For breakfast on Friday morning we went to Pamela’s Diner. Sorry, there are no pictures of my breakfast of eggs, bacon, sausage, and banana walnut hotcakes—they didn’t last long enough!
Still Life
We had nice, sunny weather in Pittsburgh on Friday and took a trip up the Monongahela Incline. Beautiful views of the river and downtown Pittsburgh from the top.