Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription.
As we grapple with generative AI and how it will shape our world, we're understandably grasping for metaphors and analogies to help us.
Ethan Mollick, who writes one of my favorite Substacks on AI, has used the metaphors of centaur and cyborg to describe two ways we can work with AI. We work as a centaur when the work done by the machine and the human is easily separable, and as a cyborg when the contributions of human and machine become too interdependent to clearly untangle.

I imagine that many of you, like me, find these metaphors helpful as we consider how to integrate AI into our work. However, I think there's a more sinister reality that is not captured by either the centaur or cyborg modes of working with AI. I'm going to call it minotaur mode. The difference between being a centaur or a cyborg and being a minotaur comes down to who maintains the agency. Unfortunately, if we're not careful, I think it is all too easy to slip into a way of working with AI where we use it to overextend our expertise and sacrifice our agency to the machine. In this mode, we work with AI in a way that gives the appearance of expertise without its substance.
As we think about the future of expertise, we need to consider how we can use AI like a centaur and a cyborg. But we also need to be aware of how we may end up relinquishing too much of our agency to the machine in our pursuit of increased efficiency and productivity.
What exactly does AI democratize?
One of the core arguments for the potential upside of generative AI is the democratization of expertise. This is at the heart of David Autor's excellent essay that was recently published in Noema Magazine. The democratization of expertise sounds great, but there are some important caveats. Expertise is developed, not given. It is cultivated, not minted.

If we look at what is actually happening with the way generative AI is being used, it seems to me that it is democratizing the appearance of expertise, not expertise itself. What it actually democratizes is more like raw power than expertise. Power is the ability to act, whereas expertise in its truest form is power combined with wisdom. Which one we're shooting for matters.
AI enables you to (over)extend your expertise
AI enables you to extend your expertise, but only to the degree that the extension is within the range of the foundational capabilities you already possess. Sure, ChatGPT can help you build an app even if you don't know the first thing about programming, but to the degree that you're running code without understanding it, you're wholly at the mercy of the machine. Using AI in this way might look similar to extending one's expertise, but it is much more dangerous.
Mollick's centaur and cyborg metaphors can help us here. As a centaur, the division between the AI and the human is clear. In these situations, we are carefully choosing the ways to use AI. This allows us to consider the particular strengths and weaknesses of the tools and use them accordingly. On the other hand, once we start to more liberally include AI in our work without thinking through why we are using a particular AI tool and whether it is well-suited for the task, we shift into cyborg mode. In this mode, we lose the clarity between our contributions and those of the machine.
Both of these modes suggest at least some thoughtful engagement with AI as a technology, considering why it might provide value in a particular use case. However, I suspect that a third mode will be much more prevalent as these tools become mainstream: minotaur mode. In contrast to the centaur which maintains the head of the human with the body of a horse, the minotaur retains the body of a human with the head of a bull. In minotaur mode, we'll outsource our thinking and agency to the machine. In working with AI as a minotaur we'll retain the appearance of holding the reins, but in reality, the AI will be calling the shots.
This way of working with AI is much more concerning, not just for those who embrace it, but for all of us. We're already seeing the internet flooded with AI-generated garbage. Right now, most of it is generated at the direction of individual humans. Imagine how much worse it will get once we can spin up swarms of AI agents to accelerate the process.
As an educator, I'm concerned that students and emerging experts are particularly prone to being co-opted by minotaur mode. Figuring out how to work well with AI is important for all of us, but it's especially important for those who are still in the early stages of developing their expertise. For them to extend their expertise with AI they've got to have some in the first place. Minotaur mode threatens to undermine the formation of expertise in significant ways.
The limits of YouTube for white-collar professionals
The view of working with our AI co-intelligences as a centaur or cyborg is attractive because in both situations we retain at least some agency over our AI helper. It's clear why these two modes exist: as we grapple with what generative AI can do, we are comparing it to existing domain-specific experts and evaluating its ability to perform tasks we already know how to do or at least know what it looks like when done well.
But what happens when the AI is used not at the hands of an expert, but by a novice?
As I re-read Autor's essay this past week, one analogy he made in passing stuck with me. He wondered if generative AI might be the equivalent of "YouTube for white-collar professionals." I'm sure I'm not alone in turning to YouTube as one of my first stops on the web to pick up information on how to do something a bit outside what I already know how to do. Want to clean out your washing machine, fix something on your car, or set up a web server? With a few clicks and a few minutes, you can probably find someone kind enough to have recorded a tutorial to guide you.
But the sort of problem-solving skills we learn on YouTube are not the same as the expertise we should be building in our K-12 and college classrooms. YouTube U skills are pretty fragile. If you're working with the same system as in the video, you're probably fine. Miss a detail and it's easy enough to rewind for another look. But the further the problem in front of you is from the solution in the video, the more trouble you'll be in.
This is the minotaur effect in operation.
What's happening when we learn things from YouTube is that we are building depth without breadth. If you look at experts in almost any field, from art and science to sports and comedy, consistent and repeated success is built on a solid foundation. Learning by example without understanding the foundational concepts is inherently unstable. It's like walking a tightrope: as soon as the example you saw in the video diverges from reality, you lose your balance, with no way to get back up.
One of the places I am most concerned about minotaur mode is in teaching and learning writing. You can generate syntactically flawless writing with AI. If you're willing to iteratively poke the LLM to revise, you might even get somewhere half decent. But being a good writer is not about producing syntactically correct text. Excellent writing is downstream of excellent thinking. Good thinking is not a sufficient condition for excellent writing, but it is definitely a necessary one.
One of the reasons, perhaps the main one, that we spend so much time teaching writing is that it is one of the best ways to teach thinking. What's powerful about writing thoughts out, as opposed to simply keeping them in your head, is that it allows you to critique, analyze, share, and edit them. This is one of the most important lessons I've learned over the last few years of writing here each week.
To say that AI will replace writing is to have a very narrow definition of what writing is. It may replace the generation of text, but it cannot replace the core of writing: the thinking of the human author. For those of us who have already acquired some expertise as writers, this is clear.
What I worry about as generative AI continues to expand and seep into the lives of our students is that the promise of YouTube for white-collar professionals will come true. The shortcut to expertise is the temptation of the minotaur: the body of a human with the head of a bull. Those of us with enough experience and existing expertise can see the tradeoff, but those without it are just as likely to take the bait.
Use AI, but keep your hands on the wheel
As we think about how to engage with AI, we've got to be mindful of what we want out of it. In my experience, LLMs are legitimately useful, but they're most useful in situations where they enable me to do something I already know how to do, just faster.
For example, last week I wanted to make peer review assignments for a report due in the class I'm teaching this semester. The peer review would be performed by individuals, but the report was a team assignment. This made it a bit trickier to create the assignments as I needed to make sure that teammates didn't get assigned to review each other. In the past, I probably would have either just done this manually or tried to write up a script to do it for me, but now my first move for tasks like this is to throw it to GPT-4.
ChatGPT wrote a very nice little Python script for me and after some iterative prompting to iron out a few edge cases, it ran perfectly. It randomized the assignments and checked that no one got paired with someone else on their team.
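For concreteness, here is a minimal sketch of the kind of script this describes. The roster, names, function name, and retry-based strategy are hypothetical illustrations, not the code GPT-4 actually produced:

```python
import random

# Hypothetical roster: team name -> list of members (all names made up)
teams = {
    "Team A": ["Alice", "Bob"],
    "Team B": ["Carol", "Dave"],
    "Team C": ["Erin", "Frank"],
}

def assign_peer_reviews(teams, seed=None):
    """Give every student one report to review, never their own team's,
    with each report reviewed an equal number of times."""
    rng = random.Random(seed)
    students = [(name, team) for team, members in teams.items() for name in members]
    reports = list(teams)
    # Shuffle the students and deal reports round-robin;
    # retry if anyone drew their own team's report.
    for _ in range(10_000):
        rng.shuffle(students)
        assignment = {
            name: reports[i % len(reports)]
            for i, (name, _) in enumerate(students)
        }
        if all(name not in teams[report] for name, report in assignment.items()):
            return assignment
    raise RuntimeError("No valid assignment found; check team sizes")
```

Dealing reports round-robin over a shuffled list keeps the review load even across teams, and retrying until no one draws their own team is crude but easy to verify after the fact, which matches the spirit of double-checking the output rather than trusting it blindly.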
But in this example, I was firmly working in centaur mode. After I got the assignments from the script, I pasted them into a spreadsheet where I built some simple error checking to confirm that the assignments did indeed follow the guidelines that I set. They did and so this step proved to be unnecessary, at least in this instance.
The temptation is that as we use these tools more, we will trust them more. This is the first move toward minotaur mode. I've seen folks suggest treating LLMs as interns and always checking their work. But as you keep working with an intern and giving them feedback, you begin to trust them as they learn the ropes and figure things out. LLMs are like an intern with a persistent touch of randomness. The same tendency to make things up that makes them useful for brainstorming will fundamentally limit how much trust we can place in them.
Don't become a minotaur
Working with AI as a centaur is palatable. The human remains in control as the head, thoughtfully leveraging AI for more power. Cyborg mode may also be appropriate in some situations as we share more equal footing with AI as a co-intelligence, integrating it more deeply into our work.
But as generative AI systems continue to improve we must be mindful of the temptation to move into minotaur mode. I hope that we can be thoughtful about these choices for the sake of the budding experts in our classrooms.
Got a curious or generative thought to share? I’d love to hear it. Let’s chat in the comments.
Reading Recommendations
Revisited this post from
on the concept of “scenius” this past week. An excellent reminder of the power of community.

“Individuals immersed in a productive scenius will blossom and produce their best work. When buoyed by scenius, you act like genius. Your like-minded peers, and the entire environment inspire you.”
This latest memo from Howard Marks on “The Indispensability of Risk” is quite good and feels like it’s got some
mojo as well.

“You have to put it all out there. You have to take a shot. Not every effort will be rewarded with high returns, but hopefully enough will do so to produce success over the long term. That success will ultimately be a function of the ratio of winners to losers, and of the magnitude of the losses relative to the gains. But refusal to take risk in this process is unlikely to get you where you want to go.”
Loving this most recent post from
.

“[S]elf-promotion only paid off when the audience was distracted enough to remember the information but forget the source. Otherwise, they saw right through it. If you were that great, you wouldn’t need to boast about your greatness.”
The Book Nook
After one of you recommended it in a comment on one of my recent posts, I picked up Klara and the Sun. Not very far in (cue excuse about the end-of-semester crunch), but what I’ve read so far has intrigued me.
The Professor Is In
This week is the week between the end of classes and final exams at Harvey Mudd which means that we have a few special days. Yesterday we had senior thesis presentations, today is Projects Day for our Clinic program, and tomorrow will be full of class presentations. I’m looking forward to a great week of celebrating all the hard work that our students have done throughout the year!
Also looking forward to a day of workshops on Thursday centered around generative AI. Glad to be part of putting the program together with a group of other faculty who have been part of an AI Fellows program sponsored by the Claremont Colleges Center for Teaching and Learning.
Leisure Line
Went to the LA Zoo over the weekend and got to see Marshall the rhino, one of the recent additions. Such a beautiful and cool animal!
Still Life
One of the cool things about our neighborhood is that we have wild peacocks that roam the streets. This past week on the way to work I spotted this group of them, including an albino.