The Moral Hazards Of AI Are Closer Than You Realize
We're walking on a knife edge and we need each other if we want to make it across
Thank you for being here. Please consider supporting my work by sharing my writing with a friend or supporting me with a paid subscription.
This week I’ve been thinking about one of the scariest hikes I’ve ever done. It’s got something to teach us about the moral hazards of our current moment with AI.
Growing up, I spent two weeks each summer at Deerfoot Lodge in the middle of the Adirondack Park in Upstate NY. Most years were spent on or around the main camp location, but one year, I went on a two-week trip as a Voyager and spent 10 days canoeing down the Allagash River in Maine. What a trip.
After making the drive up, the trip started with a hike of Mount Katahdin, the tallest mountain in Maine. It was a tough hike, and I wasn’t the world’s best hiker at that point in my life. I remember having a heck of a time at the top, fighting cramps in both legs.
Hiking Katahdin isn’t exactly a walk in the park. There’s a particularly challenging section called the Knife Edge, a narrow 1.1-mile stretch of the trail leading to the summit. True to the description, it has sharp drops to either side.
In our current moment with AI, we are walking on a metaphorical knife edge. The AI tools being released into our world are being designed and marketed in ways that don’t rule out positive applications, but certainly don’t guarantee them either. There is a path to use AI in positive ways, but we are largely unaware of the many moral hazards it is creating all around us. The worst part is that many of these hazards are flying beneath the radar, hidden by the overwhelming number of new models and tools being released. We are drifting off the trail without even being aware of it.
The ethical quandaries that used to live in the realm of the hypothetical are now squarely in the domain of the possible. Some have even become probable.
Our current reality
The continuous improvement in AI tools creates a dynamic landscape that is hard to keep up with. Time and time again, we’ve seen theory reduced to practice. What once seemed impossible is now available for free or with subscription costs of less than a dollar per day. While the narrative around AGI and machine superintelligence continues to grab headlines and attention, LLMs are quietly getting wired up to our existing Internet architecture wherever anyone thinks they might be even slightly useful.
But even outside of new LLM-powered apps, the existing chatbot interfaces that we’ve grown accustomed to like ChatGPT and Claude have more going on under the hood than most of us realize. While ChatGPT looks about the same on the outside in 2025 as it did in 2022, what’s happening after you type in a prompt and hit enter is much more sophisticated today. Largely this is due to the tools that the LLM now has access to—tools like web search, code execution, and a growing variety of other services made accessible using the Model Context Protocol (MCP) pioneered by Anthropic.
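For the curious, here is roughly what one of those MCP-connected tools can look like under the hood. This is a minimal sketch using the FastMCP helper from the official MCP Python SDK; the server name, the course_deadlines function, and the placeholder data are all invented for illustration, and the exact SDK details may differ by version.

```python
# A minimal MCP server sketch (illustrative only).
# Assumes the official MCP Python SDK is installed: pip install "mcp[cli]"
from mcp.server.fastmcp import FastMCP

# The server name and the tool below are invented for illustration.
mcp = FastMCP("campus-demo")

@mcp.tool()
def course_deadlines(course_code: str) -> list[str]:
    """Return upcoming deadlines for a course (hard-coded placeholder data)."""
    fake_calendar = {
        "CHEM101": ["Problem set 3 due Oct 14", "Midterm exam on Oct 21"],
        "ENGL210": ["Essay draft due Oct 17"],
    }
    return fake_calendar.get(course_code.upper(), ["No deadlines found"])

if __name__ == "__main__":
    # Runs over stdio so an MCP-aware client (e.g., Claude Desktop) can discover and call the tool.
    mcp.run()
```

Once a server like this is registered with an MCP-aware client, the chatbot decides on its own when a prompt warrants calling it. That is the quiet wiring-up happening behind interfaces that look unchanged on the surface.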
All of this is an important reminder to keep our eyes looking toward where the puck is heading, not just where it is today. What we’re seeing in AI right now is part of the roadmap to pursue “agentic” behavior. Definitions of what exactly an AI agent is are often unhelpful or misleading, but I’m partial to Simon Willison’s: “An LLM agent runs tools in a loop to achieve a goal.”
In essence, this means that AI agents use an LLM to decode the intent in a user’s prompt and then choose tools to retrieve additional context that can be helpful for delivering a relevant continuation. A simple example of agentic AI behavior is when ChatGPT decides to do a web search to respond to a query instead of directly prompting the LLM. More sophisticated examples are just now emerging into the mainstream through the growing availability of MCP servers that can be directly wired into Claude.
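That “tools in a loop” idea is compact enough to sketch in a few lines of Python. Everything below is hypothetical scaffolding: call_llm stands in for a real model API, and the single toy tool stands in for things like web search or code execution. The point is the shape of the loop, not a working agent.

```python
# A toy illustration of "an LLM agent runs tools in a loop to achieve a goal."
# call_llm is a stand-in for a real model API; the tool is a placeholder.

def call_llm(goal: str, history: list[dict]) -> dict:
    """Pretend model: asks for a web search first, then declares it is done."""
    if not history:
        return {"action": "web_search", "input": goal}
    return {"action": "finish", "input": "Here is a summary based on the search results."}

def web_search(query: str) -> str:
    """Placeholder tool; a real agent would call an actual search API here."""
    return f"(fake search results for: {query})"

TOOLS = {"web_search": web_search}

def run_agent(goal: str, max_steps: int = 5) -> str:
    """Run tools in a loop until the model says the goal is achieved."""
    history: list[dict] = []
    for _ in range(max_steps):
        decision = call_llm(goal, history)      # 1. the model decides what to do next
        if decision["action"] == "finish":      # 2. stop when it says the goal is met
            return decision["input"]
        tool = TOOLS[decision["action"]]        # 3. otherwise run the chosen tool...
        result = tool(decision["input"])
        history.append({"decision": decision, "result": result})  # 4. ...and feed the result back in
    return "Gave up after too many steps."

print(run_agent("What changed in Canvas quizzes this year?"))
```

The loop is trivial here, but swap in a real model and real tools (a browser, an LMS integration, a code runner) and you get something that can click through websites on your behalf, which is exactly why the Comet example below deserves our attention.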
More disruption for education
The development of agentic AI tools has many implications for education. First of all, it means that saying something like “use ChatGPT to…” is vague and unhelpful. Not only does the capability of the underlying models (GPT-5, GPT-4o, Sonnet 4.5, etc.) vary widely, but the tools each one can reach will create even bigger differences. Even more significantly, agentic AI development will continue to undermine attempts to create AI-proof assignments. That approach has always been a dead end, and agentic tools are putting the final nails in the coffin. Treating your relationship with your students as a game of cat and mouse is not a path worth walking.
As a specific example that educators need to be aware of, consider Comet, the new browser from Perplexity. I wrote briefly about Comet a few weeks ago, specifically noting the way that it is being marketed to students as a tool that can “help you get your homework done faster.” If alarm bells aren’t going off yet, they should be.
Comet works by deeply integrating an LLM into the web browsing experience. On any webpage, you can call up Comet’s AI Assistant and ask it to interact with the content on the page. This week I took it for a spin. First up: see what Comet can do on an online learning management system (LMS).
I started by logging in to Canvas and asking Comet to create and take a quiz. I gave it a prompt and sat back and watched as Comet took control of my browser, created a simple multiple-choice quiz, and then proceeded to go into student preview mode to take (and ace) the quiz. You can watch the video above to see how it works (roughly 1 minute long at 20× speed).
Until this weekend, you needed a $20/month Perplexity Pro subscription to use Comet. Now, it’s free for everyone. So glad that Perplexity has democratized the ability for anyone to get help “getting their homework done faster.”
But the moral hazards are not for our students alone. They never have been. Not only can Comet take the quiz, it can create one too.
On the one hand, this is good news. I’ve never heard anyone argue that spending time clicking around on a web interface to create an online version of a quiz is time well spent. But the potential for misuse here is right next door, and it’s hard to imagine that most won’t be tempted. Not only can Comet be used to re-format an existing quiz in Canvas, it can also be used to come up with the questions and topics. In the example above, all I asked was to create a 5-question multiple-choice quiz about AI. Comet took care of the rest.
Protecting ourselves against moral hazards
The core challenge is not the emergence of new moral hazards. Given enough time and money, there have always been ways to take shortcuts. No, the challenge we are facing is that these moral hazards are now much more accessible. No longer do you need to convince a friend to take your quiz for you or pay an online paper-mill service to write your term paper. You can simply download a free, AI-powered web browser to do it for you while you scroll social media in a different tab.
To make matters worse, the speed at which these new capabilities are rolling out is making it even more difficult to think clearly about them. The domain of the possible continues to expand. We often act on the “can” without stopping to think about the “ought.” At the moment when we most need moral clarity and wisdom to navigate these questions, we find ourselves moving through the world at breakneck speed, struggling to find the time to contemplate what we are doing and whether it is wise.
It’s not that there isn’t a pathway for us to use AI to support human flourishing. It’s just that the path is much narrower than we realize. We are walking on a knife’s edge, unaware of the danger lurking nearby.
How should we protect ourselves from these hazards? As I’ve wrestled with these ideas myself, I’ve come to the conclusion that the best defense against the moral hazards of AI is relationships.
I was reminded of this last week while discussing a recent AI project with a friend. I had been building out a custom version of a new app and had fed a screen recording of that app to Claude to generate a prompt I could use to scaffold the basic structure.
As I was talking about the idea and how I built it, my friend pushed back on me. Was it really ethical to feed a screen recording of someone else’s app into an LLM with the express intention of creating what is little more than an unauthorized clone of it? While it’s hard to make black-and-white judgments when it comes to artistic inspiration, I agreed with my friend that this went too far. Perhaps if I had experienced three or more similar interfaces and then used that information to write a prompt to generate a similar app, that would be acceptable. But directly feeding in a screen recording of all the visual assets and UI/UX landed far enough outside that gray area for me to conclude that it wasn’t an ethical way to build with these tools.
As I’ve been reflecting on this experience over the past few days, I can’t help but recognize how easy it was to fall into a trap like this. These are the kinds of decisions that are easy to make when you’re moving quickly and not thinking. And that’s without even stating the obvious: we are already making a moral judgment by using these tools at all, knowing that the training data is full of intellectual property taken without permission. These are decisions that couldn’t be made just a few years ago because the technology did not exist. Today, they are almost frictionless. Just grab some data and throw it at your favorite AI tool.
None of us is above the moral hazards posed by this new technology. We will all make mistakes, and that is ok. But if we aren’t careful, we won’t even realize it. We will need others along the way to help us reflect on the decisions we are making and to offer words of correction as we search to find wise pathways forward.
As we try to figure out the wise path forward in a rapidly evolving landscape, the best resource we have is each other. If we want to cultivate wisdom, we must resist the urge to hide the new ways we are experimenting with AI tools. Instead, we should be intentional about telling others about how we are using AI tools and asking them to help us to see our blind spots.
Just like when you’re hiking on Katahdin’s Knife Edge, it’s unwise to go it alone. If we want to avoid the moral hazards, we’ll need each other.
Got a thought? Leave a comment below.
Reading Recommendations
+1 to this piece.
Many of the best changes you can make in life are on the other side of some cost you’re painfully familiar with, but where the payoff is a kind of joy you can’t yet comprehend.
The Book Nook
I’m finally in the home stretch of finishing John Grisham’s The Guardians. The Mrs. says the fact that I define Grisham as light reading confirms that I’m a nerd. If that’s true, so be it. It’s good!
The Professor Is In
The team over at the MIT Teaching + Learning Lab shared a link to the recording from my talk a few weeks ago. This was a fun talk to give! Slides are here in case you want to take a look, and the related blog post I wrote the day before the talk is here.
Leisure Line
Magnatiles have been the hot toy in the Brake household of late. Here’s one of the most recent creations.
Still Life
I like this shot that the team at the Notre Dame Institute for Ethics and the Common Good shared of me in action on a panel discussion at the conference they recently hosted. I’m probably going on about the Norway Spruce or Frog and Toad. In case you need identity verification…
YETI coffee mug? ✅
Pocket notebook and Papermate Flair? ✅
Serious expression and waving of hands? ✅