18 Comments
Annette Vee

One of the problems with having a personal moral code regarding AI is that it has so little impact on AI use generally. So I appreciate the call to work together here. The economist's definition of moral hazard is when the costs of an activity are borne not by the doer but by someone else, and the problem is that incentives to take on risks can get misaligned. If it's someone else's skin in the game, then people take more risks. I like the moral hazard analogy and used it to refer to AI companies' behavior: https://annettevee.substack.com/p/the-moral-hazard-of-ai They're risking our data, our cognitive abilities, our jobs--not theirs.

Josh Brake

Thanks Annette. I wasn’t aware of that definition, but it’s really helpful (and fortunately doesn’t contradict the way I was using it!). Thank you for sharing.

It’s really eye-opening to start to think more comprehensively about what is going on here. In truth, modern digital technology is full of moral hazards. In almost every situation there is some dynamic in which the risks fall on the end user in ways they don’t fully understand. My hope is that we can at least raise awareness, so that users consider the moral hazards all around us and do what we can to mitigate them.

Annette Vee

Yes, your post really helps with thinking about the risks we may be unintentionally taking on with AI! And you're right that so much of modern technology forces us into these positions. Without effective regulations, we're on our own. But we're better off in community, as you note.

Marc Watkins

The browser issue is going to be a challenge. I think the one upside is that a human being can easily detect it by looking at the timestamps on the assignment submission. I can view the time a student accesses and finishes an assessment on the LMS, and that is a pretty decent giveaway for seeing who is using a tool to automate their assessment. Now, if they build an AI that mimics human writing speed, then that shuts the door on that method.
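
For what it's worth, that timestamp heuristic is simple enough to sketch in code. Below is a minimal example in Python, assuming a hypothetical CSV export from the LMS with student, accessed_at, submitted_at, and word_count columns; the field names and the words-per-minute threshold are illustrative guesses, not the interface of any real LMS.

```python
import csv
from datetime import datetime

# Generous ceiling for sustained human composition speed (an assumption, not a finding).
MAX_HUMAN_WPM = 40

def flag_fast_submissions(path):
    """Flag submissions whose implied writing speed exceeds MAX_HUMAN_WPM."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # accessed_at and submitted_at are assumed to be ISO-8601 timestamps.
            accessed = datetime.fromisoformat(row["accessed_at"])
            submitted = datetime.fromisoformat(row["submitted_at"])
            minutes = (submitted - accessed).total_seconds() / 60
            words = int(row["word_count"])
            # Zero or negative elapsed time, or an implausible pace, earns a closer look.
            if minutes <= 0 or words / minutes > MAX_HUMAN_WPM:
                flagged.append((row["student"], round(minutes, 1), words))
    return flagged
```

Anything this flags is only a candidate for closer human review, and, as noted above, an AI that paces itself like a human writer would defeat the check entirely.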

Michael James McGinnis

You are right - as educators we aren't going to win this as either the cat or the mouse. We'll have to get ahead of this in the formation space with students, which, as you say, lies in the relationships that we make with students, the ones that they make with others, including their peers, and the relational skills we help them develop. That, and giving them a compass that points toward true north, toward real meaning and purpose. I write about this a lot (as you know) over at https://storiesfromatshapedengineer.substack.com

Austin Morrissey

Sounds like a good friend!

Josh Brake

Indeed!

Rob Nelson

I've been thinking a lot lately about analogizing the adoption and adaptation of AI tools to the slow diffusion of electrification a hundred years ago. Similarities abound, and it is easy to get carried away (and I do). One difference is that electricity has obvious and immediate risks in the form of electrocution and fire. Generative AI seems harmless in comparison... what's the danger of synthetic words and images? It is the more subtle risks of cultural technology that have us on this knife's edge: the moral and social risks are obscure, yet just as important to consider.

Josh Brake

This analogy is an interesting one, Rob. I wonder if there's a way to make it even more concrete by comparing electricity to the AI infrastructure (compute, LLMs, applications) and then specifically looking downstream of both to consider the societal impact. In many ways, there is a strong argument to be made that electricity is also a cultural technology, given its wide-reaching implications for the way that we live (not to mention the way it was a precursor for the Internet).

LLMs are more obviously in the lane of cultural technology because of the way they are so entangled with human language, but I'm not sure that this is a fundamental distinction. The reality is that all technology shapes culture; it's mostly a question of how and how much.

Rob Nelson

I've been thinking about what distinguishes the things that get called "general purpose technologies" from one another. And I agree that sharp distinctions among "cultural" or "social" or "information" or "industrial" technologies can blind us to the variety of uses. We so easily mistake useful categories for fundamental structures and let the distinctions keep us from understanding how technologies always work together to create new uses. The classic example from the nineteenth century is steam-powered engines on steel tracks coordinated with the telegraph.

By definition, "general purpose technologies" can be put to a range of purposes. The invention of that more specific GPT, the generative pre-trained transformer, has harnessed our earliest and most fundamental technology, human language, to the existing constellation of recent industrial and information technologies. We are barely beginning to make sense of the potential.

What I value about your essays is the way you encourage your readers to see that technology shapes culture, but that culture (acting on beliefs about what is right and what is important) can and should shape technology.

Joseph Thibault

Aren't the risks just less physical and more intellectual and emotional (insert any references to derangement/psychosis or self-harm or suicide)?

Rob Nelson

That seems right, but that means they are risks to other people. Few people consider themselves susceptible to psychological harm, whereas fire and electrocution feel as though they could strike anyone.

Dr. Carmen Lagalante

Sure, I would be happy to do that. I'll send you my contact info in a message.

Steve Peha

Great work here @Josh Brake. Thank you for bringing morality into the discussion. We are so hung up on capabilities and use cases right now that virtually no one is talking about morality. I am trying to insert more value-based discussions into my work, and this helps a lot. But getting to the core of moral judgement is still far too scary for most organizations to deal with. Specifically, the notion of moral hazard is occurring just as you suggest: under the radar, because our radar is only scanning for capabilities and use cases! Thanks for getting the ball rolling here.

Dr. Carmen Lagalante

I completely agree. I’ve also written about how essential genuine human relationships are in this age of AI. They remain the only way to discern the truth of what we see and hear, the only real cure for loneliness, and the only moral compass that isn’t programmed simply to affirm our own views.

Steve Peha

You’re right, @Dr. Carmen Lagalante. Authentic relationships are the true shibboleth of the AI age. And nothing is more troubling than the number of teens and preteens forming relationships with AI bots. What do you think we can do about that issue specifically?

Dr. Carmen Lagalante

I wish I had a perfect solution for that, but I can share what we’re doing at my high school. Last year, we implemented a complete phone ban and established strict consequences for infractions. We also launched a house system to help students build genuine friendships and mentorships, fostering human connection and reducing instances of bullying or social isolation. Finally, we transitioned from a BYOD (Bring Your Own Device) model to a 1:1 device program so we can better manage student access to AI tools during school hours.

Steve Peha

Carmen!

Thank you so much for the details here. What you’re doing sounds fabulous.

I’m just starting a new Substack. It’s called Questions Worth Asking About Education…

My first goal is to write 20 long-form posts, each one with a single research-backed answer (with many examples) about a big issue in schools: mostly big tech issues, but probably a few literature and curriculum questions too.

Question #6 is: What Should We Do About Phones?

I’m not just looking for answers like “Lock them away!”

I’m looking for schools like yours that have grappled with the issue enough to come up with holistic solutions that really make sense.

Would it be OK with you if I kept in touch? I should begin drafting Question #6 in about ten days. If it’s not too heavy a lift for you, I’d really appreciate perhaps a 20-minute Zoom interview on what you’ve done and what you’ve accomplished.

Thanks,

Steve
