Discussion about this post

Annette Vee

One of the problems with having a personal moral code regarding AI is that it has so little impact on AI use generally, so I appreciate the call to work together here. The economist's definition of moral hazard is when the costs of an activity are borne not by the doer but by someone else, and the problem is that incentives to take on risk become misaligned: when it's someone else's skin in the game, people take more risks. I like the moral hazard analogy and used it to describe AI companies' behavior: https://annettevee.substack.com/p/the-moral-hazard-of-ai They're risking our data, our cognitive abilities, our jobs--not theirs.

Marc Watkins

The browser issue is going to be a challenge. I think the one upside is that a human being can easily detect it by looking at the timestamps on the assignment submission. I can view the time a student opens and finishes an assessment on the LMS, and that is a pretty reliable giveaway for spotting who is using a tool to automate their assessment. Now, if they build an AI that mimics human writing speed, that shuts the door on that method.
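For what it's worth, a rough version of that check could even be scripted. Here is a minimal sketch, assuming a hypothetical CSV export from the LMS with open/submit timestamps and a word count; the column names and the words-per-minute cutoff are illustrative, not from any particular LMS:

```python
# Hypothetical sketch of the timestamp heuristic described above.
# Assumes a CSV export with columns "student", "opened_at",
# "submitted_at", and "word_count" -- all illustrative names.
import csv
from datetime import datetime

PLAUSIBLE_WPM = 40  # generous upper bound for sustained human drafting speed


def flag_fast_submissions(path):
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            opened = datetime.fromisoformat(row["opened_at"])
            submitted = datetime.fromisoformat(row["submitted_at"])
            minutes = (submitted - opened).total_seconds() / 60
            words = int(row["word_count"])
            # Flag anyone whose implied pace exceeds a plausible human speed.
            if minutes > 0 and words / minutes > PLAUSIBLE_WPM:
                flagged.append((row["student"], round(words / minutes)))
    return flagged


if __name__ == "__main__":
    for student, wpm in flag_fast_submissions("submissions.csv"):
        print(f"{student}: ~{wpm} words/minute -- worth a closer look")
```

A flag like this is only a prompt for a conversation, of course, not proof of anything on its own.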
