12 Comments
Still lighting learning fires:

Have to be honest -- my first thought is that if you had asked one of the AI apps for a diagnosis to begin with you would probably have avoided the problems that came with your tinkering. It was also interesting that when you got stuck you decided to do some "digging online". I would also suggest that supervised "tinkering" with AI is actually a good way for students to begin to figure out what it can do, what it can't do, as well as what it should do and should not do. We usually insist in class that they do what WE want them to do, solve the problems that WE want them to solve, and deprive them of finding out what THEY can do.

Josh Brake:

Interesting hypothesis. Not a bad idea, but unfortunately ChatGPT also gave the wrong order (said to loosen non-drive side first) https://chatgpt.com/share/68766c36-7790-800b-b88d-c09a8138e7d3

We want to get our students to explore their own curiosity, for sure, but good teachers are already trying to do that. It's not an either-or scenario, but the grey area of the both-and is not entirely straightforward.

Still lighting learning fires:

A valuable learning experience -- to see (once again) that ChatGPT still hallucinates. Just curious if you tried other AIs. I certainly agree -- getting into an argument formed around "AI: Good or bad" is the wrong argument. It's both Good AND Bad and they need to know how and why -- and that's where the teacher comes in!

Timothy Burke:

Great analogy.

Also good to see Illich! I find it odd that he's disappeared from so many conversations where his work is relevant.

Josh Brake:

I’m really jonesing to teach a class for my engineering majors that introduces them to the technological critics of the 20th century. They feel in so many ways to be the voice that we need in our modern moment, particularly with AI.

Katherine Goldstein:

Thanks for mentioning my work!

Stephen Fitzpatrick:

I tend to agree. I think of it more as intentionality, but it's all an experiment at this point. I think it's also important for students to do some sort of reflection after the activity, ideally right after, to capture the data. Purposeless use of AI, with no goal or outcome you are aiming for, is not likely to give you any kind of meaningful results.

Stephen Fitzpatrick:

Yes. My use with HS students has been very deliberate and, almost always, under fairly strict monitoring and supervision. Another thing that does not get much discussion is that, technically, ChatGPT is not a tool that has been authorized or managed by schools, so kids really can't be "required" to use it. Unless a school invests in some kind of AI wrapper platform, it's really the Wild West, especially because some kids may have access to paid vs. free models. The logistical issues alone make tinkering a bad idea. Ideally, it would be great to test any AI activity with colleagues before an actual class.

Josh Brake:

Thanks Stephen. I agree that student reflection on the use of AI tools in the classroom, alongside instructor reflection, is an important piece. That probably also brings into focus why these new tools should largely not be used (or must be very carefully scaffolded) with younger students, who aren't likely to be able to do that metacognitive work. We need to articulate some suggested prerequisites before engaging with these tools in the classroom can be fruitful. Metacognitive reflection certainly feels like a good one.

Jane Rosenzweig:

Great distinction! If you have some examples of either how to tinker or how to experiment that would be good for The Important Work audience, it would be great to publish you over there. Let me know...

Josh Brake:

Thanks Jane. I’ll likely flesh the ideas out a bit with some more examples and suggestions as I continue to work on my talks/workshops for this fall and it would be great to share those ideas with The Important Work community. At the root, what I’m really suggesting here is essentially a design process. You’ve been right on this for a long time by emphasizing that we’ve got to start by asking “to what problem is this AI tool the answer?”

Experiments/prototypes know the problem they're trying to solve ahead of time. Tinkering is trying to figure it out on the fly.

Myles Werntz:

Two notes:

1) Glad that you're reading Illich. He's a literal life changer, or, he has been for me.

2) I've been tapped by my old university to design an AI Ethics course, so entering the wild west. I think you've been involved in a group looking at AI in education, so would love to chat sometime.
