8 Comments

I'm a Tesla owner and all of us were gifted a month of full self driving (FSD) back in April 2024 and then two months in October-November 2024. So I can give you a sense of what it's like to have FSD in place for longer periods of time.

FSD was an amazing thing to experience for that time. It was astonishing to watch it work, and 90% of the time it absolutely performed as well as or better than a human driver. But for that other 10%, I was convinced it was trying to get me killed. I clearly remember one morning having it drive me to my favorite coffee shop in another town, 20 miles away and tucked into the downtown area. It navigated there flawlessly and even located, and then parallel-parked in, a prime spot on the street right next to the shop. Amazing. Then, on the ride home, it ran a stop sign and almost got me t-boned.

To me this is the perfect analogy for where we are with AI. Most of the time it's a tool that is simply mind-blowing in what it can do. And when used from the standpoint of augmenting human abilities, it's one of the few technologies that can inspire hope. I do believe that one day we will view self-driving capability the same way we now view seat belts and air bags: unthinkable to drive on the road without them. (I definitely appreciated FSD driving home exhausted and bleary-eyed after playing gigs that ended at 2:00am.) But when it's not astonishing, it's terrifying, and it's hard to know when or where that switch will flip.

FSD and AI are getting better all the time, so the question of where the human sits in the loop becomes ever more salient. (In the end, I decided FSD was cool but not worth $100 a month -- it doesn't really solve a problem that I have.)


Thanks for sharing your experience, Robert. Agree with you that the analogy hits the nail pretty squarely on the head.

"But when it's not astonishing, it's terrifying, and it's hard to know when or where that switch will flip." Indeed.


Very glad to see new rhythms emerging alongside established routines.

Assuming your many references contain references of their own, there's nearly a semester's worth of thoughtful reading here!

"Writing is more than extruding syntactically correct text. Both the product and process matter." - John Warner's new book is out --

https://www.hachettebookgroup.com/titles/john-warner/more-than-words/9781541605510/?lens=basic-books


Yes! John snuck me an advance copy when he visited Mudd in the fall that I regrettably have not yet had the chance to read. I'm very much looking forward to it and it's been great to see it making the rounds and getting glowing reviews!


I teach writing, mainly to electrical engineers and computer scientists but also to undergraduates, and I was writing about that same article yesterday, in a continuing effort to get my university to stop being so complacent about AI and especially text-generating LLMs. Here’s the bit on that:

One study in the Conference for Human Factors in Computing Systems finds that “GenAI . . . can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem solving” in “knowledge workers” (Lee et al., “The Impact of Generative AI on Critical Thinking”). One explanation is that it may “deprive the user of the routine opportunities to practice their judgement and strengthen their cognitive musculature, leaving them atrophied and unprepared.” Imagine the way people in the driver’s seat of self-driving vehicles are supposed to oversee the vehicle so they can override any safety breaches: they become unsure of their decisions, agree with the car more and more, and stop paying attention altogether. Something like that is happening at desks. But instead of thinking of it as happening in cars, on streets, in offices, or in memos, think of it happening in people’s brains.


Yes indeed. We've gotta think more critically about the downstream consequences of a paradigm where we shift more heavily into an oversight role. To think that it won't have consequences on our abilities to do those things we automate is just naive.


I think the hype around self-driving cars and human cloning (remember Dolly the sheep?) is instructive. It was always almost here. And then, it wasn't. What we got in the way of automation was a real change, but nothing like what the enthusiasts imagined. And, as you point out, such automation comes with trade-offs. When it comes to education, thoughtful consideration of those trade-offs should be part of the discussion.

Thanks for doing more than your share of such consideration, Josh. And for pointing me to Brock and McGilchrist, whose work I didn't know.


I am always grateful for your weekly comment, Rob. I appreciate that I can count on you to consistently pop in with something thoughtful, even when I sometimes forget to respond!

You are exactly right that we always misjudge the power laws with this sort of progress. Getting the first 80% of the way there is easy. The last 20% takes a lot longer than almost everyone expects.

McGilchrist's The Master and His Emissary is on my list and I've heard quite good things about it. Unfortunately currently on the "one day soon" list along with many other worthwhile reads...
