6 Comments
Carol Ann Logue:

Great article! Thank you for making the points with such relatable examples! Having willpower requires understanding the consequences of our choices and being willing to be patient and push through to the long-term positive gains.

Reminds me of another article I shared last week on LinkedIn. https://www.theepochtimes.com/health/the-cognitive-debt-were-accumulating-every-time-we-use-ai-5889854?utm_source=Morningbrief&src_src=Morningbrief&utm_campaign=mb-2025-08-01&src_cmp=mb-2025-08-01&utm_medium=email&est=a0rZLQhRIrbJeVgAVCS%2B2t0SzbufTILqRFvSncJbPad5HhEqfJ2z%2FR8jiyAqAKdJKf3v

Josh Brake:

I saw that article, very sharp. Thanks for commenting and sharing the link.

Mike S:

"If it were true that men could be taught and tamed by machines, even if they were taught wisdom or tamed to amiability, I should think it the most tragic truth in the world. A man so improved would be, in an exceedingly ugly sense, losing his soul to save it. But in truth he cannot be so completely coerced into good; and in so far as he is incompletely coerced, he is quite as likely to be coerced into evil." --GK Chesterton

Stephen Fitzpatrick:

This is exactly why I'm very skeptical that study mode is going to be used by students of their own volition. The moment they hit a roadblock, most (not all, but most) will quickly revert to the default model. And that's assuming they even choose to use it in the first place.

https://fitzyhistory.substack.com/p/encirclement-and-attrition-chatgpts

Josh Brake:

Yup, agreed. I share your skepticism.

Justin Reidy:

Thanks as always for the thoughtful piece, Josh.

I'm with you on the importance of willpower. But with AI, it's going to be a lot harder than hiding the cookie jar.

I've been studying George Ainslie's Picoeconomics recently, and what it says about willpower is not good.

To cut out a lot of detail, Ainslie defines willpower as a function of recursive self-prediction. "If you realize that your current choice is a test case for how you can expect yourself to make similar choices in the future, [you create] a bundle of your expected outcomes. If you defect to the [short term benefit] this time, your best guess will usually be that you will go on doing so...."

The solution is a premeditated "bright line" that bypasses the entire loop and avoids the short-term reward entirely.

But with AI, if you're not going to avoid it entirely (and you probably shouldn't), then you have very fuzzy lines. And given how an LLM chatbot constantly encourages you to use it for more things, how convincing its arguments can be, and how the UX is engineered for engagement-maxing... we have a far greater problem than anything we ever experienced with social media.

I don't have an answer. I'm navigating these challenges myself. But a key factor is going to be the social norms we align on to guide our use of these tools.
