I’ve spent a lot of time up in the Adirondack mountains of Upstate New York. Over the years I have come to love one of the most beautiful features of the Adirondack region, the 46 Adirondack High Peaks. Hiking them taught me some lessons about struggle and technology that resonate with me in our current moment of learning and AI.
The 46 High Peaks are a collection of summits in the region that were originally surveyed to be 4000 feet above sea level or higher with at least 300 feet of prominence or three-quarters of a mile of distance from the nearest peak. Hiking these things is tough. If you’re only familiar with west coast hiking, you’re in for a surprise. There are no consistently graded trails with regular switchbacks here. When you’re not hiking up a stream, you’re dealing with knobby overgrown trails with lots of false peaks. Not only that, but many of these hikes have significant overland mileage even before starting to pick up elevation toward the peak.
Each peak has its own character, but one stands out: Whiteface Mountain. Whiteface is one of the northernmost of the 46 High Peaks, located near Lake Placid, NY, within view of the Canadian border. It is one of the taller mountains in the collection, but its height is not what sets it apart.
What separates Whiteface from all the others is that it has a road to the top.
Take the trail, not the road
The image of a mountain with a road to the top is a perfect metaphor for the landscape of education today. As we get more and more tools that help us to learn, we’ve got to discern: are they helping us hike the trail more effectively or just driving us to the top?
Whatever learning is, struggle is an integral part of it. It’s our natural human impulse to avoid pain, but we often misidentify struggle as pain. Struggle is uncomfortable, but if we eliminate it, we eliminate learning.
As we enter a new era where generative AI and technology are increasingly embedded into our education system, we need to think carefully about this point. Unfortunately, there are lots of ways to get this wrong. Today I want to talk about some of the ways I could see it going wrong and offer a few ideas about how we might do better.
Struggle is fundamental to learning. To the degree that we use any tool to help us struggle more effectively, it will help us to learn more effectively. And to the degree that we use it to avoid struggle altogether, we’ll get a life of comfort and ease, but not one full of learning, purpose, or meaning.
Take the trail, not the road.
The Absent-Minded Professor is a reader-supported guide to human flourishing in a technology-saturated world. The best way to support my work is by becoming a paid subscriber and sharing it with others.
Tradeoffs are inevitable. Don’t ignore the downsides.
I keep trying to get to the root of my misgivings about the approach that big tech companies like OpenAI, Microsoft, and Google have been taking as they make the case for integrating their generative AI products into the lives of students and teachers. While there is certainly an economic motivation, it’s not purely a get-rich-quick scheme. Many (if not most) of the people working on developing and implementing these technologies believe their work has the potential to revolutionize learning experiences for students. In many ways, I agree. But as history has shown us over and over again, the revolution we expect may not be the revolution we get.
There is no either/or binary here. Any technology that we design has inherent tradeoffs. One way to think about this is through the framework of the Innovation Bargain that I wrote about a few weeks ago. The challenge of designing effective tools for learning is no different than designing technology for any other application. It’s relatively easy to come up with a tool that does what you want it to. What’s much harder is to prevent the side effects.
What if we thought about our technology more like medications? Instead of just focusing so myopically on the positive things that technology can bring us, what if we were careful to put these benefits in the context of the side effects? Furthermore, what if the most effective remedies weren’t medications at all but rather proactive, albeit boring, habits and practices? This might not make more money, but it’s likely to cultivate more effective learning.
A nutrition label for technology
Every food and drug released to the public comes with a list of ingredients grouped in categories to help us decide whether or not we should consume it. What if every new consumer electronics device came with similar documentation alerting us to the way that this device may shape us? Something as simple as a list of the components of a technology and an analysis of some relevant “macronutrients” contained within them? I’m under no illusions that this is a magic bullet, but making the information more visible would be valuable to encourage conversations about how we engage.
This tech nutrition label should include both positive and negative aspects. With a little research this weekend, I stumbled on a website with a few examples of this idea.1 Inspired by that site and some additional thought, here are a few potential items to put on the list:
Mental health: can using this technology instill addictive behaviors?
Educational value: to what degree does this device help us learn?
Physical health: how does this technology encourage us to engage our bodies?
Community: does this technology draw us into or pull us away from community with others?
Information like this would help us address public enemy number one in our attention economy: dopamine addiction. We needn’t look far to see that this is a big issue. I’m thinking here of one writer who is learning to interrupt his dopamine-fueled habits with the notebook rule. I’m also thinking of another writer and the banger of an essay he wrote this past week. In it, he cuts right to the heart of the issue and shows how distraction is eating us and our culture for lunch. At the heart of it all is a hijacking of the dopamine system.

Can dopamine addiction be harnessed for learning?
It’s coming for education. More accurately, it’s already here and continuing to come. Seeing the success of social media’s attention grab, educational technology companies everywhere are asking: “What if we can leverage the same patterns for good?”
It’s a valid question. Duolingo models one approach. In a recent TED Talk, Duolingo co-founder Luis von Ahn shares how the app uses strategies like streaks and carefully timed push notifications to keep you engaged in the same way that social media apps keep you hooked. The figure above from his slides shows the basic idea: try to trick us into doing something that is good for us. But can we win the dopamine game when TikTok is just a swipe away? How much internal motivation do we need, and is there enough fun left over to keep learning competitive?
Cal Newport had a great segment on this question a few months back in an episode of his podcast. His take is that we need to engage a different pathway in our brains, using processes like episodic future thinking (aka mental time travel) rather than the dopamine system. I think he’s right. Hoping that we can become addicted to learning all too easily turns into addiction to whatever is adjacent to learning on our devices. If you think I’m too pessimistic, remember the lesson from Juul. Their argument that addiction to vaping was better than addiction to cigarettes didn’t turn out so well in the end for lots of users.
What we need most right now is to get on the same page about what learning is, how it is best supported, and how the tools being developed will support and undermine that process. We already know a lot about how to do this, but it looks like a lot of work and is not nearly as attractive as new toys or chatbots. Most of it is focused less on the nuts and bolts of learning and more on all the human elements surrounding it. It’s stuff like working with students to understand how well they understand the fundamentals, helping them build the confidence to try and fail and try again, and getting to know them on a personal level to help them build effective learning habits and practices.
Struggle is at the heart of the learning process. Not pain, mind you, but struggle. I frequently tell my students that they should think about their learning the same way they would think about physical training: progressive overload, pursued under the watchful, caring eye of a coach, built on the foundation of good nutrition and sufficient rest. Giving students yet another dopamine-powered technology ain’t it, folks.
Be clear about the distinction and give students the truth
I realize this likely has me sounding grumpy and old. That would be a legitimate criticism if I stopped here. But what if, in light of these potential issues, we thought about how we might design technology for education with an explicit emphasis on exposing and mitigating these damaging side effects?
It starts by getting clear on what learning is and what education should be about. This is getting increasingly complicated as words and their meanings are hijacked in service of promoting the capabilities of the tools around us. AI is particularly guilty of this anthropomorphic shift in meaning.
We’re told that artificially intelligent agents interact with us, learn from the data they were trained on, and are imbued with the ability to remember our previous conversations. What we mean when we say learn, remember, and intelligence matters. Learning is about gaining knowledge through experience, study, or being taught. While you could argue that a machine can do these things in some way, you should feel the need to put the whole thing in scare quotes. As much as machine “learning” is modeled after our understanding of how the human brain works, the two are not equivalent.
What we should do instead is tell students the truth: explore ways to struggle better, but don’t try to avoid the struggle. Consider that learning is hard but worthwhile as it opens up opportunities for us to flourish, grow, and lead others to greater flourishing as well.
As educators, we have a responsibility to do our part too: to make sure that the work we are assigning to students is worth the struggle we are asking of them. This probably doesn’t look like taking our tests and moving them back to in-class exams to try and escape the fingers of technology. It probably looks a lot more like consistently revisiting our learning objectives and revising our course materials to be responsive to our technological moment but holding fast to the core objectives that remain valuable and important for our students.
Take the trail up, but maybe the road on the way down
I have a confession to make. I took the road. But only on the way down when my hiking friends and I hitched a ride with a friendly couple with room in the bed of their pickup truck.
There is a lesson to learn here as well. Just because you choose to hike up the mountain doesn’t mean that you always need to hike down too. Sometimes technology can help you to struggle better on the way up the mountain while avoiding pain on the way back down (in this case, literal knee pain).
I’m sure the purists will still say that I can’t check Whiteface off on my quest to hit all 46. I guess I’d agree and put myself at 33, or more accurately 33.5 since I didn’t hike down Whiteface.
As we approach learning in the age of AI, let’s hold that tension: embrace the struggle even as we look for ways that technology can eliminate the pain.
The Book Nook
I listened to a podcast interview with Amor Towles a few weeks ago on How I Write. It was a great episode of a podcast that has quickly become one of my favorite regular listens. I’d read Towles’ book A Gentleman in Moscow a few years ago and really liked it. The podcast inspired me to pick up and start The Lincoln Highway. So far I’m enjoying it, and listening to the interview with Amor has given me some new insights and allowed me to see the book through the author’s lens in a fresh way.

The Professor Is In
Last Friday was the final Friday writing session for E80 this semester. We’re a few sessions short this year due to some funny scheduling and the inauguration of our new president this Friday.
After a short abstract-writing exercise, we spent the rest of the two-hour session talking about the final project. As part of this, we got the students up and talking through their plans for the project, which is focused on designing and instrumenting an underwater robot platform to launch out in Dana Point in April.
In the picture above, you can see the students scattered around the lecture hall working on some ideation activities with sticky notes as they came up with wild ideas to think about for their robots. Very fun!
Leisure Line
I was a bit late since these cookies got made the weekend after Valentine’s Day, but they still tasted great. My new secret with cookies is to pay a little less attention to the times in the recipes and go more by feel and by eye. I pulled these sugar cookies out a little earlier than I normally do, which I think was the ticket. Same recipe from Sally’s Baking Addiction that you’ve seen many times before!
Still Life
This year I discovered Cara Cara oranges and I’m not sure I’ll ever look at an orange the same way. It’s like a grapefruit mixed with a navel orange. I’ve never been a huge orange guy, but these are so delicious. I can’t get enough of them. If you find them in your supermarket (I get mine in 10 lb increments from Costco), try them out!
Hat tip to the reader who put me on to https://datanutrition.org, which ultimately helped me discover https://technutrition.org.
There's a lot of evidence in various natural processes that inefficiency IS efficient: the effort involved in coordinating efficiency is energetically expensive and forfeits a lot of positive incidental or unintended outcomes. AI is being sold almost entirely as an efficiency product that eliminates friction, in learning and otherwise: a way to reduce time investment, the cost of expert teachers, etc. Not only do we know enough now to know that capitalism will produce new forms of friction in the seemingly frictionless new technology, we also know that frictionlessness is, on some level, the devil in terms of the outcomes of education and life itself. Even when some new speed is added, it's the speed of a rocket ship trapped in the gravity of an event horizon: it accelerates toward a point where nothing will ever be changeable from the perspective of the people inside, and they will be cut off from the rest of the universe in the process of falling faster and faster toward that destination.
I'm trying to work through this question for a specific case right now. There is now an AI-enabled reader that will read academic papers for you. From what I can see, it is quite good and has lots of features that allow you to skip citations in the middle of paragraphs, etc. I am a doctoral student with very mild dyslexia and have always felt like I am behind when it comes to knowing the literature. I fought hard to learn to read faster and still struggle.

On the other side of this technological fence, I see green pastures of speeding through huge numbers of papers while getting work done at the lab bench. No more struggling to parse the complex sentence structure that meaning always seems to be buried beneath in academia. There's a catch, though. Will I keep reading? If listening is so much easier for me, will I continue to improve my reading skills? Even if they will never be at the same level as my colleagues', would it be better to build the best reading skills I can than to let them atrophy?