Degenerative AI
How Albert Borgmann's device paradigm can help us think about the impact of AI
Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription.
![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3823699-e2de-4e04-84b7-a390bb5b43b7_2000x1400.png)
I'm finally getting around to reading philosopher of technology Albert Borgmann. I started by cracking the digital cover of Technology and the Character of Contemporary Life on my Kindle a few weeks ago. Although Borgmann wrote it in 1984 when AI of the kind we are seeing today was still squarely in the realm of science fiction, we would do well to consider the analytical frame he builds. Applying his device paradigm to AI can help us understand the way that it can be an extremely powerful and beneficial tool in specific areas while simultaneously eroding other important areas of our everyday lives—activities that Borgmann calls focal practices.
What makes Borgmann so insightful—and deserving of a place among the great critics of technology like Jacques Ellul, Wendell Berry, Ursula Franklin, and Ivan Illich—is that he is able to take ten steps back to look at technology at a level of analysis that sits above any one particular invention. As we engage with AI and the head-spinning pace of innovation and new releases, this level of analysis is the only one that will enable us to stay sane.
How to stay sane in the age of AI
Staying sane in the AI sphere these days is challenging. I'm writing this in the wake of the release of OpenAI's new o1 model which they claim is the first of a new series of models "designed to spend more time 'thinking' before they respond" (scare quotes mine). Swimming in the sea of generative AI is like trying to survive periodic campaigns of shock and awe. Companies stoke the hype, and then after waiting for the flame to be sufficiently suffocated by critique and familiarity, they release a new model or interface to once again pour gas on the fire.
Setting aside my irritation at the intentional prevarication in the use of terms like "thinking" and "reasoning" to describe what these models are doing, it's very hard not to be impressed, even amazed, by them. But once we get a handle on what they can do, the question is what we should do with them. To answer this question, Borgmann has advice for us.
One of the central ideas in Borgmann's analysis of technology is the idea of devices vs. focal things and practices. Technology is designed to isolate a particular product and commodify it, making it, in Borgmann's words, "instantaneous, ubiquitous, safe, and easy." To do this, it extracts a desirable product from its physical and social context. This enables the gears of technological progress to crank, incrementally improving the quality of the output and lowering its cost. But there are sacrifices along the way.
To illustrate, Borgmann uses the example of technological innovations to create warmth. What is lost in creating a technology that can commodify warmth?
> A thing, in the sense in which I want to use the word here, is inseparable from its context, namely, its world, and from our commerce with the thing and its world, namely, engagement. The experience of a thing is always and also a bodily and social engagement with the thing’s world. In calling forth a manifold engagement, a thing necessarily provides more than one commodity. Thus a stove used to furnish more than mere warmth. It was a focus, a hearth, a place that gathered the work and leisure of a family and gave the house a center. Its coldness marked the morning, and the spreading of its warmth the beginning of the day. It assigned to the different family members tasks that defined their place in the household. The mother built the fire, the children kept the firebox filled, and the father cut the firewood. It provided for the entire family a regular and bodily engagement with the rhythm of the seasons that was woven together of the threat of cold and the solace of warmth, the smell of wood smoke, the exertion of sawing and of carrying, the teaching of skills, and the fidelity to daily tasks. These features of physical engagement and of family relations are only first indications of the full dimensions of a thing’s world. Physical engagement is not simply physical contact but the experience of the world through the manifold sensibility of the body. That sensibility is sharpened and strengthened in skill. Skill is intensive and refined world engagement. Skill, in turn, is bound up with social engagement. It molds the person and gives the person character. Limitations of skill confine any one person’s primary engagement with the world to a small area.
Seen in this light, AI is the quintessential device. Despite its regular releases of tools like ChatGPT, these products are not OpenAI’s end goal. Take it from their own mission statement: "We are building safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome." AGI is the technology to conquer all other technologies, enabling us to overcome our limitations and accelerate the move from hearth to warmth across every imaginable application.
If the hype pans out (still a big if), there is great potential for AI to do what technology always does but with even greater speed and strength. If AI enables us to accelerate innovation in science and engineering, we could imagine it helping us to develop our capabilities more rapidly and to supercharge our ability to explore, understand, and produce artifacts to improve human health, build clean energy solutions, and better steward our environment. There is, of course, the remaining question of what it will take to get there and what it will cost.
The device paradigm does not downplay this power and its potential upside. After all, technology is quite a wonderful gift in many situations. Medical interventions save, extend, and improve our lives. The tools around us help us to avoid toil and extend our humanity. To take Borgmann's own example, we quite like being able to push a button—or these days, say a few words to our computer—to adjust the temperature in our houses. These technologies make our lives better and more comfortable.
But we can't lose track of the fact that there is always something lost in the process of innovation and progress. When we replace the hearth with the heat pump, we get the benefits of warmth on demand while we lose the center of the house, as Borgmann calls it.
As we think about the impact of generative AI on our lives and work, I can see this game plan playing out ad nauseam. Don't want to write? Use generative AI to extrude better and better essays without the need to work as hard. Feeling lonely? Isolate conversation into typing or talking to a chatbot and deceive yourself that you are not alone. Want to feel like an artist? Use generative AI to create stunning imagery at the push of a button without ever having to pick up a pencil or a paintbrush.
As we increasingly use generative AI to extract things from their context, we should keep this in mind. If we're not careful, the promise of a future powered by generative AI may in fact become a degenerative present.
Reading Recommendations
If you’re looking for another analysis of the device paradigm, I will again recommend The Life We’re Looking For by Andy Crouch. Andy’s articulation of device vs. instrument, which I’ve written about before, and of the difference between personal and personalized is swimming downstream of thinkers like Borgmann.
The Book Nook
As you might have guessed after reading today’s essay, I’m quite enjoying Borgmann. I would recommend Technology and the Character of Contemporary Life to you. I’ve found it to be accessible and digestible thus far.
The Professor Is In
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5e34b2a1-01dc-43cb-b083-2f651c947e2e_768x1024.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1b0c0f4-4302-488c-aca0-fa190e7e585d_1024x768.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F374dac98-7513-4b08-a8af-286bf79f88af_1024x768.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F21d18e85-da23-4a1b-ab77-03e0edaa53eb_1024x768.jpeg)
Sal Khan’s visit last week to Mudd was a great success. It was a delight to get to moderate the conversation with him on stage and to poke and prod at some of the points that concern me most about the tack that he and the team at Khan Academy have been taking on generative AI. Much of this is connected to themes that will not surprise any of you—worries centered on the intentional anthropomorphization of tools like Khanmigo and the downstream consequences for students, teachers, and future teachers. We followed up the lecture with a small, curated dinner full of good food and rich conversation.
I’m not sure that a recording of the event will be available to the public, but I’ll make sure to share it here if and when I can.
With the first talk in the rear-view mirror, I am looking forward to John Warner’s event on October 8th which will be moderated by my friend and fellow organizing committee member, Kyle Thompson. As with the first event, you can register to attend in person or online via Zoom here.
Leisure Line
Gotta love Randy’s Donuts. And glad that the patio in Pasadena has reopened! The pumpkin spice donut is delicious.
Still Life
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb0dcbbc-4717-4529-b176-56784ddf1a2d_480x360.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ae9d2d8-9cb1-4a23-9781-afa7cbff8446_2048x1536.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe8cb47d6-d364-4bcc-8616-7ffdf55c0abc_2048x1536.jpeg)
![](https://substackcdn.com/image/fetch/w_720,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa8d5e981-492b-429b-83a4-1f2ba6ac50ca_2048x1536.jpeg)
Descanso Gardens is a treasure in our backyard. They recently renovated their railroad and added a whole new outdoor model railroad exhibit. The kids love it and I finally got to see it this weekend. It’s delightful.
in re: commodifying warmth and loneliness
I have noticed that the vast majority (if not all) of your writing on AI is focused on generative AI or large language models, but I think it is important to bring other types of AI into these conversations as well. While we should recognize the potential harms of new technologies such as ChatGPT, those are perhaps easier to spot, as they are fresh in the public mind. I think it is important, if you write about AI, to also acknowledge existing AI systems that are already damaging our society.
For example, you write about commodifying warmth and the potential dangers of using AI interaction to replace social interaction. It is true that some people are already doing this, for example with the software Replika, but it is not being done on a large scale. In the conversation around generative AI, I feel that many people have forgotten that AI has already wreaked havoc on our social lives through social media algorithms (ironic, because I am posting this on a social media platform). I actually think social media algorithms are more damaging than generative AI, because it is hard to imagine how one can get addicted to ChatGPT (though maybe that just reflects a lack of imagination on my part).
I agree that technological advancement can come at a great loss. The best way to use technology is with full awareness of its benefits and harms, so you can maximize your gains and minimize your losses. Unfortunately, current technology is designed in a way that maximizes shareholders' gains, so you have to try a lot harder to optimize technology usage for yourself.
I'm a frequent reader of L. M. Sacasas's newsletter The Convivial Society, so Borgmann is someone I've been trying to read for a long time. Thank you for reminding me that I need to dive into his thinking ASAP.
There are many AI products marketed as tools for efficiency in tasks that, I think, don’t have such a degenerative effect (making PowerPoint presentations with one click, writing generic LinkedIn posts, and others). But obviously, I see the problem when AI is applied to more abstract (human?) tasks such as the development of critical thinking, writing, or reading.
The thing that I find most troublesome is that the need for metrics and clear “objective” results is real, and with this frame of mind, the deployment of AI is really tempting. I’ve worked as a philosophy teacher for young students, and it is difficult for me to explain and pinpoint how reading and diving deep into a text is a rich experience, and one that needs to be defended. I don’t think that all philosophy NEEDS to be useful and productive; I do believe that meditating on a question is necessary.
But as I said, the deployment of these AI products happens in the context of metrics and the search for objectivity and results when evaluating these tasks. And in that sense, I do wonder how we can defend these kinds of “slow” practices in a persuasive manner in this age of AI.