
in re: commodifying warmth and loneliness

I have noticed that the vast majority (if not all) of your writing on AI focuses on generative AI or large language models, but I think it is important to bring other types of AI into these conversations as well. While we should recognize the potential harms of new technologies such as ChatGPT, those are perhaps easier to spot because they are fresh in the public mind. If you write about AI, I think it is important to also acknowledge the existing AI systems that are already damaging our society.

For example, you write about commodifying warmth and the potential dangers of using AI interaction to replace social interaction. It is true that some people are already doing this, for example with Replika, but it is not happening on a large scale. In the conversation around generative AI, I feel that many people have forgotten that AI has already wreaked havoc on our social lives through social media algorithms (ironic, since I am posting this on a social media platform). I actually think social media algorithms are more damaging than generative AI, because it is hard to imagine how one could get addicted to ChatGPT (though maybe that just reflects a lack of imagination on my part).

I agree that technological advancement can come at a great cost. The best way to use technology is with full awareness of its benefits and harms, so that you can maximize your gains and minimize your losses. Unfortunately, current technology is designed to maximize shareholders' gains, so you have to try much harder to optimize technology usage for yourself.


I'm a frequent reader of L. M. Sacasas's newsletter The Convivial Society, so Borgmann is someone I've been meaning to read for a long time. Thank you for reminding me that I need to dive into his thinking ASAP.

There are many AI products marketed as efficiency tools for tasks that, I think, don’t have such a degenerative effect (making PowerPoint presentations with one click, writing generic LinkedIn posts, and so on). But I clearly see the problem when AI is applied to more abstract (human?) tasks such as developing critical thinking, writing, or reading.

What I find most troublesome is that the need for metrics and clear “objective” results is real, and within that frame of mind the deployment of AI is very tempting. I have worked as a philosophy teacher for young students, and it is difficult for me to explain and pinpoint why reading and diving deep into a text is a rich experience, and one that needs to be defended. I don’t think that all philosophy NEEDS to be useful and productive; I do believe that meditating on a question is necessary in itself.

But as I said, these AI products are deployed in a context that demands metrics, objectivity, and measurable results when evaluating these tasks. In that sense, I do wonder how we can persuasively defend these kinds of “slow” practices in this age of AI.
