26 Comments

“However, I continue to be very pessimistic about the impact of LLMs on our ability to write more generally. Writing, like almost anything worth doing, is a process of which struggle is a critical part.”

100%.

This is true of all art, too. I watch as the LLM companies move into artistic spaces (see the Sora announcement from last week), and I become more enraged. These are de facto desacralized applications to a sacred expression. In 2 decades, we’re going to ask what it means to be human, and a whole generation will shrug their shoulders, type the question into a chatbot, and receive an AI-generated documentary (with killer A and B roll) giving a very non-human answer. I’m no futurist, but you don’t have to be one. In the words of Dylan, “you don’t need a weatherman to know which way the wind blows.”

author

Thanks for the comment Seth. I agree with you that the move into artistic spaces is concerning at best—"de facto desacralized applications to a sacred expression" is a good line! I share your concerns for the future and want to be part of helping us to redeem our technology to envision and build a world that cultivates human flourishing.

author

PS, this is a great post from Ted if you haven't seen it yet: https://www.honest-broker.com/p/the-state-of-the-culture-2024. Very relevant to this line of thought.


Just read this now. What he writes about the deleterious cultural shift we have already undergone is not unexpected, but nevertheless I'm still reeling after reading it.

Feb 18 · Liked by Josh Brake

I'm actually writing to ask "Can your definition of success and 'securing our collective future' [ever] actually [be] something that is aligned with human flourishing?" I'm not terribly concerned with whether it is or not because I think history shows us that, to date, much of the large-scale development done by robber barons like Altman is inherently inimical to human flourishing. Sure, the large corporations that are these men's legacies donate to society, building libraries, hospitals, universities, museums, etc. But their donations are a by-product of their robber-barony, not its central focus. Had they pursued the missions to which they donate with all the vigor with which they pursued their various industries, the world would have been a much different and far better place.

Thanks for this, Josh. Your writing brought me clarity on my own. We need to ask these questions loudly and publicly.

author

Thanks Aaron and I'm glad that you found the piece helpful.

The point you raise is a good one. To the best of our ability, we need to find ways to align our ends and means. History has many examples of how to do this poorly, and I fear that we may end up repeating the mistakes of the past, or at least rhyming with them.

Feb 16 · Liked by Josh Brake

This was a great article, as usual, Josh.

I loved the dissection of Sam's words, and the contrast with Paul Graham's. Words are the vessels for meaning and creativity. What's clear is that even though these models create "right-sounding words," they are well and truly hollow, missing that creative and authentic spark.

Also love that you shared your thoughts on "economic productivity" and "efficiency" not being innately human values.

It makes us ask why these values have been so forcefully imposed upon us, with so little questioning from the individual.

Must read!

author

Thanks Zan, appreciate your comment! We've gotta keep asking ourselves what the point of it all is and what our values are. If we forget that, we're bound for bad places.


Great article, thank you! A friend I was speaking with made a valuable point about writing entire pieces with LLMs: it feels like lying. Why do you write? To communicate yourself, part of you, your ideas, and to give them shape and improve your understanding too. Letting a machine do this for you is a lie. Generating billions of new “lies” isn’t a worthwhile achievement at all! I welcome the help of LLMs in polishing style, choosing words and expressions, and clarifying thoughts. I often use them that way. But marketers or lazy students spitting out copy for social media or essays are effectively (and efficiently!) fabricating lies.

author

Thanks Alberto. You put it very well and I agree with you!


PS: I recently finished reading Unreasonable Hospitality. Such a great book!!

author

Just finished it last night. So good!

I'm writing a full post on it next week, so I will look forward to hearing your thoughts and feedback on that!


Great!! Looking forward to it then 👍


Thanks for the shoutout, Josh! While I would be proud to be one of the people Sam was snarking about, I doubt I'm on his radar (Gary, on the other hand, surely is), in part because the kinds of failures John and I and you are probing are really not what he cares about. That is, his sense of what constitutes success or failure is not ours. To unpack his half-baked tweet about grinding instead of writing substacks (or maybe purely baked? I don't know what these billionaires smoke), I see a strong longtermism not far from the surface: "grinding" to secure "our collective future" raises the questions of who the "our" is, what he (and other "grinders") are willing to sacrifice for that supposed future, and what "success" looks like. Not surprising that there isn't much concern in that crew about near-term impacts on people, on culture, on the value of human labor, on environments: no one on that longtermist rocket ship (literally, in Musk's case) cares much about you. They're concerned with the "future of humanity." That sometimes plays out in, for instance, eugenicist ways: who cares about impacts on the most vulnerable? They don't want those folks to breed anyway! They don't have the resilience to make it! You get the idea. Gebru and Torres have dubbed this line of thinking TESCREALism, which links a cluster of philosophies: transhumanism, extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism, and longtermism.

Those who are inclined toward speculative fiction and YA novels might enjoy The Last Cuentista, which I read last spring just as I was becoming familiar with the TESCREAL line of thinking. Despite being published before the rise of these philosophies in the public imaginary, it does bring them into focus.

A primer on longtermism is here: https://www.vox.com/future-perfect/23298870/effective-altruism-longtermism-will-macaskill-future

On TESCREALism: https://www.truthdig.com/articles/the-acronym-behind-our-wildest-ai-dreams-and-nightmares/

author

Thanks for the comment and resources, Katie. I agree that the core of the issue is that we're talking about different ideas of what success and failure look like. I think what I want to highlight is that what we think about those ideas matters, for the reasons that you mentioned!

As an aside, I came across an article this morning which had some excellent discussion along some of these points: https://digitaleconomy.stanford.edu/news/the-turing-trap-the-promise-peril-of-human-like-artificial-intelligence/


Great article! Writing is a form of thinking. And it's difficult to argue that we need less of that in the world.

author

Thanks, Suzi!


"This is the first issue with Altman’s comparison: his units are not equivalent, even if they might appear to be at first. While humans and machines can both generate words, the process that generates those words matters. For an LLM like ChatGPT or Gemini, that process is an algorithm, taking a given input (the prompt) and computing an output using a complicated computational architecture that has been trained on a large corpus of text. The output of this computation might be words, but these words are not the same as those generated by humans."

I think that you are spot on here, but the problem, as I see it, is this: the techno-optimists and developers like Sam Altman do not understand the difference between words generated by humans and words generated by AI, because they see humans as just a sort of advanced meat computer that spits out words the same way ChatGPT does. The brain-as-biological-computer narrative has been pushed for as long as I can remember, and these people are just taking it to its natural conclusion. For them, there is literally no qualitative difference between human words and machine words; the machine words are just generated by silicon instead of by squishy human gray matter.

The Theory of Evolution downgraded humanity to mere animals, and the science and thought built on top of it has downgraded us even further to lumbering meat computers. If one does not believe that people have souls, then there is absolutely no qualitative difference between the words generated by meat and the words generated by metal. So Sam Altman and the others who think this way will never understand your concerns, because they do not share your basic premise about the nature of humanity.

author

Thanks Aria. Totally agree that this is the root of the issue. What is most exciting to me about the conversation about AI is how it brings so clearly to the surface the question of what it means to be human. All technology and culture implicitly answers this question, but AI does so in a particularly direct and clear way.

My hope is that in surfacing these issues, we can have fruitful conversations about what it means to be human and to explore the beauty of creatures together. I'm hoping to continue to help foster that conversation in my writing here—thanks for sharing your thoughts.

Feb 14 · Liked by Josh Brake

I haven't even yet finished reading this great post, and I needed to pen this thought before it is lost... How can we possibly reverse this trend of having the Machine "mindlessly" spewing more and more words -- multitudinous words upon which its artificial intelligence was trained, no less? Ironically, a solution would be to push that Machine closer and closer to the elusive Singularity, at which point perhaps it would suddenly possess enough self-consciousness to wake up and realize that it needed to stop spewing so many words, put the brakes on the mindless "garbage-in-garbage-out" routine and... think for a moment before generating any more words. [Okay, back to the post].

author

Ha Jim, you've solved it! If only...

This idea of wisdom is an important one for us to double-click on. Even if a machine can develop intelligence (a big question mark!), can it ever develop wisdom?

Feb 20 · Liked by Josh Brake

The fear of God is the beginning of wisdom. So... will the Machine ever come to the place where, on its artificially intelligent own, it first of all acknowledges God, then fears him? Even if we could stay alive for several generations to come, I do not believe that we will ever witness the day that this happens. Therefore, game over! Like all good tools, AI is useful if used properly, and I am now relieved to consider it relegated merely to the long list of useful tools with which I arm myself daily in order to face the challenges of another long day at work.

Feb 20 · Liked by Josh Brake

i.e. You nailed it, Prof. Josh, in pointing to wisdom as the ultimate achievement.

author

Thanks Jim, well put!


I agree: no one wants more low-quality words. But I'm not sure what this has to do with the 100 billion words generated by ChatGPT daily. How many of those words are being converted into prose competing for our attention? Even if all of them were, how does an additional 0.1% fundamentally change the filter problem of finding quality prose amidst the noise of *100 trillion daily human words*?

I have personally used ChatGPT to generate hundreds of thousands of words, yet not a single one of them has entered the public. Rather, they have all been in service of my own learning and growth. These are words that have helped me quickly grok concepts, augment reading, explore different combinations of ideas, simplify challenging prose (written by humans), make esoteric ideas legible for my level of understanding, extract the main ideas of longer texts, challenge my ideas, etc.

ChatGPT has become my intellectual sparring partner, in a way that has profoundly impacted my own human flourishing. IMO, a little more noise is a small price to pay for something that can redefine what learning means and unlock it for everyone.

author

I am sure that only a small fraction of those 100 billion words per day are going right into the prose we read each day. But it's clear that this is an increasing trend as various news outlets and other online publications turn to AI-generated text. Maybe it doesn't worsen our filter problem right now, but I could imagine a not-too-distant future in which it does. We're already starting to see this in the way that search engine results (text and images) are becoming a mix of human- and machine-generated content. If Sam gets his $7 trillion to build more data centers, I could imagine going from 0.1% of all words to a much higher percentage pretty quickly. I'd be happy to be wrong about this.

I also totally agree that ChatGPT can be a helpful intellectual sparring partner and can help us to flourish in the ways you mentioned. I've used it that way myself. I do worry that we might be simplifying this idea of "what learning means" and reducing it to only one of its many parts. There are certainly parts of learning that ChatGPT can help with. But I'm also convinced that there are many parts of learning that ChatGPT is fundamentally not able to replace.

I think the best ways of using ChatGPT are in specific, narrow use cases like those you mentioned. It's possible that I'm fighting a phantom problem here, but I do feel that the temptation of having ChatGPT write for you may be a bigger issue for younger students who haven't yet experienced or understood the benefits of writing for your own thinking. This is not an argument to maintain the status quo, but rather to return to the core activities and processes that really drive learning. There are ways that generative AI can support that, but also lots of ways that it can undercut it.
