The Metaphor is the Message
Pay attention to what they are smuggling
Thank you for being here. Please consider supporting my work by sharing my writing with a friend or taking out a paid subscription.

Metaphors are smugglers. While we need them to grapple with generative AI in our current moment, our metaphors are causing us headaches. They are not only shaping how we think about what large language models (LLMs) are; they are also shaping what generative AI will become.
There are plenty of metaphors used to describe AI, but today I would like to highlight just a few to draw attention to how they smuggle in more than meets the eye. It’s worth paying close attention, especially when the metaphors in question are being used by Silicon Valley executives to advance a narrative aligned with their bottom line.
Let’s start with a few examples.
1. Altman: It takes a lot of energy to train a human.
First up is Sam Altman, the CEO of OpenAI. Covering recent remarks Altman made at the AI Summit in India, The Guardian reports:
“People talk about how much energy it takes to train an AI model – but it also takes a lot of energy to train a human … It takes about 20 years of life – and all the food you consume during that time – before you become smart.”
In making this statement, Altman is drawing a comparison between the energy required to train an AI model and the energy required to raise a human. To be sure, it requires a great deal of time, energy, and money to raise a human child. But in making this comparison, Altman is attempting to downplay the costs of training a model. Notice that he makes no attempt to compare the energy requirements in quantitative terms, nor does he note the many other differences between an AI model and a human person.
This metaphor is smuggling in the suggestion that AI models deserve a significant investment because other things that matter to us (e.g., human persons) also require significant investments. While nothing Altman says is untrue on its own, the conclusion he is gesturing toward is misleading. Even if the resources required to train an AI model are equivalent to those required to train a human person, this doesn’t mean that the two are equally valuable.
There is a deeper move here as well. When Altman says it takes a lot of energy to train a human, he is not only downplaying costs; he is also implicitly framing humans as an earlier, less efficient version of the same product. Once you are comfortable making that comparison at all, you have already conceded that what a human life amounts to can be measured in energy inputs and outputs. By the time you are arguing about whether the energy costs are equivalent, you have already accepted the frame, and that is where some real smuggling happens.
2. Amodei: Powerful AI is like a country of geniuses in a data center.
Let’s turn next to our friend Dario Amodei, the CEO of Anthropic. “A country of geniuses in a data center” is a metaphor Amodei is fond of using, one he coined in his essay Machines of Loving Grace.
In that essay, the phrase serves as a metaphor for powerful AI, which Amodei defines with a list of features: the ability to effectively solve very challenging and general problems, access to and control of all the interfaces available to a human working at a computer, the ability to work autonomously on tasks for long periods of time, and the ability, despite lacking physical embodiment, to access and coordinate action in the physical world.
Whether or not this bar for powerful AI has been cleared as we sit here in early 2026 (it seems to me that most, if not all, of the boxes on Amodei’s list have been checked), my purpose here is not to adjudicate that claim but to interrogate the metaphor.
When he talks about powerful AI as a “country” of geniuses, he is playing on the idea of a governed body with some sort of polity and coordination. Why not just a “collection” of geniuses? And what exactly is the measure of a genius? Playing fast and loose with units should always invite us to look closer.
In this example, we can see the power of careful word choice and the subtle implications of a metaphor. To say that there is a “country” of geniuses implies that there is some sort of governance, sovereignty, and citizenship. In reality, there is no such thing (although Anthropic does seem to really believe it).
3. Nadella: AI reads and learns like a human
Last, but not least, let’s talk about a phrase from Microsoft CEO Satya Nadella. In the discussion about whether using copyrighted materials to train AI models should be considered “fair use,” Nadella has compared training AI models to humans reading and learning from textbooks. In a short but telling snippet from The Times:
“What’s copyright?” Nadella asked. “If everything is just copyright then I shouldn’t be reading textbooks and learning because that would be copyright infringement.”
Once again, the metaphor is doing a lot of heavy lifting. Let’s start with the fact that copyright has never been about whether you can read or learn from anything. Copyright is about the limits on reproducing content or repackaging it in derivative works. Training an AI on copyrighted material is not the problem; it’s the generation of new content from the trained model that creates the rub with copyright law.
While there may be some similarity between the way an AI is trained and the way a human learns, the similarities probably end after gathering the resources. At that point, the AI model being trained consumes the data, reformulating it into a set of weights, whereas the human engages with the material, combining it with their own embodied experience in a way that shapes much more than the information stored in their neural pathways.
Here, once again, a metaphor is being used as a sleight of hand, smuggling in the appearance of equivalence when the reality is quite a bit more complicated.
The common element of the cargo
Metaphors are powerful precisely because they enable us to map something unfamiliar onto something familiar. We need them as we try to grapple with AI, but we have to be mindful of what is bundled up with them as we use them (or have them used on us).
It’s especially important that we probe below the surface because the use of metaphors in talking about generative AI is more than just a way of grappling with something new and unfamiliar. It also shapes the way that we think about what we might build with it and how we ought to interact with it. Because generative AI can produce something akin to the product of human thought, it is hard to avoid taking a shortcut and talking about generative tools as if they are functionally equivalent to another human mind. Metaphors like AI as your coworker, intern, always-on assistant, or ever-patient tutor are less descriptions of what AI does or is, and more accurately a prescriptive statement about how we should use and think about generative AI tools.
If you look at all three of these metaphors I explored earlier, you’ll notice a common thread. One of the major rhetorical strategies used in each of these metaphors is to compare AI systems to human persons. In each case, the metaphor implies that this is an apples-to-apples comparison. But it is not.
Writing in Wired just a few days ago under the headline “AI Will Never Be Conscious,” Michael Pollan incisively critiques the use of metaphors. He writes:
Metaphors can be powerful tools for thinking, but only as long as we don’t forget they are metaphors—imperfect or partial analogies likening one thing to another. The differences between the two things are as important as the similarities, but these differences seem to have gotten lost in the enthusiasm surrounding AI. As cyberneticists Arturo Rosenblueth and Norbert Wiener noted years ago, “The price of metaphor is eternal vigilance.”
Eternal vigilance indeed. As we think about the metaphors we’re using to describe generative AI—and perhaps even more importantly, the metaphors others are using to describe generative AI to us—we must consider the limits of these metaphors and challenge the additional implications we are intentionally or unintentionally carrying alongside them.
Got a thought? Leave a comment below.
Reading Recommendations
Melanie Mitchell’s essay “The metaphors of artificial intelligence,” which was published in Science at the end of 2024, is an excellent read. She does a wonderful job of interrogating several metaphors being used to describe AI.
AI researchers are still grappling for the right metaphors to understand our enigmatic creations. But as we humans make choices on how we deploy and use these systems, how we study them, and how we craft and apply laws and regulations to keep them safe and ethical, we need to be acutely aware of the often unconscious metaphors that shape our evolving understanding of the nature of their intelligence.
The Book Nook
After following the discourse in the wake of the release of Paul Kingsnorth’s latest book Against the Machine, I finally managed to dig in last week. So far, I’ve been enjoying the read, although I must admit that I’m struggling to understand the difference between what Kingsnorth defines as the machine and what Jacques Ellul defined as Technique in The Technological Society. They seem quite aligned.
The Professor Is In
Last Thursday, I had the chance to present a talk at the Baylor Institute for Faith and Learning symposium on Technology and the Human Person in the Age of AI. Writing this post was part of my process for writing that talk, but if you’re curious to check out the slides, you can find them here. The talk was recorded, so I’ll share that here as well once it is available.
Leisure Line
After keeping a starter from a friend in the fridge for a few weeks, I finally got around to making my first loaf of bread. I’m pretty happy with how the first one turned out!
Still Life
#1 is in Little League this year, and we had fun at the Opening Day ceremonies a few weeks ago. Quite a production!