Thank you for being here. As always, these essays are free and publicly available without a paywall. If you can, please consider supporting my writing by becoming a patron via a paid subscription.
‘Love the Lord your God with all your heart and with all your soul and with all your mind and with all your strength.’ The second is this: ‘Love your neighbor as yourself.’ There is no commandment greater than these. - Mark 12:30-31
If personhood depends only on a physical mind and body, are humans fundamentally different from an artificially intelligent robot?
Whatever your thoughts about AI, we had better get clear on what we think it means to be human. Today, I want to think out loud about some questions I've been wrestling with on this topic in recent weeks.
Philosophy incarnate
One of the things that has fascinated me most about the conversations about AI over the past several years is how deeply they are connected with philosophical questions. While we try to figure out what exactly generative AI is and how it works so surprisingly well, we're simultaneously (and often subconsciously) asking other questions connected to what it means to be human, what intelligence really is, and what the ultimate limits of AI are.
If you've been part of these conversations for any period of time, you've probably encountered a whole range of different perspectives on all of these questions. There are those who believe that AI is conscious and that when we say that AI can think, this isn't merely anthropomorphism. Others are significantly more skeptical. This is not to say that they don't appreciate what AI can do but just that they have a more limited view of what AI fundamentally is.
Today I want to explore some of these questions, with a particular focus on how my Christian faith is shaping my thinking.
What it means to be human
The question at the root of many of these debates has to do with what it means to be human. From a materialistic perspective, there does not seem to be a fundamental distinction between Wendell Berry's categories of a machine and a creature. In this frame, the difference between a computer, a dog, and a human is not one of kind but of degree.
As a point of comparison, consider what the Christian worldview says about what it means to be human. Last week I listened to a podcast conversation with fellow LeTourneau alum Pete Shull, Professor of Mechanical Engineering at Shanghai Jiao Tong University. Pete works in the field of AI and had some thoughtful things to share about his own thinking on these issues. In particular, he pointed to the passage in Mark 12 where a teacher of the law asks Jesus about the greatest commandment. Jesus responds:
“The most important one,” answered Jesus, “is this: ‘Hear, O Israel: The Lord our God, the Lord is one. Love the Lord your God with all your heart and with all your soul and with all your mind and with all your strength.’ The second is this: ‘Love your neighbor as yourself.’ There is no commandment greater than these.”
Pete goes on to explain how the four elements that Jesus highlights—heart, soul, mind, and strength—provide a helpful grid through which to understand what it means to be human. In brief, here are some definitions of these four elements:
Heart: The seat of a person's emotions, desires, and will.
Soul: A non-material and eternal part of a person and the center of their spiritual life.
Mind: The source of a person's thoughts, intellect, and understanding.
Strength: A person's physical abilities as enabled by one's body.
These categories clash with a materialistic worldview. In a materialistic worldview, whatever aspects of our existence seem ineffable only seem that way. Ultimately, if the physical is all there is, the phenomena we experience must be downstream of mechanisms wrought by material components. While certain elements of reality may appear transcendent, they can in principle be explained in physical terms, even if we do not yet possess the ability to articulate them.
The implications of a materialistic worldview
If you're approaching the world from a materialistic worldview, it seems reasonable to doubt the uniqueness of humans. This seems clearest to me on the axes of strength and mind. While our engineered creations typically do not exhibit the generality of humans, we have built machines that can, at least on some tasks, demonstrate much higher performance than even the most capable human.
As just one example, consider the robotic systems used to assemble electronic components on printed circuit boards. These machines, called pick-and-place machines, can populate a board two orders of magnitude faster than a human operator, and that's not to mention their ability to work without breaks and with a very low error rate. There is clearly no contest here. There are many other examples from the rapidly progressing field of robotics, including this recent freaky video from Boston Dynamics that shows their humanoid robot Atlas getting up from the floor with ease. (Seriously, skip the video unless you want to risk having a hard time falling asleep tonight.)
The point here is that I think most reasonable people would concede that there is no fundamental distinction between humans and robots on the strength dimension. On a purely mechanical basis, we've designed machines that are stronger, faster, more flexible, and more durable than we are.
What about the mind? A decade ago, we felt more comfortable. Sure, we had seen rule-based AI like Deep Blue conquer the human intellect in a game like chess. But many believed that there was a fundamental limit to these types of systems—and they seem to have been right. It wasn't until machine learning hit the mainstream in the early 2000s and deep learning exploded onto the scene in the 2010s that the belief in the fundamental supremacy of humans in the brainiac department began to show significant cracks.
This was why AlphaGo in 2015 was significant. Go, a game so complex and so dependent on human intuition and creativity, was thought to be beyond the capabilities of even the advanced deep learning models that had proven valuable in tasks like image recognition and recommendation systems. And yet, limits thought to be unreachable were once again reached.
Then came the late 2010s, and with Google's development of the transformer architecture in 2017, the limits of artificial minds were pushed once again. To the degree that human thought is computable, it seems only a matter of time until it, too, is conquered as we develop increasingly advanced computers.
Strength and mind, heart and soul
Strength and mind, at least in some sense, can be understood as the result of physical processes. Strength is the movement of muscles and tendons coordinated by brain signals traveling through our nervous system, and the mind is, at least in part, the output of networks of many billions of neurons within our brain.
It seems relatively clear that humans are not fundamentally different from the rest of creation on account of our physical bodies. We can see that our physical capabilities are similar to other animals and also can be reproduced and surpassed, albeit in limited ways, by machines that we create.
Mind is much more nuanced. I'm no expert here, but it's not clear to me that there is a fundamental difference between a human, animal, and machine mind, at least in essence. We know much less about neuroscience and the functioning of the human brain than we do about other aspects of the body, which makes it difficult to speak about these questions with authority. I certainly cannot. While AI as it currently exists is not conscious and does not have subjective experience, that doesn't eliminate the possibility that a mind could be created from material components alone. I'm currently learning a lot more about this from Erik Larson. Larson, both in his book The Myth of Artificial Intelligence and on his Substack, argues that there is a fundamental difference between the way humans and machines think.

The real question is whether there is any limit that prevents AI from becoming conscious. If you believe humans are conscious and that nothing exists beyond the material, it is reasonable to think that there is no fundamental barrier to engineering conscious beings. However, I think Larson makes a convincing case that we are nowhere near this, although recent developments in LLMs might provide some very preliminary evidence that such an endeavor is not completely hopeless.
What can AI become?
Setting aside what you think about strength, mind, and heart, the fundamental dividing line is what you think about the existence of a soul. The soul is by definition non-physical and therefore, by definition, excluded from a materialistic explanation of reality. From a Christian perspective, all four aspects—heart, soul, mind, and strength—are critical parts of what it means to be human. While our strength does not seem fundamentally distinct from that of machines, the mind and heart are less clear. The existence of the soul, however, and whether it is unique to humans, is a clear divider between humans and the rest of creation.
The hypothetical made concrete
As a Christian, I have strong convictions not only about what AI is but also about what it can become and how we should engage with it. Despite its many impressive capabilities in simulating human cognition in at least some limited ways, I do not believe it thinks or reasons in a way equivalent to the way we as humans do. You may think that all of this is rather theoretical, but I assure you it is not. Your answers to these questions about the essential nature of the world and what it means to be human have more practical significance to the way we live than at any time in my lifetime, and perhaps in all of human history. What AI has done is make the hypothetical real.
What you choose to do with AI, in your life and your work, will be influenced by what you think about all of this. For me, my Christian convictions give me clarity about how and why I use AI in my own life, and they directly influence how I bring it into my work and into the experiences of the students who come through my classes.
I'm reminded of a phrase from one of my friends about the power of culture: "Who we are shapes what we make, what we make shapes our culture, and our culture comes back to ultimately shape who we are."
Regardless of where you stand on the existence of the soul, if you sense that there is something truly ineffable in the world, let's let that shape how, what, and why we build.
Recommended Reading
This piece from Erik Larson is a good introduction to his thoughtful engagement with these ideas. I recommend you give it a read.

I also thought there was some interesting analysis here about the way the expanding role and responsibilities of the college president present significant leadership challenges.
Whatever you think about Andreessen Horowitz and a16z, I found this collection of resources that they’ve compiled around AI to be a helpful curated resource.
The Book Nook
After reading Erik’s post over the weekend, I picked up his book to explore his thinking in a bit more depth. It’s very readable and has a nice balance of historical context with future predictions mixed in. Given that he wrote it pre-ChatGPT, I’m curious to see how that development shapes his arguments.
The Professor Is In
I’m tempted to bring donuts to campus almost every Friday but especially when it is National Donut Day. This Friday I had the pleasure of giving the group their first Randy’s Donuts experience.
Leisure Line
This Saturday we checked out the new secondhand LEGO store in town, Bricks & Minifigs. Pretty fun idea. They carry a range of different stuff: new sets, used sets, and big bins of loose LEGO bricks that you can buy by the flat-rate container or bag.
Still Life



After watching the building go up piece by piece over the last year or so, it was fun to get to walk up close to the new Resnick Sustainability Center on campus at Caltech this week. A beautiful building!
I loved the article, of course. I share your spiritual perspective on the matter, but decided for now to challenge myself with the philosophical questions that arise without explicitly including the heart or soul in my ruminations. Result: a question initially, then my attempt to answer this question with more questions.

So then! Why do we insist on always looking at the best that a human or a computer / machine / robot / AI can do? Why not look at the rest of the performance spectrum, like when we fail or simply are not at our best? Can we not draw a distinction between humans and [I'll just call it] AI simply by looking at poor or less than perfect performance?

When AI does poorly, we naturally attribute that to a lack in our concept and programming; after all, AI owes its existence to us! Not thorough enough logic on our part? Or a lack of good training data (much of that being human-generated data)? That's on the software end of things. As for hardware, there are power outages, component failures, poor design, faulty manufacturing, signal degradation and interference, and the like.

For humans, a simple heart attack is an ultimate Ctrl-Alt-Delete, but even a brain fart can lead to dire consequences (I watch lots of Mentour Pilot, so human experience is fraught with examples of operator and inventor error). Is the human mind slip comparable to the AI hallucination? Can AI be willfully ignorant, which humans often are? Rebellious? Disinterested? Flirtatious? Sinister? Can AI be too tired to think clearly one day, and then power on another day and be utterly inspired and brilliant?

I hope that I get a chance to read that book by Larson, because I'm sure that I would find it to be helpful as I think through these questions comparing humanity with AI. I really appreciate your column, since it brings me back full circle to these same questions which first arose during my time at Harvey Mudd College.
Another thoughtful piece. Thanks, Josh. I wasn't aware of Larson either, so I'm following up on his work now.
Thought I'd share a couple of things I had been reading on this topic.
This paper by the psychiatrist Thomas Fuchs was an excellent teasing apart of the distinction between human and machine "intelligence": https://www.researchgate.net/publication/361625257_Human_and_Artificial_Intelligence_A_Critical_Comparison
And I love the invitation that Fuchs offers us: "Precisely because technology exceeds many of our specialised abilities, it challenges us to rediscover what our humanity actually consists of - namely, not in information or data, but in our living, feeling, and embodied existence."
Of course, you would go further than that: a human being is an ensouled creature. As would I.
Another paper, by British neuroscientist Anil Seth, who seems to hold a materialist view (biological naturalism) but even from that standpoint makes a compelling case against the idea that machines can become conscious: https://osf.io/preprints/psyarxiv/tz6an
The issue strays into territory I recently explored in an essay series about the rise of hyper-anthropomorphic AI also. It's a long one, but I'd love to hear your thoughts on it if you ever get time: https://allenj.substack.com/p/hollow-world-part-1-of-5