Discussion about this post

Jim Au

I loved the article, of course. I share your spiritual perspective on the matter, but decided for now to challenge myself with the philosophical questions that arise without explicitly including the heart or soul in my ruminations. The result: a question, followed by my attempt to answer that question with more questions.

So then: why do we insist on always looking at the best that a human or a computer/machine/robot/AI can do? Why not look at the rest of the performance spectrum, like when we fail or simply are not at our best? Can we not draw a distinction between humans and [I'll just call it] AI simply by looking at poor or less-than-perfect performance?

When AI does poorly, we naturally attribute that to a lack in our concepts and programming; after all, AI owes its existence to us. Not thorough enough logic on our part? Or a lack of good training data (much of it being human-generated)? That's on the software end of things. As for hardware, there are power outages, component failures, poor design, faulty manufacturing, signal degradation and interference, and the like. For humans, a simple heart attack is the ultimate Ctrl-Alt-Delete, but even a brain fart can lead to dire consequences (I watch lots of Mentour Pilot, so I know human experience is fraught with examples of operator and inventor error).

Is the human mind slip comparable to the AI hallucination? Can AI be willfully ignorant, as humans often are? Rebellious? Disinterested? Flirtatious? Sinister? Can AI be too tired to think clearly one day, then power on another day and be utterly inspired and brilliant?

I hope I get a chance to read that book by Larson, because I'm sure I would find it helpful as I think through these questions comparing humanity with AI. I really appreciate your column, since it brings me back full circle to the same questions that first arose during my time at Harvey Mudd College.

James Allen

Another thoughtful piece. Thanks, Josh. I wasn't aware of Larson either, so I'm following up on his work now.

Thought I'd share a couple of things I've been reading on this topic.

This paper by the psychiatrist Thomas Fuchs does an excellent job of teasing apart the distinction between human and machine "intelligence": https://www.researchgate.net/publication/361625257_Human_and_Artificial_Intelligence_A_Critical_Comparison

And I love the invitation that Fuchs offers us: "Precisely because technology exceeds many of our specialised abilities, it challenges us to rediscover what our humanity actually consists of - namely, not in information or data, but in our living, feeling, and embodied existence."

Of course, you would go further than that: a human being is an ensouled creature. As would I.

Another is by the British neuroscientist Anil Seth, who seems to hold a materialist view (biological naturalism) but, even from that standpoint, makes a compelling case against the idea that machines can become conscious: https://osf.io/preprints/psyarxiv/tz6an

The issue also strays into territory I recently explored in an essay series about the rise of hyper-anthropomorphic AI. It's a long one, but I'd love to hear your thoughts on it if you ever get time: https://allenj.substack.com/p/hollow-world-part-1-of-5
