5 Comments
Jun 17 · Liked by Josh Brake

I loved the article, of course. I share your spiritual perspective on the matter, but decided for now to challenge myself with the philosophical questions that arise without explicitly including the heart or soul in my ruminations. Result: a question initially, then my attempt to answer this question with more questions.

So then! Why do we insist on always looking at the best that a human or a computer / machine / robot / AI can do? Why not look at the rest of the performance spectrum, like when we fail or simply are not at our best? Can we not draw a distinction between humans and [I'll just call it] AI simply by looking at poor or less-than-perfect performance? When AI does poorly, we naturally attribute that to a lack in our concept and programming; after all, AI owes its existence to us! Not thorough enough logic on our part? Or a lack of good training data (much of that being human-generated data)? That's on the software end of things. As for hardware, there are power outages, component failures, poor design, faulty manufacturing, signal degradation and interference, and the like. For humans, a simple heart attack is an ultimate Ctrl-Alt-Delete, but even a brain fart can lead to dire consequences (I watch lots of Mentour Pilot, so I know human experience is fraught with examples of operator and inventor error). Is the human mind slip comparable to the AI hallucination? Can AI be willfully ignorant, which humans often are? Rebellious? Disinterested? Flirtatious? Sinister? Can AI be too tired to think clearly one day, and then power on another day and be utterly inspired and brilliant?

I hope that I get a chance to read that book by Larson, because I'm sure that I would find it to be helpful as I think through these questions comparing humanity with AI. I really appreciate your column, since it brings me back full circle to these same questions which first arose during my time at Harvey Mudd College.

author

Lots of good questions here, Jim; thanks for sharing. I do hope you have a chance to pick up Larson's book — I've been really enjoying it.


I'm not clear on how you're differentiating the mind and soul here. Under most dualist views, the mind is the same thing as the soul. Can you say more about what the soul does in your view?

author

I don’t think of it as a strict distinction, but as a way of describing different aspects of a dualistic view of the soul. In many ways, I see overlap between heart, mind, and soul. The soul describes the fundamental immaterial aspect of a person, which then has some aspects connected to the body and mind (e.g., the cognitive capacities of the brain) and also to the heart (e.g., the physical outworking of will and intention). Thinking out loud here — I would be curious to hear your thoughts and reaction!


That's helpful, to describe them as different aspects. I think the central premise of your article here is about which aspects an AI could have, which you describe as strength and mind. I would think that, for a dualist, the most natural way of carving it up would be to say an AI could in principle do whatever our physical brain does, but would not be capable of whatever the non-material part does. Would you agree with that?

If so, I wonder where you draw the line on the non-material and material parts. You admit that it's hard to say when it comes to the "mind" aspect. Can you give me an example of something you think definitely falls on the non-material soul side?
