10 Comments

This subject of AI bots sounding like humans in education is totally scary! Children do not have the capacity for discernment! They can easily be shaped and manipulated by this!

Dave? Dave?

I'm so glad you put Turkle on my radar, Josh. You and Marc are rightly focusing on questions about LLM interactions as a "training ground for intimacy with people" and the role that anthropomorphizing plays in how that actually works. The LLM's capacity for sophisticated language games does predispose it to uses of companionship and intimacy; to borrow Postman's term, that is its "inherent bias."

Where I depart from Gioia and Turkle, and maybe you, is on the topic of pretending. The AI itself is not pretending... it cannot, as it has no mind or intentions. It is humans who pretend, in a process something like what Coleridge called "suspension of disbelief." Whether it is ELIZA in 1965 or ChatGPT today, there is some awareness (maybe not always) that the conversational partner is a machine, but the idea of talking to a machine that talks back is so compelling that the awareness of its artificiality gets suspended.

Seen this way, an LLM is like an imaginary friend come to life, or a lovey that a kid insists can speak. The fear that this form of pretend play could replace human relationships is reasonable, but it is also an open, empirical question. I think the inherent bias toward treating a talking machine like a person is interfering with our ability to assess its educational and cultural value, the ways it can be used to improve learning.

As you point out, Khanmigo's chirpy insistence on chat is a good example. It's distracting! Instead of trying to be your friend or a personalized tutor, an LLM-based homework helper would do far better to use its summarizing and language-formulating capacity to address the problem of how to tackle homework assignments.

Right on, Josh.

Because maximally human-like AI has the potential to displace many interpersonal and intersubjective spaces (be it the teacher-student relationship, the parent-child relationship, the therapist-patient relationship, the mentor-mentee relationship, spousal relationships, etc.), I see this as a very likely near-term contributor to a devastating shredding of the already threadbare social fabric.

I invoked Chesterton's Fence in my essay on this subject, because the construction of maximally human-like AI is simultaneously tearing down fences. Before we commodify and outsource en masse the fulfilment of human roles to machines, be they intimate companions, teachers and therapists, or check-out staff, we may wish to consider how those roles may not be 'simply that and nothing more'. We may wish to pause, walk around the role, and consider the myriad ways that a person brings it alive, or that it brings alive a person, and that it is a sacred space for communion, for intersubjectivity. (https://allenj.substack.com/p/hollow-world-part-1-of-5)

You make a good case, but I think it is a bit more nuanced than you suggest.

There is a discipline that applies to all technology called UX (user experience), which is all about making technology as easy and pain-free to use as possible.

Which in principle seems like a good idea, right?

So the thing is, modern AI is about conversation. So yes, on the one hand there are real dangers, as highlighted, of people getting confused or anthropomorphizing AI because it seems more human-like.

On the other hand, I think there's a good case to be made that making it more human-like and relatable makes it easier to use, and may also make it easier for AI systems to understand our requests and needs.

So I would just caution against the strong case of "hey, we don't like this, so let's just put on the brakes," because I don't think it's as simple as that, and it's very easy to criticise and find fault in any technological progress, which is inevitably imperfect.

It's also worth recognising the nuances of the benefits: everything is a matter of trade-offs, and there is no ideal or perfect scenario that we can aspire to.

It's worth being honest and realistic about that too.

I actually think Josh struck the right balance here. He did acknowledge that there are use cases for AI interfaces that make it easier for users to get a task done. It seems like his point is that it's going further than that.

I made a similar point in an essay on this subject recently:

'I have been unable to fathom any good reason why I need an AI assistant that laughs at my dad-humour (as if I'd be inclined to tell it a joke in the first place), or shows empathy for my emotional state, let alone to hear it audibly sigh, or take an in-breath before it speaks... [There is a] distinction between anthropomorphic design that maximises the usability of an AI tool, and ultra-anthropomorphic design that subliminally suggests to the human user that the machine they are conversing with also has a set of lungs and a persistent need to maintain normal blood oxygen levels through audible breathing. To design a technology to be maximally usable for humans, to eliminate technical barriers to accessibility, is a worthy endeavour. But to make a technology as human-like as possible is an act of deception.'

https://allenj.substack.com/p/hollow-world-part-2-of-5

So you tried out Khanmigo, and tonight you're hearing Sal Khan speak... first, read John Warner's scathing review of Khan's new book: https://biblioracle.substack.com/p/an-unserious-book

His conclusion: "Integrating Khanmigo or other generative AI tools into schools would be to engage in a massive, unregulated, untested, possibly deeply harmful experiment. Sal Khan's book-length infomercial is meant to grease the wheels for that journey. I think the book's utter unseriousness should be read as a warning to the rest of us. The people in charge have not thought these things through."

Scary stuff indeed. Thank you for all the research and links. You sent me down rabbit holes of learning, as always.

I'm also extremely concerned, because it's so natural to pretend our computers are human-like. It's been a long-standing dream of sci-fi! And we are so close to faking it well enough that the difference between actual consciousness and faked consciousness won't matter for the everyday user. A world full of literal p-zombies out there. And I don't think we can enforce an Asimov-style law of "a robot should not pretend to be a human" in a way that can be mathematically demonstrated to be impossible to violate.

No, but it will matter that humans don't know how to sound like AI.
