6 Comments
Daniel

Wonderfully thoughtful; thank you.

Maybe it sounds overly simplistic, but if we didn't cause our own births, doesn't that suggest something about what our primal stance towards the world ought to be -- something like gratitude and humility...

I wonder what techno-capitalism would be like if the people running things incorporated some genuine humility into their work?

Josh Brake

Totally agree with you. Our desire for superpowers blinds us to our own dependence. We so often see dependence as a thing to be avoided, but it is integral to what it means to be human. Striving for independence for its own sake is cause for concern. In many ways, this narrative is the core way that technology threatens our humanity.

Rob Nelson

Wonderful post, Josh. I'm with you that emulation is what we will have, if we ever get something like AGI. I still think we're quite far from AGI, and it is an open question whether machines could ever have consciousness in any way similar to human consciousness.

The Dave Karpf essay you linked to has a great account of how close human cloning seemed to many of the technologists and journalists talking about it in 1998. Those predictions seem hilariously wrong today. The predictions about AGI floating around now seem likely to read similarly in fifteen years.

For a longer take on the point in the Audrey Watters note, I recommend the essay "We're sorry we created the Torment Nexus" by Charlie Stross, which talks about the dangers of taking entertaining stories written for the SF market as a blueprint for the future.

Josh Brake

Thanks, Rob. Yes, the Karpf essay is a great example of how challenging it is to accurately predict technology timelines. There's probably a Pareto distribution at work, where the last 20% of development takes 80% of the time.

Thanks for the pointer to the Charlie Stross essay as well; I've added it to my reading list. I'm sure it will make its way into a future post in some way :)

Alberto

Great article, thank you! I think belief in AGI rests on the poor premise of a meager vision of what intelligence, reason, and human knowledge and wisdom really are.

I am completely with you when you say "The biggest strength and the fatal flaw of AGI are the same: that it is intrinsically designed to bypass human agency." I wonder if you have ever reflected on (and written about) the relationship between our ever-increasing dependence on automated decision-making and human freedom?

Jim Au

Great article! We all love to reference due diligence in some way or another, perhaps to assuage our own guilt at not having done it when it was due! I think you have brilliantly shone a light on the missing due diligence in so much of the push for AGI, or whatever is to supersede it. Namely, you put forth the question of whether THEY, those who persist in this technology surge, have truly asked themselves what kind of world they think they are constructing, and whether they really want it. Brilliant, and yet so Simple (the inevitable B.S.).
