6 Comments

Chollet provides a great frame for understanding all the hype about AGI. That Mindscape episode is a nice introduction to his thinking. Another thinker worth paying attention to if you want a different concept of intelligence is Michael Levin. Here is a blog post by him and an interview with Michael Pollan.

https://thoughtforms.life/self-improvising-memories-a-few-thoughts-around-a-recent-paper/

One of the things I love about Levin is that he has read William James, and brings Principles of Psychology to bear on attempts to define intelligence.

This is a fascinating read!

Hehehe, I read this and thought, "hey what a great chance to share one of the very first essays I wrote for my Substack about Chollet and how we model human minds!" The article got one like...from one Josh Brake. https://buildcognitiveresonance.substack.com/p/modeling-minds-human-and-artificial

I don't think I agree - at least, we need to be more generous in our understanding of intelligence, which is just the ability to solve problems through the application of strategies. Chollet has a certain frame that has utility, but LLMs clearly generalize more than one would think, since models that were never trained in a language are able to communicate effectively in that language.

Strictly speaking, pure memorization would mean that LLMs cannot generalize at all - but we know that they can, in the sense that, at the very least, they can accomplish tasks they were never trained for. See "Emergent Abilities of Large Language Models" by Jason Wei et al.

So I think that, in the sense that all important tasks can be automated, LLMs (and related technologies) can at least reach AGI by learning from human training data. And at a 100x speedup, this is in effect superhuman - see persuasion capabilities.

https://www.psychologytoday.com/us/blog/emotional-behavior-behavioral-emotions/202403/ai-is-becoming-more-persuasive-than-humans

"That is, when faced with an LLM that has access to demographic information allowing it to personalize its argument, humans are 81.7% more likely to agree with the arguments when compared with a human adversary"

Great work delving deeply into something as seemingly simple as definitions. The act of creating - of generalizing from a point of origin - in the classroom is integral to meaningful teaching and learning. This is helpful in more fully contextualizing that argument. Nice work.

Excellent post!

I will try to answer your questions:

Prosperity for whom? ->

While it may increase productivity for some and potentially lower costs for consumers, it can also lead to job displacement for workers without the skills to adapt to a changing job market. Therefore, it's essential to ask whether this "next leap in prosperity" will be broadly shared or concentrated among a select few. Will it lift the most vulnerable populations or widen the gap between the rich and the poor? The answer is probably immense prosperity for Sam, VCs, a few big companies and their executives, and a few OpenAI competitors. For the rest, if it succeeds, it will bring universal basic pay, possibly no purpose in life, and massive inequality between rich and poor.

At what cost? -> Technological progress rarely comes without a price. The cost may come in many forms: environmental, social, and economic. There is an estimated $100 billion or more in sunk costs at this point, and it may be a trillion or more if we do not hit another AI winter in the next 3-5 years. There is a huge cost to the environment, and if we do hit another AI winter, it may mean a massive waste of the resources that were brought online to run these models, resources that could have been spent on better projects.

Toward what ends? -> Not much. What are the ultimate goals of this pursuit of prosperity? Is it simply about increasing material wealth, or are there broader social, cultural, or environmental objectives? Is the technology being developed to address fundamental human needs, or is it driven by other motives, such as profit or power? I think we all know the answer to these questions.

And perhaps most importantly: to what problem is more prosperity the solution? -> It solves some problems but creates others. If we don't identify the specific problems we're trying to address, we risk creating new problems while pursuing prosperity. Furthermore, prosperity itself may not be the appropriate solution to many problems. For example, increasing material wealth may not address underlying social injustice or environmental sustainability issues. In some cases, it might even exacerbate them.

The other questions to ask are:

What are the unintended consequences of this technology that go beyond the environment and inequality? The last twenty years have brought all kinds of mental health challenges, especially in kids. Why do we not think this will make it worse? If kids start thinking that there is no future because AI will be able to do most jobs, what is the incentive for spending 18 years or more in school?

Why is there so much focus on material wealth? Yes, I know society measures success by how many toys you have, but is that the right metric? A better metric is probably general happiness, but does the new technology increase happiness when everyone is on universal basic pay and does not have much purpose in life?

Why are we allowing a few to build black box technology without holding them accountable for safety and unintended consequences? If it succeeds, a few will benefit (realistically), but the rest of the world may suffer the consequences.

Are we building something we will regret having built, or, in the name of the AI race with China and others, are we willing to compromise the greater good?

Should we build it so that the good guys have the technology before the bad guys do? That is a noble goal, if it is achievable. But when we have not been able to stop nuclear weapons proliferation, why do we think the scenario will be any different this time?
