Thanks, this is a helpful analysis that gives me more words and concepts, both to share with others and to structure my own thinking. What's foremost on my mind now, in an educational sense, is how to get younger people entering the world to commit to developing the skills they need to use generative AI well--and to find what makes them the kind of 'uncommon' mind that people want to glue to an assembly.
Thanks Tim. I am asking the same questions you are about the implications for education. What seems clear to me is that the fundamentals of what it means to learn and build skills will remain largely unchanged. At the end of the day, LLMs and generative AI are a new way of interacting with information and data, much as books, the Internet, and Google were in their day. Each of these technologies has particular strengths and weaknesses, but the core set of competencies needed to understand them and navigate them wisely feels relatively unchanged by each.
This week, I'm planning to write about some of my experiences using generative AI to code and what I've been sensing at the metacognitive layer. I think it has helped crystallize the educational challenges and opportunities for me. I'll be curious to hear how that lands with you.
Big agree! "Just because LLMs cannot reason or think doesn't mean they're useless."
The chatter about AI reasoning feels like such a red herring to me. Does it really matter that much? I get that we should continue to research the inner workings and understand how the model maps inputs to outputs, but that focus completely overlooks what you so succinctly describe in this piece.
I like to focus on "sophisticated pattern matching" to help me "approach it well" and get some use out of it. In other words, it analyzes my input for patterns, checks its training data for similar patterns, and then formats an output that aligns the two. I think that matches your approach, but let me know if not.
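To make that intuition concrete, here's a deliberately toy sketch in Python (my own illustration, nothing like a real transformer): a bigram "autocomplete" that memorizes which word tends to follow which in its data, then replays the most common match for a new input.

```python
from collections import Counter, defaultdict

# Toy version of "pattern matching" autocomplete: learn which word
# follows which in the data, then complete an input by matching it
# against those stored patterns.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, the words that follow it (the "patterns" in the data).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most common continuation seen for `word` in the corpus."""
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"  # no matching pattern in the data
    return candidates.most_common(1)[0][0]

print(autocomplete("the"))  # -> "cat" (the most frequent pattern after "the")
print(autocomplete("cat"))  # -> "sat" (tied with "ate"; first seen wins)
```

A real LLM replaces this literal lookup table with learned continuous representations, which is what makes the matching "sophisticated" rather than rote, but the basic shape of the operation, input pattern in, best-aligned stored pattern out, is the same intuition.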
Another way I try to frame this: imagine you had a friend who could do that pattern matching pretty well, on call 24/7, but who occasionally made mistakes and didn't really understand the underlying purpose of your work, or the nature of perspective and experience. You might not chat with that friend all day, every day; in fact, that would be quite exhausting. But you would certainly have some fascinating conversations with them and gain insights into your own thinking. That's how I think of GenAI. It aligns, of course, with the "mirror" concept that is fairly widespread at this point. But I've been trying to reframe it further into a "sparring partner" approach, which forces me to consider these gaps in experience, perspective, and purpose, while also creating the conditions to reveal user/student thinking on the page. (This was the crux of my student-facing AI activities a few years ago.)
https://mikekentz.substack.com/p/the-butler-vs-the-sparring-partner
In any case, great piece! Thanks for sharing. And Nick Potkalitsky has a good piece on this from a few weeks ago too, if interested.
Thanks Mike. I appreciate the mirror metaphor as well.
The central question in my mind is understanding what LLMs are and how they work. Having the right mental model is a key determinant of whether you can use them well.
Josh - thank you, as always, for your insights. They’re incredibly valuable. Your description of LLMs as super-autocompleters makes sense… is there a difference between inference and reasoning? How does reasoning differ from what you’ve described, if at all? Thanks!