Thanks for your thoughtfulness on this topic. I think (and read, and write—so far, just to myself and colleagues) about it a lot myself. I appreciate the comparison to piloting, which I’ve been thinking about in terms of Atul Gawande’s The Checklist Manifesto, also. Thanks, too, for the sort of annotated bibliography of further reading at the end! It’s refreshing to hear someone (and an engineer, in particular) be cautious about this new technology.
"three hundred screws later (I kid you not)"
-- LOL, and lovely for your kids! Would be interesting to see if a Boston Dynamics robot could assemble one, though?! Advantage: humans.
Thinking about skills and the fork in the road between refinement and atrophy --
Handwriting -> Typewriters -> Computers -> Tablets -> Phones -> Audio capture
User passwords -> Password managers for simple recall -> Password managers for creating 'strong' passwords
Manual cash registers -> Scanners
Cooks and chefs who gather and interpret sensory cues from food as compared with blind reliance on recipes and timers
Creators in many media determining by instinct, intuition, experience when a piece of art is done (or ready 'to be abandoned', as the saying goes)
Meteorologists surrounded by computer-generated models and data of all kinds who must distill that information into a few short forecast sentences every hour for regular folks to act on
The unfettered imagination of little ones in a playhouse
Always and everywhere: so much to gather, consider, balance
My oldest son is a pilot. I think of this metaphor often.
Thanks Sandra. Appreciated your post on LinkedIn too with the great photo of you and your son!
This is a very thought-provoking article. I have two things to share about the aviation references.
The 737 MAX system is an example where automation designers made some pretty flawed assumptions about how human users would interact with the system. In the two horrible crashes (Indonesia and Ethiopia), there were multiple distracting alarms going off at a time in flight (takeoff and initial climbout) when task saturation can be a major issue for pilots. Besides that, the "correct" response to all the alarms was far from obvious and not well-documented.
Second, there have been multiple accidents in general aviation (mostly small planes) over the past several years that involved autopilot issues. Often the pilots don't really learn the nuances of the autopilot, or they become over-reliant on it and their manual flying skills deteriorate (or never really develop).
Learn the basics of flying well first, then learn the systems like autopilot that make your workload easier. If you use automation extensively, you're ultimately at the mercy of the system engineers unless you know how to customize (or override) the system yourself.
The application to AI should be obvious: AI in critical systems or safety-sensitive applications (like full self-driving, robotic operating systems, industrial monitoring equipment, etc.) needs to have engineers in the loop, actively stress-testing the system to push it to its limits of failure. Using AI for writing tasks may seem innocuous unless you're using it to write code for applications that impact critical systems.
I've seen thoughtful professors teaching students to use AI in their writing tasks, and instead of leaning on it as a lazy crutch, the students are learning to really work with AI as a tool. It augments their writing and thinking and helps them get better at both.
Thanks Ron. Good observations about task saturation and the dangers of overreliance on automation.
My biggest fear with AI and students is that there won't be the appropriate amount of effort devoted to learning how to manually fly. We should help students understand what AI can do, but the line between augmentation and crutch is a challenging one to find, especially if one doesn't know how to do the work without AI assistance to begin with.
Insignificant correction: 737 MAX, not 787.
Significant correction: MCAS is an automated system, and its inclusion in the 737 MAX was indeed at the heart of the two heartbreaking airline crashes that occurred shortly after the aircraft type's release into service. The real problem, however, was that virtually no one except Boeing knew of the existence of MCAS, let alone what it was and how it worked. My favorite online airline crash explainer (Mentour Pilot https://www.youtube.com/watch?v=L5KQ0g_-qJs) goes to great lengths to show what was wrong with MCAS. My take on this is that automation in and of itself was not the issue, but rather a simple design flaw with MCAS and the company's inexplicable failure to let airlines and their pilots know about this new automation at all.
Thanks Jim, patched the insignificant correction.
As for the significant correction, you make a good point. So many of the challenges around automation are not strictly because of the automation itself, but are connected to the way that automated systems interface with the rest of a larger system, including human users. Thanks for the link to the explainer video; I'll add it to my queue.
From my perspective, the 737 MAX MCAS issue is a good canary in the coal mine for what we'll see with AI. Aspects of the tools we use will be gradually replaced or augmented with generative AI tools behind the scenes. In many cases we won't be notified or aware of where the genAI is coming into play or how. This in turn will make it harder in some contexts for us human "pilots" to "fly the plane."
Thanks for the thoughtful comment and corrections!
Your article and now this thoughtful reply are making me think that we're probably already seeing a split between those who grew up with quite a trove of lived experience without AI and/or automation, and those who never really knew life apart from ubiquitous automation. The former group has the best chance of having any awareness at all, suspecting that perhaps AI is "monkeying" with the controls, and might try to exert manual control to achieve a known, safe operating condition (not just in reference to flying, but EVERYTHING that AI touches, which may well be... EVERYTHING). What would it take for the latter group to be able to have this same awareness? Will AI develop to the point that it is finally able to consistently fool the former group out of its awareness? (And so many other questions, thus, your excellent Substack column)