Thanks for your thoughtfulness on this topic. I think (and read, and write—so far, just to myself and colleagues) about it a lot myself. I appreciate the comparison to piloting, which I've also been thinking about in terms of Atul Gawande's The Checklist Manifesto. Thanks, too, for the sort of annotated bibliography of further reading at the end! It's refreshing to hear someone (and an engineer, in particular) be cautious about this new technology.
"three hundred screws later (I kid you not)"
-- LOL, and lovely for your kids! Would be interesting to see if a Boston Dynamics robot could assemble one, though?! Advantage: humans.
Thinking about skills and the fork in the road between refinement or atrophy --
Handwriting -> Typewriters -> Computers -> Tablets -> Phones -> Audio capture
User passwords -> Password managers for simple recall -> Password managers for creating 'strong' passwords
Manual cash registers -> Scanners
Cooks and chefs who gather and interpret sensory cues from food as compared with blind reliance on recipes and timers
Creators in many media determining by instinct, intuition, experience when a piece of art is done (or ready 'to be abandoned', as the saying goes)
Meteorologists surrounded by computer-generated models and data of all kinds who must distill that information into a few short forecast sentences every hour for regular folks to act on
The unfettered imagination of little ones in a playhouse
Always and everywhere: so much to gather, consider, balance
My oldest son is a pilot. I think of this metaphor often.
Insignificant correction: 737 MAX, not 787.
Significant correction: MCAS is an automated system, and its inclusion in the 737 MAX was indeed at the heart of the two heartbreaking airline crashes that occurred shortly after the aircraft type's release into service. The real problem, however, was that virtually no one except Boeing knew of the existence of MCAS, let alone what it was and how it worked. My favorite online airline crash explainer (Mentour Pilot https://www.youtube.com/watch?v=L5KQ0g_-qJs) goes to great lengths to show what was wrong with MCAS. My take on this is that automation in and of itself was not the issue, but rather a simple design flaw in MCAS and the company's inexplicable failure to inform airlines and their pilots that this new system automation existed at all.
Thanks Jim, patched the insignificant correction.
As for the significant correction, you make a good point. So many of the challenges around automation are not strictly because of the automation itself, but are connected to the way that the automated systems interface with the rest of a larger system, including human users. Thanks for the link to the explainer video; I'll add it to my queue.
From my perspective, the 737 MAX MCAS issue is a good canary in the coal mine for what we'll see with AI. Aspects of the tools we use will be gradually replaced or augmented with generative AI tools behind the scenes. In many cases we won't be notified or aware of where the genAI is coming into play or how. This in turn will make it harder in some contexts for us human "pilots" to "fly the plane."
Thanks for the thoughtful comment and corrections!
Your article and now this thoughtful reply are making me think that we're probably already seeing a split between those who grew up and have quite a trove of lived experience without AI and/or automation, and those who never really knew life apart from ubiquitous automation. The former group has the best chance of having any awareness at all, suspecting that perhaps AI is "monkeying" with the controls, and might try to exert manual control to achieve a known, safe operating condition (not just in reference to flying, but EVERYTHING that AI touches, which may well be... EVERYTHING). What would it take for the latter group to be able to have this same awareness? Will AI develop to the point that it is finally able to consistently fool the former group out of its awareness? (And so many other questions, thus, your excellent Substack column)