Great post and always useful to add a case study to the tech library of unintended consequences. Did the series provide any *practical* lessons? What should the founders have *actually* done?
To merely prescribe more "wisdom" or more "thoughtfulness" is the kind of vague advice that engineers find unhelpful at best and sanctimonious at worst. Clearly we can't expect omniscience from our tech founders. They are often operating in high-pressure competitive situations, within larger systems that shape underage consumption and addiction.
The bigger task with stories like these is to translate them into practical wisdom that generalizes, something that can realistically be incorporated into some version of a startup toolkit. Otherwise they are easily dismissed as "just so" stories that make sense only in retrospect.
Also, while writing this post I stumbled on an interesting article about how Stanford is reflecting on the ethical implications of design thinking. It’s paywalled, but reader view in my browser let me read the whole thing. https://www.fastcompany.com/90993444/can-stanfords-design-school-find-a-way-to-teach-disruption-and-ethics-at-the-same-time?partner=rss
Great question and a very important point. The short answer to whether the series provided any practical lessons is: not really, at least not in any concise or digestible way. Perhaps the clearest lesson it highlighted was to be very mindful of how a product is marketed in a launch campaign. But I didn’t pick up on any deeper analysis of the inherent values and politics of technology more broadly.
I totally agree that general calls for more wisdom are not very helpful for founders and engineers. You’ve got me thinking that a follow-up post on making these prescriptions more specific might be in order, worth writing if for no other reason than to sharpen my own thinking about the issue.
That being said, I think the first step to prevent these types of things from happening over and over is an awareness of the many ways they’ve happened in the past. My personal feeling is that the ethics curriculum within technical training is not nearly as robust as it needs to be, and these discussions need to be much more central. Thanks for the comment!
> That being said, I think the first step to prevent these types of things from happening over and over is an awareness of the many ways they’ve happened in the past.
Agree! A library of case studies (without obviously judgmental slants) would be invaluable. Every entrepreneur needs to recognize that no technology enters society fully formed. The path to integration is always evolving, and the outcome rarely matches the original vision. An under-appreciated corollary is that these deviations can also be positive!
It feels well worth doing some more extensive digging to see whether such a library already exists and, if not, beginning to create it. I’ve often thought about doing this. At the very least I can start collecting the examples I’m writing about in a more organized and browsable format.
Some loosely connected thoughts about your call for practical (specific?) wisdom...
Also, there is the common refrain: “if we hadn’t done it, someone else would have.”
Hard to counter in isolation, but when it’s accompanied by a multimillion-dollar lobbying effort to shape the terms under which others’ subsequent conduct is deemed permissible or not, I’m less sympathetic.
To the extent that regulatory capture is inevitable, does it confer an ethical responsibility upon industry pioneers to pre-empt those others from engaging in the conduct that would otherwise be excused with the “if we didn’t do it, others would” line?
AI is the elephant in the room. There is an additional “international arms race” argument, not really applicable to nicotine products, which makes the lessons of Juul difficult to apply.
Still, as AI labs have placed themselves largely ahead of the public in calls for regulation/safety, there is less room for an excuse of naïveté.
I am not sure how earnest their regulatory concern is, as opposed to an attempt to steal oxygen from legitimate concerns, or even a sales pitch: “if AI can kill us all, then surely it’s competent enough to automate this medical hotline!”
Interested in your (and of course Prof Brake’s) thoughts on what greatest common denominator of practical wisdom should be expected of developers of AI, “before it’s too late.”
Moral duality in technology is a b*tch. There will always be someone who puts a technology to 'ethically bad' use, given enough time to figure out how and enough incentive (money) to do so.