The Choice To Do Evil Is Itself A Good
E-bikes, E-motorcycles, AI, and innovation's two-sided coin
Thank you for being here. Please consider supporting my work by sharing my writing with a friend or taking out a paid subscription.
How should we think about building new technologies that have the potential to do great good as well as great evil?
What responsibility do we have to steward our inventions in ways that support human flourishing?
How should the moral responsibility for a given invention be divided between its creator and its user?
These are just a few of the questions that I’m constantly thinking about these days. Of course, you won’t be surprised to hear that I’m thinking about these most directly as they connect to AI, but the longer you sit with them, the more you will realize that these questions are not just about AI or even technology more generally. They are fundamental to what it means to live and act in the world. They are questions that apply not just to our sophisticated digital tools, but to almost anything we create or use.
What makes these questions feel so live today is that AI operates with a flexibility and generality that is wide-reaching. The significance of these questions scales with the potential impact of the artifact in question. It’s one thing to ask them about a hammer; it’s another to ask them about humanoid robots.
GenAI’s strength is its challenge
When I talk about the generality of AI, I’m not talking about the quest for the ill-defined concept of Artificial General Intelligence (AGI). Instead, I’m trying to articulate the sheer breadth of ways that AI can be used.
Whether or not AI tools ever cross the threshold of what most people would call AGI is immaterial. Even in their current state, they possess significant generality and flexibility simply because they are able to generate and execute computer code. This is further amplified by our existing digital infrastructure, which enables AI tools to communicate with other systems and people over the Internet.
Generative AI wouldn’t exist without the Internet and the vast amounts of data that we’ve been collecting for the last several decades. Likewise, it wouldn’t be very useful without the infrastructure that it is able to directly plug into.
Taken together, this means that AI “agents” have the potential to be significant and powerful, even if—and perhaps especially if—they cannot really think or reason the way that humans do. AI agents don’t make decisions the way that humans do. LLMs and the systems built with them are not intelligent in the way that we humans are. And yet, whatever it is that they are doing by predicting next tokens is something not entirely unlike thinking. Thinking like a human turns out not to be necessary to be useful.
We will never make a technology that we cannot abuse or weaponize
All of this brings me to the theme I want to wrestle with for the rest of today’s post. As I’ve been grappling with these ideas this week, I stumbled on a conversation with Kevin Kelly from a few years ago, from which the title of this post was taken. Here’s the paragraph I lifted that line from:

Kevin: We have not yet and never will, make a technology that we cannot abuse or weaponize. And I’ve been saying this for a while saying, Oh, and by the way, the most powerful technology that we just invented, the internet, we’re going to weaponize and we’re going to abuse it. It’s going to be abused powerfully. And this is the thing, the more powerful the technology, the more powerfully it will be abused. That’s the nature of it. AI, man, it will be really abused. However, and this is the curious thing, even those abuses of technology are increased choices. When the first humanoid picked up a rock and turns it into a hammer, either to make a shelter or to kill his brother, he suddenly had a new choice he never had before. That choice is good. You see what I’m saying? Even the choice to do evil is itself a good. Which means that if there’s a 50/50 wash between good and evil, the fact that there’s another choice gives it a 1% edge.
Kevin is definitely right about the first part. We certainly find ways to corrupt and misuse any tools we create. It goes all the way back to the Biblical narrative of Cain and Abel that Kelly references.
But the second part of the paragraph is sure to challenge us. Is it true that even the choice to do evil is itself a good? And if this is true, how should it shape the way we think about building and using technology?
The choice to do evil goes all the way back
If we want to travel up the stack trace on the Biblical thread, the Judeo-Christian tradition would agree with Kevin’s analysis. In the first few chapters of Genesis, we read how God creates Adam and Eve and puts them in the Garden of Eden. He proclaims that all of his creation is very good. And yet, in the middle of the garden, there is a tree, the tree of the knowledge of good and evil, that Adam and Eve are commanded not to eat from. Creation is good, and yet the potential to choose evil by disobeying God is there.
Even if we’re not choosing to eat from the tree, these choices are still all around us today. I’ve been thinking about this in connection to a specific example that I wrote about a few months ago: e-bikes. When I first wrote about AI as an e-bike for the mind, I was thinking mostly of the potential for AI to amplify human effort, enabling you to go further with the same amount of input. Speaking as someone who’s put over 700 miles and counting on my cargo e-bike over the last seven months, I know firsthand the positive impact it has had on me (and my six- and four-year-old passengers).
And yet, the same technology that enables the cargo e-bike (namely, high-density lithium-ion batteries and powerful, lightweight DC electric motors) also enables other choices.
Exhibit A: E-motorcycles.
How to skew the distribution
If you’ve followed the public conversation around e-bikes, you won’t have any trouble finding news articles about the public nuisance of teenagers pulling wheelies as they whip around town. If you want to get technical about it, the real problem is e-motorcycles, not e-bikes; e-bikes are required to have functional pedals and built-in speed limits. But these safety features are often easily overridden by YouTube-literate young people, or are simply non-existent on electric motorcycles disguised as e-bikes with purely cosmetic pedals.
If Kelly is right that any technology we create is bound to be abused or weaponized, then my kids and I on the back of a cargo e-bike and a teenager riding an e-motorcycle are two sides of the same coin. You don’t get one without the other.
But just because the choice exists does not mean the potential for good and evil applications is a 50/50 toss-up. We have a responsibility, both as designers and users, to try to condition the probability distribution of potential outcomes toward good outcomes and away from harmful ones. I can think of at least three ways we can and should do this:
Thoughtful design
Regulation
Education and social norms
Thoughtful design
First, we can thoughtfully design technologies to make them harder to abuse. In the e-bike/e-motorcycle example, this shows up in limits on the maximum speed at which the motor can assist. E-bikes fall into one of three classes:
Class 1 bikes are pedal-assist only, with no electric motor assistance above 20 mph,
Class 2 bikes have a throttle that is capped at 20 mph, and
Class 3 bikes have a higher max assisted speed of 28 mph, but retain a max throttle speed of 20 mph.
These design choices help to limit potentially dangerous uses. E-motorcycles, on the other hand, are often designed to travel much faster, with maximum speeds that can rival those of a car at highway speeds. They might have pedals, but only to masquerade as e-bikes and avoid garnering unwanted attention from the local police.
Regulation
A second lever is regulation. Municipalities can pass laws that set a minimum age for riding an e-bike or require registration. These regulations work hand in hand with thoughtful design and the classification system I just mentioned. The classifications didn’t come out of thin air; they were developed by an organization called PeopleForBikes to help bike manufacturers and regulators craft consistent legislation across the US.
Education and social norms
Education and social norms are a third way to steer technologies toward good rather than bad uses. For example, campaigns that help parents understand the three classes of e-bikes, and how they differ from e-motorcycles, can help them make wise choices about what they purchase for themselves or their children. This, in turn, equips them to talk with other parents so that families can make consistent choices for their kids and their kids’ friends.
All of this is a helpful frame for us as we think about AI. It’s clear that AI is being weaponized and abused. Emotionally manipulative chatbots and deepfake porn are just the tip of the iceberg. But this is just a fact of technological innovation. The same thing can be said about agriculture, the automobile, the computer, or the Internet. As Ursula Franklin would say, every technology has both enabling and foreclosing angles.
The question is less about the existence of those two sides of the coin and more about how we respond. We need new wisdom and discernment to match the pace of our new technological capabilities.
The choice to do evil is itself a good. We need to do all that we can to cultivate habits and practices that help us choose wisely.
Got a thought? Leave a comment below.
Reading Recommendations
The conversation with Kevin Kelly is worth reading in full. There are several other interesting passages, including this one about how the Amish make their decisions regarding technology:

[I]t’s sort of evidence-based technological policy, which is what I advocate for, rather than basing our policy and what we could imagine could happen, let’s base it on what actually does happen. Let’s take the evidence. Okay, so let’s take the evidence of social media and use that, the actual scholarly scientific evidence, rather than all the things that we could imagine could happen. And the Amish do that in that kind of very IHAT way. They never actually make a real decision. And most of those decisions are at the parish community level, one by one. But yeah, they will allow an Amish early adopter to try out something, always with the caveat that if they observe negative effects, he has to surrender that immediately. And that’s the deal that they make. And bit by bit, basically what the Amish are, is they are very late adopters.
This was a cool post arguing for making your own custom tools instead of using MCP. The author argues that this approach can be more effective and more efficient with token use. Very creative.

This essay from Raffi
in The Atlantic was a good read. Excellent commentary and food for thought for a future post to follow on to what I riffed on in this one. Raffi’s big (and very good) question: how do we preserve friction and protect our ability to choose?

The early internet was never perfect, but it had a purpose: to connect us, to redistribute power, to widen access to knowledge. It was a space where people could publish, build, question, protest, remix. It rewarded agency and ingenuity. Today’s systems reverse that: Prediction has replaced participation, and certainty has replaced search. If we want to protect what makes us human, we don’t just need smarter algorithms. We need systems that strengthen our capacity to choose, to doubt, and to think for ourselves. And just as democracy relies on friction—on dissent that tempers opinion, on checks and balances that restrain power—so, too, must our technologies. Regulation is more than restraint; it’s refinement. Friction forces companies to defend their choices, confront competing views, and be held to account. And in the process, it makes their systems stronger, more trustworthy, and more aligned with the public good. Without it, we aren’t practicing democracy. We’re outsourcing it.
Dipping back into the archives, this essay from Tish Harrison Warren on the Amish and technology from her NY Times newsletter in 2023 is worth a re-read.
I will probably never join the Bruderhof community, but I think their way of approaching technology with skepticism and caution, seeking the good of the whole community and the flourishing of human beings, is something we can all learn from. Rhodes encourages mainstream Americans not to be afraid to walk away from new technology. “To be tech-savvy is not a virtue,” he writes. “‘Blessed are the early adopters’ is not a wise rule for living. If a form of technology is proving to be deleterious to relationships with others, we must have the fortitude to drop it.” I wish I’d found the fortitude years earlier.
The Book Nook
It’s been a busy few weeks, so my reading progress has been slow. I made a little headway, but I’m still working my way through Postman.
The Professor Is In



#1’s birthday was last week, so I gave my red velvet cake recipe from last year another try. This year, I was determined to make it actually red, instead of just a shade of pink like last year, when I ran out of red food coloring. After dumping a full tube of red gel coloring into the batter, I’m happy to report that I achieved my goal.
Leisure Line
My stash of these Kirkland Signature Ethiopian beans ran out a few weeks ago, to much weeping and gnashing of teeth in the Brake household. Fortunately, I discovered that you can buy them online, even if they’re not available in the warehouse. Much rejoicing (although the first time I ordered, Costco sent me the wrong beans…)
Still Life
This year’s jack-o-lantern.








