We Shape Our Tools. Then They Shape Us.
A framework for understanding how what we create shapes us
As I've been having more conversations about ChatGPT and the deluge of similar AI tools following quickly in its wake (perhaps most significantly headlined by Google’s announcement of Bard), I've been trying to map the main themes onto fundamental principles. Much of the concern about ChatGPT and its impact is rooted in amazement at its superpower: generating sophisticated passages of text. It has taken the world by storm because it is a powerful version of this technology released in easy-to-access packaging. But while a particular technology like ChatGPT might be new, there is much to be learned from the past and from analyzing these AI tools within a broader framework.
Ever since diving into Neil Postman a few years ago, I’ve been taken by his reiteration of Canadian philosopher Marshall McLuhan’s statement that “the medium is the message.” In a piece last year in the Opinion pages of the New York Times, Ezra Klein recounts seeing this play out in his own interactions with the Internet. He writes of the way that the Internet and the applications that live on it shaped him:
“Hungry.” That was the word that hooked me. That’s how my brain felt to me, too. Hungry. Needy. Itchy. Once it wanted information. But then it was distraction. And then, with social media, validation. A drumbeat of “You exist. You are seen.”
Klein’s reaction shows the way that technology can have far-reaching impacts. Our tools are not just things we shape, they are things that shape us. Hence the title of this post, taken from a quote ascribed to McLuhan’s friend and colleague, John M. Culkin.
Thoughtful Tool Building
As an engineer, I like to build things. I love the process. And the result of seeing something you have envisioned working in the world is gratifying. But all too often I (and others like me) fail to spend enough time thinking about the purpose and impact of the tools that we create. And when we do, we tend to focus on the positive impacts and fail to ask hard questions about potential misuses or unintended consequences.
While I'm not arguing that we stop making new things (engineering has great potential to solve problems and promote greater human flourishing), we do need to spend more time thinking critically about the work we are doing from a variety of perspectives and do our best to understand what the impact might be. This is one of the reasons I resonate so deeply with Harvey Mudd’s mission statement which seeks to “educate engineers, scientists, and mathematicians well versed in all of these areas and in the humanities, social sciences and the arts so that they may assume leadership in their fields with a clear understanding of the impact of their work on society” (emphasis mine).
My goal today is to present one framework that might help us to ask these types of questions and approach specific technologies with a deeper understanding and appreciation for their impact on the world. To do this, I’ll use ChatGPT as a specific case study.
What is technology?
First, let's start by defining our terms. What exactly do we mean by “technology?” It's a bit hard to nail down, but one definition that I like breaks it down along three main axes: tool, medium, and social practice.1 This framework helps to separate the various ways that technologies have an impact and provides a useful way to analyze them.
Tool: What does it do?
This definition is perhaps the most obvious and straightforward one. When creating new technology, we are working to solve a problem. The tool will either reduce friction to make something more efficient or enable its user to expand their capabilities.
ChatGPT, as a tool, is designed to generate passages of text in response to a specific prompt. At first glance, this can have the appearance of intelligence, but if you understand the fundamental mathematics powering tools like ChatGPT (a large language model), it becomes clear that there is no intelligence or logic embedded within the tool, just an advanced heuristic that strings together words in a way that sounds reasonable using conditional probabilities2. This is why, as many have pointed out by now, ChatGPT doesn't do well with math.
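To make the “conditional probabilities” idea concrete, here is a deliberately tiny bigram model in Python. This is a toy stand-in, not how ChatGPT actually works: real large language models use neural networks conditioned on long contexts rather than word-pair counts, and the corpus and function names below are my own invention for illustration. Still, it shows the core mechanic of picking a plausible next word given what came before, with no understanding anywhere in the loop.

```python
import random
from collections import defaultdict, Counter

# A toy "training corpus" (invented for this sketch).
corpus = (
    "the tool shapes the user and the user shapes the tool "
    "the medium is the message and the message shapes the world"
).split()

# Count which words follow each word: P(next | current) is proportional
# to counts[current][next].
counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    counts[current][nxt] += 1

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling the next word in
    proportion to how often it followed the current word."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = counts[words[-1]]
        if not followers:  # dead end: no observed continuation
            break
        choices, weights = zip(*followers.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the", 10))
```

Every sentence this produces is locally fluent, because each word pair was seen in the corpus, yet the model has no idea what any of it means. Scaling the context window and the statistics up enormously gets you something much more impressive, but the same caveat applies.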
Generating information where sources are properly attributed or performing mathematical calculations is simply not something that a tool like ChatGPT is designed to do well. This isn’t to say that there aren’t things it is useful for—I think there are!—but we need to be mindful of what we are asking a specific tool to do and whether or not it is well suited for the task.
If we are looking to brainstorm ideas or generate some template text, a tool like ChatGPT makes sense. But if you’re trying to figure out the answer to a question that requires logic or is backed up by real sources, look elsewhere.
Medium: How does it change the way we see the world?
Understanding technology as a medium requires a deeper and more careful analysis. The idea here is that a particular technology not only accomplishes a task but also shapes the way we see the world. In the words of psychologist Abraham Maslow, “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”
The question then is to ask ourselves how the technologies we use shape our view of the world. I appreciated this passage from Johann Hari’s book Stolen Focus where he performed this analysis on Twitter.
What is that message? First: you shouldn’t focus on any one thing for long. The world can and should be understood in short, simple statements of 280 characters. Second: the world should be interpreted and confidently understood very quickly. Third: what matters most is whether people immediately agree with and applaud your short, simple, speedy statements. A successful statement is one that lots of people immediately applaud; an unsuccessful statement is one that people immediately ignore or condemn. When you tweet, before you say anything else, you are saying that at some level you agree with these three premises. You are putting on those goggles and seeing the world through them.
In this framing, AI tools like ChatGPT reframe the way we think about written communication. As mathematical models, at their base, ChatGPT and tools like it are nothing more than eloquent fakers. They can put together words that sound good together but don't necessarily mean anything. This is why ChatGPT is so often dead wrong. It’s actually somewhat unfair to expect it to be right: it has no real concept of what right means.
The medium of ChatGPT says communication is nothing more than stringing together sets of words that make sense. Understanding this perspective helps us to see both the ways that technologies like ChatGPT can be useful (e.g., generating text which we can use as a template or assisting ideation) and also to see their shortfalls (e.g., that this method of generating text has no way to cite where its "ideas" come from and no robust way to assess whether what it is writing is factually true or not).
One silver lining is that this framing can also help us to better understand the flip side: what is truly human and most important about written communication. This includes powerful capacities like the ability to express thoughts and tell stories about our personal experiences, to reason through ideas with nuance, and to approach what we think with curiosity and epistemic humility.
Social practice: How does it shape us?
The final definition of technology as a social practice helps us to see the impact that technology has on us, both as individuals and as a community. The question here is what does this technology do to us? How is it not only accomplishing a task and shaping our view of the world, but changing us? This dimension of technology is perhaps the most important and yet is often considered the least.
So, how does an AI tool like ChatGPT shape us? I think there are a few ways:
Unhealthy dependence
Skepticism
Relational breakdown
Unhealthy dependence
First, as users of the technology, we can develop an unhealthy dependence on the text it generates. This was the crux of my argument a few weeks ago where I argued that we need to use AI tools as a ladder and not a crutch. With Microsoft's announced partnership with OpenAI and its integration of the technology behind ChatGPT into the MS Office suite, this may be a pressing concern. Using AI to generate or expand on thoughts is one thing, but dependence upon a tightly integrated AI assistant is another.
Skepticism
Another impact of AI-generated text is that it adds a layer of skepticism to written communication. Beneath the text we're reading, we'll now be wondering if it was written by a human, an AI, or some combination of the two. While in some circumstances AI can help to improve our ability to communicate, there is also the potential for it to add a new layer of separation between the author and reader.
Relational breakdown
A final social practice implication of AI tools like ChatGPT is how they impact the dynamic in the classroom. As educators decide how to address these tools within the context of their courses, there are opportunities to either strengthen or weaken the relationship between the student and the teacher. More than any specific pedagogical goal motivating the embrace or ban of AI text-generation tools, the impact that a particular policy has on the teacher-student relationship should be carefully considered.
A decision to ban a particular technology must be carefully considered or else it is doomed to create a game of cat and mouse. And right now the cat really doesn’t have much in the way of offense: even OpenAI’s own AI classifier for indicating AI-written text has a true positive rate of just 26 percent. This is abysmally low, which should signal the challenges inherent in developing this type of tool.
Instead, as I’ve argued before, we should double down on a transparent teaching framework, focusing our energy on building stronger relationships with our students and clearly motivating why a tool like ChatGPT is or is not well aligned with the learning goals for our assignments. Collaboration is a far better framing than competition.
So what?
Technology is a complicated topic and it is easy to get caught up in the excitement of new tools and what they can accomplish. I hope that as we think about the technologies we use, this framework of tool, medium, and social practice will be a helpful way to analyze, commend, and critique the technologies we are using today, as well as those that we use tomorrow.
The Book Nook
On a whim this past week I picked up The Lincoln Lawyer by Michael Connelly. In it, the conflicted defense lawyer, Mickey Haller, attempts to navigate the ethical challenges embedded in his work in the Los Angeles court system. I had already watched the excellent TV series last year and so I knew the basic thread of the plot, but it was fun to read the novel and see the story from a different perspective.
This was the first Michael Connelly novel that I've read, but surely won't be the last!
P.S. I always love book recommendations, so please send me a note with anything you think I’ve got to have on my list!
The Professor Is In
With the Spring semester comes Spring Clinic Presentations at Harvey Mudd. It's always great to see the students present what they've been working on in their projects.
Watching the presentations this last week reminded me of my favorite talk on how to design good slides from Jean-Luc Doumont. This talk is a masterclass on how to communicate effectively with slides, with many analogies that resonate with me as an engineer—perhaps none more deeply than “maximize signal-to-noise ratio.”
This video had a huge impact on the way that I give talks. I was lucky to stumble upon it during grad school, but hopefully it gets to you sooner!
Leisure Line
The first pie before it hit the oven this weekend. Followed by pepperoni, barbecue chicken, and pesto chicken with veggies.
Still Life
One of the fist-sized camellias on the tree in the back. This one is from the white-flowered tree right next to the red one. Hoping to have some white flowers on the bush in front soon too!
This definition is from Smith, Sevensma, Terpstra, and McMullen in Digital Life Together.
If you want to dig deeper into the technical aspects, I recommend checking out the materials from a course at Stanford: CS324 - Large Language Models.