Live By The Vibe, Die By The Vibe
Rare and valuable skills still rule in the age of AI
Thank you for being here. Please consider supporting my work by sharing my writing with a friend or taking out a paid subscription.
Over the last week, I’ve been spending lots of time with Claude Code. Claude Code is an agentic, AI-powered software development tool that lives in your terminal (and as of yesterday, on the web and the iOS app). After installing it, you simply type claude in a terminal window and hit enter. Shortly thereafter, you are greeted with a friendly little robot icon and a line to enter some text. To proceed, you simply ask Claude a question or to do something, and off it goes.
While programmers are the main target demographic for Claude Code, it won’t stay that way for long. As these tools continue to get better, they’ll become more accessible to users who have never before interacted with a command line interface. In fact, as a recent post showed, there are probably many ways that you can use Claude Code in your work right now, even if your role has nothing to do with software engineering.

Writing code with an AI assistant like Claude Code is an example of a new style of programming commonly referred to as “vibe coding.” Vibe coding, a term coined by Andrej Karpathy, refers to a way of programming with LLMs where you focus on describing the high-level details, allowing the AI tool to manage all the specifics needed to make that idea come to life. You write in English, without worrying about the specific syntax of programming languages or the nitty-gritty details of variables, data structures, or build scripts. It’s pretty convenient and, frankly, quite fun.

There’s a good argument to be made that this approach will become a formidable way to build software, and there’s already good evidence to back it up.
Just do a quick search on YouTube and you’ll find countless videos demonstrating the way that software developers are using these tools to build even more quickly. Some of them, like Lovable, continue to build on their offerings by bundling together an LLM-powered chat interface for building apps, combined with the full-stack cloud services needed to deploy them. With a few prompts and a click of a button, you can launch your idea into the world and share it with anyone with an Internet connection.
The rapid evolution of these tools is creating significant disruption not only in professional practice, but upstream in the computer science and engineering programs that train these engineers. Given what I know about academia, I’m bearish on the potential of most programs to respond quickly enough.
We’re seeing the challenges as new graduates search for entry-level jobs. Earlier this year, Axios published a piece declaring a “white-collar bloodbath”, riffing on statements from Anthropic CEO Dario Amodei in which he suggested that “AI could wipe out half of all entry-level white-collar jobs — and spike unemployment to 10-20% in the next one to five years”. Amodei also famously made the prediction in March of this year that, within three to six months, AI would be writing 90% of the code software developers are responsible for. Here we are, almost seven months from that moment. I doubt the number is 90% for all software engineers, but my hunch is that the number is more than 90% for many.
How then shall we learn?
In the face of this kind of disruption by an emerging technology, what is the prudent move? Here are three guiding principles I think are worthwhile:
Ride the wave.
Skate to where the puck is headed, not where it is.
Look backward to see forward.
But before I take a few paragraphs to unpack what each of these means, I want to take a quick detour to talk about compilers. Stick with me.
Learning From Compilers
While I’m certainly no professional software developer, I’ve written plenty of code in my day. As I search for a mental model to understand AI-assisted programming, I can’t help but think about a similar software development revolution that happened in the 1950s: the compiler.
You see, at the end of the day, all computers understand are ones and zeros. They are built around sequentially executed instructions that perform relatively simple operations like add, sub, load, store, and branch.
All of the software that runs on your computer today is built using these low-level building blocks. Unless you’re reading this essay on a printed piece of paper, the glowing screen in front of you is not very far away from a processor that is executing these binary-encoded machine code instructions, numbers most likely composed of sixty-four ones and zeros, arranged according to a specific format that configures the machinery of your processor to perform a certain operation. The beauty of the complexity that can be built with just these seemingly trivial instructions never ceases to amaze me.

But writing these machine code instructions wasn’t always as easy as it is today. In the early days, you wrote your instructions on punch cards or punch tape, like the ones used to program the Monrobot XI in an assembly language called QUIKOMP.
Last summer, my uncle gave me a piece of computing history when he passed along some Monrobot punch tape and a few instruction manuals. These manuals were developed by a local high school teacher, Mr. Arthur Pedley, who used them as part of a summer computer camp that my uncle participated in following the fourth grade. (Luckily, the spool of punch tape and the programming manuals managed to escape the Eaton Fire unscathed this January, safely encapsulated in my trusty Chevy Tracker.)



Here’s a photo of the machine that would read the programs off the punch tape.
When the microprocessor was invented, the process of writing code for it was essentially the same, albeit minus the tape. Instead of punching holes in specific configurations, you would arrange ones and zeros in the tape’s digital counterpart: memory. You would write programs as sequences of these simple instructions, encoding each operation into the ones and zeros that would properly configure the pieces of the processor datapath to perform the requested operation.
You can imagine how tedious programming such a machine must have been. In theory, any of the complicated programs we run on our computers today can be written directly in assembly and hand-transcribed to machine code. In practice, it would be a mind-numbing process you wouldn’t wish on your worst enemy.
Then, along comes the compiler. The compiler is a piece of software that helps to automate the process of writing machine code, allowing you to write programs in a higher-level language with structures that are much more intuitive to humans than an endless list of 32-bit binary numbers. It’s essentially a translator, mapping the intent embodied in one language into another.
The transformative impact of the compiler on software development is hard to overstate. Now, you could write programs in a language like C with constructs like conditionals (if, else), loops (for, while), and functions to more naturally communicate what your code should do without needing to spell out every detail of the assembly code it would be converted into. Not only could you express your ideas more easily and in a format that was much more human-readable, but you could work faster because the code to perform an operation in C was much more compact than the number of assembly instructions it corresponded to. One line in C might map to ten lines of assembly.
The Compiler’s Big Brother
The analogy is far from perfect, but we are seeing a revolution right now in software development whose impact will be of at least the same magnitude as the compiler. AI-assisted software development tools like Claude Code extend the level of abstraction even further. Writing all that machine code no longer requires knowledge of programming languages at all. Just write the specifications for the software in plain English, send the prompt to Claude Code, and it will start to crank it out.
So why bother to learn how to write code in 2025? If Claude Code is to the compiler what the compiler is to machine code, surely there is no reason to learn how to program, right?
Wrong.
I understand the allure of this argument, but it’s wrong. You see, even if you never write a line of assembly or machine code in your entire career as a software developer, it’s still essential that you know at least the basics of what assembly and machine code are. Even if you don’t write them directly, you’ve got to understand the full pipeline your code passes through as it gets distilled down to machine code.
The same lesson holds for learning to vibe code today. If you want to be a professional engineer, it’s not enough to know how to use Claude Code to create apps. You might be able to get something to work, but you’ll have little to no understanding of why. And when something goes wrong, you’ll be at the mercy of Claude to fix it.
This is the trap of our current moment with AI. There is a temptation to think that because an AI-powered tool can do something, you don’t need to learn how to do that thing. There is truth to that, but it’s more nuanced than it seems. Even though you may directly write far fewer lines of code yourself, there is still significant value in understanding what you are looking to generate. Knowing what you want allows you to write a much more effective prompt to get it.
Don’t Die By The Vibe
When I say live by the vibe, die by the vibe, what I mean is this: any approach to programming that claims there is no value in learning to write programs in Python or in understanding the fundamentals of computer architecture is fundamentally flawed. The true power of vibe coding is that it enables you to go from zero to one without getting stuck on all the minutiae that can easily bog you down.
But the argument that the future of any profession is just learning how to properly prompt the AI tool to do the thing is misleading at best and professional suicide at worst.
Cal Newport, in his book So Good They Can’t Ignore You, develops a concept that he calls “career capital.” He argues that a meaningful career starts with developing skills that are rare and valuable. When you have these skills, they give you the kind of leverage in the marketplace that allows you to write your own job descriptions.
As a perpetual student, I am thinking today about what I can continue to do to develop rare and valuable skills. The thing about rare and valuable skills is that they keep evolving. What was rare and valuable last month or last year may not be today and almost certainly won’t be next year. You’ve got to keep moving, growing, and learning.
If you think that the rare and valuable skill is learning how to prompt some set of AI tools more effectively than the guy or gal next to you, think again. Here are three better ideas to shape your thinking:
Ride the wave.
Skate to where the puck is headed.
Look backward to see forward.
1. Ride the wave
Ride the wave means that you should try your best to stay on top of the emerging trends and movements in your field. You don’t need to be an expert in all of them, but you should know the gist. Given the far-reaching implications of LLM-powered AI tools today, this means that almost everyone should have some fluency with using AI tools and at least a basic understanding of the fundamental ways they work under the hood, and even more importantly, the ways that they don’t.
2. Skate to where the puck is headed
Skating where the puck is headed means that you’ve got to keep your eyes on the horizon. In many ways, this is just another aspect of riding the wave—namely, that to ride the wave, you can’t get swallowed by it.
If you are paying too much attention to small details or trying to go deep on every aspect of the emerging technologies, you’ll burn out. It’s exhausting trying to keep up. If you want to stay on top of the wave, you’ve got to try to get a sense of where it’s heading and use that as a filter for where you spend your limited time and energy. Many times, this means focusing on some fundamental skills that have nothing to do with AI that will enable you to use AI most effectively in a particular discipline.
3. Look backward to see forward
Lastly, sometimes the best way to figure out where we’re going is to look where we’ve been. If you read some of IBM’s brief history of Fortran, the first compiled programming language to achieve widespread success and adoption, you’ll see themes of skepticism that might remind you of the discourse around AI-assisted coding tools today.
Fortran confounded skeptics who insisted that a program compiled from a high-level language could never be as efficient as one that was handcrafted directly using numerical codes. Backus’s team had implemented the first optimizing compiler, which not only translated Fortran’s programs into the IBM 704’s numerical codes but produced codes that ran nearly as fast as anything that could be crafted by hand.
While there is no clear one-to-one mapping to the AI-assisted coding tools we are seeing explode today, it’s hard not to see the parallels.
Rare and Valuable Skills, Amplified
At the end of the day, I continue to return to a tweet from a while back making the point that AI really means amplified intelligence. Here’s the thing about amplifiers: garbage in, garbage out. Even if you think that AI is going to drastically reshape the way code is written today, it’s still going to run on the same processors. AI’s amplification of your ability to create software means that there is even more value in learning the fundamentals.

If you live by the vibe, you’ll die by the vibe. The real unlock is to use AI to continue to push yourself and to deepen and expand your rare and valuable skills.
If you focus on career capital, you’ll be in good shape. Developing expertise requires concerted effort, the same as ever.
Got a thought? Leave a comment below.
Reading Recommendations
This piece from The Observer, the student newspaper serving Notre Dame, Saint Mary’s, and Holy Cross, takes no prisoners. These students want no part of AI in their core curriculum. They get how using AI will more than likely rob them of the whole point of the classes in the first place.
One writer riffs on grading and how avoiding the AI apocalypse might be easier than it seems at first. Amen.

AI should not only be prohibited in the lower-level courses students most often take to fulfill their core requirements. These classes should also be restructured to make AI use functionally impossible. Assignments should be formulated in such a way that students cannot use artificial intelligence tools in substantive ways.
Professors should employ in-class essays, oral exams and rigorous discussions — assessment options that are far more difficult to use AI tools to complete. Assignments that force a close reading of the text should constitute the bulk of grading in any introductory humanities class.
Another writer keeps banging the drum, this time taking aim at Perplexity and calling them out on their shameless wink-wink marketing around their Comet browser, something I was fuming about a few weeks ago, too.

The Observer editors are right: AI forces us to confront the purpose of our courses. Alternative assessments — oral exams, in-class writing, project portfolios — might help, but they aren’t the heart of the solution. The heart is rethinking assignments and assessment itself: designing courses that value reasoning, collaboration, and formation over mere output.
We badly need to move beyond talk of AI slop, model collapse, or general criticisms that AI is bad at performing all tasks. Instead, we should start being honest when these tools work and are effective, while honing meaningful criticism of when they are not and cause negative and often lasting impacts on our culture. Selling AI to students as a means to avoid learning must be one of those, and companies that engage in it should be put on notice that this practice isn’t acceptable and has consequences.
The Book Nook
Returning this week to one of my favorite guides to technology’s impact on us: Neil Postman. This time, I picked up The End of Education. Still only a few pages in, but the introduction alone is worth the price of admission.
Whereas the science-god speaks to us of both understanding and power, the technology-god speaks only of power. It demolishes the assertion of the Christian God that heaven is only a posthumous reward. It offers convenience, efficiency, and prosperity here and now; and it offers its benefits to all, the rich as well as the poor, as does the Christian God. But it goes much further. For it does not merely give comfort to the poor; it promises that through devotion to it the poor will become rich. Its record of achievement—there can be no doubt—has been formidable, in part because it is a demanding god, and is strictly monotheistic. Its first commandment is a familiar one: “Thou shalt have no other gods before me.” This means that those who follow its path must shape their needs and aspirations to the possibilities of technology. No other god can be permitted to impede, slow down, frustrate, or, least of all, oppose the sovereignty of technology. Why this is necessary is explained with fierce clarity in the second and third commandments. “We are the Technological Species,” says the second, “and therein lies our genius.” “Our destiny,” says the third, “is to replace ourselves with machines, which means that technological ingenuity and human progress are one and the same.”
The Professor Is In
What I didn’t write about up there was what I’ve been spending my time building with all those hours in Claude Code this week. Amongst several things, one of the main products has been a microapp that helps you write introduction emails to connect two people in your network. It pulls data from Airtable and syncs with your Gmail account to save a templated draft, which you can then customize and send. It’s been a fun project for learning some new skills around databases, Google OAuth credentials, and self-hosting an app using Docker Compose.
Leisure Line


Last Friday night, we had a fun time at the Claremont High homecoming football game, complete with halftime fireworks.
Still Life



Your friendly reminder that the macro mode on your iPhone rocks. Featuring the small lizard from the park. No scale bar, but that tree trunk was probably only about an inch and a half in diameter.
Just being reminded of FORTRAN brings back so many memories from my first post-Mudd job. The algorithm that I worked on in the 1980's, well it had to be thoroughly vetted and tested the heck out of before its assembly code was burned into ROM and deployed with our flying armed services; I'm told it's still in service today. At that time, we had only one hardware prototype of our product connected to an assembly code emulator. I had heard horror stories of fellow engineers spending weeks in the lab sweating over the cryptic assembly language that was native to our in-house-created CPU (16 bit, but with floating point multiply and divide commands available to us, unprecedented at that time). All of us in the department had virtually unlimited access to a VAX/VMS computer that ran FORTRAN. Tada! I first wrote out my algorithm on paper in a high-level language that resembled Pascal (since as freshmen at Mudd, we were required to learn ALGOL, a structured language very similar to Pascal, and it was my first computer language ever). After hand-compiling my algorithm into FORTRAN, it gave me a few weeks to work out all the computing kinks and get it working to my satisfaction. This is where I did my own testing the heck out of the algorithm. The rest was just busy work by comparison: hand compile my now revised FORTRAN code into assembler, take it into the lab, and prove that it worked on the actual hardware. I was in and out of the lab in a matter of days. I wish I had spent a lot more time in there, since that was a once-in-a-lifetime experience in itself. All of this was so much easier on me thanks to FORTRAN.
Have you played around with Claude's Skills yet? These are equally impressive and another nod in the direction we are headed.