If Part 1 was the dream of playing Emperor, standing atop a hill, pointing at the horizon, and commanding a wall into existence; and Part 2 was the soggy reality of that wall crumbling because nobody could see what was inside it, then Part 3 is where the story takes a turn.
We stop fighting the machine. We start dancing with it.
For the last two years, the tech world has been consumed by a peculiar obsession: Prompt Engineering, the art of whispering exactly the right magic words into an AI’s ear and hoping it behaves. Entire careers have been built on the premise that if you just phrase the question perfectly, the oracle will hand you the perfect answer.

A moment of choosing
But that era is already fading, like morning fog burning off a lake. We are moving beyond rigid instructions and entering the age of Intentional AI: a world where we stop dictating every footstep and start describing the destination.
From Instructions to Intent: Letting Go of the Map
Picture the old world of software like planning a cross-country road trip in 1985. You’d unfold a massive paper map across the kitchen table, trace the route with a highlighter, and write down every highway number, every exit ramp, every gas station where you’d need to refuel. Miss one turn, and you’d end up forty miles off course in a town you’d never heard of.
That was imperative coding—a detailed travel itinerary for the computer, accounting for every twist, turn, and gear shift. Even in the early days of Generative AI, we treated prompts the same way: like a complex recipe with rigid steps. Do X, then Y, but only if Z happens. If Z doesn’t happen, check for W. If W is missing, throw an error. It was exhausting. It was fragile. And it was familiar.
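That rigid recipe can be sketched in a few lines. Here X, Y, Z, and W are the same abstract placeholders as above, not a real pipeline; the point is how one missing condition halts everything:

```python
# The "rigid recipe" style, sketched as imperative code. X/Y/Z/W are
# placeholders standing in for whatever steps a real workflow would run.
def rigid_recipe(z_happened: bool, w_present: bool) -> str:
    do_x = "did X"
    do_y = "did Y"
    if z_happened:
        return f"{do_x}, {do_y}, handled Z"
    if not w_present:
        # One unanticipated gap, and the whole script throws an error.
        raise RuntimeError("W is missing")
    return f"{do_x}, {do_y}, fell back to W"

print(rigid_recipe(z_happened=True, w_present=False))   # did X, did Y, handled Z
print(rigid_recipe(z_happened=False, w_present=True))   # did X, did Y, fell back to W
```

Every branch has to be anticipated in advance; anything the author did not foresee becomes an exception.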

The contrast between precision and vision
But something is shifting beneath our feet. We are moving from instructions to intent. Instead of telling the AI how to build the wall—which bricks, which mortar, which angle—we are returning to the Emperor’s original power: declaring why the wall needs to exist and what it should feel like to stand behind it. We are no longer writing a script for the journey. We are naming the destination, and trusting the machine to find the road.
The “Spotify Moment”: Welcome to Vibe Coding
Here’s an analogy that makes the shift click.
Think about how you used to listen to music. You’d browse through albums, pick a song, queue it up, skip to the next one, curate every minute of your listening experience. It was deliberate. It was manual. It was yours.
Then Spotify came along and asked a simpler question: “How do you want to feel?”
You type in “chill, lo-fi Sunday morning” and the algorithm conjures a stream of songs you never would have found on your own—but that land perfectly. You didn’t select the frequencies. You didn’t specify the BPM. You described a vibe, and the system delivered.

The Spotify moment: merging of technical precision with emotional expression.
The software industry is living through its own Spotify Moment. The buzzword is “Vibe Coding”—and it’s exactly what it sounds like. Engineers are beginning to describe the feeling and behavior of an application rather than its explicit logic. Here is what’s driving it.
- Organic vs. Brittle: Traditional business automation is “brittle.” If one variable is missing or a single comma is out of place, the entire workflow grinds to a halt.
- The Intent-Based Shift: Intent-based systems are “organic.” They understand the goal—such as “Close this deal while maintaining our brand’s reputation for quality”—and can navigate the messy, unpredictable nature of real-world data to get there.
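The "brittle" failure mode is easy to demonstrate. The invoice fields and CSV shape below are hypothetical, but the pattern is exactly the one described above: one blank cell, and the entire run grinds to a halt:

```python
# A minimal sketch of a "brittle" workflow: the field names and strict
# parsing are illustrative, not a real billing system.
import csv
import io

def brittle_invoice_total(raw: str) -> float:
    """Fails hard if any row is malformed or a value is missing."""
    reader = csv.DictReader(io.StringIO(raw))
    total = 0.0
    for row in reader:
        # One blank cell stops the whole run with a ValueError.
        total += float(row["amount"])
    return total

clean = "id,amount\n1,10.50\n2,4.25\n"
messy = "id,amount\n1,10.50\n2,\n"  # one blank cell

print(brittle_invoice_total(clean))  # 14.75
try:
    brittle_invoice_total(messy)
except ValueError:
    print("entire workflow halted by a single blank cell")
```

An intent-based system would instead be pointed at the goal ("total the valid invoices, flag the rest") and be expected to route around the bad row.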
The CEO’s New Interface: Architecture as Conversation
Now, here’s where the story gets interesting for the people who don’t write code—and never wanted to.
For over sixty years, we taught humans how to speak to computers. We built languages—COBOL, Java, Python—each one a dialect designed to bridge the chasm between human thought and machine execution. And for sixty years, if you couldn’t learn the dialect, you were locked out of the conversation.
That door is swinging open.

Strategy made visible.
In this new era, a CEO’s “code” is actually their culture and intent. If you can clearly articulate the soul of your business—what you stand for, how you treat customers, what quality means in your world—then the AI can architect the execution. The “interface” between human leadership and machine labor has finally become fluid, like a conversation between two people who share a language. We are moving from a world of “managing tasks” to a world of “curating outcomes.” The leader who once had to translate their vision into a hundred Jira tickets and a dozen project plans can now speak their vision aloud—and watch the system begin to build.
Beware of the “Vibe Trap”
And now we come to the shadow in the story.
The industry has its shiny new object: “Vibe Coding.” The promise is seductive—don’t worry about the rigid logic; just describe the “vibe” of the application, and the AI will manifest it. It sounds like freedom. It feels like magic. And for certain kinds of work, it genuinely is.
But for the enterprise leader sitting in the corner office, presiding over a business that processes millions of transactions, manages regulatory obligations, and answers to auditors with very little sense of humor—“Vibe Coding” isn’t just a new interface. It’s a sophisticated new version of the Black Box.

The “Vibe Trap”
If we aren’t careful, we aren’t building software. We’re building a probabilistic liability.
The Siren Song of “Intent”
Let’s be precise about what’s happening.
We are moving away from “Instructions”—step-by-step code that tells the machine exactly what to do—and toward “Intent”—describing a goal and letting the machine figure out the path. This is the Spotify Moment: you don’t pick the notes; you pick the mood.
And for creative tasks, this is glorious. Writing a poem? Sketching a user interface? Brainstorming a marketing campaign? The AI thrives on ambiguity, plays beautifully in the spaces between specifics, and produces work that can genuinely surprise you.
But here’s the rub.
A business doesn’t run on “vibes.” It runs on state machines: predictable engines that always produce the same output for the same input. When a customer requests a refund, the system must check the purchase date, verify the return policy, process the reversal, and update the ledger. When a bank processes a wire transfer, every digit matters, every compliance check must fire, and every audit trail must be clean.

In these moments, “creative reasoning” is not a feature. It is a catastrophic bug.
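The refund flow above can be sketched as a deterministic check. The 30-day window, the function names, and the ledger shape are all hypothetical, chosen only to show that the same input yields the same output every single time:

```python
# A deterministic refund flow, sketched to contrast with "vibes."
# The 30-day policy window and ledger format are hypothetical.
from datetime import date, timedelta

RETURN_WINDOW = timedelta(days=30)

def process_refund(purchase: date, today: date, ledger: list) -> str:
    # 1. Check the purchase date against the return policy.
    if today - purchase > RETURN_WINDOW:
        return "denied: outside return window"
    # 2. Process the reversal and 3. update the ledger -- deterministically.
    ledger.append(("refund", purchase.isoformat()))
    return "approved"

ledger = []
print(process_refund(date(2024, 1, 1), date(2024, 1, 15), ledger))  # approved
print(process_refund(date(2024, 1, 1), date(2024, 6, 1), ledger))   # denied: outside return window
print(ledger)  # [('refund', '2024-01-01')]
```

There is no room here for the model to "interpret" the policy; an auditor can replay any decision and get the identical answer.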
The New Turbulence: Why the “Vibe” Goes Sour
As organizations rush to embrace these intent-based systems, they’re hitting turbulence. Not gentle bumps—walls. Here are the six that matter most.
- Prompt Fatigue: When a vibe-based system misbehaves, the instinct is to fix it with more prompts—more instructions, more guardrails, more carefully worded constraints layered on top of each other. But trying to fix a vibe-based system with more prompts is like trying to nail Jell-O to a wall. Each new nail just creates a new wobble. It’s a manual workaround for a fundamental lack of structural integrity.
- The Context Window Ceiling: Large Language Models need context to generate useful output—the more relevant information you feed them, the better they perform. But here’s the catch: as you iterate on intent, refine your prompts, and layer on more context, the input grows. And every LLM has a context window—a hard ceiling on how much it can hold in its working memory at once. Even as models now advertise million-token windows, output quality has been shown to degrade as the context approaches that limit. The sharper the tool, the sooner it dulls under the weight of its own instructions.
- The Comprehension Gap (Cognitive Technical Debt): This may be the most dangerous wall of all. If a leader doesn’t understand the system’s underlying architecture, and their team “vibe-coded,” they can’t fix it when it breaks. They can’t explain it to an auditor. They can’t predict how it will behave under stress. They become a hostage to a system they cannot explain—and in business, what you can’t explain, you can’t control.
- Popularity Leads, Versions Confuse: The translation of your “vibe intent” into an executable script depends entirely on the LLM’s training data—what it has read, when it was last updated, and which libraries, frameworks, and patterns it considers “current.” The AI might generate code using a deprecated library simply because that library dominated the training corpus. Your “vibe” was clear; the AI’s reference material was stale.
- Architecture Drowns: Robust software architecture—clean separation of concerns, well-defined interfaces, scalable patterns—is the invisible skeleton that keeps complex systems alive for years. Vibe coding often overlooks these critical blueprints entirely, producing code that works today but collapses under the weight of tomorrow’s requirements. Imagine building a bridge with “vibes”: it might look magnificent, but the first heavy truck tells you whether the engineering was real.
- Cloud Lock-In: Many “vibe” platforms are the 4GLs (fourth-generation languages) of the modern era—proprietary silos dressed in friendly interfaces. They promise freedom and deliver dependence. Your business logic lives inside their black box, on their servers, under their pricing model. You’ll pay for it indefinitely, and migrating away will feel like extracting a tooth with no anesthesia.
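Two of these walls, Prompt Fatigue and the Context Window Ceiling, compound each other, and a toy sketch makes that visible. The whitespace-split "tokens" and the 50-token budget below are stand-ins, not any real model's accounting:

```python
# Illustrates how layered patch-prompts (Prompt Fatigue) push the input
# toward the context ceiling. "Tokens" here are just whitespace-split
# words, and the 50-token budget is hypothetical.
BUDGET = 50

prompt = "Summarize this quarter's sales data."
patches = [
    "Also, never mention competitor names.",
    "If a number is missing, say 'unknown' instead of guessing.",
    "Keep the tone formal, but not stiff.",
] * 6  # the same kinds of guardrails keep piling up

for i, patch in enumerate(patches, start=1):
    prompt += " " + patch
    if len(prompt.split()) > BUDGET:
        print(f"budget exceeded after {i} layered patches")
        break
```

Every fix consumes budget that the actual task needed, which is why "just add another instruction" eventually stops working.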

The “New Turbulence”
Coming Up Next…
The “Vibe” is a starting point, but it cannot be the finish line. We need a way to marry the fluidity of AI with the rigor of the enterprise.
Join me for the final chapter, Part 4: “The Agentic Era: Curating Outcomes vs. Cognitive Technical Debt,” where we look at how builders are constructing improved agents using a self-referential feedback loop—sometimes called the “Ralph Wiggum Loop”—and ask the question that matters most: Is this genuinely new, or is it old wine in a shiny new bottle?

