Vibe Coding as a software engineer
There’s a lot of talk about “vibe coding”, but is it just a vague term for prototyping, or could vibes change how we build software?
The term “vibe coding” is relatively new, and has been gaining traction since computer scientist Andrej Karpathy – OpenAI cofounder and Tesla’s former director of AI – used it in a now widely referenced tweet which helpfully provided a definition. There were a few earlier references to “vibe coding”, but Andrej’s post seems to have propelled it into wider usage.
Today, phrases like “I vibe coded this app” can be heard in developers’ lingo, especially among frontend-focused devs. Last week, my colleague Elin attended the Local First conference in Berlin, and found that more than a few engineers mentioned “vibe coding” when discussing the development process. So, today, we’re all about the vibes:
What is vibe coding? There’s a blurry line between “letting the AI rip” (vibe coding) and paying attention and correcting it, which could be called AI-assisted coding. This is because as engineers, we can understand the code that’s generated – if we choose to.
Vibe coding tools. A collection of tools frequently mentioned by developers. GitHub Copilot is often cited, while Claude Code is getting lots of love, and ChatGPT is still used a lot.
Use cases. Prototyping is the most common, but brainstorming, and building better dev tools are also useful.
Vibe coding examples. An experienced iOS dev “vibe coding” an app in 3 hours, and a product manager who got stuck while vibe coding, and became more hands-on in order to ship a neat app.
Reminder: it’s not production ready! It’s a risk to push code from the AI to production without careful review. Security issues, bugs, performance problems, and cost spikes can all be shipped far too easily.
What will vibe coding change? Agentic modes are making LLMs more capable at coding, and they will help us prototype faster. At the same time, software engineers who are hands-on architects with deep technical knowledge and product taste will likely be even more in demand.
Before we start: I recently talked about AI coding with Yuzheng Sun, host of the Pragmatic Data Scientists YouTube channel. It’s a 35-minute conversation that you can watch here.
1. What is vibe coding?
Here’s how Andrej Karpathy defined his understanding of it (emphasis mine):
“There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good.
Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it.
The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away.
It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or web app, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.”
Andrej describes talking to his Mac using SuperWhisper, and telling Cursor’s agent mode, Composer, what to add to the app he’s building. It sounds like being involved in coding while staying a bit disengaged from it; the focus is not on the code itself, but on the big idea. With AI coding tools and agents getting ever better at generating code, this mostly works.
Letting AI generate code
At least two books with “vibe coding” in their titles will be published this year:
Beyond Vibe Coding by Addy Osmani
Vibe Coding by Gene Kim and Steve Yegge
In Beyond Vibe Coding, Addy Osmani defines it like this:
“In vibe coding, you leverage powerful LLMs as coding partners, letting them handle the heavy lifting of code generation so you can focus on higher-level goals.”
Steve Yegge, co-author of Vibe Coding, told me what the term means to him:
“Vibe coding is when the AI writes the code and the human supervises.”
My take on vibe coding is similar, in that you allow an LLM to “take the lead” in writing code, a bit like turning on a car’s self-driving mode and taking a metaphorical back seat.
Vibe coding vs AI-assisted coding
Software engineer and Django creator Simon Willison points out how “vibe coding” gets conflated with “AI-assisted coding” in his post, Not all AI-assisted programming is vibe coding (but vibe coding rocks):
“I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.
Vibe coding is not the same thing as writing code with the help of LLMs! (...)
It’s fun to try out wild new ideas, and the speed at which an LLM can produce code is an order of magnitude faster than even the most skilled human programmers. For low stakes projects and prototypes, why not just let it rip?
When I talk about vibe coding, I mean building software with an LLM without reviewing the code it writes.”
What if we can’t separate “vibe coding” and “AI assisted coding”?
A strict definition of vibe coding seems to involve prompting an LLM and never looking at the code, only at what it produces. But can we really do this? I tried several times and failed at vibe coding by that definition, usually because the AI tool asked me to approve things, like creating a new database schema or picking an approach. Then I took a glance at what it did, and sometimes intervened.
I knew what I wanted to build, and roughly how I wanted to do it. To return to the driving analogy, I mostly let the car drive itself, and occasionally steered it to change lanes or take an exit. So, was that “vibe coding” because I gave away most control, or “AI-assisted coding” because I paid some attention? It felt like I was moving faster with less effort, so I’d say it was “vibe coding” to a good extent.
Personally, I find it hard to pretend I don’t know anything about code, but I do sometimes hand over most control to an agent, then check what it does. I guess this combines “vibe coding” and letting the agent “rip”, with taking control back, as and when.
In Beyond Vibe Coding, Addy Osmani differentiates between vibe coding and “AI-assisted engineering”:
“On one end of the spectrum lies vibe coding. On the other end is what I’ll call AI-assisted engineering: a disciplined method of weaving AI into each phase of software development, from design through testing, under clear constraints.
Both approaches leverage powerful AI, but their goals, audiences, and expectations differ markedly.”
For the purposes of this article, “vibe coding” means handing over control to an AI tool to write all the code. The definition of “vibe coding” seems clear-cut for non-developers who don’t understand code, but it’s murkier for us engineers who do understand it – if we choose to, that is!
2. Vibe coding tools
There are lots of tools for vibe coding, and many are also useful for AI-assisted development. I asked devs for their most commonly-used tools:
“Agent modes” within IDEs or IDE extensions
Tools many of us likely use for autocomplete or AI-assisted coding can also be used to let the AI rip with vibe coding. Popular ones include:
GitHub Copilot and its Agent Mode within VS Code, Visual Studio and other IDEs (e.g. JetBrains IDEs, Eclipse, Xcode)
Cursor Chat (previously: Composer) in Cursor
Cascade by Windsurf, within Windsurf and JetBrains IDEs
Cline: the “collaborative AI coder” VS Code extension
Junie within JetBrains IDEs
Augment Code with support for VS Code, JetBrains IDEs, and others (including Vim)
Others:
Zed editor and its Agentic Editing mode
Roo Code for VS Code
Goose for JetBrains
Cody from Sourcegraph
Tabnine: the AI coding assistant that predates even GitHub Copilot
Command line agentic tools
I’m hearing more devs rave about the power of using agents without an IDE, just via the command line. For vibe coding, when you’re not looking to “take over” from the AI and edit the code, an IDE could be unnecessary:
Claude Code by Anthropic, using the powerful Sonnet 4 model. It runs in your terminal and interacts directly with your codebase. I’m hearing a lot of praise from engineers, with some devs switching over fully, even from tools like Copilot.
Codex by OpenAI: a software engineering agent that runs in a cloud-based virtual machine
Aider: pair programming with an agent in the terminal. This tool was also popular a year ago.
Amp by Sourcegraph
Other tools
Tried and tested:
ChatGPT: the most commonly-mentioned LLM, and probably still the most popular among developers.
Claude: especially capable at code generation, and gets lots of mentions by engineers
Gemini and other LLMs. Plenty of devs just prompt an LLM, then copy+paste the code into their IDE to run.
Claude Artifacts: great for small web apps and code snippets. We did a deep dive in How Anthropic built Claude Artifacts.
Design-related tools:
Google Stitch: turn prompts into UI designs for mobile and web
Figma Make: for creating prototypes from ideas, starting from a sketched design
Anima: converts Figma designs to code
Other:
BMAD: an agent that follows the “Breakthrough Method of Agile AI-driven Development”
Gamma: for generating slides
n8n for workflow automation
There’s an ever-growing list of tools that generate code with AI: here’s an additional 20+.
Fullstack web platforms
Several startups have built products which can theoretically build and deploy a fullstack web app with database support, usually using Supabase. Some of the more popular ones:
Lovable: probably the most popular tool for quick prototyping for web apps
Vercel v0: good feedback from devs on creating visual prototypes
Replit: former Googler Julian Harris shared that he built VoteLogo.com in just one weekend
Bolt.new: this platform can generate mobile apps using React Native and Expo
Others:
Firebase Studio: Google’s offering that uses Gemini and builds on top of Google’s Firebase backend
Grok Studio: the fullstack workspace powered by X’s Grok model
These products seem built for non-developers. Even so, devs are in a much better position to create something usable, because these tools inevitably run into issues when you prompt anything even moderately complex, such as adding a database to a website. Most engineers who mentioned these tools made it clear they use them for prototyping UIs and showing off ideas.
3. Use cases
Vibe coding seems best suited for prototyping, and devs also mention a few other interesting use cases.