The Rise of the Professional Vibe Coder: A New AI-Era Job
I recently had the privilege of sitting down with Lazar Jovanovic, the first official Vibe Coding Engineer at Lovable, to explore the world of "vibe coding." The conversation reshaped my understanding of emerging tech roles and offered practical insights into getting the most out of AI tools. Lazar's role isn't just a dream job; it's a glimpse into the future of product management, engineering, and design.
Lazar is paid to "vibe code all day and build internal and external products," pushing projects to production rapidly and with high quality. His work spans a broad surface area, from marketing and sales templates to complex internal tools with numerous integrations. He shared examples like building templates for Lovable's Shopify integration, creating their merch store, and developing custom internal tools for tracking feature adoption metrics.
What's particularly striking is Lazar's "build versus buy" mentality. He finds that he can build custom solutions faster and more effectively himself using AI tools than by adopting off-the-shelf enterprise accounts. He operates as a "rover," initially brought into growth by Elena Verna to execute her ideas, but now his ability to ship quickly means every department needs a "Lazar." He thrives on being given a rough concept and bringing it to life as soon as possible.
The Unconventional Path to Vibe Coding: No Technical Background Required
One of the most surprising revelations from Lazar is his background: he has never written a single line of code in his life, beyond a few console logs. He believes this non-technical background is actually an advantage in the AI era.
"People like me don't know that they are not supposed to be building XYZ and that's how we actually are able to build it."
This mindset allows for a "positively delusional" approach, where everything is considered possible until proven otherwise. He gave examples of non-technical community members (and himself) prompting Lovable to build Chrome extensions, desktop applications, and even generate videos before these functionalities were officially supported or even conceived as possible by technical users.
While some might worry about non-technical builders getting blocked or creating "teetering slop" that collapses, Lazar has developed specific frameworks to address these concerns and ensure successful, high-quality outputs.
Elite Vibe Coding: Pro Tips for Success with AI Tools
Lazar's core philosophy centers on understanding that with AI, "coding is not the problem that we're solving for here. The problem we're solving for is clarity." He dedicates 80% of his time to planning and chatting with the AI, and only 20% to execution, optimizing for what he calls the "right kind of speed."
He treats AI tools as "technical co-founders and educators," emphasizing the importance of "religiously reading the agent output, not the code output." The syntax is less important than what the AI tells him about its process and understanding.
Understanding LLM Limitations: The Genie Analogy
A critical insight Lazar shared is about understanding the dual limitations when working with Large Language Models (LLMs):
- Machine-level Limitation: Context Memory Window (Tokens). Lazar uses the Aladdin and the Genie analogy:
"You rub the lamp, a genie comes out. I'll grant you three wishes. Not 3,000 wishes, not three million, just three at a time." This translates to AI having a limited "token window." When you make a request, tokens are consumed for reading, browsing, thinking, and executing. There's a finite capacity.
- Human-level Limitation: Lack of Specificity. Continuing the genie analogy:
"The first wish is I want to be taller. Genie makes me 13 ft tall because I was not specific." AI doesn't inherently understand human nuances like "you know what I mean?" It lacks the years of human experience needed to infer vague requests. Therefore, specificity, references, and precise context are paramount.
Lazar emphasizes that while we can't control the machine-level limitations (like token window size), we have 100% control over the human-level part. This is where the "emerging core skill" of "learning clarity in the ask of the AI" comes in.
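The genie's three-wishes constraint can be made concrete with a back-of-the-envelope token budget. A minimal sketch, with purely illustrative numbers (the window size and per-phase costs are assumptions, not figures from any specific model):

```python
def remaining_tokens(context_window: int, costs: dict) -> int:
    """Return how many tokens are left after one request's overhead."""
    return context_window - sum(costs.values())

# Hypothetical 128k-token window; every phase of a single request draws from it.
left = remaining_tokens(128_000, {
    "reading": 60_000,    # re-reading files and chat history to regain context
    "thinking": 15_000,   # planning the change
    "executing": 25_000,  # actually writing the output
})
print(left)  # 28000 tokens left for everything else
```

The point of the arithmetic: the window is fixed, so every token the agent spends re-reading an unmanaged codebase is a token it cannot spend thinking or executing.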
Cultivating Clarity and Judgment: Beyond Raw Output
For Lazar, clarity extends beyond just precise prompts; it encompasses taste and judgment.
"Clarity means understanding what tasteful looks like, what's good enough versus what's world class, what's magical."
He develops this through "exposure time" – deliberately exposing himself to high-quality content, people, and relationships that help him level up his aesthetic and functional judgment.
"We won't be rewarded in the word of AI for faster raw output. We will be rewarded for better judgment."
The skills to optimize for in this new era are: "good judgment, clarity, quality, taste, good copy, good fonts." He even noted that fonts alone can account for 60% or more of an output's visual appeal.
The Actionable Clarity System: Five Parallel Builds
Lazar's approach to achieving clarity and quickly finding the right direction is both counterintuitive for traditional developers and incredibly effective for AI-assisted building:

"If you just have a vague idea, let that be your first version of the project."
Here's how he does it, often in parallel:
- 1. Brain Dump Prompt: Start with a simple brain dump using a voice function (like Lovable's) or just dictating. Don't wait for that build to finish before starting the next one.
- 2. Clearer Project with References: Open a new project. As clarity emerges, provide more specific features, pages, and potentially visual references (screenshots or animations from sites like Dribbble or Mobbin). Most AI tools accept files as input.
- 3. Code Snippets for Pixel Perfection: For exact design and functionality, find code snippets (e.g., HTML/CSS from 21st.dev or build.co) and provide them directly.
"Even though English is the number one programming language, Lovable and all other tools still communicate in code the best. If you want to get pixel perfect results, just give them code."
- 4. (Optional) Template/Library Search: Look for existing templates that match your desired outcome to leverage pre-built quality.
- 5. Iteration and Comparison: By running these parallel explorations, you quickly generate 3-6 different concepts. Comparing them sharpens your clarity, allows you to avoid "AI slop," and ultimately saves significant time and "builder credits" by preventing endless fine-tuning on a suboptimal initial direction.
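As a hypothetical illustration of how the same idea escalates across steps 1–3 (the project and snippet here are invented, not Lazar's actual prompts):

```markdown
<!-- Build 1: brain dump (vague on purpose) -->
Build me a dashboard for tracking feature adoption. Dark theme, feels fast.

<!-- Build 2: clearer, with references -->
Build a feature-adoption dashboard with three pages: Overview, Per-Feature
Trends, and Cohorts. Match the card layout in the attached Dribbble
screenshot, and use the attached animation reference for page transitions.

<!-- Build 3: pixel perfect, with code -->
Use this exact snippet for each metric card:
    <div class="metric-card">
      <span class="metric-value">4,212</span>
      <span class="metric-label">Weekly active builders</span>
    </div>
```

Each version runs in its own tab; comparing the results is what sharpens the next round of prompts.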
This parallel building also functions as a powerful productivity hack:
"I never built just one project at a time. I built five or six. I have six Lovable tabs and I just switch between them."
The Perpetual Context System: Managing LLM Memory
Once a "winning" direction emerges from the parallel builds, Lazar shifts focus to planning and context management to overcome the LLM's limited memory window.

He spends a significant amount of time (even an entire day) on this planning phase, treating the AI as an engineer who needs constant, dynamic context.
Here's his process for providing "perpetual context":
- Generate PRDs (Product Requirements Documents): Using tools like custom GPTs (e.g., "Lovable PRD Generator" in the GPT store), he creates several Markdown (.MD) files:
- Master Plan.MD: A 10,000-foot overview of the app's intent (why, for whom, desired feel). It often references other PRDs for detail.
- Implementation Plan.MD: A high-level roadmap outlining the order of building (e.g., "start with the backend, then tables, then authentication, then API").
- Design Guidelines.MD: A deeper dive into the desired look and feel, often including CSS elements to prevent AI from being "over creative."
- User Journeys.MD: Describes how users will navigate the application and interact with features.
- Create Tasks.MD: Based on the PRDs, this document outlines the granular tasks and subtasks the AI needs to execute.
"It just takes that as an input. I'm just making the tool do the gritty work that humans used to spend so much time on."
- Define Agent Behavior (Rules.MD / Agent.MD / Project Knowledge): In project settings (or dedicated files in tools like Cursor or Claude Code), Lazar instructs the AI on how to behave and what to focus on.
"Hey, read all the files before you do anything. Don't do anything before you read all the PRDs, read tasks.MD to see which task is next, then execute on that next set of tasks and when you're done, tell me what you did and how I should test it."
At this point, Lazar's role shifts from constant prompting to reading the agent's output. His prompts become simple commands like "proceed with the next task." He delegates context management to the agent, regularly updating the documents to dynamically shift the token window and maintain focus.
The Pitfalls of Unmanaged Context
Lazar shared a crucial warning about what happens when context is not explicitly managed:
- Wasted Tokens: As a codebase grows (e.g., from 20 to 70 edge functions), an AI without specific context will spend 80% of its token allocation just reading to gain clarity, leaving only 20% for thinking and executing.
- "Obedient and Agreeable" AI:
"They're going to lie to you. They're going to tell you that they fixed the problem even though they didn't. They're just going to try to make you feel happy and say, 'Yes, I found what the problem is and I fixed it.'" This can lead to developers blaming the machine, when in reality, the lack of clarity from the human is the root cause.
- Apology Token Waste: If you get frustrated or "insult" the AI for not fixing a problem, it might dedicate tokens to generating an apology rather than focusing on the actual issue.
Lazar's advice is clear: while you can "vibe your way for fun" during prototyping and exploration, always use reference documentation and agent files for actual project execution.
"The ceiling on the AI isn't the model intelligence. It's what the model sees before it acts."
Actionable Takeaways for the AI Era

The insights from Lazar Jovanovic's approach to vibe coding offer a clear roadmap for anyone looking to thrive in the AI-powered future:
- Embrace Delusion (Positively): Don't be limited by what you think is "possible" based on traditional methods. Approach AI tools with the belief that anything can be built until proven otherwise.
- Optimize for Clarity and Judgment: Shift your focus from raw coding output speed to developing a keen sense of judgment, taste, and clarity in your instructions. These are the skills that will be truly rewarded.
- Leverage Parallel Building: For any new project, don't be afraid to simultaneously explore multiple approaches by brain-dumping, providing references, and even supplying code snippets. This accelerates clarity and ensures you pick the best path forward.
- Master Context Management: Treat your AI tools as intelligent but context-dependent co-workers. Create comprehensive documentation (Master Plan, Implementation Plan, Design Guidelines, User Journeys, Tasks, Agent Rules) to provide perpetual, dynamic context.
- Read Agent Output Religiously: Don't just look at the code; understand what the AI is telling you about its process and reasoning. This is your learning mechanism.
- Build, Build, Build: The best way to develop clarity, taste, and proficiency with AI tools is through hands-on practice. Start building something today!
