From Bookshops to Parallel AIs: My Journey Into Vibe Coding

In 2025, “vibe coding” is the buzzword on everyone’s lips. As someone who’s been around the block a few times in the engineering world, I couldn’t resist diving in to see what all the fuss was about. A recent conversation with a young programmer, who told me how much faster AI tools had made his work, prompted me to reflect on my own beginnings. So let me take you on a little journey—from the days of bookshop coding to the era of parallel AIs.

A Conversation That Sparked Everything

Just last month, at a tech meetup, I found myself chatting with a programmer in his early twenties about whether AI coding tools were really delivering on their promises. What struck me was his matter-of-fact attitude about productivity gains that would have seemed like science fiction when I started. He told me that tasks that typically took him three days could now be done in half a day with modern AI tools.

But what really caught my attention was his story about bug fixing. We all know debugging can be unpredictable—sometimes you spot the issue in two minutes, other times you’re hunting for weeks through a massive codebase. He described how he could now feed a bug ticket to an AI agent, which would dive into the code, identify the error, and propose a fix. In one recent case, he said the agent found a bug in hours that he felt would have taken him days to track down manually.

This got me thinking: how did we get here? And more importantly, what does this mean for the rest of us?

The Early Days: Coding in the ’80s

I began my studies as a mechanical engineering student at Monash University in the 1980s. We learned Pascal on mainframes in a computer lab. There were no slick interfaces, no home computers that could handle much, and definitely no internet to Google a solution. If you needed help with a coding problem, you drove to a bookshop and flipped through technical manuals—assuming they had what you needed. Bug fixing could easily take days, and the idea of having instant answers was pure fantasy.

As a mechanical engineer who found myself increasingly drawn to the intersection of engineering and software, I spent countless hours in those computer labs. What I learned early on was that engineers who could write their own tools had a significant advantage. But in those days, creating those tools meant building everything from scratch, including user interfaces.

The Visual Basic Revolution: Discovering the 80/20 Rule (Before I Knew Its Name)

Fast forward a few years, and along came Visual Basic. This was a game-changer. Suddenly, I wasn’t spending 80% of my time crafting interfaces from scratch and only 20% on the actual engineering logic. Visual Basic flipped that ratio completely—I could now spend 80% of my time writing code that actually solved engineering problems and just 20% on the interface.

I didn’t know it at the time, but I was living the Pareto Principle in action. Looking back, this was my first real taste of how the right tools can fundamentally shift where you spend your mental energy. Some people criticised Visual Basic as “not as powerful” or worried about performance, but with Moore’s Law in full swing in the early 90s, it was often cheaper to buy a faster computer than to spend weeks optimising code.

This lesson feels incredibly relevant today. Current research indicates that developers using AI coding assistants, such as GitHub Copilot, are experiencing similar productivity shifts. According to GitHub’s studies, developers complete tasks approximately 55% faster, with the saved time being redirected to higher-value activities, including system design and collaboration.

From Bookshops to Instant Answers: The Internet Era

Then came the internet and Google, and suddenly the idea of physically driving to a bookstore to find a coding solution felt almost quaint. But even with instant access to information, you still had to know what to search for, dig through forums, and piece together solutions yourself.

My Weekend with Vibe Coding: Two AIs Are Better Than One

This past weekend, I decided to put modern AI coding tools to the test. I challenged myself to write a small app using Xcode—an IDE that I had never used before. This wasn’t just about getting back into coding after a break; I was learning an entirely new development environment from scratch. The difference this time? I had a coding assistant running inside the IDE and ChatGPT in speech mode as my second co-pilot.

What struck me wasn’t just the speed, but the nature of the interaction. When I got stuck—which happened frequently since I was navigating unfamiliar territory—I didn’t have to break my flow to search through documentation or tutorials. I could literally have a conversation with one AI about the broader approach while the other AI handled specific Xcode workflows and code suggestions in real-time. It felt like having two knowledgeable colleagues looking over my shoulder, one explaining the IDE’s quirks and the other helping with the actual programming logic.

By the end of the weekend, I had not only built a working app but had also learned enough about Xcode to feel confident continuing to develop it into something genuinely useful. More importantly, I had discovered that AI assistants could accelerate learning entirely new tools, not just help with familiar ones.

The 80/20 Rule Comes Full Circle

This experience brought the Pareto Principle full circle for me. Just like Visual Basic freed me up to focus on real engineering problems instead of interface drudgery, these AI tools freed me up to focus on creativity and problem-solving rather than syntax and setup.

The current data backs this up in interesting ways. Research from GitHub shows that developers don’t just work faster with AI assistants—they reinvest the saved time into activities like system design, learning new technologies, and improving code quality. It’s the 80/20 rule playing out again, but at a higher level.

The Parallelisation Paradox: From CPUs to AIs

As my weekend experiment evolved, I found myself not just using two AIs, but three. Beyond my coding assistant in the IDE and ChatGPT as my conversational partner, I had a third AI effectively working through GitHub’s web interface, with ChatGPT acting as the intermediary, to create a basic website to go with the app.

What struck me was how familiar this felt—not from a coding perspective, but from my experience with computational parallelisation back in the early ’90s when we were scaling crash analysis simulations on LS-DYNA3D.

Just like Visual Basic rode the wave of Moore’s Law to deliver usability gains, this multi-agent approach feels like it’s riding a similar wave of AI capability improvements. But here’s where the crash analysis parallel becomes really interesting: we learned that parallelisation had sweet spots. You could scale up to a certain number of CPUs and see dramatic performance improvements, but beyond a point, the coordination overhead started eating into your gains. You’d hit diminishing returns where more processors actually made things slower.

I suspect we will see the same pattern with AI agents. In my experiment, three agents felt manageable: each had a clear role, and I could coordinate effectively between them. However, I can imagine that at some point, having five or ten agents working on the same project might create more coordination complexity than it is worth. The “agent parallelisation curve” probably resembles the CPU parallelisation curve we mapped out decades ago in computational fluid dynamics and crash simulations. Let’s see how that plays out.
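The diminishing-returns shape described above can be sketched with Amdahl’s law plus a per-agent coordination cost. The numbers below (a 10% serial fraction, a 2% overhead per extra agent) are purely illustrative assumptions, not measurements from the weekend experiment, but they reproduce the rise-peak-decline curve familiar from CPU parallelisation:

```python
def agent_speedup(n, serial_fraction=0.1, overhead_per_agent=0.02):
    """Amdahl's-law-style speedup with a per-agent coordination cost.

    serial_fraction: share of the work that cannot be parallelised.
    overhead_per_agent: extra coordination cost each additional agent adds.
    Both parameters are illustrative assumptions, not measured values.
    """
    parallel_fraction = 1.0 - serial_fraction
    # Relative time: serial part + parallel part divided among n agents
    # + coordination overhead that grows with the number of agents.
    time = serial_fraction + parallel_fraction / n + overhead_per_agent * (n - 1)
    return 1.0 / time

# The curve rises, peaks, then falls as coordination eats the gains.
for n in (1, 2, 3, 5, 7, 10, 20):
    print(f"{n:2d} agents -> {agent_speedup(n):.2f}x")
```

With these assumed numbers the speedup peaks at around six or seven agents and then declines, so twenty agents end up slower than five—the same sweet-spot behaviour the crash-simulation clusters showed.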

This suggests that the next phase of AI-assisted development won’t just be about more agents, but about smarter orchestration: finding the optimal number where you maximise collaborative benefit without drowning in coordination overhead. It’s history repeating itself, but with artificial intelligence instead of silicon processors.

Addressing the Elephant in the Room: Will AI Replace Us?

As I wrapped up my weekend project, I realised something important about the broader conversation around AI and jobs. For anyone worried that AI tools are here to replace us, my experience suggests the opposite. I had never used Xcode before and realistically wouldn’t have had the time to work through tutorials and documentation to learn it from scratch. But with AI assistants guiding me, it was like having knowledgeable mentors who could help me navigate both the new IDE and the coding challenges simultaneously.

This isn’t about AI doing the job for you—it’s about AI removing the learning curve barriers that prevent you from tackling new tools and technologies. The assistants amplify your existing engineering knowledge and problem-solving skills, applying them to unfamiliar environments.

“A fool with a tool is still a fool.”—Grady Booch

The Broader Trend: Parallel AIs as the New Normal

What I experienced with my multi-AI setup is part of a larger trend. The most effective AI coding workflows I’m seeing aren’t about a single, all-knowing assistant, but rather multiple specialised agents working in parallel.

This parallel approach feels like it could be the future—not one AI trying to do everything, but a constellation of specialised assistants, each optimised for different aspects of the development process.

Embracing the Future of Coding

So if you’re wondering whether vibe coding is worth the hype, my answer is a resounding yes. Not because it replaces the human touch, but because it enhances it. The research bears this out: studies show that approximately 67% of developers use AI coding tools at least 5 days per week, and the productivity gains are measurable—typically in the 20-55% range, depending on the task.

More than that, these tools are transforming our understanding of the creative process in engineering. They’re like a constellation of AI sidekicks that handle the routine parts, letting you focus on the interesting problems: the architecture, the user experience, the elegant solutions that only human insight can provide.

From driving to bookshops to flip through programming manuals to having AI agents write unit tests while I sketch product strategy, it’s been quite a journey. And honestly, I’m excited to see where this leads next.

NOTE:

  • This article was written with the help of multiple LLMs.
  • The workflow began with a 30-minute spoken conversation with ChatGPT to discuss what to cover, after which it created a Canvas draft for review and further discussion to add extra thoughts.
  • That draft was then transferred to Claude, which was asked to take the transcript and rethink the article.
  • Final hand editing was done in LinkedIn.