
The Vibe Coding Casino


AI coding assistants promise speed and simplicity for software development, but without human ownership they can turn into expensive high-tech slot machines, racking up hidden costs.


By David Ferrucci


It’s hard to overstate how revolutionary AI coding assistants are. In just a couple of years, tools like Microsoft's Copilot, Replit, ChatGPT, and Lovable have gone from experimental novelties to daily companions for developers around the world. They autocomplete your code, suggest entire functions, explain and debug cryptic code, and even construct entire applications, from front-end user interface to back-end database. And when they work, it feels like magic.


This has given rise to a new pattern of development that many call vibe coding—a mode of working where intuition and prompt engineering replace careful architecture and planning. You throw an idea at the assistant, get some code back, tweak your request, and keep iterating. When it flows, it feels frictionless. You’re coding, designing, and shipping software faster than ever before with minimal skill.


You describe a problem. The AI assistant writes the solution. You refine your goal. It adapts. You press run, and suddenly something that once took hours of work and deep skill is running in seconds on cursory knowledge. It’s easy to see why developers, especially those with less skill, working solo, or under tight deadlines, have embraced these tools with such enthusiasm.


But there’s a darker side to this magic, one that business leaders need to understand. Once you’ve lived through a session of vibe coding that goes off the rails, it becomes clear just how seductive (and maddening) these tools can be. Vibe coding can quickly slide from productivity boost to digital slot machine, where each prompt is a coin tossed into a machine that may or may not pay off. The cost? Wasted time, hidden bugs, and mounting technical debt in the form of higher development and maintenance costs.


The bottom line, at least for the time being: beyond very impressive starts, any serious application development will reach a point where you need to be an expert coder, one who can understand and fix someone else's buggy or misintended code (in this case, the AI’s!). That means accelerating maintenance and change costs, slipping deadlines, and declining quality. Not a good vibe!


The Slot Machine Loop


Take my recent experience trying to add a simple experiment browser tab to a web app. The idea was straightforward: load a list of pre-configured experiments into the app and let the user browse and select them. My AI assistant confidently narrated each step: "You should see blue-bordered tabs now at the top of the screen."


And yet… nothing. No tabs. No experiment browser. Just the same user interface as before, only now with dozens of extra lines of code and a mounting sense of disbelief.


I scrolled. I refreshed the screen. I checked the logs, then checked them again. The assistant kept assuring me that everything was fine: "Tabs are rendering correctly—you just need to scroll past the configuration section."


But the tabs weren’t there. And the longer I stayed in this loop, the more it started to feel like a gambling session—not a coding one. Every new prompt was another dollar into the slot machine. Every hopeful fix was a pull of the lever. And every response was a glimmer of false hope: Almost there. Just one more try. Just one more dollar.
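To see how a feature this small can fail this quietly, here is a hypothetical sketch, not the app’s actual code (the story doesn’t show it), of a React/TypeScript tab strip whose data silently never arrives. The endpoint, types, and names below are invented for illustration.

```tsx
import { useEffect, useState } from "react";

// Hypothetical type and endpoint, invented for this illustration.
interface Experiment {
  id: string;
  name: string;
}

export function ExperimentTabs() {
  const [experiments, setExperiments] = useState<Experiment[]>([]);

  useEffect(() => {
    // If this path is wrong, or the response shape doesn't match,
    // the catch swallows the error and the list simply stays empty.
    fetch("/api/experiments")
      .then((res) => res.json())
      .then((data: Experiment[]) => setExperiments(data))
      .catch(() => {
        /* error silently ignored */
      });
  }, []);

  // An empty list renders an empty <nav>: no error message, no
  // blue-bordered tabs, nothing to scroll to. The page looks exactly
  // as it did before the "fix".
  return (
    <nav>
      {experiments.map((exp) => (
        <button key={exp.id} style={{ border: "2px solid blue" }}>
          {exp.name}
        </button>
      ))}
    </nav>
  );
}
```

Code like this is plausible, compiles, and does nothing visible, which is exactly what makes each reassuring “fix” feel like another pull of the lever.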


Illusions of Intelligence


At one point, exasperated, I took a screenshot and wrote, plainly, "Look, STILL no tabs!" Only then did the assistant finally acknowledge what had been obvious—that the tabs were, in fact, not there. After a dozen confident reassurances and countless lines of hallucinated fixes, it took direct visual evidence to break through the illusion of progress.


Here’s the real danger: AI coding assistants are persuasive. They don’t hedge. They don’t say “maybe.” They sound like experts. They reassure you that things are working, even when they’re not. The AI assistant told me that the tabs must be there. That the conflict was fixed. That the interface was rendering as expected. But it wasn’t. The AI coding assistant was clearly hallucinating. It hadn't fully tested the app. It simply generated plausible code and assumed success.


This illusion of competence is one of the most insidious features of AI tools. They can be wrong, and confidently so. And unless you know they’re wrong, you might proceed under false assumptions—with potentially costly results that would certainly “kill the vibe.”


When the Mistakes Aren’t Obvious


Here’s the thing: this was a simple task, and the problem was easy to spot. But not all coding problems are that clean. In another project, I asked the AI assistant to help build a complex engineering application. It offered to implement mathematical modeling functions and optimization algorithms. It explained the theory. It described the logic. It populated a beautiful user interface with charts and sliders and said it had done the math.


But the math was wrong.


Not obviously wrong. Subtly wrong. The graphs looked good. The outputs seemed reasonable. But the underlying calculations were flawed—oversimplified, misapplied, or just incorrect.
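To make “subtly wrong” concrete, here is a hypothetical TypeScript illustration, not the project’s actual code: annualizing a series of yearly returns. A confident assistant might average them arithmetically; returns compound, so the geometric mean is what’s called for. Both numbers look plausible; only one is right.

```ts
const yearlyReturns = [0.10, -0.05, 0.08, -0.02];

// Plausible-looking version: a simple arithmetic mean of the returns.
const naive =
  yearlyReturns.reduce((sum, r) => sum + r, 0) / yearlyReturns.length;

// Correct version: compound the returns, then take the geometric mean.
const compounded =
  yearlyReturns.reduce((acc, r) => acc * (1 + r), 1) **
    (1 / yearlyReturns.length) - 1;

console.log(naive.toFixed(4));      // 0.0275
console.log(compounded.toFixed(4)); // 0.0255 -- close enough to look right
```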


In fields like finance, manufacturing, or logistics, subtle errors don’t just waste time—they drive bad decisions. And when those errors come wrapped in a sleek user interface, they’re much harder to detect.


The Sudden Cliff of Complexity


The subtle danger of vibe coding is that when it works, it makes you feel powerful. The barrier to entry is low. You can get started quickly. You don’t need to be an expert in anything—just vaguely articulate about what you want.


But as soon as something breaks, everything changes. The coding skill requirement goes from novice to expert in a blink. Now you're parsing and debugging code you didn’t organize, plan, or write. The smooth ride becomes a cliff, and the fall is steep.


The (Technical) Debt We Don't Immediately See


Most executives are familiar with the concept of technical debt: shortcuts, quick fixes, and inadequate design or planning that lead to long-term maintenance burdens. AI accelerates that dynamic.


It’s not malicious. It’s not intentionally deceptive. But it can be costly. Let’s assume you “win,” meaning you get a substantial working application out the other end of a vibe coding experience. You now have a sizable codebase, one you are not familiar with. What happens if it passes all your test cases but then fails reality? As you launch to users and begin the integrations with external systems that success often demands, the vibe cannot sustain itself. The needed modifications are so significant that the AI struggles to appreciate the broader context and adapt. The slot machine is back with a vengeance, but this time much more is at stake, and you are more beholden to the AI than ever. That’s tech debt on fast-forward.


Right now, the primary concern with vibe coding is just that: technical debt on steroids. We can now generate more code, faster than ever before. But much of that code is not fully understood by the humans who are responsible for it. The AI is great at filling in the gaps and producing “working” code. But what implicit assumptions did it make that, had you known about them, you would not have agreed to? We are left debugging something we didn’t build or even intend, with no clear map of how we got there.
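Here is one hypothetical example of such an assumption, invented for this point rather than drawn from any particular project:

```ts
// A deadline parser that "works" on every test case someone happened to write.
function parseDeadline(dateStr: string): Date {
  // JavaScript parses date-only ISO strings like "2025-09-11" as UTC
  // midnight, while strings like "Sep 11, 2025" are typically parsed in
  // the local timezone. Deadlines can silently shift by hours depending
  // on input format, an assumption nobody was asked to sign off on.
  return new Date(dateStr);
}

console.log(parseDeadline("2025-09-11").toISOString());   // always ...T00:00:00.000Z
console.log(parseDeadline("Sep 11, 2025").toISOString()); // depends on local offset
```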


A Call for Ownership


What’s the solution? It’s certainly not to stop using AI to assist in development. Rather, it is to understand and wisely manage the role of the AI coding assistant in the process. The subtleties of the relationship between coder, technical manager, human collaborators, and AI are still unfolding. There’s a lot we don’t yet fully understand—and a lot we must urgently figure out.


AI coding assistants are a phenomenal breakthrough, and they will continue to evolve. When used with clear intent and deep expertise, they can dramatically accelerate the productivity of seasoned developers who can steer them, describe how they want things coded, vet their output, and integrate what’s useful. But AI coding assistants remain a dangerous novelty in the hands of beginners, who are easily drawn into complexity they may not be able to navigate.


That’s why technical managers must require something timeless: responsible human ownership. Someone knowledgeable and skilled must be responsible, not just for generating the code, but for understanding it, maintaining it, and improving it over time. That principle has always been true. But with AI accelerating the pace of code generation, the urgency for deliberate, accountable oversight has never been higher. Great developers can be a lot more productive with AI, but as the demand to do more goes up, so does the demand for people who really know what they are doing.


Human ownership is not optional. It’s the only safeguard between leverage and liability, between acceleration and collapse. Without it, vibe coding stops being a productivity tool and becomes a trap—one that looks like progress, sounds like success, but silently racks up more technical debt than you ever imagined.


David Ferrucci is Managing Director of the non-profit Institute for Advanced Enterprise AI at the Center for Global Enterprise. He is also currently serving as the Chief AI Officer at Unqork. Dr. Ferrucci is an award-winning AI researcher who led the IBM Watson team to its landmark success in defeating the best human players on Jeopardy! in 2011. Watson’s victory marked the start of a revolution in advancing machine intelligence.
