From Prompt to App Store in 48 Hours
“How I built a complete mobile game using AI and shipped it to the App Store over a weekend.”
It started with a single prompt:
"Vampire survivors style game where you fight programming bugs."
That was it. That was the whole prompt I fed into Google’s AI Studio. Twenty seconds later, I was looking at a playable browser game where I was shooting syntax errors at a boss named Robert'); DROP TABLE Students;--. By the end of the weekend, that same game was live on the App Store.
The Spark: Curiosity and Momentum
I didn't sit down on Friday night with a grand plan to ship a game. I just had a persistent idea and wondered if Google's Gemini model could actually build it. I expected a half-broken mess; instead, I got that playable prototype on the first try.
It was a simple React app using the Canvas API, but it had the fundamentals: a player character, enemies that moved, a basic weapon, and XP gems. It wasn't polished, but it was playable.
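For the curious, the skeleton of that first version looked roughly like the sketch below: one requestAnimationFrame loop that moves the player, walks enemies toward you, and checks for gem pickups. The names and numbers are my own illustration, not the generated code.

```typescript
// Minimal sketch of the prototype's core loop (illustrative, not the generated code).
type Vec = { x: number; y: number };
type Entity = Vec & { radius: number; hp: number };

const canvas = document.querySelector('canvas')!;
const ctx = canvas.getContext('2d')!;

const player: Entity = { x: 400, y: 300, radius: 12, hp: 100 };
const enemies: Entity[] = [];
const gems: Vec[] = [];
let xp = 0;

// Spawn a bug every second at the top edge of the canvas.
setInterval(() => {
  enemies.push({ x: Math.random() * canvas.width, y: 0, radius: 6, hp: 1 });
}, 1000);

function update(dt: number) {
  // Enemies home in on the player.
  for (const e of enemies) {
    const dx = player.x - e.x, dy = player.y - e.y;
    const len = Math.hypot(dx, dy) || 1;
    e.x += (dx / len) * 40 * dt;
    e.y += (dy / len) * 40 * dt;
  }
  // Collect XP gems the player walks over.
  for (let i = gems.length - 1; i >= 0; i--) {
    if (Math.hypot(gems[i].x - player.x, gems[i].y - player.y) < player.radius + 6) {
      gems.splice(i, 1);
      xp += 1;
    }
  }
}

function render() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.fillRect(player.x - 6, player.y - 6, 12, 12);                   // player
  for (const e of enemies) ctx.strokeRect(e.x - 6, e.y - 6, 12, 12);  // enemies
}

let last = performance.now();
function frame(now: number) {
  update((now - last) / 1000);
  last = now;
  render();
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```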
That's the moment where most side projects fail—the gap between the idea and the first working line of code. By skipping that gap entirely, I found myself in a loop I hadn't experienced before. I liked what I saw, so I gave it another requirement. Then another. I wasn't "developing" in the traditional sense; I was chasing the momentum of seeing a dumb idea become real in real-time. Each prompt was a tiny dopamine hit of progress.
The First Hour: Rapid Iteration
The initial prototype had problems. The player was a generic circle. Enemies were squares. The balance was nonexistent—you'd get overwhelmed in seconds.
But here's what made AI-assisted development different: fixing these issues wasn't a multi-day effort. It was a conversation.
"Add floating XP text when gems are collected."
"The yellow enemies are impossible to defeat... make them spawn less often."
"Implement a player dash ability with a cooldown."
Each request came back as working code within seconds. I wasn't debugging—I was directing. The AI handled the implementation details while I focused on game feel.
Within the first hour, the game had:
A proper developer character with a laptop and hoodie
Multiple enemy types with unique behaviors (Stack Overflow bugs that grow larger, Type Error bugs that reverse your controls)
A weapon upgrade system during level-ups
Screen shake effects
A dash mechanic with invulnerability frames
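The dash is a good example of how small each of these systems really is under the hood. Here's a rough sketch of a cooldown plus invulnerability-frames mechanic, in my own illustrative naming rather than the code the AI produced:

```typescript
// Sketch of a dash with a cooldown and invulnerability frames (illustrative only).
const DASH_DISTANCE = 120;   // pixels covered per dash
const DASH_COOLDOWN = 2.0;   // seconds between dashes
const IFRAME_DURATION = 0.3; // seconds of invulnerability after dashing

let dashCooldown = 0;
let iframes = 0;

function tryDash(player: { x: number; y: number }, dir: { x: number; y: number }) {
  if (dashCooldown > 0) return;        // still on cooldown
  player.x += dir.x * DASH_DISTANCE;
  player.y += dir.y * DASH_DISTANCE;
  dashCooldown = DASH_COOLDOWN;
  iframes = IFRAME_DURATION;           // briefly ignore enemy contact damage
}

function tick(dt: number) {
  dashCooldown = Math.max(0, dashCooldown - dt);
  iframes = Math.max(0, iframes - dt);
}

function canTakeDamage(): boolean {
  return iframes <= 0;
}
```

The entire feature is two timers and a flag; the work is in deciding the numbers feel right, which is exactly the part the AI can't playtest for you.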
The Hallucination Hurdle
It wasn't all magic. By Friday night, the codebase was a single 2,000-line file. Every time I asked the AI to add a new weapon, it would get confused by the file size and accidentally "optimize" the code by deleting the enemy rendering logic.
I spent an hour wondering why my enemies had turned invisible before I realized the AI was hallucinating a "cleaner" version of the code that didn't actually work.
The fix was old-fashioned software engineering: I paused feature development and asked the AI to refactor the code into separate modules—Enemy, Weapon, Renderer, and so on. Once each file fit comfortably inside the model's context window, the "amnesia" stopped. It was a stark reminder that AI is a junior developer: fast and enthusiastic, but in need of supervision.
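The split itself was nothing exotic: one file per responsibility, so any single prompt only has to touch a small piece of the codebase. Roughly this shape (file names and signatures are my own illustration):

```typescript
// enemy.ts — enemy state and movement, isolated from everything else
export interface Enemy { x: number; y: number; hp: number; speed: number; }

export function updateEnemies(enemies: Enemy[], target: { x: number; y: number }, dt: number) {
  for (const e of enemies) {
    const dx = target.x - e.x, dy = target.y - e.y;
    const len = Math.hypot(dx, dy) || 1;
    e.x += (dx / len) * e.speed * dt;
    e.y += (dy / len) * e.speed * dt;
  }
}

// weapon.ts   — firing logic and upgrades live here
// renderer.ts — all Canvas drawing, so a gameplay prompt can't delete it by accident
// game.ts     — the loop that wires the modules together
```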
The Design Philosophy: Programming Humor as Gameplay
Early on, I realized the game's theme wasn't just aesthetic—it could inform actual mechanics. Every weapon became a programming joke that also made sense as gameplay:
Console.log() is your starting weapon because that's what every developer reaches for first when debugging. It's basic, reliable, everywhere.
Unit Test Shield orbits your character protectively. Because good tests protect your code from regressions.
Garbage Collector is an area-of-effect pulse. It cleans up everything around you, just like memory management.
Refactor Beam pierces through multiple enemies. Because a good refactor cuts through accumulated technical debt.
The enemies followed the same logic. NullPointer enemies are fast but fragile—they crash quickly. Memory Leak enemies are slow tanks that just won't die. Race Condition enemies move erratically because you never know where they'll be next.
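Mechanically, that kind of theming is cheap if enemies are data-driven: one table of stats plus a behavior flag per bug type. A sketch with invented numbers, not the game's actual balance:

```typescript
// Illustrative enemy table — the shipped stats differ.
type Behavior = 'chase' | 'erratic';

interface EnemyType {
  name: string;
  hp: number;
  speed: number;
  behavior: Behavior;
}

const ENEMY_TYPES: EnemyType[] = [
  { name: 'NullPointer',   hp: 1,  speed: 140, behavior: 'chase' },   // fast, but crashes quickly
  { name: 'MemoryLeak',    hp: 40, speed: 25,  behavior: 'chase' },   // slow tank that won't die
  { name: 'RaceCondition', hp: 5,  speed: 80,  behavior: 'erratic' }, // you never know where it'll be
];

function move(e: { x: number; y: number }, type: EnemyType, target: { x: number; y: number }, dt: number) {
  let dx = target.x - e.x, dy = target.y - e.y;
  if (type.behavior === 'erratic') {
    // Race conditions jitter: add random noise to the heading every frame.
    dx += (Math.random() - 0.5) * 200;
    dy += (Math.random() - 0.5) * 200;
  }
  const len = Math.hypot(dx, dy) || 1;
  e.x += (dx / len) * type.speed * dt;
  e.y += (dy / len) * type.speed * dt;
}
```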
This thematic consistency made the game more fun to build and more memorable to play.
The Boss Problem
Simple enemies were working, but the game needed punctuation. Something to break up the endless horde.
Bosses became the answer, and they became the game's personality.
The Spaghetti Monster spawns obstacles across the arena, slowly making it "unreadable"—just like actual spaghetti code.
The Compiler periodically freezes your character while displaying "Compiling Assets..." Then, in phase two, it permanently deletes your lowest-level weapon from your inventory. Brutal, but thematically perfect.
Robert'); DROP TABLE Students;-- (yes, that's the actual name) attacks with SQL injection projectiles. Because of course it does.
The Incident is a burning server rack that pops up system error modals you have to dismiss. It interrupts your gameplay at the worst possible moments, exactly like a 2 AM production incident.
My favorite is The Tech Lead. It's protected by Junior Dev minions that buff it. You can't damage the boss until you deal with the juniors first. It teaches priority management through game design.
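The rule behind The Tech Lead fits in a single check: the boss only takes damage once its minion list is empty. Something like this (illustrative, not the shipped code):

```typescript
// Sketch of the "deal with the juniors first" rule.
interface Boss {
  hp: number;
  minions: { hp: number }[]; // Junior Dev minions that buff and shield the boss
}

function damageBoss(boss: Boss, amount: number) {
  const juniorsAlive = boss.minions.some(m => m.hp > 0);
  if (juniorsAlive) return; // untouchable until the juniors are handled
  boss.hp -= amount;
}
```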
The Mobile Question
After a day of iterating on the browser version, I had something I was genuinely proud of. The game had depth. It had personality. It had the kind of "one more run" compulsion that marks a good roguelike.
Then came the thought: could this be a mobile app?
The honest answer was that I had no idea. I'd never shipped anything to the App Store. The process seemed intimidating—certificates, provisioning profiles, screenshot requirements, review guidelines. It felt like a completely different discipline.
But I'd come this far by not overthinking. Why stop now?
The Technology Decision
I spent time researching mobile development options. The choices seemed to boil down to:
React Native would let me reuse some React knowledge, but the graphics stack was completely different
Flutter had excellent performance but required learning Dart and rewriting everything
Native development was obviously the best quality but also obviously the most time-consuming
Or… Capacitor.
Capacitor wraps existing web applications in a native container. Your Canvas code, your Web Audio API calls, your touch handlers—they all just work. The wrapper provides access to native features like haptic feedback and proper fullscreen handling.
The promise was compelling: 95%+ code reuse with a 1-3 week timeline instead of 3-9 months. The catch was that some users might notice it wasn't truly native—slight loading differences, minor touch latency in the WebView.
For a game that was already running smoothly at 60fps in the browser? Worth trying.
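For anyone wondering what "wrapping" actually involves, it's little more than a config file and a few CLI commands. A minimal sketch, assuming a web build that outputs to dist (the app ID below is made up):

```typescript
// capacitor.config.ts — a minimal sketch; appId and webDir are assumptions.
import type { CapacitorConfig } from '@capacitor/cli';

const config: CapacitorConfig = {
  appId: 'io.example.debugsurvivor', // hypothetical bundle identifier
  appName: 'Debug Survivor',
  webDir: 'dist',                    // wherever your web build lands
};

export default config;

// Then, roughly:
//   npm install @capacitor/core @capacitor/cli
//   npx cap add ios
//   npm run build && npx cap sync
//   npx cap open ios   // opens Xcode for signing and submission
```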
The 24-Hour Sprint: From Browser to App Store
The actual mobile conversion happened in a blur.
Friday Night: The Prototype. I had the core loop working in the browser. It was ugly, but fun.
Saturday Morning: I installed Capacitor. Within an hour, I had the web app running on my iPhone. I added a virtual joystick because keyboard controls don't exist on phones.
Saturday Afternoon: I fixed audio issues on iOS, handled the notch on newer iPhones, and added haptic feedback (there's a sketch of that wiring after this timeline). This is where the game went from "web port" to "mobile game."
Saturday Night: I used The App Launchpad to generate screenshots because I wanted something that stood out. I generated an icon, filled out the forms, and hit "Submit" before I went to sleep.
Monday: Live. The game was approved and on the store 36 hours later.
Application screenshots generated by The App Launchpad
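The "web port to mobile game" step from Saturday afternoon largely came down to two things: resuming the AudioContext on the first touch (iOS keeps it suspended until a user gesture) and hooking Capacitor's Haptics plugin into game events. A sketch, assuming @capacitor/haptics is installed:

```typescript
import { Haptics, ImpactStyle } from '@capacitor/haptics';

// iOS suspends the AudioContext until a user gesture — resume it on first touch.
const audioCtx = new AudioContext();
window.addEventListener('touchstart', () => {
  if (audioCtx.state === 'suspended') audioCtx.resume();
}, { once: true });

// Light tap when the player collects an XP gem, heavier buzz when they take damage.
export async function buzzOnPickup() {
  await Haptics.impact({ style: ImpactStyle.Light });
}

export async function buzzOnHit() {
  await Haptics.impact({ style: ImpactStyle.Heavy });
}
```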
What Made This Possible
Looking back, several factors converged to make this weekend project viable:
AI as implementation partner. I didn't need to know how to implement screen shake effects or chain lightning weapons. I needed to know what I wanted, and the AI handled the how. This freed me to focus on game design rather than technical implementation.
The browser as prototype platform. Building for web first meant instant feedback. No compile times, no simulator launches. Just save and refresh. By the time I went mobile, the game design was already validated.
Capacitor as bridge technology. Being able to wrap an existing web app rather than rewriting it made the mobile timeline reasonable. A weekend instead of months.
Iteration over planning. Every feature emerged from playing the game and noticing what was missing. The Type Error enemy that reverses your controls? That came from a prompt about "enemies with unique behaviors." The Pull Request event system that pauses gameplay for decisions? That was added to break up the monotony of endless waves.
What’s Next?
If I were to keep iterating on Debug Survivor, there are a few features at the top of my list:
The Bestiary: An unlockable index of every bug you’ve encountered. You’d start with blank entries and slowly fill them in as you "document" the bugs by defeating them.
Game Center Integration: Adding global leaderboards so developers can compete for the title of "Senior Debugger."
Achievements: Rewarding players for specific feats, like "Legacy Code" (surviving 10 minutes with only starting weapons) or "Production Hero" (defeating a boss during a simulated 2 AM incident).
Try It Yourself
Debug Survivor is available on the App Store and Google Play (Looking for testers! Invite here). It’s $2.99, though I’m running a $0.99 sale for the first month. The web version is still up at debug-survivor.leek.io.
To be clear: the goal isn’t to get rich off a weekend project. It’s about the satisfaction of taking a single prompt and seeing it all the way through to a store listing.
One sentence can become a game. That game can ship to the App Store. You can do this in a weekend.