6 Months, 3 Game Projects, 0 Shipped
I spent 6 months trying to become a game developer while keeping my day job. I built three projects, wrote 684 commits, and shipped exactly zero games. Here is what happened, what I learned, and why nothing shipped.
I meant to write this post a few weeks ago. But I think I was hoping that by the time I sat down to write it, at least one of the three games I was building would be done.
None of them are.
So here we are: 684 commits across three projects in six months, and nothing you can download, play, or send to a friend. If you have ever learned something new outside a full-time job — something genuinely hard, something you were not already good at — you probably know exactly how this feels.
I am writing this anyway, because I think the story of why nothing shipped is more useful than a polished announcement of something that did.
The Opening Context
For 18 years I worked in advertising. It is good work, it is challenging, and I know how to do it. I have shipped campaigns, built teams, written books, stood on stages. I know what "done" looks like in that world.
Then in late 2025, I decided I wanted to learn game development. Not game marketing. Not game strategy. Actually building games — the code, the art, the systems, the feel of a character moving on screen.
I had no experience with game engines. I had never rigged a 3D model. I did not know what a state machine was in the context of animation. I have been coding for a few years now, mostly web apps and AI products, but games are an entirely different discipline. (Please correct me if I am wrong on any of this — I am still learning.)
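For anyone at the same starting point: in animation, a state machine is just a set of named states (idle, walk, jump) with rules about which transitions are legal, so a character can only move between poses that make sense. A minimal sketch in Python — the state and transition names are illustrative, not taken from any engine:

```python
class AnimationStateMachine:
    """Tiny illustrative animation state machine: named states
    plus an allowlist of legal transitions between them."""

    def __init__(self):
        self.state = "idle"
        # Which state changes are allowed from each state.
        self.transitions = {
            "idle": {"walk", "jump"},
            "walk": {"idle", "jump"},
            "jump": {"idle"},  # must land (idle) before anything else
        }

    def request(self, next_state: str) -> bool:
        """Switch states only if the transition is legal."""
        if next_state in self.transitions[self.state]:
            self.state = next_state
            return True
        return False


sm = AnimationStateMachine()
sm.request("walk")       # idle -> walk: allowed
sm.request("jump")       # walk -> jump: allowed
ok = sm.request("walk")  # jump -> walk: blocked, must land first
print(sm.state, ok)      # prints: jump False
```

Godot's AnimationTree builds on the same idea at engine scale; the point is simply that illegal pose jumps become structurally impossible rather than something you police by hand.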
The first thing I learned: shipping a game is hard. The second thing I learned: you cannot reason your way into a good game loop from a document.
You have to see it move. You have to play it, or at least get it into the Godot runtime and watch whether the character, camera, controls, and feedback actually create a feeling. And then you have to be willing to build, delete, cut, and rebuild until something interesting appears.
Let me walk through what happened.
Project 1: The AI Pet Sanctuary (288 commits)
December 2025 – February 2026
The first project started with a simple idea: what if you could have a pet that lived on your computer and talked to you using AI? Not a chatbot with a pet avatar — an actual pet with behaviors, a habitat, and personality.
I built a full-stack web application. React on the frontend, FastAPI on the backend, Supabase for the database. I used Gemini for conversations and Cloudflare Stream for video.
The ambitious part was the animation system. I wanted 18 pet types across six broad categories — dogs, cats, birds, small animals, fish, reptiles. Each one needed to move, blink, and feel alive.
I started with Rive animations. They did not work for this use case. So I replaced them with CSS and JavaScript pet animations. Then I built a pipeline using Google Veo 3.1 to generate animation content — AI video generation for the pet movements. That worked, but it was brittle. The API would fail, lighting would be inconsistent, scale would shift between generations. I spent more time fighting the generation pipeline than designing the actual pet experience.
I also built a habitat system with full-screen immersive environments — a room called "The Study" where the pet lived. Care screens. Memory screens. Conversation integration. A complete "Quiet Luxury" redesign halfway through because the first version looked like a prototype. (Because it was.)
Then I pivoted.
288 commits in, I decided the web app was the wrong platform and started planning an iOS native version instead. I still do not know whether that was the right call or just boredom dressed up as strategy. Part of me believes a pet on your phone is more intimate than a pet in a browser. Part of me thinks I just got tired of React.
But if I am being honest, the real reason I never shipped it was simpler: it was boring. I built all of this — the habitats, the animation system, the conversation integration — and when it came time to actually sit and interact with the pet, I lost interest within two minutes. There was nothing compelling enough to hold my attention.

I have played games since secondary school, so I trust that first reaction, even if I do not always know how to fix it yet. I still play Mobile Legends with my family and usually climb to Mystic. That is not a game you play for passive comfort — it demands skill and attention. This pet sanctuary demanded neither.
I kept telling myself the problem was the platform. A web app could not deliver the immersive experience I wanted. Maybe iOS would be different. But deep down I knew the issue was not React — it was the design. No amount of platform switching was going to make an uninteresting game loop feel exciting.
What went wrong
Looking back at the repo, I do not think the right lesson is "288 commits without a release is bad." If the core loop is not engaging, shipping it earlier just means shipping a boring thing earlier.
The real problem was that I kept changing the container before proving the loop. The project went from merge/order mechanics, to pure companion, to immersive habitats, to iOS. Each pivot had a reason. But the question I did not answer clearly enough was simpler: would I want to open this every day?
I could have validated that with one pet, one room, one conversation, and one care/play interaction. Instead I built 18 pet types across 6 animal categories with a full habitat system before I had a runtime answer to the most important question: is this pet interesting to come back to?
AI-generated animation is powerful but brittle. When it works, it is incredible. When it does not work, you are debugging prompts and API failures instead of your product. You need a fallback pipeline, and I did not build one early enough.
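The fallback I should have built early is not complicated. Conceptually it is a wrapper: attempt the generated clip a bounded number of times, then fall back to a hand-authored animation so the pet never renders broken. A hedged sketch — `flaky_generator` and the asset names are hypothetical stand-ins, not the real Veo API:

```python
def fetch_pet_animation(generate_clip, pet_type: str,
                        fallback: str = "css_idle_loop",
                        retries: int = 2) -> str:
    """Try AI generation a bounded number of times, then fall
    back to a deterministic hand-authored animation asset."""
    for _attempt in range(retries):
        try:
            return generate_clip(pet_type)
        except Exception:
            continue  # brittle API: retry, then give up
    return fallback  # the pet is never left broken on screen


# Simulate a flaky generation backend that always fails.
calls = []
def flaky_generator(pet_type):
    calls.append(pet_type)
    raise RuntimeError("generation failed")

result = fetch_pet_animation(flaky_generator, "corgi")
print(result, len(calls))  # prints: css_idle_loop 2
```

Ten lines of wrapper would have turned every generation failure from a debugging session into a graceful degradation, and freed that time for the actual pet experience.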
What I learned
The first job is not to cut scope. The first job is to find the interesting part. Sometimes that means building more. Sometimes it means deleting what you just built. The mistake was not exploration. The mistake was exploring too long without a playable proof that told me whether the core experience had life in it.
Project 2: The Fox That Became a Stepping Stone (35 commits)
April 17 – May 12, 2026
After the pet sanctuary pivoted to iOS, I started a new project in Godot 4.6. I wanted to build a game from scratch in a real game engine this time, not a web app pretending to be a game.
The initial concept: a Spirit Fox Kit Sanctuary Sim. A small fox that lives in a cozy room, and you take care of it. Simple, warm, contained.
I set up the Godot project. I tried to design a fox character with hidden tense silhouettes, faster twitch animations, and micro-signals for stillness.
But that sentence makes the prototype sound more legible than it was. On screen, the fox did not read as a fox. It was a small blob of mass. I could not see a usable shape, let alone judge the rest of the room around it.
I built a draft room loop with GUT test coverage. Placeholder audio. Hover cues for interactables. Room scene layout. A cold-start playtest harness so I could actually play the thing.
And then I played it.
The honest truth is more visual and less philosophical: the fox failed at the first read. The room prototype had a mechanically verified path, and the repo even records an abbreviated solo gate pass for the placeholder-art build. The stillness, the emergence beat, the touch distinction, and the room problem were acceptable for that stage.
But acceptable for a small authored beat is not the same as visually strong enough to carry a full game. This project depended on the fox carrying the experience. If I could not see its shape, posture, and appeal, then the sanctuary loop had no chance yet.
So on April 20 — just three days after starting — I wrote a full pivot spec. The fox sanctuary sim became something completely different: a landscape action-adventure with a warm safehouse and hostile territory to explore. I paused the fox-first plans and started designing a new direction.
35 commits. One validated beat. One pivot. Nothing shipped.
What went wrong
The real problem is that I treated animation signals as if they could compensate for an unreadable base shape. They could not. A twitch does not matter if the body underneath it has no silhouette.
The pivot was not proof that the first work was wasted. It made the first work a foundation. What I still did not have was evidence that the larger Hearthkeeper direction (the new action-adventure concept) would work, or that I could make its core character readable enough to care about.
What I learned
Playtesting early reveals what a prototype is actually proving. The room prototype proved tone, pacing, and a relationship beat. It did not prove a whole game, and it exposed the visual readability problem immediately. That is useful information, but it points to a different next step: solve the character read and build the smallest safehouse -> run -> return loop before polishing the fox again.
Game 1 taught me that a technically complete loop can still be boring. Game 2 taught me that a documented, tested character concept can still fail visually in seconds. Those two failures are part of why Game 3 has made more real progress: I am paying much more attention to readability, motion, weapon feel, and whether the playable slice works on screen.
Pivoting from evidence is different from drifting. The fox did not read in runtime, so changing direction was not automatically a failure of discipline. The part I need to improve is the feedback loop: get the riskiest visual or gameplay question onto the screen faster, then decide with evidence instead of staying in concept mode.
Project 3: The One Where the Pipeline Started Working (361 commits)
April 21 – May 13, 2026
This is the project I am still working on. It is also the one where the learning from the first two projects finally started to compound.
It is a mech tactics game. You control a squad of mechs in combat, and after each battle, an LLM analyzes what happened and gives you a debrief. Think post-match analysis, but written by an AI that has access to all the game state.
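The mechanism behind the debrief is mostly serialization: turn the battle log into a structured prompt and hand it to the model. A minimal sketch, assuming a generic `complete(prompt)` LLM call; the event fields here are illustrative, not the real game schema:

```python
import json

def build_debrief_prompt(events: list[dict]) -> str:
    """Serialize a battle log into a post-match analysis prompt.
    The event schema (turn/actor/action/result) is illustrative."""
    log = "\n".join(json.dumps(e) for e in events)
    return (
        "You are a tactical analyst for a mech squad.\n"
        "Below is the full battle log as JSON lines.\n"
        "Write a short debrief: what worked, what nearly failed, "
        "and one concrete adjustment for the next battle.\n\n"
        f"{log}"
    )

events = [
    {"turn": 1, "actor": "Lancer", "action": "advance", "result": "flanked"},
    {"turn": 2, "actor": "Bastion", "action": "overwatch", "result": "2 kills"},
]
prompt = build_debrief_prompt(events)
# In the real game this prompt would go to the model, e.g.:
#   debrief = complete(prompt)  # hypothetical LLM client call
```

The interesting design questions live upstream of the model: which events to log, how much state to include, and how to keep the prompt stable as the game changes.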
The repo has three tracks, but they are not equal anymore:
Track A — The LLM Debrief System: This was the early AI validation track. It helped me test structured debriefs, but it is no longer the main product path.
Track B — The 2D Phone Prototype: This is where I finally found a workable pipeline for a relatively immersive 2D player experience. The actual Godot runtime has a lane-match combat loop, touch joystick, shop/modules, hero XP, ability cooldowns, phone-readable VFX motifs, enemy punish zones, a neutral relay objective, HUD feedback, and an iOS export/testing flow.

The art pipeline started working too: Nano Banana 2 (Gemini 3.1 Flash Image) generated usable initial hero and character images, then an MCP-based pixel cleanup workflow and Aseprite turned curated candidates into cleaned sprites, animation frames, and Godot-ready sheets. It is not a finished game, and phone acceptance is still pending, but it is much closer to "this feels like a game" than the first two projects were.
Track C — The Campaign and 3D Path: This is the current product direction. It keeps the useful Track B lessons, adds a campaign memory loop, and moves production combat toward 3D/2.5D. That is a much harder problem than the 2D pipeline. In 2D, I can control readability with sprite sheets, HUD tuning, procedural VFX, and phone captures. In 3D, every decision touches modeling, rigging, animation, camera, lighting, GLB export, weapon sockets, hit anchors, action timing, and runtime performance.
I also integrated Blender for custom rigging work, brought in Rokoko mocap data for locomotion, and built the hero mech character with a full rig, animation tree, and weapon socket integration.
361 commits in three weeks. A real 2D pipeline. A much harder 3D experiment. Still in prototype territory.
What went wrong
Here is the thing I have to admit: the 2D pipeline is starting to work, but the 3D pipeline can swallow the whole project if I let it.
For Track B, pipeline work was not fake progress. The Nano Banana 2 -> Aseprite workflow, touch controls, HUD, readable ability motifs, module feedback, phone captures, and export loop were the product becoming more playable. That work taught me what a player needs to see and feel moment to moment.
The risk is different now. In Track C, 3D requires a lot of pipeline before it feels like anything: model import, rig quality, animation names, weapon attachment, muzzle position, hit center, camera angle, lighting, movement, jump timing, attack timing, VFX, and runtime inspection. Those are not optional details. If the character does not read, the whole scene fails in seconds.
But the playable encounter still has to lead. Otherwise I can spend weeks improving the hero mech's weapon fit, jump compression, left-hand IK, and mocap cleanup without answering the simple player question: is this fun to control?
What I learned
Pipeline work is useful only when it keeps serving the playable slice. Track B taught me that the right pipeline can create a more immersive 2D experience. Track C is teaching me that 3D raises the cost of every improvement. I have to keep the acceptance gate concrete: can the player read the hero mech, move him, aim, fire, understand the hit, and want to do it again?
Tracks need gates. Track A did its job. Track B produced reusable 2D game feel and presentation lessons. Track C is the current bet. The problem is not that I explored multiple paths. The problem is reopening them without a clear runtime question and a clear decision rule.
3D game development is a massive skill gap. Rigging, animation, mocap data, asset budgets, weapon sockets, camera framing, and readable motion are disciplines I have zero professional experience in. I am learning them while also learning Godot, game design, and campaign architecture. The learning curve is not steep — it is vertical. I think this is the part that surprised me most. In web development, I can build something functional in a day. In 3D game dev, I spent a full evening trying to get a character's arm to bend the right way. (This is the kind of sentence I would not have understood six months ago.)
What These Three Projects Have in Common
If I am honest with myself — and this post only works if I am — the pattern is not "scope creep." That is too easy, and in this case I think it is wrong.
I do not think games work like business decks — at least, that is what these three projects taught me. You do not know whether an idea works because the premise sounds good. You know when you can play it, or at least see it running in the engine and feel whether the character, camera, interaction, and feedback are doing anything.
1. The runtime is the truth
The pet sanctuary sounded good until I interacted with the pet and got bored. The fox sounded good until I saw it in Godot and could not read the shape. Track B started working because the loop got into a phone-like runtime: touch controls, HUD, VFX, shop, XP, readable motifs, and capture feedback.
That is the real pattern. The useful truth only arrived when the idea became visible and playable.
2. Exploration is necessary, but it needs evidence
I do not think the right lesson is "stop adding things." Game development needs exploration. You build, try, delete, cut, rebuild, and try again because you are searching for the interesting part.
The part that needs discipline is not the amount of work. It is the evidence loop. Each piece of work should answer a question: does this make the game more readable, more engaging, more controllable, more worth replaying?
3. Pipelines are good when they make the game more playable
The Nano Banana 2 -> MCP cleanup -> Aseprite workflow was not a distraction. It helped me get better 2D characters and animation into the game. The Track B HUD and VFX work was not fake progress either. It made the runtime clearer.
The risk is when the pipeline stops answering a player-facing question. In Track C, 3D tooling is necessary, but it has to keep coming back to one thing: can the player read the hero mech, move him, fire the weapon, understand the hit, and want to do it again?
4. The gap is not coding skill. It is playable judgment.
I can build a full-stack application with Supabase, FastAPI, and React. I can rig a 3D model and write GUT tests and build custom debugging tools. What I am still learning to do is judge whether something is actually interesting as a game. Track B got me closer in 2D. Track C is showing me how much harder that becomes when motion, camera, lighting, weapons, and animation all have to work together.
I think this is the most important thing I have learned. The code is the part I am good at. The hard part is deciding what the game actually is, making it interesting, and then having the discipline to keep returning to runtime evidence until it is worth shipping.
What I Learned Overall
If I try to distill six months into something useful — for me, or for anyone reading this who is in the same position — here is what I think I know now that I did not know in December:
- The first version should answer the riskiest question in runtime. My pet sanctuary needed one pet interaction that made me want to come back. My fox game needed a readable silhouette in a blank room before I worried about micro-signals. My mech game eventually got closer through Track B because I could see and feel the loop on screen.
- Playtesting is not optional, and runtime review counts. The moment I saw the fox as a shapeless mass, I knew the project was not working yet. That was the most valuable 30 minutes of the entire project. I should have reached that visual truth earlier.
- Learning is a valid outcome, even if it is not the one you planned. I set out to build games. I did not ship games. But I learned about Godot, LLM prompt engineering, Nano Banana 2 for character ideation, Aseprite animation cleanup, 2D phone readability, HUD and VFX feedback, animation pipelines, 3D rigging, mocap data, game feel, camera systems, visual readability, and the difference between building tools and building products. That is not nothing. It is also why Game 3 is better than Game 1 and Game 2, even if it still has not shipped.
- I still need to define an honest finish line. Part of me wants to push this project to completion so I can say I shipped a game. Part of me thinks the learning was the product and I should accept that. Both are honest answers. I think the truth is somewhere in between: ship a focused slice that respects what I have learned instead of restarting from zero again.
What's Next
I am still working on the mech tactics project. The LLM debrief concept is interesting, and I think there is something genuinely novel in the idea of AI-powered post-match analysis for a tactics game.
But I am trying to change my approach: runtime proof first, polish later, ship something only after the playable slice has a reason to exist. The first two projects were not wasted. They changed what I look at in Game 3: does the character read, does the motion feel good, does the weapon feel right, and can someone understand the encounter without me explaining the architecture?
If I started a new game today, I would not begin with a big architecture plan. I would start with the fastest runtime proof that could answer the question I actually care about. For Game 3 now, the same discipline has to happen inside the project: one character, one weapon, one encounter, one clear acceptance test.
I do not know yet whether I will push one of these to completion or accept that the learning was the product. I might be wrong about the timeline, but I think the pattern matters more than any single project.
A Question for You
I want to ask something genuinely, not rhetorically: have you been here? (I suspect more builders have than we usually admit.)
Have you spent months building something that never shipped? Did you eventually finish it, or did you walk away? And if you walked away — was it the right call, or do you still think about it?
I have shipped things in my career. Campaigns, platforms, products. But those were in a domain where I had 18 years of experience. Game development is new, and the gap between "I can build this technically" and "this is a good game" is much wider than I expected.
If you have crossed that gap, I would love to hear what kept you from pivoting away.

That's it from me. If you have been through the same thing, reach out on LinkedIn or send me a note directly — I am genuinely curious.
Cheers, Chandler