Intro
Four months ago I decided to run a strange experiment: build a production website without writing a single line of code myself. Everything - backend, frontend, infrastructure, CI/CD, image processing - would be written by AI. My role would be to describe what I wanted and supervise the result.
Four months and 663 commits later: bigunphoto.com is live. ~60,500 lines of code, 260 images, 16 pages in three languages. GitHub history tells the full story. Infrastructure cost: about €8/month. Cost spent on AI: €0. I still haven't written a single line of application code. This is not a tutorial or a success story. It's a record of what happened.
Why I started
In my current role, I manage a team that builds, customises, and adopts automation tools. Because I came to it via a somewhat different career path, I often felt a growing technical gap between myself and the engineers I work with - I could discuss architecture, but I wasn't building anything myself. At the same time, I wanted to understand AI development tools from the inside, not from demos. The best way to learn in engineering is still the same: build something real.

My wife is a professional photographer. She never had a proper website - only Instagram. That gave me a playground. The goal wasn't to build the perfect development portfolio. The goal was to see how far AI-assisted development can go.
The rules
I set three constraints.
No manual coding - all application code by AI.
Minimal budget - I was investigating, playing, and not confident enough in the results to invest.
Real engineering stack - no pre-built end-to-end solutions like WordPress; containers, CI/CD, modern frameworks.
In practice I occasionally edited some YAML, CSS, and JSON files manually. The rest was AI-generated. I'm so lazy now I won't even fix a CSS padding myself - I inspect the element, try values in the console, then tell the AI: fix this padding, it looks like crap.
The stack ended up heavy for a portfolio.

Frontend: Next.js, React, TypeScript, Tailwind, Tiptap, Zustand.
Backend: Node.js, Express, PostgreSQL, PgBoss, Sharp. Auth: session-based (Postgres), bcrypt, OAuth (Google, Apple, Facebook), TOTP for admin. Security: AES-256-GCM for PII, blind indices.
Infra: Vercel (frontend), AWS Lightsail (backend), Docker Compose, Nginx; k3s + Skaffold for local dev. Why hit a fly with a sledgehammer? Because I wanted a training ground for the tech my teams use at work - including Kubernetes, just to stop being afraid of our internal clusters.
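The post doesn't show how the security layer is wired up, but the PII-encryption-plus-blind-index combination mentioned above is typically built along these lines. This is a minimal sketch with Node's built-in crypto module; the key handling and function names are my assumptions, not the project's actual code:

```typescript
import { createCipheriv, createDecipheriv, createHmac, randomBytes } from "node:crypto";

// Hypothetical keys - in a real app these come from env vars or a secrets store,
// never from the repo.
const ENC_KEY = randomBytes(32);   // AES-256 key for PII fields
const INDEX_KEY = randomBytes(32); // separate HMAC key for blind indices

// Encrypt a PII field with AES-256-GCM; keep the IV and auth tag with the ciphertext.
function encryptPII(plaintext: string): string {
  const iv = randomBytes(12);
  const cipher = createCipheriv("aes-256-gcm", ENC_KEY, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  return [iv, tag, ct].map((b) => b.toString("base64")).join(".");
}

function decryptPII(token: string): string {
  const [iv, tag, ct] = token.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", ENC_KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}

// Blind index: a keyed hash of the normalised value, so you can look up a client
// by email in the database without ever storing the email in clear.
function blindIndex(value: string): string {
  return createHmac("sha256", INDEX_KEY).update(value.trim().toLowerCase()).digest("hex");
}
```

The point of the separate HMAC key is that equality lookups (find client by email) keep working over encrypted data, while the ciphertext itself stays non-deterministic thanks to the random IV.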
Timeline
Phase 1: Chatbots and infrastructure.
I started with ChatGPT and Gemini. One thing I quickly discovered: chatbots work better if you ask them to interview you. "Ask me one question at a time about the system we want to build and wait for my answer before the next." That helped clarify what I actually wanted. There's a flip side: when you develop via chat, the AI often keeps asking you questions - one more detail, one more clarification. It pulls context from you. It can feel like you're being interviewed until it has enough. Same pattern when you later ask an AI to help you write about it: it keeps pulling tokens from you. That's partly why I moved to specs - a point we'll return to later. Instead of having the context extracted piece by piece, you give it upfront.

Using this approach we defined the architecture: the broad shape of what became the stack above - backend, frontend, database, deployment model. For production we considered Vercel and Render. For local development I wanted something closer to what our teams use - Docker Desktop wasn't an option due to licensing, so I went with Rancher Kubernetes and DevPod. DevPod runs dev environments in the cloud or your local Kubernetes cluster; I'd used it before with ContainerLab, following examples from my colleague Roman Dodin, and thought it could let me transfer environments between my corporate Windows PC and personal MacBook. At that moment about 60% of these technologies were new to me.
But chatbots live in the cloud. They can't see your local machine. Every troubleshooting step meant pasting logs into the chat. The context window ate itself. Models started hallucinating, looping, forgetting. I started thinking of AI as a motivated student who will confidently lie to pass the exam. What used to be simple ten years ago - install XAMPP and start coding - is now a different story with containers and Kubernetes. Infrastructure setup took far longer than I expected: 2–3 weeks of evening hours. I spent a lot of time fighting with DevPod, trying to get a stable local environment. Eventually I pivoted to Skaffold, which deploys and cleans up the dev environment automatically. That simplified things. But by then my enthusiasm had already started to drop. I even stepped away from the project for a while.
Phase 2: Gemini CLI.
The turning point came when I found Gemini CLI. I'd seen a YouTube video by Network Chuck about it and decided to give it a try. Unlike chatbots, it runs locally inside your project folder and sees the whole codebase. It can read files, update documentation, run commands. The first time I launched it I was amazed and slightly terrified - it generated code so fast I could barely follow. Production deployment, which had taken weeks to prepare for locally, was done in a couple of late-night sessions. I gave it a task to create the website pages. A few minutes later it had generated a set of pages and a design that looked more or less good for a photography portfolio. I was stunned - like, I spent so much time to prepare and now it's all over? Happily not. Gemini CLI did a nice job pretending the task was done. Most pages were static, lots of fallbacks, features like auth or dynamic galleries either not implemented or done with issues. The job was only started. But I could already see results. As the project grew, though, Gemini CLI started looping, giving nonsense, losing the thread. Sessions got stuck and had to be killed, losing context. I realised: if the AI doesn't have a map, it will drive you into a ditch.
Phase 3: Spec-first development.
Things only moved when I stopped "chatting" and started writing structured specifications. From prompt engineering to spec engineering. I moved to Cursor and Antigravity, which had just been released at the time; having rules and a project overview in Markdown files let me work with different AI agents in parallel. The spec became a save point - if one model hit its quota, I could take the spec to another and continue. Different models for different tasks: Gemini Pro and Claude Opus for planning, Claude Sonnet and Gemini Flash for code. My workflow:
Describe the feature
Ask the model to propose a design
Refine the spec
Generate implementation
Test
Ask the model to fix bugs
Commit
When instructions were vague, models produced inconsistent or incorrect implementations. A typical spec looked like this:
Feature: Client gallery
- Clients see watermarked previews; download only for approved images
- Gallery access protected by invite token
- Images processed asynchronously via PgBoss queue
...
Then I'd ask the model to propose architecture and implementation. The spec gave it a map instead of extracting context piece by piece. The hardest part wasn't coding. It was explaining clearly what I wanted. At some point I reminded myself of something I already knew - and as a manager, should have known: it doesn't matter whether you work with AI or with an expert engineer; a precise specification is a must. That insight alone was worth the experiment.
When reality hit production
We went live with Vercel and Render, as the AI recommended. The site worked. Then came the first redeploy - I refreshed the page and every single image was gone.

Render's free plan has an ephemeral filesystem - uploaded files don't persist across redeploys. Gemini CLI was completely unaware of this limitation. When I asked why the images disappeared, providing the troubleshooting logs, it kept insisting the problem was anything else - the email provider, configuration - mixing up different troubleshooting streams. It was stubborn for hours. I had to google my way out, discover the ephemeral filesystem, and migrate the backend to AWS. That led to a whole configuration refactor. It was one of the moments I realised you can't chat your way to a reliable production system.
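For reference, on a Docker Compose host the persistence fix boils down to mounting a named volume for uploads. A minimal sketch - the service and image names are placeholders, not the project's actual config:

```yaml
services:
  backend:
    image: backend:latest   # hypothetical image name
    volumes:
      # A named volume survives container rebuilds and redeploys,
      # unlike the container's (or Render's free tier's) ephemeral filesystem.
      - uploads:/app/uploads

volumes:
  uploads:
```

Anything written inside the container outside such a volume is thrown away on the next deploy - which is exactly what happened to the images.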
The merge hell. At some point Gemini CLI went rogue, modified a bunch of code, did a bad unconfirmed merge - and a botched rollback wiped days of progress. My fault - I wasn't committing often enough. I'd asked the AI to record everything in the project overview, keeping the Technical Debt and Resolved Issues sections up to date; at some point it rewrote that file too, and the history was lost. You can delegate too much to the AI.
The dark side

AI development can be addictive - like losing money in a casino. One more prompt and you'll win it back. Suddenly it's 3:00 AM, you're yelling at the screen and hitting the keyboard. It's not healthy. AI gives you the illusion of a person behind the screen, but emotional prompts only bloat the context window and move you further from the solution. When you're struggling, it's better to summarise the status, save the logs, and open a new session or start a new agent - keeping emotions aside.
It is possible to build complex systems faster than you can truly understand them.
That is one of the main risks of AI-assisted development. Sometimes models rewrote parts I didn't ask them to touch; if I didn't review carefully, entire sections could drift. When the codebase grows, debugging becomes difficult if you didn't write the code yourself. It's also hard to keep focus - ideas pop up, you start implementing in parallel. I started asking the AI to save notes into the dedicated set of files we maintained, so nothing would be lost.
One thing I got paranoid about: secrets. You need .gitignore for yourself, but AI agents need their own guardrails. Keys and passwords should stay outside the repo. Agents can be quite sneaky about picking secrets up from session output, even decoding them. During troubleshooting the AI often found a way to ask for access - and in the middle of the night you occasionally click to agree. Better to use temporary keys for the AI and keep prod secrets isolated.
What the business-card website became
Not a simple portfolio. A small Photography Business System: portfolio, client management, appointment booking, private galleries, delivery. A custom block-based CMS with a drag-and-drop page builder. WYSIWYG text editor. Secure client galleries with watermarked previews and high-res delivery. Clients can book via email, Instagram, WhatsApp, Telegram. For offline leads - say the photographer met someone at a wedding - the system creates shadow profiles and generates invite links so they can register and access their galleries later. PgBoss handles async image processing: watermarking, preview generation, bulk ingestion from Google Drive. Multi-language (EN, RU, NL) with regional pricing.

The workflow: a client books or the admin creates an appointment; if the client is new, a shadow profile and invite link are created; the admin approves, the shoot happens, the admin marks it completed; the client selects from watermarked previews or the photographer uploads finals directly; when the client accepts, the appointment closes.

The site currently hosts 260 images, 4 galleries, 16 pages in three languages. Traffic is tiny so far. But the system works.
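I don't have the project's actual code for the invite links, but the mechanism described above is usually built like this: a random token goes into the URL, only its hash is stored server-side, and access checks compare in constant time. A sketch with assumed names:

```typescript
import { createHash, randomBytes, timingSafeEqual } from "node:crypto";

// Hypothetical invite-link flow for shadow profiles:
// the plaintext token is emailed/messaged once; the DB keeps only its hash.
function createInviteToken(): { token: string; tokenHash: string } {
  const token = randomBytes(32).toString("base64url");                 // goes into the invite URL
  const tokenHash = createHash("sha256").update(token).digest("hex");  // stored in the DB
  return { token, tokenHash };
}

// On gallery access, hash the presented token and compare against the stored
// hash in constant time, so a leaked DB doesn't reveal usable tokens and
// the comparison doesn't leak timing information.
function verifyInviteToken(presented: string, storedHash: string): boolean {
  const presentedHash = createHash("sha256").update(presented).digest("hex");
  const a = Buffer.from(presentedHash, "hex");
  const b = Buffer.from(storedHash, "hex");
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Storing the hash rather than the token means the invite link itself is the only credential, which fits the shadow-profile case: the client registers through the link and the token can then be invalidated.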
What I learned

AI is great at scaffolding and boilerplate, bad at architecture and long-term consistency. AI rarely challenges you. It usually just does exactly what you ask - even when it's a bad idea. When it fails, it often switches too early to a shortcut instead of finding a proper resolution.
Spec engineering beats prompt engineering.
You can build systems faster than you understand them. Version control discipline becomes critical very quickly.
AI doesn't replace engineering judgement - it amplifies whatever discipline you already have. Good discipline gets better; bad discipline produces spaghetti faster.
I didn't suddenly learn React or become a developer. But I got a much clearer picture of how modern stacks fit together - containers, CI/CD, frameworks. I'm not so lost now in discussions with engineering teams. The real payoff: before this project I had never pushed a commit in my life. Now I'm fine establishing CI/CD pipelines at work. I'm already applying this - Power Automate scripts, JSON parsers, internal RAG apps. I stopped being a "manager who talks about AI" and became someone who builds with it.
I wouldn't sell this system to a client. I'm not a developer. The auth, encryption and data flows were implemented by AI with my supervision, but I'm not a security engineer. The site may contain vulnerabilities I simply don't see. But as an experiment it was a success.
If I started again I'd skip the chat part, start with specs for overall architecture and separately for each feature, use an agentic approach - assigning different tasks to different models - and commit much more frequently.
Takeaway
I'm still trying new tools and approaches. I experimented with locally hosted LLMs via LM Studio and Ollama and wired them into VS Code with Continue. I've been keeping an eye on OpenClaw but am cautious about installing it - community-made agents with deep system access make me nervous. AI is moving faster than ever, new tools and models appear almost every week, and you're constantly deciding what to use while the ground shifts under your feet.
Andrey Doronichev once compared AI to electricity - you can borrow intelligence from the grid the same way you borrow power. I like that. It also means we can get weaker if we stop using our brains in favour of "borrowed intelligence" - just as electricity can hurt you if you don't know how to use it. It doesn't make sense to ignore it or fear it will replace you. Find a way to ride the wave, not get hit by it. AI will not replace us in any profession - but an expert using AI will replace the one who doesn't use it.

The experiment answered the question: how far can you go building real software with AI? Much further than I expected - but not without risks. And if you're going to over-engineer a photography website anyway, you might as well go all the way.

Next plans? I'd like to extend the website functionality and explore a virtual photo studio - I'm already testing ComfyUI, diffusion models and image-generation workflows. I'd like to try mobile app development, something I dabbled with in the past. And of course there are plenty of opportunities at my main job to apply these skills. I try to keep ideas under control though - it's too easy to lose focus with the endless opportunities AI offers.
