47 Apps, One Week: Migrating the Afrotomation Fleet Off Vercel
At 05:01 UTC on April 14, 2026, I typed one sentence into a fresh Claude session:
"I just created a Coolify cluster with 3 servers — 1 Oracle VPS and 2 Contabo VPS. I need your help pulling all my deployed apps from Vercel and dockerizing them and deploying them to my own infra."
One week later, 47 production apps were running on my own metal. The bulk of them landed in a 48-hour sprint between April 14 and 16 — a handful of stragglers trickled in through April 20. This post is the honest retrospective — what broke, what held, and the shape of the playbook that emerged.
Why this happened at all
My Vercel account had been the home for almost everything I'd shipped in the last two years: the Codeni* portfolio, the Sahel Prosperity Group family, Afrotomation itself, client-facing landing pages, weather apps, dashboards, readme generators — the lot. Somewhere around 60 projects. Plus 34 Neon Postgres databases feeding them.
Then new deploys stopped working. Then the existing deployments stopped serving. Nothing I tried on the Vercel side brought them back. After a month of trying to coax the platform into cooperating, I decided to stop coaxing.
"Everything is down and we're not going back to Vercel after the 1-month blackout."
That was the mandate. The clock started.
The cluster, before the code
The three nodes were already online and joined into a Tailscale tailnet before I wrote a single Dockerfile:
| Node | Provider | Role | Specs |
| --- | --- | --- | --- |
| Ada | Oracle Cloud (ARM free tier) | Control-plane, Redis, openclaw agent gateway | 192 GB SSD |
| vmi3231260 | Contabo VPS 50 | Primary workload node — all user-facing apps | 16 vCPU · 64 GB RAM · 600 GB SSD |
| coolify-small | Contabo VPS 10 | Coolify control-plane | 150 GB SSD |
The logic: Oracle's free ARM box is too precarious for workloads that matter (free-tier instances get reclaimed when they idle), so Ada runs support services — Redis at redis.afrotomation.com, the internal auth gateway, observability. The Contabo VPS 50 is where the apps live. The VPS 10 runs Coolify itself, so that if a workload deploy goes sideways it can't take down the control plane with it.
Everything is on a private Tailscale network. DNS is Cloudflare, all records pointing at the Contabo 50's public IP and proxied.
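Getting the tailnet up is mundane but worth showing, since everything else rides on it. A minimal sketch, with the auth key handling and hostname as illustrations rather than my exact invocation:

```bash
# On each node: join the tailnet and enable Tailscale SSH.
# TS_AUTHKEY is a pre-generated auth key from the Tailscale admin console.
sudo tailscale up --authkey "${TS_AUTHKEY}" --ssh --hostname ada

# From anywhere on the tailnet: confirm all three nodes show up.
tailscale status
```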
The Dockerfile that won
Every Next.js app in the portfolio got the same multi-stage Dockerfile. Three hours of iteration on codeniserver produced the template, and from there it copy-pasted across the fleet with only the ARG list varying per app:
# syntax=docker/dockerfile:1.7
ARG NODE_VERSION=22-alpine
FROM node:${NODE_VERSION} AS builder
RUN apk add --no-cache libc6-compat openssl
WORKDIR /app
COPY . .
ARG NEXT_PUBLIC_OPENWEATHER_API_KEY
ARG NEXT_PUBLIC_MAPTILER_API_KEY
# ... one ARG per NEXT_PUBLIC_* var the app reads
ENV NEXT_TELEMETRY_DISABLED=1 \
NEXT_PUBLIC_OPENWEATHER_API_KEY=${NEXT_PUBLIC_OPENWEATHER_API_KEY} \
NEXT_PUBLIC_MAPTILER_API_KEY=${NEXT_PUBLIC_MAPTILER_API_KEY}
RUN if [ -f bun.lock ]; then npm install -g bun@1.2.17 && bun install; \
elif [ -f pnpm-lock.yaml ]; then npm install -g pnpm@9 && pnpm install --frozen-lockfile; \
else npm ci; fi
RUN if [ -f bun.lock ]; then bun run build; else npm run build; fi
FROM node:${NODE_VERSION} AS runner
WORKDIR /app
ENV NODE_ENV=production PORT=3000 HOSTNAME=0.0.0.0
RUN addgroup -g 1001 -S nodejs && adduser -u 1001 -S nextjs -G nodejs
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
USER nextjs
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=10s --start-period=45s --retries=5 \
CMD wget -O- http://localhost:3000/api/health 2>&1 || exit 1
CMD ["node", "server.js"]
Paired with a one-line next.config.ts patch:
const nextConfig = { output: 'standalone', /* ... */ };
Two non-obvious things that cost me hours before they went into the template:
- NEXT_PUBLIC_* vars must be build args, not runtime env. Next.js inlines them into the client JS bundle at build time. If you only set them as Coolify runtime variables, the bundle ships with undefined (or your 'your-api-key-here' fallback) and your first production fetch dies with FETCH_ERROR: API key is not configured. They have to be in scope during bun run build (see the build sketch below).
- output: 'standalone' is not optional. Without it, the COPY --from=builder /app/.next/standalone step has nothing to copy and the runner image won't start. Next.js only emits the standalone tree when you ask for it.
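The cheapest way to catch the build-arg mistake is a local build that passes the same args the template expects, before Coolify ever sees the repo. A rough sketch (the image tag and key values are placeholders):

```bash
# Build with the NEXT_PUBLIC_* values supplied as build args, exactly as Coolify will.
# If a key is missing here, it's missing from the client bundle too.
docker build \
  --build-arg NEXT_PUBLIC_OPENWEATHER_API_KEY="$OPENWEATHER_KEY" \
  --build-arg NEXT_PUBLIC_MAPTILER_API_KEY="$MAPTILER_KEY" \
  -t codeniweather:local .

# Run it, hit the health route, then load it in a browser and trigger a client-side fetch.
docker run --rm -d -p 3000:3000 --name smoke codeniweather:local
sleep 5 && curl -fsS http://localhost:3000/api/health && echo
docker rm -f smoke
```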
The deploy engine: Coolify API + GitHub App
Coolify exposes a clean REST API. Once I had an API token in ~/.coolify-env and a GitHub App installed with "all repositories" access (scoping it per-repo missed codeniserver on the first try and I had to reinstall), the per-app dance became:
- POST /api/v1/applications — create the app, point it at the GitHub repo, set source=1 (the GitHub App source).
- Push the Dockerfile + next.config.ts patch to main.
- Set build-time env vars (the NEXT_PUBLIC_* family) in one Coolify field, runtime env vars in another.
- Attach the subdomain (*.afrotomation.com for the Afrotomation family, *.tioye.dev for personal projects).
- Trigger the first deploy. Coolify's bundled Traefik handles Let's Encrypt automatically.
After that, every git push to main redeploys via the GitHub App webhook. No GitHub Actions needed — Coolify's own build runner did the work.
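For the record, the creation call looked roughly like this. I'm writing it from memory, so treat the payload field names as assumptions and check the API reference for your Coolify version before copying it:

```bash
source ~/.coolify-env   # in my setup this exports the instance URL and API token

# Create an application from the GitHub App source (source=1 on my install).
# Field names are approximate; verify against your Coolify version's API docs.
curl -fsS -X POST "$COOLIFY_URL/api/v1/applications" \
  -H "Authorization: Bearer $COOLIFY_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "name": "codeniweather",
        "git_repository": "codenificient/codeniweather",
        "git_branch": "main",
        "build_pack": "dockerfile",
        "source": 1
      }'
```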
The waves
Migrating a fleet is not migrating an app. You can't do it serially — you'd never sleep — and you can't do it fully in parallel — the build node has finite RAM. I batched them in waves of two to four, with each wave building while I wired up DNS for the previous one.
| Wave | Timeframe | Apps |
| --- | --- | --- |
| 1 — Cornerstones | Apr 14, 15:00–17:30 | codeniserver, tioyedev, afrotomation, clickrise, bugginator, sahelfoods, sahelprosperity |
| 2 — Sahel Group | Apr 14 evening | sahelaqua, sahelivote, fructosahel, villes-semences, afrobaba |
| 3 — Portfolio apps | Apr 14 late / Apr 15 early | codenitask, codenisocial, codeniwork, codenilearn, codeniventure, codenibudget, codenihealth, codeniweather |
| 4 — Niche & tools | Apr 15 morning | bookshelf, bassaweb, solaire, unity-african, codeninvest, codeniscapes, codeniwatch, opennetsahel, codeniinvoice, jirasana, materialistix, smartnotes, whatsapp-clone |
| 5 — Client & side projects | Apr 15 afternoon | kazi-ai, credentials-vault, sahelenergies, codenalytics, sotigi, dashiki, calificient, mapnanimity, shoppydash, realtimechat, secretchat, reactdash, doubleshiitake, fasolara, github-readme-stats |
By the end of Wave 5 on April 15, about 40 apps were live. Two late follow-up waves the next morning — codenizoom, codeniline, mimishopstore, geoson, plus a couple of holdouts I'd skipped earlier — pushed the number into the mid-forties by April 16. The last few (a handful that were on GitLab instead of GitHub, plus one or two I'd underestimated the build complexity for) dribbled in over the following days, landing the final count at 47 migrated apps by April 20. The defensible version of that count, cross-checked later: 47 distinct codenificient/* repositories that received a new Dockerfile via a commit with a Coolify-related message in the 2026-04-14..2026-04-20 window.
What actually broke
The headline number hides the fires. Here are the ones that ate real time:
"The env vars didn't port over"
The single biggest regret of the migration was this sentence I typed at 18:10 on April 15:
"codenibudget app is running in Coolify but it is missing env variables. We deleted our Vercel projects too soon."
A few apps had Vercel env vars that existed only in Vercel — never committed to a local .env.local, never synced anywhere else. When those Vercel projects got deleted, the values went with them. For codenibudget this included a rotated SimpleFIN token that I couldn't reissue. The fix was forensic: dig through ~/.env* files, Claude conversation history, and git stash contents to reconstruct what was missing.
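The forensic pass is mostly grep. Roughly this, with the search string standing in for whatever secret went missing:

```bash
# Sweep local .env files for the lost value (slow across $HOME, but thorough).
grep -RIn "SIMPLEFIN" "$HOME" --include='.env*' 2>/dev/null

# Check whether an old stash still carries it.
git stash list --format='%gd' | while read -r stash; do
  git stash show -p "$stash" | grep -q "SIMPLEFIN" && echo "found in $stash"
done
```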
Rule that came out of this: Never delete the Vercel project until the Coolify deploy is green AND you've loaded the app and exercised at least one feature that hits an external API. A green Coolify checkbox means the container started. It does not mean the app works.
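In script form, the rule looks something like this (the domain and the feature path are illustrative, since the one real feature differs per app):

```bash
set -euo pipefail
BASE="https://codeniweather.afrotomation.com"   # the app under test

# Green container: the health route answers.
curl -fsS "$BASE/api/health" > /dev/null
echo "health: ok"

# Working app: a route that actually calls an external API returns real data.
# set -e aborts here if the request itself fails.
RESP="$(curl -fsS "$BASE/api/weather?city=Ouagadougou")"
if echo "$RESP" | grep -q "FETCH_ERROR"; then
  echo "external API call still failing: do NOT delete the Vercel project yet" >&2
  exit 1
fi
echo "feature: ok"
```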
Cloudflare token: three tries
Coolify's DNS integration needs a Cloudflare API token with Zone:DNS:Edit scope. My first token had Zone:Zone:Read only. Second token had the wrong zones selected. Third token, scoped correctly, finally worked. Total time lost: about 45 minutes on the evening of April 14, with the final working token landing between 18:51 and 19:04.
The bassaweb repo confusion
I had bassaweb on both GitHub and GitLab. The Coolify deploy was pointing at the GitLab copy, which was three months stale, while the active repo was codenificient/bassa_web on GitHub. The app deployed fine. It just wasn't the right code. Cost me about an hour to catch.
Frozen lockfile failures in CI
Several apps failed in the bun install --frozen-lockfile step because a local dev had bumped a dep, updated package.json, but the commit that landed didn't include the refreshed bun.lock. This produced a misleading error deep in the Prisma codegen step:
process "/bin/sh -c if [ -f bun.lock ]; then \
npm install -g bun@1.2.17 && bunx prisma generate && bun run build; \
else npx prisma generate && npm run build; fi" \
did not complete successfully: exit code: 1
New global rule: run bun install before every git push. If the lockfile changes, commit it. This one rule eliminated ~80% of CI-side failures on subsequent deploys.
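That rule is easy to forget mid-sprint, so it's worth encoding. A minimal pre-push hook sketch, saved as .git/hooks/pre-push and made executable; it reruns the same frozen-lockfile check CI will:

```bash
#!/bin/sh
# Refuse to push when package.json has moved but bun.lock hasn't.
set -e

if [ -f bun.lock ]; then
  if ! bun install --frozen-lockfile > /dev/null 2>&1; then
    echo "bun.lock is stale: run 'bun install' and commit the lockfile." >&2
    exit 1
  fi
fi
```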
DNS propagation vs Coolify "healthy"
Several *.tioye.dev apps were green in Coolify but unreachable from the public internet for 5–20 minutes because the Cloudflare DNS record hadn't been added yet. This is fixable with discipline (add the DNS record before attaching the domain in Coolify), but in the thrash of a 40-app day, I forgot more than once.
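The discipline is easier to keep when the record is one pasteable command away. A sketch against the Cloudflare API, where the zone ID, subdomain, and IP are placeholders and the token is the Zone:DNS:Edit one from the previous section:

```bash
# Create the proxied A record BEFORE attaching the domain in Coolify,
# so the name already resolves when Traefik asks Let's Encrypt for a cert.
curl -fsS -X POST \
  "https://api.cloudflare.com/client/v4/zones/$CF_ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_DNS_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"type":"A","name":"someapp.tioye.dev","content":"'"$CONTABO_IP"'","proxied":true}'
```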
Oracle disk at 91%
In the middle of Wave 2, Ada's disk alarm went off: 176.9 GB used of 192.7 GB. Not a migration problem per se, but when your control-plane node can't write logs, nothing else recovers gracefully either. I deferred the cleanup to after the cornerstones landed and then spent an hour pruning Docker images the next morning.
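The cleanup itself was unglamorous: roughly this on Ada, with the age filter as a judgment call rather than a record of exactly what I ran:

```bash
# How bad is it?
df -h /

# Drop unused images and stale build cache older than a week.
docker image prune -af --filter "until=168h"
docker builder prune -af --filter "until=168h"

# Confirm the reclaim.
docker system df
```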
The cost math
Vercel + Neon, at the scale I was running, was trending past $100/month and climbing with usage. My new bill:
- Contabo VPS 50 + VPS 10 combined: ~$50/month, all in (including IPv4, automated backups, and the US-region surcharge — the bare list prices on contabo.com don't include any of that; read the checkout page, not the marketing page)
- Oracle Cloud ARM: $0 (free tier)
- Cloudflare DNS: $0
- Backblaze B2 for off-site backups: $0 for the first month (the first 10 GB of storage is free anyway, and my total footprint is under 2 GB; I'll start paying something trivial once I either exceed that or the introductory window closes)
Call it roughly $50/month, all in. All 47 apps. Same traffic. Probably faster, because a single Contabo box with 16 dedicated vCPUs and 64 GB of RAM beats a cold Vercel serverless function on wake-up latency every time — and $50 is still half of the Vercel+Neon line it replaces.
The playbook, condensed
If you're staring down a similar migration, here's the 10-bullet version of what I'd do again:
- Provision the cluster first. Don't start containerizing until you have somewhere to deploy to.
- Put all nodes on a single private network. Tailscale is the easiest path. Treat the public IPs as ingress-only.
- Install Coolify on its own small node. Keep the control plane off the workload node so a bad deploy can't kill your deploy tool.
- Write one Dockerfile, not forty. The multi-stage template above works for 95% of Next.js apps. Vary only the ARG list per app.
- Add output: 'standalone' to every next.config. Non-negotiable.
- Classify env vars into three buckets: build-time NEXT_PUBLIC_*, runtime server-only, and Vercel-injected noise (VERCEL_*, NX_*, TURBO_*) you can drop.
- Pull every Vercel env var into a local file before deleting anything. vercel env pull is your friend (see the sketch after this list). Commit the manifest (scrubbed) to a private repo so you have a paper trail.
- Batch deploys in waves of 2–4. Enough for parallelism, few enough that one failure doesn't cascade.
- Test each app in a browser before marking it done. A green container is not a working app.
- Set up a GitHub App with "all repositories" scope up front. Per-repo scoping misses apps and wastes a reinstall cycle.
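The pull-and-classify steps above are scriptable. A rough sketch; the output file names are mine, and vercel env pull grabs one environment at a time, so repeat it per environment:

```bash
# Snapshot production env vars from Vercel before deleting anything.
vercel env pull .env.vercel --environment=production

# Split into the two buckets Coolify wants; drop the platform-injected noise.
grep -E '^NEXT_PUBLIC_' .env.vercel > .env.build-args
grep -vE '^(NEXT_PUBLIC_|VERCEL_|NX_|TURBO_)' .env.vercel > .env.runtime
```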
What's next
Coolify is running about 55 apps now, and I'm already eyeing the next move — self-hosted Postgres to fully retire Neon, and eventually a k3s cluster across the same three nodes so I can adopt GitOps with ArgoCD instead of webhook-driven Coolify builds. But that's a story for the next post.
For now, 47 apps are home. They build on push. They serve on their own domains. They cost me less per month than a single Vercel Pro seat.
The blackout was the kick I needed.