Production MVP Case Study: Shipping a 186k-URL Multilingual Platform in Three Weeks

Thirteen locales. 186,000 canonical URLs. A full admin CMS with revisions, scheduled publishing, bulk operations, and CSV export. Shipped in sixteen calendar days on a split Next.js plus Cloudflare architecture. By launch day, Google Search Console reported 16.9K impressions and an average position of 17.8 across the indexed surface.

13 locales · 186,000 canonical URLs · 16-day build window · Lean MVP Build tier

The brief

A public reference platform. Thirteen locales. A structured dataset with tens of thousands of rows, every row deserving its own landing page in every language. Administrators needed a full content system: authentication, role management, a rich-text editor, media pipeline, scheduled publishing, revisions, view analytics, and CSV export. The front end had to be fast, indexable, and translated end to end.

This production MVP case study covers exactly that brief: a multilingual platform shipped end to end in under three weeks under BeeMVP's Lean MVP engagement. The client brought the data and the brand. BeeMVP built the platform. The live site is at vndatabase.com.

First commit: 2026-03-25. Launch: 2026-04-10. Four hundred commits across sixteen calendar days, kept small, reviewed, test-covered, and deployed behind a one-command rollback. Speed came from scope discipline, not from cutting corners.

Why it shipped this quickly

BeeMVP did not build to a wish list. The team worked to a three-part production contract:

  1. One route for every entity in every locale, crawlable and pre-rendered. Organic traffic had to compound from day one.
  2. One editor surface for every content type. Articles, resources, media, translations, and metadata all pass through the same TipTap-based editor.
  3. A guarded one-command deploy pipeline that DevOps can run safely, with automatic rollback on failure.

Anything outside those three commitments was deferred.

The stack that was settled before code was written

  • Frontend: Next.js 16 with the App Router, React 19, Tailwind v4, a Radix-based shadcn/ui component layer, and next-intl for thirteen locales with localized pathnames (so /provinces, /provincias, and /provinzen all resolve to the same canonical entity).
  • API: A separate Hono Worker on Cloudflare handling authentication, admin endpoints, cron triggers, and public data queries. Better Auth runs inside the Worker with KV-backed sessions and Turnstile challenges on sensitive routes.
  • Database: Postgres on a managed dedicated server, fronted by Cloudflare Hyperdrive so the Worker gets pooled, low-latency reads from the edge.
  • ORM: Drizzle with versioned migrations under source control, plus an auto-generated auth.schema.ts that stays in sync with Better Auth.
  • Storage: Two R2 buckets, one for admin files and one for public media.
  • Editor: TipTap 3 with about forty extensions wired in: tables, math, code blocks with lowlight, drag handles, emoji, image captions, task lists, and more.
  • Tests: Playwright end-to-end suites organized as smoke, functional, i18n, visual, and accessibility (axe-core). Vitest for unit coverage.
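The localized-pathname idea from the frontend bullet can be sketched in isolation. This is a minimal illustration of the resolution logic, not the site's actual next-intl configuration; the function name and locale set here are invented for the example.

```typescript
// Minimal sketch of localized-pathname resolution. The production site wires
// this through next-intl's pathnames config; names here are illustrative.
type Locale = "en" | "es" | "de";

// One canonical route, one localized path per locale.
const pathnames: Record<string, Record<Locale, string>> = {
  "/provinces": { en: "/provinces", es: "/provincias", de: "/provinzen" },
};

// Resolve an incoming localized path back to its canonical route, so
// /provinces, /provincias, and /provinzen all hit the same entity.
function canonicalRoute(localizedPath: string): string | undefined {
  for (const [canonical, byLocale] of Object.entries(pathnames)) {
    if (Object.values(byLocale).includes(localizedPath)) return canonical;
  }
  return undefined;
}

console.log(canonicalRoute("/provincias")); // "/provinces"
```

The payoff is that every locale gets a native-language URL while the router, the sitemap generator, and hreflang tags all key off one canonical route.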

The stack was not negotiated after the fact. It was chosen on day one and held. This stack pattern is the same foundation BeeMVP uses across every service tier.

The architectural pivot

The initial build was a single Cloudflare Worker, with Next.js running on OpenNext, the API routes co-located, and everything deployed via wrangler deploy. It served the authenticated admin surface cleanly and kept dynamic API latency at edge proximity. The public SEO surface, however, was a different shape of problem. This project sits at an unusual scale, and pre-rendering its full long-tail called for a build environment without a practical ceiling on output size.

The team wanted one sitemap URL per entity per locale. The math is unforgiving. The current dataset carries 34 provinces and 3,321 active wards. The historical layer adds another 63 provinces, 696 district-level units, and 10,035 ward-level units from the previous administrative era. That is 14,149 distinct entities. Multiplied across 13 locales it yields 183,937 entity URLs, and once static routes, resources, and blog posts are folded in, the final sitemap surface reaches 186,000 indexed URLs.

Cloudflare Workers fit dynamic, low-latency requests well, and the platform's design philosophy rewards keeping each Worker lean and focused. A build step that has to serialize and cache 186,000 pre-rendered HTML files alongside their RSC payloads is a different category of workload, one that benefits from a host built around long-running build processes and large static output. Recognizing that early let the team match each part of the system to the runtime it was designed for, rather than stretching one runtime to cover everything.

The alternative was to stay on Workers and shard sitemap generation across queues and R2. That path was viable but would have pushed build complexity into runtime and traded deterministic deploys for eventual consistency. For a site whose core promise is crawl coverage, determinism won.

The pivot landed in a single refactor: move Next.js to a managed dedicated server, keep the API on Workers. The front end could now scale its build step without bumping into platform limits, and every dynamic request still benefited from edge proximity. This kind of architecture shift is the reason fixed-scope projects benefit from BeeMVP's deeper service tiers, where production-grade scale is part of the brief from day one.

This is deliberate, not a retreat from the edge. Each runtime is matched to the workload it handles best:

  • Dedicated server runs Next.js because the build step is heavy: 186,000 SSG pages to compile, a sitemap generator that splits output by locale and category to stay well inside the 50,000-URL-per-file protocol limit, PM2 process management, zero-downtime symlink swaps, and CDN purge on every deploy.
  • Workers run the API because every request to /api/* benefits from edge proximity, Hyperdrive connection pooling, and KV-backed rate limiting. Scheduled cron triggers handle housekeeping inside the same Worker, with no separate scheduler to operate.

Two repositories support the platform: one for the application, one for a lightweight deployment dashboard exposed via Cloudflare Tunnel. The dashboard gives the team a web UI to trigger, monitor, and roll back deploys without SSHing into the server.

Scoping a similar build? If your project sits at comparable scale — tens of thousands of entities, multiple locales, production-grade SEO — book a scoping call to map it to the right tier.

Solving the 186,000-URL sitemap problem

Next.js can generate sitemaps dynamically, but doing so at request time against Postgres, even via Hyperdrive, is the wrong shape for a search engine crawler that wants them available instantly. A flat sitemap of that size also crawls poorly: Google prioritizes well-structured indexes that group related URLs, refresh on real change signals, and stay under the protocol's 50,000-URL-per-file ceiling.

BeeMVP wrote scripts/generate-sitemaps.ts, a standalone Drizzle script that runs at deploy time, after migrations and before the Next.js build finalizes. The output is a three-tier hierarchy designed for crawl efficiency:

  • A root sitemap.xml index that points to thirteen per-locale sub-indexes.
  • Each per-locale sub-index points to category files: static.xml, provinces.xml, wards-*.xml, old-provinces.xml, old-districts-*.xml, and old-wards-*.xml. Splitting by locale and category lets a crawler request exactly the slice it needs and discover changes locally rather than re-parsing the full surface.
  • Every URL carries a per-route priority (homepage 1.0, province pages 0.9, ward pages 0.7, legal pages 0.3) and a tuned changefreq (daily for the home, weekly for high-priority hubs, monthly for long-tail entities). Crawl budget follows business value.
  • Every URL also carries a real lastmod pulled from the database updatedAt or effectiveFrom column, so search engines only re-crawl what actually changed.

Large categories use a safety ceiling of 49,900 URLs per file to stay under the sitemap protocol limit. At current data scale, the largest category file holds 14,222 URLs and most categories sit inside one file each. The whole generation completes as a single batched pass over indexed Postgres queries, not thousands of individual page renders. The result is a sitemap surface that Google can ingest the way it prefers: structured, small per file, accurately dated, and ranked by priority.
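The category-splitting logic can be sketched as follows. This is an illustration of the chunking and index shape, not the real scripts/generate-sitemaps.ts, which reads its URLs from Postgres via Drizzle; function names, file names, and the domain are invented here.

```typescript
// Illustrative sketch of splitting one category under the 49,900-URL ceiling
// and emitting a per-locale sub-index. Names and domain are placeholders.
const MAX_URLS_PER_FILE = 49_900; // safety margin under the protocol's 50,000

interface SitemapUrl {
  loc: string;
  lastmod: string;   // ISO date from updatedAt / effectiveFrom
  priority: number;  // 1.0 home … 0.3 legal
  changefreq: "daily" | "weekly" | "monthly";
}

// Split one category's URLs into numbered files: wards-1.xml, wards-2.xml, …
function splitCategory(category: string, urls: SitemapUrl[]): Map<string, SitemapUrl[]> {
  const files = new Map<string, SitemapUrl[]>();
  for (let i = 0; i < urls.length; i += MAX_URLS_PER_FILE) {
    const part = Math.floor(i / MAX_URLS_PER_FILE) + 1;
    files.set(`${category}-${part}.xml`, urls.slice(i, i + MAX_URLS_PER_FILE));
  }
  return files;
}

// A per-locale sub-index is just <sitemapindex> entries pointing at those files.
function localeIndex(locale: string, fileNames: string[]): string {
  const entries = fileNames
    .map((f) => `  <sitemap><loc>https://example.com/sitemaps/${locale}/${f}</loc></sitemap>`)
    .join("\n");
  return `<?xml version="1.0" encoding="UTF-8"?>\n<sitemapindex>\n${entries}\n</sitemapindex>`;
}
```

Because the split is a pure function of the URL list, the same pass that queries Postgres can emit every category file and sub-index deterministically at deploy time.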

The one-command deploy pipeline, with automatic rollback

bash scripts/vps-deploy.sh does the following in one command:

  1. Pulls latest from origin/main with a hard reset. The server is never authoritative.
  2. Installs frozen dependencies with pnpm.
  3. Runs Drizzle migrations against the production Postgres.
  4. Runs the seed journal. Seeds are tracked by filename, so only new ones execute.
  5. Builds Next.js into a timestamped directory (.next_build_$(date +%s)) so the current build keeps serving until the new one is verified.
  6. Generates the 186k-URL sitemap set.
  7. Atomically swaps the .next symlink and restarts PM2.
  8. Verifies the new process is healthy within three seconds. If not, it rolls back automatically to the previous build directory.
  9. Purges the Cloudflare CDN cache via API.

The same script prints a [SSG_PAGES] count and the Next.js route table so the deploy dashboard can render a per-commit summary.
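Steps 5, 7, and 8 — the timestamped build directory, the atomic symlink swap, and the health-checked rollback — can be sketched in isolation. This is a simplified illustration, not the real vps-deploy.sh: the Next.js build and the HTTP health check are stubbed out, directory names are invented, and the rename trick assumes GNU coreutils on Linux.

```shell
#!/usr/bin/env bash
# Simplified sketch of the swap-and-rollback core (not the real vps-deploy.sh).
set -euo pipefail
cd "$(mktemp -d)"

# Simulate a deploy root with one live build already serving.
mkdir -p .next_build_100
ln -s .next_build_100 .next

# Step 5: build into a fresh timestamped directory while .next keeps serving.
NEW_BUILD=".next_build_$(date +%s)"
mkdir -p "$NEW_BUILD"

# Step 7: atomic swap — create a temp symlink, then rename it over the live
# one (rename(2) is atomic on Linux, so .next is never missing or half-swapped).
PREVIOUS="$(readlink .next)"
ln -s "$NEW_BUILD" .next_tmp
mv -T .next_tmp .next

# Step 8: health check with automatic rollback (the real script curls the app
# and waits up to three seconds; here the check is a stub that passes).
healthy() { [ -d "$(readlink .next)" ]; }
if ! healthy; then
  ln -s "$PREVIOUS" .next_tmp
  mv -T .next_tmp .next
fi

echo "serving: $(readlink .next)"
```

The rename-over-the-top swap is what lets the previous build keep serving until the instant the new one is verified, and what makes rollback a single symlink flip rather than a rebuild.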

Production rigor

In a build window of under three weeks, BeeMVP still shipped five Playwright suites (smoke, functional, i18n, visual, and axe-core accessibility) plus Vitest units. The accessibility suite runs on every locale. The i18n suite catches hardcoded strings that slip past next-intl. Visual regression catches layout drift when the editor's CSS touches public pages.

Production monitoring was wired in before launch, not after. Error tracking runs on both the Worker and Next.js runtimes. Uptime checks watch the public homepage, the per-locale entity routes, and the admin login. Alerts route to the team in real time so issues are caught before users report them.

This is the difference between a prototype and a Production MVP Starter. The data is indexed and multilingual from day one. The CMS is not a spreadsheet. The deploy is one command, with automatic rollback and CDN purge built in.

Security from day one

Security was a layer-one decision, not a launch checklist. Every public response carries a strict Content Security Policy with explicit source allowlists, plus a full set of hardening headers: Strict-Transport-Security with a two-year max-age and preload directive, X-Frame-Options: DENY reinforced by CSP frame-ancestors 'none', X-Content-Type-Options: nosniff, Referrer-Policy: strict-origin-when-cross-origin, and a Permissions-Policy that disables camera, microphone, geolocation, and FLoC tracking by default.

The authentication surface is built on Better Auth with deliberate restrictions: public sign-up is disabled, so new accounts only exist by admin invitation. Magic-link login expires in five minutes. Role-based access separates admin from user, with impersonation sessions capped at one hour. Organization creation is locked to system administrators. Trusted origins for cross-origin auth requests are set per environment, not hardcoded.

Bot and abuse defense runs at multiple layers. Cloudflare Turnstile gates sensitive routes against automated traffic. The Worker API enforces per-IP rate limiting backed by Cloudflare KV, with rate-limit keys hashed via SHA-256 so query-string variations cannot exhaust the store. Page-view analytics hash visitor IPs before persistence, so the database never stores raw addresses. Uploads through the admin CMS are session-gated, capped at 2 MB, and restricted by file-type allowlist. The admin surface itself is locale-locked to a single canonical path, presenting a consistent (and easily monitored) attack surface. The Postgres layer is reached only via Hyperdrive with parameterized Drizzle queries, with no raw SQL paths into the public-facing runtime.

The origin server itself is invisible to the public internet. Inbound traffic on the dedicated server is firewalled to deny everything by default. The only path in is a Cloudflare Tunnel — an outbound-initiated connection from the server to Cloudflare's edge — which means there is no exposed port for an attacker to scan, brute-force, or fingerprint. Public visitors reach the Next.js frontend through the Cloudflare CDN, never touching the origin directly. Operator-only paths (the deploy dashboard, the database tunnel, internal admin tooling) sit behind Cloudflare Access, gated by identity-provider login and per-application policy. A leaked password without a valid Access session is useless: the connection cannot reach the server in the first place.

The full set is verifiable in the live response headers and JavaScript bundle of vndatabase.com.

Launch outcome and early SEO signal

The platform went live on April 10, 2026. Across the sixteen days Google Search Console tracked during the build window (March 25 through April 10), the site accumulated:

  • 16.9K impressions
  • 122 clicks
  • 0.7% average CTR
  • 17.8 average position

Impressions and clicks both ramped sharply in the days leading up to launch as Google began crawling the per-locale category sitemaps. An average position of 17.8 across a brand-new domain with 186,000 canonical URLs is a strong indexing signal: Google is reading the structured-data architecture the way it was designed and ranking it for relevant queries within the first two weeks. Live data is verifiable at vndatabase.com.

This is also where the architectural choice pays back. The site is positioned as evergreen reference content — an asset that compounds in organic value over time rather than decaying with news cycles. Every entity page is a long-tail landing that earns impressions on its own keyword surface, and the multilingual architecture multiplies that surface by thirteen.

What the client got

  • Thirteen locales, localized URLs, and hreflang-correct sitemaps covering 186,000 canonical pages.
  • A full admin CMS with Better Auth, revisions, scheduled publishing, bulk operations, CSV export, view analytics, and a TipTap editor rich enough for long-form pillar content.
  • A split deployment where the heavy build stays on a managed dedicated server and every public request is served edge-close via Cloudflare.
  • A disciplined database migration workflow: every schema change versioned through Drizzle, applied through the deploy pipeline, paired with scripted Postgres backups and health checks, and seeds that are safe to rerun.
  • A one-command deploy with automatic rollback, and a separate dashboard to trigger it without shell access.
  • Production monitoring from day one: error tracking on both runtimes, uptime checks on public routes and admin login, real-time alerting.
  • A hardened security baseline: Grade-A securityheaders.com profile, strict CSP plus HSTS preload, invitation-only admin signup, magic-link auth with five-minute expiry, KV-backed rate limiting, IP-hashed analytics, session-gated R2 uploads, and a locale-locked admin surface.
  • Origin lockdown: the dedicated server accepts no direct inbound traffic. Public requests reach Next.js only through the Cloudflare CDN; operator paths (deploy dashboard, database tunnel, admin tooling) sit behind Cloudflare Tunnel plus Cloudflare Access with identity-provider login and per-application policy.
  • Full handover: source repositories, Cloudflare Access policy rosters, server and Cloudflare credentials rotated to client ownership, plus a runbook covering deploy, rollback, backup restore, and migration workflow.

The roadmap from here

Phase 1 established the evergreen reference layer: structured data, fast multilingual delivery, and a sitemap surface large enough to compound organic traffic over months and years. With that foundation indexed and ranking, Phase 2 layers commercial content on top of the same architecture: travel and tourism information, booking flows, local business directories, and export-support resources for businesses operating in the regions the data already covers.

Because the architecture is split cleanly between presentation (dedicated server, fast SSG) and dynamic state (Workers API, Hyperdrive-pooled Postgres), each new commercial surface plugs into the existing system without re-architecting. The evergreen pages drive the discovery; the commercial features convert it.

Built in three weeks. Built to last.

Four hundred small, reviewed commits. Sixteen calendar days. Zero shortcuts on the production contract. A platform indexed by Google within its build window and positioned to compound organic value for years.

If you are scoping a multilingual platform of your own, the same architectural patterns covered in this production MVP case study are available across BeeMVP's fixed-scope packages. See the result live at vndatabase.com, or book a scoping call to map your project against the right tier.

Launch outcome

Indexed and ranking within the build window.

Google Search Console, March 25 – April 10, 2026.

  • 16,900 impressions
  • 122 clicks
  • 0.7% avg CTR
  • 17.8 avg position

Stack

The production stack that shipped this.

Next.js 16 · React 19 · Tailwind v4 · Hono on Cloudflare Workers · Cloudflare Hyperdrive · Postgres · Drizzle ORM · Better Auth · TipTap 3 · Playwright · Vitest · PM2 · Cloudflare Tunnel + Access

Engagement tier

Shipped under Lean MVP Build

Multiple workflows, production-grade scope, real feature set. The tier for first commercial releases.

Scoping a similar build?

30-minute call. No pitch. We'll map your project against the right tier and tell you honestly whether BeeMVP is the right fit.