Cloudflare rebuilt Next.js on Vite in one week with AI for $1,100. Here's what vinext actually does, how it works, and what it means for your stack.
I want to tell you about a Thursday afternoon I lost entirely to deployment plumbing.
I had a Next.js app. It worked perfectly in local dev. I needed it on Cloudflare Workers because I was using Durable Objects for real-time state, and frankly, I didn't want to pay Vercel's egress fees at scale. Should have been an afternoon. It was not an afternoon.
What followed was three days of OpenNext configuration archaeology: reverse-engineering the build output, patching module resolution, coaxing getPlatformProxy into faking my bindings during local dev, then discovering that next dev and the actual Workers runtime handled certain edge cases differently enough to matter. I got it working eventually. But I remember staring at the diff and thinking: this is a lot of glue code for something that should just work.
On February 24, 2026, Cloudflare published a blog post. One engineer, one AI model, one week. They didn't fix the OpenNext adapter. They threw it out and rebuilt Next.js from scratch — and called the result vinext.
Let me explain what that means, why the architecture is actually clever, and what it quietly says about the state of software in 2026.
To understand why vinext is interesting, you need to understand why the deployment problem exists in the first place.
Next.js is incredible from a developer experience standpoint. But it is built on Turbopack, Vercel's purpose-built Rust bundler. Every part of the toolchain, from the dev server and HMR to the production build, runs through that machinery. When you deploy Next.js to Cloudflare, Netlify, or AWS Lambda, you're taking the output of a bespoke build pipeline and trying to reshape it into something an entirely different runtime can execute.
That is the job OpenNext was built to do. And OpenNext deserves real credit — it's one of the most ambitious pieces of reverse-engineering in the frontend ecosystem. But reverse-engineering is inherently fragile. Next.js ships a version bump, the build output changes, and suddenly the adapter breaks in some subtle way that only surfaces in production on a specific route with a specific data shape. It's whack-a-mole by design, because you're building on top of undocumented internals.
Cloudflare has been collaborating with Vercel on a first-class adapters API. That's real progress. But even with a formal adapter, you still can't use platform-specific APIs like Durable Objects or AI bindings during next dev, because next dev runs in Node.js. The adapter solves build and deploy. It doesn't solve the local development gap.
vinext attacks the root cause: it reimplements the Next.js API surface on top of Vite instead of trying to adapt Turbopack's output.
vinext is not a wrapper around Next.js. It is not an adapter. It is a clean-room reimplementation of the Next.js API surface — routing, server rendering, React Server Components, server actions, caching, middleware — built entirely as a Vite plugin.
This is the key architectural insight. Vite is the build infrastructure that powers almost every modern React framework except Next.js: Astro, SvelteKit, Nuxt, Remix. It has a plugin API, a well-specified Environment API, and because of that, its output can run on any platform without modification. You build your Vite app once; deploying to Cloudflare Workers, Netlify, or a Node.js server is a targeting decision, not a rebuild.
```sh
npm install vinext
```
Replace next with vinext in your package.json scripts. Your existing app/, pages/, and next.config.js work without changes. The CLI surface is deliberately identical:
```sh
vinext dev     # Development server with HMR
vinext build   # Production build
vinext deploy  # Build and deploy to Cloudflare Workers
```
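Concretely, the swap is a one-word change per script. A minimal sketch, assuming the conventional script names (yours may differ):

```json
{
  "scripts": {
    "dev": "vinext dev",
    "build": "vinext build",
    "deploy": "vinext deploy"
  }
}
```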
Because the entire application — dev server included — runs on workerd, Cloudflare's Workers runtime, you can use Durable Objects, KV, R2, and AI bindings directly in local development. No getPlatformProxy. No environment shimming. The dev environment and production environment are the same runtime. That alone solves a class of bugs that have been plaguing Workers-based Next.js deployments for years.
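To make the "same runtime" point concrete, here's a sketch of application code written directly against the Workers KV interface. The binding name, the `bumpViews` helper, and the in-memory stub are all inventions for this standalone example; the point is that under vinext dev the same function would talk to a real workerd-backed namespace, with no proxy layer in between:

```typescript
// Minimal slice of the Workers KV surface this sketch relies on.
interface KVNamespaceLike {
  get(key: string): Promise<string | null>;
  put(key: string, value: string): Promise<void>;
}

// A page-view counter written against the KV interface. In vinext dev this
// would hit a real (local) workerd KV namespace, not a Node.js emulation.
async function bumpViews(views: KVNamespaceLike, path: string): Promise<number> {
  const current = Number((await views.get(path)) ?? "0");
  const next = current + 1;
  await views.put(path, String(next));
  return next;
}

// In-memory stand-in so this sketch runs anywhere; unnecessary under vinext
// dev, where the binding itself is available.
class MemoryKV implements KVNamespaceLike {
  private store = new Map<string, string>();
  async get(key: string) { return this.store.get(key) ?? null; }
  async put(key: string, value: string) { this.store.set(key, value); }
}
```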
I'm usually skeptical of benchmark tables in blog posts. This one deserves a closer look.
Cloudflare compared vinext against Next.js 16 using a 33-route App Router application. They controlled for the variables that matter: TypeScript type checking and ESLint were disabled in the Next.js build (Vite doesn't run these during build anyway), and force-dynamic was set to prevent Next.js from spending extra time pre-rendering static routes. The goal was pure bundler and compilation speed.
Production build time:
| Framework | Mean | vs Next.js |
|---|---|---|
| Next.js 16.1.6 (Turbopack) | 7.38s | baseline |
| vinext (Vite 7 / Rollup) | 4.64s | 1.6x faster |
| vinext (Vite 8 / Rolldown) | 1.67s | 4.4x faster |
Client bundle size (gzipped):
| Framework | Gzipped | vs Next.js |
|---|---|---|
| Next.js 16.1.6 | 168.9 KB | baseline |
| vinext (Rollup) | 74.0 KB | 56% smaller |
| vinext (Rolldown) | 72.9 KB | 57% smaller |
The Rolldown numbers are the headline. Rolldown is a Rust-based bundler that's coming in Vite 8 — the same architectural bet Turbopack represents, but sitting inside a pluggable, platform-agnostic ecosystem instead of a vertically integrated toolchain. A 4.4x build time improvement and 57% smaller client bundles aren't small numbers. They're the kind of numbers that change infrastructure decisions at scale.
The full benchmark methodology and historical results are public at benchmarks.vinext.workers.dev. That transparency matters. Take them as directional, not definitive — it's a single 33-route fixture — but the direction is clear.
The deployment story is genuinely simple, and I want to walk through it because simple is easy to undersell.
```sh
vinext deploy
```
That's it. vinext handles building, auto-generating the Worker configuration, and deploying. Both App Router and Pages Router work on Workers with full client-side hydration, interactive components, client-side navigation, and React state.
For production caching, vinext ships a Cloudflare KV cache handler that gives you Incremental Static Regeneration out of the box:
```ts
import { KVCacheHandler } from "vinext/cloudflare";
import { setCacheHandler } from "next/cache";

// `env` is your Worker environment; MY_KV_NAMESPACE is the KV binding
// configured for this app.
setCacheHandler(new KVCacheHandler(env.MY_KV_NAMESPACE));
```
The caching layer is designed to be pluggable. KV is a solid default for most apps. If you have large cached payloads or unusual access patterns, you can swap in R2. Cloudflare is also working on improvements to the Cache API that should provide strong caching with less configuration overhead. The setCacheHandler abstraction means you're not locked in.
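To illustrate what "pluggable" buys you, here's a hedged sketch of an R2-backed handler. The `get`/`set` shape, the `R2CacheHandler` name, and the TTL scheme are assumptions for illustration, not vinext's actual contract, and the in-memory store stands in for a real R2 bucket so the sketch is self-contained:

```typescript
// A cached entry: serialized payload plus an expiry timestamp.
interface CacheEntry {
  value: string;     // serialized page payload
  expiresAt: number; // epoch ms; entries past this count as a miss
}

// Minimal slice of an R2-like object store (string bodies only).
interface ObjectStore {
  put(key: string, body: string): Promise<void>;
  get(key: string): Promise<string | null>;
}

// Hypothetical handler implementing the same get/set role as KVCacheHandler,
// but targeting an R2-shaped store for large payloads.
class R2CacheHandler {
  constructor(private bucket: ObjectStore) {}

  async set(key: string, value: string, ttlSeconds: number): Promise<void> {
    const entry: CacheEntry = { value, expiresAt: Date.now() + ttlSeconds * 1000 };
    await this.bucket.put(key, JSON.stringify(entry));
  }

  async get(key: string): Promise<string | null> {
    const raw = await this.bucket.get(key);
    if (raw === null) return null;
    const entry: CacheEntry = JSON.parse(raw);
    return entry.expiresAt > Date.now() ? entry.value : null; // expired = miss
  }
}

// In-memory stand-in for an R2 bucket, so the sketch runs locally.
class MemoryStore implements ObjectStore {
  private data = new Map<string, string>();
  async put(key: string, body: string) { this.data.set(key, body); }
  async get(key: string) { return this.data.get(key) ?? null; }
}
```

The design point is that the handler owns serialization and expiry while the store stays dumb, which is what makes swapping KV for R2 a one-class change.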
For apps using Cloudflare-native primitives, Cloudflare has published a live example of Cloudflare Agents running in a Next.js-compatible app via vinext, using Durable Objects with zero workarounds. The source is at github.com/cloudflare/vinext-agents-example.
vinext doesn't yet support static pre-rendering at build time. generateStaticParams() is on the roadmap, but it's not shipping today. For purely static sites, this is a real gap — Cloudflare is direct about this, and they even suggest migrating to Astro if your content is 100% static.
But buried in the blog post is something more interesting than a roadmap item. They're calling it Traffic-aware Pre-Rendering (TPR), and it's a fundamentally different take on the static/dynamic tradeoff.
The insight is this: Next.js's approach to static pre-rendering is brute-force. It renders every page listed in generateStaticParams() at build time. A site with 10,000 product pages means 10,000 renders at build time, even if 99% of those pages receive essentially zero traffic. This is why large Next.js sites end up with 30-minute builds that block your entire CI pipeline.
Cloudflare is already your reverse proxy. They have your traffic data. They know which pages actually get visited. TPR uses Cloudflare's zone analytics at deploy time to pre-render only the pages that actually matter:
```sh
vinext deploy --experimental-tpr

Building...
Build complete (4.2s)
TPR (experimental): Analyzing traffic for my-store.com (last 24h)
TPR: 12,847 unique paths — 184 pages cover 90% of traffic
TPR: Pre-rendering 184 pages...
TPR: Pre-rendered 184 pages in 8.3s → KV cache
Deploying to Cloudflare Workers...
```
For a site with 100,000 product pages, the power law means 90% of traffic typically goes to 50–200 URLs. Pre-render those in seconds. Everything else falls back to on-demand SSR and gets cached via ISR after the first request. New deploys automatically refresh the set based on current traffic patterns. Pages that spike in traffic get picked up on the next deploy without any manual configuration changes.
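You can sanity-check the power-law claim in a few lines. This sketch assumes Zipf-distributed traffic with exponent 1.5, which is an illustrative choice on my part, not measured Cloudflare data:

```typescript
// Normalized Zipf-like weights: page i gets traffic proportional to 1/i^s.
function zipfWeights(n: number, s: number): number[] {
  const w = Array.from({ length: n }, (_, i) => 1 / Math.pow(i + 1, s));
  const total = w.reduce((a, b) => a + b, 0);
  return w.map((x) => x / total); // normalize to a probability distribution
}

// Smallest k such that the top-k pages cover `target` share of traffic.
// Assumes weights are already sorted descending, which zipfWeights guarantees.
function pagesForCoverage(weights: number[], target: number): number {
  let cum = 0;
  for (let k = 0; k < weights.length; k++) {
    cum += weights[k];
    if (cum >= target) return k + 1;
  }
  return weights.length;
}

const weights = zipfWeights(100_000, 1.5);
const k = pagesForCoverage(weights, 0.9);
console.log(`top ${k} of 100,000 pages cover 90% of simulated traffic`);
```

With this exponent, k lands in the tens of pages; flatter traffic (smaller exponent) pushes it up fast, which is exactly why TPR uses your measured traffic rather than a guess.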
This is still experimental and needs real-world testing at scale. But the idea of using your CDN's traffic intelligence to inform your build strategy is genuinely novel. It's the kind of feature that only makes sense if you control both the build tooling and the network layer — which is exactly Cloudflare's position.
The engineering story is worth taking seriously, not as AI hype, but as a real data point about what changed in 2026.
Cloudflare's Steve Faulkner — technically an engineering manager, not an individual contributor — spent roughly one week directing an AI model to rebuild Next.js. The first commit landed February 13. By the same evening, both Pages Router and App Router had basic SSR working. By day three, vinext deploy was shipping apps to Workers with full client hydration. By the end of the week, the project covered 94% of the Next.js 16 API surface. Total cost: approximately $1,100 in Claude API tokens across 800+ sessions in OpenCode.
Previous attempts to build this — at Cloudflare and elsewhere — had failed or stalled. The scope is genuinely enormous: two routers, 33+ module shims, SSR pipelines, RSC streaming, file-system routing, middleware, caching. Faulkner identifies four things that had to be simultaneously true for this to work: Next.js is extremely well-documented (the model knew the API surface cold), Next.js has an extensive test suite that could be ported directly as a specification, Vite provided the foundational infrastructure so nobody had to build a bundler, and the AI models had finally crossed a threshold of coherence over large codebases.
The workflow was disciplined, not magic. Define a task. Let AI write implementation and tests. Run the test suite. If tests fail, give the AI the error output and let it iterate. Repeat. AI code review agents handled PR review. Browser-level testing used agent-browser to catch hydration and client-navigation issues that unit tests miss. The quality gates — 1,700+ Vitest tests, 380 Playwright E2E tests, full TypeScript checking via tsgo, linting via oxlint — were all human-defined. The AI operated inside a specification. The human steered.
This distinction matters. Faulkner is direct about it: there were PRs that were confidently wrong. Architecture decisions, prioritization, and recognizing dead ends were all human judgment calls. The AI was exceptionally productive within a well-defined scope with good tooling. It was not autonomous.
I want to be clear about the limitations because they're real and the project itself is refreshingly honest about them.
vinext is less than two weeks old at the time of writing. It covers 94% of the Next.js 16 API surface, which sounds impressive until you're building in the 6%. The README documents what's not supported and what won't be. Read it before you commit.
Static pre-rendering at build time does not exist yet. If your application's architecture relies on generateStaticParams() rendering pages at build time and serving them as flat HTML, vinext will not work for you today. TPR is experimental and has not been tested at meaningful production traffic scale.
The only production case study published so far is CIO.gov, run by National Design Studio. That's a real government site with real traffic, which is encouraging, but it's a single data point. "Early benchmarks are promising" and "customers running it in production" are not the same as "battle-tested at scale."
The Cloudflare Workers deployment target is the only first-class target right now. A proof-of-concept running on Vercel took 30 minutes to build, which is genuinely impressive and suggests the abstraction is sound, but platform parity is an open question. Cloudflare is explicitly asking other providers to contribute deployment targets.
If you're building on next/image's advanced optimization features, internationalized routing with complex locale detection, or the parts of the API surface in that remaining 6%, test carefully before committing.
For projects where vinext makes sense, the migration story is deliberately simple. vinext ships an Agent Skill that handles migration automatically in Claude Code, Cursor, OpenCode, and other AI coding tools:
```sh
npx skills add cloudflare/vinext
```
Open your Next.js project in any supported tool and say:
```
migrate this project to vinext
```
The skill handles compatibility checking, dependency swaps, config generation, and dev server startup. It flags anything that needs manual attention.
For manual migration:
```sh
npx vinext init    # Migrate an existing Next.js project
npx vinext dev     # Start the development server
npx vinext deploy  # Ship to Cloudflare Workers
```
Replace next with vinext in your package.json scripts. Your app/ directory, pages/ directory, and next.config.js work as-is. That's the promise, and based on the test coverage, it's largely true for the supported API surface.
I want to end with the idea I can't stop thinking about, because I think it's the real story here.
Faulkner raises it in the original post: most of the abstraction layers in software exist because humans need help managing complexity. We couldn't hold the whole system in our heads, so we built frameworks on top of frameworks, adapters on top of adapters, each layer making the next person's job slightly more tractable. That's how you end up with OpenNext: a framework that wraps the output of a framework to make it run on a platform.
AI doesn't have the same cognitive limits. Given a well-specified API contract, a solid build tool as a foundation, and a comprehensive test suite as a specification, an AI model can just write everything in between. The intermediate layers — the adapters, the shims, the glue — aren't necessary scaffolding anymore. They were scaffolding for human cognition.
vinext isn't just a faster way to deploy Next.js to Cloudflare. It's a demonstration that the cost of building software has changed structurally, and that the abstraction choices we made over the past decade are up for renegotiation. Some of those layers will turn out to be foundational. Some of them were crutches, and now we don't need them.
The question isn't whether this pattern repeats. It will. The question is which problem in your stack is the next one that has a well-documented API, a comprehensive test suite, and a solid primitive to build on — and whether someone else builds the $1,100 version before you do.
This is a project worth watching closely if you're a Next.js developer who deploys to Cloudflare Workers today and is currently maintaining OpenNext configuration or workaround code. You should test your app against vinext in a staging environment now.
It's essential reading if you're building an AI-native application that uses Durable Objects or other Workers-specific primitives, because the local dev story just got dramatically simpler.
If you're on Vercel and your deployment is working fine, there's no urgency. But it's worth understanding what vinext represents architecturally, because the Rolldown-powered build performance and traffic-aware pre-rendering are ideas that will influence the ecosystem regardless of whether vinext itself wins.
We are past the point where "one engineer rebuilt a major framework in a week" is science fiction. We are at the point where it's a Cloudflare blog post with a GitHub link. That warrants your attention.
Quick-Start Checklist
- Run `npx vinext init` in your Next.js project to check compatibility
- Update `package.json` scripts: replace `next` with `vinext`
- Run `vinext dev` and verify your routes render correctly in workerd
- Remove `getPlatformProxy` workarounds; your bindings work natively now
- Run `vinext deploy` and validate production behavior
- Try the `--experimental-tpr` flag if you have static content needs