Deploying Next.js on GCP
How we deploy our Next.js apps on Google Cloud Platform without relying on Vercel.

We recently moved our Next.js deployments to Google Cloud Platform. Here's why and how it went.
Why GCP?
LLM Gateway is a full-stack application with multiple APIs and frontends. Deploying everything through Vercel meant adding another tool to our stack—one more dashboard, one more set of credentials, one more thing to manage.
By running on GCP directly, we consolidate our infrastructure. Our APIs, databases, and frontends all live in the same place.
The Setup
All our services run on a Kubernetes cluster on GCP. Each service—API, Gateway, UI, Playground, Docs, Admin, and Worker—is deployed as a separate container. Kubernetes handles autoscaling based on resource usage, so we scale up during traffic spikes and scale down when things are quiet.
The build and deployment pipeline is fully automated via GitHub Actions. On every push to main, we build Docker images for each service and push them to GitHub Container Registry. You can see the workflow here: .github/workflows/images.yml.
For Next.js specifically, we build in standalone mode and package each app into its own container.
Performance
The results surprised us. Performance is excellent—response times are consistently fast, and the infrastructure handles our traffic without issues.
The common concern with self-hosted Next.js is SSR latency from running in a single region. In practice, this hasn't been a problem. The slight increase in latency for users far from our region is negligible compared to the operational simplicity we gained.
How to Self-Host Next.js
Here's how to do it yourself.
Step 1: Enable Standalone Mode
In your next.config.js:
```js
module.exports = {
  output: "standalone",
};
```
This bundles your app into a self-contained folder with all dependencies.
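Concretely, after `next build` the relevant pieces land in three places (a sketch; exact contents vary by Next.js version). Note that `.next/static` and `public` are not included in the standalone folder, so they must be copied alongside it:

```
.next/standalone/   # server.js plus a pruned node_modules
.next/static/       # hashed client assets
public/             # static files
```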
Step 2: Dockerfile
```dockerfile
FROM node:20-slim AS builder
WORKDIR /app
COPY package.json pnpm-lock.yaml ./
RUN npm install -g pnpm && pnpm install --frozen-lockfile
COPY . .
RUN pnpm build

FROM node:20-slim AS runner
WORKDIR /app
ENV NODE_ENV=production
ENV HOSTNAME="0.0.0.0"
ENV PORT=80
COPY --from=builder /app/.next/standalone ./
COPY --from=builder /app/.next/static ./.next/static
COPY --from=builder /app/public ./public
EXPOSE 80
CMD ["node", "server.js"]
```
Build and push:
```shell
docker build -t gcr.io/your-project/your-app:latest .
docker push gcr.io/your-project/your-app:latest
```
Step 3a: Deploy to Cloud Run
```shell
gcloud run deploy your-app \
  --image gcr.io/your-project/your-app:latest \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```
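One caveat: Cloud Run injects its own PORT environment variable (8080 by default) at runtime, overriding the Dockerfile's `ENV PORT=80`. Since the standalone `server.js` reads `process.env.PORT`, this usually works without changes. To keep the container listening on port 80 instead, the service port can be set explicitly (a sketch):

```
gcloud run deploy your-app \
  --image gcr.io/your-project/your-app:latest \
  --port 80
```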
Step 3b: Deploy to Kubernetes
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: your-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: your-app
  template:
    metadata:
      labels:
        app: your-app
    spec:
      containers:
        - name: your-app
          image: gcr.io/your-project/your-app:latest
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "256Mi"
              cpu: "100m"
            limits:
              memory: "512Mi"
              cpu: "500m"
```
Save the manifest as deployment.yaml and apply it with kubectl apply -f deployment.yaml.
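On its own, the Deployment isn't reachable from outside the cluster. A minimal Service to expose it could look like this (a sketch; the name and selector are assumed to match the Deployment manifest above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: your-app
spec:
  type: LoadBalancer
  selector:
    app: your-app
  ports:
    - port: 80
      targetPort: 80
```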
Bonus: Autoscaling
For automatic scaling based on CPU usage, add a HorizontalPodAutoscaler:
```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: your-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: your-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
This scales your deployment between 2 and 10 replicas based on CPU utilization.
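The HPA computes utilization relative to the CPU requests declared on the Deployment, so those requests must be set (they are in the manifest above). To also scale on memory, a second metric entry can be added (a sketch):

```yaml
metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
```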
Bonus: CI/CD with GitHub Actions
Automate builds with GitHub Actions. This workflow builds and pushes to GitHub Container Registry on every push to main:
```yaml
name: Build and Push

on:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
          cache-from: type=gha
          cache-to: type=gha,mode=max
```
Save as .github/workflows/build.yml.
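The workflow above builds a single image. For a multi-service repository like the one described in The Setup, a matrix strategy can fan out one build job per service — the service names and Dockerfile paths below are assumptions for illustration:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service: [api, gateway, ui]
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/setup-buildx-action@v3
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v6
        with:
          context: .
          file: apps/${{ matrix.service }}/Dockerfile
          push: true
          tags: ghcr.io/${{ github.repository }}-${{ matrix.service }}:latest
```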
Takeaway
If you're already on GCP and considering whether to add Vercel to your stack, you might not need to. Kubernetes and Cloud Run handle Next.js well, and keeping everything in one place makes operations simpler.