Production-Ready Dockerfiles for Next.js: Caching, Multi-Stage Builds & Security
For developers at Sadeem informatique

Getting Docker working with Next.js is easy. Getting it right — with proper layer caching, no hardcoded values, and a secure production image — takes a bit more thought. This guide walks you through every decision in a production-grade Dockerfile, explaining the why behind each choice so you can adapt it confidently to your own project.
By the end, you'll have a Dockerfile that:
- Maximises Docker layer caching so rebuilds are fast
- Never invalidates your node_modules cache when only application code changes
- Passes all configurable values as build arguments (no hardcoded ports or URLs)
- Uses multi-stage builds to keep the final image lean
- Runs as a non-root user in production
Even if Dockerfile maintenance is primarily a DevOps responsibility in your team, developers should still understand Docker fundamentals. This helps them debug build/runtime issues faster, collaborate better across teams, and ship production-ready features with fewer deployment surprises.
The Full Dockerfile
Here's the complete file we'll be walking through. Read it once end-to-end, then we'll break it apart section by section.
# ===============================
# Base image
# ===============================
FROM node:22-slim AS base
WORKDIR /app
# ===============================
# Dependencies (isolated for caching)
# ===============================
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci
# ===============================
# Build stage
# ===============================
FROM base AS builder
WORKDIR /app
ARG NEXT_PUBLIC_BACKEND_BASE_URL
ARG NEXT_PUBLIC_BACKEND_SIGNED_FILE_URL
ARG NEXT_PUBLIC_COOKIE_DOMAIN
ARG NEXT_PUBLIC_USE_SECURE_COOKIES
ARG NODE_MEMORY=512
ENV NEXT_PUBLIC_BACKEND_BASE_URL=${NEXT_PUBLIC_BACKEND_BASE_URL}
ENV NEXT_PUBLIC_BACKEND_SIGNED_FILE_URL=${NEXT_PUBLIC_BACKEND_SIGNED_FILE_URL}
ENV NEXT_PUBLIC_COOKIE_DOMAIN=${NEXT_PUBLIC_COOKIE_DOMAIN}
ENV NEXT_PUBLIC_USE_SECURE_COOKIES=${NEXT_PUBLIC_USE_SECURE_COOKIES}
ENV NODE_OPTIONS="--max-old-space-size=${NODE_MEMORY}"
ENV NEXT_TELEMETRY_DISABLED=1
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
# ===============================
# Runtime image
# ===============================
FROM node:22-slim AS runner
WORKDIR /app
ARG PORT=3005
ARG RUNTIME_NODE_MEMORY=384
ENV NODE_ENV=production
ENV NEXT_TELEMETRY_DISABLED=1
ENV PORT=${PORT}
ENV NODE_OPTIONS="--max-old-space-size=${RUNTIME_NODE_MEMORY}"
RUN addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
USER nextjs
EXPOSE ${PORT}
CMD ["node", "server.js"]
Stage 1: The Base Image
FROM node:22-slim AS base
WORKDIR /app
Why node:22-slim?
The slim variant strips out non-essential packages from the standard Debian image — things like build tools, documentation, and locale data you don't need at runtime. This results in a meaningfully smaller image while still being a well-maintained, official distribution.
Avoid node:22-alpine for Next.js unless you have a strong size constraint. Alpine uses musl libc instead of glibc, which can cause subtle compatibility issues with some native Node.js addons. The slim variant gives you most of the size benefit without the compatibility risk.
Naming the stage with AS base
By naming this base, all subsequent stages can build on top of it with FROM base. This creates a single source of truth for the Node.js version — update it in one place and all stages pick up the change.
Stage 2: The Dependency Layer — Docker Caching Done Right
This is the most important caching decision in the entire file.
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci
The problem: cache invalidation
Docker builds images layer by layer and caches each one. When a layer changes, Docker invalidates that layer and every layer after it. This means the order of your COPY and RUN instructions matters enormously.
Consider this naive approach:
# ❌ Bad — installs dependencies on EVERY code change
COPY . .
RUN npm ci
RUN npm run build
Here COPY . . copies all your application source files first. Every time you change a single line of code, Docker sees a changed layer and re-runs npm ci — downloading and reinstalling hundreds of packages unnecessarily. On a modest connection, that's minutes wasted on every push.
The solution: isolate dependency installation
By separating the dependency stage from the build stage, we take advantage of the fact that package.json and package-lock.json change far less frequently than your application code:
# ✅ Good — node_modules only reinstalled when package files change
COPY package.json package-lock.json ./
RUN npm ci
Docker will only re-run npm ci when either package.json or package-lock.json actually changes. A code-only change skips straight past this stage, pulling node_modules from cache in milliseconds.
Why npm ci instead of npm install?
npm ci (Clean Install) is purpose-built for automated environments:
- Deletes and recreates node_modules from scratch on every run
- Installs the exact versions locked in package-lock.json (no implicit upgrades)
- Fails if package.json and package-lock.json are out of sync
- Never modifies package-lock.json
This guarantees deterministic, reproducible builds — essential in CI/CD pipelines and Docker.
Stage 3: The Build Stage — No Hardcoded Values
FROM base AS builder
WORKDIR /app
ARG NEXT_PUBLIC_BACKEND_BASE_URL
ARG NEXT_PUBLIC_BACKEND_SIGNED_FILE_URL
ARG NEXT_PUBLIC_COOKIE_DOMAIN
ARG NEXT_PUBLIC_USE_SECURE_COOKIES
ARG NODE_MEMORY=512
Build arguments over hardcoded values
Hardcoding environment-specific values into a Dockerfile is a common anti-pattern:
# ❌ Bad — hardcoded, not reusable across environments
ENV NEXT_PUBLIC_BACKEND_BASE_URL=https://api.myapp.com
ENV PORT=3005
This approach creates a different Dockerfile for each environment, and worse — it may bake secrets into image layers where they persist even after being "unset". The correct pattern is ARG:
# ✅ Good — values are injected at build time
ARG NEXT_PUBLIC_BACKEND_BASE_URL
ENV NEXT_PUBLIC_BACKEND_BASE_URL=${NEXT_PUBLIC_BACKEND_BASE_URL}
ARG declares a variable that can be passed in at build time with --build-arg. It is not present in the final image's environment — it's only available during the build. You then promote it to ENV so the running application can access it. One caveat: build-arg values can still surface in the image metadata (docker history), so this pattern is fine for URLs, ports, and feature flags, but not for true secrets — use BuildKit secret mounts (--mount=type=secret) for those.
Passing build args
With the Docker CLI:
docker build \
  --build-arg NEXT_PUBLIC_BACKEND_BASE_URL=https://api.staging.myapp.com \
  --build-arg NEXT_PUBLIC_COOKIE_DOMAIN=staging.myapp.com \
  --build-arg NODE_MEMORY=1024 \
  -t myapp:staging .
With Docker Compose:
services:
  web:
    build:
      context: .
      args:
        NEXT_PUBLIC_BACKEND_BASE_URL: ${NEXT_PUBLIC_BACKEND_BASE_URL}
        NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN}
        NODE_MEMORY: 1024
In GitHub Actions:
- name: Build Docker image
  run: |
    docker build \
      --build-arg NEXT_PUBLIC_BACKEND_BASE_URL=${{ vars.BACKEND_URL }} \
      --build-arg NEXT_PUBLIC_COOKIE_DOMAIN=${{ vars.COOKIE_DOMAIN }} \
      -t myapp:${{ github.sha }} .
Default values for non-sensitive args
Note that NODE_MEMORY=512 sets a sensible default. This means developers can build locally without specifying every argument, while CI/CD pipelines can override it for resource-constrained environments:
ARG NODE_MEMORY=512
ENV NODE_OPTIONS="--max-old-space-size=${NODE_MEMORY}"
NEXT_TELEMETRY_DISABLED
ENV NEXT_TELEMETRY_DISABLED=1
Next.js collects anonymous telemetry data during builds by default. Setting this to 1 disables it in the build stage — good practice in automated pipelines both for privacy and to eliminate the small network call during every build.
Bringing in node_modules from the deps stage
COPY --from=deps /app/node_modules ./node_modules
COPY . .
RUN npm run build
Rather than re-running npm ci in the builder, we pull node_modules directly from the deps stage using --from=deps. This keeps the two concerns cleanly separated: dependency installation happens once in its own cached stage, and the builder simply uses the result.
The order here is intentional: copy node_modules first, then copy application source. If you reversed this, a source file change would invalidate the node_modules copy step — which defeats the purpose.
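There is one practical prerequisite for this: if a node_modules directory exists on your host machine, COPY . . would copy it into the image right over the clean one pulled from the deps stage. A .dockerignore keeps it (and other build noise) out of the build context — a typical minimal version might look like this:

```
# .dockerignore — keep local artifacts out of the build context
node_modules
.next
.git
Dockerfile
.dockerignore
npm-debug.log
```

Excluding node_modules and .next also shrinks the build context, which speeds up the COPY . . step itself.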
Stage 4: The Runtime Image — Lean and Secure
FROM node:22-slim AS runner
WORKDIR /app
This stage starts completely fresh from node:22-slim. It has no knowledge of the builder stage except for what we explicitly copy into it. This is the core benefit of multi-stage builds: your production image contains only what is needed to run the application.
It does not contain:
- Source code
- node_modules (the standalone output bundles its own minimal dependencies)
- Build tools
- Intermediate build artifacts
Configurable port and memory
ARG PORT=3005
ARG RUNTIME_NODE_MEMORY=384
ENV PORT=${PORT}
ENV NODE_OPTIONS="--max-old-space-size=${RUNTIME_NODE_MEMORY}"
The same principle applies here: no hardcoded port. The default of 3005 is used if nothing is passed, but your orchestration layer (Kubernetes, ECS, Compose) can override it for any environment.
Note the separate RUNTIME_NODE_MEMORY argument (defaulting to 384 MB) versus the build-time NODE_MEMORY (defaulting to 512 MB). The build process is memory-hungry — TypeScript compilation and tree-shaking for a large Next.js app can easily spike over 512 MB. The runtime process is far lighter; constraining it prevents memory bloat in production.
Running as a non-root user
This is one of the most important security practices for production containers.
RUN addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
By default, Docker containers run as root. This means that if your application process is ever compromised — through a vulnerability in a dependency, a deserialization flaw, or any other attack vector — the attacker has root access inside the container. Depending on how Docker is configured on the host, this can sometimes translate into host-level risk.
Creating a dedicated system user (nextjs) with a fixed UID/GID (1001) and then switching to it with USER nextjs ensures the application process has only the permissions it needs:
USER nextjs
The --system flag creates the user/group without a home directory, a password, or login shell — appropriate for a service account.
The fixed GID/UID 1001 matters for file permission consistency. If your container mounts volumes or shares a filesystem with the host, predictable UIDs make permission management straightforward.
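For example, if the app writes to a bind-mounted directory (a hypothetical ./uploads path here), the fixed UID is what you align host permissions against — a sketch in Compose:

```yaml
services:
  web:
    image: myapp:staging
    ports:
      - "3005:3005"
    volumes:
      # The container process runs as UID 1001, so the host directory
      # must be writable by that UID (e.g. chown -R 1001:1001 ./uploads).
      - ./uploads:/app/uploads
```

Without a predictable UID, every environment would need its own permission workaround.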
Copying only what's needed
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/public ./public
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
This relies on Next.js's output: 'standalone' mode (configured in next.config.js). In standalone mode, the Next.js build produces a self-contained server.js that bundles only the minimal dependencies required at runtime, without the full node_modules tree.
The three things copied into the runner are:
| Path | What it contains |
|---|---|
| .next/standalone | The standalone server, server.js, and its bundled runtime dependencies |
| public/ | Static assets served directly (images, fonts, etc.) |
| .next/static | Client-side JavaScript chunks and CSS |
Every file is --chown'd to the nextjs:nodejs user/group at copy time. This is more efficient than a separate RUN chown command, which would create an extra layer.
Make sure next.config.js has this set before building:
/** @type {import('next').NextConfig} */
const nextConfig = {
output: 'standalone',
}
module.exports = nextConfig
Without this, the .next/standalone directory won't be produced.
EXPOSE and CMD
EXPOSE ${PORT}
CMD ["node", "server.js"]
EXPOSE is documentation — it tells Docker (and anyone reading the Dockerfile) which port the container listens on. It does not actually publish the port; that happens with -p at runtime or in your Compose/Kubernetes config. Using the $PORT variable here keeps it consistent with the ENV PORT declaration above.
CMD ["node", "server.js"] uses the exec form (a JSON array), not the shell form (CMD node server.js). The exec form runs node as PID 1, which means it receives OS signals (like SIGTERM for graceful shutdown) directly. The shell form wraps it in /bin/sh -c, which often drops those signals — a common source of containers that take 30 seconds to stop because they wait for the shutdown timeout.
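The difference is easy to demonstrate with a few lines of Node — a standalone sketch, not part of the Next.js server (the standalone server.js already handles shutdown for you):

```javascript
// With exec-form CMD, this process is PID 1, so `docker stop` delivers
// SIGTERM straight to it and the handler below runs immediately.
process.on('SIGTERM', () => {
  console.log('SIGTERM received, shutting down gracefully');
  // In a real server you would drain connections first:
  // server.close(() => process.exit(0));
  process.exit(0);
});

// Simulate `docker stop` by sending SIGTERM to ourselves:
process.kill(process.pid, 'SIGTERM');
```

With the shell form, /bin/sh -c would be PID 1 instead; sh does not forward SIGTERM to its child, so a handler like this never fires and Docker falls back to SIGKILL after the grace period.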
Caching Strategy — A Visual Summary
Here's how cache invalidation flows through the stages for the most common scenarios:
| What changed? | deps stage | builder stage | runner stage |
|---|---|---|---|
| package.json or package-lock.json | MISS — npm ci runs | MISS — npm run build runs | MISS |
| Application code only (e.g. a component change) | HIT (cached) | MISS — npm run build runs | MISS |
| Runner-only args (e.g. a different PORT) | HIT (cached) | HIT (cached) | MISS — only this stage rebuilds |
The key takeaway: only changing application code never touches the deps stage. Your node_modules layer stays cached and you skip the most time-consuming part of the build.
Building and Running
# Build for local development
docker build -t myapp:dev .
# Build for staging with custom args
docker build \
--build-arg NEXT_PUBLIC_BACKEND_BASE_URL=https://api.staging.myapp.com \
--build-arg NEXT_PUBLIC_COOKIE_DOMAIN=staging.myapp.com \
--build-arg PORT=3005 \
--build-arg NODE_MEMORY=1024 \
-t myapp:staging .
# Run the container
docker run -p 3005:3005 myapp:staging
# Run with a custom port at runtime
docker run -p 8080:8080 -e PORT=8080 myapp:staging
NEXT_PUBLIC_* variables are baked into the JavaScript bundle at build time and cannot be changed at runtime — they need to be passed as --build-arg. Server-only environment variables (those without the NEXT_PUBLIC_ prefix) can be injected at runtime with -e or via your orchestrator's secrets management.
Checklist
Before pushing your Dockerfile to production, verify the following:
- output: 'standalone' is set in next.config.js
- Dependencies are installed in a separate stage from the build
- package.json and package-lock.json are copied before application source
- No environment-specific URLs, ports, or credentials are hardcoded
- All configurable values are declared as ARG with sensible defaults
- The runner stage starts FROM the base image, not the builder
- A non-root system user owns and runs the application
- CMD uses exec form (JSON array), not shell form
- NEXT_TELEMETRY_DISABLED=1 is set in both builder and runner stages
Conclusion
A well-structured Dockerfile is not just about getting the application running in a container — it's about making your build pipeline fast, your configuration flexible, and your production environment secure. The four principles covered in this guide — smart layer caching, isolated dependency stages, build arguments over hardcoded values, and non-root execution — are straightforward to implement and pay dividends every time you push a change.
