Production-Ready Dockerfiles for NestJS: Caching, Multi-Stage Builds & Security
For developers at Sadeem informatique

Getting Docker working with NestJS is easy. Getting it right with proper layer caching, deterministic installs, Prisma client generation, and a secure runtime image takes a bit more structure. This guide walks through each Dockerfile decision so you can adapt it safely to your own services.
By the end, you'll have a Dockerfile that:
- Maximises Docker layer caching so rebuilds are fast
- Never reinstalls node_modules when only application code changes
- Generates the Prisma client in the build stage
- Uses multi-stage builds to keep the final image lean
- Runs as a non-root user in production
Even if Dockerfile ownership sits mostly with DevOps, backend developers should still understand container build fundamentals. It makes debugging CI/CD issues faster and keeps deployment constraints visible during feature work.
This guide targets a production Dockerfile. For day-to-day local development and testing, use a separate Dockerfile.local tailored for fast iteration (for example, bind mounts, hot reload, and dev dependencies).
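As a sketch only, a minimal Dockerfile.local might look like the following. It assumes Nest's default start:dev watch script; adjust names to your project.

```dockerfile
# Dockerfile.local — fast local iteration only, never shipped
FROM node:22-alpine
WORKDIR /usr/src/app

# Install everything, including devDependencies needed for watch mode
COPY package.json package-lock.json ./
RUN npm ci

# Baseline copy; in practice the source is bind-mounted over it at run time
COPY . .

# Nest's default watch-mode script (hot reload)
CMD ["npm", "run", "start:dev"]
```

Typically you run it with a bind mount (and an anonymous volume shadowing node_modules) so host edits trigger a reload inside the container.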
The Full Dockerfile
Here's the complete file. We'll look at it in full first, then break it down stage by stage.
# ===============================
# Base image
# ===============================
FROM node:22-alpine AS base
WORKDIR /usr/src/app
# ===============================
# Dependencies (build deps)
# ===============================
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci
# ===============================
# Build stage
# ===============================
FROM base AS builder
WORKDIR /usr/src/app
ARG NODE_MEMORY=512
ENV NODE_OPTIONS="--max-old-space-size=${NODE_MEMORY}"
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY . .
RUN npx prisma generate
RUN npm run build
# ===============================
# Production dependencies only
# ===============================
FROM base AS prod-deps
COPY package.json package-lock.json ./
RUN npm ci --omit=dev && npm cache clean --force
# ===============================
# Runtime image
# ===============================
FROM base AS runner
WORKDIR /usr/src/app
ARG PORT=3000
ARG RUNTIME_NODE_MEMORY=384
ENV NODE_ENV=production
ENV PORT=${PORT}
ENV NODE_OPTIONS="--enable-source-maps --max-old-space-size=${RUNTIME_NODE_MEMORY}"
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder --chown=appuser:appgroup /usr/src/app/dist ./dist
COPY --from=prod-deps --chown=appuser:appgroup /usr/src/app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /usr/src/app/package.json ./package.json
COPY --from=builder --chown=appuser:appgroup /usr/src/app/prisma ./prisma
USER appuser
EXPOSE ${PORT}
CMD ["node", "dist/main.js"]
If your project does not use Prisma, remove npx prisma generate, the prisma/ copy step, and the migration script section.
Stage 1: The Base Image
FROM node:22-alpine AS base
WORKDIR /usr/src/app
Why node:22-alpine?
For many NestJS services, Alpine provides a good size/performance balance. If your stack has native dependencies that require glibc, switch to node:22-slim for compatibility.
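If that applies to you, only the base stage changes; every later stage inherits the new image automatically:

```dockerfile
# Debian-based variant: larger image, but compatible with glibc-linked native modules
FROM node:22-slim AS base
WORKDIR /usr/src/app
```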
Naming the stage with AS base
Using a shared base stage gives one source of truth for Node version and workdir across all later stages.
Stage 2: The Dependency Layer, Docker Caching Done Right
This is the most important caching decision in the file.
FROM base AS deps
COPY package.json package-lock.json ./
RUN npm ci
The problem: cache invalidation
Docker caches layer by layer. If a layer changes, every following layer is invalidated. If you copy all source before installing dependencies, Docker reinstalls packages on every code change.
# Bad pattern
COPY . .
RUN npm ci
RUN npm run build
The solution: isolate dependency installation
Because the dependency stage copies only the manifest files, Docker reuses the cached npm ci layer unless package.json or package-lock.json changes.
# Good pattern
COPY package.json package-lock.json ./
RUN npm ci
Why npm ci instead of npm install?
npm ci is deterministic and CI-friendly:
- Installs exactly what package-lock.json defines
- Fails if the lockfile and manifest are out of sync
- Avoids implicit upgrades
- Produces reproducible builds across environments
Stage 3: The Build Stage, No Hardcoded Values
FROM base AS builder
WORKDIR /usr/src/app
ARG NODE_MEMORY=512
ENV NODE_OPTIONS="--max-old-space-size=${NODE_MEMORY}"
Build arguments over hardcoded values
Hardcoding configuration in images creates environment-specific artifacts. Prefer ARG defaults that can be overridden during build.
ARG NODE_MEMORY=512
ENV NODE_OPTIONS="--max-old-space-size=${NODE_MEMORY}"
Passing build args
Docker CLI:
docker build \
  --build-arg NODE_MEMORY=1024 \
  -t nest-app:staging .
Docker Compose:
services:
  api:
    build:
      context: .
      args:
        NODE_MEMORY: 1024
GitHub Actions:
- name: Build Docker image
  run: |
    docker build \
      --build-arg NODE_MEMORY=1024 \
      -t nest-app:${{ github.sha }} .
Bringing in node_modules from the deps stage
COPY --from=deps /usr/src/app/node_modules ./node_modules
COPY . .
RUN npx prisma generate
RUN npm run build
The stage reuses cached dependencies and then compiles the NestJS app to dist/. Prisma client generation belongs here because it depends on the schema and on build-time tooling.
tsconfig.build.json for predictable dist/ output
Make sure your NestJS build config writes only compiled source into dist:
{
"extends": "./tsconfig.json",
"compilerOptions": {
"outDir": "./dist",
"rootDir": "./src"
},
"include": ["src/**/*"],
"exclude": ["node_modules", "test", "dist", "**/*spec.ts"]
}
Stage 4: The Runtime Image, Lean and Secure
FROM base AS runner
WORKDIR /usr/src/app
This stage starts fresh and receives only runtime artifacts. Build tooling and source-only files stay out of production.
Configurable port and memory
ARG PORT=3000
ARG RUNTIME_NODE_MEMORY=384
ENV NODE_ENV=production
ENV PORT=${PORT}
ENV NODE_OPTIONS="--enable-source-maps --max-old-space-size=${RUNTIME_NODE_MEMORY}"
Use separate memory limits for build and runtime. Build usually needs more memory than the long-running API process.
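For example, one build command can set both arguments, reserving the larger heap for compilation:

```shell
# NODE_MEMORY: heap cap for the TypeScript build stage
# RUNTIME_NODE_MEMORY: heap cap baked into the runtime image
docker build \
  --build-arg NODE_MEMORY=1024 \
  --build-arg RUNTIME_NODE_MEMORY=512 \
  -t nest-app:staging .
```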
Running as a non-root user
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser
This reduces blast radius if the process is compromised.
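A quick sanity check (using the staging tag from the build examples) is to ask the image who it runs as:

```shell
# Expect the unprivileged user created above, not root
docker run --rm nest-app:staging whoami
# Expect a non-zero UID
docker run --rm nest-app:staging id -u
```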
Copying only what is needed
COPY --from=builder --chown=appuser:appgroup /usr/src/app/dist ./dist
COPY --from=prod-deps --chown=appuser:appgroup /usr/src/app/node_modules ./node_modules
COPY --from=builder --chown=appuser:appgroup /usr/src/app/package.json ./package.json
COPY --from=builder --chown=appuser:appgroup /usr/src/app/prisma ./prisma
The runtime image includes only:
| Path | Purpose |
|---|---|
| dist/ | Compiled NestJS output |
| node_modules/ | Production-only dependencies |
| package.json | Runtime metadata |
| prisma/ | Schema/migration files if needed at runtime |
EXPOSE and CMD
EXPOSE ${PORT}
CMD ["node", "dist/main.js"]
Use exec-form CMD for proper signal handling in containers. Keep this command focused on serving the NestJS app only; do not add migration commands to the Dockerfile CMD.
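Instead, run migrations as an explicit, separate step against the same image, for example as a one-off container in CI or during deploy. One caveat: prisma migrate deploy needs the Prisma CLI, which is often a devDependency and therefore absent from the prod-deps stage; in that case run this step from an image or CI job that has dev dependencies installed.

```shell
# One-off migration container, run before rolling out the new version
docker run --rm --env-file .env nest-app:staging npm run migration:run
```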
.dockerignore for clean context
node_modules
dist
.git
.gitignore
Dockerfile
npm-debug.log
.env
.env.*
coverage
This keeps build context small and avoids leaking local files into image layers.
Caching Strategy, Visual Summary
                        ┌──────────────────────┐
                        │    What changed?     │
                        └──────────────────────┘
                                    │
         ┌──────────────────────────┼──────────────────────────┐
         ▼                          ▼                          ▼
package.json / lockfile    application code only        runtime args only
         │                          │                          │
         ▼                          ▼                          ▼
┌──────────────────┐       ┌──────────────────┐       ┌──────────────────┐
│    deps: MISS    │       │    deps: HIT     │       │    deps: HIT     │
│   npm ci runs    │       │   cached reuse   │       │   cached reuse   │
└────────┬─────────┘       └────────┬─────────┘       └────────┬─────────┘
         │                          │                          │
         ▼                          ▼                          ▼
┌──────────────────┐       ┌──────────────────┐       ┌──────────────────┐
│  builder: MISS   │       │  builder: MISS   │       │   runner: MISS   │
│  prisma + build  │       │  prisma + build  │       │ env-only rebuild │
└────────┬─────────┘       └────────┬─────────┘       └──────────────────┘
         │                          │
         ▼                          ▼
┌──────────────────┐       ┌──────────────────┐
│   runner: MISS   │       │   runner: MISS   │
└──────────────────┘       └──────────────────┘
Key point: source-only changes should not invalidate your dependency installation layer.
Building and Running
# Build local image
docker build -t nest-app:dev .
# Build with custom args
docker build \
--build-arg NODE_MEMORY=1024 \
--build-arg PORT=3000 \
-t nest-app:staging .
# Run
docker run -p 3000:3000 --env-file .env nest-app:staging
# Run on custom runtime port
docker run -p 8080:8080 -e PORT=8080 nest-app:staging
Build arguments (ARG) are fixed when the image is built. Runtime environment variables (-e) can change per container start. Keep secrets and environment-specific runtime config as runtime env vars.
Prisma Migration Note
Backend developers should include a clear migration command in package.json so every environment can call the same script.
First, define the migration script in package.json:
{
"scripts": {
"migration:run": "prisma migrate deploy"
}
}
Run it with your package manager of choice, for example:
npm run migration:run
Checklist
Before shipping to production, verify:
- package.json and package-lock.json are copied before the full source
- Dependencies are installed in a dedicated cached stage
- Prisma client is generated during build
- tsconfig.build.json outputs app code to dist/
- Runtime image contains only required runtime artifacts
- Runtime installs omit dev dependencies
- Container runs as a non-root user
- CMD uses exec form
- .dockerignore excludes local and sensitive files
Conclusion
A production-grade NestJS Dockerfile is about repeatability, speed, and operational safety. The patterns in this guide isolate dependency caching, keep runtime images lean, and apply practical security defaults so your service is easier to build, deploy, and maintain.
