Secrets exposed in container images are among the most common findings in security audits of containerized environments. They appear in Dockerfile RUN commands that are intended to be temporary. They appear in configuration files copied into images. They appear in environment variables baked into the image during build. They persist in layer history even after the developer removes them from the current layer.
The container image format stores every intermediate build state. A secret added in layer 3 and removed in layer 4 is still present in layer 3, accessible to anyone who can pull the image from the registry. The docker history command reveals these transient secrets. Registry scanners that examine image layers surface them during security assessments.
How Secrets End Up in Container Images
Temporary credentials in RUN commands:
# Secret persists in layer history
RUN npm config set registry https://npm.company.com/ && \
npm config set //npm.company.com/:_authToken $NPM_TOKEN && \
npm install && \
npm config delete //npm.company.com/:_authToken
The npm config delete removes the token from the final filesystem state. It does not remove it from the intermediate layer that captured the state when the registry token was set.
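The layer mechanics can be illustrated with a minimal sketch (plain Python, not Docker's actual storage driver): each layer records only its own changes, and a deletion in a later layer masks a file in the merged view without erasing it from the earlier layer.

```python
# Minimal model of union-filesystem layers: each layer maps path -> content,
# and a deletion is recorded as a whiteout marker rather than an erasure.
WHITEOUT = object()

def apply_layers(layers):
    """Compute the final filesystem view by stacking layers in order."""
    view = {}
    for layer in layers:
        for path, content in layer.items():
            if content is WHITEOUT:
                view.pop(path, None)  # masked in the view, still in its layer
            else:
                view[path] = content
    return view

layers = [
    {"/root/.npmrc": "//npm.company.com/:_authToken=SECRET"},  # token written
    {"/root/.npmrc": WHITEOUT},                                # "npm config delete"
]

final = apply_layers(layers)
print("/root/.npmrc" in final)    # False: gone from the final view
print(layers[0]["/root/.npmrc"])  # but the token still sits in the earlier layer
```

Anyone who pulls the image gets every layer, so inspecting layer 0 directly (as `docker history` and layer scanners do) recovers the token.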
Configuration files copied from the build context:
# If .aws/credentials is not in .dockerignore, it may be copied
COPY . /app
A developer who copies their entire project directory and has not set up .dockerignore may inadvertently include credential files, SSH keys, or environment files.
Build arguments passed to the image:
ARG API_KEY
# Secret is now baked into the image configuration
ENV API_KEY=$API_KEY
An ARG value consumed this way is recorded in the image's build history, and the ENV instruction persists it in the image configuration permanently; anyone who can pull the image can read it with docker history or docker inspect. (Note that a # after a Dockerfile instruction is not a comment but part of the value, so the comment must sit on its own line.)
Prevention at the Build Stage
The primary defense is build-stage secrets management that ensures credentials are never written to image layers:
Multi-stage builds with secret mount:
# syntax=docker/dockerfile:1
# Build stage: secrets available during build, not in final image
FROM node:20 AS builder
WORKDIR /app
COPY . .
RUN --mount=type=secret,id=npm_token \
    npm config set //npm.company.com/:_authToken=$(cat /run/secrets/npm_token) && \
    npm install --production && \
    npm run build

# Runtime stage: only the built artifacts, no build tools or secrets
FROM node:20-slim AS runtime
WORKDIR /app
COPY --from=builder /app/node_modules ./node_modules
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/server.js"]
The --mount=type=secret flag makes the secret available during the RUN command but does not write it to any image layer. The secret is supplied at build time, for example with docker build --secret id=npm_token,src=npm_token.txt . (where npm_token.txt is whatever local file holds the token). The final image has no record of the secret's content.
.dockerignore enforcement:
# .dockerignore – prevent credential files from entering the build context
.env
.env.*
.aws/
.ssh/
*.pem
*.key
*_rsa
*_dsa
credentials
secrets/
A pre-commit hook that verifies .dockerignore contains patterns for common credential file locations prevents the “copy entire directory” mistake from including credentials.
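Such a hook can be a short script. The sketch below checks for a required pattern list (the list itself is an assumption here; extend it to match your environment) and reports what is missing so the hook can block the commit.

```python
from pathlib import Path

# Patterns we expect .dockerignore to contain (illustrative, not exhaustive).
REQUIRED_PATTERNS = [".env", ".aws/", ".ssh/", "*.pem", "*.key"]

def missing_patterns(dockerignore_path):
    """Return the required patterns absent from the .dockerignore file."""
    path = Path(dockerignore_path)
    if not path.exists():
        return list(REQUIRED_PATTERNS)  # no .dockerignore at all
    entries = {line.strip() for line in path.read_text().splitlines()}
    return [p for p in REQUIRED_PATTERNS if p not in entries]

def main():
    missing = missing_patterns(".dockerignore")
    if missing:
        print(f"pre-commit: .dockerignore is missing patterns: {missing}")
        return 1  # a non-zero exit from the hook blocks the commit
    return 0
```

Wiring `main()` into a pre-commit framework entry point is left to the repository's existing hook configuration.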
Detection: Container Layer Analysis
When prevention fails, detection identifies exposed secrets before they reach production.
Container layer analysis tools examine each image layer (including intermediate layers accessible through image history) for secret patterns:
- API key formats (common vendor patterns: AWS AKIA…, GitHub ghp_…, Slack xoxb-…)
- Private key headers (-----BEGIN RSA PRIVATE KEY-----)
- Password patterns in configuration file formats
- Environment variable assignments containing sensitive strings
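A scanner's pattern set might look like the following sketch. The regexes are illustrative approximations of the real vendor formats listed above, not an exhaustive or production-grade rule set:

```python
import re

# Illustrative secret-detection patterns; real scanners ship far larger sets.
SECRET_PATTERNS = {
    "aws_access_key":  re.compile(r"AKIA[0-9A-Z]{16}"),
    "github_token":    re.compile(r"ghp_[A-Za-z0-9]{36}"),
    "slack_bot_token": re.compile(r"xoxb-[0-9A-Za-z-]{10,}"),
    "private_key":     re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def find_secrets(text):
    """Return (pattern_name, matched_string) pairs for secret-like content."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append((name, match.group()))
    return hits
```

Running `find_secrets` over each file in each layer, rather than over the final filesystem only, is what distinguishes layer-aware scanning.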
Scanning at the registry level, before images can be pulled for production deployment, catches secrets that made it through the build without developer detection.
The challenge with layer scanning: intermediate layers that contain secrets but were “cleaned up” in later layers are still present in the image manifest but may be marked as deleted. Scanners that only examine the final filesystem state miss these.
Layer-aware secret scanning that examines the full layer history, not just the final image state, finds secrets that surface in historical layers.
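A layer-aware pass can be sketched against an exported image (docker save writes each layer as a tar archive). In this sketch the secret regex is illustrative, and scan_layer examines one layer archive at a time, also surfacing the .wh. whiteout entries with which a later layer marks a file deleted; the "deleted" file still exists in the earlier layer that added it.

```python
import re
import tarfile

# Illustrative secret signature; a real scanner would use a much larger set.
SECRET_RE = re.compile(
    rb"AKIA[0-9A-Z]{16}|ghp_[A-Za-z0-9]{36}|-----BEGIN [A-Z ]*PRIVATE KEY-----"
)

def scan_layer(layer_tar_path):
    """Scan one image layer tarball; return (secret hits, whiteout markers)."""
    findings, whiteouts = [], []
    with tarfile.open(layer_tar_path) as layer:
        for member in layer.getmembers():
            basename = member.name.rsplit("/", 1)[-1]
            if basename.startswith(".wh."):
                # A later layer deleted this path; earlier layers still hold it.
                whiteouts.append(member.name)
                continue
            if member.isfile():
                data = layer.extractfile(member).read()
                if SECRET_RE.search(data):
                    findings.append(member.name)
    return findings, whiteouts
```

Iterating scan_layer over every layer tar in the exported image, not just the topmost one, is the step that finds secrets hiding in historical layers.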
Secure Container Software and Secret Surface Reduction
Container hardening that removes unused files and packages reduces the attack surface available for secret discovery. A minimal container image has fewer configuration files, fewer utility scripts, and fewer package manager artifacts that might contain cached credentials.
Specific reductions relevant to secret exposure:
- Package manager caches (npm cache, pip cache) that may contain cached authentication tokens
- Build tool configuration files that may contain registry credentials
- Development utility configuration that may contain API keys
Automated hardening removes these artifacts as part of reducing the overall container footprint. The security benefit is orthogonal to CVE reduction but falls naturally out of the same process.
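A hardening step can strip these caches explicitly. This sketch removes known cache and config artifacts under a container rootfs; the paths are common defaults and an assumption, so verify them against your base image before relying on the list.

```python
import shutil
from pathlib import Path

# Common cache locations that may hold tokens from authenticated installs.
# Typical defaults only; not guaranteed to match every base image.
CACHE_DIRS = [
    "root/.npm",        # npm cache
    "root/.cache/pip",  # pip cache
    "root/.npmrc",      # per-user npm config, may contain registry tokens
]

def strip_caches(rootfs):
    """Remove known cache/config artifacts under a rootfs; return removed paths."""
    removed = []
    for rel in CACHE_DIRS:
        target = Path(rootfs) / rel
        if target.is_dir():
            shutil.rmtree(target)
            removed.append(rel)
        elif target.is_file():
            target.unlink()
            removed.append(rel)
    return removed
```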
Frequently Asked Questions
How do secrets end up in container images in DevOps pipelines?
Secrets enter container images through several common mistakes: credentials in RUN commands that persist in intermediate image layers even after deletion, configuration files copied from the build context without a proper .dockerignore, and build arguments passed as ARG or ENV instructions that become permanently visible in the image manifest. The container image format stores every intermediate build state, so a secret added in layer 3 and removed in layer 4 remains accessible in layer 3 to anyone who can pull the image.
What is the most effective way to prevent secrets exposure in DevOps pipelines?
The most effective prevention is build-stage secrets management using Docker's --mount=type=secret flag in multi-stage builds. This makes secrets available during the RUN command without writing them to any image layer. Combined with a strict .dockerignore that excludes .env, .aws/, .ssh/, and credential files from the build context, and enforced by a pre-commit hook, this prevents secrets from entering the image in the first place.
How can teams detect secrets already baked into container images?
Layer-aware secret scanning tools that examine the full image layer history — not just the final filesystem state — detect secrets in intermediate layers that were “cleaned up” in later layers. Scanners should check for API key formats (AWS AKIA prefixes, GitHub ghp_ tokens, Slack xoxb- tokens), private key headers, and credential patterns in configuration file formats. Running these scanners at the registry level before images reach production deployment catches secrets that slipped through the build.
How should runtime secrets be managed in containerized environments?
Runtime secrets should never be stored in container images. The correct approach is to inject them at container startup via Kubernetes Secrets or Vault-injected environment variables, or to mount secrets from external systems (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) as volumes at runtime. This means the container image contains no secret content; the running pod receives credentials from the cluster’s secret management system, keeping secrets out of the image entirely.
Runtime Secrets Management
For secrets that must be available at runtime (database passwords, API keys the running service uses), the container image is the wrong place to store them:
Environment variable injection at runtime: Kubernetes Secrets or Vault-injected environment variables provide credentials at pod startup without embedding them in the image. The image contains no secrets; the running pod receives them from the cluster’s secret management system.
Volume-mounted secrets: Secrets stored in external systems (Vault, AWS Secrets Manager, Azure Key Vault) and mounted as volumes at runtime. The container image has no secret content; the secret is retrieved and mounted when the container starts.
# Kubernetes deployment consuming a Kubernetes Secret at runtime
spec:
  containers:
    - name: api
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials  # Kubernetes Secret, not baked into image
              key: password
The security architecture for containerized secrets is: build-time secrets are never written to image layers, runtime secrets are injected at container start from external systems, and container layer scanning provides a detection backstop that catches mistakes before they reach production.