DevOps & Cloud Engineering / Lesson 11 — Container Image Security

Container Image Security

Vulnerabilities, supply chain attacks, image signing — keeping your containers from becoming the entry point.


Why Image Security Matters

Every Docker image you run executes code on your servers. The image contains not just YOUR code but every library, every transitive dependency, the OS itself.

A typical image has thousands of files from hundreds of packages. A vulnerability in any of them is your problem.

Real incidents driven by container security gaps:
• Compromised base images shipping cryptominers (Docker Hub had several incidents)
• Public images with embedded backdoors (typo-squatting attacks)
• Outdated base images with known CVEs (years-old vulnerabilities still running)
• Secrets accidentally baked into images via COPY .
• Build-time secrets leaked through layer history

This lesson covers the practices that keep your image supply chain trustworthy.


Choose Base Images Carefully

The base image is your largest attack surface. Pick well.

Trustworthy bases:
• Official images from Docker Hub (node, python, postgres) — curated and reviewed by Docker, usually with upstream involvement
• Images from cloud providers (gcr.io/distroless/*, AWS ECR Public)
• Verified Publisher images on Docker Hub

Untrustworthy:
• Random user accounts on Docker Hub
• Anything you can't trace to a real maintainer

Specific recommendations:
• Prefer alpine or distroless — smaller = less attack surface
• Pin to specific tags, ideally with SHA256 digest:

Dockerfile
FROM node:20.11.1-alpine@sha256:abc1234567...

With digest, you're pinning to an EXACT image — even if the tag is republished, you get the original.
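To find the digest for a tag you already trust, you can inspect it in the registry before pinning (the tag here matches the example above):

Bash
# Print the manifest digest for a tag, then append it to your FROM line
docker buildx imagetools inspect node:20.11.1-alpine

The output includes the sha256 digest of the current manifest, which you copy into the Dockerfile.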

Distroless deserves special mention. Google's distroless images contain just your app's runtime — no shell, no package manager, no curl, no apt-get. Drastically smaller attack surface.

Dockerfile
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app /app
WORKDIR /app
CMD ["server.js"]

If an attacker breaks in, they can't even run a shell. Distroless (or scratch) is standard for Go and Rust services, whose static binaries need almost no runtime at all.
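A complete multi-stage build pairs a full builder image with the distroless runtime; a sketch for a Node app (stage name and paths are illustrative):

Dockerfile
# Stage 1: build with the full toolchain
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .

# Stage 2: runtime contains only node and your app
FROM gcr.io/distroless/nodejs20-debian12
COPY --from=builder /app /app
WORKDIR /app
CMD ["server.js"]

Nothing from the builder stage — compilers, npm, shell — ships in the final image.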


Scan for Vulnerabilities

Your image has hundreds of packages. New CVEs are published daily. You need automated scanning.

Tools:
• Trivy — open source, fast, comprehensive, the de facto standard
• Grype — open source by Anchore
• Snyk — commercial, broader (deps, secrets, IaC)
• AWS ECR scanning, Google Artifact Analysis — built into cloud registries

In CI:

YAML
- name: Build image
  run: docker build -t myapp:${{ github.sha }} .

- name: Scan with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    format: 'sarif'
    severity: 'CRITICAL,HIGH'
    exit-code: '1'                  # fail the build on critical/high
    ignore-unfixed: true            # don't fail on vulns with no fix yet

What to do with results:
• CRITICAL — block deploy, fix before merging
• HIGH — block deploy or require review
• MEDIUM — track, fix in regular maintenance
• LOW — log only

Beware noise. A naive scanner config will block every deploy on day one. Calibrate by:
• Ignoring unfixed vulnerabilities (no patch available)
• Suppressing false positives (the CVE doesn't apply to your usage)
• Setting reasonable severity thresholds for your risk profile
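One way to record accepted findings is a .trivyignore file in the build context; Trivy skips any CVE ID listed there (the IDs below are placeholders, not real advisories to ignore):

Text
# .trivyignore — one CVE ID per line, comments allowed
# Accepted: vulnerable code path not reachable in our usage (reviewed Q1)
CVE-2023-12345
# Waiting on an upstream fix in the base image
CVE-2024-67890

Keep the reason next to each entry so suppressions get re-reviewed instead of forgotten.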

Reporting that's easy to read:

YAML
- name: Generate SARIF report
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: myapp:${{ github.sha }}
    format: 'sarif'
    output: 'trivy-results.sarif'

- name: Upload to GitHub Security
  uses: github/codeql-action/upload-sarif@v3
  with:
    sarif_file: 'trivy-results.sarif'

Now vulnerabilities appear as code scanning alerts in GitHub's Security tab instead of buried in CI logs.


The SBOM — Knowing What's In Your Image

SBOM = Software Bill of Materials. A list of every component (package, version, license) in your image.

Why it matters: when a CVE drops at 2 AM, you need to answer "do we run this package anywhere?" in minutes, not days.

Generating SBOMs:

Bash
# With syft
syft myapp:abc1234 -o spdx-json > sbom.json

# With docker
docker buildx build --sbom=true -t myapp:abc1234 .

Standards:
• SPDX — older, broader use
• CycloneDX — OWASP, focused on security

Most tools support both. Pick one and stick with it.

In CI, generate the SBOM with the build and store it alongside the image:

YAML
- uses: docker/build-push-action@v5
  with:
    push: true
    sbom: true
    provenance: true
    tags: myapp:${{ github.sha }}

When a new CVE drops:
1. Search your SBOMs for the affected package
2. Identify which deployments use which images
3. Patch and redeploy

Without SBOMs, this becomes "manually grep through Dockerfiles, hope dependencies are pinned, hope you don't miss anything." With SBOMs, it's a database query.
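If you keep one SBOM file per image in a directory, even a plain grep answers the 2 AM question. A minimal sketch, assuming SPDX-JSON files under a hypothetical sboms/ directory and openssl as the affected package:

Bash
# List every image whose SBOM mentions the affected package
grep -l '"name": "openssl"' sboms/*.json

Each matching filename tells you which image (and therefore which deployments) to patch first.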


Don't Bake Secrets Into Images

Every layer of your image is permanent. Even if you RUN rm secret.txt, the secret is in the previous layer's history.

Bad:

Dockerfile
COPY .env /app/.env             # secret in layer
RUN ./install.sh && rm .env     # too late — already in history
Bash
docker history myapp            # shows every layer
docker save myapp | tar -x      # extract every layer for inspection

Anyone who can pull your image can extract every layer.

Good:
1. Use BuildKit's --mount=type=secret:

Dockerfile
# syntax=docker/dockerfile:1.4
RUN --mount=type=secret,id=npmrc,target=/root/.npmrc npm install
Bash
docker build --secret id=npmrc,src=$HOME/.npmrc -t myapp .

The secret is mounted during the RUN step, not copied into the image.

2. Use environment variables at runtime:

Bash
docker run -e DATABASE_URL=$DATABASE_URL myapp

Or better, use a secrets backend (Vault, AWS Secrets Manager, Kubernetes Secrets).
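For example, the container can receive the secret from a backend at launch instead of carrying it in a layer (the secret name prod/db-url is hypothetical):

Bash
# Fetch from AWS Secrets Manager and inject at run time — never stored in the image
docker run -e DATABASE_URL="$(aws secretsmanager get-secret-value \
  --secret-id prod/db-url --query SecretString --output text)" myapp

Rotating the secret then requires no rebuild, only a restart.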

3. Never copy .env files into images. Add to .dockerignore:

Text
.env
.env.*
*.pem
*.key
.aws/
.ssh/

If you've accidentally pushed an image with secrets:
1. Rotate the secret immediately
2. Delete the image from the registry (this doesn't undo the leak — anyone who pulled it has it)
3. Add scanners to prevent recurrence
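Trivy doubles as one of those scanners; a sketch of a CI guardrail (image name is illustrative):

Bash
# Scan image layers for embedded credentials
trivy image --scanners secret myapp:latest

# Or catch the leak earlier, in the build context itself
trivy fs --scanners secret .

Running the filesystem scan before docker build catches a stray .env file before it ever becomes a layer.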


Image Signing & Provenance

How do you know the image you're pulling is the one your CI built? An attacker who compromises your registry could push a malicious image with the same tag.

Image signing solves this. The build server signs the image; the deployer verifies the signature before running.

Tools:
• Cosign (Sigstore) — modern, simple, the current de facto standard
• Docker Content Trust (older, less popular now)
• Notation (CNCF, used in some enterprise setups)

Cosign with keyless signing (no key management):

Bash
# Build and push
docker build -t ghcr.io/me/myapp:v1 .
docker push ghcr.io/me/myapp:v1

# Sign — uses GitHub OIDC, no keys needed (keyless is the default since Cosign 2.0)
cosign sign ghcr.io/me/myapp:v1

# Later, verify before deploy
cosign verify ghcr.io/me/myapp:v1 \
  --certificate-identity-regexp 'https://github.com/me/.*' \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com

In Kubernetes, admission controllers like Sigstore policy controller or Kyverno can refuse to run unsigned images.
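A Kyverno policy enforcing the keyless signature above could look like this (a sketch; the image pattern and identity match the earlier example, and field names follow Kyverno's verifyImages rule type):

YAML
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-cosign-signature
      match:
        any:
          - resources:
              kinds: ["Pod"]
      verifyImages:
        - imageReferences: ["ghcr.io/me/*"]
          attestors:
            - entries:
                - keyless:
                    subject: "https://github.com/me/*"
                    issuer: "https://token.actions.githubusercontent.com"

With Enforce in place, an unsigned image is rejected at admission time rather than discovered at incident time.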

Provenance goes beyond signing: an attestation records WHO built the image, WHEN, FROM WHICH commit, and with WHICH tools. SLSA (Supply-chain Levels for Software Artifacts) defines maturity levels (the original v0.1 levels are shown here; SLSA v1.0 reorganizes them into Build Levels 0–3):
• Level 1 — automated build process
• Level 2 — version-controlled source, build service
• Level 3 — non-falsifiable provenance from build platform
• Level 4 — two-person review, hermetic builds

For most teams, GitHub Actions + Cosign + SLSA Level 2 is achievable today and a meaningful security upgrade.


Runtime Hardening

A few additional defenses for runtime security:

1. Run as non-root:

Dockerfile
RUN addgroup -S app && adduser -S app -G app
USER app

If the container is compromised, the attacker has limited privileges.

2. Read-only root filesystem:

Bash
docker run --read-only --tmpfs /tmp myapp

Most apps don't need to write to / — make it read-only and explicitly mount /tmp if needed. Stops attackers from dropping files.

3. Drop capabilities:

Bash
docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE nginx

Linux capabilities are fine-grained privileges. Drop everything, add only what you need.

4. Limit resources:

Bash
docker run --memory=512m --cpus=1 myapp

Prevents one container from starving others (common in Kubernetes too).

5. Don't expose the Docker socket:

Bash
docker run -v /var/run/docker.sock:/var/run/docker.sock ...   # AVOID

A container with the Docker socket can launch other containers, including privileged ones — effectively root on the host.

In Kubernetes, equivalent settings live in securityContext:

YAML
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: [ALL]

These should be the defaults for everything you deploy.

