Learning Objectives
- Understand the anatomy of software supply chains and their attack surfaces
- Identify dependency confusion, typosquatting, and CI/CD pipeline attack vectors
- Implement code signing, integrity verification, and third-party risk assessment
- Apply SLSA, SBOM, and modern supply chain security frameworks
- Conduct authorized supply chain security assessments
In This Chapter
- 34.1 The Anatomy of a Software Supply Chain
- 34.2 Dependency Confusion and Typosquatting
- 34.3 CI/CD Pipeline Attacks
- 34.4 Code Signing and Integrity Verification
- 34.5 Third-Party Risk Assessment
- 34.6 Supply Chain Security Frameworks and Tools
- 34.7 Conducting Supply Chain Security Assessments
- 34.8 Emerging Threats and Future Directions
- 34.9 Putting It All Together
- Summary
Chapter 34: Supply Chain Security
"The next Pearl Harbor we confront could very well be a cyber attack that cripples our power systems, our grid, our security systems, our financial systems, our governmental systems." -- Leon Panetta
"In a supply chain attack, you don't need to break into the castle. You poison the water supply that feeds it." -- Security Researcher, 2021
When the SolarWinds Orion compromise was discovered in December 2020, it sent shockwaves through the cybersecurity world not because the techniques were novel in isolation, but because they exploited the most fundamental assumption in enterprise IT: that software updates from trusted vendors are safe. Approximately 18,000 organizations installed a trojanized update, including the U.S. Treasury, Department of Homeland Security, and Fortune 500 companies. The attackers had not breached these organizations directly. They had compromised the supply chain.
Supply chain attacks represent a paradigm shift in how adversaries think about access. Rather than attacking a single target, a sophisticated threat actor compromises a component, dependency, or tool that many targets rely upon, achieving massive scale with a single operation. For ethical hackers and security professionals, understanding supply chain security is no longer optional. It is a core competency that touches every engagement, every code review, and every risk assessment you will conduct.
This chapter dissects the software supply chain from end to end. You will learn how attackers exploit dependency management, CI/CD pipelines, and code distribution mechanisms. More importantly, you will learn how to assess, detect, and defend against these attacks using frameworks like SLSA and tools like Software Bills of Materials. Every technique in this chapter is presented within the context of authorized security testing and defensive hardening.
34.1 The Anatomy of a Software Supply Chain
A software supply chain encompasses every component, tool, process, and human involved in creating, building, testing, and distributing software. Understanding this anatomy is the first step toward securing it.
34.1.1 Components of the Modern Supply Chain
Modern software is not written from scratch. The average enterprise application depends on hundreds or thousands of third-party libraries, frameworks, and tools. Consider the typical supply chain components:
Source Code and Repositories. Developers write code and store it in version control systems like Git. Platforms such as GitHub, GitLab, and Bitbucket host millions of repositories. The integrity of source code depends on access controls, branch protection rules, commit signing, and the security of the hosting platform itself.
Dependencies and Package Managers. Languages and frameworks rely on package ecosystems: npm for JavaScript, PyPI for Python, Maven Central for Java, NuGet for .NET, RubyGems for Ruby, and crates.io for Rust. When a developer adds a dependency, they implicitly trust that package's author, its transitive dependencies, and the registry infrastructure.
Build Systems and CI/CD Pipelines. Continuous Integration and Continuous Deployment pipelines (Jenkins, GitHub Actions, GitLab CI, CircleCI, Azure DevOps) automate compilation, testing, and deployment. These systems have broad access: they read source code, download dependencies, execute arbitrary commands, and deploy to production environments. A compromised CI/CD pipeline is a skeleton key.
Artifact Repositories and Distribution. Built artifacts are stored in registries (Docker Hub, Artifactory, npm registry) and distributed to end users or deployed to servers. The integrity of distribution channels determines whether end users receive authentic software.
Development Tools and IDEs. The tools developers use (IDEs, linters, code formatters, extensions) are themselves software with supply chains. Compromised IDE extensions or developer tools can inject malicious code silently.
34.1.2 Trust Relationships and Attack Surfaces
Every link in the supply chain represents a trust relationship, and every trust relationship is a potential attack surface. The developer trusts the package registry. The organization trusts the CI/CD platform. The end user trusts the software vendor. An attacker who compromises any link in this chain can propagate malicious code downstream.
The key attack surfaces include:
- Source code manipulation -- Compromising developer accounts or source repositories
- Dependency injection -- Introducing malicious packages into the dependency tree
- Build system compromise -- Tampering with build processes to inject code during compilation
- Distribution channel manipulation -- Altering artifacts after they are built but before delivery
- Update mechanism abuse -- Leveraging auto-update features to distribute malicious payloads
Blue Team Perspective: Inventory every component in your supply chain. You cannot secure what you do not know exists. Start with a complete Software Bill of Materials (SBOM) for every application and service in your environment.
34.1.3 The Scale of the Problem
The numbers are staggering. As of 2025, the npm registry hosts over 2.5 million packages. PyPI has surpassed 500,000 projects. A single JavaScript web application may have over 1,000 transitive dependencies. Research by Sonatype documented a 742% increase in supply chain attacks between 2019 and 2023. The European Union Agency for Cybersecurity (ENISA) predicted that supply chain attacks would quadruple from 2020 levels by 2025.
MedSecure Running Example: MedSecure's patient portal application uses a React frontend with 847 npm dependencies (including transitive) and a Python Flask backend with 134 PyPI packages. Their deployment pipeline runs on GitHub Actions with 12 third-party actions. During a supply chain security assessment, you discover that none of these dependencies are pinned to specific hashes, three GitHub Actions are referenced by mutable tags rather than commit SHAs, and there is no SBOM generation in their pipeline.
34.2 Dependency Confusion and Typosquatting
Dependency confusion and typosquatting are among the most accessible supply chain attack vectors, requiring relatively low sophistication but yielding potentially devastating results.
34.2.1 Dependency Confusion Attacks
In February 2021, security researcher Alex Birsan published research demonstrating how he had gained code execution inside the networks of Apple, Microsoft, PayPal, Tesla, Uber, and dozens of other companies using a technique he called "dependency confusion."
The attack exploits how package managers resolve dependencies when both private (internal) and public registries are configured. Many organizations host private packages on internal registries (Artifactory, GitHub Packages, Azure Artifacts). If a private package name does not exist on the public registry, an attacker can register that name on the public registry with a higher version number. Some package managers will prioritize the public registry's higher version, downloading and executing the attacker's malicious package.
How it works in detail:
- The attacker identifies internal package names (through leaked package.json files, error messages, documentation, or OSINT)
- The attacker publishes packages with those same names on public registries (npm, PyPI)
- The malicious packages have artificially high version numbers (e.g., 99.0.0)
- When the target's build system resolves dependencies, it may fetch the higher-versioned public package instead of the lower-versioned private one
- The attacker's code executes during package installation (via install scripts, setup.py, etc.)
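The resolution behavior described above can be illustrated with a short Python sketch of a deliberately naive resolver; the package names, indexes, and version numbers here are hypothetical, and real package managers have more nuanced (but sometimes still exploitable) merge logic:

```python
def pick_version(name, private_index, public_index):
    """Sketch of a NAIVE resolver that merges private and public indexes
    and takes the highest version -- the behavior dependency confusion
    exploits. Indexes map package name -> list of version tuples."""
    candidates = []
    for source, index in (("private", private_index), ("public", public_index)):
        for version in index.get(name, []):
            candidates.append((version, source))
    if not candidates:
        raise LookupError(f"package {name!r} not found in any index")
    version, source = max(candidates)  # highest version wins, wherever it lives
    return source, version

# The company's real package is 1.4.0 on the private registry; an attacker
# has squatted the same name publicly at 99.0.0.
private = {"internal-lib": [(1, 4, 0)]}
public = {"internal-lib": [(99, 0, 0)]}
print(pick_version("internal-lib", private, public))  # -> ('public', (99, 0, 0))
```

The attacker never touches the private registry: inflating the public version number is enough to win the merge.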
Mitigation strategies:
- Namespace prefixing: Use scoped packages (e.g., @mycompany/internal-lib) that cannot be squatted on public registries
- Registry configuration: Configure package managers to use only private registries for internal packages
- Version pinning: Pin all dependencies to exact versions and use lockfiles
- Upstream registry blocking: Configure private registries to block proxy/passthrough to public registries for internal package names
# .npmrc configuration to prevent dependency confusion
@mycompany:registry=https://npm.internal.mycompany.com/
//npm.internal.mycompany.com/:_authToken=${NPM_TOKEN}
always-auth=true
34.2.2 Typosquatting and Name Confusion
Typosquatting in the context of package management means registering packages with names similar to popular, legitimate packages, hoping that developers will accidentally install the malicious version. This is the digital equivalent of setting up a shop with a name nearly identical to a famous brand.
Real-world examples:
- crossenv (malicious) vs. cross-env (legitimate) -- The malicious npm package stole environment variables
- python3-dateutil (malicious) vs. python-dateutil (legitimate) -- Contained cryptocurrency mining code
- jeIlyfish (malicious, with an uppercase I) vs. jellyfish (legitimate) -- Stole SSH keys
- colourama (malicious) vs. colorama (legitimate) -- Injected a cryptocurrency clipboard hijacker
Detection approaches:
- Name similarity analysis: Tools can scan registries for packages with names similar to popular packages using Levenshtein distance and other string metrics
- Behavioral analysis: Monitor package install scripts for suspicious behavior (network connections, file access, environment variable exfiltration)
- Provenance verification: Check package metadata for consistency (author history, repository links, download patterns)
- Policy enforcement: Implement allow-lists for approved packages in organizational package managers
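The name-similarity approach can be sketched in a few lines of Python using Levenshtein edit distance; the package lists below are illustrative, and production scanners add popularity weighting, keyboard-adjacency models, and other string metrics on top of this:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def likely_typosquats(candidate, popular_packages, max_distance=2):
    """Flag a candidate name that sits within a small edit distance
    of a popular, legitimate package name."""
    return [p for p in popular_packages
            if p != candidate and levenshtein(candidate, p) <= max_distance]

print(likely_typosquats("crossenv", ["cross-env", "express", "lodash"]))
# -> ['cross-env']
```

Note that every real-world example above (crossenv, python3-dateutil, colourama) sits at edit distance 1 from its target, which is why distance-based screening catches so many of these packages.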
34.2.3 Combating Dependency Attacks
ShopStack Running Example: ShopStack's e-commerce platform uses an internal npm package called shopstack-payments. During an authorized supply chain assessment, you successfully demonstrate dependency confusion by registering shopstack-payments version 99.0.0 on the public npm registry (in a controlled test environment). The package contains a benign beacon that phones home to your assessment server. Within minutes of the next CI build, you receive the beacon, confirming that ShopStack's build system is vulnerable. Your remediation report recommends scoped packages, registry lockdown, and hash-pinned lockfiles.
Warning
Never publish malicious packages to public registries, even for testing. Use private registries and controlled environments. Birsan's research was conducted with explicit coordination and responsible disclosure protocols.
34.3 CI/CD Pipeline Attacks
CI/CD pipelines are high-value targets because they automate code execution with elevated privileges. A compromised pipeline can inject malicious code into every build, affecting every deployment and every customer.
34.3.1 Attack Vectors in CI/CD
Poisoned Pipeline Execution (PPE). In a PPE attack, an adversary modifies the CI/CD pipeline configuration file (such as .github/workflows/*.yml, .gitlab-ci.yml, or Jenkinsfile) to execute malicious commands. If the pipeline runs on pull requests from forks, an attacker can submit a malicious PR that modifies the pipeline definition.
There are three variants:
- Direct PPE (D-PPE): The attacker modifies the pipeline configuration file directly (requires write access to the repository)
- Indirect PPE (I-PPE): The attacker modifies files that the pipeline references (scripts, Makefiles, test configurations) without modifying the pipeline definition itself
- Public PPE (3PE): The attacker exploits pipelines that run on public pull requests from forks
Secret Exfiltration. CI/CD systems store secrets (API keys, deployment credentials, signing keys) as environment variables or secure vaults. Attackers target these secrets because they provide access to production environments, cloud infrastructure, and distribution channels.
Dependency Poisoning in Pipelines. Pipelines download and install dependencies during build time. An attacker who can manipulate these dependencies (through techniques discussed in Section 34.2) gains code execution within the pipeline context.
Third-Party Action/Plugin Abuse. GitHub Actions, Jenkins plugins, and CircleCI orbs are third-party code that runs within your pipeline. A compromised action or plugin can steal secrets, modify build outputs, or establish persistence.
34.3.2 GitHub Actions Security
GitHub Actions has become one of the most popular CI/CD platforms, and its security model deserves special attention.
Key risks:
- Mutable action references: Using uses: actions/checkout@v3 references a mutable tag. The action's author could update what v3 points to at any time. Use commit SHA pinning instead: uses: actions/checkout@8e5e7e5ab8b370d6c329ec480221332ada57f0ab
- pull_request_target event: This event runs workflows in the context of the base branch (with access to secrets) even when triggered by a fork's pull request. This is a common source of critical vulnerabilities.
- Script injection: Untrusted inputs (issue titles, PR bodies, branch names) used in run: steps can lead to command injection:
# VULNERABLE - attacker can inject commands via PR title
- run: echo "Processing PR: ${{ github.event.pull_request.title }}"
# SECURE - use environment variable
- run: echo "Processing PR: $PR_TITLE"
env:
PR_TITLE: ${{ github.event.pull_request.title }}
- Workflow permissions: By default, the GITHUB_TOKEN may have write permissions to the repository. Apply the principle of least privilege using the permissions key.
34.3.3 Securing CI/CD Pipelines
A comprehensive CI/CD security program includes:
Pipeline-as-code review: Treat pipeline configuration files with the same rigor as application code. Require code reviews for all pipeline changes. Use CODEOWNERS files to ensure security team review.
Ephemeral build environments: Use fresh, isolated containers or VMs for each build. Do not reuse build agents across projects or organizations.
Secret management: Never store secrets in code or pipeline configuration files. Use dedicated secret management solutions (HashiCorp Vault, AWS Secrets Manager). Rotate secrets regularly. Limit secret access to specific pipeline stages.
Build provenance: Implement SLSA (Supply-chain Levels for Software Artifacts) to ensure build provenance. Record who built what, when, and how.
Network segmentation: Restrict pipeline network access to only necessary resources. Block outbound internet access from build agents where possible.
Blue Team Perspective: Implement comprehensive logging for all CI/CD pipeline activities. Monitor for unusual build patterns: builds at odd hours, builds that access unusual secrets, builds with modified pipeline configurations. Integrate CI/CD logs into your SIEM.
34.4 Code Signing and Integrity Verification
Code signing and integrity verification mechanisms ensure that software has not been tampered with between creation and consumption. These mechanisms form the cryptographic backbone of supply chain security.
34.4.1 Code Signing Fundamentals
Code signing uses digital signatures to verify the authenticity and integrity of software artifacts. The process involves:
- The developer or build system generates a cryptographic hash of the artifact
- The hash is signed with a private key
- The signature, public key (or certificate), and artifact are distributed together
- The consumer verifies the signature using the public key, confirming both authenticity and integrity
Types of code signing:
- Author signing: The developer signs with their personal key
- Build system signing: The CI/CD pipeline signs artifacts automatically
- Repository signing: The package registry signs all hosted packages
- Notarization: A third party (like Apple's notarization service) attests that code has been scanned and verified
34.4.2 Sigstore and Keyless Signing
Traditional code signing suffers from key management problems: private keys must be securely stored, certificates must be managed, and key compromise is catastrophic. Sigstore addresses these challenges with keyless signing.
Sigstore ecosystem components:
- Cosign: Signs and verifies container images and other artifacts
- Fulcio: Certificate authority that issues short-lived certificates based on OIDC identity (e.g., Google account, GitHub identity)
- Rekor: Transparency log that records all signing events, enabling public verification and accountability
- Gitsign: Applies Sigstore to Git commits
How keyless signing works:
- The signer authenticates with an OIDC provider (e.g., GitHub, Google)
- Fulcio issues a short-lived certificate binding the signer's identity to an ephemeral key pair
- The artifact is signed with the ephemeral private key
- The signing event is recorded in the Rekor transparency log
- The ephemeral private key is discarded
- Verification uses the transparency log and the OIDC identity, not a long-lived key
# Sign a container image with Cosign (keyless)
cosign sign ghcr.io/myorg/myapp:v1.2.3
# Verify a signed container image
cosign verify ghcr.io/myorg/myapp:v1.2.3 \
--certificate-identity=user@example.com \
--certificate-oidc-issuer=https://accounts.google.com
# Sign a binary artifact (blob)
cosign sign-blob --output-signature=file.sig myartifact.tar.gz
# Verify a signed blob
cosign verify-blob --signature=file.sig \
--certificate-identity=build@myorg.com \
--certificate-oidc-issuer=https://github.com/login/oauth \
myartifact.tar.gz
34.4.3 Package Integrity Mechanisms
Different ecosystems implement integrity verification differently:
npm: Uses lockfiles (package-lock.json) with SHA-512 integrity hashes. The npm audit signatures command verifies registry-level signatures. npm supports provenance attestations linking packages to their source repository and build system.
Python/PyPI: PEP 458 and PEP 480 define TUF (The Update Framework) integration for PyPI. Tools like pip-audit check for known vulnerabilities. Hash verification is supported through --require-hashes in pip.
Container images: Docker Content Trust (DCT) uses Notary to sign and verify images. OCI distribution supports artifact signing with Sigstore/Cosign.
Go: Go modules use a global transparency log (sum.golang.org) and a checksum database to verify module integrity. This is built into the Go toolchain by default.
34.4.4 Reproducible Builds
Reproducible builds ensure that given the same source code, build environment, and build instructions, a bit-for-bit identical binary artifact is produced every time. This allows independent verification that a distributed binary was genuinely built from a claimed source.
Challenges to reproducibility:
- Timestamps embedded in build artifacts
- Non-deterministic compiler optimizations
- Randomized data structures (hash map ordering)
- Embedded build paths
- Non-deterministic linking order
Achieving reproducibility:
- Use deterministic build tools (Bazel, Nix)
- Pin all tool and dependency versions
- Normalize timestamps (set to epoch or source commit time)
- Use hermetic build environments (no network access, fixed filesystem)
- Document and automate the complete build process
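Several of these normalizations can be demonstrated with a minimal Python sketch that builds a tar.gz entirely in memory; the file names and contents are illustrative. (Byte-identical output across machines still assumes the same Python and zlib versions, which is exactly the "pin your tools" point above.)

```python
import gzip
import hashlib
import io
import tarfile

def deterministic_targz_digest(files: dict) -> str:
    """Build a tar.gz in memory and return its SHA-256 digest.
    `files` maps archive path -> bytes. Normalizations applied:
    sorted entry order, mtime forced to the epoch, uid/gid and
    owner names zeroed, and mtime=0 on the gzip wrapper (gzip
    embeds its own timestamp in the header)."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=0) as gz:
        with tarfile.open(fileobj=gz, mode="w") as tar:
            for name in sorted(files):           # deterministic ordering
                data = files[name]
                info = tarfile.TarInfo(name=name)
                info.size = len(data)
                info.mtime = 0                   # normalize timestamps
                info.uid = info.gid = 0          # drop builder identity
                info.uname = info.gname = ""
                tar.addfile(info, io.BytesIO(data))
    return hashlib.sha256(buf.getvalue()).hexdigest()

files = {"app/main.py": b"print('hello')\n", "README.md": b"# demo\n"}
# Two independent "builds" of the same inputs yield the same digest:
print(deterministic_targz_digest(files) == deterministic_targz_digest(files))  # -> True
```

Remove any one normalization (for example, let mtime default to the current time) and the two digests diverge, which is how most real-world reproducibility failures look.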
Student Home Lab Exercise: Set up a local build system that produces reproducible builds. Start with a simple Go program (Go's toolchain supports reproducible builds natively). Build it twice in different directories and compare the SHA-256 hashes. Then try the same with a C program using GCC -- you will likely see differences. Investigate what causes those differences and how to eliminate them.
34.5 Third-Party Risk Assessment
Organizations consume software from hundreds of vendors and open-source projects. Assessing the risk each one introduces is critical to supply chain security.
34.5.1 Vendor Security Assessment
For commercial software vendors, a structured risk assessment process includes:
Pre-engagement assessment:
- Security certifications (SOC 2 Type II, ISO 27001, FedRAMP)
- Security questionnaire responses (SIG Lite, CAIQ)
- Independent penetration test results
- Incident history and response track record
- Business continuity and disaster recovery plans
Technical evaluation:
- How does the vendor distribute updates?
- What access does the vendor have to your environment?
- How does the vendor handle your data?
- What is the vendor's vulnerability disclosure and patching process?
- Does the vendor provide SBOMs for their products?
Ongoing monitoring:
- Continuous monitoring of vendor security posture
- Regular reassessment (annually at minimum)
- Threat intelligence feeds for vendor compromise indicators
- Contractual security requirements and audit rights
34.5.2 Open-Source Risk Assessment
Open-source software presents unique risk assessment challenges because there is no vendor relationship to leverage:
OpenSSF Scorecard: The Open Source Security Foundation provides automated security assessments of open-source projects. Scorecard evaluates factors like:
- Branch protection rules
- CI/CD configuration security
- Dependency update practices
- Code review requirements
- Vulnerability disclosure process
- Signed releases
- Active maintenance status
# Run OpenSSF Scorecard against a repository
scorecard --repo=github.com/example/project
# Check specific checks
scorecard --repo=github.com/example/project --checks=Branch-Protection,Code-Review
OpenSSF Criticality Score: Measures how critical an open-source project is to the ecosystem based on factors like dependent project count, contributor count, commit frequency, and organizational diversity.
Health metrics to evaluate:
- Maintenance activity: When was the last commit? Are issues being addressed?
- Community diversity: Is the project maintained by one person or a diverse community? Bus factor analysis.
- Security practices: Does the project have a SECURITY.md? Does it use memory-safe languages? Are there known unpatched vulnerabilities?
- License compliance: Is the license compatible with your usage? Are there licensing risks?
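Bus factor analysis in particular can be approximated directly from commit author data. The sketch below (author names are hypothetical) finds the smallest set of contributors responsible for a majority of commits; a result of 1 is the single-maintainer risk flagged throughout this chapter:

```python
from collections import Counter

def bus_factor(commit_authors, threshold=0.5):
    """Smallest number of authors who together account for more than
    `threshold` of all commits. A bus factor of 1 means the project
    effectively depends on a single person."""
    counts = Counter(commit_authors)
    total = sum(counts.values())
    covered = 0
    for rank, (_, n) in enumerate(counts.most_common(), start=1):
        covered += n
        if covered / total > threshold:
            return rank
    return len(counts)

# One maintainer wrote 80% of the commits in this illustrative history:
print(bus_factor(["alice"] * 8 + ["bob", "carol"]))  # -> 1
```

In practice you would feed this the author list from `git log --format=%ae`, ideally weighted by recency, since a once-diverse project can quietly collapse to one active maintainer.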
34.5.3 MedSecure Third-Party Assessment
MedSecure Running Example: As part of your supply chain security assessment for MedSecure, you evaluate their top 20 most critical dependencies. You discover that their Python backend relies on a medical data parsing library maintained by a single developer who hasn't committed code in 14 months. The library has three known CVEs, two rated high severity, that remain unpatched. This library processes patient data, making it a critical risk. Your assessment recommends: (1) immediate patching or forking of the library, (2) implementation of an organizational dependency governance policy, and (3) automated monitoring of all dependency health metrics.
34.6 Supply Chain Security Frameworks and Tools
Several frameworks and tools have emerged to systematically address supply chain security. Understanding these is essential for any security professional.
34.6.1 SLSA (Supply-chain Levels for Software Artifacts)
SLSA (pronounced "salsa") is a framework developed by Google and adopted by the OpenSSF to provide a structured approach to supply chain security. Its Build track defines four levels of increasing assurance, from Build L0 (no guarantees) through Build L3:
SLSA Level 1 -- Build L1: Documentation of the build process. The build platform automatically generates provenance (metadata about how an artifact was built). This is the minimum baseline.
SLSA Level 2 -- Build L2: Provenance is generated by a hosted build platform (not on the developer's machine). The provenance is signed and can be verified. This prevents many tampering scenarios.
SLSA Level 3 -- Build L3: The build platform implements hardened security controls. Build environments are isolated and ephemeral. The platform prevents developers from influencing the build process outside of the build configuration.
SLSA Provenance: A SLSA provenance statement answers three questions:
1. Who built the artifact? (The build platform identity)
2. What sources and dependencies were used? (Git commit, dependency hashes)
3. How was it built? (Build configuration, platform, environment)
{
"_type": "https://in-toto.io/Statement/v0.1",
"predicateType": "https://slsa.dev/provenance/v0.2",
"subject": [
{
"name": "myapp",
"digest": { "sha256": "abc123..." }
}
],
"predicate": {
"builder": { "id": "https://github.com/actions/runner" },
"buildType": "https://github.com/actions/workflow@v1",
"invocation": {
"configSource": {
"uri": "git+https://github.com/myorg/myapp@refs/heads/main",
"digest": { "sha1": "def456..." },
"entryPoint": ".github/workflows/build.yml"
}
},
"materials": [
{
"uri": "git+https://github.com/myorg/myapp",
"digest": { "sha1": "def456..." }
}
]
}
}
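Consuming provenance starts with checking that the artifact you actually hold matches a digest in the statement's subject field. A minimal Python sketch of that first step (the artifact bytes are illustrative, and real verification also checks the envelope signature and builder identity):

```python
import hashlib
import json

def subject_matches(provenance_json: str, artifact: bytes) -> bool:
    """Check that an artifact in hand matches a digest listed in the
    provenance statement's 'subject' field -- the first check any
    consumer performs before trusting the rest of the metadata."""
    statement = json.loads(provenance_json)
    actual = hashlib.sha256(artifact).hexdigest()
    return any(s.get("digest", {}).get("sha256") == actual
               for s in statement.get("subject", []))

artifact = b"fake release binary"
provenance = json.dumps({
    "_type": "https://in-toto.io/Statement/v0.1",
    "subject": [{"name": "myapp",
                 "digest": {"sha256": hashlib.sha256(artifact).hexdigest()}}],
})
print(subject_matches(provenance, artifact))           # -> True
print(subject_matches(provenance, b"tampered bytes"))  # -> False
```

If this check fails, the provenance describes some other artifact and everything else in the statement is irrelevant.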
34.6.2 Software Bill of Materials (SBOM)
An SBOM is a comprehensive inventory of all components in a software artifact, analogous to an ingredient list for food. SBOMs enable organizations to quickly assess their exposure when a new vulnerability is discovered.
SBOM formats:
- SPDX (Software Package Data Exchange): ISO/IEC 5962:2021 standard. Originally focused on license compliance, now expanded to security.
- CycloneDX: OWASP standard designed specifically for security use cases. Supports vulnerability attribution, services, and formulation (build process documentation).
SBOM generation tools:
| Tool | Supported Ecosystems | Output Formats |
|---|---|---|
| Syft | Container images, filesystems, archives | SPDX, CycloneDX |
| Trivy | Container images, filesystems, Git repos | SPDX, CycloneDX |
| cdxgen | JavaScript, Python, Java, Go, .NET, Rust | CycloneDX |
| SPDX tools | Various | SPDX |
| Microsoft SBOM Tool | .NET, npm, pip, Maven | SPDX |
# Generate SBOM with Syft (CycloneDX format)
syft packages dir:./myproject -o cyclonedx-json > sbom.json
# Generate SBOM for a container image
syft packages ghcr.io/myorg/myapp:latest -o spdx-json > sbom-spdx.json
# Scan SBOM for vulnerabilities with Grype
grype sbom:./sbom.json
SBOM lifecycle:
- Generation: Produce SBOMs during the build process (ideally as a CI/CD step)
- Storage: Store SBOMs alongside artifacts in registries
- Distribution: Share SBOMs with consumers (required by Executive Order 14028 for U.S. federal software)
- Monitoring: Continuously scan SBOMs against updated vulnerability databases
- Response: Use SBOMs to rapidly identify affected systems when new vulnerabilities are disclosed
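The monitoring and response steps reduce to matching SBOM components against an advisory feed. The toy feed below is keyed by exact (name, version) pairs for simplicity; real scanners such as Grype match version ranges and use ecosystem-aware identifiers:

```python
import json

SBOM = json.dumps({
    "bomFormat": "CycloneDX", "specVersion": "1.5",
    "components": [
        {"type": "library", "name": "flask", "version": "2.3.3",
         "purl": "pkg:pypi/flask@2.3.3"},
        {"type": "library", "name": "lodash", "version": "4.17.20",
         "purl": "pkg:npm/lodash@4.17.20"},
    ],
})

# Toy advisory feed keyed by exact (name, version).
VULN_DB = {("lodash", "4.17.20"): ["CVE-2021-23337"]}

def affected_components(sbom_json, vuln_db):
    """Return (purl, advisory) pairs for every SBOM component in the feed."""
    doc = json.loads(sbom_json)
    hits = []
    for comp in doc.get("components", []):
        for advisory in vuln_db.get((comp["name"], comp["version"]), []):
            hits.append((comp["purl"], advisory))
    return hits

print(affected_components(SBOM, VULN_DB))
# -> [('pkg:npm/lodash@4.17.20', 'CVE-2021-23337')]
```

This is why SBOMs shift vulnerability response from "grep every server" to a database query: when a new CVE drops, you re-run the match against stored SBOMs instead of rescanning deployed systems.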
34.6.3 The Update Framework (TUF)
TUF is a framework for securing software update systems. It addresses specific attacks against update mechanisms:
- Arbitrary software installation: Attacker convinces client to install malware
- Rollback attacks: Attacker provides an older, vulnerable version
- Indefinite freeze attacks: Attacker prevents client from learning about new updates
- Mix-and-match attacks: Attacker provides inconsistent set of files
TUF uses multiple signing roles (root, targets, snapshot, timestamp) with different keys and expiration policies to provide defense in depth. PEP 458 integrates TUF into PyPI's package distribution.
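The rollback and indefinite-freeze defenses can be sketched as client-side checks. This is a simplification: TUF actually enforces these properties through signed, expiring role metadata rather than ad hoc client logic, but the decision the client ultimately makes looks like this:

```python
import time

def accept_update(current_version, offered_version, metadata_expires_at):
    """Reject two of the attacks TUF's roles are designed to stop:
    rollbacks (offered version older than what is installed) and
    indefinite freezes (timestamp metadata past its expiry).
    Versions are comparable tuples; expiry is a Unix timestamp."""
    if metadata_expires_at < time.time():
        return False  # stale metadata: client may be frozen on old state
    if offered_version < current_version:
        return False  # rollback to an older, possibly vulnerable release
    return True

fresh = time.time() + 3600
print(accept_update((1, 2, 3), (1, 3, 0), fresh))            # -> True
print(accept_update((1, 2, 3), (1, 0, 0), fresh))            # -> False (rollback)
print(accept_update((1, 2, 3), (1, 3, 0), time.time() - 1))  # -> False (expired)
```

The short-lived timestamp role is what makes the expiry check meaningful: an attacker replaying old metadata cannot keep it looking fresh past its signed expiration.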
34.6.4 in-toto: Supply Chain Layout Verification
in-toto (Latin for "as a whole") is a framework that verifies the integrity of the entire software supply chain. It allows project owners to define a layout: a document that specifies exactly which steps should be performed, by whom, and in what order.
Key concepts:
- Layout: Defines the expected supply chain steps, authorized performers, and inspection criteria
- Link metadata: Recorded by each authorized step performer, documenting inputs, outputs, and commands
- Verification: The end consumer verifies that all steps were performed as specified and that outputs from one step match inputs to the next
Layout:
Step 1: Developer writes code (signed by developer key)
Step 2: Reviewer approves code (signed by reviewer key)
Step 3: CI builds artifact (signed by CI key)
Step 4: Security scan passes (signed by scanner key)
Verification at installation:
- All four link metadata files are present
- Each is signed by the correct key
- Outputs of step N match inputs of step N+1
- No unauthorized modifications occurred between steps
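The "outputs of step N match inputs of step N+1" rule can be sketched in Python. Step names and digests below are hypothetical, and real in-toto link metadata is signed by each step's key, which this sketch omits:

```python
def verify_chain(links):
    """links: ordered list of {'step', 'inputs', 'outputs'} dicts, where
    inputs/outputs map artifact name -> digest. Confirms each step
    consumed exactly what the previous step produced (signature
    verification omitted for brevity)."""
    for prev, nxt in zip(links, links[1:]):
        for name, digest in nxt["inputs"].items():
            if prev["outputs"].get(name) != digest:
                return False, (f"{name!r} changed between "
                               f"{prev['step']} and {nxt['step']}")
    return True, "chain intact"

links = [
    {"step": "write-code", "inputs": {}, "outputs": {"src.tar": "aaa"}},
    {"step": "build", "inputs": {"src.tar": "aaa"}, "outputs": {"app.bin": "bbb"}},
    {"step": "scan", "inputs": {"app.bin": "bbb"}, "outputs": {"app.bin": "bbb"}},
]
print(verify_chain(links))  # -> (True, 'chain intact')

links[1]["inputs"]["src.tar"] = "evil"  # tamper between steps 1 and 2
print(verify_chain(links)[0])  # -> False
```

The power of the model is that tampering anywhere between authorized steps breaks the digest chain, so the end consumer detects it without trusting any intermediate infrastructure.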
34.6.5 Practical Tool Integration
ShopStack Running Example: You are helping ShopStack implement a comprehensive supply chain security program. The implementation plan includes:
- Week 1-2: Generate SBOMs for all applications using Syft in CI/CD pipelines. Store SBOMs alongside container images in their registry.
- Week 3-4: Implement continuous vulnerability scanning with Grype against stored SBOMs. Configure alerting for critical and high-severity vulnerabilities.
- Week 5-6: Pin all CI/CD pipeline actions to commit SHAs. Implement branch protection and CODEOWNERS for pipeline configuration files.
- Week 7-8: Deploy Sigstore/Cosign for container image signing. Configure Kubernetes admission controllers to reject unsigned images.
- Week 9-12: Achieve SLSA Level 2 by implementing signed provenance in build pipelines. Begin work toward Level 3 with hardened build environments.
34.7 Conducting Supply Chain Security Assessments
As an ethical hacker, you will be called upon to assess the supply chain security posture of organizations. This section provides a structured methodology.
34.7.1 Assessment Methodology
Phase 1: Inventory and Mapping
- Enumerate all software dependencies (direct and transitive)
- Map CI/CD pipeline configurations and secrets
- Identify third-party services and integrations
- Document package manager configurations and registry settings
- Catalog build tools, development tools, and IDE extensions
Phase 2: Vulnerability Assessment
- Scan all dependencies for known vulnerabilities (CVEs)
- Check for end-of-life or unmaintained dependencies
- Evaluate dependency health metrics (OpenSSF Scorecard)
- Test for dependency confusion vulnerabilities
- Assess typosquatting exposure for internal package names
Phase 3: Configuration Review
- Review CI/CD pipeline security configurations
- Evaluate secret management practices
- Check code signing and artifact verification
- Assess branch protection and code review policies
- Review package manager security settings
Phase 4: Threat Modeling
- Model adversary capabilities and motivations
- Identify highest-impact attack paths through the supply chain
- Evaluate detection and response capabilities
- Assess blast radius for supply chain compromise scenarios
Phase 5: Reporting and Remediation
- Prioritize findings by risk (impact times likelihood)
- Provide specific, actionable remediation guidance
- Map findings to frameworks (SLSA, NIST SSDF, CIS Supply Chain Security)
- Recommend monitoring and continuous improvement measures
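The impact-times-likelihood prioritization in Phase 5 can be expressed directly; the finding titles and 1-5 scores below are illustrative, and many teams substitute CVSS or a weighted matrix for the raw product:

```python
def prioritize(findings):
    """findings: list of (title, impact 1-5, likelihood 1-5) tuples.
    Sort descending by the impact-times-likelihood risk score."""
    return sorted(findings, key=lambda f: f[1] * f[2], reverse=True)

findings = [
    ("Unpinned GitHub Actions", 4, 4),
    ("Missing SBOM generation", 3, 5),
    ("Stale dependency with known CVE", 5, 4),
]
for title, impact, likelihood in prioritize(findings):
    print(f"{impact * likelihood:>2}  {title}")
```

The value of an explicit scoring function is less the arithmetic than the forced conversation: the client must agree on what "impact 5" means for their business before the report is written.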
34.7.2 Common Findings
Based on industry assessments, the most common supply chain security findings include:
- No SBOM generation or management -- Organizations cannot identify which dependencies they use
- Unpinned dependencies -- Using version ranges instead of exact versions or hashes
- Stale dependencies -- Running versions with known, patched vulnerabilities
- Overly permissive CI/CD -- Pipeline tokens with excessive permissions
- No code signing -- No verification of artifact integrity or authenticity
- Missing lockfiles -- Allowing dependency resolution to vary between builds
- Third-party action/plugin risk -- Using unvetted CI/CD plugins with full pipeline access
- No dependency confusion protection -- Private package names not reserved on public registries
- Exposed build secrets -- API keys, tokens, and credentials accessible in build logs or environment
- Single maintainer risk -- Critical dependencies maintained by a single individual
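Several of these findings are mechanically detectable. As one example, a minimal sketch, assuming Python `requirements.txt` manifests, that flags entries not pinned to an exact version (package names are illustrative):

```python
import re

# A line pinned to an exact version uses "==". Ranges (>=, ~=) or bare
# names resolve differently between builds and are flagged as unpinned.
PIN_RE = re.compile(r"^\s*([A-Za-z0-9._-]+)\s*==\s*[\w.]+")

def find_unpinned(requirements_text: str) -> list:
    unpinned = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()      # drop comments and blanks
        if not line or line.startswith("-"):   # skip pip options like -r
            continue
        if not PIN_RE.match(line):
            unpinned.append(line)
    return unpinned

sample = "requests==2.31.0\nflask>=2.0\nnumpy\n# a comment\n"
print(find_unpinned(sample))  # ['flask>=2.0', 'numpy']
```

The same pattern extends to other manifests (`package.json` version ranges, unpinned `FROM` tags in Dockerfiles); a full assessment would also check that a lockfile exists and is committed.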
34.7.3 Testing Techniques
Dependency confusion testing (authorized):
# Step 1: Extract internal package names from lockfiles/manifests
# Step 2: Check if those names exist on public registries
pip index versions internal-package-name 2>/dev/null
npm view internal-package-name 2>/dev/null
# Step 3: In a controlled environment, register a benign test package
# Step 4: Trigger a build and observe whether the public package is fetched
# Step 5: Document findings and remediate
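Steps 1 and 2 above can be automated. A sketch, assuming internal packages share a hypothetical `acme-` naming prefix; it uses PyPI's JSON API (`https://pypi.org/pypi/<name>/json`), which returns 404 for unclaimed names:

```python
import urllib.error
import urllib.request

def extract_candidates(requirements_text: str, internal_prefix: str) -> list:
    """Pull internal-looking package names out of a requirements file."""
    names = []
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()
        if line and not line.startswith("-"):
            name = line.split("==")[0].split(">=")[0].strip()
            if name.startswith(internal_prefix):
                names.append(name)
    return names

def exists_on_pypi(name: str) -> bool:
    """True if the name resolves on the public index; 404 means unclaimed."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        return err.code != 404

reqs = "acme-billing==1.4.0\nrequests==2.31.0\nacme-auth>=2.0\n"
print(extract_candidates(reqs, "acme-"))  # ['acme-billing', 'acme-auth']
```

An unclaimed internal name on the public index is the precondition for dependency confusion; with authorization, Step 3 would then register a benign marker package under that name to confirm exploitability.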
CI/CD pipeline analysis:
# Identify GitHub Actions with excessive permissions
# Look for pull_request_target with checkout of PR code
# Check for script injection in run: steps
# Verify action references use commit SHAs, not tags
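The checklist above can be turned into a lightweight scanner. A heuristic, regex-based sketch (not a full YAML parser; the workflow content is illustrative):

```python
import re

# Heuristics mirroring the checklist:
#  - pull_request_target workflows that also check out PR code
#  - `uses:` references pinned to a mutable tag rather than a 40-char SHA
SHA_PIN = re.compile(r"uses:\s*[\w./-]+@[0-9a-f]{40}\b")
ANY_USES = re.compile(r"uses:\s*([\w./-]+@\S+)")

def audit_workflow(yaml_text: str) -> list:
    findings = []
    if "pull_request_target" in yaml_text and "actions/checkout" in yaml_text:
        findings.append("pull_request_target with checkout: PR code may run "
                        "with a write-scoped token")
    for match in ANY_USES.finditer(yaml_text):
        if not SHA_PIN.match(match.group(0)):
            findings.append(f"action not pinned to a commit SHA: {match.group(1)}")
    return findings

sample = """
on: pull_request_target
jobs:
  build:
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@8f152de45cc393bb48ce5d89d36b731f54556e65
"""
for finding in audit_workflow(sample):
    print(finding)
```

Production-grade tools (OpenSSF Scorecard, zizmor, actionlint) perform these checks and more; the value of a sketch like this is seeing exactly which patterns they look for.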
Artifact integrity verification:
# Verify container image signatures
cosign verify ghcr.io/org/app:latest \
--certificate-identity-regexp='.*@org\.com' \
--certificate-oidc-issuer='https://accounts.google.com'
# Check npm package provenance
npm audit signatures
# Verify Go module checksums against go.sum
go mod verify
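All of these commands ultimately reduce to the same primitive: computing a cryptographic digest of the artifact and comparing it against a trusted value obtained out of band. A minimal sketch of that core check:

```python
import hashlib
import hmac

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Compare an artifact's SHA-256 digest to a value from a trusted channel."""
    actual = hashlib.sha256(data).hexdigest()
    # hmac.compare_digest runs in constant time, avoiding timing side channels.
    return hmac.compare_digest(actual, expected_sha256.lower())

artifact = b"release-tarball-bytes"  # stand-in for downloaded artifact bytes
good_digest = hashlib.sha256(artifact).hexdigest()
print(verify_artifact(artifact, good_digest))  # True
print(verify_artifact(artifact, "0" * 64))     # False
```

Signature verification adds a second layer on top of this: the expected digest itself is signed, so an attacker who can tamper with the artifact must also forge the signature, not just republish a matching checksum file.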
34.8 Emerging Threats and Future Directions
The supply chain threat landscape continues to evolve rapidly. Several emerging trends deserve attention.
34.8.1 AI/ML Supply Chain Risks
Machine learning models introduce new supply chain concerns:
- Model poisoning: Compromised training data or model weights can introduce backdoors
- Model registries: Platforms like Hugging Face host pre-trained models that could be tampered with
- Pickle deserialization: Many ML frameworks use pickle serialization, which executes arbitrary code during deserialization
- Supply chain for training data: Poisoned datasets can influence model behavior in subtle ways
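The pickle risk above is easy to demonstrate safely. On deserialization, pickle calls whatever callable a class's `__reduce__` method returns; here a harmless `str.upper` stands in for something like `os.system`:

```python
import pickle

class Malicious:
    def __reduce__(self):
        # Pickle records this as "call str.upper('arbitrary code ran')"
        # and executes it during pickle.loads -- before any application
        # code gets a chance to inspect the object.
        return (str.upper, ("arbitrary code ran",))

payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # the callable runs here
print(result)  # ARBITRARY CODE RAN
```

This is why downloading a pickled model file is equivalent to downloading an executable: loading it runs attacker-chosen code. Safer serialization formats for model weights (such as safetensors) exist precisely to close this gap.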
34.8.2 Infrastructure-as-Code Supply Chains
Terraform modules, Ansible roles, and Kubernetes Helm charts are increasingly targeted:
- Malicious Terraform providers can exfiltrate cloud credentials
- Compromised Helm charts can deploy cryptocurrency miners or backdoors
- Ansible Galaxy roles from untrusted sources can modify system configurations
34.8.3 Hardware Supply Chain Considerations
While this chapter focuses on software, hardware supply chain risks are real and growing:
- Counterfeit components with potential backdoors
- Firmware compromise during manufacturing or distribution
- Pre-installed malware on devices (documented cases with Android phones)
34.8.4 Regulatory Landscape
Governments worldwide are mandating supply chain security:
- U.S. Executive Order 14028 requires SBOMs for software sold to the federal government
- EU Cyber Resilience Act mandates vulnerability handling and SBOM provision for products with digital elements
- NIST Secure Software Development Framework (SSDF) provides guidelines for secure development practices
- FedRAMP and StateRAMP incorporate supply chain requirements for cloud services
34.9 Putting It All Together
Supply chain security is not a single tool or practice. It is a comprehensive approach that touches every aspect of software development, delivery, and operations.
Key principles to remember:
- Trust, but verify. Every component in your supply chain should be verified cryptographically.
- Minimize your attack surface. Reduce dependencies, pin versions, use lockfiles.
- Assume compromise. Design your supply chain so that compromise of any single component is detectable and containable.
- Automate security. Supply chain security must be embedded in CI/CD pipelines, not performed manually.
- Inventory everything. You cannot protect what you do not know about. SBOMs are foundational.
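The pinning principle translates directly into manifest hygiene. One illustrative fragment for a Python project (the digest shown is a placeholder, not a real hash):

```
# requirements.txt -- exact version pin plus expected digest; in
# hash-checking mode, pip refuses any artifact whose hash does not match
requests==2.31.0 \
    --hash=sha256:<digest-recorded-from-a-trusted-build>
```

Installed with `pip install --require-hashes -r requirements.txt`, a tampered or substituted package fails the install rather than silently entering the build.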
Student Home Lab Exercise: Build a complete supply chain security lab:
- Create a small Python or JavaScript project with several dependencies
- Generate an SBOM using Syft
- Scan for vulnerabilities using Grype or Trivy
- Set up a GitHub Actions pipeline with pinned actions and signed artifacts
- Implement Cosign signing for container images
- Configure Dependency-Track for continuous SBOM monitoring
- Run OpenSSF Scorecard against open-source dependencies
Blue Team Perspective: Supply chain security is a team sport. Development, security, operations, and procurement teams must collaborate. Establish a cross-functional supply chain security working group with representatives from each team. Define policies for dependency approval, vulnerability response SLAs, and SBOM requirements for all software acquisitions.
Summary
This chapter explored the rapidly evolving field of software supply chain security. We examined the anatomy of modern software supply chains, with their complex webs of dependencies, build systems, and distribution channels. We dissected dependency confusion and typosquatting attacks, understanding how simple naming tricks can lead to code execution inside major corporations. We analyzed CI/CD pipeline security, recognizing that build systems are high-value targets with broad access to secrets and production environments.
We studied code signing and integrity verification mechanisms, from traditional PKI-based signing to modern keyless approaches with Sigstore. We explored third-party risk assessment methodologies for both commercial vendors and open-source projects. We examined the frameworks and tools that systematize supply chain security: SLSA for build provenance, SBOMs for component inventory, TUF for update security, and in-toto for end-to-end verification.
The chapter concluded with practical assessment methodologies, common findings from real-world engagements, and a look at emerging threats including AI/ML supply chain risks and the expanding regulatory landscape. Supply chain attacks are increasing in frequency and sophistication. As ethical hackers and security professionals, your ability to assess and strengthen supply chains is essential to protecting the organizations you serve.
In the next chapter, we will explore Red Team Operations, where the supply chain attack techniques covered here become one component of comprehensive adversary emulation exercises.