This project implements a CI/CD pipeline where security checks run automatically at every stage. The goal was not to slow deployments down with security checks, but to catch issues early enough that fixing them is cheap: before they reach production, or even a pull request review.
Every push to main and every pull request runs through the full pipeline. If any security check fails, the pipeline stops and the deployment does not happen.
```
Push / Pull Request
         │
         ▼
1. Secret Detection (Gitleaks)
   Scans the commit for hardcoded credentials, API keys, and tokens.
   Fails immediately if any are found — before any other step runs.
         │
         ▼
2. SAST — Static Analysis (Bandit + Semgrep)
   Analyzes Python source code for security vulnerabilities without running it.
   Catches SQL injection, hardcoded passwords, insecure deserialization, etc.
         │
         ▼
3. Dependency Scanning (Safety + pip-audit)
   Checks all Python dependencies against known CVE databases.
   Fails if any dependency has a high or critical vulnerability.
         │
         ▼
4. IaC Scanning (Checkov)
   Scans all Terraform files for misconfigurations before terraform apply.
   Catches open security groups, unencrypted S3 buckets, missing logging, etc.
         │
         ▼
5. Container Image Scanning (Trivy)
   Builds the Docker image and scans it for OS and library vulnerabilities.
   Fails if any HIGH or CRITICAL CVEs are found in the image.
         │
         ▼
6. Policy-as-Code (OPA + Conftest)
   Validates Terraform plan output against custom Rego policies.
   Enforces org-wide rules: no public S3, all resources must have tags, etc.
         │
         ▼
7. Deploy to ECS (least-privilege)
   Pushes the image to ECR and deploys to ECS Fargate.
   Uses a deployment role scoped only to the actions needed for this deploy.
```
| Tool | Stage | What it checks |
|---|---|---|
| Gitleaks | Pre-build | Hardcoded secrets in source code and git history |
| Bandit | SAST | Python-specific security vulnerabilities |
| Semgrep | SAST | General security patterns across multiple languages |
| Safety | Dependencies | Python packages with known CVEs |
| pip-audit | Dependencies | Secondary CVE check against PyPI advisory database |
| Checkov | IaC | Terraform misconfigurations against CIS and NIST benchmarks |
| Trivy | Container | OS packages and library CVEs inside Docker images |
| Conftest + OPA | Policy | Custom organizational policies written in Rego |
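As an illustration of how the gates chain together, a GitHub Actions workflow can enforce this ordering with `needs:`, so a later job never starts if an earlier gate fails. The job names and tool invocations below are examples, not the project's actual `pipeline.yml`:

```yaml
# Illustrative gate ordering — not the real pipeline.yml.
jobs:
  secrets:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0          # Gitleaks scans full git history
      - uses: gitleaks/gitleaks-action@v2
  sast:
    needs: secrets                # only runs if secret detection passed
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install bandit semgrep
      - run: bandit -r app/ -ll                     # fail on findings
      - run: semgrep scan --config p/ci --error     # exit 1 on findings
```

Because each job declares `needs:` on the previous one, a single failing gate stops the whole chain, which is exactly the fail-fast behavior described above.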
**Two SAST tools instead of one:** Bandit is Python-specific and catches things like `subprocess.call` with `shell=True`, use of `pickle`, and MD5 for password hashing. Semgrep catches broader patterns and can be extended with custom rules. The two have different rule sets, so running both reduces false negatives.
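To make the idea concrete, here is a minimal sketch of the kind of check a SAST tool performs — an illustration of the technique, not Bandit's actual implementation:

```python
import ast

def find_shell_true(source: str) -> list[int]:
    """Return line numbers of calls that pass shell=True,
    the pattern behind Bandit's subprocess-with-shell checks."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    findings.append(node.lineno)
    return findings

sample = """import subprocess
subprocess.call("ls " + user_input, shell=True)  # flagged
subprocess.run(["ls", "-l"])                     # safe
"""
print(find_shell_true(sample))  # [2]
```

The code is never executed — the source is parsed into an AST and inspected, which is what "static" in SAST means.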
**Two dependency scanners:** Safety uses a curated vulnerability database maintained by PyUp; pip-audit queries the PyPI advisory database directly. They occasionally return different results, so running both gives better coverage.
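The coverage argument is just a union of findings. A sketch (the scanner reports and advisory IDs below are made up for illustration):

```python
def merge_findings(*reports: dict) -> dict:
    """Union per-package advisories reported by multiple scanners."""
    merged: dict[str, set[str]] = {}
    for report in reports:
        for package, advisories in report.items():
            merged.setdefault(package, set()).update(advisories)
    return merged

# Hypothetical output of two scanners with partially overlapping databases:
safety_report = {"requests": {"PYSEC-0001"}}
pip_audit_report = {"requests": {"PYSEC-0001", "GHSA-xyz"},
                    "flask": {"PYSEC-0002"}}

print(merge_findings(safety_report, pip_audit_report))
```

Either scanner alone would miss an advisory the other knows about; the merged view is what the pipeline gates on.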
**Checkov before `terraform apply`:** Scanning the Terraform source files catches misconfigurations before any AWS API calls are made. This is faster and cheaper than deploying and then finding issues. The pipeline never runs `terraform apply` if Checkov finds violations.
**OPA in addition to Checkov:** Checkov checks against known benchmarks. OPA enforces custom organizational policies that may not be covered by any benchmark — things like required tagging conventions, approved regions, or internal naming standards.
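As a hedged illustration, a tagging rule of this kind might look like the following in Rego, evaluated by Conftest against `terraform show -json` plan output. The actual rules live in `policies/terraform.rego` and may differ:

```rego
package main

# Deny any planned resource that lacks a required Environment tag.
deny[msg] {
  rc := input.resource_changes[_]
  not rc.change.after.tags.Environment
  msg := sprintf("%s is missing the required Environment tag", [rc.address])
}
```

Conftest exits nonzero when any `deny` rule fires, which is what turns the policy into a blocking pipeline gate.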
```
.
├── .github/
│   └── workflows/
│       ├── pipeline.yml          # Main pipeline — runs on every push and PR
│       └── scheduled-scan.yml    # Weekly full scan of dependencies and images
├── app/
│   ├── main.py                   # Sample Python application
│   ├── requirements.txt
│   └── Dockerfile
├── modules/
│   ├── ecr/                      # ECR repository with image scanning enabled
│   ├── iam/                      # Deployment role with least-privilege permissions
│   ├── s3/                       # S3 bucket for pipeline artifacts
│   └── ecs/                      # ECS Fargate cluster and service definition
├── policies/
│   └── terraform.rego            # OPA policies for Terraform plan validation
├── scripts/
│   └── check-failures.sh         # Aggregates tool output and reports findings
├── docs/
│   └── adding-security-gates.md  # How to add new security checks to the pipeline
├── main.tf
├── variables.tf
└── outputs.tf
```
```bash
git clone https://github.com/OueSan/aws-devsecops-secure-pipeline
cd aws-devsecops-secure-pipeline
terraform init
cp terraform.tfvars.example terraform.tfvars
# fill in your values
terraform plan
terraform apply
```

Then configure the following GitHub Actions secrets in your repository settings:
- `AWS_ROLE_ARN` — ARN of the GitHub Actions OIDC role
- `AWS_REGION` — AWS region for deployments
- `ECR_REPOSITORY` — ECR repository URI
- `ECS_CLUSTER` — ECS cluster name
- `ECS_SERVICE` — ECS service name
- `ECS_TASK_DEFINITION` — ECS task definition name
The most important lesson was understanding the difference between blocking gates and informational gates. Secret detection and critical-severity dependency CVEs should always be blocking: there is no acceptable reason to deploy code containing hardcoded credentials. SAST results, on the other hand, sometimes include false positives, so those are worth reviewing rather than blindly blocking on every finding.
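That blocking-versus-informational split boils down to a severity threshold. A sketch, with made-up findings:

```python
# Conventional severity ladder used by most scanners.
SEVERITY_ORDER = ["LOW", "MEDIUM", "HIGH", "CRITICAL"]

def should_block(findings: list[dict], threshold: str = "HIGH") -> bool:
    """Block the pipeline only if any finding meets or exceeds the threshold;
    everything below it is reported for human review instead."""
    cutoff = SEVERITY_ORDER.index(threshold)
    return any(SEVERITY_ORDER.index(f["severity"]) >= cutoff for f in findings)

findings = [
    {"id": "possible-sql-string", "severity": "MEDIUM"},   # informational
    {"id": "vulnerable-dependency", "severity": "CRITICAL"},  # blocking
]
print(should_block(findings))  # True
```

Per-gate thresholds fall out naturally: secret detection effectively runs with a threshold of zero, while SAST can gate at HIGH and surface the rest as review comments.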
The OPA policies were interesting to write because they forced me to think about what organizational rules actually matter enough to enforce automatically. Things like "every resource must have an Environment tag" sound obvious but are rarely enforced consistently without automation.
OIDC authentication between GitHub Actions and AWS was also new to me. The traditional approach of storing AWS access keys as GitHub secrets is a bad practice — those keys are long-lived, and if the repository is ever compromised, so are the keys. OIDC lets GitHub Actions assume an IAM role directly using a short-lived token, with no static credentials stored anywhere.
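In workflow terms, the OIDC handshake looks roughly like this sketch, using the `AWS_ROLE_ARN` secret described above (a minimal example, not the project's actual workflow):

```yaml
permissions:
  id-token: write   # allow the job to request a short-lived OIDC token
  contents: read

steps:
  - uses: aws-actions/configure-aws-credentials@v4
    with:
      role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
      aws-region: ${{ secrets.AWS_REGION }}
```

The only secret stored here is a role ARN, which is not a credential by itself: the IAM role's trust policy decides which repository and branch may assume it.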