This guide provides deployment instructions for the multi-tenant logging pipeline, covering both local development and production environments.
- Podman for container builds and LocalStack
- Go 1.21+ for log processor development
- Terraform for infrastructure as code
- Make for development workflow automation
- AWS CLI configured with appropriate permissions
- kubectl configured for your Kubernetes clusters
- Access to ECR for container image storage (production only)
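The prerequisites above can be sanity-checked before starting. A minimal sketch (this loop is illustrative, not part of the repository or Makefile):

```shell
# Report which of the required CLI tools are present on PATH.
for tool in podman go terraform make aws kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```

Any `missing:` line should be resolved before continuing with the steps below.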
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ LocalStack │───▶│ Terraform │───▶│ Make │
│ │ │ │ │ │
│ S3, DynamoDB │ │ Multi-Account│ │ Workflow │
│ IAM, Lambda │ │ Simulation │ │ Automation │
└──────────────┘ └──────────────┘ └──────────────┘
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ Clusters │───▶│ Central │───▶│ Customer │
│ │ │ │ │ │
│ Vector │ │ S3, DynamoDB│ │ CloudWatch │
│ Collection │ │ IAM Roles │ │ S3 │
└─────────────┘ └─────────────┘ └─────────────┘
# View all available commands
make help
# Start LocalStack
make start
# Build the log processor container
make build
# Deploy infrastructure to LocalStack
make deploy
# Run integration tests
make test-e2e
# Clean up everything
make clean

docker compose up -d
# Wait for LocalStack to be ready
curl http://localhost:4566/_localstack/health

cd container/
podman build -f Containerfile.processor_go -t log-processor:local .

cd terraform/local/
# Initialize Terraform
terraform init
# Plan deployment
terraform plan
# Deploy infrastructure
terraform apply -auto-approve

cd container/
go test -count=1 -tags=integration ./integration -v -timeout 5m

- Set up OIDC Provider for your cluster:
# For OpenShift/ROSA clusters
OIDC_URL=$(oc get authentication.config.openshift.io cluster -o json | \
jq -r .spec.serviceAccountIssuer | sed 's|https://||')
# For EKS clusters
OIDC_URL=$(aws eks describe-cluster --name YOUR_CLUSTER \
--query "cluster.identity.oidc.issuer" --output text | sed 's|https://||')
# Create OIDC provider in AWS
aws iam create-open-id-connect-provider \
--url https://${OIDC_URL} \
--client-id-list openshift # or "sts.amazonaws.com" for EKS

- Deploy Cluster-Specific IAM Roles (see production infrastructure documentation)
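The `sed 's|https://||'` in the commands above only strips the URL scheme so it can be re-added uniformly as `https://${OIDC_URL}` when creating the provider. A quick sanity check with a placeholder issuer value (the URL here is an assumption for illustration):

```shell
# Placeholder issuer URL, standing in for the oc/aws query output above
ISSUER="https://oidc.op1.openshiftapps.com/abc123"

# Strip the scheme, exactly as the sed pipeline in the guide does
OIDC_URL=$(printf '%s\n' "$ISSUER" | sed 's|https://||')

echo "$OIDC_URL"            # oidc.op1.openshiftapps.com/abc123
echo "https://${OIDC_URL}"  # the form passed to --url
```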
# Create logging namespace
kubectl create namespace logging
# Deploy using Kustomize base configuration
kubectl apply -k k8s/collector/base
# Verify deployment
kubectl get pods -n logging
kubectl logs -n logging daemonset/vector-logs

# Create logging namespace
kubectl create namespace logging
# Deploy using OpenShift overlay with SecurityContextConstraints
kubectl apply -k k8s/collector/overlays/cuppett
# Verify deployment
kubectl get pods -n logging
kubectl get scc vector-scc

Update Vector ConfigMap with your environment values:
# In k8s/collector/overlays/YOUR-ENV/vector-config-patch.yaml
configMapGenerator:
- name: vector-config
behavior: merge
literals:
- AWS_REGION=us-east-1
- S3_BUCKET_NAME=your-central-logging-bucket
- S3_WRITER_ROLE_ARN=arn:aws:iam::ACCOUNT:role/your-s3-writer-role
- CLUSTER_ID=your-cluster-identifier

For Kubernetes-based processing instead of Lambda:
# Deploy log processor
kubectl apply -k k8s/processor/overlays/cuppett
# Verify deployment
kubectl get pods -n logging
kubectl logs -n logging deployment/log-processor

For production deployments, customers need to:
- Set up IAM roles for cross-account log delivery
- Configure S3 buckets (if using S3 delivery)
- Set up CloudWatch Log Groups (if using CloudWatch delivery)
- Provide role ARNs back to the logging service provider
See production infrastructure documentation for detailed IAM role requirements.
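When customers provide role ARNs back, it can help to sanity-check them before wiring them into tenant configuration. ARNs are colon-delimited with the account ID in field 5; a minimal check (the ARN value here is a placeholder, not a real role):

```shell
# Placeholder ARN, standing in for a customer-provided value
ROLE_ARN="arn:aws:iam::123456789012:role/customer-log-delivery"

# Field 5 of a colon-delimited ARN is the account ID
ACCOUNT_ID=$(printf '%s' "$ROLE_ARN" | cut -d: -f5)
echo "$ACCOUNT_ID"   # 123456789012

# AWS account IDs are 12 digits
case "$ACCOUNT_ID" in
  [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]) echo "looks valid" ;;
  *) echo "unexpected account id" ;;
esac
```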
Tenant configurations are automatically created by Terraform in LocalStack:
# View tenant configs
TABLE_NAME=$(cd terraform/local && terraform output -raw central_dynamodb_table)
aws --endpoint-url=http://localhost:4566 dynamodb scan --table-name $TABLE_NAME
# Check specific tenant
aws --endpoint-url=http://localhost:4566 dynamodb get-item \
--table-name $TABLE_NAME \
--key '{"tenant_id":{"S":"customer1"},"type":{"S":"cloudwatch"}}'

For production deployments, configure tenants via the API or directly in DynamoDB with appropriate IAM permissions.
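The get-item call above uses a composite key of tenant_id plus delivery type. The key JSON can be built once and reused across calls; a small sketch:

```shell
tenant=customer1
delivery=cloudwatch

# Build the DynamoDB key document for this tenant/type pair
KEY=$(printf '{"tenant_id":{"S":"%s"},"type":{"S":"%s"}}' "$tenant" "$delivery")
echo "$KEY"

# then, for example:
# aws --endpoint-url=http://localhost:4566 dynamodb get-item \
#   --table-name "$TABLE_NAME" --key "$KEY"
```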
# Run full integration test suite
make test-e2e
# Or manually
cd container/
go test -count=1 -tags=integration ./integration -v -timeout 5m

# Test Vector log routing to customer buckets
make validate-vector-flow

# Check LocalStack health
curl http://localhost:4566/_localstack/health
# View LocalStack logs
make logs
# Check Vector status (if deployed to cluster)
kubectl get pods -n logging
kubectl logs -n logging daemonset/vector-logs --tail=50

For detailed troubleshooting information, see the Troubleshooting Guide.
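The health check above only samples the endpoint once; when diagnosing startup races it can help to poll until LocalStack answers. A hypothetical helper (wait_for is not part of the Makefile):

```shell
# Retry a command until it succeeds or the attempts run out.
# usage: wait_for <retries> <delay_seconds> <command...>
wait_for() {
  retries=$1; delay=$2; shift 2
  n=0
  while [ "$n" -lt "$retries" ]; do
    if "$@" >/dev/null 2>&1; then
      echo "ready"
      return 0
    fi
    n=$((n + 1))
    sleep "$delay"
  done
  echo "timed out"
  return 1
}

# e.g. poll the LocalStack endpoint from this guide:
# wait_for 30 2 curl -fsS http://localhost:4566/_localstack/health
```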
# Check LocalStack health
curl http://localhost:4566/_localstack/health
# View LocalStack logs
make logs
# Check terraform state
cd terraform/local && terraform show
# Verify tenant configurations
TABLE_NAME=$(cd terraform/local && terraform output -raw central_dynamodb_table)
aws --endpoint-url=http://localhost:4566 dynamodb scan --table-name $TABLE_NAME

After successful local setup:
- Run Tests: Validate functionality with make test-e2e
- Explore Terraform: Review infrastructure in terraform/local/
- Modify Configuration: Adjust tenant configs in Terraform
- Test Vector Flow: Run make validate-vector-flow
- Production Planning: Review architecture and IAM requirements
For ongoing development, see:
- Development Guide - Local development workflow
- API Management Guide - Tenant configuration API
- Architecture Documentation - System design
- Makefile - Available development commands