CI/CD Integration by Scanning Approach
This document maps each of our CI/CD examples to the scanning approach it demonstrates, so you can choose the right workflow for your container scanning needs.
Kubernetes API Approach
The Kubernetes API Approach is our recommended method for scanning containers in production environments. Support for distroless containers is currently in progress through enhancements to the train-k8s-container plugin.
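Every example below ultimately wraps the same transport invocation. As a point of reference, here is the bare command the workflows assemble (the profile, namespace, pod, and container names are illustrative):
```bash
# Scan one container over the Kubernetes API; the transport URI format is
# k8s-container://<namespace>/<pod>/<container>
cinc-auditor exec dev-sec/linux-baseline \
  -t k8s-container://app-scan/my-pod/app \
  --reporter cli json:scan-results.json
```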
GitHub Actions Implementation
CI/CD Pipeline
```yaml
name: CI/CD Pipeline with CINC Auditor Scanning
on:
workflow_dispatch:
inputs:
image_tag:
description: 'Tag for the container image'
required: true
default: 'latest'
scan_namespace:
description: 'Kubernetes namespace for scanning'
required: true
default: 'app-scan'
threshold:
description: 'Minimum passing score (0-100)'
required: true
default: '70'
jobs:
build-deploy-scan:
name: Build, Deploy and Scan Container
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Define test application
run: |
# Create a simple application for testing
mkdir -p ./app
# Create a minimal Dockerfile
cat > ./app/Dockerfile << 'EOF'
FROM alpine:latest
# Add some packages to test vulnerability scanning
RUN apk add --no-cache bash curl wget
# Add a sample script
COPY hello.sh /hello.sh
RUN chmod +x /hello.sh
# Set CMD
CMD ["/bin/sh", "-c", "while true; do /hello.sh; sleep 300; done"]
EOF
# Create a simple script file
cat > ./app/hello.sh << 'EOF'
#!/bin/bash
echo "Hello from test container! The time is $(date)"
echo "Running as user: $(whoami)"
echo "OS release: $(cat /etc/os-release | grep PRETTY_NAME)"
EOF
- name: Set up Minikube
uses: medyagh/setup-minikube@master
with:
driver: docker
start-args: --nodes=2
- name: Build container image
run: |
# Configure to use minikube's Docker daemon
eval $(minikube docker-env)
# Build the image
docker build -t test-app:${{ github.event.inputs.image_tag }} ./app
# List images to confirm
docker images | grep test-app
- name: Create Kubernetes deployment
run: |
# Create namespace
kubectl create namespace ${{ github.event.inputs.scan_namespace }}
# Create deployment
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app
namespace: ${{ github.event.inputs.scan_namespace }}
labels:
app: test-app
spec:
replicas: 1
selector:
matchLabels:
app: test-app
template:
metadata:
labels:
app: test-app
security-scan: "enabled"
spec:
containers:
- name: app
image: test-app:${{ github.event.inputs.image_tag }}
imagePullPolicy: Never
EOF
# Wait for deployment to be ready
kubectl -n ${{ github.event.inputs.scan_namespace }} rollout status deployment/test-app --timeout=120s
# Get pod name
POD_NAME=$(kubectl get pods -n ${{ github.event.inputs.scan_namespace }} -l app=test-app -o jsonpath='{.items[0].metadata.name}')
echo "APP_POD=${POD_NAME}" >> $GITHUB_ENV
# Show pods
kubectl get pods -n ${{ github.event.inputs.scan_namespace }} --show-labels
- name: Set up CINC Auditor
run: |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Create a custom profile for application scanning
mkdir -p ./app-scan-profile
# Create profile files
cat > ./app-scan-profile/inspec.yml << 'EOF'
name: app-scan-profile
title: Custom Application Container Scan
maintainer: Security Team
copyright: Security Team
license: Apache-2.0
summary: A custom profile for scanning containerized applications
version: 0.1.0
supports:
platform: os
EOF
mkdir -p ./app-scan-profile/controls
cat > ./app-scan-profile/controls/container_checks.rb << 'EOF'
control 'container-1.1' do
impact 0.7
title 'Ensure container is not running as root'
desc 'Containers should not run as root when possible'
describe command('whoami') do
its('stdout') { should_not cmp 'root' }
end
end
control 'container-1.2' do
impact 0.5
title 'Check container OS version'
desc 'Verify the container OS version'
describe file('/etc/os-release') do
it { should exist }
its('content') { should include 'Alpine' }
end
end
control 'container-1.3' do
impact 0.3
title 'Check for unnecessary packages'
desc 'Container should not have unnecessary packages'
describe package('curl') do
it { should_not be_installed }
end
describe package('wget') do
it { should_not be_installed }
end
end
control 'container-1.4' do
impact 0.7
title 'Check for sensitive files'
desc 'Container should not have sensitive files'
describe file('/etc/shadow') do
it { should exist }
it { should_not be_readable.by('others') }
end
end
EOF
- name: Setup secure scanning infrastructure
run: |
# Create a unique ID for this run
RUN_ID=$(date +%s)
echo "RUN_ID=${RUN_ID}" >> $GITHUB_ENV
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: cinc-scanner-${RUN_ID}
namespace: ${{ github.event.inputs.scan_namespace }}
EOF
# Create role with label-based access
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: cinc-scanner-role-${RUN_ID}
namespace: ${{ github.event.inputs.scan_namespace }}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
# No resourceNames restriction - use label selector in code
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
# No resourceNames restriction - use label selector in code
EOF
# Create rolebinding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: cinc-scanner-binding-${RUN_ID}
namespace: ${{ github.event.inputs.scan_namespace }}
subjects:
- kind: ServiceAccount
name: cinc-scanner-${RUN_ID}
namespace: ${{ github.event.inputs.scan_namespace }}
roleRef:
kind: Role
name: cinc-scanner-role-${RUN_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- name: Setup SAF-CLI
run: |
# Install Node.js (should already be installed on GitHub runners)
node --version || echo "Node.js not installed"
# Install SAF-CLI globally
npm install -g @mitre/saf
# Verify installation
saf --version
- name: Run security scan with CINC Auditor
run: |
# Generate token
TOKEN=$(kubectl create token cinc-scanner-${RUN_ID} -n ${{ github.event.inputs.scan_namespace }} --duration=15m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Create kubeconfig
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${SERVER}
certificate-authority-data: ${CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${{ github.event.inputs.scan_namespace }}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${TOKEN}
EOF
chmod 600 scan-kubeconfig.yaml
# Verify we can access the pod with our labels
POD_NAME=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n ${{ github.event.inputs.scan_namespace }} -l security-scan=enabled -o jsonpath='{.items[0].metadata.name}')
if [ -z "$POD_NAME" ]; then
echo "Error: No pod found with security-scan=enabled label"
exit 1
fi
echo "Found pod to scan: ${POD_NAME}"
# Run the CINC Auditor scan; capture the exit code without tripping bash -e
SCAN_EXIT_CODE=0
KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ./app-scan-profile \
-t k8s-container://${{ github.event.inputs.scan_namespace }}/${POD_NAME}/app \
--reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
echo "CINC Auditor scan completed with exit code: ${SCAN_EXIT_CODE}"
# Also run a standard profile for comparison
echo "Running standard DevSec Linux Baseline for comparison:"
KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec dev-sec/linux-baseline \
-t k8s-container://${{ github.event.inputs.scan_namespace }}/${POD_NAME}/app \
--reporter cli json:baseline-results.json || true
- name: Generate scan summary with SAF-CLI
run: |
# Create summary report with SAF-CLI
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Create a proper threshold file
cat > threshold.yml << EOF
compliance:
min: ${{ github.event.inputs.threshold }}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
THRESHOLD_EXIT_CODE=0
saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?
if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_EXIT_CODE
fi
# Generate summary for baseline results too
echo "Generating baseline summary with SAF-CLI:"
saf summary --input baseline-results.json --output-md baseline-summary.md
# Create a combined summary for GitHub step summary
echo "## Custom Application Profile Results" > $GITHUB_STEP_SUMMARY
cat scan-summary.md >> $GITHUB_STEP_SUMMARY
echo "## Linux Baseline Results" >> $GITHUB_STEP_SUMMARY
cat baseline-summary.md >> $GITHUB_STEP_SUMMARY
- name: Upload scan results
uses: actions/upload-artifact@v4
with:
name: security-scan-results
path: |
scan-results.json
baseline-results.json
scan-summary.md
baseline-summary.md
- name: Cleanup resources
if: always()
run: |
kubectl delete namespace ${{ github.event.inputs.scan_namespace }}
```
This workflow implements:
- Complete CI/CD pipeline with build, deploy, and scan steps
- Standard Kubernetes API-based scanning
- SAF-CLI integration for threshold checking
- Quality gates enforcement options
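Because the workflow is triggered via workflow_dispatch, it can also be launched from the command line. A sketch, assuming an authenticated GitHub CLI and illustrative input values:
```bash
# Kick off the pipeline with explicit inputs
gh workflow run "CI/CD Pipeline with CINC Auditor Scanning" \
  -f image_tag=v1.2.3 \
  -f scan_namespace=app-scan \
  -f threshold=80
```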
Dynamic RBAC Scanning
```yaml
name: Dynamic RBAC Pod Scanning
on:
workflow_dispatch:
inputs:
target_image:
description: 'Target container image to scan'
required: true
default: 'busybox:latest'
scan_label:
description: 'Label to use for scanning'
required: true
default: 'scan-target=true'
cinc_profile:
description: 'CINC Auditor profile to run'
required: true
default: 'dev-sec/linux-baseline'
threshold:
description: 'Minimum passing score (0-100)'
required: true
default: '70'
jobs:
dynamic-scan:
name: Dynamic RBAC Pod Scanning
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup minikube
id: minikube
uses: medyagh/setup-minikube@master
with:
driver: docker
start-args: --nodes=2
- name: Set up CINC Auditor environment
run: |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Install SAF-CLI
npm install -g @mitre/saf
# Verify installation
cinc-auditor --version
saf --version
- name: Create test infrastructure
run: |
# Extract label key and value
LABEL_KEY=$(echo "${{ github.event.inputs.scan_label }}" | cut -d= -f1)
LABEL_VALUE=$(echo "${{ github.event.inputs.scan_label }}" | cut -d= -f2)
# Create test namespace
kubectl create namespace dynamic-scan
# Create a unique identifier for this run
RUN_ID="run-$(date +%s)"
echo "RUN_ID=${RUN_ID}" >> $GITHUB_ENV
# Create multiple test pods with different images and labels
for i in {1..3}; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-${i}-${RUN_ID}
namespace: dynamic-scan
labels:
app: test-pod-${i}
${LABEL_KEY}: "$([ "$i" -eq 1 ] && echo "${LABEL_VALUE}" || echo "false")"
spec:
containers:
- name: container
image: ${{ github.event.inputs.target_image }}
command: ["sleep", "infinity"]
EOF
done
# Wait for pods to be running
kubectl wait --for=condition=ready pod -l app=test-pod-1 -n dynamic-scan --timeout=120s
# Get the name of the pod with our scan label
TARGET_POD=$(kubectl get pods -n dynamic-scan -l ${LABEL_KEY}=${LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$TARGET_POD" ]; then
echo "Error: No pod found with label ${LABEL_KEY}=${LABEL_VALUE}"
exit 1
fi
echo "TARGET_POD=${TARGET_POD}" >> $GITHUB_ENV
# Show all pods in the namespace
kubectl get pods -n dynamic-scan --show-labels
- name: Set up label-based RBAC
run: |
# Extract label for RBAC
LABEL_SELECTOR="${{ github.event.inputs.scan_label }}"
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${RUN_ID}
namespace: dynamic-scan
EOF
# Create role restricted to the pod resolved from the scan label
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${RUN_ID}
namespace: dynamic-scan
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${RUN_ID}
namespace: dynamic-scan
subjects:
- kind: ServiceAccount
name: scanner-sa-${RUN_ID}
namespace: dynamic-scan
roleRef:
kind: Role
name: scanner-role-${RUN_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- name: Run scan on labeled pod
run: |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${RUN_ID} -n dynamic-scan --duration=15m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Create kubeconfig
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${SERVER}
certificate-authority-data: ${CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: dynamic-scan
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${TOKEN}
EOF
chmod 600 scan-kubeconfig.yaml
# Find the target pod by label
LABEL_SELECTOR="${{ github.event.inputs.scan_label }}"
echo "Looking for pods with label: ${LABEL_SELECTOR}"
TARGET_POD=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n dynamic-scan -l ${LABEL_SELECTOR} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$TARGET_POD" ]; then
echo "Error: No pod found with label ${LABEL_SELECTOR} using restricted access"
exit 1
fi
echo "Found target pod: ${TARGET_POD}"
# Get container name
CONTAINER_NAME=$(kubectl get pod ${TARGET_POD} -n dynamic-scan -o jsonpath='{.spec.containers[0].name}')
echo "Container name: ${CONTAINER_NAME}"
# Test access to pod
echo "Testing pod access with restricted token:"
KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n dynamic-scan
# Run CINC Auditor scan; capture the exit code without tripping bash -e
echo "Running CINC Auditor scan on dynamic-scan/${TARGET_POD}/${CONTAINER_NAME}"
SCAN_EXIT_CODE=0
KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ${{ github.event.inputs.cinc_profile }} \
-t k8s-container://dynamic-scan/${TARGET_POD}/${CONTAINER_NAME} \
--reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"
# Process results with SAF-CLI
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Add to GitHub step summary
echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
cat scan-summary.md >> $GITHUB_STEP_SUMMARY
# Create a proper threshold file
cat > threshold.yml << EOF
compliance:
min: ${{ github.event.inputs.threshold }}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
THRESHOLD_EXIT_CODE=0
saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?
if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
else
echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_EXIT_CODE
fi
- name: Verify RBAC restrictions
run: |
# Generate token for scanning
TOKEN=$(kubectl create token scanner-sa-${RUN_ID} -n dynamic-scan --duration=5m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Create kubeconfig
cat > test-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${SERVER}
certificate-authority-data: ${CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: dynamic-scan
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${TOKEN}
EOF
echo "## RBAC Security Verification" >> $GITHUB_STEP_SUMMARY
# Check what we CAN do
echo "Verifying what we CAN do with restricted RBAC:" | tee -a $GITHUB_STEP_SUMMARY
echo "Can list pods:" | tee -a $GITHUB_STEP_SUMMARY
KUBECONFIG=test-kubeconfig.yaml kubectl get pods -n dynamic-scan > /dev/null &&
echo "✅ Can list pods" | tee -a $GITHUB_STEP_SUMMARY ||
echo "❌ Cannot list pods" | tee -a $GITHUB_STEP_SUMMARY
echo "Can exec into labeled pod:" | tee -a $GITHUB_STEP_SUMMARY
KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods/${TARGET_POD} --subresource=exec -n dynamic-scan &&
echo "✅ Can exec into target pod" | tee -a $GITHUB_STEP_SUMMARY ||
echo "❌ Cannot exec into target pod" | tee -a $GITHUB_STEP_SUMMARY
# Check what we CANNOT do
echo "Verifying what we CANNOT do with restricted RBAC:" | tee -a $GITHUB_STEP_SUMMARY
echo "Cannot create pods:" | tee -a $GITHUB_STEP_SUMMARY
KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods -n dynamic-scan &&
echo "❌ Security issue: Can create pods" | tee -a $GITHUB_STEP_SUMMARY ||
echo "✅ Cannot create pods (expected)" | tee -a $GITHUB_STEP_SUMMARY
echo "Cannot delete pods:" | tee -a $GITHUB_STEP_SUMMARY
KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i delete pods -n dynamic-scan &&
echo "❌ Security issue: Can delete pods" | tee -a $GITHUB_STEP_SUMMARY ||
echo "✅ Cannot delete pods (expected)" | tee -a $GITHUB_STEP_SUMMARY
# For non-labeled pods, we should be able to list them but not exec into them
OTHER_POD=$(kubectl get pods -n dynamic-scan -l app=test-pod-2 -o jsonpath='{.items[0].metadata.name}')
echo "Cannot exec into non-labeled pod:" | tee -a $GITHUB_STEP_SUMMARY
KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods/${OTHER_POD} --subresource=exec -n dynamic-scan &&
echo "❌ Security issue: Can exec into non-target pod" | tee -a $GITHUB_STEP_SUMMARY ||
echo "✅ Cannot exec into non-target pod (expected)" | tee -a $GITHUB_STEP_SUMMARY
- name: Upload CINC results
uses: actions/upload-artifact@v4
with:
name: cinc-scan-results
path: |
scan-results.json
scan-summary.md
- name: Cleanup
if: always()
run: |
kubectl delete namespace dynamic-scan
```
This workflow implements:
- Label-based pod selection for targeted scanning
- Least-privilege RBAC model
- Dynamic service account and token creation
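Since targeting is purely label-driven, pods can be opted in or out of scanning without touching the workflow. A sketch, assuming the default scan-target=true label:
```bash
# Opt an existing pod into scanning
kubectl label pod my-app-pod -n dynamic-scan scan-target=true

# Preview what the scanner's selector will match
kubectl get pods -n dynamic-scan -l scan-target=true

# Opt the pod back out by removing the label
kubectl label pod my-app-pod -n dynamic-scan scan-target-
```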
Setup and Scan
```yaml
name: Setup Minikube and Run CINC Auditor Scan
on:
workflow_dispatch:
inputs:
minikube_version:
description: 'Minikube version to use'
required: true
default: 'v1.32.0'
kubernetes_version:
description: 'Kubernetes version to use'
required: true
default: 'v1.28.3'
cinc_profile:
description: 'CINC Auditor profile to run'
required: true
default: 'dev-sec/linux-baseline'
threshold:
description: 'Minimum passing score (0-100)'
required: true
default: '70'
jobs:
setup-and-scan:
name: Setup minikube and run CINC Auditor scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup minikube
id: minikube
uses: medyagh/setup-minikube@master
with:
minikube-version: ${{ github.event.inputs.minikube_version }}
kubernetes-version: ${{ github.event.inputs.kubernetes_version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
driver: docker
start-args: --nodes=2
- name: Get cluster status
run: |
kubectl get nodes
minikube status
- name: Set up CINC Auditor environment
run: |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Install SAF-CLI for result processing
npm install -g @mitre/saf
# Verify installation
cinc-auditor --version
cinc-auditor plugin list
saf --version
- name: Create namespace and test pod
run: |
# Create namespace
kubectl create namespace inspec-test
# Create test pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: inspec-target
namespace: inspec-test
labels:
app: inspec-target
scan-target: "true"
spec:
containers:
- name: busybox
image: busybox:latest
command: ["sleep", "infinity"]
EOF
# Wait for pod to be running
kubectl wait --for=condition=ready pod/inspec-target -n inspec-test --timeout=120s
# Verify pod is running
kubectl get pods -n inspec-test
- name: Set up RBAC configuration
run: |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: inspec-scanner
namespace: inspec-test
EOF
# Create role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: inspec-container-role
namespace: inspec-test
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["inspec-target"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["inspec-target"]
EOF
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: inspec-container-rolebinding
namespace: inspec-test
subjects:
- kind: ServiceAccount
name: inspec-scanner
namespace: inspec-test
roleRef:
kind: Role
name: inspec-container-role
apiGroup: rbac.authorization.k8s.io
EOF
# Verify RBAC setup
kubectl get serviceaccount,role,rolebinding -n inspec-test
- name: Generate restricted kubeconfig
run: |
# Get token
TOKEN=$(kubectl create token inspec-scanner -n inspec-test --duration=15m)
# Get cluster information
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Create kubeconfig
cat > restricted-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${SERVER}
certificate-authority-data: ${CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: inspec-test
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${TOKEN}
EOF
# Set proper permissions
chmod 600 restricted-kubeconfig.yaml
# Test the kubeconfig
KUBECONFIG=restricted-kubeconfig.yaml kubectl get pods -n inspec-test
- name: Run CINC Auditor scan with restricted access
run: |
# Download CINC profile
if [[ "${{ github.event.inputs.cinc_profile }}" == http* ]]; then
# If it's a URL, use it directly
PROFILE="${{ github.event.inputs.cinc_profile }}"
elif [[ "${{ github.event.inputs.cinc_profile }}" == */* ]]; then
# If it's a profile from Chef Supermarket (e.g., dev-sec/linux-baseline)
PROFILE="${{ github.event.inputs.cinc_profile }}"
else
# If it's a local path
PROFILE="./${{ github.event.inputs.cinc_profile }}"
fi
# Run CINC Auditor with the train-k8s-container transport;
# capture the exit code without tripping bash -e
CINC_EXIT_CODE=0
KUBECONFIG=restricted-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
-t k8s-container://inspec-test/inspec-target/busybox \
--reporter cli json:cinc-results.json || CINC_EXIT_CODE=$?
echo "CINC Auditor scan completed with exit code: ${CINC_EXIT_CODE}"
- name: Process results with SAF-CLI
run: |
# Generate summary report with SAF-CLI
echo "Generating scan summary with SAF-CLI:"
saf summary --input cinc-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Add to GitHub step summary
echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
cat scan-summary.md >> $GITHUB_STEP_SUMMARY
# Create a proper threshold file
cat > threshold.yml << EOF
compliance:
min: ${{ github.event.inputs.threshold }}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
THRESHOLD_EXIT_CODE=0
saf threshold -i cinc-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?
if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
else
echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_EXIT_CODE
fi
- name: Upload CINC Auditor results
if: always()
uses: actions/upload-artifact@v4
with:
name: cinc-results
path: |
cinc-results.json
scan-summary.md
- name: Cleanup resources
if: always()
run: |
kubectl delete namespace inspec-test
```
This workflow implements:
- Creation of a disposable minikube cluster and test pod
- Least-privilege RBAC scoped to the target pod via resourceNames
- Short-lived (15-minute) scanner tokens and a restricted kubeconfig
- SAF-CLI summary generation and threshold checking
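When troubleshooting this workflow, it helps to enumerate exactly what the restricted token can do:
```bash
# List every permission the scanner token holds in the test namespace;
# expect only get/list on pods plus exec and log scoped to inspec-target
KUBECONFIG=restricted-kubeconfig.yaml kubectl auth can-i --list -n inspec-test
```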
GitLab CI Implementation
Standard Pipeline
```yaml
stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
deploy_container:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: scan-target-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: target-app
pipeline: "${CI_PIPELINE_ID}"
spec:
containers:
- name: target
image: registry.example.com/my-image:latest
command: ["sleep", "1h"]
EOF
- |
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/scan-target-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --timeout=120s
- |
# Save target info for later stages
echo "TARGET_POD=scan-target-${CI_PIPELINE_ID}" >> deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env
artifacts:
reports:
dotenv: deploy.env
create_access:
stage: scan
needs: [deploy_container]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the role for this specific pod
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
- |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
EOF
- |
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${CI_PIPELINE_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --duration=30m)
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
# Save cluster info
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_scan:
stage: scan
needs: [deploy_container, create_access]
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Install SAF CLI
npm install -g @mitre/saf
# Run cinc-auditor scan; control failures must not abort the job
SCAN_EXIT_CODE=0
KUBECONFIG=scan-kubeconfig.yaml \
cinc-auditor exec ${CINC_PROFILE_PATH} \
-t k8s-container://${SCANNER_NAMESPACE}/${TARGET_POD}/${TARGET_CONTAINER} \
--reporter json:scan-results.json || SCAN_EXIT_CODE=$?
# Generate scan summary using SAF CLI
saf summary --input scan-results.json --output-md scan-summary.md
# Display summary in job output
cat scan-summary.md
# Check scan against a threshold file built from THRESHOLD_VALUE
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0 # No critical failures allowed
EOF
THRESHOLD_RESULT=0
saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_RESULT=$?
# Save result for later stages
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [run_scan]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [run_scan]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${TARGET_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete role/scanner-role-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete sa/scanner-sa-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete rolebinding/scanner-binding-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --ignore-not-found
```
This pipeline implements:
- Standard Kubernetes API approach
- Four-stage pipeline (deploy, scan, report, cleanup)
- SAF-CLI integration for report generation
- Threshold-based quality gates
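The pipeline expects a KUBE_CONFIG CI/CD variable holding a base64-encoded kubeconfig, which each job decodes with base64 -d. A sketch for producing that value (-w0 assumes GNU coreutils; on macOS, pipe through tr -d '\n' instead):
```bash
# Generate the value to paste into Settings > CI/CD > Variables > KUBE_CONFIG
base64 -w0 ~/.kube/config
```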
Dynamic RBAC Scanning
```yaml
stages:
- deploy
- scan
- verify
- cleanup
variables:
KUBERNETES_NAMESPACE: "dynamic-scan-$CI_PIPELINE_ID"
TARGET_IMAGE: "busybox:latest"
SCAN_LABEL_KEY: "scan-target"
SCAN_LABEL_VALUE: "true"
CINC_PROFILE: "dev-sec/linux-baseline"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
DURATION_MINUTES: "15" # Token duration in minutes
# Allow overriding variables through pipeline triggers or UI
.dynamic_variables: &dynamic_variables
TARGET_IMAGE: ${TARGET_IMAGE}
SCAN_LABEL_KEY: ${SCAN_LABEL_KEY}
SCAN_LABEL_VALUE: ${SCAN_LABEL_VALUE}
CINC_PROFILE: ${CINC_PROFILE}
THRESHOLD_VALUE: ${THRESHOLD_VALUE}
ADDITIONAL_PROFILE_ANNOTATION: "${ADDITIONAL_PROFILE_ANNOTATION}" # Optional annotation for specifying additional profiles
setup_test_environment:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create test namespace
- kubectl create namespace ${KUBERNETES_NAMESPACE}
# Create multiple test pods with different images and labels
- |
# Create 3 pods, but only mark the first one for scanning
for i in {1..3}; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-${i}
namespace: ${KUBERNETES_NAMESPACE}
labels:
app: test-pod-${i}
${SCAN_LABEL_KEY}: "$([ $i -eq 1 ] && echo "${SCAN_LABEL_VALUE}" || echo "false")"
annotations:
scan-profile: "${CINC_PROFILE}"
$([ -n "${ADDITIONAL_PROFILE_ANNOTATION}" ] && echo "${ADDITIONAL_PROFILE_ANNOTATION}" || echo "")
spec:
containers:
- name: container
image: ${TARGET_IMAGE}
command: ["sleep", "infinity"]
EOF
done
# Wait for pods to be ready
- kubectl wait --for=condition=ready pod -l app=test-pod-1 -n ${KUBERNETES_NAMESPACE} --timeout=120s
# Get the name of the pod with our scan label
- |
TARGET_POD=$(kubectl get pods -n ${KUBERNETES_NAMESPACE} -l ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$TARGET_POD" ]; then
echo "Error: No pod found with label ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE}"
exit 1
fi
echo "TARGET_POD=${TARGET_POD}" >> deploy.env
# Save scan profile from annotations if available
- |
SCAN_PROFILE=$(kubectl get pod ${TARGET_POD} -n ${KUBERNETES_NAMESPACE} -o jsonpath='{.metadata.annotations.scan-profile}')
if [ -n "$SCAN_PROFILE" ]; then
echo "Found scan profile annotation: ${SCAN_PROFILE}"
echo "SCAN_PROFILE=${SCAN_PROFILE}" >> deploy.env
else
echo "Using default profile: ${CINC_PROFILE}"
echo "SCAN_PROFILE=${CINC_PROFILE}" >> deploy.env
fi
# Show all pods in the namespace
- kubectl get pods -n ${KUBERNETES_NAMESPACE} --show-labels
artifacts:
reports:
dotenv: deploy.env
create_dynamic_rbac:
stage: scan
needs: [setup_test_environment]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create service account
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa
namespace: ${KUBERNETES_NAMESPACE}
EOF
# Create role restricted to the pod resolved from the scan label
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role
namespace: ${KUBERNETES_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
# Create role binding
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding
namespace: ${KUBERNETES_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa
namespace: ${KUBERNETES_NAMESPACE}
roleRef:
kind: Role
name: scanner-role
apiGroup: rbac.authorization.k8s.io
EOF
# Generate token
- |
TOKEN=$(kubectl create token scanner-sa -n ${KUBERNETES_NAMESPACE} --duration=${DURATION_MINUTES}m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Save token and cluster info for later stages
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_security_scan:
stage: scan
needs: [setup_test_environment, create_dynamic_rbac]
script:
# Create kubeconfig with restricted token
- |
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${KUBERNETES_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scan-kubeconfig.yaml
# Install CINC Auditor
- curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
- cinc-auditor plugin install train-k8s-container
# Install SAF CLI
- npm install -g @mitre/saf
# Verify the tools
- cinc-auditor --version
- saf --version
# Find the target pod by label using the restricted token
- |
echo "Looking for pods with label: ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE}"
SCANNED_POD=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n ${KUBERNETES_NAMESPACE} -l ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$SCANNED_POD" ]; then
echo "Error: No pod found with label ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} using restricted access"
exit 1
fi
echo "Found target pod: ${SCANNED_POD}"
# Verify it matches what we expected
if [ "$SCANNED_POD" != "$TARGET_POD" ]; then
echo "Warning: Scanned pod ($SCANNED_POD) doesn't match expected target pod ($TARGET_POD)"
fi
# Get container name
- CONTAINER_NAME=$(kubectl get pod ${TARGET_POD} -n ${KUBERNETES_NAMESPACE} -o jsonpath='{.spec.containers[0].name}')
# Run CINC Auditor scan
- |
echo "Running CINC Auditor scan on ${KUBERNETES_NAMESPACE}/${TARGET_POD}/${CONTAINER_NAME}"
SCAN_EXIT_CODE=0
KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ${SCAN_PROFILE} \
-t k8s-container://${KUBERNETES_NAMESPACE}/${TARGET_POD}/${CONTAINER_NAME} \
--reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
# Save scan exit code
echo "SCAN_EXIT_CODE=${SCAN_EXIT_CODE}" >> scan.env
echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"
# Process results with SAF-CLI
- |
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Create a threshold file
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${THRESHOLD_VALUE}%:"
THRESHOLD_RESULT=0
saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_RESULT=$?
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> scan.env
if [ $THRESHOLD_RESULT -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_RESULT
fi
# Generate comprehensive HTML report
saf view -i scan-results.json --output scan-report.html
artifacts:
paths:
- scan-results.json
- scan-summary.md
- scan-report.html
reports:
dotenv: scan.env
verify_rbac_restrictions:
stage: verify
needs: [setup_test_environment, create_dynamic_rbac, run_security_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create a second kubeconfig with restricted token
- |
cat > verify-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${KUBERNETES_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 verify-kubeconfig.yaml
# Get a non-target pod name
- OTHER_POD=$(kubectl get pods -n ${KUBERNETES_NAMESPACE} -l app=test-pod-2 -o jsonpath='{.items[0].metadata.name}')
# Check what we CAN do
- |
echo "Verifying what we CAN do with restricted RBAC:"
echo "Can list pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl get pods -n ${KUBERNETES_NAMESPACE} > /dev/null &&
echo "✅ Can list pods" ||
echo "❌ Cannot list pods"
echo "Can exec into target pod:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i create pods/${TARGET_POD} --subresource=exec -n ${KUBERNETES_NAMESPACE} &&
echo "✅ Can exec into target pod" ||
echo "❌ Cannot exec into target pod"
# Check what we CANNOT do
- |
echo "Verifying what we CANNOT do with restricted RBAC:"
echo "Cannot create pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i create pods -n ${KUBERNETES_NAMESPACE} &&
echo "❌ Security issue: Can create pods" ||
echo "✅ Cannot create pods (expected)"
echo "Cannot delete pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i delete pods -n ${KUBERNETES_NAMESPACE} &&
echo "❌ Security issue: Can delete pods" ||
echo "✅ Cannot delete pods (expected)"
# Create a security report for MR
- |
cat > security-report.md << EOF
# Container Security Scan Report
## Scan Results
$(cat scan-summary.md)
## Threshold Check
$([[ "${THRESHOLD_RESULT}" -eq 0 ]] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## RBAC Security Verification
The scanner service account has properly restricted access:
- ✅ Can list pods in the namespace
- ✅ Can exec into target pods for scanning
- ✅ Cannot create or delete pods
- ✅ Cannot access cluster-wide resources
## Scan Details
- Target Pod: \`${TARGET_POD}\`
- Container: \`${CONTAINER_NAME}\`
- Image: \`${TARGET_IMAGE}\`
- Profile: \`${SCAN_PROFILE}\`
For full results, see the scan artifacts.
EOF
artifacts:
paths:
- security-report.md
cleanup:
stage: cleanup
needs: [setup_test_environment]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- kubectl delete namespace ${KUBERNETES_NAMESPACE} --ignore-not-found
```
This pipeline implements:
- Label-based pod targeting
- Restricted RBAC permissions
- Time-bound access credentials
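The variables block is meant to be overridden per run. A sketch of an API-triggered pipeline with overrides, assuming a pipeline trigger token and a placeholder project URL:
```bash
# Trigger the pipeline and override scan parameters for this run
curl -X POST \
  -F "token=${TRIGGER_TOKEN}" \
  -F "ref=main" \
  -F "variables[TARGET_IMAGE]=nginx:stable" \
  -F "variables[THRESHOLD_VALUE]=80" \
  "https://gitlab.example.com/api/v4/projects/<project-id>/trigger/pipeline"
```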
GitLab Services Variant
```yaml
stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
# Define a custom service image for CINC Auditor
services:
- name: registry.example.com/cinc-auditor-scanner:latest
alias: cinc-scanner
entrypoint: ["sleep", "infinity"]
deploy_container:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: scan-target-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: target-app
pipeline: "${CI_PIPELINE_ID}"
spec:
containers:
- name: target
image: registry.example.com/my-image:latest
command: ["sleep", "1h"]
EOF
- |
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/scan-target-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --timeout=120s
- |
# Save target info for later stages
echo "TARGET_POD=scan-target-${CI_PIPELINE_ID}" >> deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env
artifacts:
reports:
dotenv: deploy.env
create_access:
stage: scan
needs: [deploy_container]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the role for this specific pod
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
- |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
EOF
- |
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${CI_PIPELINE_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --duration=30m)
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
# Save cluster info
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_scan:
stage: scan
needs: [deploy_container, create_access]
# This job uses the cinc-scanner service container
# The service container already has CINC Auditor and the SAF CLI installed
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to service container
docker cp scan-kubeconfig.yaml cinc-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} cinc-scanner:/tmp/profile
# Run scan in service container
docker exec cinc-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
cinc-auditor exec /tmp/profile \
-t k8s-container://${SCANNER_NAMESPACE}/${TARGET_POD}/${TARGET_CONTAINER} \
--reporter json:/tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Check scan against a threshold file derived from THRESHOLD_VALUE
printf 'compliance:\n  min: %s\nfailed:\n  critical:\n    max: 0\n' ${THRESHOLD_VALUE} > /tmp/threshold.yml
saf threshold -i /tmp/scan-results.json -t /tmp/threshold.yml
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp cinc-scanner:/tmp/scan-results.json ./scan-results.json
docker cp cinc-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp cinc-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
# For distroless containers, we need a specialized approach
run_distroless_scan:
stage: scan
needs: [deploy_container, create_access]
# This job will only run if the DISTROLESS variable is set to "true"
rules:
- if: $DISTROLESS == "true"
# Use our specialized distroless scanner service container
services:
- name: registry.example.com/distroless-scanner:latest
alias: distroless-scanner
entrypoint: ["sleep", "infinity"]
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to distroless scanner service container
docker cp scan-kubeconfig.yaml distroless-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} distroless-scanner:/tmp/profile
# Run specialized distroless scan in service container
docker exec distroless-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
/opt/scripts/scan-distroless.sh \
${SCANNER_NAMESPACE} ${TARGET_POD} ${TARGET_CONTAINER} \
/tmp/profile /tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Check scan against a threshold file derived from THRESHOLD_VALUE
printf 'compliance:\n  min: %s\nfailed:\n  critical:\n    max: 0\n' ${THRESHOLD_VALUE} > /tmp/threshold.yml
saf threshold -i /tmp/scan-results.json -t /tmp/threshold.yml
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp distroless-scanner:/tmp/scan-results.json ./scan-results.json
docker cp distroless-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp distroless-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [run_scan]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [run_scan]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${TARGET_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete role/scanner-role-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete sa/scanner-sa-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete rolebinding/scanner-binding-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --ignore-not-found
```
This pipeline uses GitLab services to provide:
- Pre-configured scanning environment
- Separation of scanning tools from main job
- Reduced pipeline setup time
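The registry.example.com/cinc-auditor-scanner:latest service image is assumed to exist. A minimal sketch of building it, reusing the same install commands the other jobs run inline (the base image choice is an assumption):
```bash
# Build and publish the scanner service image referenced by the pipeline
cat > Dockerfile << 'EOF'
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl nodejs npm \
    && curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor \
    && cinc-auditor plugin install train-k8s-container \
    && npm install -g @mitre/saf \
    && rm -rf /var/lib/apt/lists/*
# Keep the service container alive so jobs can docker exec into it
CMD ["sleep", "infinity"]
EOF
docker build -t registry.example.com/cinc-auditor-scanner:latest .
docker push registry.example.com/cinc-auditor-scanner:latest
```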
Debug Container Approach
The Debug Container Approach is our interim solution for scanning distroless containers while we complete full distroless support in the Kubernetes API Approach.
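The heart of the approach is kubectl debug: attach an ephemeral container that shares the target container's process namespace, then inspect the distroless filesystem through /proc/1/root. A minimal sketch, assuming ephemeral containers are enabled and the caller may patch pods/ephemeralcontainers (the image and names are illustrative):
```bash
# Attach an ephemeral debug container alongside the distroless target
kubectl debug -n scan-ns target-pod \
  --image=busybox:1.36 \
  --target=app \
  --container=scan-debug -- sleep 600

# With the shared process namespace, the target's root filesystem is
# reachable from the debug container at /proc/1/root
kubectl exec -n scan-ns target-pod -c scan-debug -- ls /proc/1/root/

# Scan via the debug container; file-based controls can reference
# paths under /proc/1/root
cinc-auditor exec dev-sec/linux-baseline \
  -t k8s-container://scan-ns/target-pod/scan-debug \
  --reporter cli json:scan-results.json
```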
GitHub Actions Implementation
Setup and Scan with Debug Containers
```yaml
name: Setup Minikube and Run CINC Auditor Scan
on:
workflow_dispatch:
inputs:
minikube_version:
description: 'Minikube version to use'
required: true
default: 'v1.32.0'
kubernetes_version:
description: 'Kubernetes version to use'
required: true
default: 'v1.28.3'
cinc_profile:
description: 'CINC Auditor profile to run'
required: true
default: 'dev-sec/linux-baseline'
threshold:
description: 'Minimum passing score (0-100)'
required: true
default: '70'
jobs:
setup-and-scan:
name: Setup minikube and run CINC Auditor scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup minikube
id: minikube
uses: medyagh/setup-minikube@master
with:
minikube-version: ${{ github.event.inputs.minikube_version }}
kubernetes-version: ${{ github.event.inputs.kubernetes_version }}
github-token: ${{ secrets.GITHUB_TOKEN }}
driver: docker
start-args: --nodes=2
- name: Get cluster status
run: |
kubectl get nodes
minikube status
- name: Set up CINC Auditor environment
run: |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Install SAF-CLI for result processing
npm install -g @mitre/saf
# Verify installation
cinc-auditor --version
cinc-auditor plugin list
saf --version
- name: Create namespace and test pod
run: |
# Create namespace
kubectl create namespace inspec-test
# Create test pod
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: inspec-target
namespace: inspec-test
labels:
app: inspec-target
scan-target: "true"
spec:
containers:
- name: busybox
image: busybox:latest
command: ["sleep", "infinity"]
EOF
# Wait for pod to be running
kubectl wait --for=condition=ready pod/inspec-target -n inspec-test --timeout=120s
# Verify pod is running
kubectl get pods -n inspec-test
- name: Set up RBAC configuration
run: |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: inspec-scanner
namespace: inspec-test
EOF
# Create role
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: inspec-container-role
namespace: inspec-test
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["inspec-target"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["inspec-target"]
EOF
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: inspec-container-rolebinding
namespace: inspec-test
subjects:
- kind: ServiceAccount
name: inspec-scanner
namespace: inspec-test
roleRef:
kind: Role
name: inspec-container-role
apiGroup: rbac.authorization.k8s.io
EOF
# Verify RBAC setup
kubectl get serviceaccount,role,rolebinding -n inspec-test
- name: Generate restricted kubeconfig
run: |
# Get token
TOKEN=$(kubectl create token inspec-scanner -n inspec-test --duration=15m)
# Get cluster information
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Create kubeconfig
cat > restricted-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${SERVER}
certificate-authority-data: ${CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: inspec-test
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${TOKEN}
EOF
# Set proper permissions
chmod 600 restricted-kubeconfig.yaml
# Test the kubeconfig
KUBECONFIG=restricted-kubeconfig.yaml kubectl get pods -n inspec-test
- name: Run CINC Auditor scan with restricted access
run: |
# Download CINC profile
if [[ "${{ github.event.inputs.cinc_profile }}" == http* ]]; then
# If it's a URL, use it directly
PROFILE="${{ github.event.inputs.cinc_profile }}"
elif [[ "${{ github.event.inputs.cinc_profile }}" == */* ]]; then
# If it's a profile from Chef Supermarket (e.g., dev-sec/linux-baseline)
PROFILE="${{ github.event.inputs.cinc_profile }}"
else
# If it's a local path
PROFILE="./${{ github.event.inputs.cinc_profile }}"
fi
# Run CINC Auditor with the train-k8s-container transport;
# capture the exit code without tripping bash -e
CINC_EXIT_CODE=0
KUBECONFIG=restricted-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
-t k8s-container://inspec-test/inspec-target/busybox \
--reporter cli json:cinc-results.json || CINC_EXIT_CODE=$?
echo "CINC Auditor scan completed with exit code: ${CINC_EXIT_CODE}"
- name: Process results with SAF-CLI
run: |
# Generate summary report with SAF-CLI
echo "Generating scan summary with SAF-CLI:"
saf summary --input cinc-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Add to GitHub step summary
echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
cat scan-summary.md >> $GITHUB_STEP_SUMMARY
# Create a proper threshold file
cat > threshold.yml << EOF
compliance:
min: ${{ github.event.inputs.threshold }}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
THRESHOLD_EXIT_CODE=0
saf threshold -i cinc-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?
if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
else
echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_EXIT_CODE
fi
- name: Upload CINC Auditor results
if: always()
uses: actions/upload-artifact@v4
with:
name: cinc-results
path: |
cinc-results.json
scan-summary.md
- name: Cleanup resources
if: always()
run: |
kubectl delete namespace inspec-test
```
This workflow implements:
- Setup of a minikube cluster for testing
- Deployment of a test pod with restricted, resource-scoped RBAC
- CINC Auditor scanning over the Kubernetes API
- SAF-CLI summary and threshold processing
Note that the workflow above targets a standard container; for a distroless target, the scan step is replaced with the ephemeral debug container sequence sketched below.
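For a distroless target, the scan step would change along these lines (a sketch; the debug image, container names, and the use of the admin kubeconfig to create the ephemeral container are assumptions):
```bash
# Create the ephemeral debug container with the cluster-admin kubeconfig,
# since the scanner SA is not granted patch on pods/ephemeralcontainers
kubectl debug -n inspec-test inspec-target \
  --image=busybox:1.36 --target=busybox \
  --container=scan-debug -- sleep 600

# Scan through the debug container with the restricted kubeconfig; the
# existing exec permission on inspec-target covers all its containers
KUBECONFIG=restricted-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
  -t k8s-container://inspec-test/inspec-target/scan-debug \
  --reporter cli json:cinc-results.json
```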
GitLab CI Implementation
Existing Cluster with Debug Containers
```yaml
stages:
- prepare
- scan
- verify
- cleanup
variables:
# Default values - override in UI or with pipeline parameters
SCAN_NAMESPACE: "default" # Existing namespace where pods are deployed
TARGET_LABEL_SELECTOR: "scan-target=true" # Label to identify target pods
CINC_PROFILE: "dev-sec/linux-baseline"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
DURATION_MINUTES: "15" # Token duration in minutes
# Define workflow
workflow:
rules:
- if: $CI_PIPELINE_SOURCE == "web" # Manual trigger from UI
- if: $CI_PIPELINE_SOURCE == "schedule" # Scheduled pipeline
- if: $CI_PIPELINE_SOURCE == "trigger" # API trigger with token
# Find pods to scan in existing cluster
prepare_scan:
stage: prepare
image: bitnami/kubectl:latest
script:
# Configure kubectl with cluster credentials
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create a unique run ID for this pipeline
- RUN_ID="gl-$CI_PIPELINE_ID-$CI_JOB_ID"
- echo "RUN_ID=${RUN_ID}" >> prepare.env
# Verify the namespace exists
- kubectl get namespace ${SCAN_NAMESPACE} || { echo "Namespace ${SCAN_NAMESPACE} does not exist"; exit 1; }
# Find target pods with specified label
- |
TARGET_PODS=$(kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR} -o jsonpath='{.items[*].metadata.name}')
if [ -z "$TARGET_PODS" ]; then
echo "No pods found matching label: ${TARGET_LABEL_SELECTOR} in namespace ${SCAN_NAMESPACE}"
exit 1
fi
# Count and list found pods
POD_COUNT=$(echo $TARGET_PODS | wc -w)
echo "Found ${POD_COUNT} pods to scan:"
kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR} --show-labels
# Get the first pod as primary target
PRIMARY_POD=$(echo $TARGET_PODS | cut -d' ' -f1)
echo "Primary target pod: ${PRIMARY_POD}"
echo "PRIMARY_POD=${PRIMARY_POD}" >> prepare.env
# Get container name for the primary pod
PRIMARY_CONTAINER=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.spec.containers[0].name}')
echo "Primary container: ${PRIMARY_CONTAINER}"
echo "PRIMARY_CONTAINER=${PRIMARY_CONTAINER}" >> prepare.env
# Check for custom profile annotation
PROFILE_ANNOTATION=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.metadata.annotations.scan-profile}' 2>/dev/null || echo "")
if [ -n "$PROFILE_ANNOTATION" ]; then
echo "Found profile annotation: ${PROFILE_ANNOTATION}"
echo "PROFILE=${PROFILE_ANNOTATION}" >> prepare.env
else
echo "Using default profile: ${CINC_PROFILE}"
echo "PROFILE=${CINC_PROFILE}" >> prepare.env
fi
artifacts:
reports:
dotenv: prepare.env
# Create temporary RBAC for scanning
create_rbac:
stage: prepare
image: bitnami/kubectl:latest
needs: [prepare_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create service account for scanning
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
EOF
# Create role with least privilege
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${PRIMARY_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${PRIMARY_POD}"]
EOF
# Create role binding
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
subjects:
- kind: ServiceAccount
name: scanner-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${RUN_ID}
apiGroup: rbac.authorization.k8s.io
EOF
# Generate token for service account
- |
TOKEN=$(kubectl create token scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --duration=${DURATION_MINUTES}m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Save token and cluster info for later stages
echo "SCANNER_TOKEN=${TOKEN}" >> rbac.env
echo "CLUSTER_SERVER=${SERVER}" >> rbac.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> rbac.env
artifacts:
reports:
dotenv: rbac.env
# Run the security scan with restricted access
run_security_scan:
stage: scan
image: registry.gitlab.com/gitlab-org/security-products/analyzers/container-scanning:5
needs: [prepare_scan, create_rbac]
script:
# Create restricted kubeconfig
- |
cat > scanner-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCAN_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scanner-kubeconfig.yaml
# Install CINC Auditor and plugins
- curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor
- cinc-auditor plugin install train-k8s-container
# Install SAF CLI
- apt-get update && apt-get install -y npm
- npm install -g @mitre/saf
# Test restricted access
- |
echo "Testing restricted access:"
export KUBECONFIG=scanner-kubeconfig.yaml
kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR}
echo "Verifying target pod access:"
kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o name || { echo "Cannot access target pod with restricted token"; exit 1; }
# Run the scan
- |
echo "Running CINC Auditor scan on ${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER}"
SCAN_EXIT_CODE=0
KUBECONFIG=scanner-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
-t k8s-container://${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER} \
--reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
# Save scan exit code (captured via || because GitLab runs scripts with "set -e")
echo "SCAN_EXIT_CODE=${SCAN_EXIT_CODE}" >> scan.env
echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"
# Process results with SAF-CLI
- |
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Create a threshold file
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${THRESHOLD_VALUE}%:"
THRESHOLD_RESULT=0
saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_RESULT=$?
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> scan.env
if [ $THRESHOLD_RESULT -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_RESULT
fi
# Generate HTML report
saf view -i scan-results.json --output scan-report.html
artifacts:
paths:
- scan-results.json
- scan-summary.md
- scan-report.html
- threshold.yml
reports:
dotenv: scan.env
# Verify RBAC permissions are properly restricted
verify_rbac:
stage: verify
image: bitnami/kubectl:latest
needs: [prepare_scan, create_rbac, run_security_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create restricted kubeconfig for testing
- |
cat > scanner-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCAN_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scanner-kubeconfig.yaml
# Check what we CAN do
- |
echo "Verifying what we CAN do with restricted RBAC:"
echo "Can list pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl get pods -n ${SCAN_NAMESPACE} > /dev/null &&
echo "✅ Can list pods" ||
echo "❌ Cannot list pods"
echo "Can exec into target pod:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods/${PRIMARY_POD} --subresource=exec -n ${SCAN_NAMESPACE} &&
echo "✅ Can exec into target pod" ||
echo "❌ Cannot exec into target pod"
# Check what we CANNOT do
- |
echo "Verifying what we CANNOT do with restricted RBAC:"
echo "Cannot create pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods -n ${SCAN_NAMESPACE} &&
echo "❌ Security issue: Can create pods" ||
echo "✅ Cannot create pods (expected)"
echo "Cannot delete pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i delete pods -n ${SCAN_NAMESPACE} &&
echo "❌ Security issue: Can delete pods" ||
echo "✅ Cannot delete pods (expected)"
# Find non-target pod for testing
OTHER_POD=$(kubectl get pods -n ${SCAN_NAMESPACE} -l 'scan-target!=true' -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || echo "")
if [ -n "$OTHER_POD" ] && [ "$OTHER_POD" != "$PRIMARY_POD" ]; then
echo "Cannot exec into non-target pod:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods/${OTHER_POD} --subresource=exec -n ${SCAN_NAMESPACE} &&
echo "❌ Security issue: Can exec into non-target pod" ||
echo "✅ Cannot exec into non-target pod (expected)"
fi
# Create security report
- |
cat > security-report.md << EOF
# Container Security Scan Report
## Scan Details
- **Pipeline:** ${CI_PIPELINE_ID}
- **Target Namespace:** ${SCAN_NAMESPACE}
- **Target Pod:** ${PRIMARY_POD}
- **Target Container:** ${PRIMARY_CONTAINER}
- **CINC Profile:** ${PROFILE}
- **Compliance Threshold:** ${THRESHOLD_VALUE}%
## RBAC Security Verification
The scanner service account has properly restricted access:
- ✅ Can list pods in the namespace
- ✅ Can exec into target pods for scanning
- ✅ Cannot create or delete pods
- ✅ Cannot exec into non-target pods
- ✅ Cannot access cluster-wide resources
## Scan Results
$([[ "${THRESHOLD_RESULT}" -eq 0 ]] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
See scan artifacts for detailed compliance results.
EOF
artifacts:
paths:
- security-report.md
# Always clean up RBAC resources
cleanup_rbac:
stage: cleanup
image: bitnami/kubectl:latest
needs: [prepare_scan]
when: always
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Delete role binding
- kubectl delete rolebinding scanner-binding-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
# Delete role
- kubectl delete role scanner-role-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
# Delete service account
- kubectl delete serviceaccount scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
- echo "RBAC resources cleaned up"
|
This pipeline implements:
- Scanning of pods in an existing cluster with temporary, least-privilege RBAC
- Flexible profile selection via pod annotations
- Post-scan verification that the scanner's permissions remain properly restricted
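Because the workflow rules accept web, schedule, and trigger sources, the pipeline can also be launched remotely with variable overrides. A sketch using the GitLab pipeline trigger API; the token and project ID are placeholders you would supply:
| # Trigger the scan pipeline via the GitLab API (placeholder token and project ID)
curl -X POST \
--form "token=<TRIGGER_TOKEN>" \
--form "ref=main" \
--form "variables[SCAN_NAMESPACE]=production" \
--form "variables[TARGET_LABEL_SELECTOR]=scan-target=true" \
--form "variables[THRESHOLD_VALUE]=80" \
"https://gitlab.example.com/api/v4/projects/<PROJECT_ID>/trigger/pipeline"
|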
GitLab Services with Debug Containers
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
# Define a custom service image for CINC Auditor
services:
- name: registry.example.com/cinc-auditor-scanner:latest
alias: cinc-scanner
entrypoint: ["sleep", "infinity"]
deploy_container:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: scan-target-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: target-app
pipeline: "${CI_PIPELINE_ID}"
spec:
containers:
- name: target
image: registry.example.com/my-image:latest
command: ["sleep", "1h"]
EOF
- |
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/scan-target-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --timeout=120s
- |
# Save target info for later stages
echo "TARGET_POD=scan-target-${CI_PIPELINE_ID}" >> deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env
artifacts:
reports:
dotenv: deploy.env
create_access:
stage: scan
needs: [deploy_container]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the role for this specific pod
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
- |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
EOF
- |
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${CI_PIPELINE_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --duration=30m)
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
# Save cluster info
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_scan:
stage: scan
needs: [deploy_container, create_access]
# This job uses the cinc-scanner service container
# The service container already has CINC Auditor and the SAF CLI installed
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to service container
docker cp scan-kubeconfig.yaml cinc-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} cinc-scanner:/tmp/profile
# Run scan in service container
docker exec cinc-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
cinc-auditor exec /tmp/profile \
-t k8s-container://${SCANNER_NAMESPACE}/${TARGET_POD}/${TARGET_CONTAINER} \
--reporter json:/tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Check scan against threshold (saf threshold expects a threshold file, not a bare value)
printf 'compliance:\n  min: %s\n' ${THRESHOLD_VALUE} > /tmp/threshold.yml
saf threshold -i /tmp/scan-results.json -t /tmp/threshold.yml
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp cinc-scanner:/tmp/scan-results.json ./scan-results.json
docker cp cinc-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp cinc-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
# For distroless containers, we need a specialized approach
run_distroless_scan:
stage: scan
needs: [deploy_container, create_access]
# This job will only run if the DISTROLESS variable is set to "true"
rules:
- if: $DISTROLESS == "true"
# Use our specialized distroless scanner service container
services:
- name: registry.example.com/distroless-scanner:latest
alias: distroless-scanner
entrypoint: ["sleep", "infinity"]
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to distroless scanner service container
docker cp scan-kubeconfig.yaml distroless-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} distroless-scanner:/tmp/profile
# Run specialized distroless scan in service container
docker exec distroless-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
/opt/scripts/scan-distroless.sh \
${SCANNER_NAMESPACE} ${TARGET_POD} ${TARGET_CONTAINER} \
/tmp/profile /tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Check scan against threshold (saf threshold expects a threshold file, not a bare value)
printf 'compliance:\n  min: %s\n' ${THRESHOLD_VALUE} > /tmp/threshold.yml
saf threshold -i /tmp/scan-results.json -t /tmp/threshold.yml
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp distroless-scanner:/tmp/scan-results.json ./scan-results.json
docker cp distroless-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp distroless-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [run_scan]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [run_scan]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${TARGET_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete role/scanner-role-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete sa/scanner-sa-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete rolebinding/scanner-binding-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --ignore-not-found
|
This pipeline uses GitLab services to provide:
- Specialized service container for distroless scanning
- Pre-installed dependencies for debug container approach
- Simplified workflow for distroless container scanning
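The pipeline above assumes pre-built scanner images published to your registry (registry.example.com/cinc-auditor-scanner:latest and the distroless variant). A minimal Dockerfile sketch for the standard scanner image, mirroring the dependencies this document installs inline elsewhere; the base image and package set are illustrative:
| # Illustrative Dockerfile for the cinc-auditor-scanner service image
FROM ruby:3.0-slim
RUN apt-get update && apt-get install -y curl gnupg procps nodejs npm \
&& rm -rf /var/lib/apt/lists/*
# Install CINC Auditor and the Kubernetes container transport
RUN curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor \
&& cinc-auditor plugin install train-k8s-container
# Install the SAF CLI for summaries and threshold checks
RUN npm install -g @mitre/saf
CMD ["sleep", "infinity"]
|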
Sidecar Container Approach
The Sidecar Container Approach is our universal interim solution: it runs with minimal privileges and works for both standard and distroless containers.
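The mechanism relies on Kubernetes' shareProcessNamespace: the scanner sidecar can see the target container's processes and read its filesystem through /proc. A sketch of the scanner's core commands, distilled from the full examples below (matching on "sleep 3600" is an assumption about the target's entrypoint):
| # Core of the sidecar approach: locate the target process, scan its root filesystem
TARGET_PID=$(ps aux | grep -v grep | grep "sleep 3600" | head -1 | awk '{print $2}')
cinc-auditor exec /opt/profiles/container-baseline \
-b os=linux \
--target=/proc/${TARGET_PID}/root \
--reporter cli json:/results/scan-results.json
|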
GitHub Actions Implementation
Sidecar Scanner Approach
| name: CINC Auditor Sidecar Container Scan
on:
workflow_dispatch:
inputs:
kubernetes_version:
description: 'Kubernetes version to use'
required: true
default: 'v1.28.3'
target_image:
description: 'Target container image to scan'
required: true
default: 'busybox:latest'
is_distroless:
description: 'Is the target a distroless container?'
required: true
default: 'false'
type: boolean
threshold:
description: 'Minimum passing score (0-100)'
required: true
default: '70'
jobs:
sidecar-scan:
name: Sidecar Container Scan
runs-on: ubuntu-latest
steps:
- name: Checkout code
uses: actions/checkout@v4
- name: Setup Kubernetes
id: kind
uses: helm/kind-action@v1.8.0
with:
version: v0.20.0
cluster_name: scan-cluster
config: |
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
kubeadmConfigPatches:
- |
kind: InitConfiguration
nodeRegistration:
kubeletExtraArgs:
# EphemeralContainers is GA (on by default) since Kubernetes v1.25,
# so no feature gate is needed for v1.28.x
"system-reserved": "cpu=500m,memory=500Mi"
image: kindest/node:${{ github.event.inputs.kubernetes_version }}
- name: Get cluster status
run: |
kubectl get nodes
kubectl cluster-info
- name: Build CINC Auditor Scanner container
run: |
# Create a Dockerfile for the CINC Auditor scanner container
cat > Dockerfile.scanner << EOF
FROM ruby:3.0-slim
# Install dependencies
RUN apt-get update && apt-get install -y \
curl \
gnupg \
procps \
nodejs \
npm \
&& rm -rf /var/lib/apt/lists/*
# Install CINC Auditor
RUN curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor
# Install SAF CLI
RUN npm install -g @mitre/saf
# Copy profiles
COPY examples/cinc-profiles/container-baseline /opt/profiles/container-baseline
# Verify installation
RUN cinc-auditor --version && \
saf --version
# Create a simple script to scan in sidecar mode
RUN echo '#!/bin/bash \n\
TARGET_PID=\$(ps aux | grep -v grep | grep "\$1" | head -1 | awk "{print \\\$2}") \n\
echo "Target process identified: PID \$TARGET_PID" \n\
\n\
cinc-auditor exec /opt/profiles/\$2 \\\n\
-b os=linux \\\n\
--target=/proc/\$TARGET_PID/root \\\n\
--reporter cli json:/results/scan-results.json \n\
\n\
saf summary --input /results/scan-results.json --output-md /results/scan-summary.md \n\
\n\
saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml \n\
echo \$? > /results/threshold-result.txt \n\
\n\
touch /results/scan-complete \n\
' > /usr/local/bin/run-scanner
RUN chmod +x /usr/local/bin/run-scanner
# Default command
CMD ["/bin/bash"]
EOF
# Build the scanner image
docker build -t cinc-scanner:latest -f Dockerfile.scanner .
# Load the image into kind
kind load docker-image cinc-scanner:latest --name scan-cluster
- name: Create namespace and prepare environment
run: |
# Create namespace
kubectl create namespace inspec-test
# Create threshold ConfigMap
cat > threshold.yml << EOF
compliance:
min: ${{ github.event.inputs.threshold }}
failed:
critical:
max: 0 # No critical failures allowed
EOF
kubectl create configmap inspec-thresholds \
--from-file=threshold.yml=threshold.yml \
-n inspec-test
- name: Deploy pod with scanner sidecar
run: |
# Create the pod with shared process namespace
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app-scanner
namespace: inspec-test
labels:
app: scanner-pod
spec:
shareProcessNamespace: true # Enable shared process namespace
containers:
# Target container to be scanned
- name: target
image: ${{ github.event.inputs.target_image }}
command: ["sleep", "3600"]
# CINC Auditor scanner sidecar
- name: scanner
image: cinc-scanner:latest
command:
- "/bin/bash"
- "-c"
- |
# Wait for the main container to start
sleep 10
echo "Starting CINC Auditor scan..."
# Use the script to find process and run scanner
run-scanner "sleep 3600" "container-baseline"
# Keep container running briefly to allow result retrieval
echo "Scan complete. Results available in /results directory."
sleep 300
volumeMounts:
- name: shared-results
mountPath: /results
- name: thresholds
mountPath: /opt/thresholds
volumes:
- name: shared-results
emptyDir: {}
- name: thresholds
configMap:
name: inspec-thresholds
EOF
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/app-scanner -n inspec-test --timeout=300s
# Verify the pod is ready
kubectl get pod app-scanner -n inspec-test
- name: Wait for scan to complete and retrieve results
run: |
# Wait for scan to complete (plain exec: CI jobs have no TTY, so avoid -it)
echo "Waiting for scan to complete..."
until kubectl exec app-scanner -n inspec-test -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
echo "Scan in progress..."
sleep 5
done
# Retrieve scan results
echo "Retrieving scan results..."
kubectl cp inspec-test/app-scanner:/results/scan-results.json ./scan-results.json -c scanner
kubectl cp inspec-test/app-scanner:/results/scan-summary.md ./scan-summary.md -c scanner
# Check threshold result
if kubectl exec app-scanner -n inspec-test -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
THRESHOLD_RESULT=$(kubectl exec app-scanner -n inspec-test -c scanner -- cat /results/threshold-result.txt)
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> $GITHUB_ENV
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
fi
else
echo "Warning: Threshold result not found"
echo "THRESHOLD_RESULT=1" >> $GITHUB_ENV
fi
# Display summary in job output
echo "============= SCAN SUMMARY ============="
cat scan-summary.md
echo "========================================"
- name: Process results with SAF-CLI
run: |
# Install SAF CLI
npm install -g @mitre/saf
# Generate reports
saf view -i scan-results.json --output scan-report.html
saf generate -i scan-results.json -o csv > results.csv
saf generate -i scan-results.json -o junit > junit-results.xml
# Add to GitHub step summary
echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
cat scan-summary.md >> $GITHUB_STEP_SUMMARY
# Add threshold result to summary
if [ "${{ env.THRESHOLD_RESULT }}" -eq 0 ]; then
echo "## ✅ Security scan passed threshold requirements" >> $GITHUB_STEP_SUMMARY
else
echo "## ❌ Security scan failed to meet threshold requirements" >> $GITHUB_STEP_SUMMARY
fi
echo "Threshold: ${{ github.event.inputs.threshold }}%" >> $GITHUB_STEP_SUMMARY
- name: Upload CINC Auditor results
if: always()
uses: actions/upload-artifact@v4
with:
name: cinc-results
path: |
scan-results.json
scan-summary.md
scan-report.html
results.csv
junit-results.xml
- name: Cleanup resources
if: always()
run: |
kubectl delete namespace inspec-test
|
This workflow implements:
- Shared process namespace setup
- Sidecar container deployment with CINC Auditor
- Process identification and scanning
- Support for both standard and distroless containers
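Since the workflow uses a workflow_dispatch trigger, it can also be launched from the GitHub CLI. A sketch, assuming gh is authenticated against the repository; the distroless target image is illustrative:
| # Dispatch the sidecar scan workflow from the GitHub CLI
gh workflow run "CINC Auditor Sidecar Container Scan" \
-f kubernetes_version=v1.28.3 \
-f target_image=gcr.io/distroless/base:latest \
-f is_distroless=true \
-f threshold=70
|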
GitLab CI Implementation
Standard Sidecar Approach
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
TARGET_IMAGE: "registry.example.com/my-image:latest" # Target image to scan
# If scanning a distroless image, set this to true
IS_DISTROLESS: "false"
deploy_sidecar_pod:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the namespace if it doesn't exist
kubectl get namespace ${SCANNER_NAMESPACE} || kubectl create namespace ${SCANNER_NAMESPACE}
# Create ConfigMap for CINC profile
cat > container-baseline.rb << EOF
# Example CINC Auditor profile for container scanning
title "Container Baseline"
control "container-1.1" do
impact 0.7
title "Container files should have proper permissions"
desc "Critical files in the container should have proper permissions."
describe file('/etc/passwd') do
it { should exist }
its('mode') { should cmp '0644' }
end
end
control "container-1.2" do
impact 0.5
title "Container should not have unnecessary packages"
desc "Container should be minimal and not contain unnecessary packages."
describe directory('/var/lib/apt') do
it { should_not exist }
end
end
EOF
kubectl create configmap inspec-profiles-${CI_PIPELINE_ID} \
--from-file=container-baseline=container-baseline.rb \
-n ${SCANNER_NAMESPACE}
# Create ConfigMap for threshold
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0
EOF
kubectl create configmap inspec-thresholds-${CI_PIPELINE_ID} \
--from-file=threshold.yml=threshold.yml \
-n ${SCANNER_NAMESPACE}
# Deploy the pod with sidecar scanner
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app-scanner-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: scanner-pod
pipeline: "${CI_PIPELINE_ID}"
spec:
shareProcessNamespace: true # Enable shared process namespace
containers:
# Target container to be scanned
- name: target
image: ${TARGET_IMAGE}
command: ["sleep", "3600"]
# For distroless containers, adjust command accordingly
# CINC Auditor scanner sidecar
- name: scanner
image: ruby:3.0-slim
command:
- "/bin/bash"
- "-c"
- |
# Install dependencies
apt-get update
apt-get install -y curl gnupg procps nodejs npm
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor
# Install SAF CLI
npm install -g @mitre/saf
# Wait for the main container to start
sleep 10
echo "Starting CINC Auditor scan..."
# Find the main process of the target container
TARGET_PID=\$(ps aux | grep -v grep | grep "sleep 3600" | head -1 | awk '{print \$2}')
if [ -z "\$TARGET_PID" ]; then
echo "ERROR: Could not find target process"
exit 1
fi
echo "Target process identified: PID \$TARGET_PID"
# Run CINC Auditor against the target filesystem
cd /
cinc-auditor exec /opt/profiles/container-baseline \
-b os=linux \
--target=/proc/\$TARGET_PID/root \
--reporter cli json:/results/scan-results.json
SCAN_EXIT_CODE=\$?
echo "Scan completed with exit code: \$SCAN_EXIT_CODE"
# Process results with SAF
if [ -f "/results/scan-results.json" ]; then
echo "Processing results with SAF CLI..."
saf summary --input /results/scan-results.json --output-md /results/scan-summary.md
# Validate against threshold
if [ -f "/opt/thresholds/threshold.yml" ]; then
echo "Validating against threshold..."
saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml
THRESHOLD_RESULT=\$?
echo "Threshold validation result: \$THRESHOLD_RESULT" > /results/threshold-result.txt
fi
fi
# Indicate scan is complete
touch /results/scan-complete
# Keep container running briefly to allow result retrieval
echo "Scan complete. Results available in /results directory."
sleep 300
volumeMounts:
- name: shared-results
mountPath: /results
- name: profiles
mountPath: /opt/profiles
- name: thresholds
mountPath: /opt/thresholds
volumes:
- name: shared-results
emptyDir: {}
- name: profiles
configMap:
name: inspec-profiles-${CI_PIPELINE_ID}
- name: thresholds
configMap:
name: inspec-thresholds-${CI_PIPELINE_ID}
EOF
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --timeout=300s
# Save pod name for later stages
echo "SCANNER_POD=app-scanner-${CI_PIPELINE_ID}" >> deploy.env
- |
# Verify the pod is ready
kubectl get pod app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE}
artifacts:
reports:
dotenv: deploy.env
retrieve_results:
stage: scan
needs: [deploy_sidecar_pod]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Wait for scan to complete (plain exec: CI jobs have no TTY, so avoid -it)
echo "Waiting for scan to complete..."
until kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
echo "Scan in progress..."
sleep 5
done
# Retrieve scan results
echo "Retrieving scan results..."
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-results.json ./scan-results.json -c scanner
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-summary.md ./scan-summary.md -c scanner
# Check threshold result
if kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
THRESHOLD_RESULT=$(kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
fi
else
echo "Warning: Threshold result not found"
echo "THRESHOLD_PASSED=1" >> scan.env
fi
# Display summary in job output
echo "============= SCAN SUMMARY ============="
cat scan-summary.md
echo "========================================"
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [retrieve_results]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [retrieve_results]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${SCANNER_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-profiles-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-thresholds-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
|
This pipeline implements:
- Pod deployment with shared process namespace
- Sidecar scanner container configuration
- Process-based scanning approach
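When adapting this pipeline, it is worth confirming that the shared process namespace is actually in effect before trusting the scan results. A quick check from outside the pod, using the same pod name and variables as above:
| # Verify the scanner sidecar can see the target container's processes
kubectl exec app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} -c scanner -- ps aux
# The target's "sleep 3600" process should appear in the output;
# if it does not, shareProcessNamespace is not taking effect.
|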
Sidecar with Services
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
TARGET_IMAGE: "registry.example.com/my-image:latest" # Target image to scan
# If scanning a distroless image, set this to true
IS_DISTROLESS: "false"
# Define a custom service image for CINC Auditor sidecar deployment
services:
- name: registry.example.com/cinc-auditor-scanner:latest
alias: cinc-scanner
entrypoint: ["sleep", "infinity"]
deploy_sidecar_pod:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the namespace if it doesn't exist
kubectl get namespace ${SCANNER_NAMESPACE} || kubectl create namespace ${SCANNER_NAMESPACE}
# Copy profile from within the service container
docker cp ${CINC_PROFILE_PATH} cinc-scanner:/tmp/profile
docker exec cinc-scanner ls -la /tmp/profile
# Create ConfigMap for CINC profile from the service container
# (copy the profile to the host first; kubectl cannot read paths inside the service container)
PROFILE_FILE=$(docker exec cinc-scanner find /tmp/profile -name "*.rb" | head -1)
docker cp cinc-scanner:${PROFILE_FILE} ./container-baseline.rb
kubectl create configmap inspec-profiles-${CI_PIPELINE_ID} \
--from-file=container-baseline=./container-baseline.rb \
-n ${SCANNER_NAMESPACE}
# Create ConfigMap for threshold
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0
EOF
kubectl create configmap inspec-thresholds-${CI_PIPELINE_ID} \
--from-file=threshold.yml=threshold.yml \
-n ${SCANNER_NAMESPACE}
# Deploy the pod with sidecar scanner
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app-scanner-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: scanner-pod
pipeline: "${CI_PIPELINE_ID}"
spec:
shareProcessNamespace: true # Enable shared process namespace
containers:
# Target container to be scanned
- name: target
image: ${TARGET_IMAGE}
command: ["sleep", "3600"]
# For distroless containers, adjust command accordingly
# CINC Auditor scanner sidecar
- name: scanner
image: registry.example.com/cinc-auditor-scanner:latest
command:
- "/bin/bash"
- "-c"
- |
# Wait for the main container to start
sleep 10
echo "Starting CINC Auditor scan..."
# Find the main process of the target container
# NOTE: "$" is escaped so these expand inside the pod, not in the host heredoc
TARGET_PID=\$(ps aux | grep -v grep | grep "sleep 3600" | head -1 | awk '{print \$2}')
if [ -z "\$TARGET_PID" ]; then
echo "ERROR: Could not find target process"
exit 1
fi
echo "Target process identified: PID \$TARGET_PID"
# Run CINC Auditor against the target filesystem
cd /
cinc-auditor exec /opt/profiles/container-baseline \
-b os=linux \
--target=/proc/\$TARGET_PID/root \
--reporter cli json:/results/scan-results.json
SCAN_EXIT_CODE=\$?
echo "Scan completed with exit code: \$SCAN_EXIT_CODE"
# Process results with SAF
if [ -f "/results/scan-results.json" ]; then
echo "Processing results with SAF CLI..."
saf summary --input /results/scan-results.json --output-md /results/scan-summary.md
# Validate against threshold
if [ -f "/opt/thresholds/threshold.yml" ]; then
echo "Validating against threshold..."
saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml
THRESHOLD_RESULT=\$?
echo "\$THRESHOLD_RESULT" > /results/threshold-result.txt
fi
fi
# Indicate scan is complete
touch /results/scan-complete
# Keep container running briefly to allow result retrieval
echo "Scan complete. Results available in /results directory."
sleep 300
volumeMounts:
- name: shared-results
mountPath: /results
- name: profiles
mountPath: /opt/profiles
- name: thresholds
mountPath: /opt/thresholds
volumes:
- name: shared-results
emptyDir: {}
- name: profiles
configMap:
name: inspec-profiles-${CI_PIPELINE_ID}
- name: thresholds
configMap:
name: inspec-thresholds-${CI_PIPELINE_ID}
EOF
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --timeout=300s
# Save pod name for later stages
echo "SCANNER_POD=app-scanner-${CI_PIPELINE_ID}" >> deploy.env
- |
# Verify the pod is ready
kubectl get pod app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE}
artifacts:
reports:
dotenv: deploy.env
retrieve_results:
stage: scan
needs: [deploy_sidecar_pod]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Wait for scan to complete (plain exec: CI jobs have no TTY, so avoid -it)
echo "Waiting for scan to complete..."
until kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
echo "Scan in progress..."
sleep 5
done
# Retrieve scan results using the service container
echo "Retrieving scan results..."
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-results.json /tmp/scan-results.json -c scanner
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-summary.md /tmp/scan-summary.md -c scanner
# Copy results to service container for processing
docker cp /tmp/scan-results.json cinc-scanner:/tmp/
docker cp /tmp/scan-summary.md cinc-scanner:/tmp/
# Process results in the service container
docker exec cinc-scanner bash -c "
# Generate normalized report
saf normalize -i /tmp/scan-results.json -o /tmp/normalized-results.json
# Additional report processing
saf view -i /tmp/scan-results.json --output /tmp/scan-report.html
"
# Copy processed results back
docker cp cinc-scanner:/tmp/normalized-results.json ./normalized-results.json
docker cp cinc-scanner:/tmp/scan-report.html ./scan-report.html
docker cp cinc-scanner:/tmp/scan-results.json ./scan-results.json
docker cp cinc-scanner:/tmp/scan-summary.md ./scan-summary.md
# Check threshold result
if kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
THRESHOLD_RESULT=$(kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
fi
else
echo "Warning: Threshold result not found"
echo "THRESHOLD_PASSED=1" >> scan.env
fi
# Display summary in job output
echo "============= SCAN SUMMARY ============="
cat scan-summary.md
echo "========================================"
artifacts:
paths:
- scan-results.json
- scan-summary.md
- normalized-results.json
- scan-report.html
reports:
dotenv: scan.env
# This example shows how to utilize the service container
# to generate specialized reports from the scan results
generate_report:
stage: report
needs: [retrieve_results]
script:
- |
# Use the service container to generate comprehensive reports
docker cp scan-results.json cinc-scanner:/tmp/
# Generate multiple report formats in the service container
docker exec cinc-scanner bash -c "
cd /tmp
# Generate HTML report
saf view -i scan-results.json --output enhanced-report.html
# Generate CSV report
saf generate -i scan-results.json -o csv > results.csv
# Generate Excel report
saf generate -i scan-results.json -o xlsx > results.xlsx
# Generate JUnit report for CI integration
saf generate -i scan-results.json -o junit > junit.xml
"
# Copy all reports back
docker cp cinc-scanner:/tmp/enhanced-report.html ./enhanced-report.html
docker cp cinc-scanner:/tmp/results.csv ./results.csv
docker cp cinc-scanner:/tmp/results.xlsx ./results.xlsx
docker cp cinc-scanner:/tmp/junit.xml ./junit.xml
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the enhanced report artifacts.
* HTML Report: enhanced-report.html
* CSV Report: results.csv
* Excel Report: results.xlsx
* JUnit Report: junit.xml
EOF
artifacts:
paths:
- enhanced-report.html
- results.csv
- results.xlsx
- junit.xml
- scan-report.md
reports:
junit: junit.xml
when: always
cleanup:
stage: cleanup
needs: [retrieve_results]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${SCANNER_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-profiles-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-thresholds-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
|
This pipeline uses GitLab services to provide:
- Pre-configured sidecar scanner service
- Simplified deployment and configuration
- Consistent scanning environment
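All of these pipelines gate on the same minimal threshold file: a compliance floor plus a hard cap on critical failures. The SAF CLI threshold format also accepts caps for other severities, which helps when tightening gates incrementally; a sketch following the same compliance/failed structure used throughout this document (the high-severity cap shown is an illustrative policy choice, not a recommendation):
| # A stricter threshold file: compliance floor plus per-severity caps
compliance:
  min: 85
failed:
  critical:
    max: 0 # No critical failures allowed
  high:
    max: 2 # Tolerate at most two high-severity failures
|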
Choosing the Right Example
Use this guide to select the appropriate CI/CD implementation:
- For Standard Containers in Production:
  - GitHub: Use github-workflow-examples/existing-cluster-scanning.yml
  - GitLab: Use gitlab-pipeline-examples/gitlab-ci.yml or gitlab-pipeline-examples/gitlab-ci-with-services.yml
- For Distroless Containers:
  - GitHub: Use github-workflow-examples/setup-and-scan.yml with distroless configuration
  - GitLab: Use gitlab-pipeline-examples/existing-cluster-scanning.yml with distroless configuration, or gitlab-pipeline-examples/gitlab-ci-with-services.yml with the distroless service
- For Universal Scanning (both standard and distroless):
  - GitHub: Use github-workflow-examples/sidecar-scanner.yml
  - GitLab: Use gitlab-pipeline-examples/gitlab-ci-sidecar.yml or gitlab-pipeline-examples/gitlab-ci-sidecar-with-services.yml
- For Local Development and Testing:
  - GitHub: Use github-workflow-examples/setup-and-scan.yml
  - GitLab: Use gitlab-pipeline-examples/gitlab-ci.yml with minikube setup
Features Comparison
| Feature | Kubernetes API Approach | Debug Container Approach | Sidecar Container Approach |
|---------|-------------------------|--------------------------|----------------------------|
| Standard Container Support | ✅ Best approach | ✅ Supported | ✅ Supported |
| Distroless Container Support | 🔄 In progress | ✅ Best interim approach | ✅ Supported |
| No Pod Modification Required | ✅ Yes | ❌ No | ❌ No |
| Minimal Privileges | ✅ Yes | ❌ No | ✅ Yes |
| GitHub Actions Support | ✅ Yes | ✅ Yes | ✅ Yes |
| GitLab CI Support | ✅ Yes | ✅ Yes | ✅ Yes |
| GitLab Services Support | ✅ Yes | ✅ Yes | ✅ Yes |