GitLab Pipeline Examples
This directory contains example GitLab CI pipeline configuration files that demonstrate various container scanning approaches.
Available Examples
- Standard Kubernetes API: Four-stage pipeline for container scanning using the Kubernetes API
- Dynamic RBAC Scanning: Label-based pod targeting with restricted RBAC permissions
- Existing Cluster Scanning: Pipeline for scanning labeled pods in an existing cluster with temporary, least-privilege RBAC
- GitLab CI with Services: Pipeline using GitLab services for a pre-configured scanning environment
- Sidecar Container: Pipeline implementing pod deployment with shared process namespace
- Sidecar with Services: Pipeline using GitLab services for sidecar scanner deployment
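All of these examples share one mechanic: values discovered in an early stage (pod names, tokens, cluster info) are handed to later stages as GitLab dotenv artifacts, i.e. plain `KEY=VALUE` lines in a file declared under `artifacts:reports:dotenv`. A minimal sketch of that mechanism in plain bash (file and value names are illustrative):

```shell
#!/usr/bin/env bash
set -e

# A "deploy" stage writes plain KEY=VALUE lines to a file that the job
# declares under artifacts:reports:dotenv
echo "TARGET_POD=scan-target-12345" > deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env

# GitLab injects these keys as CI variables into jobs that list this job
# in `needs`; locally, sourcing the file has the same effect
set -a
. ./deploy.env
set +a

echo "${TARGET_POD}/${TARGET_CONTAINER}"
```

Note that with `needs:` present, a job only receives dotenv variables from the jobs it explicitly lists, which is why later stages in these pipelines must name every producer they rely on.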
Standard GitLab CI Pipeline
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
deploy_container:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: scan-target-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: target-app
pipeline: "${CI_PIPELINE_ID}"
spec:
containers:
- name: target
image: registry.example.com/my-image:latest
command: ["sleep", "1h"]
EOF
- |
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/scan-target-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --timeout=120s
- |
# Save target info for later stages
echo "TARGET_POD=scan-target-${CI_PIPELINE_ID}" >> deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env
artifacts:
reports:
dotenv: deploy.env
create_access:
stage: scan
needs: [deploy_container]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the role for this specific pod
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
- |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
EOF
- |
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${CI_PIPELINE_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --duration=30m)
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
# Save cluster info
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_scan:
stage: scan
needs: [deploy_container, create_access]
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
cinc-auditor plugin install train-k8s-container
# Install SAF CLI
npm install -g @mitre/saf
# Run cinc-auditor scan (|| true: failed controls exit non-zero, but the
# summary and threshold steps should still run)
KUBECONFIG=scan-kubeconfig.yaml \
cinc-auditor exec ${CINC_PROFILE_PATH} \
-t k8s-container://${SCANNER_NAMESPACE}/${TARGET_POD}/${TARGET_CONTAINER} \
--reporter json:scan-results.json || true
# Generate scan summary using SAF CLI
saf summary --input scan-results.json --output-md scan-summary.md
# Display summary in job output
cat scan-summary.md
# Create a threshold file and check the scan against it
cat > threshold.yml << EOF
compliance:
  min: ${THRESHOLD_VALUE}
EOF
saf threshold -i scan-results.json -t threshold.yml && THRESHOLD_RESULT=0 || THRESHOLD_RESULT=$?
# Save result for later stages
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [run_scan]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([[ "${THRESHOLD_PASSED}" -eq 0 ]] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [deploy_container, run_scan] # deploy_container provides TARGET_POD via dotenv
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${TARGET_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete role/scanner-role-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete sa/scanner-sa-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete rolebinding/scanner-binding-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --ignore-not-found
|
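One detail worth calling out in the scan job above: GitLab runs script lines with errexit-style failure handling, so capturing `$?` on a line of its own is only reached when the preceding command succeeded. The inline-capture idiom can be verified in plain bash (`false` stands in for a failing `saf threshold`):

```shell
#!/usr/bin/env bash
set -e  # mirrors GitLab CI, which aborts the job on the first failing command

# The `cmd && RC=0 || RC=$?` compound captures the exit code without
# triggering errexit, because the compound as a whole never fails
false && THRESHOLD_RESULT=0 || THRESHOLD_RESULT=$?
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}"   # 1: threshold check failed

true && SCAN_EXIT_CODE=0 || SCAN_EXIT_CODE=$?
echo "SCAN_EXIT_CODE=${SCAN_EXIT_CODE}"       # 0: scan succeeded
```

The same idiom applies anywhere a pipeline wants to record a scan or threshold exit code and decide later whether to enforce it as a gate.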
Dynamic RBAC Scanning Pipeline
| stages:
- deploy
- scan
- verify
- cleanup
variables:
KUBERNETES_NAMESPACE: "dynamic-scan-$CI_PIPELINE_ID"
TARGET_IMAGE: "busybox:latest"
SCAN_LABEL_KEY: "scan-target"
SCAN_LABEL_VALUE: "true"
CINC_PROFILE: "dev-sec/linux-baseline"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
DURATION_MINUTES: "15" # Token duration in minutes
# Allow overriding variables through pipeline triggers or UI
.dynamic_variables: &dynamic_variables
TARGET_IMAGE: ${TARGET_IMAGE}
SCAN_LABEL_KEY: ${SCAN_LABEL_KEY}
SCAN_LABEL_VALUE: ${SCAN_LABEL_VALUE}
CINC_PROFILE: ${CINC_PROFILE}
THRESHOLD_VALUE: ${THRESHOLD_VALUE}
ADDITIONAL_PROFILE_ANNOTATION: "${ADDITIONAL_PROFILE_ANNOTATION}" # Optional annotation for specifying additional profiles
setup_test_environment:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create test namespace
- kubectl create namespace ${KUBERNETES_NAMESPACE}
# Create multiple test pods with different images and labels
- |
# Create 3 pods, but only mark the first one for scanning
for i in {1..3}; do
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: pod-${i}
namespace: ${KUBERNETES_NAMESPACE}
labels:
app: test-pod-${i}
${SCAN_LABEL_KEY}: "$([ $i -eq 1 ] && echo "${SCAN_LABEL_VALUE}" || echo "false")"
annotations:
scan-profile: "${CINC_PROFILE}"
$([ -n "${ADDITIONAL_PROFILE_ANNOTATION}" ] && echo "${ADDITIONAL_PROFILE_ANNOTATION}" || echo "")
spec:
containers:
- name: container
image: ${TARGET_IMAGE}
command: ["sleep", "infinity"]
EOF
done
# Wait for pods to be ready
- kubectl wait --for=condition=ready pod -l app=test-pod-1 -n ${KUBERNETES_NAMESPACE} --timeout=120s
# Get the name of the pod with our scan label
- |
TARGET_POD=$(kubectl get pods -n ${KUBERNETES_NAMESPACE} -l ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$TARGET_POD" ]; then
echo "Error: No pod found with label ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE}"
exit 1
fi
echo "TARGET_POD=${TARGET_POD}" >> deploy.env
# Save scan profile from annotations if available
- |
SCAN_PROFILE=$(kubectl get pod ${TARGET_POD} -n ${KUBERNETES_NAMESPACE} -o jsonpath='{.metadata.annotations.scan-profile}')
if [ -n "$SCAN_PROFILE" ]; then
echo "Found scan profile annotation: ${SCAN_PROFILE}"
echo "SCAN_PROFILE=${SCAN_PROFILE}" >> deploy.env
else
echo "Using default profile: ${CINC_PROFILE}"
echo "SCAN_PROFILE=${CINC_PROFILE}" >> deploy.env
fi
# Show all pods in the namespace
- kubectl get pods -n ${KUBERNETES_NAMESPACE} --show-labels
artifacts:
reports:
dotenv: deploy.env
create_dynamic_rbac:
stage: scan
needs: [setup_test_environment]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create service account
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa
namespace: ${KUBERNETES_NAMESPACE}
EOF
# Create role (note: RBAC cannot restrict by label, so exec access applies to all pods in this namespace; label selection happens at discovery time)
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role
namespace: ${KUBERNETES_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
EOF
# Create role binding
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding
namespace: ${KUBERNETES_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa
namespace: ${KUBERNETES_NAMESPACE}
roleRef:
kind: Role
name: scanner-role
apiGroup: rbac.authorization.k8s.io
EOF
# Generate token
- |
TOKEN=$(kubectl create token scanner-sa -n ${KUBERNETES_NAMESPACE} --duration=${DURATION_MINUTES}m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Save token and cluster info for later stages
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_security_scan:
stage: scan
needs: [setup_test_environment, create_dynamic_rbac]
script:
# Create kubeconfig with restricted token
- |
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${KUBERNETES_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scan-kubeconfig.yaml
# Install CINC Auditor
- curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor
# Install train-k8s-container plugin
- cinc-auditor plugin install train-k8s-container
# Install SAF CLI
- npm install -g @mitre/saf
# Verify the tools
- cinc-auditor --version
- saf --version
# Find the target pod by label using the restricted token
- |
echo "Looking for pods with label: ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE}"
SCANNED_POD=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n ${KUBERNETES_NAMESPACE} -l ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
if [ -z "$SCANNED_POD" ]; then
echo "Error: No pod found with label ${SCAN_LABEL_KEY}=${SCAN_LABEL_VALUE} using restricted access"
exit 1
fi
echo "Found target pod: ${SCANNED_POD}"
# Verify it matches what we expected
if [ "$SCANNED_POD" != "$TARGET_POD" ]; then
echo "Warning: Scanned pod ($SCANNED_POD) doesn't match expected target pod ($TARGET_POD)"
fi
# Get container name (also saved to scan.env so later stages can reference it)
- |
CONTAINER_NAME=$(kubectl get pod ${TARGET_POD} -n ${KUBERNETES_NAMESPACE} -o jsonpath='{.spec.containers[0].name}')
echo "CONTAINER_NAME=${CONTAINER_NAME}" >> scan.env
# Run CINC Auditor scan
- |
echo "Running CINC Auditor scan on ${KUBERNETES_NAMESPACE}/${TARGET_POD}/${CONTAINER_NAME}"
KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ${SCAN_PROFILE} \
-t k8s-container://${KUBERNETES_NAMESPACE}/${TARGET_POD}/${CONTAINER_NAME} \
--reporter cli json:scan-results.json && SCAN_EXIT_CODE=0 || SCAN_EXIT_CODE=$?
# Save scan exit code (captured inline so a failing scan does not abort the job)
echo "SCAN_EXIT_CODE=${SCAN_EXIT_CODE}" >> scan.env
echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"
# Process results with SAF-CLI
- |
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Create a threshold file
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${THRESHOLD_VALUE}%:"
saf threshold -i scan-results.json -t threshold.yml && THRESHOLD_RESULT=0 || THRESHOLD_RESULT=$?
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> scan.env
if [ $THRESHOLD_RESULT -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_RESULT
fi
# Generate comprehensive HTML report
saf view -i scan-results.json --output scan-report.html
artifacts:
paths:
- scan-results.json
- scan-summary.md
- scan-report.html
reports:
dotenv: scan.env
verify_rbac_restrictions:
stage: verify
needs: [setup_test_environment, create_dynamic_rbac, run_security_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create a second kubeconfig with restricted token
- |
cat > verify-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${KUBERNETES_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 verify-kubeconfig.yaml
# Get a non-target pod name
- OTHER_POD=$(kubectl get pods -n ${KUBERNETES_NAMESPACE} -l app=test-pod-2 -o jsonpath='{.items[0].metadata.name}')
# Check what we CAN do
- |
echo "Verifying what we CAN do with restricted RBAC:"
echo "Can list pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl get pods -n ${KUBERNETES_NAMESPACE} > /dev/null &&
echo "✅ Can list pods" ||
echo "❌ Cannot list pods"
echo "Can exec into target pod:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i create pods --subresource=exec -n ${KUBERNETES_NAMESPACE} --resource-name=${TARGET_POD} &&
echo "✅ Can exec into target pod" ||
echo "❌ Cannot exec into target pod"
# Check what we CANNOT do
- |
echo "Verifying what we CANNOT do with restricted RBAC:"
echo "Cannot create pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i create pods -n ${KUBERNETES_NAMESPACE} &&
echo "❌ Security issue: Can create pods" ||
echo "✅ Cannot create pods (expected)"
echo "Cannot delete pods:"
KUBECONFIG=verify-kubeconfig.yaml kubectl auth can-i delete pods -n ${KUBERNETES_NAMESPACE} &&
echo "❌ Security issue: Can delete pods" ||
echo "✅ Cannot delete pods (expected)"
# Create a security report for MR
- |
cat > security-report.md << EOF
# Container Security Scan Report
## Scan Results
$(cat scan-summary.md)
## Threshold Check
$([[ "${THRESHOLD_RESULT}" -eq 0 ]] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## RBAC Security Verification
The scanner service account has properly restricted access:
- ✅ Can list pods in the namespace
- ✅ Can exec into target pods for scanning
- ✅ Cannot create or delete pods
- ✅ Cannot access cluster-wide resources
## Scan Details
- Target Pod: \`${TARGET_POD}\`
- Container: \`${CONTAINER_NAME}\`
- Image: \`${TARGET_IMAGE}\`
- Profile: \`${SCAN_PROFILE}\`
For full results, see the scan artifacts.
EOF
artifacts:
paths:
- security-report.md
cleanup:
stage: cleanup
needs: [setup_test_environment]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- kubectl delete namespace ${KUBERNETES_NAMESPACE} --ignore-not-found
|
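The `setup_test_environment` job above marks only the first of three pods for scanning by substituting the label value inline in the manifest heredoc. The substitution logic in isolation (values hard-coded for illustration):

```shell
#!/usr/bin/env bash
set -e

SCAN_LABEL_KEY="scan-target"
SCAN_LABEL_VALUE="true"

# Same command substitution used inside the pod manifest heredoc:
# pod 1 gets the real label value, the rest get "false"
for i in 1 2 3; do
  value="$([ "$i" -eq 1 ] && echo "${SCAN_LABEL_VALUE}" || echo "false")"
  echo "pod-${i} -> ${SCAN_LABEL_KEY}=${value}"
done
```

The scanner then discovers its target purely by label selector, so the RBAC token never needs to know pod names in advance.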
Existing Cluster Scanning Pipeline
| stages:
- prepare
- scan
- verify
- cleanup
variables:
# Default values - override in UI or with pipeline parameters
SCAN_NAMESPACE: "default" # Existing namespace where pods are deployed
TARGET_LABEL_SELECTOR: "scan-target=true" # Label to identify target pods
CINC_PROFILE: "dev-sec/linux-baseline"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
DURATION_MINUTES: "15" # Token duration in minutes
# Define workflow
workflow:
rules:
- if: $CI_PIPELINE_SOURCE == "web" # Manual trigger from UI
- if: $CI_PIPELINE_SOURCE == "schedule" # Scheduled pipeline
- if: $CI_PIPELINE_SOURCE == "trigger" # API trigger with token
# Find pods to scan in existing cluster
prepare_scan:
stage: prepare
image: bitnami/kubectl:latest
script:
# Configure kubectl with cluster credentials
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create a unique run ID for this pipeline
- RUN_ID="gl-$CI_PIPELINE_ID-$CI_JOB_ID"
- echo "RUN_ID=${RUN_ID}" >> prepare.env
# Verify the namespace exists
- kubectl get namespace ${SCAN_NAMESPACE} || { echo "Namespace ${SCAN_NAMESPACE} does not exist"; exit 1; }
# Find target pods with specified label
- |
TARGET_PODS=$(kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR} -o jsonpath='{.items[*].metadata.name}')
if [ -z "$TARGET_PODS" ]; then
echo "No pods found matching label: ${TARGET_LABEL_SELECTOR} in namespace ${SCAN_NAMESPACE}"
exit 1
fi
# Count and list found pods
POD_COUNT=$(echo $TARGET_PODS | wc -w)
echo "Found ${POD_COUNT} pods to scan:"
kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR} --show-labels
# Get the first pod as primary target
PRIMARY_POD=$(echo $TARGET_PODS | cut -d' ' -f1)
echo "Primary target pod: ${PRIMARY_POD}"
echo "PRIMARY_POD=${PRIMARY_POD}" >> prepare.env
# Get container name for the primary pod
PRIMARY_CONTAINER=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.spec.containers[0].name}')
echo "Primary container: ${PRIMARY_CONTAINER}"
echo "PRIMARY_CONTAINER=${PRIMARY_CONTAINER}" >> prepare.env
# Check for custom profile annotation
PROFILE_ANNOTATION=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.metadata.annotations.scan-profile}' 2>/dev/null || echo "")
if [ -n "$PROFILE_ANNOTATION" ]; then
echo "Found profile annotation: ${PROFILE_ANNOTATION}"
echo "PROFILE=${PROFILE_ANNOTATION}" >> prepare.env
else
echo "Using default profile: ${CINC_PROFILE}"
echo "PROFILE=${CINC_PROFILE}" >> prepare.env
fi
artifacts:
reports:
dotenv: prepare.env
# Create temporary RBAC for scanning
create_rbac:
stage: prepare
image: bitnami/kubectl:latest
needs: [prepare_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create service account for scanning
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
EOF
# Create role with least privilege
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${PRIMARY_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${PRIMARY_POD}"]
EOF
# Create role binding
- |
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
labels:
app: security-scanner
component: cinc-auditor
pipeline: "${CI_PIPELINE_ID}"
subjects:
- kind: ServiceAccount
name: scanner-${RUN_ID}
namespace: ${SCAN_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${RUN_ID}
apiGroup: rbac.authorization.k8s.io
EOF
# Generate token for service account
- |
TOKEN=$(kubectl create token scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --duration=${DURATION_MINUTES}m)
SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[].cluster.certificate-authority-data}')
# Save token and cluster info for later stages
echo "SCANNER_TOKEN=${TOKEN}" >> rbac.env
echo "CLUSTER_SERVER=${SERVER}" >> rbac.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> rbac.env
artifacts:
reports:
dotenv: rbac.env
# Run the security scan with restricted access
run_security_scan:
stage: scan
image: registry.gitlab.com/gitlab-org/security-products/analyzers/container-scanning:5
needs: [prepare_scan, create_rbac]
script:
# Create restricted kubeconfig
- |
cat > scanner-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCAN_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scanner-kubeconfig.yaml
# Install CINC Auditor and plugins
- curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor
- cinc-auditor plugin install train-k8s-container
# Install SAF CLI
- apt-get update && apt-get install -y npm
- npm install -g @mitre/saf
# Test restricted access
- |
echo "Testing restricted access:"
export KUBECONFIG=scanner-kubeconfig.yaml
kubectl get pods -n ${SCAN_NAMESPACE} -l ${TARGET_LABEL_SELECTOR}
echo "Verifying target pod access:"
kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o name || { echo "Cannot access target pod with restricted token"; exit 1; }
# Run the scan
- |
echo "Running CINC Auditor scan on ${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER}"
KUBECONFIG=scanner-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
-t k8s-container://${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER} \
--reporter cli json:scan-results.json && SCAN_EXIT_CODE=0 || SCAN_EXIT_CODE=$?
# Save scan exit code (captured inline so a failing scan does not abort the job)
echo "SCAN_EXIT_CODE=${SCAN_EXIT_CODE}" >> scan.env
echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"
# Process results with SAF-CLI
- |
echo "Generating scan summary with SAF-CLI:"
saf summary --input scan-results.json --output-md scan-summary.md
# Display the summary in the logs
cat scan-summary.md
# Create a threshold file
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0 # No critical failures allowed
EOF
# Apply threshold check
echo "Checking against threshold with min compliance of ${THRESHOLD_VALUE}%:"
saf threshold -i scan-results.json -t threshold.yml && THRESHOLD_RESULT=0 || THRESHOLD_RESULT=$?
echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> scan.env
if [ $THRESHOLD_RESULT -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce the threshold as a quality gate
# exit $THRESHOLD_RESULT
fi
# Generate HTML report
saf view -i scan-results.json --output scan-report.html
artifacts:
paths:
- scan-results.json
- scan-summary.md
- scan-report.html
- threshold.yml
reports:
dotenv: scan.env
# Verify RBAC permissions are properly restricted
verify_rbac:
stage: verify
image: bitnami/kubectl:latest
needs: [prepare_scan, create_rbac, run_security_scan]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Create restricted kubeconfig for testing
- |
cat > scanner-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCAN_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
chmod 600 scanner-kubeconfig.yaml
# Check what we CAN do
- |
echo "Verifying what we CAN do with restricted RBAC:"
echo "Can list pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl get pods -n ${SCAN_NAMESPACE} > /dev/null &&
echo "✅ Can list pods" ||
echo "❌ Cannot list pods"
echo "Can exec into target pod:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods --subresource=exec -n ${SCAN_NAMESPACE} --resource-name=${PRIMARY_POD} &&
echo "✅ Can exec into target pod" ||
echo "❌ Cannot exec into target pod"
# Check what we CANNOT do
- |
echo "Verifying what we CANNOT do with restricted RBAC:"
echo "Cannot create pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods -n ${SCAN_NAMESPACE} &&
echo "❌ Security issue: Can create pods" ||
echo "✅ Cannot create pods (expected)"
echo "Cannot delete pods:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i delete pods -n ${SCAN_NAMESPACE} &&
echo "❌ Security issue: Can delete pods" ||
echo "✅ Cannot delete pods (expected)"
# Find non-target pod for testing
OTHER_POD=$(kubectl get pods -n ${SCAN_NAMESPACE} -l 'scan-target!=true' -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || echo "")
if [ -n "$OTHER_POD" ] && [ "$OTHER_POD" != "$PRIMARY_POD" ]; then
echo "Cannot exec into non-target pod:"
KUBECONFIG=scanner-kubeconfig.yaml kubectl auth can-i create pods --subresource=exec -n ${SCAN_NAMESPACE} --resource-name=${OTHER_POD} &&
echo "❌ Security issue: Can exec into non-target pod" ||
echo "✅ Cannot exec into non-target pod (expected)"
fi
# Create security report
- |
cat > security-report.md << EOF
# Container Security Scan Report
## Scan Details
- **Pipeline:** ${CI_PIPELINE_ID}
- **Target Namespace:** ${SCAN_NAMESPACE}
- **Target Pod:** ${PRIMARY_POD}
- **Target Container:** ${PRIMARY_CONTAINER}
- **CINC Profile:** ${PROFILE}
- **Compliance Threshold:** ${THRESHOLD_VALUE}%
## RBAC Security Verification
The scanner service account has properly restricted access:
- ✅ Can list pods in the namespace
- ✅ Can exec into target pods for scanning
- ✅ Cannot create or delete pods
- ✅ Cannot exec into non-target pods
- ✅ Cannot access cluster-wide resources
## Scan Results
$([[ "${THRESHOLD_RESULT}" -eq 0 ]] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
See scan artifacts for detailed compliance results.
EOF
artifacts:
paths:
- security-report.md
# Always clean up RBAC resources
cleanup_rbac:
stage: cleanup
image: bitnami/kubectl:latest
needs: [prepare_scan]
when: always
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
# Delete role binding
- kubectl delete rolebinding scanner-binding-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
# Delete role
- kubectl delete role scanner-role-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
# Delete service account
- kubectl delete serviceaccount scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found
- echo "RBAC resources cleaned up"
|
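Each scan stage above builds a throwaway kubeconfig from the three captured values (API server URL, CA data, short-lived token). The generation is plain heredoc substitution and can be exercised standalone (all values below are placeholders):

```shell
#!/usr/bin/env bash
set -e

# Placeholder values; in the pipelines these arrive via the scanner.env /
# rbac.env dotenv artifacts
CLUSTER_SERVER="https://kubernetes.example.com:6443"
CLUSTER_CA_DATA="LS0tLS1CRUdJTi4uLg=="
SCANNER_TOKEN="placeholder-token"
SCAN_NAMESPACE="default"

cat > scanner-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    server: ${CLUSTER_SERVER}
    certificate-authority-data: ${CLUSTER_CA_DATA}
  name: scanner-cluster
contexts:
- context:
    cluster: scanner-cluster
    namespace: ${SCAN_NAMESPACE}
    user: scanner-user
  name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
  user:
    token: ${SCANNER_TOKEN}
EOF
chmod 600 scanner-kubeconfig.yaml  # the file embeds a token, so restrict permissions

echo "wrote scanner-kubeconfig.yaml"
```

Because the token is created with a short `--duration` and the kubeconfig lives only in the job workspace, the credential expires on its own even if cleanup fails.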
GitLab CI with Services Pipeline
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
# Define a custom service image for CINC Auditor
services:
- name: registry.example.com/cinc-auditor-scanner:latest
alias: cinc-scanner
entrypoint: ["sleep", "infinity"]
deploy_container:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: scan-target-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: target-app
pipeline: "${CI_PIPELINE_ID}"
spec:
containers:
- name: target
image: registry.example.com/my-image:latest
command: ["sleep", "1h"]
EOF
- |
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/scan-target-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --timeout=120s
- |
# Save target info for later stages
echo "TARGET_POD=scan-target-${CI_PIPELINE_ID}" >> deploy.env
echo "TARGET_CONTAINER=target" >> deploy.env
artifacts:
reports:
dotenv: deploy.env
create_access:
stage: scan
needs: [deploy_container]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the role for this specific pod
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: scanner-role-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "list"]
- apiGroups: [""]
resources: ["pods/exec"]
verbs: ["create"]
resourceNames: ["${TARGET_POD}"]
- apiGroups: [""]
resources: ["pods/log"]
verbs: ["get"]
resourceNames: ["${TARGET_POD}"]
EOF
- |
# Create service account
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
EOF
- |
# Create role binding
cat <<EOF | kubectl apply -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: scanner-binding-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
subjects:
- kind: ServiceAccount
name: scanner-sa-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
roleRef:
kind: Role
name: scanner-role-${CI_PIPELINE_ID}
apiGroup: rbac.authorization.k8s.io
EOF
- |
# Generate token
TOKEN=$(kubectl create token scanner-sa-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --duration=30m)
echo "SCANNER_TOKEN=${TOKEN}" >> scanner.env
# Save cluster info
SERVER=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
CA_DATA=$(kubectl config view --raw --minify --flatten \
-o jsonpath='{.clusters[].cluster.certificate-authority-data}')
echo "CLUSTER_SERVER=${SERVER}" >> scanner.env
echo "CLUSTER_CA_DATA=${CA_DATA}" >> scanner.env
artifacts:
reports:
dotenv: scanner.env
run_scan:
stage: scan
needs: [deploy_container, create_access]
# This job uses the cinc-scanner service container
# The service container already has CINC Auditor and the SAF CLI installed
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to service container
docker cp scan-kubeconfig.yaml cinc-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} cinc-scanner:/tmp/profile
# Run scan in service container
docker exec cinc-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
cinc-auditor exec /tmp/profile \
-t k8s-container://${SCANNER_NAMESPACE}/${TARGET_POD}/${TARGET_CONTAINER} \
--reporter json:/tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Create a threshold file and check the scan against it
printf 'compliance:\n  min: %s\n' ${THRESHOLD_VALUE} > /tmp/threshold.yml
saf threshold -i /tmp/scan-results.json -t /tmp/threshold.yml
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp cinc-scanner:/tmp/scan-results.json ./scan-results.json
docker cp cinc-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp cinc-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ ${THRESHOLD_RESULT} -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
# For distroless containers, we need a specialized approach
run_distroless_scan:
stage: scan
needs: [deploy_container, create_access]
# This job will only run if the DISTROLESS variable is set to "true"
rules:
- if: $DISTROLESS == "true"
# Use our specialized distroless scanner service container
services:
- name: registry.example.com/distroless-scanner:latest
alias: distroless-scanner
entrypoint: ["sleep", "infinity"]
script:
- |
# Create a kubeconfig file
cat > scan-kubeconfig.yaml << EOF
apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
server: ${CLUSTER_SERVER}
certificate-authority-data: ${CLUSTER_CA_DATA}
name: scanner-cluster
contexts:
- context:
cluster: scanner-cluster
namespace: ${SCANNER_NAMESPACE}
user: scanner-user
name: scanner-context
current-context: scanner-context
users:
- name: scanner-user
user:
token: ${SCANNER_TOKEN}
EOF
- |
# Copy kubeconfig and profiles to distroless scanner service container
docker cp scan-kubeconfig.yaml distroless-scanner:/tmp/
docker cp ${CINC_PROFILE_PATH} distroless-scanner:/tmp/profile
# Run specialized distroless scan in service container
docker exec distroless-scanner bash -c "
KUBECONFIG=/tmp/scan-kubeconfig.yaml \
/opt/scripts/scan-distroless.sh \
${SCANNER_NAMESPACE} ${TARGET_POD} ${TARGET_CONTAINER} \
/tmp/profile /tmp/scan-results.json
# Generate scan summary using SAF CLI
saf summary --input /tmp/scan-results.json --output-md /tmp/scan-summary.md
# Check scan against threshold
saf threshold -i /tmp/scan-results.json -t ${THRESHOLD_VALUE}
echo \$? > /tmp/threshold_result.txt
"
# Copy results back from service container
docker cp distroless-scanner:/tmp/scan-results.json ./scan-results.json
docker cp distroless-scanner:/tmp/scan-summary.md ./scan-summary.md
docker cp distroless-scanner:/tmp/threshold_result.txt ./threshold_result.txt
# Display summary in job output
cat scan-summary.md
# Process threshold result
THRESHOLD_RESULT=$(cat threshold_result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
# Uncomment to enforce threshold as a gate
# exit ${THRESHOLD_RESULT}
fi
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [run_scan]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [run_scan]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${TARGET_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete role/scanner-role-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete sa/scanner-sa-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete rolebinding/scanner-binding-${CI_PIPELINE_ID} \
-n ${SCANNER_NAMESPACE} --ignore-not-found
|
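The `KUBE_CONFIG` CI/CD variable consumed by each job above is expected to hold a base64-encoded kubeconfig. A minimal sketch of producing such a value and decoding it the way the jobs do (file paths here are illustrative):

```shell
# Encode a kubeconfig for storage as a GitLab CI/CD variable, then decode it
# exactly as the pipeline does with `echo "$KUBE_CONFIG" | base64 -d`.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/demo-kubeconfig.yaml  # stand-in file
KUBE_CONFIG=$(base64 -w0 /tmp/demo-kubeconfig.yaml)                  # value to paste into the CI/CD variable
echo "$KUBE_CONFIG" | base64 -d > /tmp/decoded-kubeconfig.yaml       # what each job runs
diff /tmp/demo-kubeconfig.yaml /tmp/decoded-kubeconfig.yaml && echo "round-trip OK"
```

Using `-w0` avoids line wrapping (GNU coreutils), so the variable stays a single line that survives `echo`.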
Sidecar Container Pipeline
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
TARGET_IMAGE: "registry.example.com/my-image:latest" # Target image to scan
# If scanning a distroless image, set this to true
IS_DISTROLESS: "false"
deploy_sidecar_pod:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the namespace if it doesn't exist
kubectl get namespace ${SCANNER_NAMESPACE} || kubectl create namespace ${SCANNER_NAMESPACE}
# Create ConfigMap for CINC profile
cat > container-baseline.rb << EOF
# Example CINC Auditor profile for container scanning
title "Container Baseline"
control "container-1.1" do
impact 0.7
title "Container files should have proper permissions"
desc "Critical files in the container should have proper permissions."
describe file('/etc/passwd') do
it { should exist }
its('mode') { should cmp '0644' }
end
end
control "container-1.2" do
impact 0.5
title "Container should not have unnecessary packages"
desc "Container should be minimal and not contain unnecessary packages."
describe directory('/var/lib/apt') do
it { should_not exist }
end
end
EOF
kubectl create configmap inspec-profiles-${CI_PIPELINE_ID} \
--from-file=container-baseline=container-baseline.rb \
-n ${SCANNER_NAMESPACE}
# Create ConfigMap for threshold
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0
EOF
kubectl create configmap inspec-thresholds-${CI_PIPELINE_ID} \
--from-file=threshold.yml=threshold.yml \
-n ${SCANNER_NAMESPACE}
# Deploy the pod with sidecar scanner
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app-scanner-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: scanner-pod
pipeline: "${CI_PIPELINE_ID}"
spec:
shareProcessNamespace: true # Enable shared process namespace
containers:
# Target container to be scanned
- name: target
image: ${TARGET_IMAGE}
command: ["sleep", "3600"]
# For distroless containers, adjust command accordingly
# CINC Auditor scanner sidecar
- name: scanner
image: ruby:3.0-slim
command:
- "/bin/bash"
- "-c"
- |
# Install dependencies
apt-get update
apt-get install -y curl gnupg procps nodejs npm
# Install CINC Auditor
curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor
# Install SAF CLI
npm install -g @mitre/saf
# Wait for the main container to start
sleep 10
echo "Starting CINC Auditor scan..."
# Find the main process of the target container
TARGET_PID=\$(ps aux | grep -v grep | grep "sleep 3600" | head -1 | awk '{print \$2}')
if [ -z "\$TARGET_PID" ]; then
echo "ERROR: Could not find target process"
exit 1
fi
echo "Target process identified: PID \$TARGET_PID"
# Run CINC Auditor against the target filesystem
cd /
cinc-auditor exec /opt/profiles/container-baseline \
-b os=linux \
--target=/proc/\$TARGET_PID/root \
--reporter cli json:/results/scan-results.json
SCAN_EXIT_CODE=\$?
echo "Scan completed with exit code: \$SCAN_EXIT_CODE"
# Process results with SAF
if [ -f "/results/scan-results.json" ]; then
echo "Processing results with SAF CLI..."
saf summary --input /results/scan-results.json --output-md /results/scan-summary.md
# Validate against threshold
if [ -f "/opt/thresholds/threshold.yml" ]; then
echo "Validating against threshold..."
saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml
THRESHOLD_RESULT=\$?
echo "Threshold validation result: \$THRESHOLD_RESULT"
echo "\$THRESHOLD_RESULT" > /results/threshold-result.txt
fi
fi
# Indicate scan is complete
touch /results/scan-complete
# Keep container running briefly to allow result retrieval
echo "Scan complete. Results available in /results directory."
sleep 300
volumeMounts:
- name: shared-results
mountPath: /results
- name: profiles
mountPath: /opt/profiles
- name: thresholds
mountPath: /opt/thresholds
volumes:
- name: shared-results
emptyDir: {}
- name: profiles
configMap:
name: inspec-profiles-${CI_PIPELINE_ID}
- name: thresholds
configMap:
name: inspec-thresholds-${CI_PIPELINE_ID}
EOF
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --timeout=300s
# Save pod name for later stages
echo "SCANNER_POD=app-scanner-${CI_PIPELINE_ID}" >> deploy.env
- |
# Verify the pod is ready
kubectl get pod app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE}
artifacts:
reports:
dotenv: deploy.env
retrieve_results:
stage: scan
needs: [deploy_sidecar_pod]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Wait for scan to complete
echo "Waiting for scan to complete..."
until kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
echo "Scan in progress..."
sleep 5
done
# Retrieve scan results
echo "Retrieving scan results..."
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-results.json ./scan-results.json -c scanner
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-summary.md ./scan-summary.md -c scanner
# Check threshold result
if kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
THRESHOLD_RESULT=$(kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
fi
else
echo "Warning: Threshold result not found"
echo "THRESHOLD_PASSED=1" >> scan.env
fi
# Display summary in job output
echo "============= SCAN SUMMARY ============="
cat scan-summary.md
echo "========================================"
artifacts:
paths:
- scan-results.json
- scan-summary.md
reports:
dotenv: scan.env
generate_report:
stage: report
needs: [retrieve_results]
script:
- |
# Install SAF CLI if needed in this stage
which saf || npm install -g @mitre/saf
# Generate a more comprehensive report
saf view -i scan-results.json --output scan-report.html
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the artifacts.
EOF
artifacts:
paths:
- scan-report.html
- scan-report.md
when: always
cleanup:
stage: cleanup
needs: [retrieve_results]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${SCANNER_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-profiles-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-thresholds-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
|
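The sidecar pipeline above hinges on `shareProcessNamespace: true`: once the scanner and target containers share a PID namespace, the scanner can find the target's main process and read its filesystem through procfs. The core mechanism can be sketched outside Kubernetes, with a background `sleep` standing in for the target container's main process:

```shell
# Reach a process's root filesystem via /proc/<pid>/root -- the same path the
# sidecar passes to `cinc-auditor exec --target=...`.
sleep 300 &
TARGET_PID=$!
if [ -d "/proc/$TARGET_PID/root/" ]; then
  REACHABLE=yes
else
  REACHABLE=no
fi
echo "PID $TARGET_PID filesystem reachable: $REACHABLE"
kill "$TARGET_PID"
```

This also explains why the scanner greps for the target's command line (`sleep 3600`): it has no other handle on the target process inside the shared namespace.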
Sidecar with Services Pipeline
| stages:
- deploy
- scan
- report
- cleanup
variables:
SCANNER_NAMESPACE: "inspec-test"
TARGET_LABEL: "app=target-app"
THRESHOLD_VALUE: "70" # Minimum passing score (0-100)
TARGET_IMAGE: "registry.example.com/my-image:latest" # Target image to scan
# If scanning a distroless image, set this to true
IS_DISTROLESS: "false"
# Define a custom service image for CINC Auditor sidecar deployment
services:
- name: registry.example.com/cinc-auditor-scanner:latest
alias: cinc-scanner
entrypoint: ["sleep", "infinity"]
deploy_sidecar_pod:
stage: deploy
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Create the namespace if it doesn't exist
kubectl get namespace ${SCANNER_NAMESPACE} || kubectl create namespace ${SCANNER_NAMESPACE}
# Copy profile from within the service container
docker cp ${CINC_PROFILE_PATH} cinc-scanner:/tmp/profile
docker exec cinc-scanner ls -la /tmp/profile
# Create ConfigMap for CINC profile from the service container
PROFILE_FILE=$(docker exec cinc-scanner find /tmp/profile -name "*.rb" | head -1)
docker cp cinc-scanner:${PROFILE_FILE} ./container-baseline.rb
kubectl create configmap inspec-profiles-${CI_PIPELINE_ID} \
--from-file=container-baseline=container-baseline.rb \
-n ${SCANNER_NAMESPACE}
# Create ConfigMap for threshold
cat > threshold.yml << EOF
compliance:
min: ${THRESHOLD_VALUE}
failed:
critical:
max: 0
EOF
kubectl create configmap inspec-thresholds-${CI_PIPELINE_ID} \
--from-file=threshold.yml=threshold.yml \
-n ${SCANNER_NAMESPACE}
# Deploy the pod with sidecar scanner
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: app-scanner-${CI_PIPELINE_ID}
namespace: ${SCANNER_NAMESPACE}
labels:
app: scanner-pod
pipeline: "${CI_PIPELINE_ID}"
spec:
shareProcessNamespace: true # Enable shared process namespace
containers:
# Target container to be scanned
- name: target
image: ${TARGET_IMAGE}
command: ["sleep", "3600"]
# For distroless containers, adjust command accordingly
# CINC Auditor scanner sidecar
- name: scanner
image: registry.example.com/cinc-auditor-scanner:latest
command:
- "/bin/bash"
- "-c"
- |
# Wait for the main container to start
sleep 10
echo "Starting CINC Auditor scan..."
# Find the main process of the target container
TARGET_PID=\$(ps aux | grep -v grep | grep "sleep 3600" | head -1 | awk '{print \$2}')
if [ -z "\$TARGET_PID" ]; then
echo "ERROR: Could not find target process"
exit 1
fi
echo "Target process identified: PID \$TARGET_PID"
# Run CINC Auditor against the target filesystem
cd /
cinc-auditor exec /opt/profiles/container-baseline \
-b os=linux \
--target=/proc/\$TARGET_PID/root \
--reporter cli json:/results/scan-results.json
SCAN_EXIT_CODE=\$?
echo "Scan completed with exit code: \$SCAN_EXIT_CODE"
# Process results with SAF
if [ -f "/results/scan-results.json" ]; then
echo "Processing results with SAF CLI..."
saf summary --input /results/scan-results.json --output-md /results/scan-summary.md
# Validate against threshold
if [ -f "/opt/thresholds/threshold.yml" ]; then
echo "Validating against threshold..."
saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml
THRESHOLD_RESULT=\$?
echo "\$THRESHOLD_RESULT" > /results/threshold-result.txt
fi
fi
# Indicate scan is complete
touch /results/scan-complete
# Keep container running briefly to allow result retrieval
echo "Scan complete. Results available in /results directory."
sleep 300
volumeMounts:
- name: shared-results
mountPath: /results
- name: profiles
mountPath: /opt/profiles
- name: thresholds
mountPath: /opt/thresholds
volumes:
- name: shared-results
emptyDir: {}
- name: profiles
configMap:
name: inspec-profiles-${CI_PIPELINE_ID}
- name: thresholds
configMap:
name: inspec-thresholds-${CI_PIPELINE_ID}
EOF
# Wait for pod to be ready
kubectl wait --for=condition=ready pod/app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --timeout=300s
# Save pod name for later stages
echo "SCANNER_POD=app-scanner-${CI_PIPELINE_ID}" >> deploy.env
- |
# Verify the pod is ready
kubectl get pod app-scanner-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE}
artifacts:
reports:
dotenv: deploy.env
retrieve_results:
stage: scan
needs: [deploy_sidecar_pod]
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Wait for scan to complete
echo "Waiting for scan to complete..."
until kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
echo "Scan in progress..."
sleep 5
done
# Retrieve scan results using the service container
echo "Retrieving scan results..."
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-results.json /tmp/scan-results.json -c scanner
kubectl cp ${SCANNER_NAMESPACE}/${SCANNER_POD}:/results/scan-summary.md /tmp/scan-summary.md -c scanner
# Copy results to service container for processing
docker cp /tmp/scan-results.json cinc-scanner:/tmp/
docker cp /tmp/scan-summary.md cinc-scanner:/tmp/
# Process results in the service container
docker exec cinc-scanner bash -c "
# Generate normalized report
saf normalize -i /tmp/scan-results.json -o /tmp/normalized-results.json
# Additional report processing
saf view -i /tmp/scan-results.json --output /tmp/scan-report.html
"
# Copy processed results back
docker cp cinc-scanner:/tmp/normalized-results.json ./normalized-results.json
docker cp cinc-scanner:/tmp/scan-report.html ./scan-report.html
docker cp cinc-scanner:/tmp/scan-results.json ./scan-results.json
docker cp cinc-scanner:/tmp/scan-summary.md ./scan-summary.md
# Check threshold result
if kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
THRESHOLD_RESULT=$(kubectl exec ${SCANNER_POD} -n ${SCANNER_NAMESPACE} -c scanner -- cat /results/threshold-result.txt)
echo "THRESHOLD_PASSED=${THRESHOLD_RESULT}" >> scan.env
if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
echo "✅ Security scan passed threshold requirements"
else
echo "❌ Security scan failed to meet threshold requirements"
fi
else
echo "Warning: Threshold result not found"
echo "THRESHOLD_PASSED=1" >> scan.env
fi
# Display summary in job output
echo "============= SCAN SUMMARY ============="
cat scan-summary.md
echo "========================================"
artifacts:
paths:
- scan-results.json
- scan-summary.md
- normalized-results.json
- scan-report.html
reports:
dotenv: scan.env
# This example shows how to utilize the service container
# to generate specialized reports from the scan results
generate_report:
stage: report
needs: [retrieve_results]
script:
- |
# Use the service container to generate comprehensive reports
docker cp scan-results.json cinc-scanner:/tmp/
# Generate multiple report formats in the service container
docker exec cinc-scanner bash -c "
cd /tmp
# Generate HTML report
saf view -i scan-results.json --output enhanced-report.html
# Generate CSV report
saf generate -i scan-results.json -o csv > results.csv
# Generate Excel report
saf generate -i scan-results.json -o xlsx > results.xlsx
# Generate JUnit report for CI integration
saf generate -i scan-results.json -o junit > junit.xml
"
# Copy all reports back
docker cp cinc-scanner:/tmp/enhanced-report.html ./enhanced-report.html
docker cp cinc-scanner:/tmp/results.csv ./results.csv
docker cp cinc-scanner:/tmp/results.xlsx ./results.xlsx
docker cp cinc-scanner:/tmp/junit.xml ./junit.xml
# Create a simple markdown report for the MR
cat > scan-report.md << EOF
# Security Scan Results
## Summary
$(cat scan-summary.md)
## Threshold Check
$([ "${THRESHOLD_PASSED}" -eq 0 ] && echo "✅ **PASSED**" || echo "❌ **FAILED**")
Threshold: ${THRESHOLD_VALUE}%
## Details
For full results, see the enhanced report artifacts.
* HTML Report: enhanced-report.html
* CSV Report: results.csv
* Excel Report: results.xlsx
* JUnit Report: junit.xml
EOF
artifacts:
paths:
- enhanced-report.html
- results.csv
- results.xlsx
- junit.xml
- scan-report.md
reports:
junit: junit.xml
when: always
cleanup:
stage: cleanup
needs: [retrieve_results]
when: always # Run even if previous stages failed
script:
- echo "$KUBE_CONFIG" | base64 -d > kubeconfig.yaml
- export KUBECONFIG=kubeconfig.yaml
- |
# Delete all resources
kubectl delete pod/${SCANNER_POD} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-profiles-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
kubectl delete configmap/inspec-thresholds-${CI_PIPELINE_ID} -n ${SCANNER_NAMESPACE} --ignore-not-found
|
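The `threshold.yml` files in these pipelines encode two gates: overall compliance must reach `compliance.min`, and the number of failed critical controls must stay at or below `failed.critical.max`. A simplified sketch of that check (the real `saf threshold` evaluates full Heimdall Data Format results; the input format and the impact-to-critical mapping below are illustrative assumptions):

```shell
# One "status impact" pair per control; impact >= 0.9 is treated as critical
# here (an assumed mapping, for illustration only).
cat > /tmp/controls.txt <<'EOF'
passed 0.7
passed 0.5
failed 0.3
EOF
MIN_COMPLIANCE=70
MAX_FAILED_CRITICAL=0
COMPLIANCE=$(awk '{n++; if ($1=="passed") p++} END {printf "%d", (n ? 100*p/n : 0)}' /tmp/controls.txt)
FAILED_CRITICAL=$(awk '$1=="failed" && $2>=0.9 {c++} END {print c+0}' /tmp/controls.txt)
if [ "$COMPLIANCE" -ge "$MIN_COMPLIANCE" ] && [ "$FAILED_CRITICAL" -le "$MAX_FAILED_CRITICAL" ]; then
  RESULT=pass
else
  RESULT=fail
fi
echo "compliance=${COMPLIANCE}% failed_critical=${FAILED_CRITICAL} -> ${RESULT}"
```

With two of three controls passing (66%), the 70% gate fails even though no critical control failed, which is exactly the behavior the pipelines surface as `THRESHOLD_PASSED`.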
Usage
These pipeline examples are designed to be adapted to your specific environment. Each example includes detailed comments explaining the purpose of each step and how to customize it for your needs.
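One common way to adapt an example is to keep it as a standalone file in your repository and pull it in with GitLab's `include:`, overriding only the variables. A sketch (the file path and values are illustrative, not part of this repository):

```yaml
# .gitlab-ci.yml in your project
include:
  - local: 'ci/container-scan.gitlab-ci.yml'  # one of the examples above, saved locally

variables:
  SCANNER_NAMESPACE: "my-team-scans"
  TARGET_IMAGE: "registry.example.com/my-app:1.4.2"
  THRESHOLD_VALUE: "85"
```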
Strategic Priority
We strongly recommend the Kubernetes API Approach (standard GitLab CI example) for enterprise-grade container scanning. Our highest priority is enhancing the train-k8s-container plugin to support distroless containers. The other examples provide interim solutions until this enhancement is complete.
For guidance on which scanning approach fits each scenario, and for detailed GitLab integration instructions, see the GitLab Integration Guide.