GitHub Workflow Examples

This directory contains example GitHub Actions workflow files that demonstrate various container scanning approaches.

Available Examples

  • CI/CD Pipeline: Complete pipeline with build, deploy, and scan steps
  • Dynamic RBAC Scanning: Label-driven RBAC with a least-privilege model
  • Existing Cluster Scanning: Scanning pods in an existing cluster with externally provided credentials
  • Setup and Scan: Minikube setup and scanning, including distroless container support
  • Sidecar Scanner: Sidecar container approach using a shared process namespace
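
All of these workflows ultimately drive CINC Auditor through the train-k8s-container transport, which addresses a container as `k8s-container://NAMESPACE/POD/CONTAINER`. A minimal sketch of assembling that target URI (the namespace, pod, and container names below are placeholders):

```shell
#!/usr/bin/env bash
# Build a train-k8s-container target URI from its three components.
# All names here are illustrative, not taken from a real cluster.
build_scan_target() {
  local namespace="$1" pod="$2" container="$3"
  printf 'k8s-container://%s/%s/%s\n' "$namespace" "$pod" "$container"
}

build_scan_target "app-scan" "test-app-6d4cf56db6-xyz12" "app"
# prints: k8s-container://app-scan/test-app-6d4cf56db6-xyz12/app
```

Each example passes a URI of this shape to `cinc-auditor exec ... -t`.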

CI/CD Pipeline Example

name: CI/CD Pipeline with CINC Auditor Scanning

on:
  workflow_dispatch:
    inputs:
      image_tag:
        description: 'Tag for the container image'
        required: true
        default: 'latest'
      scan_namespace:
        description: 'Kubernetes namespace for scanning'
        required: true
        default: 'app-scan'
      threshold:
        description: 'Minimum passing score (0-100)'
        required: true
        default: '70'

jobs:
  build-deploy-scan:
    name: Build, Deploy and Scan Container
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Define test application
        run: |
          # Create a simple application for testing
          mkdir -p ./app

          # Create a minimal Dockerfile
          cat > ./app/Dockerfile << 'EOF'
          FROM alpine:latest

          # Add some packages to test vulnerability scanning
          RUN apk add --no-cache bash curl wget

          # Add a sample script
          COPY hello.sh /hello.sh
          RUN chmod +x /hello.sh

          # Set CMD
          CMD ["/bin/sh", "-c", "while true; do /hello.sh; sleep 300; done"]
          EOF

          # Create a simple script file
          cat > ./app/hello.sh << 'EOF'
          #!/bin/bash
          echo "Hello from test container! The time is $(date)"
          echo "Running as user: $(whoami)"
          echo "OS release: $(grep PRETTY_NAME /etc/os-release)"
          EOF

      - name: Set up Minikube
        uses: medyagh/setup-minikube@master
        with:
          driver: docker
          start-args: --nodes=2

      - name: Build container image
        run: |
          # Configure to use minikube's Docker daemon
          eval $(minikube docker-env)

          # Build the image
          docker build -t test-app:${{ github.event.inputs.image_tag }} ./app

          # List images to confirm
          docker images | grep test-app

      - name: Create Kubernetes deployment
        run: |
          # Create namespace
          kubectl create namespace ${{ github.event.inputs.scan_namespace }}

          # Create deployment
          cat <<EOF | kubectl apply -f -
          apiVersion: apps/v1
          kind: Deployment
          metadata:
            name: test-app
            namespace: ${{ github.event.inputs.scan_namespace }}
            labels:
              app: test-app
          spec:
            replicas: 1
            selector:
              matchLabels:
                app: test-app
            template:
              metadata:
                labels:
                  app: test-app
                  security-scan: "enabled"
              spec:
                containers:
                - name: app
                  image: test-app:${{ github.event.inputs.image_tag }}
                  imagePullPolicy: Never
          EOF

          # Wait for deployment to be ready
          kubectl -n ${{ github.event.inputs.scan_namespace }} rollout status deployment/test-app --timeout=120s

          # Get pod name
          POD_NAME=$(kubectl get pods -n ${{ github.event.inputs.scan_namespace }} -l app=test-app -o jsonpath='{.items[0].metadata.name}')
          echo "APP_POD=${POD_NAME}" >> $GITHUB_ENV

          # Show pods
          kubectl get pods -n ${{ github.event.inputs.scan_namespace }} --show-labels

      - name: Set up CINC Auditor
        run: |
          # Install CINC Auditor
          curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor

          # Install train-k8s-container plugin
          cinc-auditor plugin install train-k8s-container

          # Create a custom profile for application scanning
          mkdir -p ./app-scan-profile

          # Create profile files
          cat > ./app-scan-profile/inspec.yml << 'EOF'
          name: app-scan-profile
          title: Custom Application Container Scan
          maintainer: Security Team
          copyright: Security Team
          license: Apache-2.0
          summary: A custom profile for scanning containerized applications
          version: 0.1.0
          supports:
            platform: os
          EOF

          mkdir -p ./app-scan-profile/controls

          cat > ./app-scan-profile/controls/container_checks.rb << 'EOF'
          control 'container-1.1' do
            impact 0.7
            title 'Ensure container is not running as root'
            desc 'Containers should not run as root when possible'

            describe command('whoami') do
              its('stdout') { should_not cmp 'root' }
            end
          end

          control 'container-1.2' do
            impact 0.5
            title 'Check container OS version'
            desc 'Verify the container OS version'

            describe file('/etc/os-release') do
              it { should exist }
              its('content') { should include 'Alpine' }
            end
          end

          control 'container-1.3' do
            impact 0.3
            title 'Check for expected packages'
            desc 'Verify the packages added in the Dockerfile are present'

            describe package('curl') do
              it { should be_installed }
            end

            describe package('wget') do
              it { should be_installed }
            end
          end

          control 'container-1.4' do
            impact 0.7
            title 'Check for sensitive files'
            desc 'Container should not have sensitive files'

            describe file('/etc/shadow') do
              it { should exist }
              it { should_not be_readable.by('others') }
            end
          end
          EOF

      - name: Setup secure scanning infrastructure
        run: |
          # Create a unique ID for this run
          RUN_ID=$(date +%s)
          echo "RUN_ID=${RUN_ID}" >> $GITHUB_ENV

          # Create service account
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: cinc-scanner-${RUN_ID}
            namespace: ${{ github.event.inputs.scan_namespace }}
          EOF

          # Create role with label-based access
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: Role
          metadata:
            name: cinc-scanner-role-${RUN_ID}
            namespace: ${{ github.event.inputs.scan_namespace }}
          rules:
          - apiGroups: [""]
            resources: ["pods"]
            verbs: ["get", "list"]
          - apiGroups: [""]
            resources: ["pods/exec"]
            verbs: ["create"]
            # No resourceNames restriction - use label selector in code
          - apiGroups: [""]
            resources: ["pods/log"]
            verbs: ["get"]
            # No resourceNames restriction - use label selector in code
          EOF

          # Create rolebinding
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: cinc-scanner-binding-${RUN_ID}
            namespace: ${{ github.event.inputs.scan_namespace }}
          subjects:
          - kind: ServiceAccount
            name: cinc-scanner-${RUN_ID}
            namespace: ${{ github.event.inputs.scan_namespace }}
          roleRef:
            kind: Role
            name: cinc-scanner-role-${RUN_ID}
            apiGroup: rbac.authorization.k8s.io
          EOF

      - name: Setup SAF-CLI
        run: |
          # Node.js should already be available on GitHub-hosted runners
          node --version || echo "Node.js not installed"

          # Install SAF-CLI globally
          npm install -g @mitre/saf

          # Verify installation
          saf --version

      - name: Run security scan with CINC Auditor
        run: |
          # Generate token
          TOKEN=$(kubectl create token cinc-scanner-${RUN_ID} -n ${{ github.event.inputs.scan_namespace }} --duration=15m)
          SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
          CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

          # Create kubeconfig
          cat > scan-kubeconfig.yaml << EOF
          apiVersion: v1
          kind: Config
          preferences: {}
          clusters:
          - cluster:
              server: ${SERVER}
              certificate-authority-data: ${CA_DATA}
            name: scanner-cluster
          contexts:
          - context:
              cluster: scanner-cluster
              namespace: ${{ github.event.inputs.scan_namespace }}
              user: scanner-user
            name: scanner-context
          current-context: scanner-context
          users:
          - name: scanner-user
            user:
              token: ${TOKEN}
          EOF

          chmod 600 scan-kubeconfig.yaml

          # Verify we can access the pod with our labels
          POD_NAME=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n ${{ github.event.inputs.scan_namespace }} -l security-scan=enabled -o jsonpath='{.items[0].metadata.name}')
          if [ -z "$POD_NAME" ]; then
            echo "Error: No pod found with security-scan=enabled label"
            exit 1
          fi
          echo "Found pod to scan: ${POD_NAME}"

          # Run the CINC Auditor scan, capturing the exit code explicitly
          # (GitHub Actions runs bash with -e by default, so a bare failing
          # command would abort the step before $? could be read)
          SCAN_EXIT_CODE=0
          KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ./app-scan-profile \
            -t k8s-container://${{ github.event.inputs.scan_namespace }}/${POD_NAME}/app \
            --reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
          echo "CINC Auditor scan completed with exit code: ${SCAN_EXIT_CODE}"

          # Also run a standard profile for comparison
          echo "Running standard DevSec Linux Baseline for comparison:"
          KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec dev-sec/linux-baseline \
            -t k8s-container://${{ github.event.inputs.scan_namespace }}/${POD_NAME}/app \
            --reporter cli json:baseline-results.json || true

      - name: Generate scan summary with SAF-CLI
        run: |
          # Create summary report with SAF-CLI
          echo "Generating scan summary with SAF-CLI:"
          saf summary --input scan-results.json --output-md scan-summary.md

          # Display the summary in the logs
          cat scan-summary.md

          # Create a proper threshold file
          cat > threshold.yml << EOF
          compliance:
            min: ${{ github.event.inputs.threshold }}
          failed:
            critical:
              max: 0  # No critical failures allowed
          EOF

          # Apply threshold check, capturing the exit code without failing the step
          echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
          THRESHOLD_EXIT_CODE=0
          saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?

          if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
            echo "✅ Security scan passed threshold requirements"
          else
            echo "❌ Security scan failed to meet threshold requirements"
            # Uncomment to enforce the threshold as a quality gate
            # exit $THRESHOLD_EXIT_CODE
          fi

          # Generate summary for baseline results too
          echo "Generating baseline summary with SAF-CLI:"
          saf summary --input baseline-results.json --output-md baseline-summary.md

          # Create a combined summary for GitHub step summary
          echo "## Custom Application Profile Results" > $GITHUB_STEP_SUMMARY
          cat scan-summary.md >> $GITHUB_STEP_SUMMARY
          echo "## Linux Baseline Results" >> $GITHUB_STEP_SUMMARY
          cat baseline-summary.md >> $GITHUB_STEP_SUMMARY

      - name: Upload scan results
        uses: actions/upload-artifact@v4
        with:
          name: security-scan-results
          path: |
            scan-results.json
            baseline-results.json
            scan-summary.md
            baseline-summary.md

      - name: Cleanup resources
        if: always()
        run: |
          kubectl delete namespace ${{ github.event.inputs.scan_namespace }}
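
The threshold.yml gate above compares SAF-CLI's overall compliance score against a minimum percentage. As a rough illustration only (SAF-CLI derives the real score from scan-results.json; the pass/fail counts here are made up), the gate behaves like this sketch:

```shell
# Hypothetical sketch of a minimum-compliance gate. The score is treated
# as the share of passed controls; counts are invented for illustration.
check_threshold() {
  local passed="$1" failed="$2" min="$3"
  local compliance=$(( 100 * passed / (passed + failed) ))
  echo "compliance: ${compliance}%"
  [ "$compliance" -ge "$min" ]
}

check_threshold 9 3 70 && echo "PASS" || echo "FAIL"
# prints "compliance: 75%" then "PASS" (9 of 12 controls = 75% >= 70%)
```

With the same 70% minimum, 6 passed out of 12 (50%) would fail the gate.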

Dynamic RBAC Scanning Example

name: Dynamic RBAC Pod Scanning

on:
  workflow_dispatch:
    inputs:
      target_image:
        description: 'Target container image to scan'
        required: true
        default: 'busybox:latest'
      scan_label:
        description: 'Label to use for scanning'
        required: true
        default: 'scan-target=true'
      cinc_profile:
        description: 'CINC Auditor profile to run'
        required: true
        default: 'dev-sec/linux-baseline'
      threshold:
        description: 'Minimum passing score (0-100)'
        required: true
        default: '70'

jobs:
  dynamic-scan:
    name: Dynamic RBAC Pod Scanning
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup minikube
        id: minikube
        uses: medyagh/setup-minikube@master
        with:
          driver: docker
          start-args: --nodes=2

      - name: Set up CINC Auditor environment
        run: |
          # Install CINC Auditor
          curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor

          # Install train-k8s-container plugin
          cinc-auditor plugin install train-k8s-container

          # Install SAF-CLI
          npm install -g @mitre/saf

          # Verify installation
          cinc-auditor --version
          saf --version

      - name: Create test infrastructure
        run: |
          # Extract label key and value
          LABEL_KEY=$(echo "${{ github.event.inputs.scan_label }}" | cut -d= -f1)
          LABEL_VALUE=$(echo "${{ github.event.inputs.scan_label }}" | cut -d= -f2)

          # Create test namespace
          kubectl create namespace dynamic-scan

          # Create a unique identifier for this run
          RUN_ID="run-$(date +%s)"
          echo "RUN_ID=${RUN_ID}" >> $GITHUB_ENV

          # Create multiple test pods; only the first carries the scan label value
          for i in {1..3}; do
            if [ "$i" -eq 1 ]; then
              POD_LABEL_VALUE="${LABEL_VALUE}"
            else
              POD_LABEL_VALUE="false"
            fi
            cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: Pod
          metadata:
            name: pod-${i}-${RUN_ID}
            namespace: dynamic-scan
            labels:
              app: test-pod-${i}
              ${LABEL_KEY}: "${POD_LABEL_VALUE}"
          spec:
            containers:
            - name: container
              image: ${{ github.event.inputs.target_image }}
              command: ["sleep", "infinity"]
          EOF
          done

          # Wait for pods to be running
          kubectl wait --for=condition=ready pod -l app=test-pod-1 -n dynamic-scan --timeout=120s

          # Get the name of the pod with our scan label
          TARGET_POD=$(kubectl get pods -n dynamic-scan -l ${LABEL_KEY}=${LABEL_VALUE} -o jsonpath='{.items[0].metadata.name}')
          if [ -z "$TARGET_POD" ]; then
            echo "Error: No pod found with label ${LABEL_KEY}=${LABEL_VALUE}"
            exit 1
          fi
          echo "TARGET_POD=${TARGET_POD}" >> $GITHUB_ENV

          # Show all pods in the namespace
          kubectl get pods -n dynamic-scan --show-labels

      - name: Set up label-based RBAC
        run: |
          # Extract label for RBAC
          LABEL_SELECTOR="${{ github.event.inputs.scan_label }}"

          # Create service account
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: scanner-sa-${RUN_ID}
            namespace: dynamic-scan
          EOF

          # Create role restricted to the labeled target pod. RBAC cannot
          # filter by label, so exec/log access is pinned to the pod name
          # resolved from the label in the previous step.
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: Role
          metadata:
            name: scanner-role-${RUN_ID}
            namespace: dynamic-scan
          rules:
          - apiGroups: [""]
            resources: ["pods"]
            verbs: ["get", "list"]
          - apiGroups: [""]
            resources: ["pods/exec"]
            verbs: ["create"]
            resourceNames: ["${TARGET_POD}"]
          - apiGroups: [""]
            resources: ["pods/log"]
            verbs: ["get"]
            resourceNames: ["${TARGET_POD}"]
          EOF

          # Create role binding
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: scanner-binding-${RUN_ID}
            namespace: dynamic-scan
          subjects:
          - kind: ServiceAccount
            name: scanner-sa-${RUN_ID}
            namespace: dynamic-scan
          roleRef:
            kind: Role
            name: scanner-role-${RUN_ID}
            apiGroup: rbac.authorization.k8s.io
          EOF

      - name: Run scan on labeled pod
        run: |
          # Generate token
          TOKEN=$(kubectl create token scanner-sa-${RUN_ID} -n dynamic-scan --duration=15m)
          SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
          CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

          # Create kubeconfig
          cat > scan-kubeconfig.yaml << EOF
          apiVersion: v1
          kind: Config
          preferences: {}
          clusters:
          - cluster:
              server: ${SERVER}
              certificate-authority-data: ${CA_DATA}
            name: scanner-cluster
          contexts:
          - context:
              cluster: scanner-cluster
              namespace: dynamic-scan
              user: scanner-user
            name: scanner-context
          current-context: scanner-context
          users:
          - name: scanner-user
            user:
              token: ${TOKEN}
          EOF

          chmod 600 scan-kubeconfig.yaml

          # Find the target pod by label
          LABEL_SELECTOR="${{ github.event.inputs.scan_label }}"
          echo "Looking for pods with label: ${LABEL_SELECTOR}"
          TARGET_POD=$(KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n dynamic-scan -l ${LABEL_SELECTOR} -o jsonpath='{.items[0].metadata.name}')
          if [ -z "$TARGET_POD" ]; then
            echo "Error: No pod found with label ${LABEL_SELECTOR} using restricted access"
            exit 1
          fi
          echo "Found target pod: ${TARGET_POD}"

          # Get container name
          CONTAINER_NAME=$(kubectl get pod ${TARGET_POD} -n dynamic-scan -o jsonpath='{.spec.containers[0].name}')
          echo "Container name: ${CONTAINER_NAME}"

          # Test access to pod
          echo "Testing pod access with restricted token:"
          KUBECONFIG=scan-kubeconfig.yaml kubectl get pods -n dynamic-scan

          # Run CINC Auditor scan, capturing the exit code explicitly
          # (GitHub Actions runs bash with -e by default)
          echo "Running CINC Auditor scan on dynamic-scan/${TARGET_POD}/${CONTAINER_NAME}"
          SCAN_EXIT_CODE=0
          KUBECONFIG=scan-kubeconfig.yaml cinc-auditor exec ${{ github.event.inputs.cinc_profile }} \
            -t k8s-container://dynamic-scan/${TARGET_POD}/${CONTAINER_NAME} \
            --reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
          echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"

          # Process results with SAF-CLI
          echo "Generating scan summary with SAF-CLI:"
          saf summary --input scan-results.json --output-md scan-summary.md

          # Display the summary in the logs
          cat scan-summary.md

          # Add to GitHub step summary
          echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
          cat scan-summary.md >> $GITHUB_STEP_SUMMARY

          # Create a proper threshold file
          cat > threshold.yml << EOF
          compliance:
            min: ${{ github.event.inputs.threshold }}
          failed:
            critical:
              max: 0  # No critical failures allowed
          EOF

          # Apply threshold check, capturing the exit code without failing the step
          echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
          THRESHOLD_EXIT_CODE=0
          saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?

          if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
            echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
          else
            echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
            # Uncomment to enforce the threshold as a quality gate
            # exit $THRESHOLD_EXIT_CODE
          fi

      - name: Verify RBAC restrictions
        run: |
          # Generate token for scanning
          TOKEN=$(kubectl create token scanner-sa-${RUN_ID} -n dynamic-scan --duration=5m)
          SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
          CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

          # Create kubeconfig
          cat > test-kubeconfig.yaml << EOF
          apiVersion: v1
          kind: Config
          preferences: {}
          clusters:
          - cluster:
              server: ${SERVER}
              certificate-authority-data: ${CA_DATA}
            name: scanner-cluster
          contexts:
          - context:
              cluster: scanner-cluster
              namespace: dynamic-scan
              user: scanner-user
            name: scanner-context
          current-context: scanner-context
          users:
          - name: scanner-user
            user:
              token: ${TOKEN}
          EOF

          echo "## RBAC Security Verification" >> $GITHUB_STEP_SUMMARY

          # Check what we CAN do
          echo "Verifying what we CAN do with restricted RBAC:" | tee -a $GITHUB_STEP_SUMMARY
          echo "Can list pods:" | tee -a $GITHUB_STEP_SUMMARY
          KUBECONFIG=test-kubeconfig.yaml kubectl get pods -n dynamic-scan > /dev/null && 
            echo "✅ Can list pods" | tee -a $GITHUB_STEP_SUMMARY || 
            echo "❌ Cannot list pods" | tee -a $GITHUB_STEP_SUMMARY

          echo "Can exec into labeled pod:" | tee -a $GITHUB_STEP_SUMMARY
          KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods/${TARGET_POD} --subresource=exec -n dynamic-scan &&
            echo "✅ Can exec into target pod" | tee -a $GITHUB_STEP_SUMMARY || 
            echo "❌ Cannot exec into target pod" | tee -a $GITHUB_STEP_SUMMARY

          # Check what we CANNOT do
          echo "Verifying what we CANNOT do with restricted RBAC:" | tee -a $GITHUB_STEP_SUMMARY
          echo "Cannot create pods:" | tee -a $GITHUB_STEP_SUMMARY
          KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods -n dynamic-scan && 
            echo "❌ Security issue: Can create pods" | tee -a $GITHUB_STEP_SUMMARY || 
            echo "✅ Cannot create pods (expected)" | tee -a $GITHUB_STEP_SUMMARY

          echo "Cannot delete pods:" | tee -a $GITHUB_STEP_SUMMARY
          KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i delete pods -n dynamic-scan && 
            echo "❌ Security issue: Can delete pods" | tee -a $GITHUB_STEP_SUMMARY || 
            echo "✅ Cannot delete pods (expected)" | tee -a $GITHUB_STEP_SUMMARY

          # For non-labeled pods, we should be able to list them but not exec into them
          OTHER_POD=$(kubectl get pods -n dynamic-scan -l app=test-pod-2 -o jsonpath='{.items[0].metadata.name}')
          echo "Cannot exec into non-labeled pod:" | tee -a $GITHUB_STEP_SUMMARY
          KUBECONFIG=test-kubeconfig.yaml kubectl auth can-i create pods/${OTHER_POD} --subresource=exec -n dynamic-scan && 
            echo "❌ Security issue: Can exec into non-target pod" | tee -a $GITHUB_STEP_SUMMARY || 
            echo "✅ Cannot exec into non-target pod (expected)" | tee -a $GITHUB_STEP_SUMMARY

      - name: Upload CINC results
        uses: actions/upload-artifact@v4
        with:
          name: cinc-scan-results
          path: |
            scan-results.json
            scan-summary.md

      - name: Cleanup
        if: always()
        run: |
          kubectl delete namespace dynamic-scan
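
The dynamic workflow splits the `scan_label` input (e.g. `scan-target=true`) into a key and a value with `cut -d= -f1` and `-f2`. The same parsing can be sketched standalone with bash parameter expansion, which is equivalent for a selector containing a single `=`:

```shell
# Split a key=value label selector, as the workflow does with cut.
# The selector value is an example, matching the scan_label default.
selector="scan-target=true"
label_key="${selector%%=*}"    # strip the longest suffix starting at '='
label_value="${selector#*=}"   # strip the shortest prefix ending at '='
echo "key=${label_key} value=${label_value}"
# prints: key=scan-target value=true
```

The resulting key/value pair drives both the pod labels and the `-l` selectors used when locating the scan target.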

Existing Cluster Scanning Example

name: Existing Cluster Container Scanning

on:
  workflow_dispatch:
    inputs:
      target_namespace:
        description: 'Namespace where target pods are deployed'
        required: true
        default: 'default'
      target_label:
        description: 'Label selector for target pods (app=myapp)'
        required: true
        default: 'scan-target=true'
      cinc_profile:
        description: 'CINC Auditor profile to run'
        required: true
        default: 'dev-sec/linux-baseline'
      threshold:
        description: 'Minimum passing score (0-100)'
        required: true
        default: '70'

jobs:
  scan-existing-cluster:
    name: Scan Containers in Existing Cluster
    runs-on: ubuntu-latest

    env:
      SCAN_NAMESPACE: ${{ github.event.inputs.target_namespace }}
      LABEL_SELECTOR: ${{ github.event.inputs.target_label }}
      CINC_PROFILE: ${{ github.event.inputs.cinc_profile }}
      THRESHOLD_VALUE: ${{ github.event.inputs.threshold }}

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Set up kubectl
        uses: azure/setup-kubectl@v3

      - name: Configure Kubernetes cluster
        run: |
          # Set up kubeconfig using supplied cluster credentials
          echo "${{ secrets.KUBE_CONFIG }}" > kubeconfig.yaml
          chmod 600 kubeconfig.yaml
          export KUBECONFIG=kubeconfig.yaml

          # Verify connection and target namespace
          kubectl get namespace ${SCAN_NAMESPACE} || { echo "Namespace ${SCAN_NAMESPACE} does not exist"; exit 1; }

          # Find pods matching the label selector
          TARGET_PODS=$(kubectl get pods -n ${SCAN_NAMESPACE} -l ${LABEL_SELECTOR} -o jsonpath='{.items[*].metadata.name}')
          if [ -z "$TARGET_PODS" ]; then
            echo "No pods found matching label: ${LABEL_SELECTOR} in namespace ${SCAN_NAMESPACE}"
            exit 1
          fi

          # Count and list found pods
          POD_COUNT=$(echo $TARGET_PODS | wc -w)
          echo "Found ${POD_COUNT} pods to scan:"
          kubectl get pods -n ${SCAN_NAMESPACE} -l ${LABEL_SELECTOR} --show-labels

          # Save the first pod as our primary target
          PRIMARY_POD=$(echo $TARGET_PODS | cut -d' ' -f1)
          echo "Primary target pod: ${PRIMARY_POD}"
          echo "PRIMARY_POD=${PRIMARY_POD}" >> $GITHUB_ENV

          # Get container name for the primary pod
          PRIMARY_CONTAINER=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.spec.containers[0].name}')
          echo "Primary container: ${PRIMARY_CONTAINER}"
          echo "PRIMARY_CONTAINER=${PRIMARY_CONTAINER}" >> $GITHUB_ENV

          # Check if pod has profile annotation
          PROFILE_ANNOTATION=$(kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.metadata.annotations.scan-profile}' 2>/dev/null || echo "")
          if [ -n "$PROFILE_ANNOTATION" ]; then
            echo "Found profile annotation: ${PROFILE_ANNOTATION}"
            echo "CINC_PROFILE=${PROFILE_ANNOTATION}" >> $GITHUB_ENV
          fi

      - name: Create dynamic RBAC for scanning
        run: |
          export KUBECONFIG=kubeconfig.yaml

          # Create a unique ID for this run
          RUN_ID="gh-${{ github.run_id }}-${{ github.run_attempt }}"
          echo "RUN_ID=${RUN_ID}" >> $GITHUB_ENV

          # Create service account for scanning
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: scanner-${RUN_ID}
            namespace: ${SCAN_NAMESPACE}
            labels:
              app: security-scanner
              run-id: "${RUN_ID}"
          EOF

          # Create role with least privilege
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: Role
          metadata:
            name: scanner-role-${RUN_ID}
            namespace: ${SCAN_NAMESPACE}
            labels:
              app: security-scanner
              run-id: "${RUN_ID}"
          rules:
          - apiGroups: [""]
            resources: ["pods"]
            verbs: ["get", "list"]
          - apiGroups: [""]
            resources: ["pods/exec"]
            verbs: ["create"]
            resourceNames: ["${PRIMARY_POD}"]
          - apiGroups: [""]
            resources: ["pods/log"]
            verbs: ["get"]
            resourceNames: ["${PRIMARY_POD}"]
          EOF

          # Create role binding
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: scanner-binding-${RUN_ID}
            namespace: ${SCAN_NAMESPACE}
            labels:
              app: security-scanner
              run-id: "${RUN_ID}"
          subjects:
          - kind: ServiceAccount
            name: scanner-${RUN_ID}
            namespace: ${SCAN_NAMESPACE}
          roleRef:
            kind: Role
            name: scanner-role-${RUN_ID}
            apiGroup: rbac.authorization.k8s.io
          EOF

          # Create token for service account (15 minute duration)
          TOKEN=$(kubectl create token scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --duration=15m)
          SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
          CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

          # Create restricted kubeconfig
          cat > scanner-kubeconfig.yaml << EOF
          apiVersion: v1
          kind: Config
          preferences: {}
          clusters:
          - cluster:
              server: ${SERVER}
              certificate-authority-data: ${CA_DATA}
            name: scanner-cluster
          contexts:
          - context:
              cluster: scanner-cluster
              namespace: ${SCAN_NAMESPACE}
              user: scanner-user
            name: scanner-context
          current-context: scanner-context
          users:
          - name: scanner-user
            user:
              token: ${TOKEN}
          EOF

          chmod 600 scanner-kubeconfig.yaml

      - name: Set up CINC Auditor and SAF-CLI
        run: |
          # Install CINC Auditor
          curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor

          # Install train-k8s-container plugin
          cinc-auditor plugin install train-k8s-container

          # Install SAF-CLI
          npm install -g @mitre/saf

          # Verify installations
          cinc-auditor --version
          saf --version

      - name: Run security scan with restricted access
        run: |
          # Verify access with restricted token
          echo "Verifying restricted access:"
          KUBECONFIG=scanner-kubeconfig.yaml kubectl get pods -n ${SCAN_NAMESPACE} -l ${LABEL_SELECTOR}

          # Verify we can access the target pod
          ACCESSIBLE_POD=$(KUBECONFIG=scanner-kubeconfig.yaml kubectl get pod ${PRIMARY_POD} -n ${SCAN_NAMESPACE} -o jsonpath='{.metadata.name}' 2>/dev/null || echo "")
          if [ -z "$ACCESSIBLE_POD" ]; then
            echo "Error: Cannot access pod ${PRIMARY_POD} with restricted token"
            exit 1
          fi

          # Run CINC Auditor scan (capture the exit code without tripping the
          # shell's default `set -e` behavior, which would abort the step)
          echo "Running CINC Auditor scan on ${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER}"
          SCAN_EXIT_CODE=0
          KUBECONFIG=scanner-kubeconfig.yaml cinc-auditor exec ${CINC_PROFILE} \
            -t k8s-container://${SCAN_NAMESPACE}/${PRIMARY_POD}/${PRIMARY_CONTAINER} \
            --reporter cli json:scan-results.json || SCAN_EXIT_CODE=$?
          echo "Scan completed with exit code: ${SCAN_EXIT_CODE}"

          # Process results with SAF-CLI
          echo "Generating scan summary with SAF-CLI:"
          saf summary --input scan-results.json --output-md scan-summary.md

          # Display the summary in the logs
          cat scan-summary.md

          # Add to GitHub step summary
          echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
          cat scan-summary.md >> $GITHUB_STEP_SUMMARY

          # Create a threshold file
          cat > threshold.yml << EOF
          compliance:
            min: ${THRESHOLD_VALUE}
          failed:
            critical:
              max: 0  # No critical failures allowed
          EOF

          # Apply threshold check (capture the exit code without tripping `set -e`)
          echo "Checking against threshold with min compliance of ${THRESHOLD_VALUE}%:"
          THRESHOLD_EXIT_CODE=0
          saf threshold -i scan-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?

          if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
            echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
          else
            echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
            # Uncomment to enforce the threshold as a quality gate
            # exit $THRESHOLD_EXIT_CODE
          fi

          # Generate HTML report
          saf view -i scan-results.json --output scan-report.html

      - name: Upload scan results
        uses: actions/upload-artifact@v4
        with:
          name: security-scan-results
          path: |
            scan-results.json
            scan-summary.md
            scan-report.html

      - name: Cleanup RBAC resources
        if: always()
        run: |
          export KUBECONFIG=kubeconfig.yaml

          # Delete role binding
          kubectl delete rolebinding scanner-binding-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found

          # Delete role
          kubectl delete role scanner-role-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found

          # Delete service account
          kubectl delete serviceaccount scanner-${RUN_ID} -n ${SCAN_NAMESPACE} --ignore-not-found

          echo "RBAC resources cleaned up"
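
The RBAC resources deleted above are created per run, scoped to a single namespace and a single pod. The Role bound by `scanner-binding-${RUN_ID}` is not shown in this excerpt, but a least-privilege shape for it would mirror the static `inspec-container-role` in the Setup and Scan example below; treat this as a sketch, with `${RUN_ID}`, `${SCAN_NAMESPACE}`, and `${PRIMARY_POD}` being the workflow's own variables:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: scanner-role-${RUN_ID}
  namespace: ${SCAN_NAMESPACE}
rules:
# Listing pods is needed to resolve the scan target
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
# Exec is limited to the one pod being scanned
- apiGroups: [""]
  resources: ["pods/exec"]
  verbs: ["create"]
  resourceNames: ["${PRIMARY_POD}"]
# Log access supports debugging failed scans
- apiGroups: [""]
  resources: ["pods/log"]
  verbs: ["get"]
  resourceNames: ["${PRIMARY_POD}"]
```

Because the Role, RoleBinding, and ServiceAccount all carry `${RUN_ID}` in their names, concurrent workflow runs cannot interfere with each other's credentials, and the `if: always()` cleanup step removes them even when the scan fails.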

Setup and Scan Example

name: Setup Minikube and Run CINC Auditor Scan

on:
  workflow_dispatch:
    inputs:
      minikube_version:
        description: 'Minikube version to use'
        required: true
        default: 'v1.32.0'
      kubernetes_version:
        description: 'Kubernetes version to use'
        required: true
        default: 'v1.28.3'
      cinc_profile:
        description: 'CINC Auditor profile to run'
        required: true
        default: 'dev-sec/linux-baseline'
      threshold:
        description: 'Minimum passing score (0-100)'
        required: true
        default: '70'

jobs:
  setup-and-scan:
    name: Setup minikube and run CINC Auditor scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup minikube
        id: minikube
        uses: medyagh/setup-minikube@master
        with:
          minikube-version: ${{ github.event.inputs.minikube_version }}
          kubernetes-version: ${{ github.event.inputs.kubernetes_version }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          driver: docker
          start-args: --nodes=2

      - name: Get cluster status
        run: |
          kubectl get nodes
          minikube status

      - name: Set up CINC Auditor environment
        run: |
          # Install CINC Auditor
          curl -L https://omnitruck.cinc.sh/install.sh | sudo bash -s -- -P cinc-auditor

          # Install train-k8s-container plugin
          cinc-auditor plugin install train-k8s-container

          # Install SAF-CLI for result processing
          npm install -g @mitre/saf

          # Verify installation
          cinc-auditor --version
          cinc-auditor plugin list
          saf --version

      - name: Create namespace and test pod
        run: |
          # Create namespace
          kubectl create namespace inspec-test

          # Create test pod
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: Pod
          metadata:
            name: inspec-target
            namespace: inspec-test
            labels:
              app: inspec-target
              scan-target: "true"
          spec:
            containers:
            - name: busybox
              image: busybox:latest
              command: ["sleep", "infinity"]
          EOF

          # Wait for pod to be running
          kubectl wait --for=condition=ready pod/inspec-target -n inspec-test --timeout=120s

          # Verify pod is running
          kubectl get pods -n inspec-test

      - name: Set up RBAC configuration
        run: |
          # Create service account
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: ServiceAccount
          metadata:
            name: inspec-scanner
            namespace: inspec-test
          EOF

          # Create role
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: Role
          metadata:
            name: inspec-container-role
            namespace: inspec-test
          rules:
          - apiGroups: [""]
            resources: ["pods"]
            verbs: ["get", "list"]
          - apiGroups: [""]
            resources: ["pods/exec"]
            verbs: ["create"]
            resourceNames: ["inspec-target"]
          - apiGroups: [""]
            resources: ["pods/log"]
            verbs: ["get"]
            resourceNames: ["inspec-target"]
          EOF

          # Create role binding
          cat <<EOF | kubectl apply -f -
          apiVersion: rbac.authorization.k8s.io/v1
          kind: RoleBinding
          metadata:
            name: inspec-container-rolebinding
            namespace: inspec-test
          subjects:
          - kind: ServiceAccount
            name: inspec-scanner
            namespace: inspec-test
          roleRef:
            kind: Role
            name: inspec-container-role
            apiGroup: rbac.authorization.k8s.io
          EOF

          # Verify RBAC setup
          kubectl get serviceaccount,role,rolebinding -n inspec-test

      - name: Generate restricted kubeconfig
        run: |
          # Get token
          TOKEN=$(kubectl create token inspec-scanner -n inspec-test --duration=15m)

          # Get cluster information
          SERVER=$(kubectl config view --minify --output=jsonpath='{.clusters[0].cluster.server}')
          CA_DATA=$(kubectl config view --raw --minify --flatten -o jsonpath='{.clusters[0].cluster.certificate-authority-data}')

          # Create kubeconfig
          cat > restricted-kubeconfig.yaml << EOF
          apiVersion: v1
          kind: Config
          preferences: {}
          clusters:
          - cluster:
              server: ${SERVER}
              certificate-authority-data: ${CA_DATA}
            name: scanner-cluster
          contexts:
          - context:
              cluster: scanner-cluster
              namespace: inspec-test
              user: scanner-user
            name: scanner-context
          current-context: scanner-context
          users:
          - name: scanner-user
            user:
              token: ${TOKEN}
          EOF

          # Set proper permissions
          chmod 600 restricted-kubeconfig.yaml

          # Test the kubeconfig
          KUBECONFIG=restricted-kubeconfig.yaml kubectl get pods -n inspec-test

      - name: Run CINC Auditor scan with restricted access
        run: |
          # Resolve the CINC profile to run
          if [[ "${{ github.event.inputs.cinc_profile }}" == http* ]]; then
            # If it's a URL, use it directly
            PROFILE="${{ github.event.inputs.cinc_profile }}"
          elif [[ "${{ github.event.inputs.cinc_profile }}" == */* ]]; then
            # If it's a profile from Chef Supermarket (e.g., dev-sec/linux-baseline)
            PROFILE="${{ github.event.inputs.cinc_profile }}"
          else
            # If it's a local path
            PROFILE="./${{ github.event.inputs.cinc_profile }}"
          fi

          # Run CINC Auditor with the train-k8s-container transport (capture
          # the exit code without tripping the shell's default `set -e`)
          CINC_EXIT_CODE=0
          KUBECONFIG=restricted-kubeconfig.yaml cinc-auditor exec ${PROFILE} \
            -t k8s-container://inspec-test/inspec-target/busybox \
            --reporter cli json:cinc-results.json || CINC_EXIT_CODE=$?
          echo "CINC Auditor scan completed with exit code: ${CINC_EXIT_CODE}"

      - name: Process results with SAF-CLI
        run: |
          # Generate summary report with SAF-CLI
          echo "Generating scan summary with SAF-CLI:"
          saf summary --input cinc-results.json --output-md scan-summary.md

          # Display the summary in the logs
          cat scan-summary.md

          # Add to GitHub step summary
          echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
          cat scan-summary.md >> $GITHUB_STEP_SUMMARY

          # Create a threshold file (indented so it stays inside the YAML block scalar)
          cat > threshold.yml << EOF
          compliance:
            min: ${{ github.event.inputs.threshold }}
          failed:
            critical:
              max: 0  # No critical failures allowed
          EOF

          # Apply threshold check (capture the exit code without tripping `set -e`)
          echo "Checking against threshold with min compliance of ${{ github.event.inputs.threshold }}%:"
          THRESHOLD_EXIT_CODE=0
          saf threshold -i cinc-results.json -t threshold.yml || THRESHOLD_EXIT_CODE=$?

          if [ $THRESHOLD_EXIT_CODE -eq 0 ]; then
            echo "✅ Security scan passed threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
          else
            echo "❌ Security scan failed to meet threshold requirements" | tee -a $GITHUB_STEP_SUMMARY
            # Uncomment to enforce the threshold as a quality gate
            # exit $THRESHOLD_EXIT_CODE
          fi

      - name: Upload CINC Auditor results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: cinc-results
          path: |
            cinc-results.json
            scan-summary.md

      - name: Cleanup resources
        if: always()
        run: |
          kubectl delete namespace inspec-test
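
The `saf threshold` gate above evaluates the full HDF results against `threshold.yml`. For intuition about what the `compliance: min` check expresses, the arithmetic can be sketched with plain shell against a synthetic results file. The JSON here is a hand-made stand-in with the same `status` fields, not real CINC Auditor output, and `saf threshold` applies more nuanced scoring (impact weighting, review states) in practice:

```shell
# Synthetic HDF-like results: one passed check, one failed check
cat > sample-results.json << 'EOF'
{"profiles":[{"controls":[
  {"id":"c-1","results":[{"status":"passed"}]},
  {"id":"c-2","results":[{"status":"failed"}]}
]}]}
EOF

# Count result statuses; grep -o emits one line per occurrence,
# so piping to wc -l counts matches rather than matching lines
PASSED=$(grep -o '"status":"passed"' sample-results.json | wc -l)
TOTAL=$(grep -o '"status":"' sample-results.json | wc -l)

# Integer compliance percentage, mirroring the `compliance: min` idea
SCORE=$(( PASSED * 100 / TOTAL ))
echo "Compliance score: ${SCORE}%"

THRESHOLD=70
if [ "$SCORE" -ge "$THRESHOLD" ]; then
  echo "Threshold met"
else
  echo "Threshold not met"
fi
```

With one of two results passing, the score is 50% and the 70% gate fails, which is exactly the condition the workflow reports in the step summary.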

Sidecar Scanner Example

name: CINC Auditor Sidecar Container Scan

on:
  workflow_dispatch:
    inputs:
      kubernetes_version:
        description: 'Kubernetes version to use'
        required: true
        default: 'v1.28.3'
      target_image:
        description: 'Target container image to scan'
        required: true
        default: 'busybox:latest'
      is_distroless:
        description: 'Is the target a distroless container?'
        required: true
        default: 'false'
        type: boolean
      threshold:
        description: 'Minimum passing score (0-100)'
        required: true
        default: '70'

jobs:
  sidecar-scan:
    name: Sidecar Container Scan
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Setup Kubernetes
        id: kind
        uses: helm/kind-action@v1.8.0
        with:
          version: v0.20.0
          cluster_name: scan-cluster
          config: |
            kind: Cluster
            apiVersion: kind.x-k8s.io/v1alpha4
            nodes:
            - role: control-plane
              kubeadmConfigPatches:
                - |
                  kind: InitConfiguration
                  nodeRegistration:
                    kubeletExtraArgs:
                      # Ephemeral containers are GA and enabled by default since
                      # Kubernetes v1.25; the old EphemeralContainers feature gate
                      # was removed in v1.27 and would fail kubelet startup on v1.28.x
                      "system-reserved": "cpu=500m,memory=500Mi"
              image: kindest/node:${{ github.event.inputs.kubernetes_version }}

      - name: Get cluster status
        run: |
          kubectl get nodes
          kubectl cluster-info

      - name: Build CINC Auditor Scanner container
        run: |
          # Create a Dockerfile for the CINC Auditor scanner container
          cat > Dockerfile.scanner << EOF
          FROM ruby:3.0-slim

          # Install dependencies
          RUN apt-get update && apt-get install -y \
              curl \
              gnupg \
              procps \
              nodejs \
              npm \
              && rm -rf /var/lib/apt/lists/*

          # Install CINC Auditor
          RUN curl -L https://omnitruck.cinc.sh/install.sh | bash -s -- -P cinc-auditor

          # Install SAF CLI
          RUN npm install -g @mitre/saf

          # Copy profiles
          COPY examples/cinc-profiles/container-baseline /opt/profiles/container-baseline

          # Verify installation
          RUN cinc-auditor --version && \
              saf --version

          # Create a simple script to scan in sidecar mode
          RUN echo '#!/bin/bash \n\
          TARGET_PID=\$(ps aux | grep -v grep | grep "\$1" | head -1 | awk "{print \\\$2}") \n\
          echo "Target process identified: PID \$TARGET_PID" \n\
          \n\
          cinc-auditor exec /opt/profiles/\$2 \\\n\
            -b os=linux \\\n\
            --target=/proc/\$TARGET_PID/root \\\n\
            --reporter cli json:/results/scan-results.json \n\
          \n\
          saf summary --input /results/scan-results.json --output-md /results/scan-summary.md \n\
          \n\
          saf threshold -i /results/scan-results.json -t /opt/thresholds/threshold.yml \n\
          echo \$? > /results/threshold-result.txt \n\
          \n\
          touch /results/scan-complete \n\
          ' > /usr/local/bin/run-scanner

          RUN chmod +x /usr/local/bin/run-scanner

          # Default command
          CMD ["/bin/bash"]
          EOF

          # Build the scanner image
          docker build -t cinc-scanner:latest -f Dockerfile.scanner .

          # Load the image into kind
          kind load docker-image cinc-scanner:latest --name scan-cluster

      - name: Create namespace and prepare environment
        run: |
          # Create namespace
          kubectl create namespace inspec-test

          # Create threshold ConfigMap
          cat > threshold.yml << EOF
          compliance:
            min: ${{ github.event.inputs.threshold }}
          failed:
            critical:
              max: 0  # No critical failures allowed
          EOF

          kubectl create configmap inspec-thresholds \
            --from-file=threshold.yml=threshold.yml \
            -n inspec-test

      - name: Deploy pod with scanner sidecar
        run: |
          # Create the pod with shared process namespace
          cat <<EOF | kubectl apply -f -
          apiVersion: v1
          kind: Pod
          metadata:
            name: app-scanner
            namespace: inspec-test
            labels:
              app: scanner-pod
          spec:
            shareProcessNamespace: true  # Enable shared process namespace
            containers:
            # Target container to be scanned
            - name: target
              image: ${{ github.event.inputs.target_image }}
              command: ["sleep", "3600"]

            # CINC Auditor scanner sidecar
            - name: scanner
              image: cinc-scanner:latest
              command: 
              - "/bin/bash"
              - "-c"
              - |
                # Wait for the main container to start
                sleep 10

                echo "Starting CINC Auditor scan..."

                # Use the script to find process and run scanner
                run-scanner "sleep 3600" "container-baseline" 

                # Keep container running briefly to allow result retrieval
                echo "Scan complete. Results available in /results directory."
                sleep 300
              volumeMounts:
              - name: shared-results
                mountPath: /results
              - name: thresholds
                mountPath: /opt/thresholds

            volumes:
            - name: shared-results
              emptyDir: {}
            - name: thresholds
              configMap:
                name: inspec-thresholds
          EOF

          # Wait for pod to be ready
          kubectl wait --for=condition=ready pod/app-scanner -n inspec-test --timeout=300s

          # Verify the pod is ready
          kubectl get pod app-scanner -n inspec-test

      - name: Wait for scan to complete and retrieve results
        run: |
          # Wait for scan to complete (no -it flags: there is no TTY in CI)
          echo "Waiting for scan to complete..."
          until kubectl exec app-scanner -n inspec-test -c scanner -- ls /results/scan-complete >/dev/null 2>&1; do
            echo "Scan in progress..."
            sleep 5
          done

          # Retrieve scan results
          echo "Retrieving scan results..."
          kubectl cp inspec-test/app-scanner:/results/scan-results.json ./scan-results.json -c scanner
          kubectl cp inspec-test/app-scanner:/results/scan-summary.md ./scan-summary.md -c scanner

          # Check threshold result (no -it: a TTY would add \r to the captured output)
          if kubectl exec app-scanner -n inspec-test -c scanner -- cat /results/threshold-result.txt >/dev/null 2>&1; then
            THRESHOLD_RESULT=$(kubectl exec app-scanner -n inspec-test -c scanner -- cat /results/threshold-result.txt | tr -d '[:space:]')
            echo "THRESHOLD_RESULT=${THRESHOLD_RESULT}" >> $GITHUB_ENV

            if [ "${THRESHOLD_RESULT}" -eq 0 ]; then
              echo "✅ Security scan passed threshold requirements"
            else
              echo "❌ Security scan failed to meet threshold requirements"
            fi
          else
            echo "Warning: Threshold result not found"
            echo "THRESHOLD_RESULT=1" >> $GITHUB_ENV
          fi

          # Display summary in job output
          echo "============= SCAN SUMMARY ============="
          cat scan-summary.md
          echo "========================================"

      - name: Process results with SAF-CLI
        run: |
          # Install SAF CLI
          npm install -g @mitre/saf

          # Generate reports
          saf view -i scan-results.json --output scan-report.html
          saf generate -i scan-results.json -o csv > results.csv
          saf generate -i scan-results.json -o junit > junit-results.xml

          # Add to GitHub step summary
          echo "## CINC Auditor Scan Results" > $GITHUB_STEP_SUMMARY
          cat scan-summary.md >> $GITHUB_STEP_SUMMARY

          # Add threshold result to summary
          if [ "${{ env.THRESHOLD_RESULT }}" -eq 0 ]; then
            echo "## ✅ Security scan passed threshold requirements" >> $GITHUB_STEP_SUMMARY
          else  
            echo "## ❌ Security scan failed to meet threshold requirements" >> $GITHUB_STEP_SUMMARY
          fi
          echo "Threshold: ${{ github.event.inputs.threshold }}%" >> $GITHUB_STEP_SUMMARY

      - name: Upload CINC Auditor results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: cinc-results
          path: |
            scan-results.json
            scan-summary.md
            scan-report.html
            results.csv
            junit-results.xml

      - name: Cleanup resources
        if: always()
        run: |
          kubectl delete namespace inspec-test
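
The sidecar approach above hinges on one Linux mechanism: with `shareProcessNamespace: true`, the scanner sees the target's processes, and a process's root filesystem is reachable at `/proc/<pid>/root` with no exec or shell in the target required (which is what makes distroless images scannable). The mechanism can be demonstrated locally with any process you own; inside the pod, `run-scanner` resolves the PID via `ps`/`grep` instead:

```shell
# Stand-in for the target container's long-running process
sleep 60 &
TARGET_PID=$!

# With a shared process namespace, the sidecar would discover this PID
# via ps; the process's root filesystem then appears under /proc/<pid>/root
TARGET_ROOT="/proc/${TARGET_PID}/root"

# Reading through that path reaches the target's filesystem directly,
# which is how CINC Auditor scans containers that have no shell
REACHABLE=no
if [ -d "${TARGET_ROOT}/etc" ]; then
  REACHABLE=yes
fi
echo "target filesystem reachable: ${REACHABLE}"

kill "$TARGET_PID" 2>/dev/null
```

This also explains why the sidecar must run as a user with ptrace-level access to the target process: `/proc/<pid>/root` is only readable by the process owner (or a sufficiently privileged user).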

Usage

These workflow examples are designed to be adapted to your specific environment. Each example includes detailed comments explaining the purpose of each step and how to customize it for your needs.

For guidance on which scanning approach to use in different scenarios, and for detailed GitHub Actions integration instructions, see the GitHub Actions Integration Guide.