6.4 Container and Kubernetes Security #

🎯 Learning Objective #

Learn the security aspects specific to containers and Kubernetes, including runtime security, Pod Security Standards, and network security policies.


🐳 Container Security Fundamentals #

Container Security Model #

The container security model is a multi-layered architecture:

  1. Host OS (the host operating system) - the foundation for all containers
  2. Container Runtime - the container management layer (Docker, containerd)
  3. Individual containers, each containing:
    • the application
    • libraries and dependencies
    • a base OS userland

Main vulnerability classes:

  • Host operating system vulnerabilities
  • Container runtime vulnerabilities
  • Vulnerabilities in container images
  • Application vulnerabilities
  • Configuration errors

🔒 Container Isolation Mechanisms #

Namespaces #

Namespaces provide container isolation at the operating-system level. The main namespaces are:

  • pid - process isolation
  • net - network isolation
  • mnt - filesystem mounts
  • uts - hostname and domain name
  • ipc - inter-process communication
  • user - users and groups
  • cgroup - resource control

You can inspect a container's namespaces by listing /proc/1/ns/ inside it, or with docker inspect.
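A quick sketch of both checks (the `docker` commands assume Docker is installed; `web` is a hypothetical container name):

```shell
# Namespaces of the current process; inside a container, /proc/1/ns/
# shows the namespaces of the container's PID 1.
ls -la /proc/self/ns/

# Host-side view of a container's namespaces (hypothetical container "web"):
# PID=$(docker inspect --format '{{.State.Pid}}' web)
# sudo ls -la /proc/$PID/ns/
```

Two containers sharing a namespace will show the same inode number behind the corresponding symlink.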

Control Groups (cgroups) #

cgroups limit and account for the resources containers consume:

  • Memory limit (--memory="512m") - maximum amount of RAM
  • CPU limit (--cpus="1.5") - maximum number of CPU cores
  • Process limit (--pids-limit 100) - maximum number of processes

Limits are monitored through the /sys/fs/cgroup filesystem.
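A minimal sketch (the `docker run` line assumes Docker is installed and uses a made-up container name; the /proc check works on any Linux host):

```shell
# Start a container with memory, CPU and process limits (hypothetical name):
# docker run -d --name limited --memory="512m" --cpus="1.5" --pids-limit 100 nginx:alpine

# Every process records its position in the cgroup hierarchy here;
# the matching limit files (e.g. memory.max on cgroup v2) live under /sys/fs/cgroup:
cat /proc/self/cgroup
```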

Capabilities #

Capabilities give fine-grained control over a container's privileges:

  • --cap-drop=ALL - drop all capabilities (principle of least privilege)
  • --cap-add=NET_BIND_SERVICE - add back only what is needed (here, binding to privileged ports)
  • --user 1000:1000 - run as an unprivileged user

The current capability set can be inspected with the capsh utility.
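For example (the `docker` line assumes Docker is installed; the /proc check works on any Linux host):

```shell
# Run with a minimal capability set (assumes Docker is installed):
# docker run --rm --cap-drop=ALL --cap-add=NET_BIND_SERVICE --user 1000:1000 nginx:alpine id

# The capability bitmasks of any process can also be read from /proc;
# `capsh --decode=<CapEff value>` translates the hex mask into names.
grep Cap /proc/self/status
```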


🏗️ Secure Container Images #

Dockerfile Security Best Practices #

# ❌ Insecure Dockerfile
FROM ubuntu:18.04
COPY . /app
RUN apt-get update
RUN chmod 777 /app
USER root
EXPOSE 22
CMD ["node", "app.js"]
# ✅ Secure Dockerfile

# Use an up-to-date base image
FROM node:18-alpine AS builder

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

# Install dependencies in the builder stage
WORKDIR /app
COPY package*.json ./
RUN npm ci --only=production && \
    npm cache clean --force

# Production stage
FROM node:18-alpine AS runner

# Update system packages
RUN apk update && apk upgrade && \
    rm -rf /var/cache/apk/*

# Create a non-root user
RUN addgroup -g 1001 -S nodejs && \
    adduser -S nextjs -u 1001

WORKDIR /app

# Copy only the files that are needed
COPY --from=builder --chown=nextjs:nodejs /app/node_modules ./node_modules
COPY --chown=nextjs:nodejs package*.json ./
COPY --chown=nextjs:nodejs . .

# Security settings
USER nextjs

# Expose only the necessary port
EXPOSE 3000

# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
  CMD curl -f http://localhost:3000/health || exit 1

# Start the application
CMD ["node", "app.js"]

Multi-stage Builds for Security #

# Secure multi-stage build for a Go application
FROM golang:1.19-alpine AS builder

# Install git, CA certificates, and timezone data
RUN apk update && apk add --no-cache git ca-certificates tzdata

# Create a non-root user
RUN adduser -D -g '' appuser

WORKDIR /build
COPY go.mod go.sum ./
RUN go mod download

COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build \
    -ldflags='-w -s -extldflags "-static"' \
    -o app ./cmd/main.go

# Final stage - distroless
FROM gcr.io/distroless/static-debian11:nonroot

# Copy CA certificates
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/

# Copy timezone data
COPY --from=builder /usr/share/zoneinfo /usr/share/zoneinfo

# Copy the binary
COPY --from=builder /build/app /app

# Run as the non-root user
USER nonroot:nonroot

EXPOSE 8080

ENTRYPOINT ["/app"]

Distroless Images #

# Distroless for Java
FROM gcr.io/distroless/java:11
COPY app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]

# Distroless for Node.js
FROM gcr.io/distroless/nodejs:18
COPY --chown=nonroot:nonroot app.js package.json ./
EXPOSE 3000
USER nonroot
CMD ["app.js"]

# Distroless for Python
FROM gcr.io/distroless/python3
COPY --chown=nonroot:nonroot . /app
WORKDIR /app
USER nonroot
CMD ["main.py"]

🛡️ Container Runtime Security #

Runtime Security with Falco #

# Falco installation
apiVersion: v1
kind: ServiceAccount
metadata:
  name: falco
  namespace: falco-system
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: falco
  namespace: falco-system
  labels:
    app: falco
spec:
  selector:
    matchLabels:
      app: falco
  template:
    metadata:
      labels:
        app: falco
    spec:
      serviceAccountName: falco
      hostNetwork: true
      hostPID: true
      containers:
      - name: falco
        image: falcosecurity/falco:0.35.1
        securityContext:
          privileged: true
        env:
        - name: FALCO_K8S_AUDIT_ENDPOINT
          value: "http://localhost:8765/k8s-audit"
        volumeMounts:
        - mountPath: /host/var/run/docker.sock
          name: docker-socket
        - mountPath: /host/proc
          name: proc-fs
          readOnly: true
        - mountPath: /host/boot
          name: boot-fs
          readOnly: true
        - mountPath: /etc/falco
          name: falco-config
        resources:
          limits:
            memory: 512Mi
            cpu: 200m
          requests:
            memory: 256Mi
            cpu: 100m
      volumes:
      - name: docker-socket
        hostPath:
          path: /var/run/docker.sock
      - name: proc-fs
        hostPath:
          path: /proc
      - name: boot-fs
        hostPath:
          path: /boot
      - name: falco-config
        configMap:
          name: falco-config

Custom Falco Rules #

apiVersion: v1
kind: ConfigMap
metadata:
  name: falco-config
  namespace: falco-system
data:
  custom_rules.yaml: |
    # Container Security Rules
    
    - rule: Container Drift Detection
      desc: Detect unexpected processes spawned in running containers
      condition: >
        spawned_process and container and
        not proc.name in (package_mgmt_binaries) and
        not proc.pname in (ls, find, grep, cat, vi, vim, nano)
      output: >
        Unexpected process spawned in container after startup
        (user=%user.name container=%container.name
         command=%proc.cmdline)
      priority: WARNING
      
    - rule: Sensitive File Access
      desc: Detect access to sensitive files
      condition: >
        open_read and container and
        (fd.name startswith /etc/passwd or
         fd.name startswith /etc/shadow or
         fd.name startswith /etc/sudoers)
      output: >
        Sensitive file access detected
        (user=%user.name file=%fd.name container=%container.name)
      priority: CRITICAL
      
    - rule: Privilege Escalation
      desc: Detect privilege escalation attempts
      condition: >
        spawned_process and container and
        proc.name in (sudo, su, setuid, setgid) and
        not user.name in (trusted_users)
      output: >
        Privilege escalation attempt
        (user=%user.name command=%proc.cmdline container=%container.name)
      priority: CRITICAL
      
    - rule: Crypto Mining Detection
      desc: Detect cryptocurrency mining
      condition: >
        spawned_process and container and
        (proc.name in (xmrig, minerd, cpuminer) or
         proc.cmdline contains stratum)
      output: >
        Cryptocurrency mining detected
        (command=%proc.cmdline container=%container.name)
      priority: CRITICAL

Sysdig Runtime Security #

# Sysdig Secure Agent
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: sysdig-agent
  namespace: sysdig-agent
spec:
  selector:
    matchLabels:
      app: sysdig-agent
  template:
    metadata:
      labels:
        app: sysdig-agent
    spec:
      serviceAccountName: sysdig-agent
      hostNetwork: true
      hostPID: true
      containers:
      - name: sysdig-agent
        image: sysdig/agent:latest
        securityContext:
          privileged: true
        env:
        - name: ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: sysdig-agent
              key: access-key
        - name: TAGS
          value: "env:production,team:devops"
        volumeMounts:
        - mountPath: /host/var/run/docker.sock
          name: docker-sock
        - mountPath: /host/dev
          name: dev-vol
        - mountPath: /host/proc
          name: proc-vol
          readOnly: true
        - mountPath: /host/boot
          name: boot-vol
          readOnly: true
        - mountPath: /host/lib/modules
          name: modules-vol
          readOnly: true
        - mountPath: /host/usr
          name: usr-vol
          readOnly: true
        - mountPath: /dev/shm
          name: dshm
        - mountPath: /opt/draios/etc/kubernetes/config
          name: sysdig-agent-config
        - mountPath: /opt/draios/etc/kubernetes/secrets
          name: sysdig-agent-secrets
      volumes:
      - name: docker-sock
        hostPath:
          path: /var/run/docker.sock
      - name: dev-vol
        hostPath:
          path: /dev
      - name: proc-vol
        hostPath:
          path: /proc
      - name: boot-vol
        hostPath:
          path: /boot
      - name: modules-vol
        hostPath:
          path: /lib/modules
      - name: usr-vol
        hostPath:
          path: /usr
      - name: dshm
        emptyDir:
          medium: Memory
      - name: sysdig-agent-config
        configMap:
          name: sysdig-agent-config
      - name: sysdig-agent-secrets
        secret:
          secretName: sysdig-agent

☸️ Kubernetes Security #

Pod Security Standards #

Restricted Pod Security Standard #

# Namespace with the Restricted policy
apiVersion: v1
kind: Namespace
metadata:
  name: secure-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted

---
# Secure Pod example
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: secure-namespace
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: nginx-cache
      mountPath: /var/cache/nginx
    - name: var-run
      mountPath: /var/run
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  volumes:
  - name: tmp
    emptyDir: {}
  - name: nginx-cache
    emptyDir: {}
  - name: var-run
    emptyDir: {}
  serviceAccountName: limited-sa

Service Account Security #

# Minimal Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: limited-sa
  namespace: secure-namespace
automountServiceAccountToken: false  # disable token automounting

---
# Minimal RBAC
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: secure-namespace
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: secure-namespace
subjects:
- kind: ServiceAccount
  name: limited-sa
  namespace: secure-namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
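Once the Role and RoleBinding are applied, the service account's effective permissions can be checked with `kubectl auth can-i` (a sketch assuming access to the cluster):

```shell
# Impersonate the service account and probe individual verbs:
kubectl auth can-i list pods \
  --as=system:serviceaccount:secure-namespace:limited-sa \
  -n secure-namespace   # expected: yes

kubectl auth can-i delete pods \
  --as=system:serviceaccount:secure-namespace:limited-sa \
  -n secure-namespace   # expected: no
```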

Admission Controllers #

OPA Gatekeeper Policies #

# Gatekeeper ConstraintTemplate enforcing security-context settings
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: k8ssecuritycontext
spec:
  crd:
    spec:
      names:
        kind: K8sSecurityContext
      validation:
        openAPIV3Schema:
          type: object
          properties:
            runAsNonRoot:
              type: boolean
            readOnlyRootFilesystem:
              type: boolean
            allowPrivilegeEscalation:
              type: boolean
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8ssecuritycontext
        
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          input.parameters.runAsNonRoot == true
          not container.securityContext.runAsNonRoot == true
          msg := "Container must run as non-root user"
        }
        
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          input.parameters.readOnlyRootFilesystem == true
          not container.securityContext.readOnlyRootFilesystem == true
          msg := "Container must use read-only root filesystem"
        }
        
        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          input.parameters.allowPrivilegeEscalation == false
          not container.securityContext.allowPrivilegeEscalation == false
          msg := "Container must not allow privilege escalation"
        }

---
# Apply the constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sSecurityContext
metadata:
  name: security-context-constraint
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["kube-system", "gatekeeper-system"]
  parameters:
    runAsNonRoot: true
    readOnlyRootFilesystem: true
    allowPrivilegeEscalation: false

ValidatingAdmissionWebhook Example #

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: security-webhook
webhooks:
- name: security.example.com
  clientConfig:
    service:
      name: security-webhook
      namespace: webhook-system
      path: "/validate"
  rules:
  - operations: ["CREATE", "UPDATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  admissionReviewVersions: ["v1", "v1beta1"]
  sideEffects: None
  failurePolicy: Fail

🌐 Network Security #

Network Policies Deep Dive #

# Zero Trust Network Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: zero-trust-policy
  namespace: production
spec:
  podSelector: {}  # Applies to all pods
  policyTypes:
  - Ingress
  - Egress
  # Default deny all traffic

---
# Web Tier Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-tier-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: web
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from load balancer
  - from:
    - namespaceSelector:
        matchLabels:
          name: ingress-nginx
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53
  # Allow traffic to app tier
  - to:
    - podSelector:
        matchLabels:
          tier: app
    ports:
    - protocol: TCP
      port: 8080

---
# App Tier Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: app-tier-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: app
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic from web tier
  - from:
    - podSelector:
        matchLabels:
          tier: web
    ports:
    - protocol: TCP
      port: 8080
  egress:
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53
  # Allow traffic to database
  - to:
    - podSelector:
        matchLabels:
          tier: database
    ports:
    - protocol: TCP
      port: 5432

---
# Database Tier Policy
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-tier-policy
  namespace: production
spec:
  podSelector:
    matchLabels:
      tier: database
  policyTypes:
  - Ingress
  - Egress
  ingress:
  # Allow traffic only from app tier
  - from:
    - podSelector:
        matchLabels:
          tier: app
    ports:
    - protocol: TCP
      port: 5432
  egress:
  # Allow DNS only
  - to: []
    ports:
    - protocol: UDP
      port: 53

Calico Global Network Policies #

# Calico Global Network Policy
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-all-non-kube-system
spec:
  order: 100
  selector: all()
  types:
  - Ingress
  - Egress
  # Default deny (no rules specified)

---
# Allow kube-system traffic
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-kube-system
spec:
  order: 50
  selector: projectcalico.org/namespace == "kube-system"
  types:
  - Ingress
  - Egress
  ingress:
  - action: Allow
  egress:
  - action: Allow

---
# Restrict egress to specific destinations
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: restrict-egress
spec:
  order: 200
  selector: environment == "production"
  types:
  - Egress
  egress:
  # Allow DNS
  - action: Allow
    protocol: UDP
    destination:
      ports: [53]
  # Allow HTTPS to an allow-list of external IPs
  - action: Allow
    protocol: TCP
    destination:
      ports: [443]
      nets: 
      - 8.8.8.8/32  # Google DNS
      - 1.1.1.1/32  # Cloudflare DNS
  # Allow internal cluster communication
  - action: Allow
    destination:
      nets:
      - 10.0.0.0/8
      - 172.16.0.0/12
      - 192.168.0.0/16

🔐 Image Security #

Image Scanning Pipeline #

# Container image security pipeline
name: Container Security Pipeline

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  build-and-scan:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write          # required for keyless Cosign signing
      security-events: write   # required to upload SARIF results
    steps:
    - uses: actions/checkout@v3
    
    # Build image
    - name: Build Docker image
      run: |
        docker build -t myapp:${{ github.sha }} .
        
    # Scan with Trivy
    - name: Run Trivy vulnerability scanner
      uses: aquasecurity/trivy-action@master
      with:
        image-ref: 'myapp:${{ github.sha }}'
        format: 'sarif'
        output: 'trivy-results.sarif'
        severity: 'CRITICAL,HIGH'
        
    # Scan with Grype
    - name: Run Grype vulnerability scanner
      uses: anchore/scan-action@v3
      with:
        image: 'myapp:${{ github.sha }}'
        fail-build: true
        severity-cutoff: high
        
    # Scan with Snyk
    - name: Run Snyk to check Docker image for vulnerabilities
      uses: snyk/actions/docker@master
      env:
        SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
      with:
        image: 'myapp:${{ github.sha }}'
        args: --severity-threshold=high
        
    # SBOM Generation
    - name: Generate SBOM
      uses: anchore/sbom-action@v0
      with:
        image: 'myapp:${{ github.sha }}'
        format: spdx-json
        output-file: sbom.spdx.json
        
    # Image signing with Cosign
    - name: Install Cosign
      uses: sigstore/cosign-installer@v3
      
    - name: Sign container image
      run: |
        cosign sign --yes myapp:${{ github.sha }}
      env:
        COSIGN_EXPERIMENTAL: 1
        
    # Upload to GitHub Security
    - name: Upload Trivy scan results to GitHub Security tab
      uses: github/codeql-action/upload-sarif@v2
      if: always()
      with:
        sarif_file: 'trivy-results.sarif'

Image Signing and Verification #

# Cosign - Container Image Signing

# Generate a key pair
cosign generate-key-pair

# Sign an image
cosign sign --key cosign.key myapp:latest

# Verify the signature
cosign verify --key cosign.pub myapp:latest

# Keyless signing (via OIDC)
COSIGN_EXPERIMENTAL=1 cosign sign myapp:latest

# Keyless verification
COSIGN_EXPERIMENTAL=1 cosign verify myapp:latest

Admission Controller for Signed Images #

# Policy requiring signed images
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: enforce
  background: false
  rules:
  - name: check-image-signature
    match:
      any:
      - resources:
          kinds:
          - Pod
    verifyImages:
    - imageReferences:
      - "*"
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
              -----END PUBLIC KEY-----

🎯 Hands-on Exercises #

🟢 Exercise 1: Secure Dockerfile #

# 1. Create a vulnerable Dockerfile
cat > Dockerfile.insecure << 'EOF'
FROM ubuntu:18.04
COPY . /app
RUN apt-get update
RUN chmod 777 /app
USER root
EXPOSE 22 80 443 8080
CMD ["python", "/app/app.py"]
EOF

# 2. Scan the image
docker build -f Dockerfile.insecure -t insecure-app .
trivy image insecure-app:latest

# 3. Create a secure version
cat > Dockerfile.secure << 'EOF'
FROM python:3.11-alpine AS builder
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

FROM python:3.11-alpine
RUN addgroup -g 1001 -S appgroup && \
    adduser -S appuser -u 1001 -G appgroup
WORKDIR /app
COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
COPY --chown=appuser:appgroup . .
USER appuser
EXPOSE 8080
HEALTHCHECK --interval=30s --timeout=3s CMD python -c "import requests; requests.get('http://localhost:8080/health').raise_for_status()"
CMD ["python", "app.py"]
EOF

# 4. Compare the scan results
docker build -f Dockerfile.secure -t secure-app .
trivy image secure-app:latest

🟡 Exercise 2: Pod Security Standards #

# 1. Create a namespace with Pod Security Standards
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: secure-test
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted
EOF

# 2. Try to create an insecure pod (it should be rejected)
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: insecure-pod
  namespace: secure-test
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      runAsUser: 0  # root user
      privileged: true
EOF

# 3. Create a secure pod
cat << 'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
  namespace: secure-test
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: app
    image: nginx:alpine
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE
      seccompProfile:
        type: RuntimeDefault
    volumeMounts:
    - name: tmp
      mountPath: /tmp
    - name: nginx-cache
      mountPath: /var/cache/nginx
    - name: nginx-run
      mountPath: /var/run
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  volumes:
  - name: tmp
    emptyDir: {}
  - name: nginx-cache
    emptyDir: {}
  - name: nginx-run
    emptyDir: {}
EOF

🟠 Exercise 3: Network Policies #

# 1. Create a test environment
kubectl create namespace netpol-test
kubectl label namespace netpol-test environment=test

# 2. Create test applications
kubectl run web --image=nginx --labels="app=web,tier=frontend" -n netpol-test
kubectl run api --image=httpd --labels="app=api,tier=backend" -n netpol-test  
kubectl run db --image=postgres --labels="app=db,tier=database" -n netpol-test
kubectl run client --image=busybox --labels="app=client" -n netpol-test -- sleep 3600

# 3. Check connectivity before applying policies
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 web
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 api
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 db

# 4. Apply the network policies
cat << 'EOF' | kubectl apply -f -
# Default deny all
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: netpol-test
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

---
# Allow client to web (ingress side)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-web
  namespace: netpol-test
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: client
    ports:
    - protocol: TCP
      port: 80

---
# The default deny also blocks the client's outgoing traffic,
# so allow client egress to web plus DNS for name resolution
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-egress
  namespace: netpol-test
spec:
  podSelector:
    matchLabels:
      app: client
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 80
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53

---
# Allow web to api
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
  namespace: netpol-test
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
    ports:
    - protocol: TCP
      port: 80

---
# Allow API egress to database
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: netpol-test
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 5432
  # Allow DNS
  - to: []
    ports:
    - protocol: UDP
      port: 53
EOF

# 5. Check connectivity after applying the policies
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 web  # Should work
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 api  # Should fail
kubectl exec -n netpol-test client -- wget -qO- --timeout=2 db   # Should fail

📚 Useful Resources #

🛠️ Tools #

  • Container Scanning: Trivy, Grype, Snyk, Twistlock
  • Runtime Security: Falco, Sysdig, Aqua Security
  • Image Signing: Cosign, Notary, Docker Content Trust
  • Admission Control: OPA Gatekeeper, Kyverno, ValidatingAdmissionWebhook

📖 Documentation and Standards #


🎯 Learning Outcomes #

After working through this section you:

  • ✅ Understand container isolation mechanisms
  • ✅ Can build secure Docker images
  • ✅ Know how to set up runtime security monitoring
  • ✅ Can apply Pod Security Standards
  • ✅ Can create and apply Network Policies
  • ✅ Know how to sign and verify container images

Next section: 6.5 Compliance and Governance