6.5 Compliance and Governance

🎯 Learning Objective #

Learn how to meet compliance requirements (SOC 2, PCI DSS, GDPR), set up governance processes, and automate audit reporting in a DevSecOps environment.


📋 Understanding Compliance #

Основные Compliance Frameworks #

┌─────────────────────────────────────────────────────┐
│                 Compliance Landscape                │
├─────────────────────────────────────────────────────┤
│ Financial:      │ Privacy:       │ Security:        │
│ • SOX           │ • GDPR         │ • SOC 2          │
│ • PCI DSS       │ • CCPA         │ • ISO 27001      │
│ • FFIEC         │ • PIPEDA       │ • NIST           │
│                 │                │ • FedRAMP        │
├─────────────────────────────────────────────────────┤
│ Industry Specific:                                  │
│ • HIPAA (Healthcare)  • FERPA (Education)           │
│ • FISMA (Government)  • GLBA (Banking)              │
└─────────────────────────────────────────────────────┘

SOC 2 Type II Compliance #

Trust Services Criteria #

SOC 2 Controls Mapping

Security:

  • CC6.1: Logical and physical access controls
  • CC6.2: Management of access to system components
  • CC6.3: Network security controls
  • CC6.6: Vulnerability management
  • CC6.7: Protection of data in transit

Availability:

  • A1.1: System capacity management and monitoring
  • A1.2: Monitoring, backup, and recovery infrastructure
  • A1.3: Recovery plan testing

Processing Integrity:

  • PI1.1: Authorization and completeness of data processing

Confidentiality:

  • C1.1: Identification and protection of confidential information

Privacy:

  • P3.1: Collection of personal information
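
A mapping like the one above can drive evidence automation directly. Below is a minimal Python sketch; the `SOC2_EVIDENCE_MAP` dictionary and the commands in it are illustrative assumptions for a Kubernetes/AWS environment, not an official AICPA mapping:

```python
# Hypothetical mapping of SOC 2 criteria to evidence-collection commands.
# Criterion IDs follow the Trust Services Criteria; the command strings are
# examples only and would be executed by a separate collector job.
SOC2_EVIDENCE_MAP = {
    "CC6.1": ["kubectl get rolebindings,clusterrolebindings -A -o yaml"],
    "CC6.7": ["kubectl get secrets -A --field-selector type=kubernetes.io/tls"],
    "A1.3":  ["aws s3 ls s3://backup-bucket/ --recursive"],
}

def evidence_commands(criterion: str) -> list:
    """Return the evidence-collection commands registered for a criterion."""
    return SOC2_EVIDENCE_MAP.get(criterion, [])
```

Keeping the mapping in one place makes it easy to prove to an auditor which artifact answers which criterion.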

SOC 2 Evidence Collection #

#!/bin/bash
# SOC 2 Evidence Collection Script

# Access logs collection
echo "Collecting access logs for SOC 2 audit..."
kubectl logs -l app=auth-service --since=168h > access_logs_$(date +%Y%m%d).log

# Network security evidence
echo "Collecting network policies..."
kubectl get networkpolicies --all-namespaces -o yaml > network_policies_$(date +%Y%m%d).yaml

# RBAC evidence
echo "Collecting RBAC configurations..."
kubectl get rolebindings,clusterrolebindings --all-namespaces -o yaml > rbac_$(date +%Y%m%d).yaml

# Encryption evidence
echo "Collecting TLS certificates info..."
kubectl get secrets --all-namespaces --field-selector type=kubernetes.io/tls -o yaml > tls_certs_$(date +%Y%m%d).yaml

# Backup verification
echo "Verifying backup procedures..."
aws s3 ls s3://backup-bucket/ --recursive | grep "$(date +%Y-%m-%d)" > backup_verification_$(date +%Y%m%d).log

# Change management evidence
echo "Collecting deployment history..."
kubectl get deployments --all-namespaces -o yaml > deployments_$(date +%Y%m%d).yaml

PCI DSS Compliance #

PCI DSS Requirements Mapping #

PCI DSS Requirements for DevOps

Requirement 1: Install and maintain firewalls

  • Network segmentation
  • Firewall rules
  • Security groups

Requirement 2: Do not use vendor-supplied defaults

  • Password policies
  • Removal of default accounts
  • Hardened configurations

Requirement 3: Protect stored cardholder data

  • Encryption at rest
  • Data classification
  • Secure key management

Requirement 4: Encrypt cardholder data in transit

  • Encryption in transit
  • TLS configuration
  • Secure protocols

Requirement 6: Develop and maintain secure systems

  • Secure coding practices
  • Vulnerability management
  • Security testing

Requirement 8: Identify and authenticate access

  • Unique user identifiers
  • Strong authentication
  • Multi-factor authentication

Requirement 10: Track and monitor access

  • Audit logging
  • Log monitoring
  • Incident response
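
Coverage of these requirements can be tracked as a simple gap check in automation. A hedged Python sketch (the requirement numbers come from the list above; the dictionary and helper name are ours):

```python
# Subset of PCI DSS requirements covered in this section, keyed by number.
PCI_REQUIREMENTS = {
    1: "Install and maintain firewalls",
    2: "Do not use vendor-supplied defaults",
    3: "Protect stored cardholder data",
    4: "Encrypt cardholder data in transit",
    6: "Develop and maintain secure systems",
    8: "Identify and authenticate access",
    10: "Track and monitor access",
}

def missing_requirements(implemented):
    """Return the requirements from the list above not yet marked as covered."""
    return {n: desc for n, desc in PCI_REQUIREMENTS.items() if n not in implemented}
```

Feeding the set of implemented requirement numbers from a controls register gives an immediate view of outstanding work.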

PCI DSS Network Segmentation #

# Terraform: PCI DSS Compliant Network
resource "aws_vpc" "pci_vpc" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_hostnames = true
  enable_dns_support   = true

  tags = {
    Name = "pci-compliant-vpc"
    PCI  = "true"
  }
}

# Cardholder Data Environment (CDE)
resource "aws_subnet" "cde_subnet" {
  vpc_id            = aws_vpc.pci_vpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "cde-subnet"
    PCI  = "CDE"
  }
}

# DMZ subnet for web servers
resource "aws_subnet" "dmz_subnet" {
  vpc_id            = aws_vpc.pci_vpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-west-2a"

  tags = {
    Name = "dmz-subnet"
    PCI  = "DMZ"
  }
}

# Security Group for CDE
resource "aws_security_group" "cde_sg" {
  name_prefix = "cde-sg"
  vpc_id      = aws_vpc.pci_vpc.id

  # Only allow specific ports from DMZ
  ingress {
    from_port       = 443
    to_port         = 443
    protocol        = "tcp"
    security_groups = [aws_security_group.dmz_sg.id]
  }

  # No direct internet access
  egress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"]
  }

  tags = {
    Name = "cde-security-group"
    PCI  = "CDE"
  }
}

GDPR Compliance #

Data Processing Inventory #

# GDPR Data Processing Record
apiVersion: v1
kind: ConfigMap
metadata:
  name: gdpr-data-inventory
  namespace: compliance
data:
  data_processing_record.yaml: |
    data_processing_activities:
      - name: "User Registration"
        purpose: "Account creation and authentication"
        legal_basis: "Contract performance"
        data_categories:
          - "Personal identifiers (email, name)"
          - "Authentication data"
        data_subjects: "Website users"
        recipients: "Internal systems only"
        retention_period: "Active account + 2 years"
        security_measures:
          - "Encryption at rest and in transit"
          - "Access controls and RBAC"
          - "Regular security audits"
        
      - name: "Payment Processing"
        purpose: "Transaction processing"
        legal_basis: "Contract performance"
        data_categories:
          - "Payment information"
          - "Billing addresses"
        data_subjects: "Customers"
        recipients: "Payment processor (Stripe)"
        retention_period: "7 years (legal requirement)"
        security_measures:
          - "PCI DSS compliance"
          - "Tokenization"
          - "End-to-end encryption"

GDPR Technical Measures #

# Data Subject Rights Implementation
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gdpr-service
  namespace: compliance
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gdpr-service
  template:
    metadata:
      labels:
        app: gdpr-service
    spec:
      containers:
      - name: gdpr-service
        image: mycompany/gdpr-service:latest
        env:
        - name: DATABASE_URL
          valueFrom:
            secretKeyRef:
              name: database-credentials
              key: url
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"

---
# GDPR API Routes
apiVersion: v1
kind: ConfigMap
metadata:
  name: gdpr-api-config
  namespace: compliance
data:
  routes.yaml: |
    gdpr_endpoints:
      data_export:
        path: "/api/v1/gdpr/export"
        method: "POST"
        description: "Export all user data (Article 20)"
        authentication: "required"
        
      data_deletion:
        path: "/api/v1/gdpr/delete"
        method: "DELETE"
        description: "Delete user data (Article 17)"
        authentication: "required"
        
      data_rectification:
        path: "/api/v1/gdpr/update"
        method: "PUT"
        description: "Update user data (Article 16)"
        authentication: "required"
        
      consent_withdrawal:
        path: "/api/v1/gdpr/consent/withdraw"
        method: "POST"
        description: "Withdraw consent (Article 7)"
        authentication: "required"

🏛️ Governance Framework #

Security Policies as Code #

Policy Definition #

# OPA Rego policy for compliance
package compliance.k8s

# PCI DSS Requirement 2: Default passwords
deny[msg] {
    input.kind == "Secret"
    input.type == "Opaque"
    data := base64.decode(input.data.password)
    data == "password"
    msg := "Default passwords are not allowed (PCI DSS Req 2)"
}

# SOC 2 CC6.1: Logical access controls
deny[msg] {
    input.kind == "Pod"
    input.spec.containers[_].securityContext.privileged == true
    msg := "Privileged containers not allowed (SOC 2 CC6.1)"
}

# GDPR Article 32: Security of processing
# (bind the container and env entry to named variables so both conditions
# refer to the same env var; separate `_` wildcards would not correlate)
deny[msg] {
    input.kind == "Pod"
    container := input.spec.containers[_]
    env := container.env[_]
    env.name == "DATABASE_PASSWORD"
    not env.valueFrom
    msg := "Database passwords must use secrets (GDPR Article 32)"
}

# PCI DSS Requirement 4: Encrypt transmission
deny[msg] {
    input.kind == "Ingress"
    not input.spec.tls
    msg := "Ingress must use TLS encryption (PCI DSS Req 4)"
}

Policy Enforcement #

# Gatekeeper Constraint Template
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: compliancepolicy
spec:
  crd:
    spec:
      names:
        kind: CompliancePolicy
      validation:
        type: object
        properties:
          frameworks:
            type: array
            items:
              type: string
          severity:
            type: string
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package compliancepolicy

        import future.keywords.in

        violation[{"msg": msg}] {
          # PCI DSS network segmentation
          input.review.object.kind == "NetworkPolicy"
          "PCI-DSS" in input.parameters.frameworks
          not input.review.object.spec.policyTypes
          msg := "Network policies must specify policy types for PCI DSS compliance"
        }
        
        violation[{"msg": msg}] {
          # SOC 2 access controls: bind the subject once so kind and name
          # are checked on the same entry
          input.review.object.kind == "RoleBinding"
          "SOC2" in input.parameters.frameworks
          subject := input.review.object.subjects[_]
          subject.kind == "User"
          subject.name == "system:anonymous"
          msg := "Anonymous access not allowed for SOC 2 compliance"
        }

---
# Apply Constraint
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: CompliancePolicy
metadata:
  name: enforce-compliance
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod", "Service"]
      - apiGroups: ["networking.k8s.io"]
        kinds: ["NetworkPolicy"]
      - apiGroups: ["rbac.authorization.k8s.io"]
        kinds: ["RoleBinding", "ClusterRoleBinding"]
  parameters:
    frameworks: ["PCI-DSS", "SOC2", "GDPR"]
    severity: "high"

Change Management #

GitOps Compliance Workflow #

# .github/workflows/compliance-check.yml
name: Compliance Check

on:
  pull_request:
    branches: [main]
    paths: ['k8s/**', 'terraform/**']

jobs:
  compliance-scan:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v3
    
    # Policy validation
    - name: Validate Policies
      run: |
        # Install OPA
        curl -L -o opa https://openpolicyagent.org/downloads/latest/opa_linux_amd64
        chmod +x opa
        
        # Test policies against manifests
        for file in k8s/*.yaml; do
          ./opa eval -d policies/ -i "$file" "data.compliance.violations"
        done
    
    # Terraform compliance
    - name: Terraform Compliance Check
      run: |
        # Install terraform-compliance
        pip install terraform-compliance
        
        # Generate plan
        terraform init
        terraform plan -out=plan.out
        terraform show -json plan.out > plan.json
        
        # Run compliance tests
        terraform-compliance -f compliance-tests/ -p plan.json
    
    # Generate compliance report
    - name: Generate Compliance Report
      run: |
        # Collect violations first: inside a quoted heredoc the command
        # substitution would never expand
        VIOLATIONS=$(for file in k8s/*.yaml; do
          ./opa eval -d policies/ -i "$file" "data.compliance.violations" --format pretty
        done)
        cat > compliance-report.md << EOF
        ## Compliance Check Results

        ### Policy Violations
        ${VIOLATIONS}

        ### Terraform Compliance
        - [x] PCI DSS network segmentation
        - [x] SOC 2 access controls
        - [x] GDPR data protection

        ### Risk Assessment
        - **High**: 0 issues
        - **Medium**: 0 issues
        - **Low**: 0 issues
        EOF
    
    # Comment on PR
    - name: Comment PR
      uses: actions/github-script@v6
      with:
        script: |
          const fs = require('fs');
          const report = fs.readFileSync('compliance-report.md', 'utf8');
          github.rest.issues.createComment({
            issue_number: context.issue.number,
            owner: context.repo.owner,
            repo: context.repo.repo,
            body: report
          });

📊 Audit Logging and Monitoring #

Centralized Audit Logging #

ELK Stack for Compliance #

# Elasticsearch for audit logs
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: elasticsearch
  namespace: compliance-logging
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.elastic.co/elasticsearch/elasticsearch:8.5.0
        env:
        - name: discovery.seed_hosts
          value: "elasticsearch"
        - name: cluster.initial_master_nodes
          value: "elasticsearch-0,elasticsearch-1,elasticsearch-2"
        - name: ES_JAVA_OPTS
          value: "-Xms2g -Xmx2g"
        - name: xpack.security.enabled
          value: "true"
        - name: xpack.security.transport.ssl.enabled
          value: "true"
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
        resources:
          requests:
            memory: "4Gi"
            cpu: "1000m"
          limits:
            memory: "8Gi"
            cpu: "2000m"
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 100Gi

---
# Logstash pipeline for parsing audit logs
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: compliance-logging
data:
  logstash.conf: |
    input {
      beats {
        port => 5044
      }
    }
    
    filter {
      if [kubernetes][namespace] {
        # Add compliance tags
        if [kubernetes][namespace] in ["production", "staging"] {
          mutate {
            add_tag => ["compliance_required"]
          }
        }
        
        # Parse Kubernetes audit logs. KUBERNETESAUDIT is a custom grok
        # pattern that must be shipped in the patterns directory; since the
        # audit log is JSON, a json filter is a simpler alternative.
        if [source] == "/var/log/audit/audit.log" {
          grok {
            match => { "message" => "%{KUBERNETESAUDIT}" }
          }
          
          # Extract sensitive operations
          if [verb] in ["create", "update", "delete"] and [objectRef][resource] in ["secrets", "configmaps"] {
            mutate {
              add_tag => ["sensitive_operation"]
            }
          }
        }
        
        # Parse application logs for GDPR events
        if "gdpr" in [tags] {
          grok {
            match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{WORD:action} user_id=%{INT:user_id} data_type=%{WORD:data_type}" }
          }
          
          if [action] in ["export", "delete", "update"] {
            mutate {
              add_tag => ["gdpr_event"]
            }
          }
        }
      }
    }
    
    output {
      # Separate indices per compliance framework; Logstash conditionals
      # must wrap plugin blocks, they cannot appear inside one
      if "pci_dss" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "pci-logs-%{+YYYY.MM.dd}"
        }
      } else if "sox" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "sox-logs-%{+YYYY.MM.dd}"
        }
      } else if "gdpr" in [tags] {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "gdpr-logs-%{+YYYY.MM.dd}"
        }
      } else {
        elasticsearch {
          hosts => ["elasticsearch:9200"]
          index => "compliance-logs-%{+YYYY.MM.dd}"
        }
      }
    }

Kubernetes Audit Policy #

# Kubernetes audit policy for compliance
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# SOC 2 - Track access to sensitive resources
- level: RequestResponse
  resources:
  - group: ""
    resources: ["secrets", "configmaps"]
  - group: "rbac.authorization.k8s.io"
    resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]

# PCI DSS - Log network policy changes
- level: RequestResponse
  resources:
  - group: "networking.k8s.io"
    resources: ["networkpolicies"]

# GDPR - Track data processing activities
- level: RequestResponse
  namespaces: ["production", "staging"]
  resources:
  - group: ""
    resources: ["pods", "services"]
  - group: "apps"
    resources: ["deployments", "replicasets"]

# Financial compliance - Track all changes in production
- level: Metadata
  namespaces: ["production"]
  verbs: ["create", "update", "patch", "delete"]

# Security events - Privilege escalation attempts
- level: RequestResponse
  verbs: ["create", "update", "patch"]
  resources:
  - group: ""
    resources: ["pods/exec", "pods/portforward"]
  - group: "rbac.authorization.k8s.io"
    resources: ["rolebindings", "clusterrolebindings"]

Compliance Dashboards #

Grafana Compliance Dashboard #

{
  "dashboard": {
    "title": "Compliance Monitoring Dashboard",
    "panels": [
      {
        "title": "SOC 2 - Access Control Violations",
        "type": "stat",
        "targets": [
          {
            "expr": "sum(rate(kubernetes_audit_total{verb=\"create\",resource=\"rolebindings\",user=\"system:anonymous\"}[5m]))",
            "legendFormat": "Anonymous Access Attempts"
          }
        ]
      },
      {
        "title": "PCI DSS - Network Policy Changes",
        "type": "graph",
        "targets": [
          {
            "expr": "sum by (user) (rate(kubernetes_audit_total{resource=\"networkpolicies\",verb=~\"create|update|delete\"}[1h]))",
            "legendFormat": "{{user}}"
          }
        ]
      },
      {
        "title": "GDPR - Data Processing Events",
        "type": "table",
        "targets": [
          {
            "expr": "sum by (action, user_id) (rate(gdpr_events_total[1h]))",
            "format": "table"
          }
        ]
      },
      {
        "title": "Security Policy Violations",
        "type": "heatmap",
        "targets": [
          {
            "expr": "sum by (policy, severity) (rate(gatekeeper_violations_total[1h]))",
            "legendFormat": "{{policy}} - {{severity}}"
          }
        ]
      }
    ]
  }
}

🔍 Risk Management #

Risk Assessment Framework #

# Risk Assessment Configuration
apiVersion: v1
kind: ConfigMap
metadata:
  name: risk-assessment-config
  namespace: compliance
data:
  risk_matrix.yaml: |
    risk_categories:
      data_breach:
        impact: high
        likelihood: medium
        risk_score: 8
        controls:
          - "Encryption at rest"
          - "Access controls"
          - "Network segmentation"
          - "Monitoring and alerting"
      privilege_escalation:
        impact: high
        likelihood: low
        risk_score: 6
        controls:
          - "RBAC policies"
          - "Pod Security Standards"
          - "Admission controllers"
          - "Runtime monitoring"
      data_loss:
        impact: medium
        likelihood: low
        risk_score: 4
        controls:
          - "Backup procedures"
          - "Disaster recovery"
          - "Data replication"
          - "Version control"
      compliance_violation:
        impact: medium
        likelihood: medium
        risk_score: 6
        controls:
          - "Policy enforcement"
          - "Audit logging"
          - "Regular assessments"
          - "Training and awareness"
    risk_thresholds:
      high: 7
      medium: 4
      low: 2
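
The risk thresholds above translate directly into a scoring helper. A sketch (the threshold values are the ones from the matrix; the function name is ours):

```python
# Thresholds mirror the risk matrix above: high >= 7, medium >= 4, low >= 2.
def classify_risk(score, high=7, medium=4, low=2):
    """Map a numeric risk score onto the matrix's threshold bands."""
    if score >= high:
        return "high"
    if score >= medium:
        return "medium"
    if score >= low:
        return "low"
    return "negligible"
```

A helper like this keeps the banding consistent between the risk register and any automated assessment tooling.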

Automated Risk Scoring #

#!/usr/bin/env python3
"""
Automated Risk Assessment Tool
"""

import yaml
import json
import subprocess
from datetime import datetime

class RiskAssessment:
    def __init__(self, config_file):
        with open(config_file, 'r') as f:
            self.config = yaml.safe_load(f)
    
    def assess_kubernetes_risks(self):
        """Assess Kubernetes security risks"""
        risks = []
        
        # Check for privileged pods
        result = subprocess.run(['kubectl', 'get', 'pods', '--all-namespaces', 
                               '-o', 'json'], capture_output=True, text=True)
        pods = json.loads(result.stdout)
        
        privileged_pods = 0
        for pod in pods['items']:
            for container in pod['spec'].get('containers', []):
                if container.get('securityContext', {}).get('privileged'):
                    privileged_pods += 1
        
        if privileged_pods > 0:
            risks.append({
                'category': 'privilege_escalation',
                'severity': 'high',
                'description': f'{privileged_pods} privileged pods found',
                'recommendation': 'Remove privileged flag or use Pod Security Standards'
            })
        
        # Check for missing network policies
        result = subprocess.run(['kubectl', 'get', 'networkpolicies', 
                               '--all-namespaces', '-o', 'json'], 
                               capture_output=True, text=True)
        netpols = json.loads(result.stdout)
        
        result = subprocess.run(['kubectl', 'get', 'namespaces', '-o', 'json'], 
                               capture_output=True, text=True)
        namespaces = json.loads(result.stdout)
        
        protected_namespaces = set()
        for netpol in netpols['items']:
            protected_namespaces.add(netpol['metadata']['namespace'])
        
        unprotected_count = 0
        for ns in namespaces['items']:
            if (ns['metadata']['name'] not in protected_namespaces and 
                not ns['metadata']['name'].startswith('kube-')):
                unprotected_count += 1
        
        if unprotected_count > 0:
            risks.append({
                'category': 'data_breach',
                'severity': 'medium',
                'description': f'{unprotected_count} namespaces without network policies',
                'recommendation': 'Implement default-deny network policies'
            })
        
        return risks
    
    def generate_risk_report(self):
        """Generate comprehensive risk report"""
        kubernetes_risks = self.assess_kubernetes_risks()
        
        report = {
            'timestamp': datetime.now().isoformat(),
            'assessment_type': 'automated',
            'risks': kubernetes_risks,
            'summary': {
                'total_risks': len(kubernetes_risks),
                'high_severity': len([r for r in kubernetes_risks if r['severity'] == 'high']),
                'medium_severity': len([r for r in kubernetes_risks if r['severity'] == 'medium']),
                'low_severity': len([r for r in kubernetes_risks if r['severity'] == 'low'])
            }
        }
        
        return report

if __name__ == '__main__':
    assessor = RiskAssessment('risk_config.yaml')
    report = assessor.generate_risk_report()
    
    with open(f'risk_assessment_{datetime.now().strftime("%Y%m%d_%H%M%S")}.json', 'w') as f:
        json.dump(report, f, indent=2)
    
    print(f"Risk assessment completed. Found {report['summary']['total_risks']} risks.")

📋 Compliance Reporting #

Automated Compliance Reports #

#!/usr/bin/env python3
"""
Automated Compliance Reporting
"""

from datetime import datetime, timedelta
from elasticsearch import Elasticsearch
from jinja2 import Template

class ComplianceReporter:
    def __init__(self, es_host='elasticsearch:9200'):
        self.es = Elasticsearch([es_host])
        
    def generate_soc2_report(self, start_date, end_date):
        """Generate SOC 2 compliance report"""
        
        # Query access control events
        access_query = {
            "query": {
                "bool": {
                    "must": [
                        {"range": {"@timestamp": {"gte": start_date, "lte": end_date}}},
                        {"terms": {"verb": ["create", "update", "delete"]}},
                        {"terms": {"resource": ["rolebindings", "clusterrolebindings"]}}
                    ]
                }
            }
        }
        
        access_events = self.es.search(index="compliance-logs-*", body=access_query)
        
        # Query network security events
        network_query = {
            "query": {
                "bool": {
                    "must": [
                        {"range": {"@timestamp": {"gte": start_date, "lte": end_date}}},
                        {"term": {"resource": "networkpolicies"}}
                    ]
                }
            }
        }
        
        network_events = self.es.search(index="compliance-logs-*", body=network_query)
        
        report = {
            "report_type": "SOC 2 Type II",
            "period": f"{start_date} to {end_date}",
            "controls": {
                "CC6_1": {
                    "description": "Logical and physical access controls",
                    "events_count": access_events['hits']['total']['value'],
                    "status": "Compliant" if access_events['hits']['total']['value'] == 0 else "Review Required"
                },
                "CC6_3": {
                    "description": "Network security controls",
                    "events_count": network_events['hits']['total']['value'],
                    "status": "Compliant"
                }
            }
        }
        
        return report
    
    def generate_gdpr_report(self, start_date, end_date):
        """Generate GDPR compliance report"""
        
        # Query data processing events
        gdpr_query = {
            "query": {
                "bool": {
                    "must": [
                        {"range": {"@timestamp": {"gte": start_date, "lte": end_date}}},
                        {"exists": {"field": "gdpr_event"}}
                    ]
                }
            },
            "aggs": {
                "actions": {
                    "terms": {"field": "action.keyword"}
                }
            }
        }
        
        gdpr_events = self.es.search(index="gdpr-logs-*", body=gdpr_query)
        
        report = {
            "report_type": "GDPR Article 30 Record",
            "period": f"{start_date} to {end_date}",
            "data_processing_activities": gdpr_events['aggregations']['actions']['buckets'],
            "data_subject_requests": {
                "total": gdpr_events['hits']['total']['value'],
                "export_requests": len([h for h in gdpr_events['hits']['hits'] 
                                       if h['_source'].get('action') == 'export']),
                "deletion_requests": len([h for h in gdpr_events['hits']['hits'] 
                                        if h['_source'].get('action') == 'delete'])
            }
        }
        
        return report

# HTML Report Template
HTML_TEMPLATE = """
<!DOCTYPE html>
<html>
<head>
    <title>{{ report.report_type }} - {{ report.period }}</title>
    <style>
        body { font-family: Arial, sans-serif; margin: 20px; }
        .header { background-color: #f5f5f5; padding: 20px; margin-bottom: 20px; }
        .control { margin-bottom: 15px; padding: 10px; border-left: 4px solid #007cba; }
        .compliant { border-left-color: #28a745; }
        .review { border-left-color: #ffc107; }
        .non-compliant { border-left-color: #dc3545; }
        table { width: 100%; border-collapse: collapse; margin-top: 10px; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #f2f2f2; }
    </style>
</head>
<body>
    <div class="header">
        <h1>{{ report.report_type }}</h1>
        <p><strong>Reporting Period:</strong> {{ report.period }}</p>
        <p><strong>Generated:</strong> {{ timestamp }}</p>
    </div>
    
    {% if report.controls %}
    <h2>Control Assessment</h2>
    {% for control_id, control in report.controls.items() %}
    <div class="control {{ 'compliant' if control.status == 'Compliant' else 'review' }}">
        <h3>{{ control_id }}: {{ control.description }}</h3>
        <p><strong>Status:</strong> {{ control.status }}</p>
        <p><strong>Events:</strong> {{ control.events_count }}</p>
    </div>
    {% endfor %}
    {% endif %}
    
    {% if report.data_subject_requests %}
    <h2>Data Subject Requests</h2>
    <table>
        <tr>
            <th>Request Type</th>
            <th>Count</th>
        </tr>
        <tr>
            <td>Total Requests</td>
            <td>{{ report.data_subject_requests.total }}</td>
        </tr>
        <tr>
            <td>Export Requests</td>
            <td>{{ report.data_subject_requests.export_requests }}</td>
        </tr>
        <tr>
            <td>Deletion Requests</td>
            <td>{{ report.data_subject_requests.deletion_requests }}</td>
        </tr>
    </table>
    {% endif %}
</body>
</html>
"""

def generate_html_report(report_data):
    template = Template(HTML_TEMPLATE)
    return template.render(
        report=report_data,
        timestamp=datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    )

if __name__ == '__main__':
    reporter = ComplianceReporter()
    
    # Generate monthly SOC 2 report
    end_date = datetime.now()
    start_date = end_date - timedelta(days=30)
    
    soc2_report = reporter.generate_soc2_report(
        start_date.isoformat(),
        end_date.isoformat()
    )
    
    # Generate HTML report
    html_report = generate_html_report(soc2_report)
    
    with open(f'soc2_report_{datetime.now().strftime("%Y%m%d")}.html', 'w') as f:
        f.write(html_report)
    
    print("SOC 2 compliance report generated successfully")

🎯 Hands-on Exercises #

🟢 Exercise 1: Policy as Code #

# 1. Create OPA policies for compliance
mkdir compliance-policies && cd compliance-policies

cat > soc2-policies.rego << 'EOF'
package soc2

# SOC 2 CC6.1 - Logical access controls
deny[msg] {
    input.kind == "Pod"
    input.spec.containers[_].securityContext.privileged == true
    msg := "Privileged containers violate SOC 2 CC6.1"
}

# SOC 2 CC6.3 - Network security
deny[msg] {
    input.kind == "Service"
    input.spec.type == "LoadBalancer"
    not input.spec.loadBalancerSourceRanges
    msg := "LoadBalancer services must restrict source ranges (SOC 2 CC6.3)"
}
EOF

# 2. Test the policies
cat > test-pod.yaml << 'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      privileged: true
EOF

# Evaluate the policy
opa eval -d soc2-policies.rego -i test-pod.yaml "data.soc2.deny"

🟡 Exercise 2: Audit Logging Setup #

# 1. Configure a Kubernetes audit policy
cat > audit-policy.yaml << 'EOF'
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: RequestResponse
  resources:
  - group: "rbac.authorization.k8s.io"
    resources: ["*"]
- level: Metadata
  namespaces: ["production"]
  verbs: ["create", "update", "patch", "delete"]
EOF

# 2. Apply the audit policy (requires access to the control plane)
# sudo cp audit-policy.yaml /etc/kubernetes/audit-policy.yaml

# 3. Configure kube-apiserver with audit logging
# Add to /etc/kubernetes/manifests/kube-apiserver.yaml:
cat >> kube-apiserver-audit.yaml << 'EOF'
spec:
  containers:
  - command:
    - kube-apiserver
    - --audit-log-path=/var/log/audit.log
    - --audit-policy-file=/etc/kubernetes/audit-policy.yaml
    - --audit-log-maxage=30
    - --audit-log-maxbackup=3
    - --audit-log-maxsize=100
EOF

🟠 Exercise 3: Compliance Dashboard #

# 1. Create Prometheus rules for compliance metrics
cat > compliance-rules.yaml << 'EOF'
groups:
- name: compliance.rules
  rules:
  - alert: SOC2_PrivilegedContainer
    # NOTE: assumes a metric exposing the privileged flag; default
    # kube-state-metrics builds do not ship this series
    expr: kube_pod_container_status_running and on(pod) kube_pod_spec_containers_security_context_privileged == 1
    for: 0m
    labels:
      severity: critical
      compliance: SOC2
    annotations:
      summary: "Privileged container detected"
      description: "Pod {{ $labels.pod }} in namespace {{ $labels.namespace }} is running a privileged container"

  - alert: PCI_DSS_UnencryptedTraffic
    expr: increase(istio_requests_total{security_policy!="mutual_tls"}[5m]) > 0
    for: 2m
    labels:
      severity: warning
      compliance: PCI-DSS
    annotations:
      summary: "Unencrypted traffic detected"
      description: "Service {{ $labels.destination_service_name }} is receiving unencrypted traffic"

  - alert: GDPR_DataProcessingEvent
    expr: increase(gdpr_data_processing_total[1h]) > 100
    for: 0m
    labels:
      severity: info
      compliance: GDPR
    annotations:
      summary: "High volume of data processing events"
      description: "{{ $value }} GDPR data processing events in the last hour"
EOF

# 2. Load the rules (a plain rules file is not a Kubernetes object: mount it
# via the Prometheus configuration, or wrap the group in a PrometheusRule
# resource when using the Prometheus Operator)

# 3. Create a Grafana dashboard
cat > compliance-dashboard.json << 'EOF'
{
  "dashboard": {
    "title": "Compliance Monitoring",
    "panels": [
      {
        "title": "SOC 2 Violations",
        "type": "stat",
        "targets": [
          {
            "expr": "sum(ALERTS{alertname=\"SOC2_PrivilegedContainer\"})"
          }
        ]
      }
    ]
  }
}
EOF

📚 Useful Resources #

🛠️ Tools #

  • Policy Management: Open Policy Agent, Kyverno, Falco
  • Audit Logging: ELK Stack, Splunk, Fluentd
  • Compliance Scanning: Prowler, ScoutSuite, CloudSploit
  • Risk Assessment: OpenFAIR, FAIR-U, RiskLens


🎯 Learning Outcomes #

After completing this section you:

  • ✅ Understand the requirements of the major compliance frameworks
  • ✅ Can implement a Policy as Code approach
  • ✅ Know how to set up audit logging and monitoring
  • ✅ Can automate compliance reporting
  • ✅ Understand risk management principles in DevSecOps
  • ✅ Can build governance processes for DevOps