fix: skip WAL freshness check for new CNPG clusters without archives #93

Merged
forgejo_admin merged 2 commits from 92-fix-backup-verify-new-clusters into main 2026-03-17 02:21:39 +00:00

Summary

The backup verify CronJob was failing because the woodpecker CNPG cluster (2 days old) has no WAL archives yet. CNPG only archives a WAL segment once it fills (16 MB), so low-traffic databases can take days to produce one. Added a WAL directory existence check before the freshness check.

Changes

  • terraform/main.tf — Added a WAL directory existence check in the cnpg_backup_verify CronJob script. If no WAL archives exist yet for a prefix, the script logs SKIP: No WAL archives yet and continues without error. The base backup object presence check (already performed above) is sufficient verification for new clusters.
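The guard described above can be sketched as plain shell. This is a simplified stand-in: a freshly created temp directory replaces the `mc ls` call against `backup/postgres-wal/$PREFIX/wals/`, so the MinIO alias and bucket paths from the real script are not reproduced here.

```shell
#!/bin/sh
# Simplified sketch of the new guard: an empty WAL listing now produces
# SKIP instead of counting toward ERRORS. A freshly created temp dir
# stands in for the (empty) wals/ prefix of a new cluster.
set -eu

PREFIX="woodpecker"
WAL_DIR="$(mktemp -d)"   # stand-in for backup/postgres-wal/$PREFIX/wals/

# In the real script this is: /tmp/mc ls "backup/postgres-wal/$PREFIX/wals/"
WAL_EXISTS=$(ls -A "$WAL_DIR" 2>/dev/null | head -1 || true)

if [ -z "$WAL_EXISTS" ]; then
  echo "SKIP: No WAL archives yet for $PREFIX (new cluster, base backup only)"
else
  echo "OK: WAL archives present for $PREFIX"
fi
```

Because the guard runs after the base-backup object check and `continue`s on an empty listing, the freshness check below it is only ever reached for prefixes that have at least one archived segment.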

Test Plan

  • tofu plan shows only CronJob resource changing
  • Next 03:00 UTC CronJob run passes (or manual trigger via kubectl create job --from=cronjob/cnpg-backup-verify test-verify -n postgres)
  • pal-e-postgres prefix: WAL freshness check runs as before (has WAL segments)
  • woodpecker prefix: gets SKIP instead of ERROR (no WAL segments yet)

Review Checklist

  • tofu fmt -check passes
  • tofu validate passes
  • No unrelated changes — 8 lines added, zero lines removed

Related

  • Closes #92
  • todo-cnpg-backup-verify-failure — pal-e-docs TODO
  • deployment-lessons — lessons learned doc
fix: correct Ollama hostPath volume keys for otwld Helm chart
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
4c8eb04a73
PR #90 used extraVolumes/extraVolumeMounts, which don't exist in the
otwld/ollama-helm chart, causing duplicate volume name and mountPath
collisions. The fix uses the chart's actual keys: volumes/volumeMounts
for the hostPath mount at /ollama-models, extraEnv for OLLAMA_MODELS,
and persistentVolume.enabled=false to disable the default PVC.

The chart's emptyDir at /root/.ollama is harmless: it goes unused
because OLLAMA_MODELS points to /ollama-models (the hostPath mount).
Models were verified to survive a pod restart at /var/lib/ollama/models/.
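A values fragment matching this description might look like the following. This is a sketch reconstructed from the commit message, not the actual diff: the volume name "models" and the hostPath type are assumptions, while the key names and paths come from the text above.

```yaml
# Hypothetical otwld/ollama-helm values sketch matching the fix described
# above. The volume name "models" and "type: Directory" are assumptions.
persistentVolume:
  enabled: false            # disable the chart's default PVC

extraEnv:
  - name: OLLAMA_MODELS
    value: /ollama-models   # Ollama reads models from the hostPath mount

volumes:
  - name: models
    hostPath:
      path: /var/lib/ollama/models
      type: Directory

volumeMounts:
  - name: models
    mountPath: /ollama-models
```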

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
fix: skip WAL freshness check for new CNPG clusters without archives
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
d928782f81
New CNPG clusters (like woodpecker-db, 2 days old) haven't archived
any WAL segments yet — CNPG only archives when segments fill up (16MB).
The backup verify CronJob treated empty WAL directories as failures,
causing KubeJobFailed alerts even though base backups completed fine.

Add a WAL directory existence check before the freshness check. If no
WAL archives exist yet, log SKIP and continue — base backup presence
(already verified) is sufficient for new clusters.

Closes #92

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Tofu Plan Output

tailscale_acl.this: Refreshing state... [id=acl]
data.kubernetes_namespace_v1.pal_e_docs: Reading...
kubernetes_namespace_v1.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_namespace_v1.monitoring: Refreshing state... [id=monitoring]
kubernetes_namespace_v1.postgres: Refreshing state... [id=postgres]
kubernetes_namespace_v1.tailscale: Refreshing state... [id=tailscale]
kubernetes_namespace_v1.minio: Refreshing state... [id=minio]
kubernetes_namespace_v1.cnpg_system: Refreshing state... [id=cnpg-system]
kubernetes_namespace_v1.harbor: Refreshing state... [id=harbor]
helm_release.nvidia_device_plugin: Refreshing state... [id=nvidia-device-plugin]
data.kubernetes_namespace_v1.pal_e_docs: Read complete after 0s [id=pal-e-docs]
kubernetes_namespace_v1.keycloak: Refreshing state... [id=keycloak]
data.kubernetes_namespace_v1.tofu_state: Reading...
kubernetes_namespace_v1.ollama: Refreshing state... [id=ollama]
kubernetes_namespace_v1.forgejo: Refreshing state... [id=forgejo]
kubernetes_service_v1.embedding_worker_metrics: Refreshing state... [id=pal-e-docs/embedding-worker-metrics]
kubernetes_secret_v1.paledocs_db_url: Refreshing state... [id=pal-e-docs/paledocs-db-url]
kubernetes_config_map_v1.uptime_dashboard: Refreshing state... [id=monitoring/uptime-dashboard]
data.kubernetes_namespace_v1.tofu_state: Read complete after 0s [id=tofu-state]
helm_release.tailscale_operator: Refreshing state... [id=tailscale-operator]
helm_release.loki_stack: Refreshing state... [id=loki-stack]
kubernetes_service_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_secret_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
helm_release.kube_prometheus_stack: Refreshing state... [id=kube-prometheus-stack]
kubernetes_secret_v1.woodpecker_db_credentials: Refreshing state... [id=woodpecker/woodpecker-db-credentials]
helm_release.cnpg: Refreshing state... [id=cnpg]
kubernetes_manifest.netpol_monitoring: Refreshing state...
kubernetes_manifest.netpol_harbor: Refreshing state...
kubernetes_manifest.netpol_minio: Refreshing state...
kubernetes_manifest.netpol_postgres: Refreshing state...
kubernetes_manifest.netpol_cnpg_system: Refreshing state...
kubernetes_manifest.netpol_woodpecker: Refreshing state...
kubernetes_role_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_service_account_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_secret_v1.keycloak_admin: Refreshing state... [id=keycloak/keycloak-admin]
kubernetes_persistent_volume_claim_v1.keycloak_data: Refreshing state... [id=keycloak/keycloak-data]
kubernetes_service_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
helm_release.forgejo: Refreshing state... [id=forgejo]
kubernetes_manifest.netpol_keycloak: Refreshing state...
helm_release.ollama: Refreshing state... [id=ollama]
kubernetes_role_binding_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_manifest.netpol_forgejo: Refreshing state...
kubernetes_manifest.netpol_ollama: Refreshing state...
kubernetes_deployment_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_ingress_v1.keycloak_funnel: Refreshing state... [id=keycloak/keycloak-funnel]
kubernetes_config_map_v1.grafana_loki_datasource: Refreshing state... [id=monitoring/grafana-loki-datasource]
kubernetes_config_map_v1.pal_e_docs_dashboard: Refreshing state... [id=monitoring/pal-e-docs-dashboard]
kubernetes_ingress_v1.alertmanager_funnel: Refreshing state... [id=monitoring/alertmanager-funnel]
helm_release.blackbox_exporter: Refreshing state... [id=blackbox-exporter]
kubernetes_manifest.embedding_alerts: Refreshing state...
kubernetes_config_map_v1.dora_dashboard: Refreshing state... [id=monitoring/dora-dashboard]
helm_release.minio: Refreshing state... [id=minio]
kubernetes_deployment_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_manifest.blackbox_alerts: Refreshing state...
kubernetes_ingress_v1.grafana_funnel: Refreshing state... [id=monitoring/grafana-funnel]
kubernetes_manifest.dora_exporter_service_monitor: Refreshing state...
kubernetes_manifest.embedding_worker_service_monitor: Refreshing state...
helm_release.harbor: Refreshing state... [id=harbor]
kubernetes_ingress_v1.forgejo_funnel: Refreshing state... [id=forgejo/forgejo-funnel]
minio_iam_policy.cnpg_wal: Refreshing state... [id=cnpg-wal]
minio_s3_bucket.tf_state_backups: Refreshing state... [id=tf-state-backups]
minio_iam_policy.tf_backup: Refreshing state... [id=tf-backup]
kubernetes_ingress_v1.minio_funnel: Refreshing state... [id=minio/minio-funnel]
minio_s3_bucket.assets: Refreshing state... [id=assets]
kubernetes_ingress_v1.minio_api_funnel: Refreshing state... [id=minio/minio-api-funnel]
minio_iam_user.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_user.cnpg: Refreshing state... [id=cnpg]
minio_s3_bucket.postgres_wal: Refreshing state... [id=postgres-wal]
minio_iam_user_policy_attachment.cnpg: Refreshing state... [id=cnpg-20260302210642491000000001]
minio_iam_user_policy_attachment.tf_backup: Refreshing state... [id=tf-backup-20260314163610110100000001]
kubernetes_secret_v1.tf_backup_s3_creds: Refreshing state... [id=tofu-state/tf-backup-s3-creds]
kubernetes_secret_v1.woodpecker_cnpg_s3_creds: Refreshing state... [id=woodpecker/cnpg-s3-creds]
kubernetes_secret_v1.cnpg_s3_creds: Refreshing state... [id=postgres/cnpg-s3-creds]
kubernetes_cron_job_v1.tf_state_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_cron_job_v1.cnpg_backup_verify: Refreshing state... [id=postgres/cnpg-backup-verify]
kubernetes_manifest.woodpecker_postgres: Refreshing state...
kubernetes_ingress_v1.harbor_funnel: Refreshing state... [id=harbor/harbor-funnel]
helm_release.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_manifest.woodpecker_postgres_scheduled_backup: Refreshing state...
kubernetes_ingress_v1.woodpecker_funnel: Refreshing state... [id=woodpecker/woodpecker-funnel]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  + create
  ~ update in-place

OpenTofu will perform the following actions:

  # helm_release.woodpecker will be updated in-place
  ~ resource "helm_release" "woodpecker" {
        id                         = "woodpecker"
      ~ metadata                   = [
          - {
              - app_version    = "3.13.0"
              - chart          = "woodpecker"
              - first_deployed = 1773625582
              - last_deployed  = 1773710708
              - name           = "woodpecker"
              - namespace      = "woodpecker"
              - notes          = <<-EOT
                    1. Get the application URL by running these commands:
                      export POD_NAME=$(kubectl get pods --namespace woodpecker -l "app.kubernetes.io/name=server,app.kubernetes.io/instance=woodpecker" -o jsonpath="{.items[0].metadata.name}")
                      export CONTAINER_PORT=$(kubectl get pod --namespace woodpecker $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
                      echo "Visit http://127.0.0.1:8080 to use your application"
                      kubectl --namespace woodpecker port-forward $POD_NAME 8080:$CONTAINER_PORT
                EOT
              - revision       = 3
              - values         = jsonencode(
                    {
                      - agent  = {
                          - enabled      = true
                          - env          = {
                              - WOODPECKER_AGENT_SECRET              = "(sensitive value)"
                              - WOODPECKER_BACKEND                   = "kubernetes"
                              - WOODPECKER_BACKEND_K8S_NAMESPACE     = "woodpecker"
                              - WOODPECKER_BACKEND_K8S_STORAGE_CLASS = "local-path"
                              - WOODPECKER_BACKEND_K8S_VOLUME_SIZE   = "1Gi"
                            }
                          - replicaCount = 1
                          - resources    = {
                              - limits   = {
                                  - memory = "256Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "64Mi"
                                }
                            }
                        }
                      - server = {
                          - env              = {
                              - WOODPECKER_ADMIN               = "forgejo_admin"
                              - WOODPECKER_AGENT_SECRET        = "(sensitive value)"
                              - WOODPECKER_DATABASE_DATASOURCE = "(sensitive value)"
                              - WOODPECKER_DATABASE_DRIVER     = "postgres"
                              - WOODPECKER_FORGEJO             = "true"
                              - WOODPECKER_FORGEJO_CLIENT      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_CLONE_URL   = "http://forgejo-http.forgejo.svc.cluster.local:80"
                              - WOODPECKER_FORGEJO_SECRET      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_URL         = "https://forgejo.tail5b443a.ts.net"
                              - WOODPECKER_HOST                = "https://woodpecker.tail5b443a.ts.net"
                            }
                          - persistentVolume = {
                              - enabled      = true
                              - size         = "5Gi"
                              - storageClass = "local-path"
                            }
                          - resources        = {
                              - limits   = {
                                  - memory = "512Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "128Mi"
                                }
                            }
                          - statefulSet      = {
                              - replicaCount = 1
                            }
                        }
                    }
                )
              - version        = "3.5.1"
            },
        ] -> (known after apply)
        name                       = "woodpecker"
      ~ status                     = "pending-upgrade" -> "deployed"
        # (25 unchanged attributes hidden)

      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }

        # (2 unchanged blocks hidden)
    }

  # kubernetes_cron_job_v1.cnpg_backup_verify will be updated in-place
  ~ resource "kubernetes_cron_job_v1" "cnpg_backup_verify" {
        id = "postgres/cnpg-backup-verify"

      ~ spec {
            # (6 unchanged attributes hidden)

          ~ job_template {
              ~ spec {
                    # (7 unchanged attributes hidden)

                  ~ template {
                      ~ spec {
                            # (12 unchanged attributes hidden)

                          ~ container {
                              ~ args                       = [
                                  - <<-EOT
                                        set -euo pipefail
                                        
                                        apk add --no-cache curl >/dev/null
                                        
                                        # Install mc (MinIO Client)
                                        curl -sSL https://dl.min.io/client/mc/release/linux-amd64/mc -o /tmp/mc
                                        chmod +x /tmp/mc
                                        
                                        # Configure MinIO alias
                                        /tmp/mc alias set backup http://minio.minio.svc.cluster.local:9000 "$ACCESS_KEY_ID" "$ACCESS_SECRET_KEY"
                                        
                                        ERRORS=0
                                        MAX_AGE_HOURS=25  # Allow 1h buffer beyond 24h
                                        
                                        # Check each backup path prefix
                                        for PREFIX in "pal-e-postgres" "woodpecker"; do
                                          echo "=== Checking backups for $PREFIX ==="
                                        
                                          # List objects in the backup path
                                          OBJECTS=$(/tmp/mc ls "backup/postgres-wal/$PREFIX/" 2>/dev/null | head -5 || true)
                                        
                                          if [ -z "$OBJECTS" ]; then
                                            echo "ERROR: No backup objects found for $PREFIX"
                                            ERRORS=$((ERRORS + 1))
                                            continue
                                          fi
                                        
                                          echo "Found backup objects for $PREFIX:"
                                          echo "$OBJECTS"
                                        
                                          # Check WAL directory for recent files
                                          RECENT=$(/tmp/mc find "backup/postgres-wal/$PREFIX/wals/" --newer-than "${MAX_AGE_HOURS}h" 2>/dev/null | head -1 || true)
                                        
                                          if [ -z "$RECENT" ]; then
                                            echo "WARNING: No WAL files newer than ${MAX_AGE_HOURS}h for $PREFIX"
                                            ERRORS=$((ERRORS + 1))
                                          else
                                            echo "OK: Recent WAL files found for $PREFIX"
                                          fi
                                        done
                                        
                                        if [ "$ERRORS" -gt 0 ]; then
                                          echo "FAILED: $ERRORS backup verification errors"
                                          exit 1
                                        fi
                                        
                                        echo "All backup verifications passed."
                                    EOT,
                                  + <<-EOT
                                        set -euo pipefail
                                        
                                        apk add --no-cache curl >/dev/null
                                        
                                        # Install mc (MinIO Client)
                                        curl -sSL https://dl.min.io/client/mc/release/linux-amd64/mc -o /tmp/mc
                                        chmod +x /tmp/mc
                                        
                                        # Configure MinIO alias
                                        /tmp/mc alias set backup http://minio.minio.svc.cluster.local:9000 "$ACCESS_KEY_ID" "$ACCESS_SECRET_KEY"
                                        
                                        ERRORS=0
                                        MAX_AGE_HOURS=25  # Allow 1h buffer beyond 24h
                                        
                                        # Check each backup path prefix
                                        for PREFIX in "pal-e-postgres" "woodpecker"; do
                                          echo "=== Checking backups for $PREFIX ==="
                                        
                                          # List objects in the backup path
                                          OBJECTS=$(/tmp/mc ls "backup/postgres-wal/$PREFIX/" 2>/dev/null | head -5 || true)
                                        
                                          if [ -z "$OBJECTS" ]; then
                                            echo "ERROR: No backup objects found for $PREFIX"
                                            ERRORS=$((ERRORS + 1))
                                            continue
                                          fi
                                        
                                          echo "Found backup objects for $PREFIX:"
                                          echo "$OBJECTS"
                                        
                                          # Check if WAL directory has content (new clusters may not have archived WALs yet)
                                          WAL_EXISTS=$(/tmp/mc ls "backup/postgres-wal/$PREFIX/wals/" 2>/dev/null | head -1 || true)
                                        
                                          if [ -z "$WAL_EXISTS" ]; then
                                            echo "SKIP: No WAL archives yet for $PREFIX (new cluster, base backup only)"
                                            continue
                                          fi
                                        
                                          # Check WAL directory for recent files
                                          RECENT=$(/tmp/mc find "backup/postgres-wal/$PREFIX/wals/" --newer-than "${MAX_AGE_HOURS}h" 2>/dev/null | head -1 || true)
                                        
                                          if [ -z "$RECENT" ]; then
                                            echo "WARNING: No WAL files newer than ${MAX_AGE_HOURS}h for $PREFIX"
                                            ERRORS=$((ERRORS + 1))
                                          else
                                            echo "OK: Recent WAL files found for $PREFIX"
                                          fi
                                        done
                                        
                                        if [ "$ERRORS" -gt 0 ]; then
                                          echo "FAILED: $ERRORS backup verification errors"
                                          exit 1
                                        fi
                                        
                                        echo "All backup verifications passed."
                                    EOT,
                                ]
                                name                       = "verify"
                                # (8 unchanged attributes hidden)

                                # (3 unchanged blocks hidden)
                            }
                        }

                        # (1 unchanged block hidden)
                    }
                }

                # (1 unchanged block hidden)
            }
        }

        # (1 unchanged block hidden)
    }

  # kubernetes_manifest.netpol_cnpg_system will be created
  + resource "kubernetes_manifest" "netpol_cnpg_system" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "cnpg-system"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "kube-system"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "cnpg-system"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "kube-system"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_forgejo will be created
  + resource "kubernetes_manifest" "netpol_forgejo" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "forgejo"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "forgejo"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_harbor will be created
  + resource "kubernetes_manifest" "netpol_harbor" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "harbor"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "harbor"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "harbor"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "harbor"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_keycloak will be created
  + resource "kubernetes_manifest" "netpol_keycloak" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "keycloak"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "keycloak"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_minio will be created
  + resource "kubernetes_manifest" "netpol_minio" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "minio"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "postgres"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tofu-state"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "minio"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "postgres"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "woodpecker"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tofu-state"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_monitoring will be created
  + resource "kubernetes_manifest" "netpol_monitoring" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "monitoring"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                            },
                        ]
                    },
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "monitoring"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "tailscale"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "monitoring"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known after apply)
                  + matchLabels      = (known after apply)
                }
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
    }

  # kubernetes_manifest.netpol_ollama will be created
  + resource "kubernetes_manifest" "netpol_ollama" {
      + manifest = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + name      = "default-deny-ingress"
              + namespace = "ollama"
            }
          + spec       = {
              + ingress     = [
                  + {
                      + from = [
                          + {
                              + namespaceSelector = {
                                  + matchLabels = {
                                      + "kubernetes.io/metadata.name" = "pal-e-docs"
                                    }
                                }
                            },
                        ]
                    },
                ]
              + podSelector = {}
              + policyTypes = [
                  + "Ingress",
                ]
            }
        }
      + object   = {
          + apiVersion = "networking.k8s.io/v1"
          + kind       = "NetworkPolicy"
          + metadata   = {
              + annotations                = (known after apply)
              + creationTimestamp          = (known after apply)
              + deletionGracePeriodSeconds = (known after apply)
              + deletionTimestamp          = (known after apply)
              + finalizers                 = (known after apply)
              + generateName               = (known after apply)
              + generation                 = (known after apply)
              + labels                     = (known after apply)
              + managedFields              = (known after apply)
              + name                       = "default-deny-ingress"
              + namespace                  = "ollama"
              + ownerReferences            = (known after apply)
              + resourceVersion            = (known after apply)
              + selfLink                   = (known after apply)
              + uid                        = (known after apply)
            }
          + spec       = {
              + egress      = (known after apply)
              + ingress     = [
                  + {
                      + from  = [
                          + {
                              + ipBlock           = {
                                  + cidr   = (known after apply)
                                  + except = (known after apply)
                                }
                              + namespaceSelector = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = {
                                      + "kubernetes.io/metadata.name" = "pal-e-docs"
                                    }
                                }
                              + podSelector       = {
                                  + matchExpressions = (known after apply)
                                  + matchLabels      = (known after apply)
                                }
                            },
                        ]
                      + ports = (known after apply)
                    },
                ]
              + podSelector = {
                  + matchExpressions = (known ...(truncated)
PR #93 Review

DOMAIN REVIEW

Tech stack: Terraform (HCL) + Kubernetes CronJob with inline shell script.

Terraform/k8s specifics reviewed:

  • Shell script correctness: The WAL existence check is well-structured. set -euo pipefail is set at the top of the script, and the new mc ls call correctly uses 2>/dev/null | head -1 || true to prevent the pipeline from exiting on a missing directory. The continue statement correctly skips to the next PREFIX iteration without incrementing the error counter.

  • Edge case handling: Both scenarios are covered -- (a) WAL directory does not exist (mc ls errors, suppressed, returns empty) and (b) WAL directory exists but is empty (mc ls returns nothing). Both correctly result in SKIP.

  • No injection risk: The $PREFIX variable iterates over a hardcoded list ("pal-e-postgres" "woodpecker"), not user-supplied input. No injection surface.

  • Secrets handling: Credentials are sourced from kubernetes_secret_v1.cnpg_s3_creds via value_from.secret_key_ref. No plaintext secrets.

  • Resource limits: Already set on the container (50m/64Mi requests, 128Mi memory limit). No change needed.

  • Image tag: alpine:3.20 is pinned to a minor version. Acceptable for a verification CronJob, though a SHA pin would be stricter.
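The guarded-pipeline behavior described in the first two bullets can be sketched in isolation. This is a minimal reproduction, not the PR's script: the mc binary is stubbed with a bash function that fails with no output, simulating a missing wals/ prefix.

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stub standing in for the real MinIO client: listing a missing prefix
# exits non-zero with no stdout, like the real `mc ls` does.
mc() { return 1; }

PREFIX="woodpecker"  # illustrative prefix

# Without `|| true`, the failing `mc` would make the whole pipeline
# fail under `pipefail`, and `set -e` would abort the script. With it,
# the assignment succeeds and WAL_EXISTS is simply empty.
WAL_EXISTS=$(mc ls "backup/postgres-wal/$PREFIX/wals/" 2>/dev/null | head -1 || true)

if [ -z "$WAL_EXISTS" ]; then
  echo "SKIP: No WAL archives yet for $PREFIX (new cluster, base backup only)"
else
  echo "OK: WAL archives present for $PREFIX"
fi
```

The same branch covers an existing-but-empty prefix: a successful mc ls with no objects also yields an empty string, so both edge cases funnel into the one SKIP path.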

Diff scope note: The Forgejo diff shows Ollama volume changes (hostPath rename, OLLAMA_MODELS env var) alongside the CNPG fix. These Ollama changes are already on main via PR #91 (commit 4c8eb04). The diff is a merge-base computation artifact, not scope creep. The net-new change in this PR is solely the 8-line WAL existence check block at lines 2314-2320.

One observation on silent skip behavior: A mature cluster that loses its WAL archives due to storage corruption would be silently skipped rather than flagged. The base backup object check still catches total backup loss, so the risk is low. Worth noting in deployment-lessons that the verify script trusts "no WAL = new cluster" -- if a cluster is known to have had WAL archives previously, manual investigation would be warranted. This is not a blocker, just a design-awareness note.
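One way to encode that design awareness later, sketched purely as a hypothetical hardening (none of these names exist in the PR; cluster_age_days is a stand-in for real cluster metadata such as the CNPG cluster's creation timestamp), is to accept the SKIP only while the cluster is inside a WAL grace window:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Hypothetical guard: "no WALs" is only legitimate for a young cluster.
GRACE_DAYS=7
cluster_age_days() { echo 2; }  # stub: pretend the cluster is 2 days old

AGE=$(cluster_age_days)
if [ "$AGE" -le "$GRACE_DAYS" ]; then
  echo "SKIP: cluster is ${AGE}d old, within ${GRACE_DAYS}d WAL grace period"
else
  echo "ERROR: no WAL archives but cluster is ${AGE}d old"
  # the real script would increment ERRORS here instead of skipping
fi
```

An older cluster with a missing wals/ prefix would then surface as an error rather than a silent SKIP, at the cost of plumbing cluster age into the verify job.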

BLOCKERS

None.

  • No new functionality requiring test coverage (this is an IaC shell script fix; manual verification via kubectl create job is the appropriate test method).
  • No unvalidated user input.
  • No secrets or credentials in code.
  • No DRY violations in auth/security paths.

NITS

  1. Comment precision (line 2314): The comment says "Check if WAL directory has content" but it actually checks if the wals/ subdirectory has any listing output. Minor wording -- no functional impact.

  2. alpine image tag: alpine:3.20 is pinned to minor but not patch. Consider alpine:3.20.x or a digest pin for fully reproducible builds. Low priority for a verification CronJob.

SOP COMPLIANCE

  • Branch named after issue (92-fix-backup-verify-new-clusters references #92)
  • PR body has Summary, Changes, Test Plan, Related sections
  • Related section references todo-cnpg-backup-verify-failure (TODO-driven fix, no plan slug -- acceptable)
  • No secrets committed
  • No unnecessary file changes (Ollama diff is merge-base artifact, not scope creep)
  • Commit message is descriptive (fix: skip WAL freshness check for new CNPG clusters without archives)
  • tofu fmt and tofu validate confirmed passing

PROCESS OBSERVATIONS

  • MTTR: This fix directly addresses a CronJob failure detected in production. Quick turnaround from detection to fix demonstrates good incident response. The TODO-driven workflow (no formal plan) is appropriate for a targeted bugfix of this scope.
  • Change failure risk: Low. The change is additive (8 lines added, 0 removed from the CronJob logic). The worst-case failure mode is a false SKIP on a cluster that should have WAL archives, which is caught by the base backup check layer above.
  • Documentation: PR body references deployment-lessons for lessons learned capture. The silent-skip design observation above would be a good addition there.

VERDICT: APPROVED

forgejo_admin deleted branch 92-fix-backup-verify-new-clusters 2026-03-17 02:21:39 +00:00