fix: wire all Woodpecker secrets through terraform helm values #85

Merged
forgejo_admin merged 2 commits from 84-fix-wire-all-woodpecker-secrets-through into main 2026-03-16 01:22:28 +00:00

Summary

Wire all Woodpecker secrets through Terraform helm values so tofu apply never breaks Woodpecker again. Adds the missing WOODPECKER_ENCRYPTION_KEY server env var and adds all three Woodpecker secrets (woodpecker_db_password, woodpecker_agent_secret, woodpecker_encryption_key) to the Makefile TF_SECRET_VARS list so make tofu-secrets renders them from Salt pillar.

Changes

  • terraform/main.tf — Add WOODPECKER_ENCRYPTION_KEY as a set_sensitive block on the Woodpecker server. This prevents JWT/token invalidation on DB migration by providing a persistent encryption key.
  • terraform/variables.tf — Declare woodpecker_encryption_key variable (sensitive string).
  • Makefile — Add woodpecker_db_password, woodpecker_agent_secret, woodpecker_encryption_key to TF_SECRET_VARS so make tofu-secrets renders them from encrypted Salt pillar into secrets.auto.tfvars.
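As a sketch, the Makefile change described above might look like the following (the list name comes from the PR; the surrounding render recipe is assumed and elided):

```make
# Makefile — sketch only: secret variables that `make tofu-secrets`
# renders from encrypted Salt pillar into secrets.auto.tfvars.
# (Pre-existing entries elided; appending is assumed.)
TF_SECRET_VARS += \
	woodpecker_db_password \
	woodpecker_agent_secret \
	woodpecker_encryption_key
```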

Note: WOODPECKER_DATABASE_DATASOURCE already interpolates ${var.woodpecker_db_password} and WOODPECKER_AGENT_SECRET is already wired via set_sensitive on both server and agent — those were fixed in a prior commit on this branch.
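A minimal sketch of the Terraform side of this wiring, assuming the helm provider's conventional `set_sensitive` block shape (exact placement within `terraform/main.tf` is not shown on this PR page):

```hcl
# terraform/variables.tf — sketch, as described in the PR
variable "woodpecker_encryption_key" {
  description = "Persistent secrets-encryption key for the Woodpecker server"
  type        = string
  sensitive   = true
}

# terraform/main.tf — sketch: pass the key into the helm release
resource "helm_release" "woodpecker" {
  # ... chart, namespace, values ...

  set_sensitive {
    name  = "server.env.WOODPECKER_ENCRYPTION_KEY"
    value = var.woodpecker_encryption_key
  }
}
```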

tofu plan Output

Not included — worktree lacks state file and k3s.tfvars. tofu validate passes. tofu fmt produces no changes. Operator should run tofu plan -lock=false against real state before applying.

Test Plan

  • tofu validate — passes
  • tofu fmt -recursive — no changes
  • Verify WOODPECKER_ENCRYPTION_KEY appears in helm values with set_sensitive
  • Verify all 3 secrets appear in TF_SECRET_VARS in Makefile
  • Run tofu plan -lock=false on the server to confirm only Woodpecker helm release changes

Review Checklist

  • [x] tofu fmt produces no changes
  • [x] tofu validate passes
  • [x] No unrelated changes in diff
  • [ ] tofu plan -lock=false reviewed on server (operator step)
  • [ ] woodpecker_encryption_key value added to k3s.tfvars (operator step)
  • [ ] Salt pillar updated with woodpecker_encryption_key (operator step)

Related

  • Plan: plan-pal-e-platform → Phase 17a (Woodpecker Secrets Hardening)
  • Forgejo issue: #84

Closes #84

fix: wire all Woodpecker secrets through terraform helm values
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
222e5752a8
Add WOODPECKER_ENCRYPTION_KEY to server env via set_sensitive (prevents
JWT/token invalidation on DB migration). Add woodpecker_db_password,
woodpecker_agent_secret, and woodpecker_encryption_key to Makefile
TF_SECRET_VARS so make tofu-secrets renders them from Salt pillar.

The DB password interpolation and agent secret wiring were already in
place — this PR completes the picture with the encryption key and
the Makefile gap.

Closes #84

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author
Owner

Self-review: LGTM. 3 files changed, 14 additions, 1 deletion. All acceptance criteria verified:

  • WOODPECKER_ENCRYPTION_KEY added as set_sensitive on server
  • woodpecker_encryption_key variable declared (sensitive string)
  • All 3 secrets added to Makefile TF_SECRET_VARS
  • tofu validate passes, tofu fmt clean
  • DB password and agent secret were already wired in prior commits on this branch

Operator steps before apply: add woodpecker_encryption_key value to k3s.tfvars, update Salt pillar, run tofu plan -lock=false to verify.

fix: add woodpecker_db_password + agent_secret to Makefile TF_SECRET_VARS
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
2a6a76e491
These variables exist in Salt pillar and are already referenced in
terraform main.tf, but were missing from TF_SECRET_VARS. This meant
make tofu-secrets never rendered values for them, causing empty
password in WOODPECKER_DATABASE_DATASOURCE and missing agent secret.
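The failure mode described here can be sketched in HCL: with `woodpecker_db_password` never rendered into `secrets.auto.tfvars` (presumably defaulting to an empty string), the interpolated password segment collapses, producing the passwordless `woodpecker:@` DSN visible in the plan output further down. A sketch under those assumptions:

```hcl
# terraform/main.tf — sketch of the interpolation described above.
# If woodpecker_db_password is missing from secrets.auto.tfvars (e.g.
# defaults to ""), the rendered value becomes
# "postgres://woodpecker:@woodpecker-db-rw..." — an empty password.
set_sensitive {
  name  = "server.env.WOODPECKER_DATABASE_DATASOURCE"
  value = "postgres://woodpecker:${var.woodpecker_db_password}@woodpecker-db-rw.woodpecker.svc.cluster.local:5432/woodpecker?sslmode=disable"
}
```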

Stripped WOODPECKER_ENCRYPTION_KEY — verified via official docs that
this env var does NOT control JWT signing. JWT signing key is
auto-generated in server_configs table, not externalizable.

Closes #84

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Author
Owner

Tofu Plan Output

data.kubernetes_namespace_v1.tofu_state: Reading...
kubernetes_namespace_v1.ollama: Refreshing state... [id=ollama]
kubernetes_namespace_v1.tailscale: Refreshing state... [id=tailscale]
kubernetes_namespace_v1.harbor: Refreshing state... [id=harbor]
kubernetes_namespace_v1.monitoring: Refreshing state... [id=monitoring]
kubernetes_namespace_v1.forgejo: Refreshing state... [id=forgejo]
kubernetes_namespace_v1.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_namespace_v1.minio: Refreshing state... [id=minio]
helm_release.nvidia_device_plugin: Refreshing state... [id=nvidia-device-plugin]
tailscale_acl.this: Refreshing state... [id=acl]
data.kubernetes_namespace_v1.tofu_state: Read complete after 0s [id=tofu-state]
kubernetes_namespace_v1.keycloak: Refreshing state... [id=keycloak]
data.kubernetes_namespace_v1.pal_e_docs: Reading...
kubernetes_namespace_v1.postgres: Refreshing state... [id=postgres]
kubernetes_namespace_v1.cnpg_system: Refreshing state... [id=cnpg-system]
kubernetes_service_account_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_role_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
data.kubernetes_namespace_v1.pal_e_docs: Read complete after 0s [id=pal-e-docs]
helm_release.forgejo: Refreshing state... [id=forgejo]
kubernetes_service_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_config_map_v1.uptime_dashboard: Refreshing state... [id=monitoring/uptime-dashboard]
helm_release.loki_stack: Refreshing state... [id=loki-stack]
helm_release.kube_prometheus_stack: Refreshing state... [id=kube-prometheus-stack]
kubernetes_secret_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
helm_release.tailscale_operator: Refreshing state... [id=tailscale-operator]
kubernetes_secret_v1.woodpecker_db_credentials: Refreshing state... [id=woodpecker/woodpecker-db-credentials]
kubernetes_manifest.netpol_ollama: Refreshing state...
kubernetes_manifest.netpol_forgejo: Refreshing state...
kubernetes_manifest.netpol_minio: Refreshing state...
kubernetes_manifest.netpol_harbor: Refreshing state...
kubernetes_manifest.netpol_woodpecker: Refreshing state...
kubernetes_manifest.netpol_monitoring: Refreshing state...
kubernetes_service_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_secret_v1.keycloak_admin: Refreshing state... [id=keycloak/keycloak-admin]
kubernetes_persistent_volume_claim_v1.keycloak_data: Refreshing state... [id=keycloak/keycloak-data]
kubernetes_secret_v1.paledocs_db_url: Refreshing state... [id=pal-e-docs/paledocs-db-url]
kubernetes_manifest.netpol_keycloak: Refreshing state...
helm_release.cnpg: Refreshing state... [id=cnpg]
kubernetes_role_binding_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
helm_release.ollama: Refreshing state... [id=ollama]
kubernetes_manifest.netpol_cnpg_system: Refreshing state...
kubernetes_manifest.netpol_postgres: Refreshing state...
kubernetes_deployment_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_ingress_v1.forgejo_funnel: Refreshing state... [id=forgejo/forgejo-funnel]
kubernetes_ingress_v1.keycloak_funnel: Refreshing state... [id=keycloak/keycloak-funnel]
kubernetes_config_map_v1.grafana_loki_datasource: Refreshing state... [id=monitoring/grafana-loki-datasource]
kubernetes_ingress_v1.alertmanager_funnel: Refreshing state... [id=monitoring/alertmanager-funnel]
kubernetes_config_map_v1.pal_e_docs_dashboard: Refreshing state... [id=monitoring/pal-e-docs-dashboard]
kubernetes_ingress_v1.grafana_funnel: Refreshing state... [id=monitoring/grafana-funnel]
kubernetes_config_map_v1.dora_dashboard: Refreshing state... [id=monitoring/dora-dashboard]
helm_release.minio: Refreshing state... [id=minio]
helm_release.harbor: Refreshing state... [id=harbor]
helm_release.blackbox_exporter: Refreshing state... [id=blackbox-exporter]
kubernetes_manifest.blackbox_alerts: Refreshing state...
kubernetes_deployment_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_manifest.dora_exporter_service_monitor: Refreshing state...
minio_iam_policy.cnpg_wal: Refreshing state... [id=cnpg-wal]
minio_s3_bucket.tf_state_backups: Refreshing state... [id=tf-state-backups]
minio_s3_bucket.assets: Refreshing state... [id=assets]
minio_iam_policy.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_user.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_user.cnpg: Refreshing state... [id=cnpg]
minio_s3_bucket.postgres_wal: Refreshing state... [id=postgres-wal]
kubernetes_ingress_v1.minio_api_funnel: Refreshing state... [id=minio/minio-api-funnel]
kubernetes_ingress_v1.minio_funnel: Refreshing state... [id=minio/minio-funnel]
minio_iam_user_policy_attachment.tf_backup: Refreshing state... [id=tf-backup-20260314163610110100000001]
minio_iam_user_policy_attachment.cnpg: Refreshing state... [id=cnpg-20260302210642491000000001]
kubernetes_secret_v1.woodpecker_cnpg_s3_creds: Refreshing state... [id=woodpecker/cnpg-s3-creds]
kubernetes_secret_v1.tf_backup_s3_creds: Refreshing state... [id=tofu-state/tf-backup-s3-creds]
kubernetes_secret_v1.cnpg_s3_creds: Refreshing state... [id=postgres/cnpg-s3-creds]
kubernetes_cron_job_v1.tf_state_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_cron_job_v1.cnpg_backup_verify: Refreshing state... [id=postgres/cnpg-backup-verify]
kubernetes_manifest.woodpecker_postgres: Refreshing state...
kubernetes_ingress_v1.harbor_funnel: Refreshing state... [id=harbor/harbor-funnel]
helm_release.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_ingress_v1.woodpecker_funnel: Refreshing state... [id=woodpecker/woodpecker-funnel]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

OpenTofu will perform the following actions:

  # helm_release.woodpecker will be updated in-place
  ~ resource "helm_release" "woodpecker" {
        id                         = "woodpecker"
      ~ metadata                   = [
          - {
              - app_version    = "3.13.0"
              - chart          = "woodpecker"
              - first_deployed = 1771568949
              - last_deployed  = 1773605250
              - name           = "woodpecker"
              - namespace      = "woodpecker"
              - notes          = <<-EOT
                    1. Get the application URL by running these commands:
                      export POD_NAME=$(kubectl get pods --namespace woodpecker -l "app.kubernetes.io/name=server,app.kubernetes.io/instance=woodpecker" -o jsonpath="{.items[0].metadata.name}")
                      export CONTAINER_PORT=$(kubectl get pod --namespace woodpecker $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
                      echo "Visit http://127.0.0.1:8080 to use your application"
                      kubectl --namespace woodpecker port-forward $POD_NAME 8080:$CONTAINER_PORT
                EOT
              - revision       = 12
              - values         = jsonencode(
                    {
                      - agent  = {
                          - enabled      = true
                          - env          = {
                              - WOODPECKER_AGENT_SECRET              = "(sensitive value)"
                              - WOODPECKER_BACKEND                   = "kubernetes"
                              - WOODPECKER_BACKEND_K8S_NAMESPACE     = "woodpecker"
                              - WOODPECKER_BACKEND_K8S_STORAGE_CLASS = "local-path"
                              - WOODPECKER_BACKEND_K8S_VOLUME_SIZE   = "1Gi"
                            }
                          - replicaCount = 1
                          - resources    = {
                              - limits   = {
                                  - memory = "256Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "64Mi"
                                }
                            }
                        }
                      - server = {
                          - env              = {
                              - WOODPECKER_ADMIN               = "forgejo_admin"
                              - WOODPECKER_AGENT_SECRET        = "(sensitive value)"
                              - WOODPECKER_DATABASE_DATASOURCE = "postgres://woodpecker:@woodpecker-db-rw.woodpecker.svc.cluster.local:5432/woodpecker?sslmode=disable"
                              - WOODPECKER_DATABASE_DRIVER     = "postgres"
                              - WOODPECKER_FORGEJO             = "true"
                              - WOODPECKER_FORGEJO_CLIENT      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_CLONE_URL   = "http://forgejo-http.forgejo.svc.cluster.local:80"
                              - WOODPECKER_FORGEJO_SECRET      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_URL         = "https://forgejo.tail5b443a.ts.net"
                              - WOODPECKER_HOST                = "https://woodpecker.tail5b443a.ts.net"
                            }
                          - persistentVolume = {
                              - enabled      = true
                              - size         = "5Gi"
                              - storageClass = "local-path"
                            }
                          - resources        = {
                              - limits   = {
                                  - memory = "512Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "128Mi"
                                }
                            }
                          - statefulSet      = {
                              - replicaCount = 1
                            }
                        }
                    }
                )
              - version        = "3.5.1"
            },
        ] -> (known after apply)
        name                       = "woodpecker"
      ~ status                     = "failed" -> "deployed"
        # (26 unchanged attributes hidden)

      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }

        # (2 unchanged blocks hidden)
    }

  # kubernetes_secret_v1.dora_exporter will be updated in-place
  ~ resource "kubernetes_secret_v1" "dora_exporter" {
      ~ data                           = (sensitive value)
        id                             = "monitoring/dora-exporter"
        # (3 unchanged attributes hidden)

        # (1 unchanged block hidden)
    }

Plan: 0 to add, 2 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so OpenTofu can't
guarantee to take exactly these actions if you run "tofu apply" now.
forgejo_admin deleted branch 84-fix-wire-all-woodpecker-secrets-through 2026-03-16 01:22:28 +00:00