fix: remove invalid Slack receiver from alertmanager config #83

Merged
forgejo_admin merged 1 commit from 82-fix-remove-invalid-slack-receiver-from-a into main 2026-03-15 19:08:06 +00:00

Summary

The Slack receiver in the kube-prometheus-stack alertmanager config had api_url: ' ' (a single space), which caused the Prometheus Operator to fail reconciliation roughly every 3 minutes with "unsupported scheme for URL". This removes the Slack receiver entirely, since Telegram is the only active notification channel.
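
For context, this is the receiver as it stood, reconstructed in HCL map form from the plan output in the comments below; the surrounding receivers expression in main.tf is an assumption:

  receivers = [
    # ... "default" and "telegram" receivers ...
    {
      name          = "slack"
      slack_configs = [
        {
          api_url       = " " # single space -- not a URL, hence "unsupported scheme for URL"
          channel       = "#alerts"
          send_resolved = true
          title         = "{{ .GroupLabels.alertname }}"
          text          = <<-EOT
            {{ range .Alerts }}*{{ .Annotations.summary }}*
            {{ .Annotations.description }}
            {{ end }}
          EOT
        },
      ]
    },
  ]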

Changes

  • terraform/main.tf — Removed the slack receiver from alertmanager.config.receivers, simplifying the concat expression to a plain list with just default and telegram (sketched after this list). Removed the conditional routes block that routed critical/warning alerts to slack. Removed the dynamic set_sensitive block for slack_configs[0].api_url.
  • terraform/variables.tf — Removed the slack_webhook_url variable declaration (had default = "", no longer referenced).
  • .woodpecker.yaml — Removed TF_VAR_slack_webhook_url from both the validate and apply pipeline steps.
  • Makefile — Removed slack_webhook_url from TF_SECRET_VARS list.
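
A minimal before/after sketch of the main.tf and variables.tf change; the local value names (local.default_receiver, local.telegram_receiver, local.slack_receiver) are illustrative assumptions, not the repository's actual identifiers:

  # Before: slack appended conditionally, with a sub-route gated on the same var
  receivers = concat(
    [local.default_receiver, local.telegram_receiver],
    var.slack_webhook_url != "" ? [local.slack_receiver] : []
  )
  routes = var.slack_webhook_url != "" ? [
    {
      matchers = ["severity=~\"critical|warning\""]
      receiver = "slack"
    },
  ] : []

  # After: plain list, no conditional logic
  receivers = [local.default_receiver, local.telegram_receiver]
  routes    = []

  # variables.tf: the now-unreferenced declaration that was removed
  # variable "slack_webhook_url" {
  #   type    = string
  #   default = ""
  # }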

Salt pillar entries (secrets_registry.sls, platform.sls) are intentionally retained as historical backup records.

tofu plan Output

Plan: 0 to add, 2 to change, 0 to destroy.

# helm_release.kube_prometheus_stack will be updated in-place
  - Removes "slack" receiver and its slack_configs from alertmanager.config.receivers
  - Changes routes from [{receiver: "slack", matchers: [...]}] to []
  - Removes one set_sensitive block (slack api_url)

# kubernetes_secret_v1.dora_exporter will be updated in-place
  (unrelated state drift — write-only attributes)

Test Plan

  • tofu fmt -recursive -- passed, no manual formatting needed
  • tofu validate -- passed
  • tofu plan -lock=false -- confirms 0 to add, 2 to change, 0 to destroy
  • After apply: verify alertmanager config shows no slack receiver
  • Verify Prometheus Operator logs no longer show "unsupported scheme for URL" errors

Review Checklist

  • tofu fmt -recursive passed
  • tofu validate passed
  • tofu plan -lock=false output reviewed -- no unexpected changes
  • No remaining references to slack_webhook_url in terraform or CI (Salt pillar retained as backup)
  • Telegram receiver and default receiver unchanged (see the routing sketch below)
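
For reference, the routing section as it renders after this change, in the HCL map form shown by the plan output in the comments below:

  route = {
    group_by        = ["alertname", "namespace"]
    group_wait      = "30s"
    group_interval  = "5m"
    repeat_interval = "12h"
    receiver        = "telegram" # top-level receiver, unchanged
    routes          = []         # slack sub-route removed
  }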
Related

  • Closes #82
fix: remove invalid Slack receiver from alertmanager config
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
758b0ac9ff
The Slack receiver had api_url: ' ' (single space) which caused
Prometheus Operator to fail reconciliation every ~3 minutes with
"unsupported scheme for URL". Telegram is the primary receiver.

Also removes the slack_webhook_url variable, Makefile secret, and
Woodpecker CI secret references since the variable is no longer used.

Closes #82

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Tofu Plan Output

tailscale_acl.this: Refreshing state... [id=acl]
helm_release.nvidia_device_plugin: Refreshing state... [id=nvidia-device-plugin]
data.kubernetes_namespace_v1.pal_e_docs: Reading...
kubernetes_namespace_v1.ollama: Refreshing state... [id=ollama]
kubernetes_namespace_v1.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_namespace_v1.tailscale: Refreshing state... [id=tailscale]
kubernetes_namespace_v1.cnpg_system: Refreshing state... [id=cnpg-system]
kubernetes_namespace_v1.monitoring: Refreshing state... [id=monitoring]
kubernetes_namespace_v1.postgres: Refreshing state... [id=postgres]
data.kubernetes_namespace_v1.tofu_state: Reading...
data.kubernetes_namespace_v1.tofu_state: Read complete after 0s [id=tofu-state]
kubernetes_namespace_v1.harbor: Refreshing state... [id=harbor]
data.kubernetes_namespace_v1.pal_e_docs: Read complete after 0s [id=pal-e-docs]
kubernetes_namespace_v1.forgejo: Refreshing state... [id=forgejo]
kubernetes_namespace_v1.minio: Refreshing state... [id=minio]
kubernetes_service_account_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_namespace_v1.keycloak: Refreshing state... [id=keycloak]
kubernetes_role_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_secret_v1.paledocs_db_url: Refreshing state... [id=pal-e-docs/paledocs-db-url]
kubernetes_service_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
helm_release.tailscale_operator: Refreshing state... [id=tailscale-operator]
kubernetes_secret_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_config_map_v1.uptime_dashboard: Refreshing state... [id=monitoring/uptime-dashboard]
helm_release.loki_stack: Refreshing state... [id=loki-stack]
helm_release.kube_prometheus_stack: Refreshing state... [id=kube-prometheus-stack]
helm_release.cnpg: Refreshing state... [id=cnpg]
kubernetes_manifest.netpol_ollama: Refreshing state...
kubernetes_manifest.netpol_cnpg_system: Refreshing state...
kubernetes_manifest.netpol_monitoring: Refreshing state...
kubernetes_manifest.netpol_postgres: Refreshing state...
kubernetes_manifest.netpol_woodpecker: Refreshing state...
kubernetes_manifest.netpol_harbor: Refreshing state...
kubernetes_secret_v1.woodpecker_db_credentials: Refreshing state... [id=woodpecker/woodpecker-db-credentials]
helm_release.forgejo: Refreshing state... [id=forgejo]
kubernetes_persistent_volume_claim_v1.keycloak_data: Refreshing state... [id=keycloak/keycloak-data]
kubernetes_manifest.netpol_forgejo: Refreshing state...
kubernetes_role_binding_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_manifest.netpol_minio: Refreshing state...
kubernetes_service_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_secret_v1.keycloak_admin: Refreshing state... [id=keycloak/keycloak-admin]
kubernetes_manifest.netpol_keycloak: Refreshing state...
helm_release.ollama: Refreshing state... [id=ollama]
kubernetes_deployment_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_ingress_v1.keycloak_funnel: Refreshing state... [id=keycloak/keycloak-funnel]
kubernetes_config_map_v1.grafana_loki_datasource: Refreshing state... [id=monitoring/grafana-loki-datasource]
helm_release.minio: Refreshing state... [id=minio]
kubernetes_config_map_v1.pal_e_docs_dashboard: Refreshing state... [id=monitoring/pal-e-docs-dashboard]
helm_release.harbor: Refreshing state... [id=harbor]
kubernetes_ingress_v1.grafana_funnel: Refreshing state... [id=monitoring/grafana-funnel]
helm_release.blackbox_exporter: Refreshing state... [id=blackbox-exporter]
kubernetes_manifest.blackbox_alerts: Refreshing state...
kubernetes_ingress_v1.alertmanager_funnel: Refreshing state... [id=monitoring/alertmanager-funnel]
kubernetes_config_map_v1.dora_dashboard: Refreshing state... [id=monitoring/dora-dashboard]
kubernetes_deployment_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_manifest.dora_exporter_service_monitor: Refreshing state...
kubernetes_ingress_v1.forgejo_funnel: Refreshing state... [id=forgejo/forgejo-funnel]
minio_s3_bucket.assets: Refreshing state... [id=assets]
minio_iam_policy.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_user.tf_backup: Refreshing state... [id=tf-backup]
kubernetes_ingress_v1.minio_api_funnel: Refreshing state... [id=minio/minio-api-funnel]
minio_s3_bucket.postgres_wal: Refreshing state... [id=postgres-wal]
minio_iam_policy.cnpg_wal: Refreshing state... [id=cnpg-wal]
minio_s3_bucket.tf_state_backups: Refreshing state... [id=tf-state-backups]
kubernetes_ingress_v1.minio_funnel: Refreshing state... [id=minio/minio-funnel]
minio_iam_user.cnpg: Refreshing state... [id=cnpg]
minio_iam_user_policy_attachment.cnpg: Refreshing state... [id=cnpg-20260302210642491000000001]
minio_iam_user_policy_attachment.tf_backup: Refreshing state... [id=tf-backup-20260314163610110100000001]
kubernetes_secret_v1.woodpecker_cnpg_s3_creds: Refreshing state... [id=woodpecker/cnpg-s3-creds]
kubernetes_secret_v1.tf_backup_s3_creds: Refreshing state... [id=tofu-state/tf-backup-s3-creds]
kubernetes_secret_v1.cnpg_s3_creds: Refreshing state... [id=postgres/cnpg-s3-creds]
kubernetes_cron_job_v1.cnpg_backup_verify: Refreshing state... [id=postgres/cnpg-backup-verify]
kubernetes_cron_job_v1.tf_state_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_manifest.woodpecker_postgres: Refreshing state...
kubernetes_ingress_v1.harbor_funnel: Refreshing state... [id=harbor/harbor-funnel]
helm_release.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_ingress_v1.woodpecker_funnel: Refreshing state... [id=woodpecker/woodpecker-funnel]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

OpenTofu will perform the following actions:

  # helm_release.kube_prometheus_stack will be updated in-place
  ~ resource "helm_release" "kube_prometheus_stack" {
        id                         = "kube-prometheus-stack"
      ~ metadata                   = [
          - {
              - app_version    = "v0.89.0"
              - chart          = "kube-prometheus-stack"
              - first_deployed = 1771560679
              - last_deployed  = 1773538888
              - name           = "kube-prometheus-stack"
              - namespace      = "monitoring"
              - notes          = <<-EOT
                    1. Get the application URL by running these commands:
                      export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=prometheus-node-exporter,app.kubernetes.io/instance=kube-prometheus-stack" -o jsonpath="{.items[0].metadata.name}")
                      echo "Visit http://127.0.0.1:9100 to use your application"
                      kubectl port-forward --namespace monitoring $POD_NAME 9100
                    kube-state-metrics is a simple service that listens to the Kubernetes API server and generates metrics about the state of the objects.
                    The exposed metrics can be found here:
                    https://github.com/kubernetes/kube-state-metrics/blob/master/docs/README.md#exposed-metrics
                    
                    The metrics are exported on the HTTP endpoint /metrics on the listening port.
                    In your case, kube-prometheus-stack-kube-state-metrics.monitoring.svc.cluster.local:8080/metrics
                    
                    They are served either as plaintext or protobuf depending on the Accept header.
                    They are designed to be consumed either by Prometheus itself or by a scraper that is compatible with scraping a Prometheus client endpoint.
                    
                    kube-prometheus-stack has been installed. Check its status by running:
                      kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"
                    
                    Get Grafana 'admin' user password by running:
                    
                      kubectl --namespace monitoring get secrets kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 -d ; echo
                    
                    Access Grafana local instance:
                    
                      export POD_NAME=$(kubectl --namespace monitoring get pod -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -oname)
                      kubectl --namespace monitoring port-forward $POD_NAME 3000
                    
                    Get your grafana admin user password by running:
                    
                      kubectl get secret --namespace monitoring -l app.kubernetes.io/component=admin-secret -o jsonpath="{.items[0].data.admin-password}" | base64 --decode ; echo
                    
                    
                    Visit https://github.com/prometheus-operator/kube-prometheus for instructions on how to create & configure Alertmanager and Prometheus instances using the Operator.
                    
                    1. Get your 'admin' user password by running:
                    
                       kubectl get secret --namespace monitoring kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
                    
                    
                    2. The Grafana server can be accessed via port 80 on the following DNS name from within your cluster:
                    
                       kube-prometheus-stack-grafana.monitoring.svc.cluster.local
                    
                       Get the Grafana URL to visit by running these commands in the same shell:
                         export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=kube-prometheus-stack" -o jsonpath="{.items[0].metadata.name}")
                         kubectl --namespace monitoring port-forward $POD_NAME 3000
                    
                    3. Login with the password from step 1 and the username: admin
                EOT
              - revision       = 14
              - values         = jsonencode(
                    {
                      - additionalPrometheusRules = [
                          - {
                              - groups = [
                                  - {
                                      - name  = "pod-health"
                                      - rules = [
                                          - {
                                              - alert       = "PodRestartStorm"
                                              - annotations = {
                                                  - description = "Pod {{ $labels.namespace }}/{{ $labels.pod }} has restarted {{ $value }} times in the last 15 minutes."
                                                  - summary     = "Pod {{ $labels.namespace }}/{{ $labels.pod }} restarting frequently"
                                                }
                                              - expr        = "increase(kube_pod_container_status_restarts_total[15m]) > 3"
                                              - for         = "0m"
                                              - labels      = {
                                                  - severity = "warning"
                                                }
                                            },
                                          - {
                                              - alert       = "OOMKilled"
                                              - annotations = {
                                                  - description = "Container {{ $labels.container }} in pod {{ $labels.namespace }}/{{ $labels.pod }} was OOMKilled."
                                                  - summary     = "Pod {{ $labels.namespace }}/{{ $labels.pod }} OOMKilled"
                                                }
                                              - expr        = "kube_pod_container_status_last_terminated_reason{reason=\"OOMKilled\"} > 0"
                                              - for         = "0m"
                                              - labels      = {
                                                  - severity = "critical"
                                                }
                                            },
                                        ]
                                    },
                                  - {
                                      - name  = "node-health"
                                      - rules = [
                                          - {
                                              - alert       = "DiskPressure"
                                              - annotations = {
                                                  - description = "Filesystem {{ $labels.mountpoint }} on {{ $labels.instance }} has only {{ $value | printf \"%.1f\" }}% space remaining."
                                                  - summary     = "Disk pressure on {{ $labels.instance }}"
                                                }
                                              - expr        = "(node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 < 15"
                                              - for         = "5m"
                                              - labels      = {
                                                  - severity = "critical"
                                                }
                                            },
                                        ]
                                    },
                                  - {
                                      - name  = "target-health"
                                      - rules = [
                                          - {
                                              - alert       = "TargetDown"
                                              - annotations = {
                                                  - description = "Target {{ $labels.job }}/{{ $labels.instance }} has been down for more than 5 minutes."
                                                  - summary     = "Target {{ $labels.instance }} is down"
                                                }
                                              - expr        = "up == 0"
                                              - for         = "5m"
                                              - labels      = {
                                                  - severity = "warning"
                                                }
                                            },
                                        ]
                                    },
                                ]
                              - name   = "platform-alerts"
                            },
                        ]
                      - alertmanager              = {
                          - alertmanagerSpec = {
                              - resources = {
                                  - limits   = {
                                      - memory = "128Mi"
                                    }
                                  - requests = {
                                      - cpu    = "10m"
                                      - memory = "64Mi"
                                    }
                                }
                              - storage   = {
                                  - volumeClaimTemplate = {
                                      - spec = {
                                          - accessModes      = [
                                              - "ReadWriteOnce",
                                            ]
                                          - resources        = {
                                              - requests = {
                                                  - storage = "1Gi"
                                                }
                                            }
                                          - storageClassName = "local-path"
                                        }
                                    }
                                }
                            }
                          - config           = {
                              - global    = {
                                  - resolve_timeout = "5m"
                                }
                              - receivers = [
                                  - {
                                      - name = "default"
                                    },
                                  - {
                                      - name             = "telegram"
                                      - telegram_configs = [
                                          - {
                                              - bot_token     = "8256326037:AAEZ-LlhhkyaDs8TtWhGqm9dUzYj_7hkpiE"
                                              - chat_id       = -5200965094
                                              - parse_mode    = "HTML"
                                              - send_resolved = true
                                            },
                                        ]
                                    },
                                  - {
                                      - name          = "slack"
                                      - slack_configs = [
                                          - {
                                              - api_url       = " "
                                              - channel       = "#alerts"
                                              - send_resolved = true
                                              - text          = <<-EOT
                                                    {{ range .Alerts }}*{{ .Annotations.summary }}*
                                                    {{ .Annotations.description }}
                                                    {{ end }}
                                                EOT
                                              - title         = "{{ .GroupLabels.alertname }}"
                                            },
                                        ]
                                    },
                                ]
                              - route     = {
                                  - group_by        = [
                                      - "alertname",
                                      - "namespace",
                                    ]
                                  - group_interval  = "5m"
                                  - group_wait      = "30s"
                                  - receiver        = "telegram"
                                  - repeat_interval = "12h"
                                  - routes          = [
                                      - {
                                          - matchers = [
                                              - "severity=~\"critical|warning\"",
                                            ]
                                          - receiver = "slack"
                                        },
                                    ]
                                }
                            }
                        }
                      - grafana                   = {
                          - adminPassword = "(sensitive value)"
                          - persistence   = {
                              - enabled          = true
                              - size             = "2Gi"
                              - storageClassName = "local-path"
                            }
                          - resources     = {
                              - limits   = {
                                  - memory = "256Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "128Mi"
                                }
                            }
                          - sidecar       = {
                              - dashboards  = {
                                  - enabled         = true
                                  - searchNamespace = "ALL"
                                }
                              - datasources = {
                                  - enabled         = true
                                  - searchNamespace = "ALL"
                                }
                            }
                        }
                      - kube-state-metrics        = {
                          - resources = {
                              - limits   = {
                                  - memory = "128Mi"
                                }
                              - requests = {
                                  - cpu    = "10m"
                                  - memory = "32Mi"
                                }
                            }
                        }
                      - kubeControllerManager     = {
                          - enabled = false
                        }
                      - kubeEtcd                  = {
                          - enabled = false
                        }
                      - kubeProxy                 = {
                          - enabled = false
                        }
                      - kubeScheduler             = {
                          - enabled = false
                        }
                      - nodeExporter              = {
                          - resources = {
                              - limits   = {
                                  - memory = "64Mi"
                                }
                              - requests = {
                                  - cpu    = "20m"
                                  - memory = "32Mi"
                                }
                            }
                        }
                      - prometheus                = {
                          - prometheusSpec = {
                              - podMonitorSelectorNilUsesHelmValues     = false
                              - resources                               = {
                                  - limits   = {
                                      - memory = "1Gi"
                                    }
                                  - requests = {
                                      - cpu    = "200m"
                                      - memory = "512Mi"
                                    }
                                }
                              - retention                               = "15d"
                              - retentionSize                           = "10GB"
                              - ruleSelectorNilUsesHelmValues           = false
                              - serviceMonitorSelectorNilUsesHelmValues = false
                              - storageSpec                             = {
                                  - volumeClaimTemplate = {
                                      - spec = {
                                          - accessModes      = [
                                              - "ReadWriteOnce",
                                            ]
                                          - resources        = {
                                              - requests = {
                                                  - storage = "15Gi"
                                                }
                                            }
                                          - storageClassName = "local-path"
                                        }
                                    }
                                }
                            }
                        }
                    }
                )
              - version        = "82.0.0"
            },
        ] -> (known after apply)
        name                       = "kube-prometheus-stack"
      ~ values                     = [
          - <<-EOT
                "additionalPrometheusRules":
                - "groups":
                  - "name": "pod-health"
                    "rules":
                    - "alert": "PodRestartStorm"
                      "annotations":
                        "description": "Pod {{ $labels.namespace }}/{{ $labels.pod }} has restarted
                          {{ $value }} times in the last 15 minutes."
                        "summary": "Pod {{ $labels.namespace }}/{{ $labels.pod }} restarting frequently"
                      "expr": "increase(kube_pod_container_status_restarts_total[15m]) > 3"
                      "for": "0m"
                      "labels":
                        "severity": "warning"
                    - "alert": "OOMKilled"
                      "annotations":
                        "description": "Container {{ $labels.container }} in pod {{ $labels.namespace
                          }}/{{ $labels.pod }} was OOMKilled."
                        "summary": "Pod {{ $labels.namespace }}/{{ $labels.pod }} OOMKilled"
                      "expr": "kube_pod_container_status_last_terminated_reason{reason=\"OOMKilled\"}
                        > 0"
                      "for": "0m"
                      "labels":
                        "severity": "critical"
                  - "name": "node-health"
                    "rules":
                    - "alert": "DiskPressure"
                      "annotations":
                        "description": "Filesystem {{ $labels.mountpoint }} on {{ $labels.instance
                          }} has only {{ $value | printf \"%.1f\" }}% space remaining."
                        "summary": "Disk pressure on {{ $labels.instance }}"
                      "expr": "(node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 <
                        15"
                      "for": "5m"
                      "labels":
                        "severity": "critical"
                  - "name": "target-health"
                    "rules":
                    - "alert": "TargetDown"
                      "annotations":
                        "description": "Target {{ $labels.job }}/{{ $labels.instance }} has been down
                          for more than 5 minutes."
                        "summary": "Target {{ $labels.instance }} is down"
                      "expr": "up == 0"
                      "for": "5m"
                      "labels":
                        "severity": "warning"
                  "name": "platform-alerts"
                "alertmanager":
                  "alertmanagerSpec":
                    "resources":
                      "limits":
                        "memory": "128Mi"
                      "requests":
                        "cpu": "10m"
                        "memory": "64Mi"
                    "storage":
                      "volumeClaimTemplate":
                        "spec":
                          "accessModes":
                          - "ReadWriteOnce"
                          "resources":
                            "requests":
                              "storage": "1Gi"
                          "storageClassName": "local-path"
                  "config":
                    "global":
                      "resolve_timeout": "5m"
                    "receivers":
                    - "name": "default"
                    - "name": "telegram"
                      "telegram_configs":
                      - "parse_mode": "HTML"
                        "send_resolved": true
                    - "name": "slack"
                      "slack_configs":
                      - "channel": "#alerts"
                        "send_resolved": true
                        "text": |-
                          {{ range .Alerts }}*{{ .Annotations.summary }}*
                          {{ .Annotations.description }}
                          {{ end }}
                        "title": "{{ .GroupLabels.alertname }}"
                    "route":
                      "group_by":
                      - "alertname"
                      - "namespace"
                      "group_interval": "5m"
                      "group_wait": "30s"
                      "receiver": "telegram"
                      "repeat_interval": "12h"
                      "routes":
                      - "matchers":
                        - "severity=~\"critical|warning\""
                        "receiver": "slack"
                "grafana":
                  "persistence":
                    "enabled": true
                    "size": "2Gi"
                    "storageClassName": "local-path"
                  "resources":
                    "limits":
                      "memory": "256Mi"
                    "requests":
                      "cpu": "50m"
                      "memory": "128Mi"
                  "sidecar":
                    "dashboards":
                      "enabled": true
                      "searchNamespace": "ALL"
                    "datasources":
                      "enabled": true
                      "searchNamespace": "ALL"
                "kube-state-metrics":
                  "resources":
                    "limits":
                      "memory": "128Mi"
                    "requests":
                      "cpu": "10m"
                      "memory": "32Mi"
                "kubeControllerManager":
                  "enabled": false
                "kubeEtcd":
                  "enabled": false
                "kubeProxy":
                  "enabled": false
                "kubeScheduler":
                  "enabled": false
                "nodeExporter":
                  "resources":
                    "limits":
                      "memory": "64Mi"
                    "requests":
                      "cpu": "20m"
                      "memory": "32Mi"
                "prometheus":
                  "prometheusSpec":
                    "podMonitorSelectorNilUsesHelmValues": false
                    "resources":
                      "limits":
                        "memory": "1Gi"
                      "requests":
                        "cpu": "200m"
                        "memory": "512Mi"
                    "retention": "15d"
                    "retentionSize": "10GB"
                    "ruleSelectorNilUsesHelmValues": false
                    "serviceMonitorSelectorNilUsesHelmValues": false
                    "storageSpec":
                      "volumeClaimTemplate":
                        "spec":
                          "accessModes":
                          - "ReadWriteOnce"
                          "resources":
                            "requests":
                              "storage": "15Gi"
                          "storageClassName": "local-path"
            EOT,
          + <<-EOT
                "additionalPrometheusRules":
                - "groups":
                  - "name": "pod-health"
                    "rules":
                    - "alert": "PodRestartStorm"
                      "annotations":
                        "description": "Pod {{ $labels.namespace }}/{{ $labels.pod }} has restarted
                          {{ $value }} times in the last 15 minutes."
                        "summary": "Pod {{ $labels.namespace }}/{{ $labels.pod }} restarting frequently"
                      "expr": "increase(kube_pod_container_status_restarts_total[15m]) > 3"
                      "for": "0m"
                      "labels":
                        "severity": "warning"
                    - "alert": "OOMKilled"
                      "annotations":
                        "description": "Container {{ $labels.container }} in pod {{ $labels.namespace
                          }}/{{ $labels.pod }} was OOMKilled."
                        "summary": "Pod {{ $labels.namespace }}/{{ $labels.pod }} OOMKilled"
                      "expr": "kube_pod_container_status_last_terminated_reason{reason=\"OOMKilled\"}
                        > 0"
                      "for": "0m"
                      "labels":
                        "severity": "critical"
                  - "name": "node-health"
                    "rules":
                    - "alert": "DiskPressure"
                      "annotations":
                        "description": "Filesystem {{ $labels.mountpoint }} on {{ $labels.instance
                          }} has only {{ $value | printf \"%.1f\" }}% space remaining."
                        "summary": "Disk pressure on {{ $labels.instance }}"
                      "expr": "(node_filesystem_avail_bytes / node_filesystem_size_bytes) * 100 <
                        15"
                      "for": "5m"
                      "labels":
                        "severity": "critical"
                  - "name": "target-health"
                    "rules":
                    - "alert": "TargetDown"
                      "annotations":
                        "description": "Target {{ $labels.job }}/{{ $labels.instance }} has been down
                          for more than 5 minutes."
                        "summary": "Target {{ $labels.instance }} is down"
                      "expr": "up == 0"
                      "for": "5m"
                      "labels":
                        "severity": "warning"
                  "name": "platform-alerts"
                "alertmanager":
                  "alertmanagerSpec":
                    "resources":
                      "limits":
                        "memory": "128Mi"
                      "requests":
                        "cpu": "10m"
                        "memory": "64Mi"
                    "storage":
                      "volumeClaimTemplate":
                        "spec":
                          "accessModes":
                          - "ReadWriteOnce"
                          "resources":
                            "requests":
                              "storage": "1Gi"
                          "storageClassName": "local-path"
                  "config":
                    "global":
                      "resolve_timeout": "5m"
                    "receivers":
                    - "name": "default"
                    - "name": "telegram"
                      "telegram_configs":
                      - "parse_mode": "HTML"
                        "send_resolved": true
                    "route":
                      "group_by":
                      - "alertname"
                      - "namespace"
                      "group_interval": "5m"
                      "group_wait": "30s"
                      "receiver": "telegram"
                      "repeat_interval": "12h"
                      "routes": []
                "grafana":
                  "persistence":
                    "enabled": true
                    "size": "2Gi"
                    "storageClassName": "local-path"
                  "resources":
                    "limits":
                      "memory": "256Mi"
                    "requests":
                      "cpu": "50m"
                      "memory": "128Mi"
                  "sidecar":
                    "dashboards":
                      "enabled": true
                      "searchNamespace": "ALL"
                    "datasources":
                      "enabled": true
                      "searchNamespace": "ALL"
                "kube-state-metrics":
                  "resources":
                    "limits":
                      "memory": "128Mi"
                    "requests":
                      "cpu": "10m"
                      "memory": "32Mi"
                "kubeControllerManager":
                  "enabled": false
                "kubeEtcd":
                  "enabled": false
                "kubeProxy":
                  "enabled": false
                "kubeScheduler":
                  "enabled": false
                "nodeExporter":
                  "resources":
                    "limits":
                      "memory": "64Mi"
                    "requests":
                      "cpu": "20m"
                      "memory": "32Mi"
                "prometheus":
                  "prometheusSpec":
                    "podMonitorSelectorNilUsesHelmValues": false
                    "resources":
                      "limits":
                        "memory": "1Gi"
                      "requests":
                        "cpu": "200m"
                        "memory": "512Mi"
                    "retention": "15d"
                    "retentionSize": "10GB"
                    "ruleSelectorNilUsesHelmValues": false
                    "serviceMonitorSelectorNilUsesHelmValues": false
                    "storageSpec":
                      "volumeClaimTemplate":
                        "spec":
                          "accessModes":
                          - "ReadWriteOnce"
                          "resources":
                            "requests":
                              "storage": "15Gi"
                          "storageClassName": "local-path"
            EOT,
        ]
        # (27 unchanged attributes hidden)

      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }

        # (3 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so OpenTofu can't
guarantee to take exactly these actions if you run "tofu apply" now.
forgejo_admin deleted branch 82-fix-remove-invalid-slack-receiver-from-a 2026-03-15 19:08:06 +00:00