fix: correct Ollama hostPath volume keys for otwld Helm chart #91

Merged
forgejo_admin merged 1 commit from 89-ollama-hostpath-hotfix into main 2026-03-16 18:40:23 +00:00

Summary

Hotfix for PR #90's volume configuration. The otwld/ollama-helm chart uses volumes/volumeMounts (not extraVolumes/extraVolumeMounts) and extraEnv (not extraEnvVars). Fixed keys, added OLLAMA_MODELS=/ollama-models to avoid mount path collision with chart's built-in emptyDir at /root/.ollama.

Changes

  • terraform/main.tf: Changed extraEnvVars → extraEnv, extraVolumes → volumes, extraVolumeMounts → volumeMounts to match otwld/ollama-helm chart values. Added OLLAMA_MODELS=/ollama-models env var so Ollama uses the hostPath mount instead of the chart's default /root/.ollama emptyDir.
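
The corrected release block can be sketched roughly as follows. Only the key names (volumes, volumeMounts, extraEnv), the env var, and the paths are taken from this PR; the surrounding resource attributes and the repository URL are illustrative assumptions:

```hcl
# Sketch only: chart/repo attributes and resource layout are assumptions;
# the value keys and paths are the ones this PR corrects.
resource "helm_release" "ollama" {
  name       = "ollama"
  namespace  = "ollama"
  repository = "https://otwld.github.io/ollama-helm/" # assumed repo URL
  chart      = "ollama"

  values = [yamlencode({
    extraEnv = [
      # Point Ollama at the hostPath mount instead of the chart's
      # built-in emptyDir at /root/.ollama.
      { name = "OLLAMA_MODELS", value = "/ollama-models" }
    ]
    volumes = [{
      name = "ollama-hostpath"
      hostPath = {
        path = "/var/lib/ollama/models"
        type = "DirectoryOrCreate"
      }
    }]
    volumeMounts = [{
      name      = "ollama-hostpath"
      mountPath = "/ollama-models"
    }]
    # Disable the default PVC; persistence comes from the hostPath.
    persistentVolume = { enabled = false }
  })]
}
```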

Test Plan

  • tofu apply -target=helm_release.ollama succeeded
  • Model (qwen3-embedding:4b, 2.4GB) stored at /var/lib/ollama/models/ on host
  • Pod deleted → new pod started → model still present (hostPath persists)
  • semantic_search MCP tool returns results end-to-end after restart

Review Checklist

  • tofu fmt passes
  • tofu validate passes
  • No unrelated changes
  • Applied and verified on production

Related

  • Hotfix for PR #90
  • Closes #89
  • plan-pal-e-docs → phase-pal-e-docs-f12-semantic-search-recovery
fix: correct Ollama hostPath volume keys for otwld Helm chart
All checks were successful
ci/woodpecker/push/woodpecker Pipeline was successful
ci/woodpecker/pr/woodpecker Pipeline was successful
ci/woodpecker/pull_request_closed/woodpecker Pipeline was successful
4c8eb04a73
PR #90 used extraVolumes/extraVolumeMounts which don't exist in the
otwld/ollama-helm chart, causing duplicate volume name and mountPath
collisions. Fix uses the chart's actual keys: volumes/volumeMounts
for the hostPath mount at /ollama-models, extraEnv for OLLAMA_MODELS,
and persistentVolume.enabled=false to disable the default PVC.

The chart's emptyDir at /root/.ollama is harmless — unused because
OLLAMA_MODELS points to /ollama-models (the hostPath mount).
Models verified surviving pod restart at /var/lib/ollama/models/.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

Tofu Plan Output

tailscale_acl.this: Refreshing state... [id=acl]
helm_release.nvidia_device_plugin: Refreshing state... [id=nvidia-device-plugin]
kubernetes_namespace_v1.tailscale: Refreshing state... [id=tailscale]
kubernetes_namespace_v1.minio: Refreshing state... [id=minio]
kubernetes_namespace_v1.forgejo: Refreshing state... [id=forgejo]
kubernetes_namespace_v1.cnpg_system: Refreshing state... [id=cnpg-system]
kubernetes_namespace_v1.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_namespace_v1.monitoring: Refreshing state... [id=monitoring]
kubernetes_namespace_v1.keycloak: Refreshing state... [id=keycloak]
kubernetes_namespace_v1.postgres: Refreshing state... [id=postgres]
data.kubernetes_namespace_v1.tofu_state: Reading...
kubernetes_namespace_v1.harbor: Refreshing state... [id=harbor]
data.kubernetes_namespace_v1.pal_e_docs: Reading...
data.kubernetes_namespace_v1.tofu_state: Read complete after 0s [id=tofu-state]
kubernetes_namespace_v1.ollama: Refreshing state... [id=ollama]
data.kubernetes_namespace_v1.pal_e_docs: Read complete after 0s [id=pal-e-docs]
kubernetes_config_map_v1.uptime_dashboard: Refreshing state... [id=monitoring/uptime-dashboard]
helm_release.tailscale_operator: Refreshing state... [id=tailscale-operator]
helm_release.loki_stack: Refreshing state... [id=loki-stack]
helm_release.cnpg: Refreshing state... [id=cnpg]
kubernetes_secret_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
kubernetes_service_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
helm_release.kube_prometheus_stack: Refreshing state... [id=kube-prometheus-stack]
kubernetes_service_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_secret_v1.woodpecker_db_credentials: Refreshing state... [id=woodpecker/woodpecker-db-credentials]
kubernetes_persistent_volume_claim_v1.keycloak_data: Refreshing state... [id=keycloak/keycloak-data]
kubernetes_manifest.netpol_cnpg_system: Refreshing state...
kubernetes_manifest.netpol_monitoring: Refreshing state...
kubernetes_manifest.netpol_postgres: Refreshing state...
kubernetes_manifest.netpol_woodpecker: Refreshing state...
kubernetes_manifest.netpol_keycloak: Refreshing state...
kubernetes_manifest.netpol_minio: Refreshing state...
kubernetes_secret_v1.keycloak_admin: Refreshing state... [id=keycloak/keycloak-admin]
helm_release.forgejo: Refreshing state... [id=forgejo]
kubernetes_service_account_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_role_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_service_v1.embedding_worker_metrics: Refreshing state... [id=pal-e-docs/embedding-worker-metrics]
kubernetes_secret_v1.paledocs_db_url: Refreshing state... [id=pal-e-docs/paledocs-db-url]
kubernetes_manifest.netpol_forgejo: Refreshing state...
helm_release.ollama: Refreshing state... [id=ollama]
kubernetes_manifest.netpol_ollama: Refreshing state...
kubernetes_deployment_v1.keycloak: Refreshing state... [id=keycloak/keycloak]
kubernetes_manifest.netpol_harbor: Refreshing state...
kubernetes_role_binding_v1.tf_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_ingress_v1.keycloak_funnel: Refreshing state... [id=keycloak/keycloak-funnel]
helm_release.harbor: Refreshing state... [id=harbor]
kubernetes_config_map_v1.grafana_loki_datasource: Refreshing state... [id=monitoring/grafana-loki-datasource]
kubernetes_config_map_v1.pal_e_docs_dashboard: Refreshing state... [id=monitoring/pal-e-docs-dashboard]
kubernetes_ingress_v1.grafana_funnel: Refreshing state... [id=monitoring/grafana-funnel]
kubernetes_ingress_v1.alertmanager_funnel: Refreshing state... [id=monitoring/alertmanager-funnel]
kubernetes_config_map_v1.dora_dashboard: Refreshing state... [id=monitoring/dora-dashboard]
kubernetes_manifest.blackbox_alerts: Refreshing state...
kubernetes_deployment_v1.dora_exporter: Refreshing state... [id=monitoring/dora-exporter]
helm_release.blackbox_exporter: Refreshing state... [id=blackbox-exporter]
helm_release.minio: Refreshing state... [id=minio]
kubernetes_manifest.embedding_alerts: Refreshing state...
kubernetes_manifest.embedding_worker_service_monitor: Refreshing state...
kubernetes_manifest.dora_exporter_service_monitor: Refreshing state...
kubernetes_ingress_v1.forgejo_funnel: Refreshing state... [id=forgejo/forgejo-funnel]
kubernetes_ingress_v1.harbor_funnel: Refreshing state... [id=harbor/harbor-funnel]
minio_s3_bucket.tf_state_backups: Refreshing state... [id=tf-state-backups]
minio_iam_user.cnpg: Refreshing state... [id=cnpg]
minio_s3_bucket.assets: Refreshing state... [id=assets]
minio_iam_user.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_policy.cnpg_wal: Refreshing state... [id=cnpg-wal]
minio_s3_bucket.postgres_wal: Refreshing state... [id=postgres-wal]
kubernetes_ingress_v1.minio_funnel: Refreshing state... [id=minio/minio-funnel]
kubernetes_ingress_v1.minio_api_funnel: Refreshing state... [id=minio/minio-api-funnel]
minio_iam_policy.tf_backup: Refreshing state... [id=tf-backup]
minio_iam_user_policy_attachment.tf_backup: Refreshing state... [id=tf-backup-20260314163610110100000001]
minio_iam_user_policy_attachment.cnpg: Refreshing state... [id=cnpg-20260302210642491000000001]
kubernetes_secret_v1.woodpecker_cnpg_s3_creds: Refreshing state... [id=woodpecker/cnpg-s3-creds]
kubernetes_secret_v1.tf_backup_s3_creds: Refreshing state... [id=tofu-state/tf-backup-s3-creds]
kubernetes_secret_v1.cnpg_s3_creds: Refreshing state... [id=postgres/cnpg-s3-creds]
kubernetes_cron_job_v1.tf_state_backup: Refreshing state... [id=tofu-state/tf-state-backup]
kubernetes_cron_job_v1.cnpg_backup_verify: Refreshing state... [id=postgres/cnpg-backup-verify]
kubernetes_manifest.woodpecker_postgres: Refreshing state...
helm_release.woodpecker: Refreshing state... [id=woodpecker]
kubernetes_manifest.woodpecker_postgres_scheduled_backup: Refreshing state...
kubernetes_ingress_v1.woodpecker_funnel: Refreshing state... [id=woodpecker/woodpecker-funnel]

OpenTofu used the selected providers to generate the following execution
plan. Resource actions are indicated with the following symbols:
  ~ update in-place

OpenTofu will perform the following actions:

  # helm_release.woodpecker will be updated in-place
  ~ resource "helm_release" "woodpecker" {
        id                         = "woodpecker"
      ~ metadata                   = [
          - {
              - app_version    = "3.13.0"
              - chart          = "woodpecker"
              - first_deployed = 1773625582
              - last_deployed  = 1773652818
              - name           = "woodpecker"
              - namespace      = "woodpecker"
              - notes          = <<-EOT
                    1. Get the application URL by running these commands:
                      export POD_NAME=$(kubectl get pods --namespace woodpecker -l "app.kubernetes.io/name=server,app.kubernetes.io/instance=woodpecker" -o jsonpath="{.items[0].metadata.name}")
                      export CONTAINER_PORT=$(kubectl get pod --namespace woodpecker $POD_NAME -o jsonpath="{.spec.containers[0].ports[0].containerPort}")
                      echo "Visit http://127.0.0.1:8080 to use your application"
                      kubectl --namespace woodpecker port-forward $POD_NAME 8080:$CONTAINER_PORT
                EOT
              - revision       = 2
              - values         = jsonencode(
                    {
                      - agent  = {
                          - enabled      = true
                          - env          = {
                              - WOODPECKER_AGENT_SECRET              = "(sensitive value)"
                              - WOODPECKER_BACKEND                   = "kubernetes"
                              - WOODPECKER_BACKEND_K8S_NAMESPACE     = "woodpecker"
                              - WOODPECKER_BACKEND_K8S_STORAGE_CLASS = "local-path"
                              - WOODPECKER_BACKEND_K8S_VOLUME_SIZE   = "1Gi"
                            }
                          - replicaCount = 1
                          - resources    = {
                              - limits   = {
                                  - memory = "256Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "64Mi"
                                }
                            }
                        }
                      - server = {
                          - env              = {
                              - WOODPECKER_ADMIN               = "forgejo_admin"
                              - WOODPECKER_AGENT_SECRET        = "(sensitive value)"
                              - WOODPECKER_DATABASE_DATASOURCE = "postgres://woodpecker:kM3L4AhLNiuMhIY7tMQ@woodpecker-db-rw.woodpecker.svc.cluster.local:5432/woodpecker?sslmode=disable"
                              - WOODPECKER_DATABASE_DRIVER     = "postgres"
                              - WOODPECKER_FORGEJO             = "true"
                              - WOODPECKER_FORGEJO_CLIENT      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_CLONE_URL   = "http://forgejo-http.forgejo.svc.cluster.local:80"
                              - WOODPECKER_FORGEJO_SECRET      = "(sensitive value)"
                              - WOODPECKER_FORGEJO_URL         = "https://forgejo.tail5b443a.ts.net"
                              - WOODPECKER_HOST                = "https://woodpecker.tail5b443a.ts.net"
                            }
                          - persistentVolume = {
                              - enabled      = true
                              - size         = "5Gi"
                              - storageClass = "local-path"
                            }
                          - resources        = {
                              - limits   = {
                                  - memory = "512Mi"
                                }
                              - requests = {
                                  - cpu    = "50m"
                                  - memory = "128Mi"
                                }
                            }
                          - statefulSet      = {
                              - replicaCount = 1
                            }
                        }
                    }
                )
              - version        = "3.5.1"
            },
        ] -> (known after apply)
        name                       = "woodpecker"
        # (26 unchanged attributes hidden)

      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      - set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }
      + set_sensitive {
          # At least one attribute in this block is (or was) sensitive,
          # so its contents will not be displayed.
        }

        # (2 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so OpenTofu can't
guarantee to take exactly these actions if you run "tofu apply" now.

PR #91 Review

DOMAIN REVIEW

Tech stack: Terraform (Helm provider), Kubernetes (hostPath volumes, env vars), otwld/ollama-helm chart.

This is a targeted hotfix for PR #90, which used incorrect Helm value keys for the otwld/ollama-helm chart. The fix addresses three issues:

  1. Helm value key correction: extraEnvVars -> extraEnv, extraVolumes -> volumes, extraVolumeMounts -> volumeMounts. These are the correct keys for the otwld/ollama-helm chart (the extra* variants are a Bitnami convention, not otwld).

  2. Mount path collision avoidance: The chart creates a built-in emptyDir at /root/.ollama when persistentVolume.enabled = false. PR #90 mounted the hostPath at /root/.ollama, which would collide with or be masked by the chart's emptyDir. This PR mounts at /ollama-models instead and sets OLLAMA_MODELS=/ollama-models so Ollama writes to the hostPath mount.

  3. Host path refinement: /var/lib/ollama -> /var/lib/ollama/models -- more specific, avoids mixing model data with any future Ollama host-level state.
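
Put together, the values the chart consumes can be sketched in plain YAML. Only the key names, env var, and paths named in this review are taken from the PR; the exact field layout is illustrative:

```yaml
# Sketch of the effective chart values (keys per this review).
persistentVolume:
  enabled: false            # disable the chart's default PVC

extraEnv:
  - name: OLLAMA_MODELS
    value: /ollama-models   # redirect model storage off /root/.ollama

volumes:
  - name: ollama-hostpath
    hostPath:
      path: /var/lib/ollama/models
      type: DirectoryOrCreate

volumeMounts:
  - name: ollama-hostpath
    mountPath: /ollama-models  # distinct from the chart's emptyDir mount
```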

All three changes are correct and well-reasoned. The volume name change from ollama-data to ollama-hostpath is a clarity improvement. The comments in the code clearly explain the architecture (chart emptyDir is unused, OLLAMA_MODELS redirects to hostPath mount, models survive any k8s lifecycle event).

Helm/k8s specifics:

  • DirectoryOrCreate type is appropriate for a hostPath that may not exist on first deploy.
  • The hostPath is safe here because the Ollama pod is pinned to the GPU node via nvidia.com/gpu: 1 resource request -- it will always land on the same node. This is documented in the header comment above the resource block.
  • No secrets, no credentials, no hardcoded values beyond the mount paths (which are infrastructure constants, not magic numbers).
  • The extraEnv list format with name/value objects matches the otwld chart's expected schema.

No scope creep: The diff touches exactly one file, exactly the Ollama helm release block, and only the keys that were wrong. The embedding worker metrics/alerting resources added in PR #90 are untouched.

BLOCKERS

None.

  • No new functionality requiring tests (this is a Helm values correction, validated by apply + manual verification).
  • No user input handling.
  • No secrets or credentials.
  • No DRY violations.

NITS

  1. Minor: The host path /var/lib/ollama/models is a reasonable convention, but it is not documented anywhere outside this Terraform file. If the GPU node is ever rebuilt, the operator needs to know this path holds persistent model data. Consider adding it to a recovery SOP or the plan phase notes. Non-blocking -- the inline comment is sufficient for now.

SOP COMPLIANCE

  • Branch named after issue: 89-ollama-hostpath-hotfix references issue #89
  • PR body follows template: Summary, Changes, Test Plan, Review Checklist, Related all present
  • Related references plan slug: plan-pal-e-docs and phase phase-pal-e-docs-f12-semantic-search-recovery
  • Closes #89 in PR body
  • No secrets committed
  • No unnecessary file changes (1 file, scoped to Ollama block only)
  • Test plan is thorough: tofu apply, model persistence across pod delete, end-to-end semantic_search validation
  • tofu fmt and tofu validate confirmed passing

PROCESS OBSERVATIONS

  • Deployment frequency: This hotfix was applied and verified on production before PR submission, which is appropriate for a hotfix that fixes a broken deployment from PR #90. The apply-then-PR pattern is acceptable for hotfixes where the service is down.
  • Change failure risk: Low. The change is a key rename + mount path adjustment. The test plan confirms end-to-end functionality (model persistence + semantic_search). The failure mode of wrong keys is "chart ignores them silently," which is exactly what happened with PR #90.
  • MTTR observation: PR #90 introduced incorrect keys that were merged and applied. The hotfix was identified, applied, and verified quickly. For future Helm chart integrations, checking the chart's values.yaml for the actual key names before the initial PR would prevent this class of error. The otwld chart's values.yaml is the source of truth for accepted keys.

VERDICT: APPROVED

forgejo_admin deleted branch 89-ollama-hostpath-hotfix 2026-03-16 18:40:23 +00:00