# pal-e-platform

Self-hosted Internal Developer Platform on k3s, managed entirely through OpenTofu.

A single `tofu apply` provisions a complete IDP: git hosting, CI/CD pipelines, container registry, object storage, SSO, PostgreSQL, GPU inference, and full observability -- all running on a single-node k3s cluster with Tailscale mesh networking for zero-config ingress and TLS.
## Architecture

```
Woodpecker CI --> Harbor Registry --> ArgoCD --> k3s
      |                                 |
Forgejo (git push)            Tailscale Funnels (TLS)
```
10 OpenTofu modules compose the platform:
| Module | What it deploys |
|---|---|
| networking | Tailscale operator, ACL policies, subnet router, 8 ingress funnels |
| monitoring | Prometheus, Grafana, Loki, Blackbox exporter, DORA metrics exporter |
| forgejo | Self-hosted git forge (Gitea fork) |
| ci | Woodpecker CI server + CNPG PostgreSQL cluster |
| harbor | CNCF container registry with vulnerability scanning |
| storage | MinIO (S3-compatible), buckets, IAM policies |
| keycloak | SSO/OIDC identity provider with custom themes |
| database | CloudNativePG operator, shared postgres namespace, backup verification |
| ops | NVIDIA GPU plugin, Ollama, embedding worker, tofu state backup CronJobs |
| staging | Pre-production validation namespace |
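The root orchestrator wires these modules together. A minimal sketch of how two of them might compose in `terraform/main.tf` -- module names come from the table above, but the variable names, outputs, and source paths are illustrative assumptions, not the repo's actual interface:

```hcl
# Sketch only -- the real module interfaces may differ.
module "networking" {
  source = "./modules/networking"

  # Tailscale operator credentials, rendered from Salt pillars at apply time
  tailscale_oauth_client_id     = var.tailscale_oauth_client_id
  tailscale_oauth_client_secret = var.tailscale_oauth_client_secret
}

module "forgejo" {
  source = "./modules/forgejo"

  # Ingress hostname comes from the networking module's funnel setup
  funnel_hostname = module.networking.forgejo_funnel_hostname
}
```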
## Key Design Decisions

- **Tailscale for ingress.** No cert-manager, no Traefik, no cloud LB. Every service gets a Tailscale funnel with automatic TLS. One operator manages all ingress.
- **Salt for secrets.** GPG-encrypted Salt pillars are the single source of truth for 15 platform secrets. `make tofu-secrets` decrypts and renders them to `.tfvars` at apply time. No HashiCorp Vault, no SOPS -- just GPG and Salt.
- **In-cluster state.** Terraform state lives in Kubernetes secrets via the `kubernetes` backend. No cloud storage dependency.
- **DORA from day one.** A custom exporter scrapes the Forgejo and Woodpecker APIs to produce deployment frequency, lead time, MTTR, and change failure rate metrics in Grafana.
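The in-cluster state decision maps to OpenTofu's `kubernetes` backend, which stores state in a Kubernetes Secret. A sketch -- the `secret_suffix` and `namespace` values here are assumptions, not the repo's actual configuration:

```hcl
terraform {
  backend "kubernetes" {
    # State lands in a Secret named tfstate-{secret_suffix}
    secret_suffix = "pal-e-platform"   # assumption: actual suffix may differ
    namespace     = "kube-system"      # assumption
    config_path   = "~/.kube/config"
  }
}
```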
## GitOps Pipeline

```
git push --> Forgejo --> Woodpecker CI --> Harbor (build + scan)
                                               |
                    ArgoCD (kustomize overlay) --> k3s
```

`pal-e-deployments` holds the kustomize overlays. Woodpecker pushes a tag, ArgoCD syncs.
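Since the platform is managed entirely through OpenTofu, the ArgoCD side of this pipeline could itself be declared as an `Application` resource via the `kubernetes_manifest` provider resource. A hypothetical sketch -- the repo URL, overlay path, and namespaces are invented for illustration:

```hcl
# Hypothetical ArgoCD Application pointing at the kustomize overlays repo.
resource "kubernetes_manifest" "docs_app" {
  manifest = {
    apiVersion = "argoproj.io/v1alpha1"
    kind       = "Application"
    metadata = {
      name      = "pal-e-docs"   # assumption
      namespace = "argocd"       # assumption
    }
    spec = {
      project = "default"
      source = {
        # Assumed URL -- the real Forgejo host and org will differ
        repoURL        = "https://forgejo.example.ts.net/pal-e/pal-e-deployments.git"
        targetRevision = "HEAD"
        path           = "overlays/production"   # assumption
      }
      destination = {
        server    = "https://kubernetes.default.svc"
        namespace = "pal-e-docs"
      }
      syncPolicy = {
        automated = { prune = true, selfHeal = true }
      }
    }
  }
}
```

With automated sync enabled, Woodpecker only needs to push a new image tag into the overlay repo; ArgoCD detects the change and reconciles the cluster.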
## Observability

Four Grafana dashboards ship as code: DORA metrics, service uptime (Blackbox), pal-e-docs golden signals, and Mac CI agent health. Alerting routes through Telegram.
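One common way to ship dashboards as code from a `dashboards/` directory is the Grafana sidecar pattern, where a ConfigMap labeled `grafana_dashboard` is auto-discovered. A sketch under that assumption -- the namespace, label convention, and JSON filename are illustrative, not confirmed from the repo:

```hcl
# Hypothetical: load dashboard JSON into a ConfigMap the Grafana
# dashboard sidecar (as deployed by kube-prometheus-stack) picks up.
resource "kubernetes_config_map" "dora_dashboard" {
  metadata {
    name      = "dora-metrics-dashboard"
    namespace = "monitoring"            # assumption
    labels = {
      grafana_dashboard = "1"           # sidecar discovery label (assumption)
    }
  }
  data = {
    "dora-metrics.json" = file("${path.module}/dashboards/dora-metrics.json")
  }
}
```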
## Project Layout

```
terraform/
  main.tf               # Root orchestrator -- wires 10 modules
  modules/{10 domains}/ # One module per platform concern
  network-policies.tf   # Namespace isolation rules
  dashboards/           # Grafana dashboard JSON (4)
salt/
  pillar/secrets/       # GPG-encrypted secret values
  states/               # Host configuration (firewall, kernel, packages, SSH)
Makefile                # tofu-secrets, apply, plan shortcuts
```
## Related Repositories
| Repo | Purpose |
|---|---|
| pal-e-services | Service onboarding terraform (consumes this platform) |
| pal-e-deployments | Kustomize overlays for ArgoCD |
| pal-e-docs | Knowledge base and project management API |
| pal-e-app | SvelteKit frontend |
## Tech Stack
OpenTofu, k3s, Tailscale, Helm, Prometheus, Grafana, Loki, Forgejo, Woodpecker CI, Harbor, MinIO, Keycloak, CloudNativePG, Ollama, Salt, NVIDIA GPU operator.