FrankenPress writes structured logs to stdout/stderr. Any cluster-side shipper that tails container streams (Vector, Grafana Alloy, Promtail, Datadog Agent, Fluent Bit) picks them up without pod-level config. The platform stays backend-neutral; you choose the destination.
## What’s emitted
| Stream | Source | Format | When |
|---|---|---|---|
| stdout | Caddy access log | JSON, one line per request | Always (public server only) |
| stderr | PHP errors | Plain text | Always (`php.ini` sets `error_log = /dev/stderr`) |
| stderr | WordPress debug log | Plain text | Only when `WP_ENV=staging` (routed to `php://stderr`) |
The metrics server on `FP_METRICS_PORT` (default 9145) is not access-logged on purpose: Prometheus scrapes every ~15 s would otherwise dominate the log stream.
A Caddy access log line looks like:
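For illustration only, here is a line in the shape of Caddy's default JSON access log (all values are made up, and the exact field set varies by Caddy version and log configuration):

```json
{"level":"info","ts":1712345678.91,"logger":"http.log.access","msg":"handled request","request":{"remote_ip":"10.0.3.7","proto":"HTTP/2.0","method":"GET","host":"example.com","uri":"/blog/"},"status":200,"size":51234,"duration":0.0421,"resp_headers":{"Cache-Status":["Souin; hit"],"Content-Type":["text/html; charset=UTF-8"]}}
```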
The `Cache-Status` response header tells you whether Souin served from cache (`hit`) or fell through to PHP (`miss`).
## Recommended: Vector DaemonSet
Vector is a vendor-neutral agent maintained by Datadog. A single DaemonSet ships to Loki, Datadog, both, or any other supported sink. This is the recommended path because it keeps the FrankenPress chart free of backend-specific glue.
The example below tails container logs cluster-wide, parses Caddy’s JSON
access log, and ships to both Loki and Datadog. Drop whichever sink
you don’t want.
`vector-values.yaml`:
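A sketch of what such a values file might contain, assuming the upstream Vector Helm chart's `customConfig` key. The Loki endpoint, the Datadog API key reference, and the label selector are placeholders for your environment:

```yaml
role: Agent   # DaemonSet mode

customConfig:
  sources:
    k8s_logs:
      type: kubernetes_logs
      # Placeholder selector: scope to FrankenPress pods only.
      extra_label_selector: "app.kubernetes.io/name=fp-site"

  transforms:
    parse_caddy:
      type: remap
      inputs: [k8s_logs]
      # Caddy lines are JSON; PHP/WP stderr lines are plain text,
      # so keep the raw message when JSON parsing fails.
      source: |
        parsed, err = parse_json(.message)
        if err == null {
          . = merge!(., parsed)
        }

  sinks:
    # Drop whichever sink you don't want.
    loki:
      type: loki
      inputs: [parse_caddy]
      endpoint: http://loki.monitoring.svc:3100   # placeholder
      encoding:
        codec: json
      labels:
        namespace: "{{ kubernetes.pod_namespace }}"
    datadog:
      type: datadog_logs
      inputs: [parse_caddy]
      default_api_key: "${DD_API_KEY}"   # injected from a Secret
```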
## Alternative: Grafana Alloy / Promtail (Loki only)
If you’re a Loki-only shop and already run Grafana Alloy (or its predecessor, Promtail), no FrankenPress-specific config is needed. The default `discovery.kubernetes` + `loki.source.kubernetes` pipeline tails all pod logs in the cluster. Filter by namespace or by the `app.kubernetes.io/name=fp-site` label in your Alloy config to scope to FrankenPress sites.
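For reference, a minimal Alloy pipeline sketch with that label filter. The push URL is a placeholder and the component names are arbitrary:

```alloy
discovery.kubernetes "pods" {
  role = "pod"
}

// Keep only FrankenPress pods.
discovery.relabel "fp_site" {
  targets = discovery.kubernetes.pods.targets
  rule {
    source_labels = ["__meta_kubernetes_pod_label_app_kubernetes_io_name"]
    regex         = "fp-site"
    action        = "keep"
  }
}

loki.source.kubernetes "fp_site" {
  targets    = discovery.relabel.fp_site.output
  forward_to = [loki.write.default.receiver]
}

loki.write "default" {
  endpoint {
    url = "http://loki.monitoring.svc:3100/loki/api/v1/push" // placeholder
  }
}
```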
## Alternative: Datadog Agent (autodiscovery)
If you run the Datadog Agent DaemonSet with log autodiscovery, set pod annotations on the FrankenPress release. The `site` segment of the annotation key is the container name in the deployment template (it’s not configurable). The Datadog Agent will auto-tag each line with `source:caddy` and `service:fp-site` so its default Caddy log pipeline applies.
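Assuming the chart exposes a values key for pod annotations (`podAnnotations` here is illustrative; check the chart for the actual knob), the annotation follows Datadog's standard autodiscovery key format, with `site` as the container name:

```yaml
# Values override for the FrankenPress release.
podAnnotations:
  ad.datadoghq.com/site.logs: '[{"source": "caddy", "service": "fp-site"}]'
```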
## What’s not shipped
- **Subchart logs.** The chart’s bundled `mariadb`, `redis`, and `minio` subcharts each write their own logs to their own pod stdout. A cluster-side shipper picks those up automatically; they’re not FrankenPress’s concern. Production deploys typically disable the subcharts and use external services (RDS, DragonflyDB Operator, AWS S3) whose logs live wherever that operator/service writes them.
- **wp-cron Job logs.** The `wpCron` CronJob writes to its own pod stdout; a DaemonSet-mode shipper captures those lines too.
- **Prometheus scrape access logs.** Suppressed on purpose (see above). If you want them, add a `log` block inside the `:{$FP_METRICS_PORT}` server in a forked `Caddyfile`, but you almost certainly don’t.
## Failure modes
| Symptom | Likely cause / fix |
|---|---|
| Only startup chatter in stdout, no per-request lines | You’re on an fp-runtime image older than the JSON-access-log change. Pull the latest tag and `helm upgrade`. |
| WP debug entries missing from logs in staging | `WP_DEBUG_LOG` was overridden to `true` (writes to disk); confirm it resolves to `php://stderr` in your `config/environments/staging.php`. |
| Vector / Alloy isn’t seeing the pod’s logs | Label selector mismatch: verify `app.kubernetes.io/name=fp-site` is present on the pod (`kubectl get pod -o yaml \| grep -A1 labels`). |
| `Cache-Status` header missing from access log lines | Souin only sets it on requests that flow through the cache directive. Logged-in / authenticated requests bypass cache by design and show no `Cache-Status`. |