Add a second Grafana dashboard focused on daily operational KPIs and live
dispatch, keeping the NOC Live dashboard untouched.
- grafana/provisioning/dashboards-json/daily_operations_dashboard.json
New dashboard covering §7 Blueprint Panels 3-8 and the §4 dispatch lens:
freshness banner, today-at-a-glance stat row, active vehicles map,
currently-idle table, vehicles-not-moved-today, per-vehicle daily KPI
roll-up, driver behaviour leaderboard, distance trend, alarm frequency,
idle cost MTD, utilisation heatmap, SLA row (collapsed, data-gated).
- 07_analytics_views.sql
Nine views in tracksolid.* wrapping the BA-file [DASHBOARD]-tagged
queries. Each view carries COMMENT ON VIEW with its spec section.
SELECT granted to grafana_ro. Smoke-tested against live DB.
- run_migrations.py
Register 06 and 07 in MIGRATIONS list with idempotent seed checks so
future fresh deploys apply them correctly.
- CLAUDE.md
Retire stale tracksolid_2 schema references (the schema no longer exists);
update §9 Fleet State as of 2026-04-19 with the correct pipeline status
(running, 875 runs/24h, 0 failures) and accurate position_history row
counts (hypertable stats don't appear in pg_stat_user_tables).
- docs/superpowers/specs/2026-04-19-daily-operations-dashboard-design.md
Design spec covering architecture, views, panel layout, deployment,
rollback, and known data gaps.
The grafana_ro DB role was created with the placeholder password
'SET_PASSWORD_IN_ENV', and GRAFANA_DB_RO_PASSWORD was never set in .env,
so Grafana's TracksolidDB datasource could not authenticate, causing
'Failed to load home dashboard'.
Fix:
- Add GRAFANA_DB_RO_PASSWORD to .env with a secure generated password
- Add sync_role_passwords() to run_migrations.py: runs ALTER ROLE on every
startup so the DB password stays in sync with the env var (idempotent)
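A minimal sketch of what such a password-sync step could look like. The
role name and env var come from this commit; the `run_sql` callable,
helper names, and quoting logic are assumptions, not the actual
run_migrations.py implementation:

```python
import os

def quote_literal(value: str) -> str:
    """Minimal Postgres string-literal quoting (double embedded quotes)."""
    return "'" + value.replace("'", "''") + "'"

def build_alter_role(role: str, password: str) -> str:
    # ALTER ROLE is a utility statement, so the password cannot be sent
    # as a bind parameter; it has to be spliced in as a quoted literal.
    return f'ALTER ROLE "{role}" WITH PASSWORD {quote_literal(password)}'

def sync_role_passwords(run_sql, role_env_vars=None):
    """Mirror env-var passwords onto DB roles at startup.

    Idempotent: ALTER ROLE simply overwrites the stored password, so
    re-running with an unchanged env var has no visible effect.
    """
    if role_env_vars is None:
        role_env_vars = {"grafana_ro": "GRAFANA_DB_RO_PASSWORD"}
    for role, env_var in role_env_vars.items():
        password = os.environ.get(env_var)
        if not password:
            print(f"WARN: {env_var} not set; skipping {role}")
            continue
        run_sql(build_alter_role(role, password))
```

Taking `run_sql` as a callable keeps the statement-building logic testable
without a live connection.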
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Containers share one DB: when ingest_movement applies migration 04 first,
ingest_events and webhook_receiver start later, find distance_m already
renamed, and hit a spurious FATAL until the next restart picks up the
recorded migration row.
Added sentinels for all four migrations so any container self-heals
on first startup regardless of which container ran first:
- 04: trips.distance_km column exists
- 05: tracksolid.device_events table exists
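The four sentinels can be sketched as information_schema probes. The
objects probed are the ones named in the log; the exact query text and
the `run_query` callable are assumptions:

```python
# Map migration id -> SQL probe that returns a row iff that migration's
# effect is already present in the database.
SENTINELS = {
    "02": ("SELECT 1 FROM information_schema.tables "
           "WHERE table_schema = 'tracksolid' AND table_name = 'devices'"),
    "03": ("SELECT 1 FROM information_schema.columns "
           "WHERE table_name = 'position_history' "
           "AND column_name = 'altitude'"),
    "04": ("SELECT 1 FROM information_schema.columns "
           "WHERE table_name = 'trips' AND column_name = 'distance_km'"),
    "05": ("SELECT 1 FROM information_schema.tables "
           "WHERE table_schema = 'tracksolid' "
           "AND table_name = 'device_events'"),
}

def already_applied(run_query, migration_id: str) -> bool:
    """True if the sentinel object for this migration already exists."""
    probe = SENTINELS.get(migration_id)
    return bool(probe and run_query(probe))
```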
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Migrations 02 and 03 were applied before the schema_migrations tracking
table existed, so they had no record and the runner tried to re-run them,
hitting non-idempotent TimescaleDB policy/trigger/cagg statements.
seed_pre_tracking_migrations() checks for sentinel schema objects and
inserts records for any migration that was clearly already applied:
- 02: tracksolid.devices table exists
- 03: position_history.altitude column exists
Called immediately after ensure_tracking_table() on every startup.
Safe on fresh databases (objects absent → nothing seeded → runs normally).
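The seeding step described above might be structured like this. The
function name is from the commit; the callable parameters and return
value are assumptions made so the sketch is self-contained:

```python
def seed_pre_tracking_migrations(run_query, record_applied,
                                 sentinels, recorded):
    """Backfill schema_migrations rows for migrations that clearly ran
    before the tracking table existed.

    Safe on fresh databases: sentinel objects are absent, nothing is
    seeded, and the migrations then run normally.
    """
    seeded = []
    for migration_id, probe in sentinels.items():
        if migration_id in recorded:
            continue                  # already tracked, nothing to do
        if run_query(probe):          # sentinel object exists -> applied
            record_applied(migration_id)
            seeded.append(migration_id)
    return seeded
```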
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
On a fresh database the tracksolid schema doesn't exist yet:
migration 02 creates it, but ensure_tracking_table() runs first.
Added CREATE SCHEMA IF NOT EXISTS tracksolid before the table DDL.
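A sketch of the fixed ordering. ensure_tracking_table() is the function
named in the commit; the tracking-table column layout is an assumption,
and only the CREATE SCHEMA line is the actual fix:

```python
ENSURE_TRACKING_SQL = [
    # On a fresh database the schema doesn't exist yet (migration 02
    # normally creates it), so create it before the tracking-table DDL.
    "CREATE SCHEMA IF NOT EXISTS tracksolid",
    # Hypothetical column layout for the tracking table.
    "CREATE TABLE IF NOT EXISTS tracksolid.schema_migrations ("
    " filename text PRIMARY KEY,"
    " applied_at timestamptz NOT NULL DEFAULT now())",
]

def ensure_tracking_table(run_sql):
    for stmt in ENSURE_TRACKING_SQL:
        run_sql(stmt)
```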
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add 04_bug_fix_migration.sql and 05_enhancement_migration.sql to list
- Use schema_migrations table to skip already-applied migrations (prevents
migration 04's RENAME from failing on re-run after first deployment)
- Expand CRITICAL_TABLES to include all 5 new tables from migration 05
- record_applied() writes to schema_migrations after each success
- Cleaner output: APPLY / SKIP / OK per file with summary count
On the next Coolify redeploy each container will skip 02-05 (already applied)
and apply any new migrations added in future commits.
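The skip/record loop could look roughly like this; the APPLY/SKIP/OK
output format is from the commit, while the function signature and
callables are assumptions:

```python
def run_migrations(files, applied, apply_fn, record_applied):
    """Apply each migration at most once. Already-applied files are
    skipped, so e.g. migration 04's RENAME never re-runs on redeploy."""
    counts = {"APPLY": 0, "SKIP": 0}
    for filename in files:
        if filename in applied:
            print(f"SKIP  {filename}")
            counts["SKIP"] += 1
            continue
        print(f"APPLY {filename}")
        apply_fn(filename)            # raises on failure
        record_applied(filename)      # recorded only after success
        counts["APPLY"] += 1
    print(f"OK: {counts['APPLY']} applied, {counts['SKIP']} skipped")
    return counts
```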
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Change image from timescaledb-ha:pg16-ts2.15-oss to pg16-ts2.15
(OSS edition lacks compression, retention, continuous aggregates)
- Add postgresql-client to Dockerfile for psql binary
- Rewrite run_migrations.py to use psql instead of psycopg2
(psql runs each statement independently; psycopg2 wraps the
entire file in one transaction so one error rolls back everything)
- Add schema verification: exits 1 if critical tables missing,
preventing services from starting with broken schema
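The psql invocation and the exit-on-missing-tables check might be
sketched as follows. The CRITICAL_TABLES excerpt and all function names
here are assumptions; only the psql-vs-psycopg2 behaviour and the
exit-1 contract come from the commit:

```python
import subprocess

# Excerpt only; the commit's full CRITICAL_TABLES list has more entries.
CRITICAL_TABLES = ["tracksolid.devices", "tracksolid.position_history"]

def psql_command(db_url: str, sql_file: str) -> list:
    # psql runs each statement in its own transaction by default, so one
    # failing statement (e.g. an already-existing object) doesn't roll
    # back the whole file the way a single psycopg2 transaction would.
    return ["psql", db_url, "-v", "ON_ERROR_STOP=0", "-f", sql_file]

def run_sql_file(db_url: str, sql_file: str) -> int:
    return subprocess.run(psql_command(db_url, sql_file)).returncode

def verify_schema(existing_tables) -> int:
    """Return exit code 1 if any critical table is missing, so dependent
    services never start against a broken schema."""
    missing = [t for t in CRITICAL_TABLES if t not in existing_tables]
    if missing:
        print(f"FATAL: missing critical tables: {missing}")
        return 1
    return 0
```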
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- New run_migrations.py: executes 02_*.sql and 03_*.sql in order
- New db_migrate service: runs once before all other services start
- All services now depend on db_migrate (service_completed_successfully)
- Tolerates re-deploy: catches errors from already-existing objects
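In docker-compose terms, the gating described above might look like this
(service names other than db_migrate are taken from the log; build and
command details are placeholders):

```yaml
services:
  db_migrate:
    build: .
    command: python run_migrations.py   # runs once, then exits 0
    restart: "no"

  webhook_receiver:
    build: .
    depends_on:
      db_migrate:
        condition: service_completed_successfully  # wait for exit 0
```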
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>