Commit graph

10 commits

David Kiania
144dedee90 feat(trips): [FIX-M20] enrich tracksolid.trips with coords, route polyline, addresses, plate
Polling jimi.device.track.mileage does not return start/end coordinates,
fuel, idle, or trip sequence — leaving most trip columns NULL. This change
closes those gaps using data we already have in position_history plus a
best-effort Nominatim lookup.

Migration 09_trips_enrichment.sql adds:
  • route_geom (LineString), start_address, end_address, vehicle_plate,
    waypoints_count on tracksolid.trips
  • GIST indexes on the three geometry columns
  • view tracksolid.v_trips_enriched exposing daily_seq + trip_date_eat
    (replaces reliance on the device-supplied trip_seq, which is only
     populated when /pushtripreport fires)

ingest_movement_rev.py::poll_trips now:
  • extracts idleSecond from the poll response (was previously dropped)
  • per-trip: SELECTs start fix, end fix, ST_MakeLine route, and waypoint
    count from position_history within (start_time, end_time)
  • reverse-geocodes start/end via the new ts_shared_rev.reverse_geocode
    helper (Nominatim, LRU-cached at ~11m precision, 1 req/sec, never raises)
  • caches vehicle_plate from a per-cycle plates dict
  • ON CONFLICT preserves webhook-supplied data when /pushtripreport later
    delivers native coords/fuel/trip_seq

backfill_trips_enrichment.py is a one-shot script (dry-run by default,
--apply to commit, --imei / --since flags) that runs the same enrichment
against historical NULL rows and COALESCEs only — never overwrites.
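The COALESCE-only write can be sketched as follows (column names in the SQL fragment are assumptions; the point is that an existing value always beats the enrichment value):

```python
# Hypothetical shape of the conflict clause shared by poll_trips and the
# backfill: COALESCE(existing, new) fills gaps but never overwrites a value
# the /pushtripreport webhook already delivered.
UPSERT_SET = """
ON CONFLICT (imei, start_time) DO UPDATE SET
    start_address = COALESCE(trips.start_address, EXCLUDED.start_address),
    end_address   = COALESCE(trips.end_address,   EXCLUDED.end_address)
"""

def coalesce(*values):
    """Python analogue of SQL COALESCE: return the first non-NULL argument."""
    return next((v for v in values if v is not None), None)
```

With the existing column first, `coalesce("webhook addr", "poll addr")` keeps the webhook value, while `coalesce(None, "poll addr")` fills the gap.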

DWH bronze mirrors and Grafana panels intentionally not touched (frozen
on this branch until the schema work lands).

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-05-01 21:30:20 +03:00
David Kiania
898fd25a5a feat(analytics): Phase 0 — analytics-config migration and CSV importer rewrite
Phase 0 of the three-stakeholder analytics redesign:

- 08_analytics_config.sql: ops.cost_rates + ops.kpi_targets with seed
  fuel rates (KES 195/L NBO+MBA, UGX 5200/L KLA) and 6 seed KPI
  targets (utilisation_pct, idle_pct global+osp-patrol,
  fuel_kes_per_100km, mttr_hours, alarms_per_100km). Granted SELECT to
  grafana_ro. Wired into run_migrations.py MIGRATIONS.
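The seed rates could be shaped roughly like this (a hypothetical Python mirror of the ops.cost_rates rows; the column layout is an assumption, the rates themselves are from this message):

```python
# Assumed row shape: (city, metric, value, currency).
SEED_COST_RATES = [
    ("NBO", "fuel_per_litre", 195.0, "KES"),
    ("MBA", "fuel_per_litre", 195.0, "KES"),
    ("KLA", "fuel_per_litre", 5200.0, "UGX"),
]

def rate_for(city, metric):
    """Look up a seeded rate; returns (value, currency) or None if unseeded."""
    for c, m, value, currency in SEED_COST_RATES:
        if c == city and m == metric:
            return value, currency
    return None
```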

- import_drivers_csv.py: full rewrite for the new Mitieng CSV
  (20260427_FSG_Vehicles_mitieng.csv). Snake_case columns, drops
  _infer_city() plate-prefix logic in favour of reading assigned_city
  directly. Adds cost_centre, assigned_route, vehicle_category,
  vehicle_brand, fuel_100km, depot_address. Treats the literal "NULL"
  string as missing. Reuses clean(), clean_num(), clean_ts(),
  get_conn(), get_logger() from ts_shared_rev. Special-cases numeric
  and timestamptz columns in the UPDATE clause.
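A plausible sketch of the shared cleaners; behaviour beyond the literal-"NULL" rule (case-insensitivity, comma stripping) is an assumption this message does not specify:

```python
def clean(value):
    """Normalise a CSV cell: strip whitespace, map '' and 'NULL' to None."""
    if value is None:
        return None
    value = value.strip()
    if not value or value.upper() == "NULL":
        return None
    return value

def clean_num(value):
    """clean() plus numeric parsing; thousands commas tolerated, garbage -> None."""
    value = clean(value)
    if value is None:
        return None
    try:
        return float(value.replace(",", ""))
    except ValueError:
        return None
```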

- audit_device_reconciliation.py: read-only audit comparing the CSV
  against tracksolid.devices. Reports per-account row counts, IMEIs
  on one side only, and devices on both sides whose metadata is still
  NULL.
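The IMEI comparison at the heart of the audit reduces to set arithmetic; a minimal sketch (function name hypothetical):

```python
def reconcile(db_imeis, csv_imeis):
    """Partition IMEIs the way the audit report does: each side only, plus both."""
    return {
        "db_only": db_imeis - csv_imeis,   # devices in tracksolid.devices only
        "csv_only": csv_imeis - db_imeis,  # rows in the Mitieng CSV only
        "both": db_imeis & csv_imeis,      # candidates for metadata fill
    }
```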

- 260427_device_reconciliation.md + 260427_audit_output.txt: Phase 0.2
  reconciliation record. First run: DB has 172 devices, CSV has 162,
  delta +10 (10 IMEIs in DB-only, mostly fireside-account auto-syncs).
  Importer run with --only-null --apply filled 154 rows; coverage now
  assigned_city 152/172, cost_centre 150/172.

Applied to stage on 2026-04-27 23:35 UTC.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
2026-04-27 23:42:37 +03:00
David Kiania
85d02c81a5 feat: Daily Operations dashboard + tracksolid analytics views
Add a second Grafana dashboard focused on daily operational KPIs and live
dispatch, keeping the NOC Live dashboard untouched.

- grafana/provisioning/dashboards-json/daily_operations_dashboard.json
  New dashboard covering §7 Blueprint Panels 3-8 and the §4 dispatch lens:
  freshness banner, today-at-a-glance stat row, active vehicles map,
  currently-idle table, vehicles-not-moved-today, per-vehicle daily KPI
  roll-up, driver behaviour leaderboard, distance trend, alarm frequency,
  idle cost MTD, utilisation heatmap, SLA row (collapsed, data-gated).

- 07_analytics_views.sql
  Nine views in tracksolid.* wrapping the BA-file [DASHBOARD]-tagged
  queries. Each view carries COMMENT ON VIEW with its spec section.
  SELECT granted to grafana_ro. Smoke-tested against live DB.
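A hedged template for what one of the nine views might look like; the view name and query body are invented, only the COMMENT ON VIEW and GRANT pattern comes from this message:

```python
# Hypothetical single-view fragment from 07_analytics_views.sql, held as a
# Python string the way run_migrations.py would ship it.
VIEW_TEMPLATE = """
CREATE OR REPLACE VIEW tracksolid.v_example AS
SELECT imei, count(*) AS fixes
FROM tracksolid.position_history
GROUP BY imei;

COMMENT ON VIEW tracksolid.v_example IS 'BA spec section reference goes here';

GRANT SELECT ON tracksolid.v_example TO grafana_ro;
"""
```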

- run_migrations.py
  Register 06 and 07 in MIGRATIONS list with idempotent seed checks so
  future fresh deploys apply them correctly.

- CLAUDE.md
  Retire the tracksolid_2 schema references (schema no longer exists);
  §9 Fleet State dated 2026-04-19 with correct pipeline status (running,
  875 runs/24h, 0 failures) and accurate position_history row counts
  (hypertable stats don't show in pg_stat_user_tables).

- docs/superpowers/specs/2026-04-19-daily-operations-dashboard-design.md
  Design spec covering architecture, views, panel layout, deployment,
  rollback, and known data gaps.
2026-04-19 13:44:18 +03:00
David Kiania
d706d17cc8 Fix Grafana datasource: add GRAFANA_DB_RO_PASSWORD and sync grafana_ro on startup
grafana_ro DB role was created with placeholder password 'SET_PASSWORD_IN_ENV'
and GRAFANA_DB_RO_PASSWORD was never set in .env, so Grafana's TracksolidDB
datasource could not authenticate — causing 'Failed to load home dashboard'.

Fix:
- Add GRAFANA_DB_RO_PASSWORD to .env with a secure generated password
- Add sync_role_passwords() to run_migrations.py — runs ALTER ROLE on every
  startup so DB password stays in sync with the env var (idempotent)
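sync_role_passwords() might look roughly like this; the statement construction and wiring are assumptions, only the role name and env var come from this message:

```python
import os

def build_alter_role(role, password):
    """Build an idempotent ALTER ROLE statement, safe to run every startup."""
    escaped = password.replace("'", "''")  # escape quotes for a SQL literal
    return f"ALTER ROLE {role} WITH PASSWORD '{escaped}'"

def sync_role_passwords(execute):
    """Re-assert the grafana_ro password from the env var, if one is set."""
    pw = os.environ.get("GRAFANA_DB_RO_PASSWORD")
    if pw:
        execute(build_alter_role("grafana_ro", pw))
```

ALTER ROLE with the same password is a no-op from the application's point of view, which is what makes running it on every startup safe.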

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-11 19:22:30 +03:00
David Kiania
5f1b32f1dc Extend seed sentinels to cover migrations 04 and 05
Containers share one DB: when ingest_movement applies 04 first, ingest_events
and webhook_receiver start later, find distance_m already renamed, and fail
with a spurious FATAL until the next restart picks up the recorded row.

Added sentinels for all four migrations so any container self-heals
on first startup regardless of which container ran first:
  04 — trips.distance_km column exists
  05 — tracksolid.device_events table exists

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 23:48:30 +03:00
David Kiania
5d47eece6b Fix: seed pre-tracking migrations to skip already-applied 02 and 03
Migration 02 and 03 were applied before the schema_migrations tracking
table existed, so they had no record and the runner tried to re-run them,
hitting non-idempotent TimescaleDB policy/trigger/cagg statements.

seed_pre_tracking_migrations() checks for sentinel schema objects and
inserts records for any migration that was clearly already applied:
  - 02: tracksolid.devices table exists
  - 03: position_history.altitude column exists

Called immediately after ensure_tracking_table() on every startup.
Safe on fresh databases (objects absent → nothing seeded → runs normally).
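The sentinel mechanism (extended to 04 and 05 in the next commit up) can be sketched like this, with the existence checks abstracted behind a callback; the sentinel objects come from these messages, everything else is assumed:

```python
# Each pre-tracking migration maps to a schema object whose existence
# proves it already ran.
SENTINELS = {
    "02": ("table", "tracksolid.devices"),
    "03": ("column", "position_history.altitude"),
    "04": ("column", "trips.distance_km"),
    "05": ("table", "tracksolid.device_events"),
}

def seed_pre_tracking_migrations(object_exists, record):
    """Record any migration whose sentinel object already exists; return them."""
    seeded = []
    for migration, sentinel in SENTINELS.items():
        if object_exists(*sentinel):
            record(migration)
            seeded.append(migration)
    return seeded
```

On a fresh database every check returns False and nothing is seeded, so the runner applies the files normally.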

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 23:43:44 +03:00
David Kiania
63e555b822 Fix: create tracksolid schema before schema_migrations table
On a fresh database the tracksolid schema doesn't exist yet —
migration 02 creates it, but ensure_tracking_table() ran first.
Added CREATE SCHEMA IF NOT EXISTS tracksolid before the table DDL.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 23:40:32 +03:00
David Kiania
aa290151ea Update run_migrations.py: add 04+05, idempotency tracking, expanded verify
- Add 04_bug_fix_migration.sql and 05_enhancement_migration.sql to list
- Use schema_migrations table to skip already-applied migrations (prevents
  migration 04's RENAME from failing on re-run after first deployment)
- Expand CRITICAL_TABLES to include all 5 new tables from migration 05
- record_applied() writes to schema_migrations after each success
- Cleaner output: APPLY / SKIP / OK per file with summary count

On next Coolify redeploy each container will skip 02-05 (already applied)
and apply any new migrations added in future commits.
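The skip decision can be sketched as a pure function (hypothetical; only the APPLY/SKIP vocabulary comes from this message):

```python
def plan(migrations, applied):
    """Return (action, migration) pairs: SKIP if recorded, APPLY otherwise."""
    return [("SKIP" if m in applied else "APPLY", m) for m in migrations]
```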

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-10 23:34:57 +03:00
David Kiania
326764e1a0 Fix migration failures: switch to full TimescaleDB + use psql runner
- Change image from timescaledb-ha:pg16-ts2.15-oss to pg16-ts2.15
  (OSS edition lacks compression, retention, continuous aggregates)
- Add postgresql-client to Dockerfile for psql binary
- Rewrite run_migrations.py to use psql instead of psycopg2
  (psql runs each statement independently; psycopg2 wraps the
  entire file in one transaction so one error rolls back everything)
- Add schema verification: exits 1 if critical tables missing,
  preventing services from starting with broken schema
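The psql-per-file approach might look roughly like this (flags and DSN handling are assumptions):

```python
import subprocess

def psql_command(path, dsn):
    """Build the psql invocation for one migration file."""
    return ["psql", dsn, "--no-psqlrc", "-f", path]

def run_sql_file(path, dsn):
    """Apply one file via psql; each statement commits independently,
    so an error in one statement doesn't roll back the rest of the file."""
    result = subprocess.run(psql_command(path, dsn), capture_output=True, text=True)
    return result.returncode == 0
```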

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-08 17:17:58 +03:00
David Kiania
4a31de30b1 Add db_migrate init service to auto-run SQL schema on deploy
- New run_migrations.py: executes 02_*.sql and 03_*.sql in order
- New db_migrate service: runs once before all other services start
- All services now depend on db_migrate (service_completed_successfully)
- Tolerates re-deploy: catches errors from already-existing objects

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-08 17:02:09 +03:00