Grafana's lengthkm and h units auto-scale with SI prefixes, so fleet km
totals rendered as "Mm" (megametres) and drive-hour totals collapsed into
days/weeks — misleading at a glance on the Daily Ops dashboard. Switched
the affected panels (Fleet km today, Drive/Idle hours today, the per-vehicle
roll-up table, the driver leaderboard, and the 7-day distance trend) to
unit "none" with decimals: 1, so values stay in plain km and hours with
units carried by panel titles and column displayNames.
Geomap view recentred to lat -2.0, lon 35.5, zoom 5 with minZoom 5 /
maxZoom 12 so the Active Vehicles map opens on the East African Community
region and cannot zoom out past it.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- Default TZ=Africa/Nairobi baked into the sidecar image; override via
compose TZ env var if another region is ever needed.
- Rename BACKUP_TIMES_UTC → BACKUP_TIMES (legacy var still honored for
back-compat). Times are now interpreted in the container's local TZ,
so "02:30" means 02:30 EAT, not UTC.
- Log timestamps and dump filenames use %FT%T%z / %Y%m%d_%H%M%S_%Z
(e.g. tracksolid_db_20260424_115729_EAT.sql.gz) so the TZ is visible
on every artifact.
- Prune cutoff computed in local time; YYYYMMDD regex unchanged so it
still matches legacy UTC filenames during the transition.
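The artifact-naming scheme above can be sketched as follows; the helper name and DB name are illustrative, not the exact sidecar code:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def dump_filename(db: str, now: datetime) -> str:
    """Build a dump name with the TZ abbreviation embedded (sketch).

    %Z renders the zone abbreviation (e.g. EAT for Africa/Nairobi),
    so the timezone is visible on every artifact.
    """
    return f"{db}_{now.strftime('%Y%m%d_%H%M%S_%Z')}.sql.gz"

name = dump_filename(
    "tracksolid_db",
    datetime(2026, 4, 24, 11, 57, 29, tzinfo=ZoneInfo("Africa/Nairobi")),
)
```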
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Replace the single BACKUP_HOUR/BACKUP_MINUTE slot with a comma-separated
list of UTC times. Scheduler walks all slots and sleeps until the soonest
future one, so four daily backups become a one-line env change:
BACKUP_TIMES_UTC=02:30,08:30,14:30,20:30 (default)
Legacy BACKUP_HOUR/BACKUP_MINUTE still honored as a single slot for
backwards compatibility with existing .env files.
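The slot-walking logic can be sketched like this (a minimal version of the scheduler described above, not the exact sidecar code):

```python
from datetime import datetime, timedelta

def next_slot(now: datetime, slots_csv: str) -> datetime:
    """Return the soonest future run time from a comma-separated
    HH:MM list. Slots already past today roll over to tomorrow."""
    candidates = []
    for slot in slots_csv.split(","):
        hh, mm = slot.strip().split(":")
        t = now.replace(hour=int(hh), minute=int(mm),
                        second=0, microsecond=0)
        if t <= now:                    # already passed -> tomorrow
            t += timedelta(days=1)
        candidates.append(t)
    return min(candidates)

# At 09:00 the soonest future slot of the default list is 14:30 today.
run = next_slot(datetime(2026, 4, 24, 9, 0), "02:30,08:30,14:30,20:30")
```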
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
The timescale/timescaledb-ha image uses /home/postgres/pgdata/data as
PGDATA, not /var/lib/postgresql/data. The previous mount pointed at an
empty directory that postgres never wrote to, so Coolify redeploys
destroyed all data with the container's overlay filesystem.
Pin PGDATA explicitly and move the named timescale-data volume to
/home/postgres/pgdata so the real data dir is persisted.
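A compose fragment showing the shape of the fix (service name, volume name, and image tag are assumptions from this stack, not verbatim):

```yaml
timescale_db:
  image: timescale/timescaledb-ha:pg16
  environment:
    PGDATA: /home/postgres/pgdata/data      # pin explicitly
  volumes:
    # Mount the real data dir, not /var/lib/postgresql/data,
    # so redeploys no longer wipe it with the overlay filesystem.
    - timescale-data:/home/postgres/pgdata
```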
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Fleet lives across three Tracksolid sub-accounts:
fireside — 63 devices
Fireside@HQ — 52 devices
Fireside_MSA — 41 devices
Previously sync_devices / poll_live_positions / poll_parking only
queried a single TARGET_ACCOUNT, so ~64% of the fleet was invisible to
the pipeline.
Changes:
- ts_shared_rev.py: new TARGETS list (env TRACKSOLID_TARGETS,
comma-separated; falls back to the single TARGET_ACCOUNT).
- ts_shared_rev.py: new get_active_imeis_by_target() helper that
groups active IMEIs by their stored account so parking calls can
pass the right account param per batch.
- ingest_movement_rev.py: sync_devices and poll_live_positions loop
over every target and dedupe by IMEI before upserting. poll_parking
loops over imeis_by_target so each batch carries the matching
account.
- CLAUDE.md: FIX-M19 entry.
Requires new env var TRACKSOLID_TARGETS="fireside,Fireside@HQ,Fireside_MSA"
on the ingest services in Coolify.
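The TARGETS resolution can be sketched as below; variable names follow the commit, the exact default handling is an assumption:

```python
import os

def resolve_targets() -> list[str]:
    """Read TRACKSOLID_TARGETS (comma-separated); fall back to the
    legacy single TARGET_ACCOUNT when it is unset."""
    raw = os.getenv("TRACKSOLID_TARGETS", "")
    if raw.strip():
        return [t.strip() for t in raw.split(",") if t.strip()]
    single = os.getenv("TARGET_ACCOUNT")
    return [single] if single else []

os.environ["TRACKSOLID_TARGETS"] = "fireside,Fireside@HQ,Fireside_MSA"
targets = resolve_targets()
```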
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
jimi.user.device.list returns null for vehicleName, vehicleNumber,
driverName, driverPhone, and sim even after those fields are set via
jimi.open.device.update — the values only surface through
jimi.track.device.detail. sync_devices() now reads from dtl first with
d as fallback, which unblocks backfill of the 144 CSV-driven updates
pushed on 2026-04-22.
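The dtl-first read can be sketched as a small merge helper; the field names come from the commit, the helper and sample values are illustrative:

```python
def pick(dtl: dict, d: dict, key: str):
    """Prefer the jimi.track.device.detail value; fall back to the
    jimi.user.device.list row only when detail is null/missing."""
    val = dtl.get(key)
    return val if val not in (None, "") else d.get(key)

d = {"imei": "123", "vehicleName": None, "sim": None}       # list row
dtl = {"vehicleName": "KMGW 538W", "sim": "0700000001"}     # detail row
row = {k: pick(dtl, d, k) for k in ("imei", "vehicleName", "sim")}
```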
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Adds a `db_backup` sidecar that dumps tracksolid_db every night at
02:30 UTC (configurable via BACKUP_HOUR/BACKUP_MINUTE), gzips the
output, and uploads to s3://fleet-db/daily/<dbname>_<ts>.sql.gz on
the rustfs S3-compatible instance (s3.rahamafresh.com). Prunes
objects older than BACKUP_KEEP_DAYS (default 30).
Required .env additions (Coolify UI):
RUSTFS_ENDPOINT=https://s3.rahamafresh.com
RUSTFS_ACCESS_KEY=...
RUSTFS_SECRET_KEY=...
RUSTFS_BUCKET=fleet-db
Mitigates data loss when Coolify service recreation wipes the
service-ID-scoped timescale-data volume.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Three changes that together close the FK-violation loop on /pushalarm:
1. import_drivers_csv.py: when an IMEI is in the CSV but not in
tracksolid.devices, INSERT a new row instead of skipping. Unblocks
the 140 X3/JC400P devices listed as a HIGH open item in CLAUDE.md §10.
2. webhook_receiver_rev.py: new _ensure_device() helper upserts a stub
devices row (status='unknown') before inserting an alarm. Handles the
third class of devices — not in API sync, not in CSV (e.g. the
X3-63282 Kampala device flagged in CLAUDE.md §10).
3. CSV refreshed from Downloads (Apr 21 version, 140 active rows).
Also fixes the alarm error log, which previously showed "None" (it read
deviceImei instead of the integration push's imei field).
CSV import already applied live on the instance (63 → 201 devices).
Webhook patch requires a Coolify redeploy to pick up _ensure_device().
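The shape of the _ensure_device() upsert looks roughly like this; the column list beyond imei/status is an assumption:

```python
# Stub upsert: guarantees a devices row exists before the alarm INSERT,
# so the FK constraint can't fire. ON CONFLICT DO NOTHING keeps real
# rows (synced from the API or CSV) untouched.
ENSURE_DEVICE_SQL = """
    INSERT INTO tracksolid.devices (imei, status)
    VALUES (%s, 'unknown')
    ON CONFLICT (imei) DO NOTHING
"""

def ensure_device(cur, imei: str) -> None:
    # cur is a psycopg2-style cursor in the real handler.
    cur.execute(ENSURE_DEVICE_SQL, (imei,))
```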
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Diagnostic logging revealed the real Jimi integration push format:
Content-Type: application/x-www-form-urlencoded
Body: msgType=jimi.push.device.alarm&data=<URL-encoded JSON>
Differences from docs:
- data is one JSON object per POST (not a data_list array)
- alarm uses imei+alarmTime, NOT deviceImei+gateTime
_parse_request now reads form field `data` (falls back to `data_list`) and
JSON-decodes a single object or array. push_alarm handler accepts either
field naming for forward-compat.
Removes diagnostic INFO log now that format is confirmed.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Temporary diagnostic to see what format Jimi actually sends on /pushalarm.
New container is parsing to empty items (pushes arrive but no DB insert),
so we need to see the real body shape. Remove once format is confirmed.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Jimi's integration push API (tracksolidprodocs.jimicloud.com) sends
Content-Type: application/json with body {"token":"...","data_list":[...]},
not form-encoded. FastAPI Form() silently defaulted to "" so all pushes
were discarded with "Failed to parse data_list:" warnings.
Replaces per-endpoint Form() params with a shared _parse_request() helper
that tries JSON body first, falls back to form-encoded. All seven push
endpoints (pushobd, pushfaultinfo, pushalarm, pushgps, pushhb,
pushtripreport, pushevent) updated.
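The dual-format parse can be sketched as below (JSON body first, form field `data` with a `data_list` fallback second); this is a plain-function sketch, not the exact FastAPI handler:

```python
import json
from urllib.parse import parse_qs

def parse_push(content_type: str, body: bytes) -> list:
    """Return push items from either body shape Jimi actually sends."""
    if content_type.startswith("application/json"):
        payload = json.loads(body)
        items = payload.get("data_list", payload.get("data", []))
    else:
        form = parse_qs(body.decode())
        raw = (form.get("data") or form.get("data_list") or [""])[0]
        items = json.loads(raw) if raw else []
    # Normalise: one JSON object per POST becomes a one-item list.
    return items if isinstance(items, list) else [items]

form_body = b"msgType=jimi.push.device.alarm&data=%7B%22imei%22%3A%22861%22%7D"
items = parse_push("application/x-www-form-urlencoded", form_body)
```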
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Add a second Grafana dashboard focused on daily operational KPIs and live
dispatch, keeping the NOC Live dashboard untouched.
- grafana/provisioning/dashboards-json/daily_operations_dashboard.json
New dashboard covering §7 Blueprint Panels 3-8 and the §4 dispatch lens:
freshness banner, today-at-a-glance stat row, active vehicles map,
currently-idle table, vehicles-not-moved-today, per-vehicle daily KPI
roll-up, driver behaviour leaderboard, distance trend, alarm frequency,
idle cost MTD, utilisation heatmap, SLA row (collapsed, data-gated).
- 07_analytics_views.sql
Nine views in tracksolid.* wrapping the BA-file [DASHBOARD]-tagged
queries. Each view carries COMMENT ON VIEW with its spec section.
SELECT granted to grafana_ro. Smoke-tested against live DB.
- run_migrations.py
Register 06 and 07 in MIGRATIONS list with idempotent seed checks so
future fresh deploys apply them correctly.
- CLAUDE.md
Retire the tracksolid_2 schema references (schema no longer exists);
§9 Fleet State dated 2026-04-19 with correct pipeline status (running,
875 runs/24h, 0 failures) and accurate position_history row counts
(hypertable stats don't show in pg_stat_user_tables).
- docs/superpowers/specs/2026-04-19-daily-operations-dashboard-design.md
Design spec covering architecture, views, panel layout, deployment,
rollback, and known data gaps.
Full slide-by-slide copy for elicitation pitch: 6 pain questions, feature
reveal, business case, optional add-ons (RustFS + DuckDB), and one-pager.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
RustFS (S3-compatible blob) and DuckDB (historical analytics) added as
optional add-on tiers with elicitation pain questions and tier model.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Ingest scripts were connecting to the old tracksolid_2 database instead of
the timescale_db container in this stack. Grafana was already correct
(uses service name timescale_db:5432). Also strip leading space and quotes
from DATABASE_URL and API_BASE_URL so os.getenv() returns clean values.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Maps host port 5888 → container port 5432 so the DB can be reached
directly from the MacBook (requires UFW allow 5888/tcp on the server).
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
- §3: note tracksolid_2 as live schema, tracksolid as empty target;
add DB direct access tip (31.97.44.246:5888, leading space in .env)
- §4: add import_drivers_csv.py and migration 06 to codebase map
- §5: document tracksolid_2 live tables with column differences
(assigned_team vs cost_centre, city vs assigned_city); add ops.*
- §8: add rule 9 (Forgejo API auth via keychain) and rule 10
(always check active schema before querying)
- §9: update fleet state — pipeline stopped Apr 6, CSV fleet pending,
0 driver names, 19 stale positions
- §10: replace driver-name manual item with deploy + CSV import tasks
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
Audit fixes across the ingestion stack:
Observability
- Move log_ingestion out of batch loops in poll_alarms and poll_parking
(was emitting N cumulative log rows per run instead of one).
- Add missing log_ingestion + t0 to poll_trips.
- Count inserted via cur.rowcount instead of naive +=1 so ON CONFLICT
DO NOTHING no longer inflates the metric.
Resilience
- SAVEPOINT-per-item added to poll_alarms, poll_live_positions,
poll_trips, poll_parking so one bad row no longer aborts the batch
(webhook handlers already had this; pollers were inconsistent).
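The SAVEPOINT-per-item pattern can be sketched as follows; the fake cursor and insert callback stand in for a live psycopg2 cursor and the real per-row INSERT:

```python
def insert_batch(cur, items, insert_item) -> int:
    """Insert items one by one under savepoints; return success count."""
    ok = 0
    for item in items:
        cur.execute("SAVEPOINT item")
        try:
            insert_item(cur, item)
            cur.execute("RELEASE SAVEPOINT item")
            ok += 1
        except Exception:
            # One bad row rolls back to its savepoint, not the batch.
            cur.execute("ROLLBACK TO SAVEPOINT item")
    return ok

class _FakeCur:                     # stands in for a live cursor
    def execute(self, sql, params=None):
        pass

def _write(cur, item):
    if item == "bad":
        raise ValueError("simulated FK violation")

count = insert_batch(_FakeCur(), ["a", "bad", "c"], _write)
```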
Performance
- /pushgps and poll_track_list now use psycopg2.extras.execute_values
with ON CONFLICT DO NOTHING — 10-50x write throughput on larger
batches.
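The batched write looks roughly like this; table and column names are assumptions, and the execute_values call is shown commented since it needs a live connection:

```python
# One VALUES %s template; execute_values expands the whole batch into
# a single statement instead of N round-trips.
INSERT_SQL = """
    INSERT INTO tracksolid.position_history (imei, lat, lng, gps_time)
    VALUES %s
    ON CONFLICT DO NOTHING
"""

def to_rows(points: list[dict]) -> list[tuple]:
    return [(p["imei"], p["lat"], p["lng"], p["gpsTime"]) for p in points]

rows = to_rows([{"imei": "861", "lat": -1.3, "lng": 36.8,
                 "gpsTime": "2026-04-24 11:00:00"}])
# With a live connection:
#   from psycopg2.extras import execute_values
#   execute_values(cur, INSERT_SQL, rows)
```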
- sync_devices and sync_driver_audit fetch jimi.track.device.detail
concurrently via ThreadPoolExecutor(max_workers=8), cutting the
daily registry sync from ~24s to ~3s for an 80-device fleet.
- poll_track_list split into two phases: parallel API fetch (4 workers,
no DB connection held) then one batched write. Previously the DB
connection was held across every per-IMEI HTTP call, risking pool
starvation.
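The concurrent fetch phase can be sketched as below; fetch_detail stands in for the real jimi.track.device.detail API call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(imeis, fetch_detail, workers=8):
    """Fetch device details in parallel; no DB connection is held
    while the HTTP calls are in flight."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(zip(imeis, pool.map(fetch_detail, imeis)))

details = fetch_all(["861", "862"], lambda imei: {"imei": imei})
```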
Security
- _validate_token uses hmac.compare_digest for constant-time token
comparison (closes timing side-channel).
- _parse_data_list caps incoming items at WEBHOOK_MAX_ITEMS (default
5000) so a pathological push cannot blow memory.
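The constant-time comparison in _validate_token is essentially this (the expected token would come from config/env in the real handler):

```python
import hmac

def validate_token(received: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the
    # strings first differ, closing the timing side-channel that a
    # plain == comparison leaks.
    return hmac.compare_digest(received.encode(), expected.encode())

ok = validate_token("s3cret", "s3cret")
bad = validate_token("s3cre7", "s3cret")
```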
Tests
- Fix test_null_alarm_type_skipped: its INSERT-count assertion was
catching the ingestion_log insert written by log_ingestion. Filter
that out so the test checks only data-table inserts.
- Full suite: 66 passed.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
57 unit tests covering clean helpers, API signing, and field mapping fixes
(FIX-E06, FIX-M16, BUG-01, BUG-03); integration tests for webhook endpoints
with mocked DB; Forgejo CI workflow with TimescaleDB service container.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
CLAUDE.md: cached context file covering project identity, tech stack,
codebase map, schema quick-ref, API gotchas, fix history, working rules,
fleet state, and open items. Structured for maximum cache efficiency —
stable content first, dynamic state at the end.
docs/CONNECTIONS.md: connection parameter shapes (no secrets) for SSH,
DB, API, container resolution, Forgejo, Grafana, n8n.
docs/PROJECT_CONTEXT.md: client business context (telco field service,
3 cities, service types), data quality gaps, KPI framework by domain,
integration roadmap.
docs/KPI_FRAMEWORK.md: living KPI register with status tracking,
thresholds, client feedback log, and review checklist. To be co-developed
with client iteratively.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Post-deployment snapshot at ~00:15 EAT 2026-04-12. Key changes vs 260410:
- 3 trips recorded (FRED KMGW 538W HULETI, 6.94 km total) — pipeline validated
- FIX-M16 distance unit fix confirmed: implied speed matches API avgSpeed exactly
- 70 track_list fixes in 24h (was 13) — dense trail from active driving
- KDK 829A GP returned to primary depot from secondary Nairobi East cluster
- Uganda anomaly (X3-63282) persists — flagged for management
- Driver name root cause confirmed: not assigned in Tracksolid Pro UI
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
[FIX-M16] jimi.device.track.mileage returns distance in metres despite
docs claiming km. Confirmed: avgSpeed × runTimeSecond / 3600 = distance/1000.
poll_trips() now divides raw value by 1000 before storing as distance_km.
3 existing bad rows corrected in prod DB (distance_km / 1000).
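The confirmation arithmetic works out as below (numbers here are illustrative, not from the prod rows):

```python
# If the API "distance" were km, implied speed would be 1000x off;
# treating it as metres makes implied speed match avgSpeed exactly.
run_time_s = 1800            # 30-minute trip
avg_speed_kmh = 40.0         # avgSpeed from the API
raw_distance = 20_000.0      # API distance field (metres)

distance_km = raw_distance / 1000                  # FIX-M16 conversion
implied_speed = distance_km / (run_time_s / 3600)  # km / hours
```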
[FIX-M17] sync_devices() ON CONFLICT clause was only updating 5 of 26
fields, silently dropping driver_phone, sim, iccid, vehicle_name, status
etc. on subsequent syncs. Expanded to update all device fields so driver
assignments made in Tracksolid Pro UI propagate to DB on next daily sync.
Add sync_driver_audit.py: one-shot script to compare API vs DB device
registry, report driver/IMEI gaps, and force a full field upsert.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Coolify only copies docker-compose.yaml and .env to its working directory —
the ./grafana/provisioning bind mount source was always empty on the server,
so Grafana started with no datasource or dashboard configured (causing the
'Failed to load home dashboard' error).
Fix: build a custom Grafana image (grafana/Dockerfile) that COPYs the
provisioning directory at image build time. Grafana substitutes
${GRAFANA_DB_RO_PASSWORD} at startup from the env var now in Coolify's store.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
grafana_ro DB role was created with placeholder password 'SET_PASSWORD_IN_ENV'
and GRAFANA_DB_RO_PASSWORD was never set in .env, so Grafana's TracksolidDB
datasource could not authenticate — causing 'Failed to load home dashboard'.
Fix:
- Add GRAFANA_DB_RO_PASSWORD to .env with a secure generated password
- Add sync_role_passwords() to run_migrations.py — runs ALTER ROLE on every
startup so DB password stays in sync with the env var (idempotent)
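The sync can be sketched as below; the role/env-var pair is the one named above, the recording cursor stands in for a live psycopg2 cursor:

```python
import os

ROLE_ENV = {"grafana_ro": "GRAFANA_DB_RO_PASSWORD"}

def sync_role_passwords(cur):
    """Re-assert each role's password from its env var on startup.
    ALTER ROLE with the same value is a no-op, so this is idempotent."""
    for role, env_var in ROLE_ENV.items():
        pw = os.getenv(env_var)
        if pw:
            cur.execute(f"ALTER ROLE {role} WITH PASSWORD %s", (pw,))

class _Rec:                          # records calls for the sketch
    def __init__(self):
        self.calls = []
    def execute(self, sql, params=None):
        self.calls.append((sql, params))

os.environ["GRAFANA_DB_RO_PASSWORD"] = "pw-example"
cur = _Rec()
sync_role_passwords(cur)
```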
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
LOGIN/LOGOUT events from Jimi now persist to tracksolid.device_events.
Table already existed with correct schema (imei, event_type, event_time,
timezone, unique constraint). Follows same SAVEPOINT + log_ingestion
pattern as all other DB-writing endpoints.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
BUG-01: OBD event_time — try unix_to_ts before clean_ts (Jimi sends epoch ints)
BUG-02: push_alarm — guard alarm_type not null (NULL breaks ON CONFLICT dedup)
BUG-03: push_trip_report — _parse_trip_ts handles Jimi BCD format YYMMDDHHmmss
BUG-04: SAVEPOINT per item in all 5 DB endpoints (FK violation on one item no
longer aborts the whole batch; SAVEPOINT now inside try for safety)
BUG-05: Add /pushevent endpoint (log-only; was returning 404 to Jimi)
FIX: push_fault_info — skip null fault_code (NULL != NULL in PG unique index)
FIX: log_ingestion — pass SQL NULL not string "None" when no error occurred
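The BUG-03 parse can be sketched as below; the century pivot follows strptime's %y rule (00-68 maps to 20xx), which is an assumption about what the real _parse_trip_ts does:

```python
from datetime import datetime

def parse_trip_ts(raw: str) -> datetime:
    """Parse Jimi's BCD-style trip timestamp, e.g. '260424115729'
    -> 2026-04-24 11:57:29."""
    return datetime.strptime(raw, "%y%m%d%H%M%S")

ts = parse_trip_ts("260424115729")
```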
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Reflects accurate field names, behaviours, and status from production:
Polling endpoints:
- 5.1 location.list: add full response schema (direction, gpsSignal, gpsNum,
powerValue, elecQuantity, posType, locDesc); add implementation note
(311 calls, ~19 devices/sweep, ~200ms, missing devices silently omitted)
- 5.4 track.mileage: add maxSpeed field (BUG-03); add distance unit note
(BUG-02 — values are km from API, corrected via migration 04)
- 5.5 track.list: add altitude/satellite fields; add POLL-01 implementation
note (30-min schedule, 35-min lookback, source='track_list', ~137s/call)
- 5.7 parking: clarify acc_type=0 required; note durSecond vs stopSecond;
add POLL-02 production status (60 calls, 0 rows, overnight expected)
- Rate limits: document track.list latency (~137s per call)
Alarms:
- 6.1: replace vague note with explicit poll-vs-push field name table
(alertTypeId/alarmTypeName vs alarmType/alarmName); confirm BUG-01 fix
verified in production (type 3 / "Vibration alert" now stored correctly)
Webhooks:
- 10.1 /pushevent: mark implemented (PUSH-01), db table
- 10.2 /pushhb: mark as not yet wired, table ready
- 10.4 /pushalarm: mark implemented, cross-ref field name table
- 10.7 /pushoil: mark implemented (PUSH-02), unit int→text note
- 10.9 /pushtem: mark implemented (PUSH-03)
- 10.10 /pushlbs: mark implemented (PUSH-04)
- 10.20 /pushobd: mark implemented, document OBD scalar extraction
- 10.21 /pushfaultinfo: mark not yet wired, table ready
- 10.22 /pushtripreport: mark implemented
Appendix B: full rewrite — split into polling and push tables with
accurate status (✅/⚠️/not used), call counts, and DB table references
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Full live-query refresh against tracksolid_db at 07:38 EAT 2026-04-11.
All data sourced directly from the server via 10 targeted psql queries.
Report covers: all 17 table row counts, full 63-device registry with
odometer/SIM/expiry, live position detail for all 19 reporting devices
with GPS signal quality, geographic cluster map, position_history by
source (poll=124 / track_list=13 = 137 total), alarm detail confirming
BUG-01 fix, ingestion log health (399 calls, 0 failures), subscription
status breakdown, silent device full list (44 devices), schema additions
verification, Grafana readiness matrix, and P0/P1/P2 action plan.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>