feat: nightly pg_dump sidecar → rustfs fleet-db #10

Merged
kianiadee merged 6 commits from quality-program-2026-04-12 into main 2026-04-24 08:12:09 +00:00

Adds a db_backup sidecar that dumps tracksolid_db nightly at 02:30 UTC, gzips the output, and uploads it to s3://fleet-db/daily/ on rustfs. Keeps 30 days of backups. Requires RUSTFS_ENDPOINT/ACCESS_KEY/SECRET_KEY/BUCKET in the Coolify .env before redeploy.

kianiadee added 1 commit 2026-04-21 09:53:37 +00:00
feat: nightly pg_dump sidecar uploads to rustfs fleet-db bucket
108c1be057
Adds a `db_backup` sidecar that dumps tracksolid_db every night at
02:30 UTC (configurable via BACKUP_HOUR/BACKUP_MINUTE), gzips the
output, and uploads to s3://fleet-db/daily/<dbname>_<ts>.sql.gz on
the rustfs S3-compatible instance (s3.rahamafresh.com). Prunes
objects older than BACKUP_KEEP_DAYS (default 30).

Required .env additions (Coolify UI):
  RUSTFS_ENDPOINT=https://s3.rahamafresh.com
  RUSTFS_ACCESS_KEY=...
  RUSTFS_SECRET_KEY=...
  RUSTFS_BUCKET=fleet-db

Mitigates data loss when Coolify service recreation wipes the
service-ID-scoped timescale-data volume.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
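
A minimal sketch of what one backup cycle could look like, assuming boto3 and the pg_dump binary are available in the sidecar container. The db host, user, and POSTGRES_DB variable name are illustrative; only the RUSTFS_* variables, BACKUP_KEEP_DAYS, and the daily/<dbname>_<ts>.sql.gz key layout come from the commit message:

    # Hypothetical sketch of one backup cycle -- not the actual sidecar script.
    # Assumes PGPASSWORD (or a .pgpass file) is set for pg_dump authentication.
    import gzip
    import os
    import subprocess
    from datetime import datetime, timedelta, timezone

    import boto3

    def run_backup_once():
        db = os.environ.get("POSTGRES_DB", "tracksolid_db")         # var name assumed
        keep_days = int(os.environ.get("BACKUP_KEEP_DAYS", "30"))
        ts = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
        key = f"daily/{db}_{ts}.sql.gz"

        # Dump the database and gzip the output in memory.
        dump = subprocess.run(
            ["pg_dump", "-h", "timescaledb", "-U", "postgres", db],  # host/user assumed
            check=True, capture_output=True,
        )
        body = gzip.compress(dump.stdout)

        s3 = boto3.client(
            "s3",
            endpoint_url=os.environ["RUSTFS_ENDPOINT"],
            aws_access_key_id=os.environ["RUSTFS_ACCESS_KEY"],
            aws_secret_access_key=os.environ["RUSTFS_SECRET_KEY"],
        )
        bucket = os.environ["RUSTFS_BUCKET"]
        s3.put_object(Bucket=bucket, Key=key, Body=body)

        # Prune objects older than BACKUP_KEEP_DAYS.
        cutoff = datetime.now(timezone.utc) - timedelta(days=keep_days)
        pages = s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix="daily/")
        for page in pages:
            for obj in page.get("Contents", []):
                if obj["LastModified"] < cutoff:
                    s3.delete_object(Bucket=bucket, Key=obj["Key"])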
kianiadee added 1 commit 2026-04-21 13:01:47 +00:00
docs: CLAUDE.md audit — add backup sidecar, missing files, update open items
778686e7ce
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
kianiadee added 1 commit 2026-04-22 15:21:32 +00:00
fix: [FIX-M18] pull driverName/vehicleNumber/sim from detail endpoint
417627675e
jimi.user.device.list returns null for vehicleName, vehicleNumber,
driverName, driverPhone, and sim even after those fields are set via
jimi.open.device.update — the values only surface through
jimi.track.device.detail. sync_devices() now reads these fields from the
detail record (dtl) first, falling back to the list record (d), which
unblocks backfill of the 144 CSV-driven updates pushed on 2026-04-22.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
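
Roughly, the dtl-first fallback could look like the sketch below; the field names come from the commit message, the helper name is illustrative and not necessarily what sync_devices() calls internally:

    # Illustrative only -- how the detail-record merge inside sync_devices() might look.
    DETAIL_FIELDS = ("vehicleName", "vehicleNumber", "driverName", "driverPhone", "sim")

    def merge_device(d: dict, dtl: dict) -> dict:
        """Prefer jimi.track.device.detail (dtl) over jimi.user.device.list (d),
        since the list endpoint returns null for these fields."""
        merged = dict(d)
        for field in DETAIL_FIELDS:
            value = dtl.get(field)
            if value not in (None, ""):
                merged[field] = value
        return merged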
kianiadee added 1 commit 2026-04-24 07:43:17 +00:00
feat: [FIX-M19] multi-account ingest across fireside sub-accounts
fa110f4313
Fleet lives across three Tracksolid sub-accounts:
  fireside         —  63 devices
  Fireside@HQ      —  52 devices
  Fireside_MSA     —  41 devices

Previously sync_devices / poll_live_positions / poll_parking only
queried a single TARGET_ACCOUNT, so ~64% of the fleet was invisible to
the pipeline.

Changes:
  - ts_shared_rev.py: new TARGETS list (env TRACKSOLID_TARGETS,
    comma-separated; falls back to the single TARGET_ACCOUNT).
  - ts_shared_rev.py: new get_active_imeis_by_target() helper that
    groups active IMEIs by their stored account so parking calls can
    pass the right account param per batch.
  - ingest_movement_rev.py: sync_devices and poll_live_positions loop
    over every target and dedupe by IMEI before upserting. poll_parking
    loops over imeis_by_target so each batch carries the matching
    account.
  - CLAUDE.md: FIX-M19 entry.

Requires new env var TRACKSOLID_TARGETS="fireside,Fireside@HQ,Fireside_MSA"
on the ingest services in Coolify.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
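
A sketch of the multi-target handling described above (helper and callable names are illustrative; the real implementations live in ts_shared_rev.py / ingest_movement_rev.py and may differ):

    # Illustrative sketch, not the actual module code.
    import os

    def get_targets() -> list[str]:
        raw = os.environ.get("TRACKSOLID_TARGETS", "")
        if raw.strip():
            return [t.strip() for t in raw.split(",") if t.strip()]
        return [os.environ["TARGET_ACCOUNT"]]    # legacy single-account fallback

    def dedupe_across_targets(fetch_devices) -> list[dict]:
        """fetch_devices(target) -> list of device dicts for one sub-account."""
        seen: dict[str, dict] = {}
        for target in get_targets():
            for dev in fetch_devices(target):
                seen.setdefault(dev["imei"], dev)   # first occurrence wins
        return list(seen.values())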
kianiadee added 2 commits 2026-04-24 08:00:34 +00:00
The timescale/timescaledb-ha image uses /home/postgres/pgdata/data as
PGDATA, not /var/lib/postgresql/data. The previous mount pointed at an
empty directory that postgres never wrote to, so every Coolify redeploy
destroyed all data along with the container's overlay filesystem.

Pin PGDATA explicitly and move the named timescale-data volume to
/home/postgres/pgdata so the real data dir is persisted.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
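
For reference, the corrected mount would look roughly like this in compose terms (service name and image tag are assumptions; only the PGDATA value and the volume target come from the commit):

    services:
      timescaledb:
        image: timescale/timescaledb-ha:pg16            # tag assumed
        environment:
          PGDATA: /home/postgres/pgdata/data
        volumes:
          - timescale-data:/home/postgres/pgdata        # was /var/lib/postgresql/data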
feat(backup): run pg_dump multiple times per day via BACKUP_TIMES_UTC
c585e67482
Replace the single BACKUP_HOUR/BACKUP_MINUTE slot with a comma-separated
list of UTC times. Scheduler walks all slots and sleeps until the soonest
future one, so four daily backups become a one-line env change:

    BACKUP_TIMES_UTC=02:30,08:30,14:30,20:30  (default)

The legacy BACKUP_HOUR/BACKUP_MINUTE pair is still honored as a single
slot, for backwards compatibility with existing .env files.

Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
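
A minimal sketch of the slot-walking logic under those assumptions (function names are illustrative; run_backup_once refers to the hypothetical per-run step sketched earlier in this PR):

    # Illustrative scheduler loop for BACKUP_TIMES_UTC -- not the actual sidecar code.
    import os
    from datetime import datetime, timedelta, timezone

    def parse_slots() -> list[tuple[int, int]]:
        raw = os.environ.get("BACKUP_TIMES_UTC", "02:30,08:30,14:30,20:30")
        return [(int(h), int(m))
                for h, m in (s.strip().split(":") for s in raw.split(","))]

    def seconds_until_next_slot(now: datetime) -> float:
        candidates = []
        for hour, minute in parse_slots():
            slot = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
            if slot <= now:
                slot += timedelta(days=1)   # today's slot already passed, use tomorrow's
            candidates.append(slot)
        return (min(candidates) - now).total_seconds()

    # main loop:
    #   while True:
    #       time.sleep(seconds_until_next_slot(datetime.now(timezone.utc)))
    #       run_backup_once()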
kianiadee merged commit f0de4057b3 into main 2026-04-24 08:12:09 +00:00