tracksolid_timescale_grafan.../backup/backup_db.sh
David Kiania 108c1be057
feat: nightly pg_dump sidecar uploads to rustfs fleet-db bucket
Adds a `db_backup` sidecar that dumps tracksolid_db every night at
02:30 UTC (configurable via BACKUP_HOUR/BACKUP_MINUTE), gzips the
output, and uploads to s3://fleet-db/daily/<dbname>_<ts>.sql.gz on
the rustfs S3-compatible instance (s3.rahamafresh.com). Prunes
objects older than BACKUP_KEEP_DAYS (default 30).
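
The scheduling loop itself lives in the sidecar's entrypoint, not in this
script. A minimal sketch of what that loop might look like, assuming GNU
date (as in the Debian-based postgres images) and a /backup/backup_db.sh
mount path — both assumptions, not part of this commit:

  # Hypothetical entrypoint loop; sleeps until the next HH:MM UTC, then dumps.
  BACKUP_HOUR="${BACKUP_HOUR:-2}"
  BACKUP_MINUTE="${BACKUP_MINUTE:-30}"
  while :; do
    NOW=$(date -u +%s)
    NEXT=$(date -u -d "today ${BACKUP_HOUR}:${BACKUP_MINUTE}" +%s)
    [ "$NEXT" -le "$NOW" ] && NEXT=$(date -u -d "tomorrow ${BACKUP_HOUR}:${BACKUP_MINUTE}" +%s)
    sleep $(( NEXT - NOW ))
    /backup/backup_db.sh || echo "[entrypoint] backup failed" >&2
  done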

Required .env additions (Coolify UI):
  RUSTFS_ENDPOINT=https://s3.rahamafresh.com
  RUSTFS_ACCESS_KEY=...
  RUSTFS_SECRET_KEY=...
  RUSTFS_BUCKET=fleet-db

Mitigates data loss when Coolify service recreation wipes the
service-ID-scoped timescale-data volume.
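
For reference, a restore after such a wipe is roughly the following
(object key and host are illustrative; TimescaleDB's docs additionally
recommend wrapping a full restore in timescaledb_pre_restore() /
timescaledb_post_restore()):

  # Stream a chosen dump straight into psql; <dbname>_<ts> as in the key scheme above.
  aws --endpoint-url "$RUSTFS_ENDPOINT" s3 cp "s3://fleet-db/daily/<dbname>_<ts>.sql.gz" - \
    | gunzip \
    | psql -h timescale_db -U "$POSTGRES_USER" -d "$POSTGRES_DB"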

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-21 12:53:23 +03:00

#!/bin/sh
# Nightly pg_dump → rustfs (S3-compatible).
# Required env: POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB,
# RUSTFS_ENDPOINT, RUSTFS_ACCESS_KEY, RUSTFS_SECRET_KEY, RUSTFS_BUCKET.
# Optional: BACKUP_KEEP_DAYS (default 30), PGHOST (default timescale_db).
set -eu
# POSIX sh lacks pipefail; enable it where the shell supports it (bash,
# busybox ash) so a failed pg_dump cannot be masked by a successful gzip.
(set -o pipefail) 2>/dev/null && set -o pipefail || true
# Abort with a clear error if any required variable is unset or empty.
: "${POSTGRES_USER:?}"
: "${POSTGRES_PASSWORD:?}"
: "${POSTGRES_DB:?}"
: "${RUSTFS_ENDPOINT:?}"
: "${RUSTFS_ACCESS_KEY:?}"
: "${RUSTFS_SECRET_KEY:?}"
: "${RUSTFS_BUCKET:?}"
PGHOST="${PGHOST:-timescale_db}"
PGPORT="${PGPORT:-5432}"
KEEP_DAYS="${BACKUP_KEEP_DAYS:-30}"
TS="$(date -u +%Y%m%d_%H%M%SZ)"
FILE="${POSTGRES_DB}_${TS}.sql.gz"
TMP="/tmp/${FILE}"
export AWS_ACCESS_KEY_ID="$RUSTFS_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="$RUSTFS_SECRET_KEY"
export AWS_DEFAULT_REGION="${RUSTFS_REGION:-us-east-1}"
export PGPASSWORD="$POSTGRES_PASSWORD"
echo "[$(date -u +%FT%TZ)] pg_dump ${POSTGRES_DB}@${PGHOST} -> ${FILE}"
pg_dump -h "$PGHOST" -p "$PGPORT" -U "$POSTGRES_USER" -d "$POSTGRES_DB" \
  --no-owner --no-privileges --format=plain \
  | gzip -9 > "$TMP"
SIZE=$(wc -c < "$TMP")
echo "[$(date -u +%FT%TZ)] dump size: ${SIZE} bytes"
KEY="daily/${FILE}"
echo "[$(date -u +%FT%TZ)] uploading s3://${RUSTFS_BUCKET}/${KEY}"
aws --endpoint-url "$RUSTFS_ENDPOINT" s3 cp "$TMP" "s3://${RUSTFS_BUCKET}/${KEY}"
rm -f "$TMP"
# Prune anything older than KEEP_DAYS.
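# GNU date takes -d "-N days"; BSD date needs -v -Nd, hence the fallback.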
CUTOFF="$(date -u -d "-${KEEP_DAYS} days" +%Y%m%d 2>/dev/null || date -u -v -"${KEEP_DAYS}"d +%Y%m%d)"
aws --endpoint-url "$RUSTFS_ENDPOINT" s3 ls "s3://${RUSTFS_BUCKET}/daily/" \
  | awk '{print $4}' \
  | while read -r OBJ; do
      [ -z "$OBJ" ] && continue
      # Pull the YYYYMMDD stamp out of <dbname>_<YYYYMMDD>_<HHMMSS>Z.sql.gz.
      OBJ_DATE=$(echo "$OBJ" | sed -n 's/.*_\([0-9]\{8\}\)_.*/\1/p')
      [ -z "$OBJ_DATE" ] && continue
      if [ "$OBJ_DATE" -lt "$CUTOFF" ]; then
        echo "[$(date -u +%FT%TZ)] prune s3://${RUSTFS_BUCKET}/daily/${OBJ}"
        aws --endpoint-url "$RUSTFS_ENDPOINT" s3 rm "s3://${RUSTFS_BUCKET}/daily/${OBJ}"
      fi
    done
echo "[$(date -u +%FT%TZ)] backup complete"