A better approach to backing up the database #1721

Closed
opened 2025-10-09 17:27:11 +03:00 by OVERLORD · 2 comments
Owner

Originally created by @gchamon on GitHub.

The backup section only covers cron-based backups. The Docker solutions that attach to the running container are neat and simplify the process quite a bit, but they are also cron based.

Many of us run bitwarden_rs servers on Raspberry Pis, and these machines can fail quite easily (common SD cards are particularly unreliable). Power or internet outages can render the unit unreachable, during which time the only way to access the database would be to spin up another instance from a backup.

Therefore I think another approach to backing up the database is necessary, one that shortens the delay between saving a password and having it backed up to more reliable storage.

The solution I managed to cook up uses inotify-tools to watch for changes to either the db.sqlite3-wal or config.json files. However, these changes also occur on login and plain database access, not only when saving data, which causes unnecessary backups to be executed.

My solution in more detail:

  • A script called watch-for-changes.sh is run by a systemd service. It monitors the data folder for changes. If changes happen to the bitwarden files, it writes "1" to a file called bitwarden-folder-updated.

```service
[Unit]
Description=bitwarden-watch-for-changes

[Service]
Type=simple
ExecStart=/home/pi/bitwardenrs/watch-for-changes.sh
Restart=always
WorkingDirectory=/home/pi

[Install]
WantedBy=multi-user.target
```

```sh
#!/usr/bin/env bash
path_to_watch=/home/pi/bitwardenrs/bw-data

# Watch the data folder and raise the "updated" flag whenever the WAL
# or config.json changes.
inotifywait -m "$path_to_watch" -e create -e moved_to -e modify |
  while read -r path action file; do
    if [[ $file =~ (wal|config.json) ]]; then
      echo '1' > /home/pi/bitwardenrs/bitwarden-folder-updated
    fi
  done
```
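Since the complaint above is that logins and plain database access also trigger the watcher, a variant worth testing on your setup is to watch only close_write events, so the flag is raised only when a watched file has been written to and closed, not on every modification event (this is an assumption about your access pattern, not something I have measured; paths mirror the script above):

```shell
#!/usr/bin/env bash
# Variant sketch: react only to close_write (file written and closed)
# and moved_to events, which may cut spurious triggers compared to
# watching every modify event.
path_to_watch=/home/pi/bitwardenrs/bw-data

inotifywait -m "$path_to_watch" -e close_write -e moved_to |
  while read -r dir action file; do
    # Same filter as the original script: only the WAL or config.json
    # should raise the backup flag.
    if [[ $file =~ (wal|config.json) ]]; then
      echo '1' > /home/pi/bitwardenrs/bitwarden-folder-updated
    fi
  done
```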
  • Another script called backup.sh is run by a cron job every minute and reads bitwarden-folder-updated. If its contents are '1', the backup runs. This filters high-frequency updates to the folder and, in the worst case, limits backups to one per minute.
```sh
#!/usr/bin/env bash

set -e

bitwarden_folder_updated=/home/pi/bitwardenrs/bitwarden-folder-updated
touch "$bitwarden_folder_updated"

if [[ "$(cat "$bitwarden_folder_updated")" == "1" ]]; then
  rm -f /home/pi/bw-bk.tar.gz

  # Use sqlite3's online backup so the copy is consistent even while
  # the server is running.
  docker exec bitwardenrs_bitwarden_1 bash -c 'mkdir -p /data/db-backup && sqlite3 /data/db.sqlite3 ".backup /data/db-backup/backup.sqlite3"'

  cd /home/pi/bitwardenrs/bw-data
  tar -czvf /home/pi/bw-bk.tar.gz \
    config.json \
    icon_cache \
    attachments \
    db-backup/backup.sqlite3

  cd /home/pi/bitwardenrs/
  tar -czvf /home/pi/bw-scripts.tar.gz \
    backup.sh \
    bitwarden-watch-for-changes.service \
    docker-compose.yml \
    watch-for-changes.sh

  aws s3 cp /home/pi/bw-bk.tar.gz s3://bitwarden-backup-bucket
  aws s3 cp /home/pi/bw-scripts.tar.gz s3://bitwarden-backup-bucket

  echo "0" > "$bitwarden_folder_updated"
else
  echo 'nothing to backup'
fi
```
```cron
*   *   *   *   *   ~/bitwardenrs/backup.sh
```

In this case, I chose AWS S3 as the reliable storage for the backups. Also, bitwarden-backup-bucket is not my actual backup bucket.

What I would like to know is whether there is a better way to watch for database updates without actually having to query the database. And if not, would it be possible to request a feature where the server makes it easier to tell whether the database has changed? This would save a little money, both in backup-version storage and in PUT requests to the remote service.

Thanks for this neat project!


EDIT: running docker exec from crontab with the -it options breaks the script, since cron does not run the command in a TTY. The command must be run without -it.

OVERLORD added the better for forum label 2025-10-09 17:27:11 +03:00
Author
Owner

@gchamon commented on GitHub:

@jjlin that really simplifies things!

```sh
bitwarden_last_backup_file=/home/pi/bitwardenrs/bitwarden-last-backup
touch "$bitwarden_last_backup_file"
bitwarden_last_backup=$(cat "$bitwarden_last_backup_file")
bitwarden_last_update=$(stat -c %Y bw-data/db.sqlite3* bw-data/config.json | sort -nr | head -n 1)

if [[ "$bitwarden_last_backup" != "$bitwarden_last_update" ]]; then
  # ...
  echo "$bitwarden_last_update" > "$bitwarden_last_backup_file"
fi
```

thanks

I decided to save the timestamp rather than check whether the update happened within the last minute because of potential execution delay: if the database were updated immediately after a cron run, the next run could see the write as "too old" and skip backing it up altogether.

Author
Owner

@jjlin commented on GitHub:

It would be simpler to just check the modification times of the SQLite files when your backup script runs; if any have been updated within the last minute, then presumably there has been a database write. Something like `stat -c %Y db.sqlite3* | sort -nr | head -n 1` should work.
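This suggestion can be sketched as a self-contained check, assuming GNU stat, a 60-second cron interval, and the bw-data layout from the scripts above (the backup commands themselves are elided):

```shell
#!/usr/bin/env bash
# Sketch of the mtime-window check: back up only if any watched file
# was modified within the last cron interval.
backup_window=60  # seconds; must match the cron schedule
now=$(date +%s)
last_update=$(stat -c %Y bw-data/db.sqlite3* bw-data/config.json | sort -nr | head -n 1)

if (( now - last_update < backup_window )); then
  echo "recent write detected; running backup"
  # ... backup commands go here ...
fi
```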


Reference: starred/vaultwarden#1721