Mirror of https://github.com/dani-garcia/vaultwarden.git (synced 2025-12-11 09:13:02 +03:00)
A better approach to backing up the database #1721
Originally created by @gchamon on GitHub.
The backup section only approaches backup with cron jobs. The docker solutions that attach to the running container are neat and simplify the process quite a bit, but they are also cron based.
Many of us run bitwarden_rs servers on Raspberry Pis, and these machines can fail quite easily (common SD cards are particularly unreliable); power or internet outages could render the unit unreachable, during which time the only way to access the database would be to spin up another instance from a backup.
Therefore I think another approach to backing up the database is necessary, one that shortens the delay between saving a password and having it backed up to a more reliable storage solution.
The solution I managed to cook up was to use `inotify-tools` to watch for changes to either the `db.sqlite3-wal` or `config.json` files. However, these changes also occur on login and database access, not only when saving data, which causes unnecessary backups to be executed.

My solution in more detail:

- `watch-for-changes.sh` is run by a systemd service. It monitors the folder for changes; if changes happen to bitwarden files, it writes "1" to a file called `bitwarden-folder-updated`.
- `backup.sh` is run by a cron job every minute and reads `bitwarden-folder-updated`. If its contents are "1", the backup is run. This filters high-frequency updates to the folder and, in the worst case, limits backups to one every minute.

In this case, I chose AWS S3 as my reliable storage for storing updates. Also, `bitwarden-backup-bucket` is not my actual backup bucket.

What I would like to know is whether there is a better way to watch for database updates without actually having to query the database. And if not, would it be possible to request a feature in which the server makes it easier to know whether there were changes to the database? This would save a little bit of money, both in backup version storage size and in PUT requests to the remote service.
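The flag-file scheme described above could be sketched roughly as follows. This is only an illustration: the paths, the flag location, the watched file names, and the upload command are all assumptions, since the actual scripts were not posted.

```shell
#!/bin/sh
# Sketch of the two-script flag-file scheme (paths and names are assumptions).
DATA_DIR="${DATA_DIR:-/data}"
FLAG="${FLAG:-/tmp/bitwarden-folder-updated}"

# watch-for-changes.sh side: run by a systemd service; requires inotify-tools.
watch_loop() {
    inotifywait -m -e close_write,moved_to "$DATA_DIR" |
    while read -r _dir _events file; do
        case "$file" in
            db.sqlite3*|config.json) echo 1 > "$FLAG" ;;
        esac
    done
}

# backup.sh side: run by cron every minute.
run_backup() {
    if [ "$(cat "$FLAG" 2>/dev/null)" = "1" ]; then
        echo 0 > "$FLAG"   # reset first, so writes during the upload re-arm it
        # Replace with the real upload, e.g.:
        # aws s3 sync "$DATA_DIR" s3://bitwarden-backup-bucket/
        echo "backup triggered"
    else
        echo "no changes"
    fi
}
```

Resetting the flag before uploading (rather than after) means a write that lands while the upload is running still triggers a backup on the next cron pass.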
Thanks for this neat project!
EDIT: running `docker exec` from crontab with the `-it` options breaks the script, as crontab doesn't run the command in a TTY. The command must be run without `-it`.

@gchamon commented on GitHub:
@jjlin that really simplifies things!
thanks
I decided to save the timestamp instead of checking whether the files were modified in the last minute because of potential execution delay: if a database update were triggered immediately after a cron execution, the script could potentially skip backing it up altogether.
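That timestamp-saving variant might look something like this (a sketch, not the actual script: the data directory and state-file path are assumptions, and `stat -c %Y` is the GNU form):

```shell
#!/bin/sh
# Sketch: back up when the newest SQLite mtime is newer than the mtime
# recorded at the last backup. Paths are assumptions; requires GNU stat.
DATA_DIR="${DATA_DIR:-/data}"
STATE="${STATE:-/tmp/last-backup-mtime}"

newest_mtime() {
    stat -c %Y "$DATA_DIR"/db.sqlite3* 2>/dev/null | sort -nr | head -n 1
}

needs_backup() {
    last=$(cat "$STATE" 2>/dev/null || echo 0)
    now=$(newest_mtime)
    [ -n "$now" ] && [ "$now" -gt "$last" ]
}

record_backup() {
    # Record the mtime we saw, not the current time, so a write that lands
    # while the backup is running is still picked up on the next pass.
    newest_mtime > "$STATE"
}
```

Comparing against the recorded mtime (rather than "within the last minute") avoids the skipped-backup window when cron runs late.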
@jjlin commented on GitHub:
It would be simpler to just check the modification times of the SQLite files when your backup script runs; if any have been updated within the last minute, then presumably there has been a database write. Something like `stat -c %Y db.sqlite3* | sort -nr | head -n 1` should work.
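Wired into a backup script, that check might look like the following sketch (the data directory is an assumption, and `stat -c %Y` is the GNU form):

```shell
#!/bin/sh
# Sketch: skip the backup unless some SQLite file was modified within the
# given window. The data directory is an assumption; requires GNU stat.
DATA_DIR="${DATA_DIR:-/data}"

changed_within() {  # $1 = window in seconds
    newest=$(stat -c %Y "$DATA_DIR"/db.sqlite3* 2>/dev/null | sort -nr | head -n 1)
    [ -n "$newest" ] && [ $(( $(date +%s) - newest )) -lt "$1" ]
}

if changed_within 60; then
    echo "backup needed"
else
    echo "nothing to do"
fi
```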