Impossible to clear Event_logs on busy server #427

Closed
opened 2025-10-09 16:29:47 +03:00 by OVERLORD · 1 comment

Originally created by @ySp-chld on GitHub.

Using Vaultwarden 1.31.0

I have event logs enabled with a 90-day retention. With the popularity of the password manager, our instance now has a constant flow of people using the API and the service in the org.
This constant flow has grown the DB to 15 GB, and basically every attempt to reduce the size of the event table has failed.
The backup of our poor SQLite database is also failing, because the activity is apparently non-stop.

I've tried reducing the number of retention days, but it fails; I've also tried disabling the event logs, but that doesn't clear the table.

I've tried reducing the number of days gradually, but it seems that sometimes the job just does nothing.
I can see the entry:

```
Rocket has launched from http://0.0.0.0:80
Start events cleanup job
```
But sometimes it feels like it is doing something, because I can see a .wal file next to the SQLite database; however, the activity is so intense that it seemingly cannot manage to clear anything and just stalls.
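
For reference, a minimal read-only check of whether the cleanup is actually deleting rows might look like the sketch below. The database path and the `event` / `event_date` names are assumptions; verify them against the real schema before relying on this.

```python
import sqlite3

# Read-only connection, so this adds no write contention on the live database.
# NOTE: the path, table name ("event") and column name ("event_date") are
# assumptions -- check your actual schema first, e.g. with:
#   SELECT sql FROM sqlite_master WHERE name = 'event';
conn = sqlite3.connect("file:/data/db.sqlite3?mode=ro", uri=True)
total = conn.execute("SELECT COUNT(*) FROM event").fetchone()[0]
old = conn.execute(
    "SELECT COUNT(*) FROM event WHERE event_date < datetime('now', '-30 days')"
).fetchone()[0]
print(f"events total: {total}, older than 30 days: {old}")
conn.close()
```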

I'm very sorry if this is not the proper place to report this, but I think there are several bugs in the org event feature:

  • to my knowledge, there is no way to force a cleanup job run from the API, UI, or CLI.
  • after looking at the code, there is no log message indicating the end of the cleanup job.

I know we should move our setup away from SQLite at this scale; that is in the pipeline. In the meantime, as a manual cleanup, can I safely disable event logs, clear the table manually from the CLI, and then re-enable them with a smaller retention?
Or is there another option I don't know about?
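
As a rough illustration of that kind of manual cleanup (a sketch only, not an official procedure): stop Vaultwarden, back up the file, delete the old rows, then vacuum. The path and the `event` / `event_date` names are assumptions to verify against the real schema.

```python
import sqlite3

# Offline cleanup sketch. Assumptions: Vaultwarden is stopped, you have a copy
# of the database file, and the events table is called "event" with an
# "event_date" column -- verify both before running anything like this.
DB_PATH = "/data/db.sqlite3"  # assumed path; adjust to your setup

conn = sqlite3.connect(DB_PATH)
with conn:  # commits the DELETE (or rolls back on error)
    deleted = conn.execute(
        "DELETE FROM event WHERE event_date < datetime('now', '-30 days')"
    ).rowcount
print(f"deleted {deleted} rows")

# VACUUM rewrites the whole file to reclaim the freed pages. It needs roughly
# as much free disk space as the live data and must run outside a transaction.
conn.execute("VACUUM")
conn.close()
```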

PS: Sorry if I'm reporting this in the wrong place; I feel like it belongs here, but feel free to close it if I should use the forum instead.
Also, thank you very much to everyone who works on this amazing project. ❤️

PS2: This is very much a SQLite issue; I needed to run VACUUM. The cleanup actually worked but never showed, because the DB was very fragmented.


@BlackDex commented on GitHub:

I think you need to read a bit more about how SQLite works.
Those records are deleted, but the space is kept, so you will not see the file size shrink.
SQLite will reuse those record locations to store new data.
This is just how SQLite works.

We are not going to build in an automatic vacuum option, since that might cause issues if someone wants to perform recovery actions on the database.
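
For context, the amount of kept-but-reusable space described above can be inspected with SQLite's freelist pragmas; a minimal sketch, assuming the usual `/data/db.sqlite3` location:

```python
import sqlite3

# Compare the total file size with the free (reusable) pages described above.
# The database path is an assumption; adjust to your setup.
conn = sqlite3.connect("file:/data/db.sqlite3?mode=ro", uri=True)
page_size = conn.execute("PRAGMA page_size").fetchone()[0]
page_count = conn.execute("PRAGMA page_count").fetchone()[0]
freelist = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(f"file: {page_count * page_size / 1e9:.2f} GB, "
      f"free/reusable: {freelist * page_size / 1e9:.2f} GB")
conn.close()
```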
