Mirror of https://github.com/dani-garcia/vaultwarden.git, synced 2025-12-09 09:13:02 +03:00
Vaultwarden starts timing out after some time without a restart #1306
Originally created by @trwnh on GitHub.
Subject of the issue
I installed vaultwarden and vaultwarden-web via the Arch Linux community repos, and everything seems to work fine after setup and configuration. However, several hours after leaving Vaultwarden unattended, coming back and trying to do something (e.g. a sync operation) produces an error, and the web vault no longer loads: the URL returns a 502 or 504 (bad gateway or gateway timeout), and loading the IP directly times out.
Deployment environment
community repo

Steps to reproduce
/etc/vaultwarden.env: (Everything else is commented out as default)
nginx config:
nginx site.conf:
Expected behaviour
Vaultwarden continues to run indefinitely and listen for requests
Actual behaviour
Vaultwarden begins timing out and requires a restart
Troubleshooting data
Nothing seems to come up in the logs. I've enabled LOG_LEVEL=debug and am still not seeing anything out of the ordinary. It just seems to stop processing requests after some amount of time, despite the service still running. As a last resort I have enabled LOG_LEVEL=trace and am waiting for it to start timing out again.

systemctl status shows this:
fwiw override.conf is just this:
Literally the only suspicious thing I can see is that when Vaultwarden starts to time out, its memory usage drops to about 7-8M instead of the 16-17M it usually hovers around while working. I doubt it's being killed by the system, because I have a fair amount of free RAM:
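For reference, the logging settings mentioned above live in /etc/vaultwarden.env. A minimal sketch using Vaultwarden's documented variable names (the LOG_FILE path here is an assumption, adjust to taste):

```
# /etc/vaultwarden.env — logging-related settings (sketch)
LOG_LEVEL=trace                      # error|warn|info|debug|trace
LOG_FILE=/var/log/vaultwarden.log    # assumed path; omit to log to stdout/journal
EXTENDED_LOGGING=true
```

With LOG_FILE unset, output goes to the journal, so journalctl -u vaultwarden is another place to look while waiting for the next hang.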
@BlackDex commented on GitHub:
Well, if it is something in Vaultwarden, then it has to be triggered by something. The only thing I can think of off the top of my head is the scheduled jobs that run from time to time. But that should at least show something in the log when they are started, I think. Either that, or it broke somehow before that without any logging.
Have you also tried to login to http://localhost:8089 or http://127.0.0.1:8089 when this happens?
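One way to act on this suggestion is to probe the backend from the host itself, bypassing nginx entirely. A sketch, assuming port 8089 from this thread and Vaultwarden's /alive health endpoint; if this answers while the proxied URL times out, the problem is between nginx and the backend rather than in Vaultwarden:

```shell
#!/bin/sh
# Probe the Rocket backend directly, skipping the nginx proxy.
# Port 8089 is taken from this thread; adjust to your ROCKET_PORT.
check_backend() {
  url="$1"
  # --max-time caps the whole request so a hung backend fails fast;
  # -w prints the HTTP status code ("000" means no response at all).
  code=$(curl -s -o /dev/null --max-time 5 -w '%{http_code}' "$url") || true
  if [ "$code" = "000" ] || [ -z "$code" ]; then
    echo "no response from $url (backend hung or down?)"
  else
    echo "HTTP $code from $url (backend is answering)"
  fi
}

check_backend http://127.0.0.1:8089/alive
```

Running it both while the service is healthy and once it starts timing out should show whether Rocket itself has stopped answering.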
@BlackDex commented on GitHub:
I suggest checking what @jjlin mentioned.

So items like ss -t to see which connections are still open. And increase the number of Vaultwarden workers.
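Concretely, that diagnosis could look like the following. The ss invocation is standard iproute2; the ROCKET_WORKERS value shown is just an example, not a recommendation from this thread:

```shell
# Show established TCP connections; long-lived connections held open by
# foreign peers can tie up Rocket's worker pool so no workers remain
# free to serve real requests.
ss -t state established

# If many stale connections pile up, raise the worker count in
# /etc/vaultwarden.env (ROCKET_WORKERS is Vaultwarden's documented
# setting; 20 here is an arbitrary example value):
#   ROCKET_WORKERS=20
```

After changing the env file, the service needs a restart for the new worker count to take effect.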
@trwnh commented on GitHub:
Trying to load by direct IP does not work, it just tries to load endlessly. Trying to load from the hostname leads to an nginx gateway timeout.
@trwnh commented on GitHub:
Last 1000 logged trace lines: https://gist.github.com/trwnh/fd8e9e9563c7599526fac1260e7d7d30
@jjlin commented on GitHub:
This sounds similar to #950 and #1515. See those threads for background.
@trwnh commented on GitHub:
Looking at nginx error-log I see basically these 2 lines or variants thereof:
nginx access-log looks normal, just shows me attempting to connect.
ss -t returns the following:

@jjlin commented on GitHub:
185.56.80.46 is presumably someone probing your server and keeping connections open for some reason. This is exactly what's discussed in #950 and #1515.

nginx annoyingly doesn't have strict SNI support built in, but you can simulate it. See, e.g., https://security.stackexchange.com/a/107918.
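The usual way to simulate strict SNI is a catch-all default server that swallows any request arriving by bare IP or an unknown hostname. A sketch (the certificate paths are placeholders; nginx still needs some certificate to complete the TLS handshake, so a throwaway self-signed one is typical):

```
# Catch-all vhost: requests whose SNI/Host match no real server block
# land here and are dropped. 444 is nginx's special "close the
# connection without responding" code.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/nginx/snakeoil.crt;   # placeholder self-signed cert
    ssl_certificate_key /etc/nginx/snakeoil.key;   # placeholder key
    return 444;
}
```

On nginx 1.19.4 or newer, ssl_reject_handshake on; in the default server achieves the same effect more cleanly, without needing a dummy certificate at all.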
@BlackDex commented on GitHub:
I have a default (aka "_") host on my nginx as well, which indeed responds to plain IP traffic.