Uptime flaky (503s) since version 1.9.1? #2001
Originally created by @itsthejb on GitHub.
Hi there,
First of all, thanks for the excellent work on this project!
As for my issue: I've been noticing quite flaky uptime on my instance since about late August, which seems to coincide with release 1.9.1. Here are my logs from uptimerobot.com:
docker-compose.yml
Note that I've stubbed out the default health check here, since it appeared to be causing serious problems with accessing the backend. Possibly a similar issue to https://github.com/dani-garcia/bitwarden_rs/issues/499?
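A minimal sketch of that kind of stub, assuming the bitwardenrs/server image of that era and a compose v3 file (the service name, image tag, and exact test command are assumptions):

```yaml
# Sketch only: override the image's built-in health check with a no-op
# so Docker always reports the container healthy. Names/tags are assumptions.
version: "3.4"
services:
  bitwarden:
    image: bitwardenrs/server:1.9.1
    healthcheck:
      test: ["CMD-SHELL", "exit 0"]  # always succeeds; effectively a stub
```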
Apache reverse proxy entry
curling the front end
All of my other services are pretty stable using the same Docker setup and the same Apache reverse proxy. So I'm wondering: was something introduced in 1.9.1 that could be making the front end unstable?
Hope you can help!
Thanks
@itsthejb commented on GitHub:
Hi @mprasil. Thanks for the tip about disabling the health check properly; I'll try that. Stubbing it out was just the first thing I tried in order to disable the default one, which certainly seemed to be making the web interface inaccessible for me...
Edit: the container continues running. It's just that the web UI will intermittently fail to respond, and this causes the 503s.
@davidglezz commented on GitHub:
The same happens to me.
I was installing it for the first time and thought I had a bad configuration, but 1.9 looks much more stable.
1.10 is offline some of the time and then fine again...
The logs only show GET /alive requests, but the service is not accessible from the browser.
@mprasil commented on GitHub:
Is the container running all the time or does it also restart?
Note that you can disable health check in docker-compose file:
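For reference, a minimal sketch of that override (the service name and image are assumptions; `disable: true` is equivalent to setting `test: ["NONE"]`):

```yaml
# Sketch: turn the health check off entirely instead of stubbing it.
# Service name and image tag are assumptions.
version: "3.4"
services:
  bitwarden:
    image: bitwardenrs/server:alpine
    healthcheck:
      disable: true  # Docker runs no health check for this container
```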
There's no point running `exit 0` periodically.
@dani-garcia commented on GitHub:
Hmm well my first guess would have been the healthcheck too, but you already covered that, so not sure what could be happening.
Do you get any strange things in the bitwarden_rs logs during the times the service goes down? Or when it goes back up again?
@itsthejb commented on GitHub:
Hi,
No; like David, my log is clean. It sounds like I'm experiencing the same issue he is. Glad it's not only me!
@targodan commented on GitHub:
I have the same problem on Debian 9.11 with Docker 19.03.2 in swarm mode. It keeps killing my container. The workaround of disabling health checks seems to work, though.
@mprasil commented on GitHub:
I wonder if we should bump the timeout value up a bit. 3s should be enough, but maybe it sometimes takes longer for whatever reason and the container then gets marked as unhealthy...
Can someone who fixed the problem by disabling the health check try setting the timeout instead?
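A sketch of what that might look like in a compose file; the values and service name are illustrative assumptions, not project defaults. Omitting `test` keeps the image's own check command while overriding its timing:

```yaml
# Sketch: keep the image's built-in check, but give it more time.
# All values below are illustrative assumptions.
version: "3.4"
services:
  bitwarden:
    image: bitwardenrs/server:alpine
    healthcheck:
      interval: 30s  # how often the check runs
      timeout: 10s   # up from the 3s discussed above
```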
@mprasil commented on GitHub:
@targodan, what image version are you using? There have been some issues with the health check in the past.
@davidglezz commented on GitHub:
I have solved it with:
Thanks, @mprasil.
@targodan commented on GitHub:
I use the official `bitwardenrs/server:alpine` image.
@dani-garcia commented on GitHub:
Closed due to inactivity; these values have been bumped a couple of times already, so hopefully it won't be a problem anymore.
@dani-garcia commented on GitHub:
Also, I'm wondering if Docker has a way to require multiple consecutive bad checks before declaring the container unhealthy; that might smooth over any flakiness.
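Docker does have this: the health check `retries` setting (default 3) only marks a container unhealthy after that many consecutive failures. A hedged sketch in compose terms, with assumed names and values:

```yaml
# Sketch: require several consecutive failures before "unhealthy".
# Service name and values are assumptions.
version: "3.4"
services:
  bitwarden:
    image: bitwardenrs/server:alpine
    healthcheck:
      interval: 30s
      timeout: 10s
      retries: 5  # five consecutive failures before the container is unhealthy
```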