Uptime flaky (503s) since version 1.9.1? #2001

Closed
opened 2025-10-09 17:41:01 +03:00 by OVERLORD · 12 comments

Originally created by @itsthejb on GitHub.

Hi there,

First of all, thanks for the excellent work on this project!

As for my issue: I've been noticing quite flaky uptime performance on my instance since about late August. This seems like it might coincide with release 1.9.1. Here are my logs from uptimerobot.com:

Status | Date-Time | Reason | Duration | Duration (min)
-- | -- | -- | -- | --
Down | 2019-09-06 22:00:42 | Service Unavailable | 0 hrs, 4 mins | 4
Up | 2019-09-06 21:50:20 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 21:49:21 | Service Unavailable | 0 hrs, 0 mins | 1
Up | 2019-09-06 21:43:59 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-06 21:35:00 | Service Unavailable | 0 hrs, 8 mins | 9
Up | 2019-09-06 21:24:39 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 21:23:39 | Service Unavailable | 0 hrs, 1 mins | 1
Up | 2019-09-06 21:18:19 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-06 21:17:19 | Service Unavailable | 0 hrs, 1 mins | 1
Up | 2019-09-06 21:01:58 | OK | 0 hrs, 15 mins | 15
Down | 2019-09-06 20:53:00 | Service Unavailable | 0 hrs, 8 mins | 9
Up | 2019-09-06 20:42:39 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 20:41:39 | Service Unavailable | 0 hrs, 1 mins | 1
Up | 2019-09-06 20:25:53 | OK | 0 hrs, 15 mins | 16
Down | 2019-09-06 20:24:54 | Service Unavailable | 0 hrs, 0 mins | 1
Up | 2019-09-06 20:09:13 | OK | 0 hrs, 15 mins | 16
Down | 2019-09-06 20:05:14 | Service Unavailable | 0 hrs, 3 mins | 4
Up | 2019-09-06 19:39:32 | OK | 0 hrs, 25 mins | 26
Down | 2019-09-06 19:38:32 | Service Unavailable | 0 hrs, 1 mins | 1
Up | 2019-09-06 19:33:07 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-06 19:32:07 | Service Unavailable | 0 hrs, 1 mins | 1
Up | 2019-09-06 19:16:46 | OK | 0 hrs, 15 mins | 15
Down | 2019-09-06 19:12:48 | Service Unavailable | 0 hrs, 3 mins | 4
Up | 2019-09-06 19:02:26 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 19:01:27 | Service Unavailable | 0 hrs, 0 mins | 1
Up | 2019-09-06 18:41:10 | OK | 0 hrs, 20 mins | 20
Down | 2019-09-06 18:40:11 | Service Unavailable | 0 hrs, 0 mins | 1
Up | 2019-09-06 17:54:25 | OK | 0 hrs, 45 mins | 46
Down | 2019-09-06 17:50:27 | Service Unavailable | 0 hrs, 3 mins | 4
Up | 2019-09-06 17:33:46 | OK | 0 hrs, 16 mins | 17
Down | 2019-09-06 17:30:51 | Service Unavailable | 0 hrs, 2 mins | 3
Up | 2019-09-06 17:05:26 | OK | 0 hrs, 25 mins | 25
Down | 2019-09-06 17:04:27 | Service Unavailable | 0 hrs, 0 mins | 1
Up | 2019-09-06 16:54:07 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 16:53:06 | Connection Timeout | 0 hrs, 1 mins | 1
Up | 2019-09-06 16:42:31 | OK | 0 hrs, 10 mins | 11
Down | 2019-09-06 15:50:31 | Service Unavailable | 0 hrs, 52 mins | 52
Up | 2019-09-06 15:40:10 | OK | 0 hrs, 10 mins | 10
Down | 2019-09-06 14:09:12 | Service Unavailable | 1 hrs, 30 mins | 91
Up | 2019-09-06 14:03:51 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-06 13:41:53 | Bad Gateway | 0 hrs, 21 mins | 22
Up | 2019-09-06 13:36:32 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-06 13:32:33 | Bad Gateway | 0 hrs, 3 mins | 4
Up | 2019-09-06 13:27:12 | OK | 0 hrs, 5 mins | 5
Down | 2019-09-05 09:38:33 | Service Unavailable | 27 hrs, 48 mins | 1669
Up | 2019-09-05 09:27:20 | OK | 0 hrs, 11 mins | 11
Down | 2019-09-04 11:55:22 | Service Unavailable | 21 hrs, 31 mins | 1292
Up | 2019-09-04 11:13:15 | OK | 0 hrs, 42 mins | 42
Down | 2019-09-04 07:53:28 | Service Unavailable | 3 hrs, 19 mins | 200
Up | 2019-09-04 07:12:33 | OK | 0 hrs, 40 mins | 41
Down | 2019-09-04 00:42:33 | Service Unavailable | 6 hrs, 30 mins | 390
Up | 2019-09-04 00:17:12 | OK | 0 hrs, 25 mins | 25
Down | 2019-09-03 21:48:44 | Connection Timeout | 2 hrs, 28 mins | 148
Up | 2019-08-23 22:02:51 | OK | 263 hrs, 45 mins | 15826
  • The last period of sustained uptime (263 hrs) started on Aug 23rd
  • Since then, the service has been going up and down every half hour or so

docker-compose.yml

  bitwarden:
    container_name: bitwarden
    image: bitwardenrs/server-mysql:latest
    restart: always
    networks:
      - shared
    environment:
      DOMAIN: <redacted>
      DATABASE_URL: "<Redacted>"
      WEBSOCKET_ENABLED: "true"
      SIGNUPS_ALLOWED: "false"
      SMTP_HOST: "<Redacted>"
      SMTP_FROM: "<Redacted>"
      SMTP_PORT: "587"
      SMTP_SSL: "true"
      SMTP_USERNAME: "<Redacted>"
      SMTP_PASSWORD: "<Redacted>"
      LOG_FILE: "/data/bitwarden.log"
    volumes:
      - /docker/appdata/bitwarden:/data
      - /etc/localtime:/etc/localtime:ro
    healthcheck:
      test: 'exit 0'
      interval: 1m
      timeout: 3s
      retries: 3

Note that I've stubbed out the default health check here, since it appeared to be causing serious problems with accessing the backend; possibly a similar issue to https://github.com/dani-garcia/bitwarden_rs/issues/499?
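
(A side note for anyone debugging this: Docker records the outcome of each health probe, so you can check whether the health check itself is what's flapping. A minimal sketch, assuming the container is named bitwarden as in the compose file above and that a health check is actually configured:)

# Show the recorded health status and the last few probe results
docker inspect --format '{{json .State.Health}}' bitwarden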

Apache reverse proxy entry

<VirtualHost *:443>
  ServerName bitwarden.jcrooke.net

  ErrorLog ${APACHE_LOG_DIR}/bitwarden-error.log
  CustomLog ${APACHE_LOG_DIR}/bitwarden-access.log combined

  RewriteEngine On
  RewriteCond %{HTTP:Upgrade} =websocket [NC]
  RewriteRule /notifications/hub(.*) ws://bitwarden:3012/$1 [P,L]
  ProxyPass / http://bitwarden:80/

  ProxyPreserveHost On
  ProxyRequests Off
  RequestHeader set X-Real-IP %{REMOTE_ADDR}s
</VirtualHost>
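
(One Apache-side detail that may be relevant, offered as an assumption rather than a confirmed cause: when mod_proxy fails to reach a backend, it puts that worker into an error state and keeps answering 503 for the duration of the retry window, 60 seconds by default, so even a brief hiccup in the container can look like a minute-long outage. A sketch of disabling that back-off on the existing ProxyPass line:)

# Retry the backend immediately instead of holding the worker
# in error state for the default 60 seconds after a failure
ProxyPass / http://bitwarden:80/ retry=0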

Curling the front end

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>503 Service Unavailable</title>
</head><body>
<h1>Service Unavailable</h1>
<p>The server is temporarily unable to service your
request due to maintenance downtime or capacity
problems. Please try again later.</p>
<hr>
<address>Apache/2.4.25 (Debian) Server at bitwarden.jcrooke.net Port 80</address>
</body></html>
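
(Since this 503 page is generated by Apache, it can help to query the container directly on the Docker network and see whether bitwarden_rs itself responds. A rough sketch, assuming the compose network is reachable under the name shared; depending on the compose project it may carry a prefix:)

# Bypass Apache and hit the backend's /alive endpoint directly
docker run --rm --network shared curlimages/curl \
  -s -o /dev/null -w '%{http_code}\n' http://bitwarden:80/alive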

All of my other services are pretty stable, using the same Docker setup and the same Apache reverse proxy. So I'm wondering whether something introduced in 1.9.1 could be making the front end unstable?

Hope you can help!

Thanks

@itsthejb commented on GitHub:

Hi @mprasil. Thanks for the tip about disabling the health check properly. I'll try that. Stubbing it out was just the first thing I tried to disable the default one, which certainly seemed to be making the web interface inaccessible for me...

Edit: the container keeps running. It's just that the web UI intermittently fails to respond, which causes the 503s.

@davidglezz commented on GitHub:

The same happens to me.

I was installing it for the first time and thought I had a bad configuration, but 1.9 looks much more stable; 1.10 is offline for a while and then OK again for a while...

The logs show only GET /alive requests (yet the instance is not accessible from the browser).

  • Not using mysql images
  • Same in alpine
  • Traefik as reverse proxy
bitwarden_1  | /--------------------------------------------------------------------\
bitwarden_1  | |                       Starting Bitwarden_RS                        |
bitwarden_1  | |                      Version 1.10.0-ae8bf954                       |
bitwarden_1  | |--------------------------------------------------------------------|
bitwarden_1  | | This is an *unofficial* Bitwarden implementation, DO NOT use the   |
bitwarden_1  | | official channels to report bugs/features, regardless of client.   |
bitwarden_1  | | Report URL: https://github.com/dani-garcia/bitwarden_rs/issues/new |
bitwarden_1  | \--------------------------------------------------------------------/
bitwarden_1  |
bitwarden_1  | [2019-09-11 21:47:49][launch][INFO] Configured for staging.
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] address: 0.0.0.0
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] port: 80
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] log: normal
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] workers: 10
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] secret key: private-cookies disabled
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] limits: forms = 32KiB, json* = 10MiB
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] keep-alive: 5s
bitwarden_1  | [2019-09-11 21:47:49][launch_][INFO] tls: disabled
bitwarden_1  | [2019-09-11 21:47:49][rocket::fairing::fairings][INFO] Fairings:
bitwarden_1  | [2019-09-11 21:47:49][_][INFO] 2 response: Application Headers, Add CORS headers to requests
bitwarden_1  | [2019-09-11 21:47:49][launch][INFO] Rocket has launched from http://0.0.0.0:80
bitwarden_1  | [2019-09-11 21:47:49][ws][INFO] Listening for new connections on 0.0.0.0:3012.
bitwarden_1  | [2019-09-11 21:48:00][rocket::rocket][INFO] GET /alive:
bitwarden_1  | [2019-09-11 21:48:00][_][INFO] Matched: GET /alive (alive)
bitwarden_1  | [2019-09-11 21:48:00][_][INFO] Outcome: Success
bitwarden_1  | [2019-09-11 21:48:00][_][INFO] Response succeeded.
bitwarden_1  | [2019-09-11 21:48:11][rocket::rocket][INFO] GET /alive:
bitwarden_1  | [2019-09-11 21:48:11][_][INFO] Matched: GET /alive (alive)

@mprasil commented on GitHub:

Is the container running all the time or does it also restart?

Note that you can disable the health check in the docker-compose file:

healthcheck:
  disable: true

There's no point running exit 0 periodically.

@dani-garcia commented on GitHub:

Hmm, well, my first guess would have been the health check too, but you've already covered that, so I'm not sure what could be happening.

Do you see anything strange in the bitwarden_rs logs around the times the service goes down, or when it comes back up again?

@itsthejb commented on GitHub:

Hi,

No, same as David: the log is clean. It sounds like I'm experiencing the same issue as him. Glad it's not only me!

@targodan commented on GitHub:

I have the same problem on Debian 9.11 with Docker 19.03.2 in swarm mode. It keeps killing my container. The workaround of disabling health checks seems to work, though.

@mprasil commented on GitHub:

I wonder if we should bump the timeout value up a bit. 3s should be enough, but maybe it sometimes takes longer for whatever reason and the container then gets marked as unhealthy.

Can someone who fixed the problem by disabling the health check try setting the timeout instead?
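
(For anyone trying that, a sketch of what the compose override might look like; the numbers are examples, and if test is omitted the image's own check command should still be inherited:)

healthcheck:
  interval: 1m
  timeout: 10s   # more headroom than the 3s mentioned above
  retries: 5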

@mprasil commented on GitHub:

@targodan, what image version are you using? There have been some issues with the health check in the past.

@davidglezz commented on GitHub:

I solved it with:

healthcheck:
  disable: true

thanks @mprasil

@targodan commented on GitHub:

I use the official bitwardenrs/server:alpine image.

@dani-garcia commented on GitHub:

Closed due to inactivity; these values have been bumped a couple of times already, so hopefully it won't be a problem anymore.

@dani-garcia commented on GitHub:

Also, I'm wondering if Docker has a way to require multiple consecutive failed checks before declaring the container unhealthy; that might smooth over any flakiness.
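
(Docker's healthcheck retries option works this way already: the container is only flagged unhealthy after that many consecutive failed checks, so raising it is one way to absorb the occasional slow response. A sketch in Dockerfile form; the probe command is an illustration, not necessarily the image's actual check:)

# Mark the container unhealthy only after 5 consecutive failed probes
HEALTHCHECK --interval=60s --timeout=10s --retries=5 \
  CMD curl --fail http://localhost:80/alive || exit 1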
