Continuous performance issues #2369

Closed
opened 2025-10-09 18:01:50 +03:00 by OVERLORD · 18 comments
Owner

Originally created by @pwlgrzs on GitHub.

Hello, following @mprasil's advice on Reddit I wanted to report performance issues.

I'm running my Docker instance on a VPS with the following specs: 1-core Intel Xeon E5, 4 GB RAM, 20 GB SSD.

The WebUI takes over 15 seconds to load the logon screen (I can provide a HAR file to developers on demand; I don't want to disclose the exact domain and port here).
After I log in, the UI is often unresponsive - the vault loads by default, but it does not react when I try to go to settings, for example. If I refresh the page (which took another 30 seconds as I write this) it may start reacting to my actions, but not always.

The adblocker I'm using is disabled for the domain I'm running Bitwarden on, and I'm not facing any other network/website issues but this one.

Please let me know if you need anything specific from me; I will be happy to provide any logs, etc.

OVERLORD added the troubleshooting label 2025-10-09 18:01:51 +03:00

@mprasil commented on GitHub:

Thanks for reporting that. It does seem extremely slow.

Can you share your setup details? I understand you're running the service in a Docker container. Do you have any proxy in front of bitwarden? Any unusual settings?


@mprasil commented on GitHub:

Can you share the logs from when you start bitwarden? You should see something like:

Configured for staging.
    => address: 0.0.0.0
    => port: 80
    => log: normal
    => workers: 8
    => secret key: generated
    => limits: forms = 32KiB, json* = 10MiB
    => tls: disabled

@pwlgrzs commented on GitHub:

@mprasil no, it's on an external server. Sending you the HAR now.


@pwlgrzs commented on GitHub:

@mprasil It's Docker, no proxy, just a UFW-controlled firewall, but disabling it has no effect. I'm using an LE-issued certificate.
This is my Docker startup script:

docker stop bitwarden
docker rm bitwarden
docker pull mprasil/bitwarden

DOCKER_CONFIGS="$(pwd)"

docker run -d --name bitwarden \
  -e ROCKET_TLS='{certs="/ssl/certs.pem",key="/ssl/key.pem"}' \
  -e ROCKET_PORT='8000' \
  -e SIGNUPS_ALLOWED=false \
  -v ${DOCKER_CONFIGS}/d-ssl/:/ssl/ \
  -v ${DOCKER_CONFIGS}/d-bitwarden/bw-data/:/data/ \
  -v ${DOCKER_CONFIGS}/d-bitwarden/icon_cache/ \
  -p VPSIP:EXTERNALPORT:8000 \
  mprasil/bitwarden:latest

VPSIP and EXTERNALPORT have actual values in my script.
I'm also running Pi-hole in Docker and there are no issues with it.
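One thing worth noting about the script above: the icon-cache `-v` option names only a single path, so Docker creates an anonymous volume rather than a bind mount to the host directory. A sketch of the bind-mount form, assuming the image keeps its cache under `/data/icon_cache` (an assumption about the image layout, not confirmed in this thread):

```shell
# Bind-mount the host icon-cache directory into the container.
# The /data/icon_cache target path is an assumption about the image layout.
DOCKER_CONFIGS="$(pwd)"
mkdir -p "${DOCKER_CONFIGS}/d-bitwarden/icon_cache"

docker run -d --name bitwarden \
  -v "${DOCKER_CONFIGS}/d-bitwarden/icon_cache/:/data/icon_cache/" \
  mprasil/bitwarden:latest
# (other -e/-v/-p options as in the startup script above)
```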


@mprasil commented on GitHub:

So the service is actually on your internal network? Can you email me the har file at miroslav@prasil.info? I'd be curious to see how it behaves.


@pwlgrzs commented on GitHub:

Done:

Configured for staging.
    => address: 0.0.0.0
    => port: 8000
    => log: normal
    => workers: 20
    => secret key: generated
    => limits: forms = 32KiB, json* = 10MiB
    => tls: enabled

Sent you 2 extra HAR files. The smaller one comes from loading the logon screen - much faster indeed. However, look at the end of the second one: apart from the crazy amount of GETs for icon.png files, the interesting part is the response time for the settings.html file (this is the unresponsiveness of the UI I mentioned).


@pwlgrzs commented on GitHub:

Configured for staging.
    => address: 0.0.0.0
    => port: 8000
    => log: normal
    => workers: 2
    => secret key: generated
    => limits: forms = 32KiB, json* = 10MiB
    => tls: enabled
Mounting '/':
    => GET /
    => GET /app-id.json
    => GET /<p..>
    => GET /attachments/<uuid>/<file..>
    => GET /alive
Mounting '/api':
    => POST /api/accounts/register
    => GET /api/accounts/profile
    => POST /api/accounts/profile
    => GET /api/users/<uuid>/public-key
    => POST /api/accounts/keys
    => POST /api/accounts/password
    => POST /api/accounts/security-stamp
    => POST /api/accounts/email-token
    => POST /api/accounts/email
    => POST /api/accounts/delete
    => GET /api/accounts/revision-date
    => GET /api/sync
    => GET /api/ciphers
    => GET /api/ciphers/<uuid>
    => GET /api/ciphers/<uuid>/admin
    => GET /api/ciphers/<uuid>/details
    => POST /api/ciphers
    => POST /api/ciphers/admin
    => POST /api/ciphers/import
    => POST /api/ciphers/<uuid>/attachment multipart/form-data
    => POST /api/ciphers/<uuid>/attachment/<attachment_id>/delete
    => DELETE /api/ciphers/<uuid>/attachment/<attachment_id>
    => POST /api/ciphers/<uuid>/admin
    => POST /api/ciphers/<uuid>/share
    => POST /api/ciphers/<uuid>
    => PUT /api/ciphers/<uuid>
    => POST /api/ciphers/<uuid>/delete
    => POST /api/ciphers/<uuid>/delete-admin
    => DELETE /api/ciphers/<uuid>
    => POST /api/ciphers/delete
    => POST /api/ciphers/purge
    => POST /api/ciphers/move
    => GET /api/folders
    => GET /api/folders/<uuid>
    => POST /api/folders
    => POST /api/folders/<uuid>
    => PUT /api/folders/<uuid>
    => POST /api/folders/<uuid>/delete
    => DELETE /api/folders/<uuid>
    => GET /api/two-factor
    => POST /api/two-factor/get-recover
    => POST /api/two-factor/recover
    => POST /api/two-factor/disable
    => POST /api/two-factor/get-authenticator
    => POST /api/two-factor/authenticator
    => POST /api/two-factor/get-u2f
    => POST /api/two-factor/u2f
    => GET /api/organizations/<org_id>
    => POST /api/organizations
    => POST /api/organizations/<org_id>/delete
    => POST /api/organizations/<org_id>/leave
    => GET /api/collections
    => GET /api/organizations/<org_id>/collections
    => GET /api/organizations/<org_id>/collections/<coll_id>/details
    => GET /api/organizations/<org_id>/collections/<coll_id>/users
    => POST /api/organizations/<org_id>
    => POST /api/organizations/<org_id>/collections
    => POST /api/organizations/<org_id>/collections/<col_id>/delete-user/<org_user_id>
    => POST /api/organizations/<org_id>/collections/<col_id>
    => POST /api/organizations/<org_id>/collections/<col_id>/delete
    => POST /api/ciphers/<uuid>/collections
    => POST /api/ciphers/<uuid>/collections-admin
    => GET /api/ciphers/organization-details?<data>
    => GET /api/organizations/<org_id>/users
    => POST /api/organizations/<org_id>/users/invite
    => POST /api/organizations/<org_id>/users/<user_id>/confirm
    => GET /api/organizations/<org_id>/users/<user_id>
    => POST /api/organizations/<org_id>/users/<user_id>
    => POST /api/organizations/<org_id>/users/<user_id>/delete
    => PUT /api/devices/identifier/<uuid>/clear-token
    => PUT /api/devices/identifier/<uuid>/token
    => GET /api/settings/domains
    => POST /api/settings/domains
Mounting '/identity':
    => POST /identity/connect/token
Mounting '/icons':
    => GET /icons/<domain>/icon.png
Rocket has launched from https://0.0.0.0:8000

As mentioned in the email, disabling TLS brought the logon screen load time down to 15 seconds.


@mprasil commented on GitHub:

I wonder if the number of workers isn't an issue here for some reason. It defaults to 2*<number of cores>, which ends up being quite low on your system.

Can you start the container with:

-e ROCKET_WORKERS=20

Just to test if that makes any difference. Please verify in the logs that the number of workers is 20 after you set it.
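The suggestion above can be sketched as follows - the `ROCKET_WORKERS` variable is confirmed in this thread, while the `grep` over the container logs is just one convenient way to verify it took effect:

```shell
# Restart the container with an explicit worker count
# (other -e/-v/-p options as in the startup script earlier in this issue).
docker run -d --name bitwarden \
  -e ROCKET_WORKERS=20 \
  mprasil/bitwarden:latest

# Confirm Rocket picked it up - the startup log should show "=> workers: 20".
docker logs bitwarden 2>&1 | grep -i workers
```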


@mprasil commented on GitHub:

Great to see the number of workers helped. The icon requests are just loading the icons for your stored sites. There's a ton of them because you have a ton of sites. They should get cached (bitwarden has to fetch them in order to serve them to you), so it should be better next time.

The settings request spent most of its time in the blocked state - that means there were other ongoing requests (most likely the icons) and the browser was waiting for those to finish before starting this one (browsers limit the number of concurrent requests per domain). It only took about 22 ms to be served once the browser actually made the request.

I'd say after the icons are cached, the long loading times in the UI should resolve completely. Can you see the icons appearing in the icon_cache folder?
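One quick way to answer that question from the host - a sketch assuming the cache lives under the mounted `/data` directory (the exact `/data/icon_cache` path is an assumption, not confirmed in this thread):

```shell
# List any cached icons inside the running container and count them.
# The /data/icon_cache path is an assumption about the image's data layout.
docker exec bitwarden sh -c 'ls /data/icon_cache 2>/dev/null | head'
docker exec bitwarden sh -c 'ls /data/icon_cache 2>/dev/null | wc -l'
```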


@pwlgrzs commented on GitHub:

No, there are no icons appearing. Actually, they never appeared since I started testing this project: while the "attachments" folder is being created, "icon_cache" is not.

I have removed the "-v ${DOCKER_CONFIGS}/d-bitwarden/icon_cache/" part from my container startup script, but there's no change.


@mannp commented on GitHub:

I updated my workers to 10 yesterday and it made a notable difference to my setup, thanks for highlighting it.


@pwlgrzs commented on GitHub:

Problem found: some time ago I disabled iptables mode in Docker and added a separate rule to allow containers to talk to the outside world, but I apparently didn't make that rule persistent. Adding it again made the icon cache appear immediately, and the whole service is running as expected.

That being said, would it be a good idea to add the workers info to the readme so others can adjust it if they're seeing low performance?

Many thanks for looking into this; otherwise I'd probably have been stuck like this for some time.
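For anyone else who has disabled Docker's iptables integration, the egress rule described above is typically a NAT masquerade for the bridge subnet - a sketch, assuming the default `docker0` bridge on `172.17.0.0/16` (the subnet, interface name, and persistence mechanism are assumptions for a stock install, not details confirmed in this thread):

```shell
# Allow containers on the default bridge to reach the outside world
# when Docker's own iptables management is disabled.
# Subnet and interface name are assumptions for a default install.
iptables -t nat -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
iptables -A FORWARD -i docker0 ! -o docker0 -j ACCEPT
iptables -A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT

# Make the rules persistent, e.g. on Debian/Ubuntu:
# apt install iptables-persistent && netfilter-persistent save
```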


@mprasil commented on GitHub:

Thank you for reporting this, it was very helpful! I've created PR #91 to set the default number of workers to 10 and to document the setting.

@mprasil commented on GitHub: Thank you for reporting this, it was very helpful! I've created PR #91 to set the default number of workers to 10 and to document the setting.
Author
Owner

@pwlgrzs commented on GitHub:

Restarting the container with the --net=host parameter made the icons start appearing. It was a blind shot, but apparently my icon problem is related to Docker network/routing issues on my host? I will be checking this later today on a different server I have access to.


@mprasil commented on GitHub:

Yeah, that sounds like some network problem. (Are you using the latest image?)

Check the DNS settings in the container - you mentioned running Pi-hole; maybe the container defaults to using that, but it doesn't work as expected between containers, or something like that.
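The check suggested above can be done with standard Docker tooling - a sketch, assuming the image ships a busybox-style `nslookup` (and using `github.com` purely as an example lookup target):

```shell
# Show the resolver config Docker injected into the container...
docker exec bitwarden cat /etc/resolv.conf

# ...and test that name resolution actually works from inside it.
# Assumes the image includes a busybox-style nslookup.
docker exec bitwarden nslookup github.com
```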


@pwlgrzs commented on GitHub:

Thank you!


@mprasil commented on GitHub:

This was just merged; the image build is queued and will be available in about 2h. (There's still a previous build running - unfortunately it takes about 1h per image on the Docker infrastructure.)

Thanks again for reporting this.


@pwlgrzs commented on GitHub:

For completeness: the newest image starts with 10 workers, as defined.

Reference: starred/vaultwarden#2369