Vault data disappears when upgrading 1.23.0 -> 1.25.0 #947

Closed
opened 2025-10-09 16:57:56 +03:00 by OVERLORD · 21 comments

Originally created by @vitorbaptista on GitHub.

Subject of the issue

I have a Vaultwarden deployment using docker-compose, currently on 1.23.0. I tried upgrading to 1.27.0, but then my vault was empty (I was able to log in, though). I tried all versions in between, and the only one that worked was 1.24.0.

Deployment environment

  • vaultwarden version: 1.23.0
  • Install method: Docker-compose

  • Clients used: Web Vault

  • Other relevant details:

Steps to reproduce

I'm not sure if there's something specific to my installation, but I guess you could reproduce it by:

  1. Install vaultwarden:1.23.0 (Docker container), set up with SQLite
  2. Create an account and add passwords to your vault
  3. Upgrade the container to 1.25.0

Expected behaviour

All passwords in the vault would still be there.

Actual behaviour

The vault is empty

Troubleshooting data

These are the logs on Vaultwarden 1.27.0, but the error is the same in 1.25.0 and 1.26.0.

/--------------------------------------------------------------------\
|                        Starting Vaultwarden                        |
|                           Version 1.27.0                           |
|--------------------------------------------------------------------|
| This is an *unofficial* Bitwarden implementation, DO NOT use the   |
| official channels to report bugs/features, regardless of client.   |
| Send usage/configuration questions or feature requests to:         |
|   https://vaultwarden.discourse.group/                             |
| Report suspected bugs/issues in the software itself at:            |
|   https://github.com/dani-garcia/vaultwarden/issues/new            |
\--------------------------------------------------------------------/
[INFO] No .env file found.
[DEPRECATED]: `SMTP_SSL` or `SMTP_EXPLICIT_TLS` is set. Please use `SMTP_SECURITY` instead.
[2023-01-05 19:25:16.020][vaultwarden::api::notifications][INFO] Starting WebSockets server on 0.0.0.0:3012
[2023-01-05 19:25:16.024][start][INFO] Rocket has launched from http://0.0.0.0:80
[2023-01-05 19:25:27.090][request][INFO] POST /identity/connect/token
[2023-01-05 19:25:27.098][response][INFO] (login) POST /identity/connect/token => 200 OK
[2023-01-05 19:25:27.240][request][INFO] GET /api/sync?excludeDomains=true
[2023-01-05 19:25:27.653][panic][ERROR] thread 'rocket-worker-thread' panicked at 'Error loading attachments: DatabaseError(Unknown, "too many SQL variables")': src/db/models/attachment.rs:196
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: tokio::runtime::context::exit_runtime
   8: tokio::runtime::scheduler::multi_thread::worker::block_in_place
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  10: vaultwarden::api::core::ciphers::sync::into_info::monomorphized_function::{{closure}}
  11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  14: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  15: tokio::runtime::task::core::Core<T,S>::poll
  16: tokio::runtime::task::harness::Harness<T,S>::poll
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  18: tokio::runtime::scheduler::multi_thread::worker::Context::run
  19: tokio::macros::scoped_tls::ScopedKey<T>::set
  20: tokio::runtime::scheduler::multi_thread::worker::run
  21: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  22: tokio::runtime::task::core::Core<T,S>::poll
  23: tokio::runtime::task::harness::Harness<T,S>::poll
  24: tokio::runtime::blocking::pool::Inner::run
  25: std::sys_common::backtrace::__rust_begin_short_backtrace
  26: core::ops::function::FnOnce::call_once{{vtable.shim}}
  27: std::sys::unix::thread::Thread::new::thread_start
  28: start_thread
  29: clone
[2023-01-05 19:25:27.661][_][ERROR] Handler sync panicked.
[2023-01-05 19:25:27.661][_][WARN] A panic is treated as an internal server error.
[2023-01-05 19:25:27.661][_][WARN] No 500 catcher registered. Using Rocket default.
[2023-01-05 19:25:27.669][response][INFO] (sync) GET /api/sync?<data..> => 500 Internal Server Error
[2023-01-05 19:25:28.233][vaultwarden::api::notifications][INFO] Accepting WS connection from 172.22.0.3:38398
OVERLORD added the troubleshooting, bug, enhancement labels 2025-10-09 16:57:56 +03:00

@BlackDex commented on GitHub:

You can use the admin interface to see the number of items.
I would really like to know that number so that I can try to replicate this.


@BlackDex commented on GitHub:

That is a lot. Probably not all of those are your ciphers, I think.
The admin interface is vw.domain.tld/admin.


@BlackDex commented on GitHub:

May I ask how many vault items you have? (You can see this in the admin environment; please provide both your personal items and the items of all orgs you are a member of.)
It looks like you have so many items that the query gets overloaded.

Also, could you try the Alpine-based images to see if those work?


@vitorbaptista commented on GitHub:

@BlackDex I couldn't get to the admin interface, but I queried the DB directly:

sqlite> SELECT COUNT(*) FROM ciphers;
43931

Much more than I expected. Does that work for you? Or is there another query I can run on the DB that would help?


@vitorbaptista commented on GitHub:

@BlackDex thanks for the quick reply

I couldn't find the number of vault items, but it's in the thousands (I'd guess between 2,000 and 5,000). Interestingly, I tried logging in with a different user that had far fewer vault items and it worked fine.

I tried the Alpine image 1.27.0-alpine and I see the same error in the logs, but it does show the vault names. Maybe the Alpine version supports more vault items?

[2023-01-06 13:08:29.859][panic][ERROR] thread 'rocket-worker-thread' panicked at 'Error loading attachments: DatabaseError(Unknown, "too many SQL variables")': src/db/models/attachment.rs:196
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: tokio::runtime::context::exit_runtime
   8: tokio::runtime::scheduler::multi_thread::worker::block_in_place
   9: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  10: vaultwarden::api::core::ciphers::sync::into_info::monomorphized_function::{{closure}}
  11: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  12: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  13: <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll
  14: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  15: tokio::runtime::task::core::Core<T,S>::poll
  16: tokio::runtime::task::harness::Harness<T,S>::poll
  17: tokio::runtime::scheduler::multi_thread::worker::Context::run_task
  18: tokio::runtime::scheduler::multi_thread::worker::Context::run
  19: tokio::macros::scoped_tls::ScopedKey<T>::set
  20: tokio::runtime::scheduler::multi_thread::worker::run
  21: tokio::loom::std::unsafe_cell::UnsafeCell<T>::with_mut
  22: tokio::runtime::task::core::Core<T,S>::poll
  23: tokio::runtime::task::harness::Harness<T,S>::poll
  24: tokio::runtime::blocking::pool::Inner::run
  25: std::sys_common::backtrace::__rust_begin_short_backtrace
  26: core::ops::function::FnOnce::call_once{{vtable.shim}}
  27: std::sys::unix::thread::Thread::new::thread_start
[2023-01-06 13:08:29.867][_][ERROR] Handler sync panicked.
[2023-01-06 13:08:29.867][_][WARN] A panic is treated as an internal server error.
[2023-01-06 13:08:29.867][_][WARN] No 500 catcher registered. Using Rocket default.

@BlackDex commented on GitHub:

@stefan0xC if I'm correct, off the top of my head, that optimization I built came after a newer version of the SQLite library. Since we always use a vendored/built-in version it shouldn't be an issue, unless someone removes that vendored option of course.

While it's nice and good for most, it breaks for at least one. And while I do think that's a lot of ciphers, I still think I should take a look at it.


@BlackDex commented on GitHub:

Well, it looks like the maximum number of elements SQLite supports by default is 32766. That is less than the number of ciphers you reported.

I'll have to look into this and see if we can solve this in a decent way without slowing everything down again.

I also wonder how a key rotation will perform, because that will probably take a long time, and will also cause a lot of queries.

I also wonder if this breaks on MySQL or PostgreSQL.


@BlackDex commented on GitHub:

That is an issue, because that query is used in the newer versions to speed up the sync process.
I might need to revisit that approach, or limit the amount and merge in-code.

The speed-up was about 3x quicker syncs than before, if not more.
But if that causes very, very large vaults to not be able to sync at all, that is an issue of course.

I haven't had time yet to reproduce and look at this.
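
A minimal sketch of the "limit the amount and merge in-code" idea mentioned above, assuming a hypothetical load_attachments_for helper standing in for the real Diesel query; this is only an illustration of the batching approach, not the actual Vaultwarden code:

```rust
// Hypothetical sketch: split a large UUID list into batches that stay under
// SQLite's bound-parameter limit, run one query per batch, and merge the
// results in code. `Attachment` and `load_attachments_for` are stand-ins.
#[derive(Debug)]
struct Attachment {
    cipher_uuid: String,
}

// Default bound-parameter limit since SQLite 3.32.0 (it was 999 before that).
const SQLITE_MAX_VARIABLE_NUMBER: usize = 32_766;

// Stand-in for a query like `... WHERE cipher_uuid IN (?, ?, ...)`,
// which binds one variable per UUID in `batch`.
fn load_attachments_for(batch: &[String]) -> Result<Vec<Attachment>, String> {
    Ok(batch
        .iter()
        .map(|uuid| Attachment { cipher_uuid: uuid.clone() })
        .collect())
}

// Run one query per batch so no single statement exceeds the limit.
fn load_all_attachments(cipher_uuids: &[String]) -> Result<Vec<Attachment>, String> {
    let mut all = Vec::with_capacity(cipher_uuids.len());
    for batch in cipher_uuids.chunks(SQLITE_MAX_VARIABLE_NUMBER) {
        all.extend(load_attachments_for(batch)?);
    }
    Ok(all)
}

fn main() {
    // Roughly the cipher count reported in this issue: fits in two batches.
    let ids: Vec<String> = (0..43_931).map(|i| format!("uuid-{i}")).collect();
    let attachments = load_all_attachments(&ids).expect("query failed");
    println!("loaded {} attachments", attachments.len());
}
```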


@stefan0xC commented on GitHub:

Well, it looks like the maximum number of elements SQLite supports by default is 32766. That is less than the number of ciphers you reported.

That's the limit on the number of parameters (binding ? or :var in a query). That shouldn't be an issue, right?

If I understood the issue correctly, this line creates an IN statement that is incompatible with SQLite if the number of ciphers gets too large:
https://github.com/dani-garcia/vaultwarden/blob/367e1ce289cea6a3251b7350a6707c700bd8a544/src/db/models/attachment.rs#L193

According to https://www.sqlite.org/limits.html#max_variable_number the maximum was only 999 until 3.32.0, so in theory some users (who don't use the Docker image and build the binary against an older SQLite) could be affected even sooner (I think 999 would still be a large number of ciphers, but it should be a bit easier to reach than 32766 for most users).
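
To make those numbers concrete, here is a small standalone illustration (not Vaultwarden code) of the query shape described above, with one bound variable per cipher UUID, checked against SQLite's default limits of 999 (before 3.32.0) and 32766; the table and column names are only assumed for the example:

```rust
fn main() {
    // Cipher count reported earlier in this issue.
    let cipher_count: usize = 43_931;

    // Shape of the generated statement: one `?` placeholder per cipher UUID,
    // roughly what an `IN (...)` clause built from a UUID list produces.
    let placeholders = vec!["?"; cipher_count].join(", ");
    let sql = format!("SELECT * FROM attachments WHERE cipher_uuid IN ({placeholders})");
    println!("generated statement length: {} bytes", sql.len());

    // Compare the variable count against SQLite's default bound-variable limits.
    for limit in [999usize, 32_766] {
        let verdict = if cipher_count > limit { "too many SQL variables" } else { "ok" };
        println!("{cipher_count} variables vs limit {limit}: {verdict}");
    }
}
```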


@sorcix commented on GitHub:

Well, it looks like the maximum number of elements SQLite supports by default is 32766. That is less than the number of ciphers you reported.

That's the limit on the number of parameters (binding ? or :var in a query). That shouldn't be an issue, right?


@BlackDex commented on GitHub:

I think I have found a good solution, which may also be nicer.
I may do some more changes to see if we can improve the performance.

Also, @stefan0xC, it is actually very easy to lower the limit:

SQLITE_MAX_VARIABLE_NUMBER=999 cargo build --features sqlite

@stefan0xC commented on GitHub:

Since we always use a vendored/built-in version it shouldn't be an issue, unless someone removes that vendored option of course.

Ah, okay. I just assumed the SQLite version in use would depend on the build platform, while I could have looked in the Cargo.toml file instead. 🤦

(I was initially also wondering if it might be worth exploring that option to reproduce the issue more easily, but I think it does not matter, as even a limit of 999 is so large that reaching it would have to be automated either way...)

While it's nice and good for most, it breaks for at least one. And while I do think that's a lot of ciphers, I still think I should take a look at it.

Ah, yeah, I was just wondering how to get so many entries (as I store almost all my credentials in Vaultwarden myself and I am nowhere near that).

@vitorbaptista May I ask how you got that many ciphers? Did you test something, or do you maybe have an automated script? (If so, you could perhaps share it, or a reworked version, so we can reproduce the issue more easily.)

I was also wondering whether switching to a more robust database backend like Postgres or MariaDB (which, according to many answers on StackExchange, apparently don't have such a "low" limit as SQLite) might be a workaround for you in the meantime (until there is a fix), but I've not tested it myself.


@BlackDex commented on GitHub:

Ok, PR done. It should solve your issue, and it looks like I shaved off a bit more of the time it takes to sync. Not much, but every bit counts, especially with a huge cipher base 😉.

Comparing the version you are currently running and the one with this patch, you won't have time to get a ☕.


@vitorbaptista commented on GitHub:

@stefan0xC We use it to store third parties' credentials. It's an automated process that uses the bw CLI to add/update passwords in Vaultwarden. We also have a staging and a production organization to test this process, doubling the number of passwords we have.

Regarding migrating to another DB, that might be a better option. However, at this point, I think we'd better bite the bullet and use a Bitwarden.com organization, as we don't have that many users to begin with. I wonder if they would be able to handle this number of passwords, though.


@BlackDex commented on GitHub:

Good test environment for Vaultwarden haha.


@vitorbaptista commented on GitHub:

@BlackDex Sorry to bother, but do you have any ETA on when a new release is going to be done? Looking forward to trying your fix.


@BlackDex commented on GitHub:

@BlackDex Sorry to bother, but do you have any ETA on when a new release is going to be done? Looking forward to trying your fix.

Probably this weekend somewhere


@vitorbaptista commented on GitHub:

That's awesome, @BlackDex! I'll keep an eye on when this is released. Thank you for the quick turnaround.


@vitorbaptista commented on GitHub:

@BlackDex hey, I've been checking to see when this would be released. If you don't mind, I'd rather wait for the next release, given that this is a key piece of our company's infrastructure. Hopefully it won't take too long. I'll ping you here with the results.


@BlackDex commented on GitHub:

@vitorbaptista I'm curious to know if this solution works for you, and what your feeling is on the loading part.
If you could try out the testing-tagged image, that would be great!


@vitorbaptista commented on GitHub:

@BlackDex As promised, I did some quick performance comparisons. I was using version 1.23.0 before, upgraded to 1.28.0. The load time of the main page (just after logging in) is the same, 28 seconds. However, the load time of the /sync?excludeDomains=true endpoint went from 10.44s to 2.61s, a whopping 75% reduction!!!

The amount downloaded in that endpoint increased a bit, from 18.61 MB to 19.22 MB.

None of these checks were in any way scientific; I just used my regular clock to do the timings, and Firefox's Network tab to check the /sync endpoint size and timing.

Thanks a lot for your work! The bug is solved, and I can now resume using the Bitwarden apps.
