Thread 'main' panicked at 'Error running migrations' #866

Closed
opened 2025-10-09 16:54:36 +03:00 by OVERLORD · 7 comments

Originally created by @kenlasko on GitHub.

Subject of the issue

Upgrading from 1.27 to 1.28, the container keeps crashing and restarting.

Deployment environment

K3S version v1.25.7+k3s1

  • vaultwarden version: 1.28.0

  • Install method:
    Kubernetes install

  • MySQL/MariaDB or PostgreSQL version: MariaDB 10.6.12-debian-11-r6

  • Other relevant details:
    The error points to a duplicate column name 'reset_password_key'. I can see there is already a reset_password_key column in the users_organizations table; it is currently NULL for all rows (see the check sketched below).

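For reference, a check along these lines would confirm whether the column is already present (a sketch; it assumes the Vaultwarden database is the currently selected schema in MariaDB):

SHOW COLUMNS FROM users_organizations LIKE 'reset_password_key';

-- Or, via information_schema:
SELECT COLUMN_NAME, IS_NULLABLE, COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE TABLE_NAME = 'users_organizations'
  AND COLUMN_NAME = 'reset_password_key';
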
Steps to reproduce

  • Deleted the existing Vaultwarden pod, which prompted creation of a new pod running the latest version

Expected behaviour

Expect it to start up without getting into a crashloop :)

Actual behaviour

Endless crashloop

Troubleshooting data

[2023-03-29 14:19:07.551][panic][ERROR] thread 'main' panicked at 'Error running migrations: QueryError(DieselMigrationName { name: "2023-01-06-151600_add_reset_password_support", version: MigrationVersion("20230106151600") }, DatabaseError(Unknown, "Duplicate column name 'reset_password_key'"))': src/db/mod.rs:471
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: vaultwarden::db::mysql_migrations::run_migrations
   8: vaultwarden::main::{{closure}}
   9: tokio::runtime::park::CachedParkThread::block_on
  10: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
  11: tokio::runtime::runtime::Runtime::block_on
  12: rocket::async_main
  13: vaultwarden::main
  14: std::sys_common::backtrace::__rust_begin_short_backtrace
  15: std::rt::lang_start::{{closure}}
  16: std::rt::lang_start_internal
  17: main
  18: __libc_start_main
  19: _start
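
For context, the failing migration presumably adds that column with something along these lines (a sketch, not copied from the Vaultwarden repository), which explains why it aborts when the column is already present:

-- Hypothetical content of the MySQL/MariaDB up.sql for
-- 2023-01-06-151600_add_reset_password_support; ADD COLUMN fails with
-- "Duplicate column name" when the column already exists.
ALTER TABLE users_organizations
ADD COLUMN reset_password_key TEXT;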

@kenlasko commented on GitHub:

Thanks for getting back to me!

Adding the first record still results in the same failure. I then ran:

INSERT INTO `__diesel_schema_migrations` VALUES ("20230131222222", "2023-03-29 01:00:00");

and it still fails with the same error.

I'm just running a single pod of Vaultwarden.

@BlackDex commented on GitHub:

Strange. It looks like the migration was aborted somewhere mid-way, which meant the table that stores the already-executed migrations was never updated.

Please try to update that table manually by running:

INSERT INTO `__diesel_schema_migrations` VALUES ("20230106151600", "2023-03-29 01:00:00");

That should tell the migration runner to skip that step, since the column already exists, as you mentioned.
You might also need to add 20230131222222, btw. Just use the same date/time (the second value); that should do the trick. But only if it still fails!!
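
Before adding anything, it may help to check which versions are already recorded (a sketch, assuming the standard Diesel layout of that table with version and run_on columns):

SELECT version, run_on
FROM `__diesel_schema_migrations`
ORDER BY version;

-- 20230106151600 (and possibly 20230131222222) should be missing from the
-- output if the migration bookkeeping really is out of sync.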

@BlackDex commented on GitHub:

It fails with the exact same error? Or a different one?
That is important, because otherwise your database might not be compatible in the end.

@BlackDex commented on GitHub:

Btw, you mentioned k8s/pods. Are you running multiple pods of Vaultwarden?
That is not supported, and it could have caused this strange issue.

@kenlasko commented on GitHub:

Looks like the same error:

[2023-03-29 15:28:59.117][panic][ERROR] thread 'main' panicked at 'Error running migrations: QueryError(DieselMigrationName { name: "2023-01-06-151600_add_reset_password_support", version: MigrationVersion("20230106151600") }, DatabaseError(Unknown, "Duplicate column name 'reset_password_key'"))': src/db/mod.rs:471
   0: vaultwarden::init_logging::{{closure}}
   1: std::panicking::rust_panic_with_hook
   2: std::panicking::begin_panic_handler::{{closure}}
   3: std::sys_common::backtrace::__rust_end_short_backtrace
   4: rust_begin_unwind
   5: core::panicking::panic_fmt
   6: core::result::unwrap_failed
   7: vaultwarden::db::mysql_migrations::run_migrations
   8: vaultwarden::main::{{closure}}
   9: tokio::runtime::park::CachedParkThread::block_on
  10: tokio::runtime::scheduler::multi_thread::MultiThread::block_on
  11: tokio::runtime::runtime::Runtime::block_on
  12: rocket::async_main
  13: vaultwarden::main
  14: std::sys_common::backtrace::__rust_begin_short_backtrace
  15: std::rt::lang_start::{{closure}}
  16: std::rt::lang_start_internal
  17: main
  18: __libc_start_main
  19: _start

@kenlasko commented on GitHub:

Did as instructed and the pod came up without issue!

Thank you very much for the fast response. Very glad I switched from LastPass, and even happier to be running it locally!

What's interesting is that the last SQL query you had me enter was the same query that I started with. It didn't work then, but did the second time. Weird.

@BlackDex commented on GitHub:

Sorry, my bad, I pasted the wrong value.

Please remove both records you added yourself. After that, run the following:

INSERT INTO `__diesel_schema_migrations` VALUES ("20230106151600", "2023-03-29 01:00:00");

So, make sure all records using "2023-03-29 01:00:00" as the date are gone, run the query shown above, and try again.
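
In other words, something along these lines (a sketch; run_on as the timestamp column name assumes the standard Diesel migrations table):

-- Remove the two manually added records; both used the 2023-03-29 01:00:00 timestamp.
DELETE FROM `__diesel_schema_migrations` WHERE run_on = '2023-03-29 01:00:00';
-- Then re-run the INSERT for 20230106151600 shown above and restart the pod.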
