iOS app complains "an error has occurred" when token expires #303
Originally created by @tinythomasffm on GitHub.
Vaultwarden Support String
Your environment (Generated via diagnostics page)
Config (Generated via diagnostics page)
Show Running Config
Environment settings which are overridden: DOMAIN
Vaultwarden Build Version
v1.32.5
Deployment method
Official Container Image
Custom deployment method
Official container image running on Kubernetes (k3s)
Reverse Proxy
ingress-nginx, deployed via helm chart ingress-nginx-4.11.3 (latest)
Host/Server Operating System
Linux
Operating System Version
iOS
Clients
iOS
Client Version
2024.11.0 (1680)
Steps To Reproduce
Expected Result
The sync should happen without any issues.
Actual Result
When trying to sync, the iOS app throws "an error has occurred". Logging out of the app and logging back in fixes the problem (until the next token expiry).
Logs
Screenshots or Videos
No response
Additional Context
No response
@tinythomasffm commented on GitHub:
No HA setup. Nothing is configured for RSA key storage, so it's the default /data/rsa_key.pem.
@BlackDex commented on GitHub:
Do you have a HA setup?
Where is the rsa key stored?
@tinythomasffm commented on GitHub:
Forgot to mention: the macOS browser plugins and the web vault work without issues.
@BlackDex commented on GitHub:
I can't reproduce this at all.
I used an iOS device with the exact same version and changed the token expire/lifetime to 10 minutes to make my life easier.
I do get the same error message, but the app stays logged in and syncs again after getting a new refresh token.
I'll leave it running for longer now and see what happens.
But I suspect your reverse proxy may be rewriting error responses like 401 etc. into error pages and only passing 2xx and 3xx on unmodified.
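(Not from the thread, just a sketch of the kind of setup being described.) With ingress-nginx, custom error pages are typically enabled either globally via the controller's custom-http-errors setting or per Ingress via an annotation; any listed status code is intercepted and answered by the error backend instead of the upstream. If 401 is in that list, Vaultwarden's JSON 401 body never reaches the client, so the app cannot tell that it merely needs to refresh its token. Names and hosts below are placeholders.

```yaml
# Hypothetical Ingress excerpt: with 401 listed, ingress-nginx intercepts the
# upstream 401 and serves the custom error page instead of Vaultwarden's JSON
# error body, which the mobile client expects when its access token expires.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: vaultwarden            # placeholder name
  annotations:
    nginx.ingress.kubernetes.io/custom-http-errors: "401,403,404,500,501,502,503"
spec:
  ingressClassName: nginx
  rules:
    - host: vault.example.com  # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: vaultwarden
                port:
                  number: 80
```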
@BlackDex commented on GitHub:
And /data is a persistent volume?
@tinythomasffm commented on GitHub:
Yes.
@tinythomasffm commented on GitHub:
I'm just checking that. The nginx helm chart I am using was updated to that version on Oct 8; I cannot totally rule out that the issue came with that update. I'm checking its changelogs now.
@BlackDex commented on GitHub:
Maybe your reverse proxy setup has been updated too?
Or K8s received some changes which could cause this.
@tinythomasffm commented on GitHub:
Vaultwarden is deployed as a StatefulSet; the volume is persistent across container restarts.
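(Illustrative only; the actual manifests are not part of the thread.) A single-replica StatefulSet with a volumeClaimTemplate is roughly what is being described: /data, and with it rsa_key.pem, survives container restarts and rescheduling, so the key used to sign tokens does not change. Names, image tag, and storage size are placeholders.

```yaml
# Minimal sketch of the described deployment (placeholder names/sizes).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: vaultwarden
spec:
  serviceName: vaultwarden
  replicas: 1
  selector:
    matchLabels:
      app: vaultwarden
  template:
    metadata:
      labels:
        app: vaultwarden
    spec:
      containers:
        - name: vaultwarden
          image: vaultwarden/server:1.32.5
          ports:
            - containerPort: 80
          volumeMounts:
            - name: data
              mountPath: /data   # holds the database and rsa_key.pem
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```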
@tinythomasffm commented on GitHub:
That could be. But I only noticed this now; I've been using this setup (with older versions of Vaultwarden) for a long time and the issue only came up these days. Really strange.
@tinythomasffm commented on GitHub:
I disabled the 401 custom error message and it seems to work now. I have no idea how it could have worked before, as these were in place the whole time… I'll watch it a little longer, but I feel this is not an issue with Vaultwarden. Thanks for looking into this!
@BlackDex commented on GitHub:
Well, I tested it just now after waiting a very long time, and with the token only valid for 10 minutes, I was still able to sync without any issue at all.
I do not know if something else may have changed on the reverse proxy. Maybe there were some bugs or fixes which caused it not to happen before.
I'm just not able to reproduce it myself.
@tinythomasffm commented on GitHub:
But wouldn't that issue then hit the browser plugins as well?
They work without problems.
@tinythomasffm commented on GitHub:
Hmm, the nginx setup does indeed use custom error pages for 401, 403, 404, 500, 501, 502, 503.
I will remove 401 from that list and see if it fixes the issue. But I still wonder why this came up now; these error redirects were in place the whole time and I never noticed anything.
Will check that out.
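For reference, if the error pages are configured at the controller level through the helm chart's values (the thread does not say exactly where the list lives), dropping 401 could look roughly like this; a per-Ingress custom-http-errors annotation would be adjusted the same way.

```yaml
# Hypothetical ingress-nginx helm values excerpt: 401 removed so Vaultwarden's
# own JSON 401 response reaches the client and the app can refresh its token.
controller:
  config:
    custom-http-errors: "403,404,500,501,502,503"
```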