Mirror of https://github.com/pocket-id/pocket-id.git, synced 2025-12-09 14:42:59 +03:00
🐛 Bug Report: Use a process manager in the container to ensure apps restart if they crash #306
Originally created by @ItalyPaleAle on GitHub.
Reproduction steps
One of the processes in the container (`node`, `pocket-id-backend`, or `caddy`) crashes.
Expected behavior
Either the entire container should crash, or the process should be restarted within the container.
Actual Behavior
The process is not restarted, yet the container still appears to be running because the entrypoint script (the container's main process) is still up.
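To make the failure mode concrete, here is a stand-alone sketch you can run outside any container. The processes are stand-ins I made up, not the real Pocket ID services: a backgrounded "backend" dies, but the entrypoint keeps running, so the container would still look healthy.

```shell
#!/usr/bin/env bash
# Sketch of the failure mode: the entrypoint backgrounds its services; one
# dies, but the entrypoint itself stays alive until EVERY job has exited.
sh -c 'exit 1' &          # stand-in for the backend, crashing immediately
backend_pid=$!
sleep 2 &                 # stand-in for a service that keeps running
wait "$backend_pid"       # reap the crashed "backend"
backend_status=$?
echo "backend exited with status $backend_status, but the entrypoint is still up"
wait                      # blocks until every remaining job is done
echo "entrypoint (and thus the container) exits only now"
```

From the container runtime's point of view, only the entrypoint's lifetime matters, which is why the container stays "running" here.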
Recommended fix: use a process manager within the container that ensures a crashed process is restarted (or, failing that, that the entire container stops).
PS: If you're looking for suggestions, I like supervisord.
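As an illustration of what such a process manager buys you, here is a minimal restart loop in plain shell. The `run_supervised` helper and its retry limit are hypothetical, invented for this sketch; a real deployment would use supervisord (or similar) instead.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of "restart on crash" supervision as a shell function.
run_supervised() {
  local max_restarts=3 attempts=0 status
  while true; do
    "$@"                # run the service in the foreground
    status=$?
    [ "$status" -eq 0 ] && return 0        # clean exit: stop supervising
    attempts=$((attempts + 1))
    echo "process exited with $status (restart $attempts/$max_restarts)"
    if [ "$attempts" -ge "$max_restarts" ]; then
      return "$status"  # give up, so the whole container can stop
    fi
  done
}

# Usage sketch: supervise a fake service that always crashes.
run_supervised sh -c 'exit 1' || echo "process manager gave up; container stops"
```

This gives both behaviors the issue asks for: automatic restarts on crash, and a container-level failure once the restart budget is exhausted.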
Version and Environment
v0.39.0
Log Output
No response
@stonith404 commented on GitHub:
Yeah, I think it would make sense for the whole container to stop when one of the three services fails.
Regarding spawning the frontend from the backend, I wouldn't do this. This would complicate non-containerized setups unnecessarily. While we could make it optional, it adds coupling between components that should remain separate and offers little advantage over our current entrypoint script approach.
@kmendell commented on GitHub:
Was this in pocket-id, or a different app? Just curious whether this would be worth the effort to implement. I see where you're coming from, though; I just want to get the full picture.
@kmendell commented on GitHub:
Have you actually experienced a crash like this? If so, could you provide some context on how it happened (within reason, I know you probably don't have logs or anything)?
@ItalyPaleAle commented on GitHub:
I actually have, albeit in a dev scenario.
I built a container with a modified backend app. I made a mistake and the app crashed on startup. However, the container kept running, and Podman still marked it as running because the main process (`entrypoint.sh`) was still alive.
While this was at least partly my fault, there can be situations where one of the apps in the container crashes due to runtime bugs or other causes. I haven't experienced that myself (yet?), but it is possible, and it would cause a situation like the one I did experience.
@ItalyPaleAle commented on GitHub:
It was pocket-id; I'm working on a PR to fix another bug that will be ready shortly.
However, I have experience with situations like these (one container, multiple processes), where I had to make sure that, for example, one process was restarted automatically upon a crash while another one caused the entire container to crash.
I do think we could fix both this problem and #324 together by making the Go app itself spawn (and keep alive) the Node.js app.
@ItalyPaleAle commented on GitHub:
@stonith404 yes, I can confirm that's the case. Here's a repro (using v0.40):
Run the container with `--read-only` (read-only root file system) and without mounting a volume for `/app/backend/data`. This will make the backend crash because it can't write the key file. The container will still be up:
However, you can see in the logs that the backend isn't running:
(this happens with or without caddy)
You can invoke the frontend (or caddy, if it's running) and you'll see an error like this, indicating that the frontend is still up, so at least one service is running:
(`localhost:4000` is because of port forwarding)
@kmendell commented on GitHub:
We answered why we have one container in this older issue: https://github.com/pocket-id/pocket-id/issues/148#issuecomment-2605789073. To summarize: we want a single image to simplify the setup process, since one of the core aspects of Pocket-ID is to be simple, and not as complex (in setup and usage) as other OIDC providers.
@stonith404 commented on GitHub:
Thanks @kmendell
@Pitasi It would be ideal if we had an all-in-one image, separate images, and support for Kubernetes and Podman, but this would require us to maintain all of those installation methods even though we don't use them ourselves.
Because of that, I would like to outsource those methods and then just link to them in the docs.
@Pitasi commented on GitHub:
My 2 cents: taking a step back, why have a single container running three services instead of two or three containers?
It's common for services to provide a sample docker compose for easy self-hosting, e.g., https://github.com/plausible/community-edition/blob/v2.1.5/compose.yml.
Having granular containers lets you use Docker as the supervisor without having to care about any of this.
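For reference, the split layout being described might look roughly like the following compose sketch. Everything here is illustrative: the service split, image names, and ports are made up, and Pocket ID does not publish separate images today.

```yaml
# Hypothetical compose sketch: one container per service, with Docker's own
# restart policy acting as the supervisor. Image names are invented.
services:
  backend:
    image: example/pocket-id-backend
    restart: unless-stopped
  frontend:
    image: example/pocket-id-frontend
    restart: unless-stopped
    depends_on:
      - backend
  caddy:
    image: caddy:2
    restart: unless-stopped
    ports:
      - "443:443"
    depends_on:
      - frontend
```

With `restart: unless-stopped`, a crashed service is restarted by Docker itself, which is the "Docker as the supervisor" point being made.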
@stonith404 commented on GitHub:
@ItalyPaleAle Are you sure that the container doesn't stop if one of the three services crashes?
`wait -n` at the end of the entrypoint should wait until one process finishes and then return the status code of the finished process.
@kmendell commented on GitHub:
@ItalyPaleAle I think, to stonith's point, if something crashes the entire container should stop. I haven't looked much into this yet, but we'll want to implement this in the simplest way possible.
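The `wait -n` behavior stonith404 describes can be sketched outside a container like this (requires bash 4.3+; the two `sleep` jobs are stand-ins for the real services, not part of the actual entrypoint):

```shell
#!/usr/bin/env bash
# `wait -n` blocks until the FIRST background job exits and returns that
# job's status, so an entrypoint can stop the container as soon as any one
# service dies. Stand-in processes only.
(sleep 1; exit 7) &    # stand-in for a service that crashes with status 7
sleep 5 &              # stand-in for a service that keeps running
wait -n                # returns as soon as the first job finishes
status=$?
kill $(jobs -p) 2>/dev/null   # clean up the surviving job
echo "first service exited with status $status"
# a real entrypoint would now run: exit "$status"
```

Exiting with the failed service's status is what lets the container runtime observe the crash and apply its restart policy.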
@ItalyPaleAle commented on GitHub:
@kmendell Whether you run a process manager within the container, or let the entire container crash and have the orchestrator restart it, either approach would be fine.
However, neither of the above is happening today, as the repro above shows.
@ItalyPaleAle commented on GitHub:
@kmendell The read-only FS was just one example of how to repro this. You can repro it in any other way that makes the backend crash, for example with an incorrect Postgres connection string.
That said, `--read-only` applies to the root file system only. Mounted volumes can be read-write if the host allows it (they are mounted as RW on the host), the user has permissions, and SELinux isn't in the way. You can read more about it here: https://medium.com/datamindedbe/improve-the-security-of-pods-on-kubernetes-3e4a81534674 (this was written for K8s, but it's supported in Docker too via `--read-only`). (Off topic, but using a read-only root FS is quite useful for security, and many security scanners will flag containers that don't do that.)
@stonith404 commented on GitHub:
@ItalyPaleAle It seems that only Podman doesn't stop the container if some service crashes.
If you run `docker run --read-only pocket-id/pocket-id`, the container will stop. So Docker stops the container if some service crashes. Do you have any clue why Podman could handle this differently?
@kmendell commented on GitHub:
@ItalyPaleAle I've never used a read-only file system though; how does this really work? Even if you mount a volume, wouldn't it still be read-only?
@ItalyPaleAle commented on GitHub:
@stonith404 I can repro with Docker too:
The container isn't crashing.
This is with Docker 28.0.2 on Ubuntu 22.04.
@stonith404 commented on GitHub:
Thanks, I was able to reproduce this on my Ubuntu server too. This should now be fixed in the latest version, let me know if you still have any issues.