Jellyfin creates enormous amounts of processes #6764

Closed
opened 2026-02-07 04:04:44 +03:00 by OVERLORD · 9 comments

Originally created by @SomeBelgianDude on GitHub (Feb 20, 2025).

Description of the bug

Since updating to 10.10.5, and still on 10.10.6, Jellyfin creates an enormous number of threads, even leading to warnings in the log.

Reproduction steps

1. Start Jellyfin
2. Have people start and stop streaming
3. Slowly, threads start accumulating

What is the current bug behavior?

Hundreds of threads

What is the expected correct behavior?

A reasonable number of threads

Jellyfin Server version

10.10.0+

Specify commit id

No response

Specify unstable release number

No response

Specify version number

No response

Specify the build version

10.10.6

Environment

- OS: Ubuntu 24.10
- Linux Kernel: 6.1.0-1025-rockchip
- Virtualization: none
- Clients: /
- Browser: /
- FFmpeg Version: 7.0.2
- Playback Method: /
- Hardware Acceleration: RKMPP
- GPU Model: RK3588
- Plugins: Default
- Reverse Proxy: Traefik
- Base URL: /
- Networking: host
- Storage: nfs

Jellyfin logs

"02/20/2025 19:22:06 +00:00", the heartbeat has been running for "00:00:01.1910804" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:09.271 +01:00] [WRN] As of "02/20/2025 19:22:09 +00:00", the heartbeat has been running for "00:00:01.1698107" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:12.782 +01:00] [WRN] As of "02/20/2025 19:22:12 +00:00", the heartbeat has been running for "00:00:01.1855610" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:16.287 +01:00] [WRN] As of "02/20/2025 19:22:16 +00:00", the heartbeat has been running for "00:00:01.1650166" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:19.751 +01:00] [WRN] As of "02/20/2025 19:22:19 +00:00", the heartbeat has been running for "00:00:01.1581506" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:20.913 +01:00] [INF] WS "109.178.138.10" request
[2025-02-20 20:22:22.701 +01:00] [WRN] WS "109.178.138.10" error receiving data: "The remote party closed the WebSocket connection without completing the close handshake."
[2025-02-20 20:22:22.703 +01:00] [INF] WS "109.178.138.10" closed
[2025-02-20 20:22:27.344 +01:00] [WRN] As of "02/20/2025 19:22:27 +00:00", the heartbeat has been running for "00:00:03.4942511" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:30.737 +01:00] [WRN] As of "02/20/2025 19:22:30 +00:00", the heartbeat has been running for "00:00:02.2732428" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:34.199 +01:00] [WRN] As of "02/20/2025 19:22:34 +00:00", the heartbeat has been running for "00:00:01.1346931" which is longer than "00:00:01". This could be caused by thread pool starvation.
[2025-02-20 20:22:35.340 +01:00] [INF] Sending shutdown notifications
[2025-02-20 20:22:39.972 +01:00] [WRN] As of "02/20/2025 19:22:39 +00:00", the heartbeat has been running for "00:00:02.2851607" which is longer than "00:00:01". This could be caused by thread pool starvation.

FFmpeg logs


Client / Browser logs

No response

Relevant screenshots or videos

![Image](https://github.com/user-attachments/assets/10bb5ccb-11bf-4b8d-8b41-943e8ed1289f)

Additional information

No response

OVERLORD added the bug and stale labels 2026-02-07 04:04:44 +03:00

@gnattu commented on GitHub (Feb 21, 2025):

These are threads, not processes. On Linux, even threads have their own PID. The log lines you are seeing are printed by the web service when the heartbeat takes more than 1 second to respond, but the reason is not necessarily thread pool starvation; on a slow CPU, a busy server can trigger this easily.
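
A quick way to see the distinction is to count the threads of the main Jellyfin process instead of grepping the process table. A minimal sketch, assuming a native install where the binary is named `jellyfin` (adjust the PID lookup for a container setup):

```shell
# Each entry under /proc/<pid>/task is a thread; on Linux each one gets its own TID.
JF_PID=$(pidof jellyfin | awk '{print $1}')

echo "Jellyfin PID: $JF_PID"
echo "Thread count: $(ls /proc/"$JF_PID"/task | wc -l)"

# Equivalent: ask ps for the number of lightweight processes (NLWP) of that PID.
ps -o nlwp= -p "$JF_PID"
```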

@matt1432 commented on GitHub (Feb 22, 2025):

I'm having the exact same issue.

I found this issue here: https://github.com/DonutWare/Fladder/issues/121

I used the script they used and I had the same results. The script counted upwards of 12000 ~~ffmpeg instances running~~ jellyfin threads. Ever since I started monitoring it, Jellyfin starts getting slow at around one or two thousand threads and needs to be restarted.
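
(Not the exact script from that issue, but a rough sketch of a similar monitor, assuming a native install whose binary is named `jellyfin`: it logs the thread count once a minute so the growth over time is visible.)

```shell
# Append a timestamped thread count to a log file every 60 seconds.
while true; do
    pid=$(pidof jellyfin | awk '{print $1}')
    count=$(ls "/proc/$pid/task" 2>/dev/null | wc -l)
    echo "$(date -Is) threads=$count" >> /tmp/jellyfin-threads.log
    sleep 60
done
```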

@gnattu commented on GitHub (Feb 22, 2025):

> I'm having the exact same issue.
>
> I found this issue here: DonutWare/Fladder#121
>
> I used the script they used and I had the same results. The script counted upwards of 12000 ffmpeg instances running. Ever since I started monitoring it, Jellyfin starts being slow around a thousand or two and needs to be restarted.
>
> Edit: I can confirm that all those instances were created by Jellyfin since restarting the unit force killed them all

These are not ffmpeg instances. Having a CLI parameter telling the server where ffmpeg is does not mean ffmpeg is running; the main process is still jellyfin.
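
One hedged way to check this yourself: count processes whose executable is literally `ffmpeg`, versus process-table lines that merely mention ffmpeg because the jellyfin command line carries the path to the ffmpeg binary.

```shell
# Real ffmpeg processes (command name is exactly "ffmpeg"):
pgrep -cx ffmpeg

# Lines that merely contain the string "ffmpeg" somewhere in the command line,
# which also matches jellyfin threads whose arguments point at the ffmpeg binary:
ps -eLf | grep -c '[f]fmpeg'
```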

@matt1432 commented on GitHub (Feb 22, 2025):

Oh right, my bad.

@matt1432 commented on GitHub (Feb 26, 2025):

@SomeBelgianDude Do you use Jellystat? I've been doing some tests and it seems like Jellystat is causing this issue

@SomeBelgianDude commented on GitHub (Feb 26, 2025):

Can confirm.
I will disable it and check it out.

@mellow129 commented on GitHub (Feb 27, 2025):

I migrated from Emby to Jellyfin in January.
Jellyfin had been running great until about a week ago, when I started seeing load averages reach 700-ish and thread counts around 12-14K. The JF server would become slow to unresponsive pretty early in that upward trend. I would just manually restart the service and move on.
I woke up early this morning to texts that our server was having issues. I restarted the service and checked the metrics and logs. I decided to check this project page for issues and, lo and behold, found this thread. Thanks for posting.

![Image](https://github.com/user-attachments/assets/6a0843ad-702c-4eec-b538-e5c29c5e45e2)

![Image](https://github.com/user-attachments/assets/52b894f8-4d31-4e67-99a8-3720a09ec2b5)

I set up Jellystat about a week ago, so the time frame when this started happening does coincide. I'm gonna shut down that stack for the weekend and see if the problem goes away.
Thanks @SomeBelgianDude and @matt1432

Update: 7 days with the Jellystat stack shut down and no new events have occurred.

@jellyfin-bot commented on GitHub (Jul 5, 2025):

This issue has gone 120 days without an update and will be closed within 21 days if there is no new activity. To prevent this issue from being closed, please confirm the issue has not already been fixed by providing updated examples or logs.

If you have any questions you can use one of several ways to [contact us](https://jellyfin.org/contact).

@JPVenson commented on GitHub (Jul 8, 2025):

fixed by #14281
