[Issue]: Jellyfin 10.9.1 locking up #5746
Originally created by @ShakeSp33r on GitHub (May 14, 2024).
Please describe your bug
I'm running the linuxserver/jellyfin:10.9.1 container. Everything works fine for a while, then Jellyfin locks up: everything freezes and I can't connect via web or Android TV. I have to restart the container.
Had a look at the log file, and this was the last bit before having to restart the container.
Reproduction Steps
Server is scanning chapters, and I have 4 users connected.
Locking up happens randomly.
Jellyfin Version
10.9.0
if other:
10.9.1
Environment
Jellyfin logs
FFmpeg logs
No response
Please attach any browser or client logs here
No response
Please attach any screenshots here
No response
Code of Conduct
@jellyfin-bot commented on GitHub (May 14, 2024):
Hi, it seems like your issue report has the following item(s) that need to be addressed:
This is an automated message, currently under testing. Please file an issue here if you encounter any problems.
@felix920506 commented on GitHub (May 14, 2024):
What is the host operating system?
@ShakeSp33r commented on GitHub (May 14, 2024):
Debian GNU/Linux 11 (bullseye)
@JamsRepos commented on GitHub (May 14, 2024):
I had a similar situation where the server looked fine in the logs but you could not connect to the front-end at all; this happened twice within 24 hours. I restarted the container and have been fine since, and I'm waiting for it to happen again so I can check the logs further. Could be related, could be a fluke.
I'm using the official Docker image on Unraid.
@felix920506 commented on GitHub (May 14, 2024):
@JamsRepos JF locking up on Unraid is a known issue and is being tracked in #11459
@l7ssha commented on GitHub (May 14, 2024):
I have a similar problem, with WebSockets closing I presume. I don't know if that's the culprit, since Jellyfin hangs randomly, disconnects from clients, and then comes back again after some time or after restarting the service.
These are the only error messages I see. These errors came up while playing music through Feishin, but playing through the official client behaves the same way.
Using official ubuntu package in lxc container.
EDIT: My issue might not be connected to the WebSocket error mentioned above. It did the same thing again (plays media normally, stops after some time, client no longer responsive, nothing loads). New logs indicate that the Playback Reporting plugin locks the server up:
EDIT2:
Also a lot of errors like this
when using the official client. It happens when opening a collection: nothing loads, the spinner just spins, and backing up and clicking again makes it work (the collection loads properly).
@ServeurpersoCom commented on GitHub (May 14, 2024):
Same problem: freezes every few hours, without Docker, on bare Debian. I need to systemctl restart jellyfin :'(
@felix920506 commented on GitHub (May 14, 2024):
Is anyone here using LXC?
If so, see #11344
@JonHodgins commented on GitHub (May 14, 2024):
Experiencing the same problem on 10.9.1 using docker on Debian 12.5 and the official Jellyfin image.
@ServeurpersoCom commented on GitHub (May 14, 2024):
This is a general issue that seems to make 10.9 unusable. I'm going to wait a bit while trying to find a way to help debug; otherwise it will be a rollback (difficult, because you have to restore the database from a 10.8 version and redo a lot of work).
@JamsRepos commented on GitHub (May 14, 2024):
Yeah - I rolled back to 10.8.13 and it's all perfect and responsive again. Think I'll hold off on 10.9 until it's a little bit more stable.
@soakes commented on GitHub (May 14, 2024):
@felix920506 I am also running LXC (Proxmox): a Debian 12 guest that runs the official Jellyfin deb repo, with shared mounts from the host plus some additional NFS mounts connected from inside the guest.
This is running Debian version:
The only thing I have noticed that is worth mentioning is that the logs show:
Confirming on the filesystem, this path is incorrect and should be set to:
I'm not seeing anything worth noting in the logs (yet). Will update if I find anything else.
@Jomack16 commented on GitHub (May 14, 2024):
Using "jellyfin/jellyfin" image
Running under Docker "Docker version 26.1.2, build 211e74b"
On host OS "Ubuntu 22.04.4 LTS (GNU/Linux 6.5.0-28-generic x86_64)"
I am experiencing the same "locking up"
It is evidenced by streams failing, and the log ceasing to write.
The last line of the log is how I can know exactly when the problem happens, because nothing else is logged until the container is restarted.
This is the last ERR in the log before Jellyfin locks up:
And this is the last line in the log before Jellyfin locks up:
If it is useful, I can provide copies of these debug level logs.
edited:
log attached
jellyfin20240514_004-116pissueScrubbed.zip
@joshuaboniface commented on GitHub (May 14, 2024):
I am also experiencing this occasionally, and am set up to do a full core dump of the process the next time it does. Though in my case on Debian 11 (standard .deb packages via repo) it's only happening once every couple of days at most.
@ServeurpersoCom commented on GitHub (May 14, 2024):
3 times in about 1 hour
@memehammad commented on GitHub (May 14, 2024):
Try having a lot of concurrent streams. I usually just opened 10 new tabs and played the same movie in each of them. After about 5 minutes Jellyfin becomes unresponsive.
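If you want to script that kind of load instead of opening browser tabs, here's a rough sketch; the /Videos/{id}/stream endpoint and the api_key parameter are from the public Jellyfin API, but SERVER, ITEM_ID, and TOKEN are placeholders you'd substitute:

# Open 10 parallel direct streams of the same item and discard the data
for i in $(seq 1 10); do
  curl -s "http://SERVER:8096/Videos/ITEM_ID/stream?static=true&api_key=TOKEN" -o /dev/null &
done
wait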
@Jomack16 commented on GitHub (May 14, 2024):
This only seems to occur after a certain threshold of concurrent streams. I haven't nailed down the number yet, but it seems to be 9 or more.
@ServeurpersoCom commented on GitHub (May 14, 2024):
Do you also have a lot of:
(or is it only me?)
@soakes commented on GitHub (May 14, 2024):
I haven't got anything like this and yes it seems to be dying several times an hour.
@memehammad commented on GitHub (May 14, 2024):
Yes, I've gotten this a few times. Not always, but often.
@ServeurpersoCom commented on GitHub (May 14, 2024):
It looks like I get this at every crash, but I'm not sure.
@Jomack16 commented on GitHub (May 14, 2024):
I've only seen this once in the Debug level logging I have been collecting.
During the same time-period I have had to restart many times.
So they don't seem to correlate for me.
@joshuaboniface commented on GitHub (May 14, 2024):
I don't get those DB locks either, so it might be a separate failure. There is another issue for those.
@Jomack16 commented on GitHub (May 14, 2024):
After some testing, it looks like the number is more like 25 concurrent streams before another tab would not load.
However, I did not get it to stop logging, so if the cause is the number of concurrent streams, it may take more than that.
@memehammad commented on GitHub (May 14, 2024):
Pretty sure it depends on how powerful your system is. I'm on a low-power system (Intel J4125) and I've gotten these issues with as few as 3 streams.
@soakes commented on GitHub (May 14, 2024):
Getting the issue on a 12th Gen Intel(R) Core(TM) i5-1235U with iGPU, even with just a single stream, so I don't think stream count/specs are really the cause.
@ServeurpersoCom commented on GitHub (May 14, 2024):
And another lock, rarer (without ImageInfos). I'm trying to work out whether it correlates with my systemctl restarts:
@ServeurpersoCom commented on GitHub (May 14, 2024):
Certainly a consequence but not the cause :(
@joshuaboniface commented on GitHub (May 14, 2024):
The times I've noticed this, I always only had a very small number of concurrent streams (1, maybe 2) and testing with 10 now so far no crash/lockup. I'll let those run for a little bit longer.
@ServeurpersoCom commented on GitHub (May 14, 2024):
Oh, it crashed exactly when I closed my two fake-load players!
(They were displayed in a browser; I think that's irrelevant?)
Jellyfin.log: NO DATA, ffmpeg exit 0, Lost 50 WebSockets, goodbye!!!! We need more to debug with!
@ServeurpersoCom commented on GitHub (May 15, 2024):
To recover some of my crash timestamps from tonight (because it can crash with fewer than 9 WebSockets):
(root|/var/log/jellyfin) grep "Lost .. WebSockets." jellyfin20240514.log
[2024-05-14 19:56:33.725 +02:00] [INF] Lost 48 WebSockets.
[2024-05-14 21:27:56.001 +02:00] [INF] Lost 45 WebSockets.
[2024-05-14 21:43:38.231 +02:00] [INF] Lost 42 WebSockets.
grep "Lost .. WebSockets." jellyfin20240514_001.log
[2024-05-14 22:52:45.837 +02:00] [INF] Lost 50 WebSockets.
@ServeurpersoCom commented on GitHub (May 15, 2024):
Inside /etc/jellyfin/logging.json, change "MinimumLevel": "Information" to:
{
"Serilog": {
"MinimumLevel": "Debug",
"WriteTo": [
{
...
WAITING for a crash !
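For reference, a minimal sketch of the whole file with that change; the single Console sink is an assumption for brevity, so keep your existing WriteTo section rather than replacing it:

{
  "Serilog": {
    "MinimumLevel": "Debug",
    "WriteTo": [
      { "Name": "Console" }
    ]
  }
}

Restart the jellyfin service afterwards so the new level takes effect.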
@SingingFrog7 commented on GitHub (May 15, 2024):
I have a similar issue that was closed as a duplicate of #11344.
I had nothing useful in the log at debug level either. Some people said that the issue started between versions 20240104.7 and 20240112.1, so it has to be a commit between those dates? A lot was done between those dates, but .NET was updated to 8.0.1; I can't say if that could be the cause. If I can spare some time later, I'll try to build 10.9.1 with .NET 8.0.0 and see what it gives.
@soakes commented on GitHub (May 15, 2024):
Also tried; it lasted about 20 minutes then died, with nothing worth noting in the logs.
What is interesting is that Jellyfin itself is still alive: the log is still processing info and the port is still open, but if you try to curl it, it just hangs.
@ServeurpersoCom commented on GitHub (May 15, 2024):
I deleted all sessions because some users couldn't connect... resulting in a total crash of all clients (no login redirect)... can't debug anymore.
@derfy79 commented on GitHub (May 15, 2024):
Same problem: freezes every few hours, official Linux Docker image on Unraid.
@joshuaboniface commented on GitHub (May 15, 2024):
OK I was able to reproduce this myself finally and get some core dumps for analysis.
Curiously this time though, it didn't completely lock up. It was still kinda responding, with lots of
And then eventually after about ~5 minutes it started responding again.
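For anyone else who wants to capture a dump the developers can analyze, a minimal sketch, assuming a Linux host where the process is named jellyfin and a dotnet SDK is available:

# One-time install of the .NET dump tool
dotnet tool install --global dotnet-dump
# Capture a full dump of the running server (writes a core_* file)
dotnet-dump collect --type Full -p $(pidof jellyfin)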
@joshuaboniface commented on GitHub (May 15, 2024):
For anyone this is happening to, does Jellyfin eventually start responding again? Presumably like me you restart fairly quickly, but I'm curious if it's left, if it starts responding on its own again after any reasonable (~5, 10, 15 mins) time.
@SingingFrog7 commented on GitHub (May 15, 2024):
At least once for me, it did start responding again after some time.
Weirdly, I had no crash/lock during the last 24h, but had about 10 occurrences yesterday.
@ghost commented on GitHub (May 15, 2024):
Hi guys! I also have the same problem. Yesterday I found something; try these steps, maybe they will help find the problem:
Start monitoring your server. Uptime Kuma is a good option for that; this is the URL you need to watch: http://YOURIP:8096/health (a minimal curl-based stand-in is sketched below)
I found nothing relevant in the logs, so I removed all scheduled tasks; it hasn't frozen since...
Maybe it's a library scan problem, but I don't understand why I can't see anything in the logs.
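If you don't run Uptime Kuma, here is that stand-in; 8096 is the default HTTP port and /health is the endpoint mentioned above:

# Poll the health endpoint every 30 s and log failures with a timestamp
while true; do
  curl -fsS -m 10 "http://YOURIP:8096/health" > /dev/null \
    || echo "$(date -Is) health check failed"
  sleep 30
done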
@ShakeSp33r commented on GitHub (May 15, 2024):
Disabled all scheduled tasks, and the system has been up for the last 12 hours.
Had 4 streams running and all good; previously, on 10.8.13, I did 7-8 streams (Direct Play & transcoding) with no issues.
Will see how it goes
@haath commented on GitHub (May 15, 2024):
+1
I get random freezes both while streaming videos and while only streaming music.
With a cursory look through the docker logs I can only spot "websocket lost" messages that others reported.
Sometimes it lasts a couple of minutes, other times I have to restart the container.
Clients: LG WebOS, Feishin
Server: Ubuntu 23.10 x86_64 on Intel Celeron N3350
@ServeurpersoCom commented on GitHub (May 15, 2024):
Rollback for me.
It's not the Debian way to push beta software into a stable repository. The server crashes every time, [library scan failing without reason = SOLVED], and every user must uninstall/reinstall the app each time they want to connect, and it works only once...
@JamsRepos commented on GitHub (May 15, 2024):
I get your frustration, mine too. I also did a rollback; however, you've got to realise that you cannot do mass user testing without pushing things up the pipeline. You can only find so much in a restricted and limited test environment.
These things happen; if you want stability, then don't upgrade on day 1.
@ServeurpersoCom commented on GitHub (May 15, 2024):
I believed it, no luck! It's for this kind of thing that I keep an independent, categorized file base with my own backend: I can switch to Emby or any other software without losing TMDb matching or custom metadata (to give you an idea, I'm a datahoarder with a complete TMDb/IMDb data mirror on my local network; the Jellyfin database is only rebuildable temporary data for me, used to share with non-tech friends or the TV). I'll wait this weekend for a potential patch before rolling back or switching to Emby and rebuilding all file metadata from my master database.
@soakes commented on GitHub (May 15, 2024):
Thanks to everyone who's added comments to this issue; it helps to narrow down the real problem. This issue is hard to pin down because the logs are not very helpful.
I have, however, been doing some testing myself; what I've found is the following:
Will do some further testing (i.e. a single client) to make sure this was not a fluke, and will report/update this later.
@ServeurpersoCom commented on GitHub (May 15, 2024):
Can you try to delete a client session and reconnect? This requires an app reinstall!
@soakes commented on GitHub (May 15, 2024):
That would be a little problematic here as there are several external "non-tech" users, but I might be able to sort something out.
@ghost commented on GitHub (May 15, 2024):
I turned on my backup server and created a clean 10.9.1 install with a new library scan, to rule out the existing database as the cause of the problem. There was no problem until the fifth test user started playing; all of these were from the web client.
No response from Jellyfin; the server was OK with no load, the log stopped, and after 20 min new lines came up in the log file, but still nothing relevant.
From the timestamps you can see there is a big delay between new lines. I'm still waiting to see what happens after more time...
@ghost commented on GitHub (May 15, 2024):
OK, so the problem started at 10:35.
After all 5 sessions were dropped:
The server started working again and 575 new lines came into the log file:
These repeated...
@soakes commented on GitHub (May 15, 2024):
One question @gyilkoszabpehely: is this using the same database/config, or was this a completely fresh install, i.e. no data copied over (apart from access to the movies/TV)?
Initially when reading your comment I thought it was a full reinstall, but you also mentioned "new lib scan to rule out existing db", so I now suspect you just uninstalled Jellyfin and installed 10.9.1 again.
Only asking as one of the things I've yet to test is a fresh VM with a fresh config/db, where the only old data would be the TV/movie data itself. This would be a good test to work out whether it's an upgrade "thing" that's broken or Jellyfin itself.
On a completely separate note, since my last update, a second attempt with Firefox and an Android TV hung Jellyfin about ~11 minutes later.
I currently have a Firefox and a Chrome user online (nothing else) and it's still stable (roughly just hitting 17 minutes); will see if this completes.
@ghost commented on GitHub (May 15, 2024):
My live server was upgraded with an existing database.
My backup server was clean-installed this morning to test out the problems...
@soakes commented on GitHub (May 15, 2024):
Thank you, this is helpful; saves me a job :)
Update: just as you posted this message, my two clients crashed, roughly 20 minutes in. These were Firefox and Chrome, so I think that rules out Android/iOS issues talking to the new server.
@ghost commented on GitHub (May 15, 2024):
I can confirm that deleting the scheduled tasks will not solve the problem.
I also tried switching from hardware encoding to software, but the problem still exists.
I collected all the errors from the log that happened in the past few minutes:
@soakes commented on GitHub (May 15, 2024):
I just updated https://github.com/jellyfin/jellyfin/issues/11344#issuecomment-2112124780 as @Schutzwurst suggested that the thread count was going crazy, and he's right. Added some screenshots.
What's odd is that while the count is going crazy, there's nothing in the logs. Also, what's very odd is that I just had another crash; it was down for 5 or so minutes and this time it recovered itself, and even the thread count returned to normal (~30).
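If you want to watch the thread count yourself, a minimal sketch, assuming a bare-metal Linux host where the process is named jellyfin (for Docker, run it inside the container instead):

# Print the server's thread count once per second
watch -n1 'ls /proc/$(pidof jellyfin)/task | wc -l'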
@DwayneGoddem commented on GitHub (May 15, 2024):
I'm having the same sort of issue, but hosting Jellyfin on Windows, after upgrading to 10.9.1.
It's driving me nuts.
Same: nothing in the logs or anything.
@DwayneGoddem commented on GitHub (May 15, 2024):
I just got this in the log on a crash.
[2024-05-15 21:30:50.617 +10:00] [ERR] [17] Emby.Server.Implementations.HttpServer.WebSocketManager: WS "192.168.0.84" WebSocketRequestHandler error
System.Net.WebSockets.WebSocketException (997): The WebSocket is in an invalid state ('Closed') for this operation. Valid states are: 'Open, CloseReceived'
   at System.Net.WebSockets.ManagedWebSocket.SendAsync(ReadOnlyMemory`1 buffer, WebSocketMessageType messageType, WebSocketMessageFlags messageFlags, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Emby.Server.Implementations.HttpServer.WebSocketConnection.ProcessInternal(PipeReader reader)
   at Emby.Server.Implementations.HttpServer.WebSocketConnection.ReceiveAsync(CancellationToken cancellationToken)
   at Emby.Server.Implementations.HttpServer.WebSocketManager.WebSocketRequestHandler(HttpContext context)
   at Emby.Server.Implementations.HttpServer.WebSocketManager.WebSocketRequestHandler(HttpContext context)
[2024-05-15 21:30:50.621 +10:00] [ERR] [17] Emby.Server.Implementations.HttpServer.WebSocketManager: WS "192.168.0.92" WebSocketRequestHandler error
System.Net.WebSockets.WebSocketException (997): The WebSocket is in an invalid state ('Closed') for this operation. Valid states are: 'Open, CloseReceived'
   at System.Net.WebSockets.ManagedWebSocket.SendAsync(ReadOnlyMemory`1 buffer, WebSocketMessageType messageType, WebSocketMessageFlags messageFlags, CancellationToken cancellationToken)
--- End of stack trace from previous location ---
   at Emby.Server.Implementations.HttpServer.WebSocketConnection.ProcessInternal(PipeReader reader)
   at Emby.Server.Implementations.HttpServer.WebSocketConnection.ReceiveAsync(CancellationToken cancellationToken)
   at Emby.Server.Implementations.HttpServer.WebSocketManager.WebSocketRequestHandler(HttpContext context)
   at Emby.Server.Implementations.HttpServer.WebSocketManager.WebSocketRequestHandler(HttpContext context)
[2024-05-15 21:30:50.624 +10:00] [INF] [17] MediaBrowser.MediaEncoding.Transcoding.TranscodeManager: Transcoding kill timer stopped for JobId "cd1cec41c44e4fff873f8fe83443bbfa" PlaySessionId "027d9fc5aa7b4401ac392a7967e85da7". Killing transcoding
@ghost commented on GitHub (May 15, 2024):
It's the .NET TP Worker threads that start to multiply.
@joshuaboniface commented on GitHub (May 15, 2024):
We've found quite a bit of info in our internal chats about this (going through my core dumps and logs).
The short version is, right now we're pretty sure the issue has to do with transcoding jobs and updating the transcode status. This happens very frequently, but there is a database query in there which is very heavy and after enough time the backlog locks everything up. That TP Worker is the process running the queries.
We're working on a solution so please remain patient and we'll hopefully get that out in a 10.9.2 sometime very soon.
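Purely as an illustration of that failure mode (this is not Jellyfin's actual code; all names below are invented), the difference between paying a heavy per-user database query on every progress tick and caching the result looks roughly like this:

// Illustrative sketch only - models the pattern described above, not Jellyfin's code.
using System;
using System.Collections.Concurrent;
using System.Threading;

interface IUserStore
{
    bool IsAllowedToTranscode(Guid userId);
}

// Stands in for a database-backed lookup: every call pays the full query cost.
class DbUserStore : IUserStore
{
    public bool IsAllowedToTranscode(Guid userId)
    {
        Thread.Sleep(50); // simulated heavy query
        return true;
    }
}

// Caching wrapper: the heavy query runs once per user; later ticks hit memory.
class CachedUserStore : IUserStore
{
    private readonly IUserStore _inner;
    private readonly ConcurrentDictionary<Guid, bool> _cache = new();

    public CachedUserStore(IUserStore inner) => _inner = inner;

    public bool IsAllowedToTranscode(Guid userId) =>
        _cache.GetOrAdd(userId, id => _inner.IsAllowedToTranscode(id));
}

class Program
{
    static void Main()
    {
        var user = Guid.NewGuid();
        IUserStore store = new CachedUserStore(new DbUserStore());

        // Each playback progress report would otherwise pay the heavy query;
        // with the cache, only the first tick does, so no backlog builds up.
        for (int tick = 0; tick < 100; tick++)
        {
            store.IsAllowedToTranscode(user);
        }
        Console.WriteLine("100 progress ticks completed.");
    }
}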
@ghost commented on GitHub (May 15, 2024):
I can't speak for others but I have to say thank you! My team really appreciates what you're doing for the community; keep up the good work...
@soakes commented on GitHub (May 15, 2024):
Totally agree, and good job to everyone who's contributed in any way, as everything helps.
@ServeurpersoCom commented on GitHub (May 15, 2024):
Even if we use only audio transcoding (I never transcode video; everything is already 3-5 Mbps AVC)?
@soakes commented on GitHub (May 15, 2024):
What I've noticed is that even if it's direct, it's still going through ffmpeg.
You can see it in the logs (note the filenames).
I noticed this recently while looking at why IOPS/writes were higher than I expected even with most of the content being direct play, and then noticed the transcode directory was being filled with lots of files. This is why I've now set up a large ramdisk (tmpfs) for the transcode directory, to keep writes to a minimum (see the sketch below).
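A minimal sketch of that kind of mount; the 4 GB size and the /var/lib/jellyfin/transcodes path are assumptions, so check your actual transcode path in the dashboard first:

# Mount a tmpfs over the transcode directory (adjust path, size, and ownership)
mount -t tmpfs -o "size=4g,uid=$(id -u jellyfin),gid=$(id -g jellyfin)" tmpfs /var/lib/jellyfin/transcodes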
@cvium commented on GitHub (May 15, 2024):
Direct Stream = Audio, subtitles or container are unsupported
Remux = Container is unsupported
Direct Play is always just sent as-is.
@SingingFrog7 commented on GitHub (May 15, 2024):
I am pretty sure I had a few occurrences with Direct Play, but let's see how it goes with the fix. Weirdly, I still haven't had a crash in the last 36h, but I disabled all apps using Jellyfin's API (Jellyseerr, Jellystat), so I wonder if that helped?
@soakes commented on GitHub (May 15, 2024):
Thank you for the clarification.
@ServeurpersoCom commented on GitHub (May 15, 2024):
That's a basic thing to know if you want to put heavy load on a machine :)
@DwayneGoddem commented on GitHub (May 16, 2024):
You might be right there.
I can see a lot of this error:
I have seen errors for 'C:\ProgramData\Jellyfin\Server\transcoding-temp\transcodes' saying the file is open by another process.
This was in both transcoded and direct-play logs.
I cleared the logs, so I don't have a snip to post.
@arab21dwc commented on GitHub (May 16, 2024):
I have set the transcoding thread count to 1; the app has been responsive for 12 hours now. I'll keep it running like this while waiting for the fix in 10.9.2. Thanks @joshuaboniface, @gnattu, and everyone who has contributed to this project; keep it up, guys.
@ghost commented on GitHub (May 16, 2024):
Does anyone have any information about when this bug came up?
Earlier I used an unstable 10.9.0 from 2023, so my only option is to switch back to unstable, because 10.8.x will not work with my database.
My question is: do all of these releases contain this bug? https://repo.jellyfin.org/files/server/debian/unstable/
@memehammad commented on GitHub (May 16, 2024):
Try version 2024040106 or earlier. The issue was introduced in the version after.
@ghost commented on GitHub (May 16, 2024):
Ok, and where can I get this version?
@memehammad commented on GitHub (May 16, 2024):
No clue 💀💀💀
@joshuaboniface commented on GitHub (May 16, 2024):
It would have to be built manually, our CI doesn't keep unstables that old.
@DwayneGoddem commented on GitHub (May 17, 2024):
Yesterday I shut down Jellyseerr, and last night, with about 6 users streaming both transcoded and Direct Play, on the local network and externally, there were NO issues at all.
So @SingingFrog7 was onto something.
So how much was changed in the API engine that could be the issue? Or does Jellyseerr just need an update on their end?
Does anyone else use other API-based connections?
@ravxen commented on GitHub (May 17, 2024):
I was about to shut down the whole arr suite + Jellyseerr (every app that uses the API) when Jellyfin started to lock up again.
The moment the apps were down, Jellyfin worked again.
I'll come back tomorrow and report whether there were issues again.
@gianklug commented on GitHub (May 17, 2024):
Using Jellyseerr and other *arr apps that use the API as well. I'll just shut Jellyseerr down for now and will update if that fixes it.
EDIT: Just watched an entire episode without any issues.
@DwayneGoddem commented on GitHub (May 17, 2024):
I have all the other *arrs running with no issues. Just Jellyseerr was stopped, and last night was the first stable night after upgrading to 10.9.1.
@vin86 commented on GitHub (May 17, 2024):
I can confirm that after disabling Jellyseerr, Jellyfin works without problems.
@DwayneGoddem commented on GitHub (May 17, 2024):
So I wonder whether Jellyfin has an API issue (I can see in the changelog there were API changes) or whether Jellyseerr needs to be updated?
We will need the devs to check, I guess.
So this rules out it being purely a transcode issue, I guess.
@BonzTM commented on GitHub (May 17, 2024):
Jellyfin 10.9.1 and Jellyseerr 1.8.1 -- no issues here since the release of 10.9.1
@vincejv commented on GitHub (May 17, 2024):
@DwayneGoddem Jellyseerr doesn't need fixing; it's a Jellyfin issue: https://github.com/jellyfin/jellyfin/pull/11670
@felix920506 commented on GitHub (May 17, 2024):
Fixed in #11670
@SingingFrog7 commented on GitHub (May 17, 2024):
I had re-enabled Jellyseerr after updating it and had no issues for another 24h, but then got another crash when a user needed transcoding. Disabling Jellyseerr again made it work, so I guess it's not just one or the other, but both combined, along with other parameters such as a specific virtualization setup (VMs seem fine while LXC has the issue).
I think Jellyseerr just adds another bunch of authentication queries which, along with transcoding (which needs to validate that the user is allowed to transcode), ends up being too much. I hope the rollback to using the cache with UserManager will fix it.
@DwayneGoddem commented on GitHub (May 17, 2024):
Yes, I guess we need to wait for the next release or try the next unstable build.
The current unstable is dated the 9th, so I'll just have to keep an eye out for it.
@felix920506 commented on GitHub (May 17, 2024):
Please test on the next unstable build (should be out next Monday) and open a new issue if it wasn't fixed
@memehammad commented on GitHub (May 17, 2024):
Anything stopping you guys from releasing this sooner? Jellyfin is unusable for a lot of people right now.
@nielsvanvelzen commented on GitHub (May 17, 2024):
10.9.2 will be released shortly
@arab21dwc commented on GitHub (May 17, 2024):
Hi @nielsvanvelzen, is it still going to be released today?
@nielsvanvelzen commented on GitHub (May 17, 2024):
Can't give exact release dates but it should be within 48 hours from now.
@ghost commented on GitHub (May 17, 2024):
I created a disk backup of my Debian system on 2023-12-08, so I decided to restore it to another disk and create deb installer files with dpkg-repack. I successfully created and restored the working system, unstable 10.9.0 (20230628) amd64. If you want to play with it, I uploaded it here: https://file.ath.cx/filebrowser/share/MNUkEiqD
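For anyone wanting to do the same, a minimal sketch; the package names are the usual ones from the Jellyfin deb repo, so verify yours with dpkg -l | grep jellyfin first:

# Rebuild installable .deb files from the currently installed packages (run as root)
apt install dpkg-repack
dpkg-repack jellyfin-server jellyfin-web jellyfin-ffmpeg5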
@arab21dwc commented on GitHub (May 17, 2024):
@gyilkoszabpehely The problem with this release is that it crashes the Android TV app.
@ghost commented on GitHub (May 17, 2024):
@arab21dwc I've been using this version for a very long time, and I tested it on all platforms...
Are you using hardware encoding? If yes, what kind of video card do you have?
New Debian kernel images are messing up the NVIDIA drivers...
If you have Debian 12, check the ffmpeg path in the playback settings: /usr/lib/jellyfin-ffmpeg/ffmpeg
And if the problem is with ffmpeg, you can use the new one: jellyfin-ffmpeg5
The funny thing with this release: if you want to revert back from 10.9.1 and have it work fine, you must turn off all plugins...
@ghost commented on GitHub (May 17, 2024):
I know that most people do not know how useful such a system is when a community or family uses it every day.
I hold a little grudge against the developers for publishing this release; maybe they didn't think about how many people's work could go to waste. I know that we need backups, but neither the family nor the community can cover the necessary costs for me.
For the record, I do this for free; it's only my expense.
I could write a novel about my last week, because they destroyed everything that drives the engine of the community for me.
Jellyfin kills Jellyfin.
Debian kills NVIDIA.
Not funny :(
@cvium commented on GitHub (May 17, 2024):
If you can't afford to backup your config and data (not media) directories, I don't think self-hosting is for you.
@ghost commented on GitHub (May 17, 2024):
@cvium If you had bothered to read back through the thread, my life would have been easier :)
@arab21dwc commented on GitHub (May 17, 2024):
@gyilkoszabpehely It must be my database that is crashing the Android application; as soon as I go to the unstable build, the Android app does not crash.
@ghost commented on GitHub (May 17, 2024):
@cvium I created a backup of everything except the media. But how can I go back to the previous Jellyfin version if I've been using 10.9.0 for almost a year and Jellyfin deletes everything that is working and compatible with this version?
@arab21dwc commented on GitHub (May 17, 2024):
FYI, I also found a way to disable transcoding completely and just use remuxing for all clients, for media and live TV (XUI One > xTeVe > Jellyfin), and remuxing does not cause high CPU usage.
@SingingFrog7 commented on GitHub (May 17, 2024):
@gyilkoszabpehely you can't go back to 10.8.x unfortunately, due to changes in the databases. You decided to use unstable releases; that comes with those kinds of risks. You can build the code yourself from a specific point in time when it worked for you.
In this specific case we are pretty lucky: the devs are really responsive and the problem is most likely already fixed (we're just waiting for the next release, but you could build the code yourself if you don't want to wait).
As with most projects like Jellyfin, people aren't doing this for a living; they maintain the project in their own time.
I don't want to be harsh or anything, but I feel like I'd share a simple "trick" I developed over the years:
@joshuaboniface commented on GitHub (May 17, 2024):
To be clear, we don't delete anything, so I'm not sure about what is meant there. We don't keep old unstable binary builds sure (ever increasing space requirements), but the code is all there in the Git repository and they can be built again from source if required. Upgrades apply database migrations for code changes however, so yes unless a backup is made before upgrading, upon the very first start of a new version, the on-disk databases will change and will not be compatible with any older version(s) - including older unstables potentially.
I'll echo what @SingingFrog7 said. To be exceedingly blunt, if you decide to jump right on a new major release that is 2+ years in the making on day one... you're going to have problems. Among the team, at least 15 of us were testing 10.9.0 for weeks before release, and while we noticed (and fixed) a lot of bugs, our collective use cases cannot possibly represent the literally tens of thousands of other use cases out there. Bugs will happen, things will break. This is why we constantly stress taking backups before every upgrade, why we ask the community to help test unstable releases (because then we're more likely to find the edge cases), and why we never recommend automatic updates for Jellyfin. If stability is important to your use case, it is always better to wait at least a week or two before upgrading major versions (note that point releases are much more strict and are usually safe - and recommended for security - to update immediately). We are all volunteers doing the best we can, and while we'd always like releases to be a seamless, bug-free experience, that's not always possible.
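For the Debian/Ubuntu package installs discussed in this thread, a minimal pre-upgrade backup sketch; the paths are the deb package defaults, so adjust for Docker or other layouts:

# Stop the server so the databases are quiescent, then archive config and data
systemctl stop jellyfin
tar -czf "jellyfin-backup-$(date +%F).tar.gz" /etc/jellyfin /var/lib/jellyfin
systemctl start jellyfin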
I think at this point, this thread has been exhausted of viable discussion. While I'm normally loath to do so, I'm going to limit this to contributors only now. 10.9.2, with what we expect is the final fix to this issue, will be released at some point in the next 24-48 hours. If the issue still occurs for anyone after that version, then please open a new issue.