Mirror of https://github.com/jellyfin/jellyfin.git (synced 2026-03-01 11:20:59 +03:00)
Provide own build of ffmpeg #97
Originally created by @JustAMan on GitHub (Dec 16, 2018).
Originally assigned to: @JustAMan, @joshuaboniface on GitHub.
We may want to consider providing our own ffmpeg bits. At least having an ffmpeg that is new enough and handles some hardware acceleration would be nice.
@joshuaboniface commented on GitHub (Dec 16, 2018):
We can build a custom binary .deb with a statically-linked version of ffmpeg, with everything we reasonably can turned on - it shouldn't be that hard to do and would work everywhere in theory.
@andrewrabert commented on GitHub (Dec 16, 2018):
Need to have the build process Dockerized as well. I'm wondering if we should have a separate repo for that? Doesn't make sense to always rebuild ffmpeg when building Jellyfin.
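For reference, a mostly-static build like the one described above usually starts from a configure invocation along these lines. This is a sketch only: the flag selection and install prefix are assumptions, though each flag exists in ffmpeg's ./configure.

```shell
# Sketch of a mostly-static ffmpeg build (flag selection is illustrative)
./configure \
    --prefix=/usr/lib/jellyfin-ffmpeg \
    --disable-shared --enable-static \
    --enable-gpl \
    --enable-libx264 --enable-libx265 \
    --enable-vaapi --enable-vdpau
make -j"$(nproc)"
```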
@JustAMan commented on GitHub (Dec 16, 2018):
It would be... painful to rebuild it every time. It builds for more than a dozen minutes on my i7 + SSD.
I was thinking about separate repo with irregular releases from which Jellyfin could pick up bits either at build time or at install time.
@JustAMan commented on GitHub (Dec 16, 2018):
@nvllsvm would it be fine if I make a repo with non-docker build script and ask you (or someone else) to Dockerize it? I'm not familiar with Docker at all...
And yes, I've read the GPL FAQ thoroughly just now... and it seems that ideally we need to have forks of all repos used to build ffmpeg (as the GPL states that source must be available as long as the binary release is available). Opinions?
@andrewrabert commented on GitHub (Dec 16, 2018):
@JustAMan that works. We only need to provide the source of the stuff we are compiling.
@joshuaboniface commented on GitHub (Dec 16, 2018):
I've forked FFMPEG from https://github.com/FFmpeg/FFmpeg to our org so we have it there. Will try building a static binary this evening.
Edit: I did remove this, since there isn't really a reason to fork it to our org.
@BobOkisama commented on GitHub (Dec 17, 2018):
This is awesome. I am currently trying to figure out how to get NVENC and NVDEC working in Ubuntu 16.04 with Jellyfin, so a static build would be great! Heck, I would even be fine with a simple bash script I can run that prompts me for which HW options I need, then downloads and compiles the thing for me and throws it in something like /opt/ffmpeg. At least that way people can customize what they need and you guys don't have to kill yourselves building.
@MatMaul commented on GitHub (Dec 17, 2018):
Here is a guide I found for compiling with some VAAPI and Intel OpenCL dependencies; it would be useful so we can do most of the transcoding on the GPU. I am trying to get some tone mapping working on the GPU for the 10-bit to 8-bit conversion.
https://gist.github.com/Brainiarc7/4f831867f8e55d35cbcb527e15f9f116
Regarding a full static build, I think it is a lot of work. Shouldn't we just provide a full version in Docker? It would allow us to install a lot of the deps using apt.
Otherwise, for the static build, I know VLC has a nice contrib folder with scripts to build ffmpeg with lots of dependencies; perhaps it can be used as a base.
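For illustration, the kind of GPU-side pipeline being discussed looks roughly like this with a VAAPI-enabled ffmpeg. The input/output names, render device path, resolution, and bitrate are assumptions; the options themselves are standard ffmpeg VAAPI options.

```shell
# Decode, scale, and encode entirely on the GPU via VAAPI (illustrative)
ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
       -hwaccel_output_format vaapi \
       -i input.mkv \
       -vf 'scale_vaapi=w=1280:h=720' \
       -c:v h264_vaapi -b:v 4M \
       -c:a copy output.mkv
```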
@BobOkisama commented on GitHub (Dec 17, 2018):
I have a P2000 so would want to compile the NVDEC and NVENC for mine.
@MatMaul commented on GitHub (Dec 17, 2018):
@BobOkisama the script you want is a LOT of work...
If we can make it work fine with HW transcoding inside a container, it is a lot simpler for developers to manage than a static build, and you as a user can just use Docker.
@BobOkisama commented on GitHub (Dec 17, 2018):
I am 100% OK with doing it via Docker, as I am using Docker for mine now. I was just throwing ideas at the wall to see if anything stuck.
@JustAMan commented on GitHub (Dec 17, 2018):
Making a fully static build is kind of hard, but making a "relocatable" set of binaries which one can extract anywhere and run is not that hard; I have a proof of concept for that.
@MatMaul commented on GitHub (Dec 17, 2018):
You mean taking and relocating the .so files from the binary packages of the distribution, or building them from source? I am mostly worried about the maintenance of a huge script which builds most of the deps from source.
@JustAMan commented on GitHub (Dec 17, 2018):
I mean building ffmpeg + codecs from source. It's not as hard as it sounds - as I said I already have a proof of concept for that. Just need some more poking around...
@SpootDev commented on GitHub (Dec 18, 2018):
I'll link my ffmpeg Docker image, based on Alpine Linux for size. It doesn't have all the hw accel in it yet, though.
https://github.com/MediaKraken/MediaKraken_Deployment/tree/master/docker/alpine/ComposeMediaKrakenBaseFFMPEG
@SpootDev commented on GitHub (Dec 18, 2018):
my dev branch has vdpau now....working on the others.
@JustAMan commented on GitHub (Dec 18, 2018):
Isn't vdpau obsolete by now? I thought nvdec/nvenc replaced it.
@SpootDev commented on GitHub (Dec 18, 2018):
Other cards can use it...and nvidia still has it listed for 1080 and such. I'll have to play around with it more. Not really sure.
@BobOkisama commented on GitHub (Dec 19, 2018):
Does anyone have a working ffmpeg that supports NVDEC/NVENC yet? My P2000 is hungry for this! I switched my Docker container to jellyfin/jellyfin:latest last night; no bueno yet. Also, I would be happy to build it if anyone has some legit instructions on how to do so.
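For context, with an NVENC/NVDEC-capable build, the invocation being asked about would look roughly like this on a recent ffmpeg. File names and bitrate are assumptions, and it requires the NVIDIA drivers plus an ffmpeg built with NVENC support.

```shell
# Hardware decode + encode on an NVIDIA GPU (illustrative sketch)
ffmpeg -hwaccel cuda -hwaccel_output_format cuda \
       -i input.mkv \
       -c:v h264_nvenc -b:v 6M \
       -c:a copy output.mkv
```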
@anthonylavado commented on GitHub (Dec 21, 2018):
Just referencing, from the old GPL violation issue nearly a year ago, the build flags that the original upstream uses:
Not sure if this is still relevant, but thought it was worth mentioning.
@gsnerf commented on GitHub (Dec 27, 2018):
While I see a benefit in providing "own" ffmpeg binaries for some cases (like docker and windows, etc.): wouldn't it be better to just re-use the regular linux packages provided by the different distros?
@JustAMan commented on GitHub (Dec 27, 2018):
This is how it is done by default.
The problem with using the system ffmpeg is that you're in dependency hell: something works one way in an older version and another way in a newer version. But I don't think we'll be force-bundling our own ffmpeg version; it's just another option.
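One way to see where a given system stands is to probe the installed ffmpeg before deciding whether a bundled build is needed. This is a hypothetical check, not part of Jellyfin:

```shell
# Report the system ffmpeg version, or note its absence
if command -v ffmpeg >/dev/null 2>&1; then
    ffmpeg_info=$(ffmpeg -version 2>/dev/null | head -n 1)
else
    ffmpeg_info="ffmpeg: not installed"
fi
echo "$ffmpeg_info"
```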
@EraYaN commented on GitHub (Dec 28, 2018):
For Windows it's almost a requirement to bundle ffmpeg. Most likely the binary releases from https://ffmpeg.zeranoe.com/builds/ will do; you can make the installer download the archive. Building on Windows could be done with the zeranoe scripts (not the nicest, IMO), or with jb-alvarado/media-autobuild_suite, which works pretty much "set and forget". And then the server could be built using regular Visual Studio. (Well, MSBuild really, and then even the installer with something like WiX.)
The most important thing will be to give people the option to override.
I have been toying with my own branch of Emby for a while to see if, instead of using the command line, I could use the libraries directly to do transcoding in-process and get much lower latency.
@joshuaboniface commented on GitHub (Dec 28, 2018):
@EraYaN How are you achieving that command-line hackery? That might be much better in the long run, letting users set their own FFMPEG flags via the UI if they want to. It's something I've wanted for a while too!
@JustAMan commented on GitHub (Dec 28, 2018):
@logicalphallacy has already more-or-less enabled Windows builds (including bundling zeranoe ffmpeg). I suggest you have a look and probably make some suggestions or improvements as you seem to have related experience.
@EraYaN commented on GitHub (Dec 29, 2018):
@joshuaboniface Well, the cmd hackery was just editing the server source (one of the encoder classes, if my memory serves me right).
The problem with random user-specified flags is that the order matters. ffmpeg is pretty picky, or more precisely, the *_qsv parts are. But on Windows at least, if you remove the ffmpeg bundled with the server, it looks in your PATH (and that works, but a direct setting would be better). The trouble is that the whole server is made with the assumption that the only transcoding output can be h264, which makes dealing with stuff like this harder.
That is partially why I wanted to try to make a proper binding to ffmpeg as a library, but my video encoder/decoder-fu wasn't good enough at the time.
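To make the ordering point concrete: ffmpeg treats options placed before -i as applying to the input (decode side) and options after it as applying to the output (encode side), so a QSV pipeline has to be written in that order. This is an illustrative sketch, not the server's actual command line; file names and quality value are assumptions.

```shell
# Input-side options (hw decoder selection) must come before -i;
# output-side options (hw encoder and its settings) come after it.
ffmpeg -hwaccel qsv -c:v h264_qsv -i input.mkv \
       -c:v h264_qsv -global_quality 25 output.mkv
```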
@MatMaul commented on GitHub (Jan 10, 2019):
@EraYaN I had a look at the ffmpeg API after your comment; good luck with that :) It is a lot more low-level than the ffmpeg CLI, which is already not simple with the argument order, the filters pipeline, and hw acceleration.
You need to handle everything manually: read the file, pass packets to libavformat, then pass the stream packets to libavcodec, work with picture buffers, convert them, filter them, timestamp hell, etc. I stopped after seeing the example:
https://ffmpeg.org/doxygen/trunk/doc_2examples_2decoding_encoding_8c-example.html
They don't seem to have a higher abstraction that would mimic the ffmpeg CLI.
I haven't looked for other libs with a higher abstraction, however. libvlc for sure, but it's a big beast.
It's doable, but it is a huge amount of work for almost zero gain.
@BobOkisama commented on GitHub (Jan 10, 2019):
I would not say zero gain. A $250-300 Quadro P2000 can handle about 7 or so 4K transcodes at once without artifacts, at nearly the same quality as software. With the "patched" drivers for GTX cards, a 1080 Ti is like a $6k P6000 and can probably handle something crazy like 20 4K transcodes. Being able to put a GPU in a lower-end system and get the same overall performance (or better) with a massive number of streams, at a fraction of the power and probably 1/10th the cost of a higher-end server... well, that is some serious gain.
I see threads all the time on Reddit of "cheap" server builds that come in around $1200 or so and can handle 3(ish) 4K streams, when someone could buy a $200 Dell Optiplex plus a $250-300 P2000 (or, with patched drivers, a $100 1050/1060 Ti) and get nearly double that performance for half the cost. IMO, the gain is pretty large.
@MatMaul commented on GitHub (Jan 10, 2019):
That is completely unrelated to whether you use the ffmpeg CLI or the API instead, no? Or did I miss something?
@JustAMan commented on GitHub (Jan 11, 2019):
BTW, if anyone is interested I'm slowly chipping at it here: https://github.com/JustAMan/ffmpeg-standalone-build
@EraYaN commented on GitHub (Jan 11, 2019):
Are you doing Linux only, or Windows too? For Windows there are existing projects that can handle just about every combination out there. You might also need multi-depth x264 and x265 so you can support HDR and non-HDR transcoding.
@JustAMan commented on GitHub (Jan 11, 2019):
Linux for now, as that's what I'm interested in. What about multi-depth? I think I already set the flags, did I not?
@EraYaN commented on GitHub (Jan 11, 2019):
Mmm, for x264 yes; not sure about x265. You might need to statically link one and then dynamically link the 12-bit version, but that might have changed with changes upstream. x265 multi-depth used to mean three different binaries.
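For background on what multi-depth involves: x265's 10- and 12-bit encoders are compiled as separate static libraries, without their own C API, and then linked into the main 8-bit build. A sketch of one of those higher-depth passes, using cmake options from x265's own build system (the relative source path is an assumption):

```shell
# Build the 12-bit x265 core as a static library without its own C API,
# so it can be linked into the 8-bit build (illustrative)
cmake ../../source \
      -DHIGH_BIT_DEPTH=ON -DMAIN12=ON \
      -DEXPORT_C_API=OFF -DENABLE_SHARED=OFF -DENABLE_CLI=OFF
make -j"$(nproc)"
```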
@BobOkisama commented on GitHub (Jan 11, 2019):
Waiting eagerly on this. I'll be happy to break stuff on my side once you need some testers.
@pb1051 commented on GitHub (Jan 31, 2019):
Running Jellyfin in a container on Ubuntu. This would be great.
@hawken93 commented on GitHub (Feb 1, 2019):
I had a go at a custom-compiled ffmpeg with a 660 Ti in a KVM with GPU passthrough, and I was disappointed that I couldn't seem to get it to work in a headless fashion. They always have those dummy plugs on eBay, though. I'll probably test some more.
@joshuaboniface commented on GitHub (Mar 13, 2019):
A Debian build of ffmpeg is now provided by the jellyfin-ffmpeg package (https://github.com/jellyfin/jellyfin-ffmpeg). Will still need this for other platforms though.
@BobOkisama commented on GitHub (Mar 13, 2019):
Just want to make sure, this includes NVENC and NVDEC right?
@joshuaboniface commented on GitHub (Mar 13, 2019):
Right now the Debian one does not - trying to get the library to compile properly and consistently is difficult. It does support VAAPI and VDPAU however.
@joshuaboniface commented on GitHub (May 16, 2019):
Just to provide an update for posterity: recent versions of the jellyfin-ffmpeg Debian/Ubuntu packages do now support NVENC/NVDEC.
@joshuaboniface commented on GitHub (May 25, 2019):
Someone mentioned this for Arch the other day, but since the AUR repo isn't managed by us directly, I'm not sure of the status of that.
Beyond that, the other main platform is Fedora/CentOS, which I think is still OK on this front with 4.0 packaged, but that may change, of course. Windows can continue to use the official ffmpeg binaries as they do now.
@viggy96 commented on GitHub (May 26, 2019):
Is the Docker version of Jellyfin using the special jellyfin-ffmpeg?
@anthonylavado commented on GitHub (Jul 4, 2019):
Do we still want to hang on to this issue?
Here's what we've covered:
1. Currently using the jellyfin-ffmpeg repository.
2. Otherwise, using an available FFmpeg 4.0.x for that operating system/distribution.
3. Future server releases have plans to bundle this in.
@JustAMan commented on GitHub (Oct 8, 2019):
I think we currently don't have to provide our own build, because we're mostly fine with the upstream bits.
For anyone on x86_64 Linux who's in desperate need of an ffmpeg with quicksync/vaapi/nvenc hardware acceleration that isn't provided by any of the means listed by @anthonylavado, you're welcome to try my project: https://github.com/JustAMan/ffmpeg-standalone-build