I noticed my home server's SSD was running out of space, and it turned out to be my Jellyfin Docker container, which wasn't correctly clearing the transcode directory at /var/lib/jellyfin/transcodes.
I simply created a new directory on my media hard drive and bind mounted the above-mentioned directory to it. Now Jellyfin has over 1 TB of free space to theoretically clutter. To prevent this, I created a cronjob to delete old files in case Jellyfin doesn't.
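For anyone wanting to replicate this, a bind mount of that kind might look like the following sketch (both paths are placeholders for your own setup, not the ones from my server):

```shell
# One-off bind mount (run as root); the media-drive path is an example
mount --bind /mnt/media/jellyfin-transcodes /var/lib/jellyfin/transcodes

# Equivalent /etc/fstab entry to make the bind mount persist across reboots
/mnt/media/jellyfin-transcodes  /var/lib/jellyfin/transcodes  none  bind  0  0
```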
/usr/bin/find /path/to/transcodes -mtime +1 -delete
Easy!
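As a sketch, the matching crontab entry could look like this (added via `crontab -e` as root; the 04:00 schedule and the `-type f` restriction are my assumptions, not part of the original command):

```shell
# Example crontab line: every day at 04:00, delete transcode files
# older than one day (-type f limits the cleanup to regular files)
0 4 * * * /usr/bin/find /path/to/transcodes -type f -mtime +1 -delete
```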
Why not write to RAM instead?
How much RAM would this consume?
Every transcode can need as much disk space as the file you're playing. If a media file is bigger than your available RAM, the transcode will probably cause problems because you'll run out of RAM.
I transcode to ramdisk.
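For context, transcoding to a ramdisk usually means mounting a tmpfs over the transcode directory and pointing Jellyfin at it. A minimal sketch, where the 4 GiB size cap and the path are example assumptions:

```shell
# Mount a RAM-backed tmpfs (capped at 4 GiB here) over the transcode
# directory; anything written there lives in RAM and is lost on reboot
mount -t tmpfs -o size=4G tmpfs /var/lib/jellyfin/transcodes

# Persistent variant as an /etc/fstab entry
tmpfs  /var/lib/jellyfin/transcodes  tmpfs  size=4G,mode=0755  0  0
```

The size cap matters: without it, a runaway transcode can eat RAM until the OOM killer steps in, which is exactly the problem described above.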
I have about a dozen people using my Jellyfin, and sometimes 3-4 of them watch something at the same time, which produces a lot of transcoding data. At the moment my transcoding directory (which is cleaned every 24 hours) is almost 8 GB in size. I don't have the RAM for that.
Starting with 10.9 you can enable segment deletion, so segments are cleaned up while the transcode is still running.
Version 10.9 is not even released, right?
Personally I have a secondary external SSD I use for my cache and transcode directories so that my transcodes aren’t throttled by being read from and written to the same disk.
Also of note: Jellyfin has a built-in scheduled task to clear the transcodes directory. You can find it under Dashboard -> Scheduled Tasks -> Clean Transcode Directory. I have mine set to run every 24 hours.
Yeah, but that cleanup job doesn't seem to work reliably. I only noticed when my home server ran out of disk space and the transcoding directory had grown to over 30 GB.