At least in the case of a Jellyfin server, you can download media locally when you know you’ll be without internet
Zetsubou Sensei has the highest throw-away blink-and-you-miss-it gag rate of any anime I’ve seen. You need to pause every time the chalkboard is visible to get all of the jokes
Mushishi and Kino’s Journey
If you switch the devices line to
- /dev/dri:/dev/dri
as others have suggested, that should expose the Intel iGPU to your Jellyfin Docker container. Presently you’re only exposing the Nvidia GPU.
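For context, here’s a minimal sketch of how that line fits into a compose file (the service name and image are assumptions, only the `devices` entry matters):

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin   # assumed image; use whatever you already run
    devices:
      - /dev/dri:/dev/dri      # passes the Intel iGPU (QSV/VAAPI) through to the container
```

You’d still need to pick Intel QSV (or VAAPI) under Dashboard → Playback in Jellyfin for it to actually get used.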
QSV is the highest quality video transcoding hardware acceleration out there. It’s worth using if you have a modern Intel CPU (8th gen or newer)
Jellyfin doesn’t need any particular setup to work directly from LAN because it doesn’t ever try to use a central login provider the way Plex does.
The only reason OP is struggling with it is because they set it up so that they can only connect to it via Tailscale.
My 70-something father-in-law is a no-life hikikomori who just watches anime all the time, now that he’s retired
Right, I just mean if your connection speed is faster than your server can transcode, then the transcode speed will be the bottleneck
It’s limited to the transcode speed, but it’s important to keep in mind that transcoding, especially down to a lower resolution, will usually run faster than realtime.
FYI Jellyflix also supports that
This. Jellyfin has a direct HDHR integration and works as a DVR directly with one.
The person you’re replying to linked their literal reliability stats lmao
So first of all, you shouldn’t involve yourself in your friend’s business. Fraud is generally frowned upon.
But secondly, you know that ChatGPT was trained on the entire internet, right? Like, every book. I don’t think “more books” is gonna help.
I hope you take your computer skills and make something of yourself. Try not to get any more involved in this scheme, seriously. You don’t need this crap marring your reputation.
Besides, there are better reasons/ways to fight the system than helping other people avoid learning.
Just that they’re no easier to use to fool an anti-AI system than using ChatGPT, Gemini, Bing, or Claude. Those AI detectors also give false positives on works made by humans. They’re unreliable in the first place.
Basically, they’re “boring text detectors” more than anything else.
I believe commercial LLMs have some kind of watermark when you apply AI for grammar and fixing in general, so I just need an AI to make these works undetectable with a private LLM.
That’s not how it works, sorry.
Quantized with more parameters is generally better than floating point with fewer parameters. If you can squeeze a 14b-parameter model down to a 4-bit int quantization, it’ll still generally outperform a 16-bit floating-point 7b-parameter equivalent.
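The memory math works out in the quantized model’s favor too. A rough back-of-envelope (weights only, ignoring KV cache and runtime overhead):

```python
# Approximate weight storage for a model, counting weights only.
# KV cache, activations, and runtime overhead are ignored.
def model_size_gb(params_billion: float, bits_per_weight: int) -> float:
    """Return approximate weight storage in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

q4_14b = model_size_gb(14, 4)    # 14b model at 4-bit quantization
fp16_7b = model_size_gb(7, 16)   # 7b model at 16-bit floating point

print(f"14b @ 4-bit: {q4_14b:.1f} GB")   # 7.0 GB
print(f"7b  @ fp16:  {fp16_7b:.1f} GB")  # 14.0 GB
```

So the bigger quantized model actually takes half the memory of the smaller fp16 one, while usually scoring better on benchmarks.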
I would also add Welcome To The NHK to the list of SOL with anxious characters. That show is amazing.
IMO, if you want the best deals right now on a 12+ TB HDD, you should use serverpartdeals.com instead. I got 2 manufacturer-recertified 14 TB enterprise-grade drives from them, and it was way cheaper than buying any 14 TB external drive.
They’re still mounted individually, so you can do RAID5 or RAID-Z on them, same as if they were internal. You can potentially be bandwidth-limited since USB 3.0 has a 5 Gbps speed limit, but realistically only for reads. You’re still fine in terms of overall performance, since they’re all spinning disks anyhow and 5 Gbps is fast enough for any media server/NAS unless you’ve got a 10-gig LAN/internet connection and feel the compulsive need to saturate it.
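To put some rough numbers on that claim (the per-disk and per-stream figures here are ballpark assumptions, not measurements):

```python
# Sanity check: USB 3.0 bandwidth vs. what a home media server actually needs.
# All figures below are ballpark assumptions for illustration.
usb3_MBps = 5 * 1000 / 8          # 5 Gbps link -> 625 MB/s raw, before protocol overhead
hdd_MBps = 250                    # optimistic sequential throughput for one spinning disk
gigabit_lan_MBps = 1 * 1000 / 8   # 125 MB/s, a typical home LAN ceiling
remux_4k_MBps = 100 / 8           # a heavy ~100 Mbps 4K remux stream -> 12.5 MB/s

print(f"USB 3.0 raw:    {usb3_MBps:.0f} MB/s")
print(f"Gigabit LAN:    {gigabit_lan_MBps:.0f} MB/s")
print(f"Heavy 4K remux: {remux_4k_MBps:.1f} MB/s")
```

Even several simultaneous 4K remux streams sit far below both the gigabit LAN ceiling and the USB 3.0 limit, so the spinning disks themselves are the usual bottleneck, not the USB bus.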
Generally the app is better. It’s compatible with more container formats and audio formats (surround sound, Dolby Digital, etc.), and has hardware-accelerated decoding for H.265 video in addition to H.264.