• 0 Posts
  • 70 Comments
Joined 1 year ago
Cake day: July 9th, 2023

  • I think there are more people who are both #1 and #2 at the same time

    That's probably where some of the attitude comes from. People are assuming it's paid IT people bringing their work home with them, which is a different case than a casual user trying out self-hosting without the broader background.

    That said, I haven't seen this attitude myself, so I suspect it's not that common and is probably just a handful of users jumping to conclusions.







  • Mainly because running multiple desktop machines adds up to a lot of power, even at idle (some rough numbers below). If you power them off and on as needed it's better, but then it's not as convenient. Of course, a single machine with multiple GPUs left on 24/7 will also eat a lot of power, but at least less than multiple machines running 24/7.

    And the physical space taken up by multiple desktop machines starts to add up significantly, particularly if you live in an apartment or smaller house.
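
    For a rough sense of scale (the idle wattage here is an assumption; it varies a lot by machine):

    3 machines × ~50 W idle ≈ 150 W continuous
    0.15 kW × 730 h/month ≈ 110 kWh/month

    That's a noticeable addition to a typical household's monthly usage.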



  • I’ve recently tried to do that using Sunshine and different Linux gaming distros and it was awful, the VM was working great for a few minutes and then suddenly crashes and I have to hard stop it.

    Are you running this with something like libvirtd/qemu? If so, VFIO configurations can get pretty complex. Random crashes sound like MSI interrupt issues (or you’ve allocated too much RAM to the guest). Or it could be GPU reset issues that would also occur on a (Linux) host; a newer kernel and Mesa version in the guest may help.

    Setting this on the host's kernel command line works around MSR-related crashes (a GRUB sketch is at the end of this comment):

    kvm.ignore_msrs=1

    If you’re running on a Windows host or with something like VirtualBox (assuming GPU passthrough is even supported there), YMMV, but I wouldn’t expect good results.
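
    As a concrete sketch of that workaround on a GRUB-based host (the file path and the existing flags shown are assumptions, adjust for your distro):

    # /etc/default/grub on the host: append the parameter to the default command line
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash kvm.ignore_msrs=1"

    # then regenerate the GRUB config and reboot
    sudo grub-mkconfig -o /boot/grub/grub.cfg   # or: sudo update-grub (Debian/Ubuntu)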







  • vividspecter@lemm.ee to Selfhosted@lemmy.world · Hosting private UHD video (edited 5 months ago)

    I suspect the delay would still be longer than with a YouTube-like implementation, which may need to switch transcodes multiple times, but that’s probably unrealistic at this point anyway.

    Transcoding everything to AV1 could be a solution too, since high resolutions can look quite good at low bitrates, so you could cap it at 5 or 10 Mbps for any resolution and be done with it. But I’m not sure Jellyfin supports that, and at least from the UI it doesn’t give you particularly fine-grained control over resolutions/bitrates. Perhaps keeping a secondary library of just AV1 transcodes that you handle manually (perhaps even using a software encoder) could be an option for some (a rough ffmpeg sketch is at the end of this comment).

    The client side is also an issue, with not that many devices supporting AV1 hardware decoding (although I’ve found software decoding is fast enough on most modern smartphones at least).
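
    As a rough sketch of that manual approach (filenames, CRF, and the bitrate cap are assumptions, not recommendations), a constrained-quality software encode with libaom-av1 in ffmpeg looks something like:

    # software AV1 encode capped around 10 Mbps (constrained quality: both -crf and -b:v set);
    # keeps all streams and copies audio and subtitles untouched
    ffmpeg -i input.mkv -map 0 \
      -c:v libaom-av1 -crf 30 -b:v 10M -cpu-used 4 \
      -c:a copy -c:s copy output-av1.mkv

    If your ffmpeg build has SVT-AV1 (libsvtav1), it's much faster for a whole library, though the rate-control flags differ slightly.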