• 1 Post
• 42 Comments
• Joined 1 year ago
• Cake day: July 6th, 2023





  • jet@hackertalks.com to homelab@lemmy.ml · Network setup help · 5 days ago

    Depending on your requirements, you can pick up used gear quite cheaply; set alerts on Craigslist/Marketplace/Kijiji. For example, a used access point for around $30, plus a self-hosted network controller container to configure it.

    If you want single-pane-of-glass management for the whole network, it's going to be spendy no matter which ecosystem you go with.



  • jet@hackertalks.com to homelab@lemmy.ml · Network setup help · edited · 5 days ago

    True, but you can use your gateway to cut off Google Wifi from Google and still use the radios. No need to buy new hardware.

    Heck, you can put OpenWrt on some Google Wifi models: https://openwrt.org/toh/google/wifi

    My advice stays the same: work with what you have first, save your budget, then SLOWLY, after doing research, buy one thing and fit it in.

    Your advice is good if you just want the fastest way to de-Google yourself, but I think the OP wants to run a homelab, and learn, and understand.


  • jet@hackertalks.com to homelab@lemmy.ml · Network setup help · 5 days ago

    Do one thing at a time; don't buy equipment unless you have an actionable use case for it.

    • ISP CPE in bridge mode

    • One of the boxes can be your gateway

    • You can keep using the Google Wifi

    You can play around with Proxmox, Xen, etc. to run a bunch of containers or virtual machines that do different things on your network. I think you can do it all with your current hardware.





  • Okay. Do you want to debug your situation?

    What’s the operating system of the host? What’s the hardware in the host?

    What’s the operating system in the client? What’s the hardware in the client?

    What does the network look like between the two? Including every piece of cable, and switch?

    Do you get a good enough experience if you stream just a single monitor instead of multiple monitors?


  • Remember, the original poster here was talking about running their own self-hosted GPU VM, so they're not paying anybody else for the privilege of using their hardware.

    I personally stream with Moonlight on my own network. I have no issues; from my perspective it's just like being at the computer.

    If it doesn’t work for you, fair enough, but it can work for other people, and I think the original poster’s idea makes sense. They should absolutely run a GPU VM cluster, have fun with it, and it would be totally usable.


  • Fair enough. If you know it doesn’t work for your use case that’s fine.

    As demonstrated elsewhere in this discussion, GPU HEVC encoding adds only about 10 ms of extra latency, and the stream can then transit fiber-optic networking at very low latency.

    Many GPUs have HEVC decoders on board, including cell phones. Most newer Intel and AMD CPUs have an HEVC decode pipeline as well.
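    The latency figures above can be sketched as a rough glass-to-glass budget. This is an illustrative back-of-the-envelope only: the 10 ms encode figure comes from the discussion, while the decode time and fiber signal speed are assumptions.

    ```python
    # Rough glass-to-glass latency budget for self-hosted GPU streaming.
    # ENCODE_MS comes from the discussion above; DECODE_MS and the fiber
    # signal speed are illustrative assumptions, not measurements.
    ENCODE_MS = 10.0             # hardware HEVC encode on the host GPU
    DECODE_MS = 2.0              # assumed hardware HEVC decode on the client
    FIBER_SPEED_M_PER_S = 2.0e8  # light in fiber, roughly 2/3 of c

    def one_way_propagation_ms(distance_m: float) -> float:
        """One-way propagation delay over optical fiber, in milliseconds."""
        return distance_m / FIBER_SPEED_M_PER_S * 1000.0

    def glass_to_glass_ms(distance_m: float) -> float:
        """Encode + network propagation + decode, in milliseconds."""
        return ENCODE_MS + one_way_propagation_ms(distance_m) + DECODE_MS

    print(glass_to_glass_ms(1.0))       # same room: ~12 ms
    print(glass_to_glass_ms(30_000.0))  # 30 km of fiber: ~12.15 ms
    ```

    The point of the sketch: the encode/decode steps dominate, and the network leg is a rounding error at homelab distances.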

    I don’t think anybody’s saying a self-hosted GPU VM is for everybody, but it does make sense for a lot of use cases, and that’s where I think our schism is coming from.


    As for the $2,000 fiber transducer… it’s doing the same exact thing, just with more specialized equipment and maybe a little lower latency.




  • jet@hackertalks.com to Selfhosted@lemmy.world · Fully Virtualized Gaming Server? · edited · 16 days ago

    Yes, for some definition of ‘low latency’.

    GeForce Now, Shadow.tech, and Luna all demonstrate this is done at scale every day.

    Host the VM yourself in your own datacenter and you can knock 10–30 ms off the latency.

    However you define low latency, there is a way to approach it iteratively at different costs. As technology marches on, more and more use cases are going to be ‘good enough’ for virtualization.

    Quite frankly, if you have an all-optical network, being 1 m away or 30 km away doesn’t matter.
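    A quick back-of-the-envelope check of that claim, assuming a signal speed in fiber of roughly 2×10⁸ m/s (about two-thirds the speed of light in vacuum):

    ```python
    # One-way propagation delay through optical fiber.
    # The 2.0e8 m/s signal speed is an assumed ballpark figure.
    def fiber_delay_ms(distance_m: float) -> float:
        return distance_m / 2.0e8 * 1000.0

    extra = fiber_delay_ms(30_000.0) - fiber_delay_ms(1.0)
    print(extra)  # going from 1 m to 30 km adds only ~0.15 ms
    ```

    Compared to a typical 16.7 ms frame time at 60 fps, 0.15 ms of propagation delay is negligible, which is why distance on an optical network barely registers.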

    Just so we are clear, local isn’t always the clear winner: there are limits on how much power, cooling, noise, storage, and size people find acceptable in their work environment. So every application weighs some tradeoff function of all-local versus distributed.