Some IT guy, IDK.

  • 1 Post
  • 35 Comments
Joined 1 year ago
Cake day: June 5th, 2023


  • Yep, I’m sure they do.

    Realistically, does any average consumer know what’s on which circuit?

    Spanning the split phase will screw you up; going across breakers won’t be fun, but it shouldn’t pose any serious problems as long as both adapters are on the same side of the split phase.

    I’m pretty sure they say this because actually explaining what will and won’t work either requires significant prior knowledge of power systems, or a couple of paragraphs of explanation before you can get even a rough picture of what they’re driving at.

    Everyone I know who has used powerline just plugs it in and sees if it works. Those who were lucky say it’s great and works without issue; those who weren’t say the opposite.

    I’m just over here watching the fireworks, eating popcorn.


    I’ve been doing IT work for more than a decade, and I was a nerd/“computer guy” well before that. I’ve focused on networking for the past 15-20 years. You learn a few things.

    I try to be humble and learn what I can where I can; I know I definitely don’t know everything about it, and at the same time I try to be generous and share what I’ve learned when I can.

    So if you have questions, just ask. I either already know, or I can at least point you in the right direction.


  • It definitely sounds like you have some challenges ahead. I personally prefer MoCA over wireless, simply because you can control which devices are part of the network and reduce the overall interference from external sources and connections.

    With WiFi being half duplex, only one station can transmit at a time (with some caveats). Whether that station is part of your network or simply operating on the same frequency/channel doesn’t matter, so in high-density environments you can kind of get screwed by your neighbors.

    MoCA is also half duplex (at least it was the last time I checked), so a 2.5G MoCA link feeding a 1GbE connection on the ethernet side should provide a similar or identical experience to pure ethernet (1G full duplex)… The “extra” bandwidth on the MoCA side allows each station to send and receive at approximately 1Gbps without stepping on each other enough to degrade performance.

    However, it really depends on your situation to say what should or shouldn’t be set up. I don’t know your bandwidth requirements, so I can’t really say. The nice thing about ethernet on switched networks (which is what you’ll be using for gigabit) is that it naturally defaults to the shortest path, unless you’re doing something foolish with it (like intentionally messing with STP to push traffic in a particular direction). The catch is that ethernet doesn’t really scale beyond a few thousand nodes. That’s not an issue for even a fairly large LAN, but it’s the reason we don’t use it for internet (WAN-side) traffic routing. But now I’m off topic.

    Given the naturally shortest-path behavior of ethernet, if you have a switch in your office and you only really use your NAS from your office PC, you’ll have a full-speed experience. If nothing else needs high-speed access to the NAS, you’ll be fine.

    Apart from the NAS or any other LAN resources, the network should be sufficient to fully saturate your internet connection, so your average WiFi speed should target something faster than your internet link (again, half duplex factors in here). I don’t know your internet speed, so I won’t guess at numbers, but I personally aim for double my internet speed as the ceiling for WiFi throughput wherever I can. The closer you can get to doubling your internet speed, the better; anything more than that will likely be wasted.
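
    Just to put rough numbers behind that rule of thumb, here’s a minimal sketch. The 55% airtime-efficiency figure and the 500 Mbps plan speed are assumptions for illustration only; real WiFi efficiency swings a lot with channel width, interference and client mix.

    ```python
    # Rough sketch of the "aim for ~2x your internet speed" rule of thumb.
    # The efficiency factor and plan speed below are assumptions, not measurements.

    def usable_wifi_throughput(link_rate_mbps, efficiency=0.55):
        # WiFi is half duplex and carries protocol overhead, so usable one-way
        # throughput sits well below the advertised link (PHY) rate.
        return link_rate_mbps * efficiency

    internet_mbps = 500  # hypothetical internet plan
    for link_rate in (600, 1000, 1200):
        usable = usable_wifi_throughput(link_rate)
        verdict = "saturates the plan" if usable >= internet_mbps else "bottlenecks it"
        print(f"{link_rate} Mbps link -> ~{usable:.0f} Mbps usable: {verdict}")
    ```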

    There’s a ton to say about WiFi and performance optimization, but I’ll leave it alone unless you ask about it further.

    Good luck.


  • MystikIncarnate@lemmy.ca to Selfhosted@lemmy.world · Networking Dilemma · 8 days ago

    It can be faster; it really depends on whether you have a clear-ish channel for the mesh, which is why I would recommend something on the higher end, ideally with a dedicated radio for the mesh backhaul, so it can sit on a different channel with (hopefully) less interference.

    If the mesh radio is shared with client access, or if it’s on a busy channel, it may be much, much slower than some options.



  • Depending on where you live and what your power circuits look like (not the outlets, the circuits that power them), you may have a great, or very poor experience.

    I’d need to know what country you live in to say more, since power wiring standards vary from country to country. In the USA and Canada (I’m in Canada, and the USA is the same), we use split phase, and crossing the split phase will severely hinder powerline performance.

    It’s a viable option, just not my favorite; I’d recommend MoCA (coax) over powerline, but it’s ultimately up to you.


  • IMO, powerline is going to depend on a lot of factors, including what kind of power you use, which varies from country to country. Where I am in North America, we use 240V split phase and the powerline adapters are 120V (one leg of the split), so if one unit ends up on one side of the phase and the other ends up on the opposite side, you’re going to have a bad time, if it links at all… Knowing which “side” of the split phase each adapter is on becomes critical, and that’s not something most people know about their power situation. As a result, it’s basically a crapshoot whether it will work well or not.


  • I have three suggestions for you.

    Easy mode: find a triple-radio mesh WiFi system and get at least two nodes. Generally the LAN jack on the satellite nodes will bridge to the LAN over WiFi; just add a switch and use it normally. This will hurt your overall speeds when connecting to the NAS from wired LAN systems that aren’t on the same switch; I’m not sure if that matters to you. As long as your internet speed is less than half of your WiFi speed, you shouldn’t really notice a difference.

    Medium mode: buy MoCA adapters and use the coax. Just be sure to get relatively new ones; they’re generally all 1G minimum, but usually half duplex, so there’s still some sacrifice there, but MoCA is generally better than WiFi. The pinch is making sure you stop the MoCA signal from leaving your premises: you don’t want to tap into someone else’s MoCA network, nor have them tap into yours. There are cable filters that will accomplish this, or you can air gap the coax. I’m not sure how much control you have over the ingress/egress of your coax lines. You can yolo it and just hope for the best, but I can’t recommend that.

    Hard mode: do ethernet anyway. Usually in rentals, nobody can complain about holes in the walls the size you’d get from nails used to hang pictures, and a cup hook isn’t much larger than a picture-hanging nail. What I did at my old place, which was a rental, was to buy large cup hooks, put them every ~18" down the hallway, and load them with ethernet cables. I used adhesive cable runners to go down walls near doors and ran the cables under doors to get from room to room.

    I got lucky in that two adjacent rooms shared a phone jack, and I replaced the faceplate with a quad-port keystone faceplate on each side. One keystone was wired to the phone line to keep the existing functionality; the rest were connected to each other through the wall as ethernet, and I just patched one side to the other (on one side was the core switch for my network). That was my experience; obviously yours will be different. I used white ethernet to try to blend it in with the ceiling/walls, which were off-white.

    In my situation, I was on DSL and used the phone jack in one of the bedrooms for my internet connection. That bedroom was used as an office, and it neighbored my bedroom, where I used the jack-to-jack connections through the wall to feed my TV and other stuff in the bedroom. The ethernet on the cup hooks went from the office to the living room, where I put a second access point (the first AP was in the office) plus the TV and other gear. Between the bedrooms and the living room was the kitchen, and the wet wall was basically RF-blocking, so I needed an access point on either side; one in the office near the bedroom and bathroom, and one in the living room, provided plenty of coverage for the ~900 sqft apartment we were renting. Most everything was on wired ethernet, and the WiFi was used mainly by laptops and cellphones.

    I live by the philosophy of wired when you can, wireless when you have to, mainly to save WiFi channels and bandwidth for devices that don’t have an easy wired alternative, like mobile phones and portable computers.

    I don’t think you’re in a bad spot, OP, and any of these choices should be adequate for your needs, but that will vary depending on what speed internet you have and how much speed you need on the LAN (to the NAS and between systems).

    Good luck.



  • Indeed it does. I’m looking forward to the Flex series (I’m specifically waiting on the Flex 140 because I have systems with a low-profile requirement) to try to put together some GPU acceleration on my homelab cluster. I need it for transcoding in the short term, but in the long term I’m hoping to stand up one of those open source, self-hosted “cloud” gaming services.

    We still do LAN parties, and if I can pick up some cheap thin clients and connect them to a GPU-accelerated VDI or something, people wouldn’t have to cart their PCs over when we have a LAN.

    I’d go for something more modest like the A380, since Sparkle has a low-profile version of it, but the 6G of dedicated video memory gives me pause, since I’d basically have to dedicate one whole GPU per virtual desktop, which isn’t as scalable as I need. Even putting two users on a single GPU with 6G of memory is kind of a non-starter for me. I’ve used GPUs with 3G of memory as recently as two years ago, and bluntly, it’s not a good experience, so anything less than 4-6G per user is basically rejected right out of the gate. I might pick one up just to test with a single VM in a VDI situation, but long term that’s not going to work.


  • At the risk of resurrecting a zombie post, I’ll respond.

    I’m not sure about the specifics of xcp-ng, since I haven’t run it myself, but I know Proxmox and VMware can both do PCIe passthrough to VMs. Recently L1 Techs has done videos on the Intel Flex GPUs and their potential with VDI for video rendering (basically as a virtual GPU), which worked excellently. I’m not sure if there’s a large feature gap between the A380 and the Flex series, but I suspect not. Given the cost of an A380, it’s probably worth the risk to try it. With all the recent updates that have been improving performance and stability for the Intel GPUs, the A380 is a solid buy, even if it’s “only” able to be passed through to the VM …
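
    For what it’s worth, whichever hypervisor you land on, the first sanity check for PCIe passthrough is whether the GPU sits in its own IOMMU group. Here’s a minimal sketch that just walks sysfs on a Linux host; the paths are standard, but whether your board splits the groups sensibly is entirely down to the hardware and firmware.

    ```python
    # Minimal sketch: list IOMMU groups and the PCI devices in each one.
    # Run on the Linux hypervisor host with IOMMU/VT-d enabled in firmware and
    # on the kernel command line; an empty listing means passthrough won't work yet.
    from pathlib import Path

    groups_root = Path("/sys/kernel/iommu_groups")

    if not groups_root.is_dir():
        print("No IOMMU groups found; is IOMMU/VT-d enabled?")
    else:
        for group in sorted(groups_root.iterdir(), key=lambda p: int(p.name)):
            print(f"IOMMU group {group.name}:")
            for dev in sorted((group / "devices").iterdir()):
                vendor = (dev / "vendor").read_text().strip()   # e.g. 0x8086 (Intel)
                device = (dev / "device").read_text().strip()
                print(f"  {dev.name}  [{vendor}:{device}]")
    ```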

    Good luck



  • I didn’t have to read far into the Pi.Alert documentation to find your issue: scanning and detection are done using ARP scans. ARP (Address Resolution Protocol) operates at layer 2, and each VLAN is its own layer 2 broadcast domain; crossing between VLANs requires layer 3 routing, so layer 2 traffic does not traverse VLANs.

    Additional scanning (by Pi.Alert) is complementary to the ARP scan, which to me reads like the ARP scans always need to work.

    The easy solution is to use a trunk port into the system and set up multiple VLAN sub-interfaces on the NIC in the OS to handle each VLAN. Alternatively, give the VM multiple NICs, one for each VLAN you wish to scan.

    The bottom line is that the Pi.Alert system needs to have a direct network link into each network it’s trying to monitor.
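
    To illustrate why that is: an ARP sweep only ever sees hosts on the layer 2 segment it was sent from. Here’s a minimal sketch using scapy; the sub-interface names and subnets are placeholders for whatever your trunked VLANs actually are, and it needs root to send raw frames.

    ```python
    # Minimal sketch: ARP-sweep each VLAN from its own sub-interface.
    # ARP is layer 2 only, so each sweep sees hosts on that VLAN and nothing else.
    # Interface names and subnets are placeholders; requires root and scapy.
    from scapy.all import ARP, Ether, srp

    VLANS = {
        "eth0.10": "192.168.10.0/24",  # e.g. trusted LAN VLAN
        "eth0.20": "192.168.20.0/24",  # e.g. IoT VLAN
    }

    for iface, subnet in VLANS.items():
        frame = Ether(dst="ff:ff:ff:ff:ff:ff") / ARP(pdst=subnet)
        answered, _ = srp(frame, iface=iface, timeout=2, verbose=False)
        print(f"{iface} ({subnet}):")
        for _, reply in answered:
            print(f"  {reply.psrc}  {reply.hwsrc}")
    ```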


  • I do it because I don’t want to run short of IP space.

    I’ve worked on networks that are reaching the limit of how many systems they can hold, and I don’t want that to happen to mine, so I intentionally oversize basically every subnet and usually over-segregate the traffic. I use a lot of subnets.
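
    As a quick illustration of what I mean by oversizing, here’s a minimal sketch using Python’s ipaddress module; the prefixes and host count are made-up examples, not a recommendation for any particular network.

    ```python
    # Minimal sketch: compare the headroom different prefix lengths give you.
    # The prefixes and expected host count are made-up examples.
    import ipaddress

    expected_hosts = 120  # what you think you need today

    for prefix in ("10.10.10.0/24", "10.10.8.0/22"):
        net = ipaddress.ip_network(prefix)
        usable = net.num_addresses - 2  # minus network and broadcast addresses
        print(f"{prefix}: {usable} usable addresses, "
              f"{usable / expected_hosts:.1f}x headroom for {expected_hosts} hosts")
    ```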

    They’re not all VLANs; some are on independent switches. What I did for storage in one case was give a single NIC to the management network for administration, while the rest connected to a storage subnet with fully dedicated links. I was using the same switch, so they were VLANned, but it could easily have been done on another switch. The connections from the storage to the compute systems were all done with dedicated links on dedicated NICs, so 100% of the bandwidth was available for the storage connections.

    I’m very sensitive to bottlenecks in my layer 2 networks and I don’t want to share bandwidth between a production interface and a storage interface. NICs are cheap. My patience is not.



  • I’m a network guy, so everything in my labs uses SNMP because it works with everything. Things that don’t support SNMP are usually replaced and yeeted off the nearest bridge.

    For that I use LibreNMS: simple, open source, and I find it easy to use, for the most part. I put it on a different system than what I’m monitoring, because if it shares fate with everything else, it’s not going to be very useful or give me any alerts when there’s a full outage of my main homelab cluster.
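
    For anyone curious what the underlying polling looks like, here’s a minimal sketch of a single SNMPv2c get using pysnmp’s classic synchronous hlapi; the target address and the “public” community string are placeholders (and plain v2c is only tolerable on a filtered lab network like the one described here).

    ```python
    # Minimal sketch: poll sysDescr from one device over SNMPv2c with pysnmp.
    # The target address and community string are placeholders; LibreNMS does far
    # more than this, but it all builds on simple gets and walks like this one.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),        # SNMPv2c
            UdpTransportTarget(("192.0.2.10", 161)),   # placeholder switch/router
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),
        )
    )

    if error_indication or error_status:
        print("SNMP poll failed:", error_indication or error_status.prettyPrint())
    else:
        for name, value in var_binds:
            print(f"{name.prettyPrint()} = {value.prettyPrint()}")
    ```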

    Of course, access to it from the internet is forbidden, and any SNMP is filtered by my firewall. Nothing really gets through to it, so I’m unconcerned about it becoming a target. For the rest of my systems, security mostly relies on a small set of reverse proxies and firewall rules to keep everything locked down.

    I use a couple of VPN systems to access the servers remotely, all running on odd ports (if they need port forwards at all). I have multiple to provide redundancy for my remote access, so if one VPN isn’t working due to a crash or something, the others should still get me some measure of access.



  • MystikIncarnate@lemmy.ca to homelab@lemmy.ml · Community Activity · 6 months ago

    I’ve been avoiding reddit, but when I go visit, I’m usually on /r/homelab or /r/techsupport (or something similar); most of the other communities have rotted away, and aren’t nearly as good as they used to be.

    I use Jerboa on my Android, and it’s been quite adequate for lemmy.

    As for the community: bluntly, reddit is overrun with repeat questions, so if you’re a regular there, you see the same or similar stuff posted constantly by other users. So far, here, with the community being nominally smaller, repeats are generally less frequent. You also see more of the same names popping up, and you can mostly follow people’s homelab journeys. That’s nice.

    I don’t hate reddit, though I hate their API rules and the decisions they’ve made about how to handle them… I just don’t see it as the future. There may have been a time when I did see reddit as the future of this type/style of discussion, but it definitely isn’t anymore. Reddit will continue to hold a special place in my mind for what it was when it was a good platform, but I’m waiting for everyone who’s still over there to catch up to the evolution that is lemmy.


  • I just want to say that I don’t love the NUC for homelabs, mainly because it only has one NIC. I also don’t like USB NICs, because I’ve had too many problems with them dropping out without any obvious cause and then working again after simply unplugging them and plugging them back in. I don’t like having to be that hands-on with my lab; I just want it to work.

    If you’re okay with the limits of a single NIC, then the NUC is a great option; for my homelab, I actually run a storage network, so I generally need two NICs; one for production/front-end traffic, and one for storage/back-end traffic.

    Beyond that gripe, you could do a lot worse than a NUC for your homelab. You may be able to save some money if you get an off-lease Core i5/i7 business-class system; the mini/micro systems that are available are quite good, even on the used market. If you want new, the NUC is probably going to be one of the cheaper options, even considering the tiny/mini/micro systems that are out there. I’ve used several tiny/mini/micro machines for small processing roles; one example is a DNS server, and in another case I used one for Home Assistant. Neither system relies on external storage (no storage network requirement), so they performed quite well.

    I know most people don’t run a storage network, and just use containers/VMs on local storage, so if that’s you, or you’re just starting out, any tiny/mini/micro or NUC will do quite well.