• sith@lemmy.zip · 7 days ago

    Maybe a more reasonable question: Is there anyone here self-hosting on non-shit hardware? 😅

  • lnxtx (xe/xem/xyr)@feddit.nl · 7 days ago

    Maybe not shit, but exotic at the time: the year 2012.
    The first Raspberry Pi, model B 512 MB RAM, with an external 40 GB 3.5" HDD connected to USB 2.0.

    It was running ARM Arch BTW.

    Next, a cheap, second-hand Asus Eee Box mini desktop.
    A 32-bit Intel Atom, something like the N270, with a max of 1 GB DDR2 RAM I think.
    Real metal under the plastic shell.
    It could even run without active cooling (I broke a fan connector).

    • Dave@lemmy.nz · 7 days ago

      I have one of these that I use for Pi-hole. I bought it as soon as they were available. Didn’t realise it was 2012, seemed earlier than that.

      • lnxtx (xe/xem/xyr)@feddit.nl · 7 days ago

        Mainly telemetry, like temperature inside and outside.
        A script to read the data and push it into an RRD, later PostgreSQL (roughly like the sketch below).
        lighttpd to serve static content, later PHP.

        At one point it served as a bridge between the LAN and an LTE USB modem.
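
        A minimal sketch of that kind of telemetry script, assuming a DS18B20-style 1-wire sensor and a hypothetical readings table; the sensor path, database name, and credentials are illustrative, not the original setup:

        ```python
        import psycopg2  # PostgreSQL driver

        # Hypothetical 1-wire sensor path (DS18B20-style on a Raspberry Pi)
        SENSOR = "/sys/bus/w1/devices/28-000005e2fdc3/w1_slave"

        def read_temp_c():
            # The sysfs file ends with something like "t=22875" (millidegrees Celsius)
            with open(SENSOR) as f:
                raw = f.read()
            return int(raw.rsplit("t=", 1)[1]) / 1000.0

        # Illustrative database and table names
        conn = psycopg2.connect(dbname="telemetry", user="pi")
        with conn, conn.cursor() as cur:
            cur.execute(
                "INSERT INTO readings (taken_at, location, temp_c) VALUES (now(), %s, %s)",
                ("inside", read_temp_c()),
            )
        ```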

    • ThunderLegend@sh.itjust.works · 7 days ago

      This was my media server and Kodi player for like 3 years… still have my Pi 1 lying around. Now I have a shitty Chinese desktop I built this year with a 3rd-gen i5 and 8 GB of RAM.

  • BCsven@lemmy.ca · 6 days ago

    Does this count: ARMv6, 256 MB RAM, running OpenMediaVault… hmm, I have to fix my clock. LOL

  • Rooty@lemmy.world · 7 days ago

    Enterprise-level hardware costs a lot, is noisy, and needs a dedicated server room; old laptops cost nothing.

    • pixelscript@lemm.ee · 7 days ago

      I got a 1U rack server for free from a local business that was upgrading their entire fleet. Would’ve been e-waste otherwise, so they were happy to dump it off on me. I was excited to experiment with it.

      Until I got it home and found out it was as loud as a vacuum cleaner with all those fans. Oh, god no…

      I was living with my parents at the time, and they had a basement I could stick it in where its noise pollution was minimal. I mounted it up to a LackRack.

      Since moving out to a 1 bedroom apartment, I haven’t booted it. It’s just a 70 pound coffee table now. :/

      • Sentau@discuss.tchncs.de · 7 days ago

        This was common in budget laptops 10 years ago. I had an Asus laptop with the same resolution, and I have seen others with it as well.

        • Blackmist@feddit.uk · 5 days ago

          Which doesn’t sound like much, but if you have applications designed for 1024x768 (which was pretty much the standard PC resolution for years), then at least they would fit on the screen.

        • Petter1@lemm.ee · 7 days ago

          😆nice

          I just learned from an awesome person in this comment thread that this resolution came from 4:3 screens that had some width added to reach 16:9 😊

          • VoteNixon2016@lemmy.blahaj.zone · 7 days ago

            I had to check the post while not logged in; weirdly, I only see your comment when I’m logged in. But yeah, I (almost) only ever SSH into it, so I never really noticed the resolution until you pointed it out.

      • viking@infosec.pub · 7 days ago

        Some old netbook, I guess, or unsupported hardware and a driver default. If all you need is SSH, the display resolution hardly matters.

        • Petter1@lemm.ee · 7 days ago

          Sure, I had just never seen those numbers for a resolution, ever 😆

          • kalleboo@lemmy.world · 7 days ago

            Most 720p TVs (“HD Ready”) used to be that resolution, since they reused production lines from 1024x768 displays.
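
            For what it’s worth, the arithmetic works out: keep the 768 lines of a 1024x768 panel and stretch the width toward 16:9, and you land on the familiar 1366x768.

            ```python
            # Widening a 4:3, 768-line panel to 16:9 while keeping the line count:
            height = 768
            width = height * 16 / 9
            print(width)  # 1365.33..., which panels rounded up to 1366 in practice
            ```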

            • Petter1@lemm.ee · 7 days ago

              Ahh, I see, they took the standard 4:3 screen and let it grow to 16:9. That makes a lot of sense 😃

              I am too young to know the 4:3 resolutions 😆

  • Smokeydope@lemmy.world · 6 days ago

    I run a local LLM on my gaming computer, which is like a decade old now, with an old 1070 Ti 8 GB VRAM card. It does a good job running Mistral Small 22B at 3 t/s, which I think is pretty good. But any tech enthusiast into LLMs would look at those numbers and probably wonder how I can stand such a slow token speed. I look at their multi-card data center racks with 5x 4090s and wonder how the hell they can afford it.
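
    For reference, a minimal llama-cpp-python sketch of that kind of setup, with a quantized GGUF of Mistral Small only partially offloaded to an 8 GB card; the file name and layer count are illustrative, not the exact config above:

    ```python
    from llama_cpp import Llama  # pip install llama-cpp-python (CUDA build)

    llm = Llama(
        model_path="Mistral-Small-Instruct-22B.Q4_K_M.gguf",  # hypothetical quant file
        n_ctx=4096,
        n_gpu_layers=24,  # only part of the model fits in 8 GB VRAM; the rest runs on CPU
    )

    out = llm("Q: Why run an LLM on a 1070 Ti?\nA:", max_tokens=128, stop=["Q:"])
    print(out["choices"][0]["text"].strip())
    ```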

  • Pixel@lemmy.ca · 6 days ago

    I had an old Acer SFF desktop (circa 2009) with an AMD Athlon II X3 435 (equivalent to the Intel Core i3-560) with a 95 W TDP, 4 GB of DDR2 RAM, and two 1 TB hard drives running in RAID 0 (both HDDs had over 30k hours by the time I put them in). The clunker consumed 50 W at idle. I planned on running it into the ground so I could finally send it off to a computer recycler without guilt.
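
    As a rough sanity check on that idle draw (assuming it really sat at about 50 W around the clock):

    ```python
    # Annual energy for a box idling at ~50 W, 24/7:
    idle_watts = 50
    kwh_per_year = idle_watts / 1000 * 24 * 365
    print(kwh_per_year)  # 438 kWh/year, before counting anything above idle
    ```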

    I thought it was nearing death anyway, since the power button only worked if the computer was flipped upside down. I have no idea why that was the case; the computer would keep running normally once turned right side up again.

    The thing would not die. I used it as a dummy machine to run one-off scripts I wrote, a seedbox that would seed new Linux ISOs as they were released (genuinely, it was RAID 0 and I wouldn’t have downloaded anything useful), a Tor relay, and at one point a script to just endlessly download Linux ISOs overnight to measure bandwidth over the Chinanet backbone (something like the sketch below).

    It was a terrible machine by 2023, but I found I used it the most because it was my playground for all the dumb things that I wouldn’t subject my regular home production environments to. Finally recycled it last year, after 5 years of use, when it became apparent it wasn’t going to die and far better USFF 1L Tiny PC machines (i5-6500T CPUs) were going on eBay for $60. The power usage and wasted heat of an ancient 95W TDP CPU just couldn’t justify its continued operation.
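
    A bandwidth-measuring downloader along those lines can be tiny; a hedged sketch, with a placeholder mirror URL rather than whatever was actually used:

    ```python
    import time
    import urllib.request

    URL = "https://example-mirror.org/distro.iso"  # hypothetical Linux ISO mirror

    start = time.monotonic()
    total = 0
    with urllib.request.urlopen(URL) as resp:
        while True:
            chunk = resp.read(1 << 20)  # read 1 MiB at a time
            if not chunk:
                break
            total += len(chunk)

    elapsed = time.monotonic() - start
    print(f"{total / elapsed / 1e6:.1f} MB/s over {elapsed:.0f} s")
    ```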

      • Pixel@lemmy.ca · 6 days ago

        The X3 CPUs were essentially quad-cores where one of the cores failed a quality-control check. With a higher-end mobo it was possible to unlock the fourth core, with varying results. This was a cheap consumer Acer prebuilt though, so I didn’t have that option.

  • rumba@lemmy.zip · 6 days ago

    7th-gen Intel, 96 GB of mismatched RAM, 4 used 10 TB HDDs, plus a 12 TB with a broken SATA connector that only works because it’s sitting just right in a sled, and a couple of 14 TB drives. One M.2 and two SATA SSDs. It’s running Unraid with 2 VMs (Plex and Home Assistant), one of which has corrupted itself 3 times. A 1080 and a 2070.

    I can get several streams off it at once, but not while it’s running a parity check, and it can’t handle 4K transcoding.

    It’s not horrible, but I couldn’t do what I do now with less :)

  • empireOfLove2@lemmy.dbzer0.com · 7 days ago

    Your hardware ain’t shit until it’s a first-gen Core 2 Duo in a random Dell office PC with 2 GB of memory that you only use because it’s a cheaper way to get x86 when you can’t use your Raspberry Pi.

    Also, they lie most of the time and it may technically run fine with more memory, especially on older machines from when DIMM capacities were a lot lower than they can be now. It just won’t be “supported”.

  • sudoer777@lemmy.ml · 7 days ago

    I started my self-hosting journey on a Dell all-in-one PC with 4 GB of RAM, a 500 GB hard drive, and an Intel Pentium, running Proxmox, Nextcloud, and I think Home Assistant. I eventually upgraded; now I’m on a build with a Ryzen 3600, 32 GB of RAM, a 2 TB SSD, and 4x4 TB HDDs.

    • tburkhol@lemmy.world · 6 days ago

      My first server was a single-core Pentium - maybe even 486 - desktop I got from university surplus. That started a train of upgrading my server to the old desktop every 5-or-so years, which meant the server was typically 5-10 years old. The last system was pretty power-hungry, though, so the latest upgrade was an N100/16 GB/120 GB system SSD.

      I have hopes that the N100 will last 10 years, but I’m at the point where it wouldn’t be awful to add a low-cost, low-power computer to my tech upgrade cycle. Old hardware is definitely a great way to start a self-hosting journey.

  • biscuitswalrus@aussie.zone · 7 days ago

    3x 6th-gen Intel NUC i5s (2 cores each), 32 GB RAM. Proxmox cluster with Ceph.

    I just ignored the limitation and once tried a single 32 GB SODIMM (out of a laptop) and it worked fine, but went back to 2x16 GB DIMMs since the limit was still the 2-core CPU. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    I suggest only running off shit tier, since three nodes give redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers, RD Gateway, connection broker, session hosts, FSLogix, etc., back when MS had only just bought that tech. Meanwhile my home “arr” stack just plugs along in Docker containers. Even my OPNsense router runs virtualized on them. Just get a proper managed switch and bring the internet in on a VLAN to the guest VM on a separate virtual NIC.

    Point is, it’s still capable today.

    • renzev@lemmy.world (OP) · 7 days ago

      How is Ceph working out for you, btw? I’m looking into distributed storage solutions right now. My use case is to have a single unified filesystem/index, but to store the contents of the files on different machines, possibly with redundancy. In particular, I want to be able to upload some files to the cluster and still see them (the directory structure and filenames) even when the underlying machine storing their content goes offline. Is that a valid use case for Ceph?

      • biscuitswalrus@aussie.zone · 7 days ago

        I’m far from an expert, sorry, but my experience so far is so good (literally wizard-configured in Proxmox, set and forget), even through a single disk loss. Performance for VM disks was great.

        I can’t see why regular files would be any different.

        I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.

        I’m not sure about seeing the filesystem while the hosts are all offline, but if you’ve got any one system with a valid copy online you should be able to see it. I do. But my emphasis is generally on getting the host back online.

        I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might just work without Ceph.

        I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and a media server to get it off. That’s just TrueNAS SCALE. That way it handles data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to have the “add a compute node to expand your storage pool” setup, which is how I want my VM hosts. Scaling ZFS out looks way harder than Ceph.

        Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you’ve learned works best.

        • renzev@lemmy.world (OP) · 6 days ago

          Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware, and see how it goes on dummy data, then blow it away and try something else.

          This is good advice, thanks! Pretty much what I’m doing right now. I already tried it with IPFS and found that it didn’t meet my needs. Currently setting up a Tahoe-LAFS grid to see how it works. Will try out Ceph after this.

  • ipkpjersi@lemmy.ml · 6 days ago

    Not anymore. My main self-hosting server is an i7-5960X with 32 GB of ECC RAM, an RTX 4060, a 1 TB SATA SSD, and 6x6 TB 7200 RPM drives.

    I did use to host some services on like a $5 or $10 a month VPS, and then eventually a $40 a month dedi, though.

      • ipkpjersi@lemmy.ml · 5 days ago

        I use it for Plex/Jellyfin; it’s the cheapest NVIDIA GPU that supports both AV1 encoding and decoding. Even though Plex doesn’t support AV1 yet IIRC, it’s still more future-proof that way. I picked it up for around $200 on a sale; it was well worth it IMO.
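
        If you ever want to exercise that AV1 encoder outside Plex/Jellyfin, here is a hedged sketch of kicking off a hardware encode with ffmpeg’s av1_nvenc; the file names and quality settings are illustrative, and it assumes an RTX 40-series card plus an ffmpeg build with NVENC support:

        ```python
        import subprocess

        cmd = [
            "ffmpeg",
            "-hwaccel", "cuda",      # decode on the GPU where possible
            "-i", "input.mkv",       # hypothetical source file
            "-c:v", "av1_nvenc",     # hardware AV1 encode on the 4060
            "-preset", "p5",         # middle-of-the-road speed/quality preset
            "-cq", "30",             # constant-quality target
            "-c:a", "copy",          # pass audio through untouched
            "output-av1.mkv",
        ]
        subprocess.run(cmd, check=True)
        ```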

    • ripcord@lemmy.world · 6 days ago

      Yeah, not here either. I’m now at the point where I keep wanting to replace my last host that’s limited to 16 GB. All the others - at least the ones I care about RAM on - support 64 GB or more now.

      • ipkpjersi@lemmy.ml · 6 days ago

        64 GB would be a nice amount of memory to have. I’ve been okay with 32 GB so far, thankfully.