![](https://lemmy.world/pictrs/image/a835b4d7-3f39-4c0f-8db8-ec8e89107087.jpeg)
![](https://lemmy.world/pictrs/image/8286e071-7449-4413-a084-1eb5242e2cf4.png)
2023, I remember the announcement last year. Not sure where you’re getting 2014 from, that was even before NC split off.
What puts me off of Owncloud is the new ownership. I couldn’t care less if it’s written in the blood of Christ, if I have to worry about the rug getting pulled out from under me for self-hosting, it’s a no-go for me, Joe.
Nextcloud works well for me and has for years. The people that don’t like it can go use this, and we’ll see you back in a couple of years when it goes open-core or worse.
Probably depends on the ISP, but I just have two NICs in each server, and eth1 on both is on a switch to the cable modem. If one goes down, the other comes up fine. Can’t recall if I spoofed the same MAC on the OPNsense VMs.
VMs under KVM are pretty much bare metal, and Proxmox doesn’t use much in the way of resources itself; it’s basically a headless Debian with a web interface to do all the KVM stuff.
Proxmox, especially if you use ZFS for the VM datastore, makes it so much easier to revert, back up, and deploy/clone VMs and LXCs in a home lab. I highly recommend it if you’re just starting out. Once you wrap your head around it, it gets out of the way and lets you just tinker with your projects, instead of doing everything manually in virt-manager or at the command line.
Combined with Proxmox Backup Server, it’s a production-ready hypervisor for anything you decide to keep. The HA features also work well enough that my main routing OPNsense VM jumped between nodes when the primary node lost a drive, and I didn’t notice for a week; it was that seamless.
Use a firewall like OPNsense and you’ll be fine. There’s a Crowdsec plugin to help against malicious actors, and for the most part, nothing you’re doing is worth the trouble to them.
It’s Linux, not Unix.
+1 for PBS and its dedup capabilities. I run a remote sync with it to offsite storage, along with ZFS replications of the underlying datastores.
As for Proxmox itself, I haven’t bothered backing up the nodes themselves; it’s so simple to set up and cluster that if a node went down, it would be a good chance for a nuke-and-pave, then restore the VMs.
While there’s probably a better way of doing it via the Docker ZFS driver, I just make a datastore per stack under the hypervisor, mount the datastore into the Docker LXC, and make everything a bind mount within that mountpoint, then snapshot and back up via Sanoid to a couple of other ZFS pools, one local and one remote on zfs.rent.
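For reference, the Sanoid side of that setup looks something like this; the dataset name and retention numbers here are placeholders, not my actual config, so adjust to taste:

```ini
# /etc/sanoid/sanoid.conf -- hypothetical dataset name and retention policy

[tank/docker/nextcloud]
        use_template = production

[template_production]
        frequently = 0
        hourly = 24
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

Sanoid takes and prunes the snapshots on schedule; its companion tool syncoid then handles the send/receive to the local and remote pools.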
I’ve had to restore our mail server (MySQL) and Nextcloud (Postgres), and they both act as if the power went out, recovering via their own journaling systems. I’ve not found any inconsistencies on recovery, even when I’ve done a test restore of a snapshot taken during known heavy activity. I trust both databases’ recovery methods; others, maybe not so much. But test that for yourself.
Snapshot with ZFS, back up the snapshot.
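As a sketch, that’s just a couple of commands; the pool and dataset names below are placeholders:

```shell
# Take an atomic snapshot of the dataset (names are hypothetical)
zfs snapshot tank/vmdata@before-upgrade

# Replicate the snapshot to a backup pool
zfs send tank/vmdata@before-upgrade | zfs recv backup/vmdata

# Roll the dataset back if things go sideways
zfs rollback tank/vmdata@before-upgrade
```

For ongoing backups you’d use incremental sends (`zfs send -i`) against the previous snapshot rather than full streams, which is what tools like Sanoid/syncoid automate for you.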
Shucked drives are usually the drives that are rejected for internal use because of quality issues. They might work fine, they might not. Be careful with them and remember, RAID is not a backup.
Could just use a regular DC motor and limit switches.
The T-Pot installation needs at least 8-16 GB RAM, 128 GB free disk space
Good lord.
And fuck curl-bash script installers.
I use the HACS integration in Home Assistant. Then I can build automation based on events to notify, restart VMs, etc.
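As a sketch, one of those automations might look like this; the entity id and notify target are placeholders, since the actual names depend on how the integration labels your nodes and VMs:

```yaml
# Hypothetical Home Assistant automation -- entity ids depend on your setup
alias: Notify when a VM stops
trigger:
  - platform: state
    entity_id: binary_sensor.proxmox_vm_101_running  # placeholder sensor
    to: "off"
action:
  - service: notify.mobile_app_phone  # placeholder notify target
    data:
      message: "VM 101 is down on the Proxmox node"
mode: single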
The AIO container is pretty effortless to run.
I’m not sure how well docker-in-docker would work via portainer. Maybe it does, I’ve not tried it.
I would just do it from a folder you set up yourself: drop the docker-compose.yml in it and go. If you want to share your docker-compose file, I can see if I notice a problem. I remember having to get over a couple of issues at the time, but it’s been a while and I can’t remember them offhand.
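If it helps, a minimal AIO compose file is roughly this shape; this is from memory, so check the Nextcloud AIO README for the current ports and options before using it:

```yaml
services:
  nextcloud-aio-mastercontainer:
    image: nextcloud/all-in-one:latest
    container_name: nextcloud-aio-mastercontainer  # AIO expects this exact name
    restart: always
    ports:
      - "8080:8080"  # AIO admin interface
    volumes:
      - nextcloud_aio_mastercontainer:/mnt/docker-aio-config
      - /var/run/docker.sock:/var/run/docker.sock:ro  # AIO spawns the other containers itself

volumes:
  nextcloud_aio_mastercontainer:
    name: nextcloud_aio_mastercontainer
```

The mastercontainer needs the Docker socket because it creates and manages the rest of the stack (database, Redis, Collabora, etc.) on its own, which is why it doesn’t play well with docker-in-docker setups.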
I think NC is worth setting up, but YMMV.
What’s your issue with NC AIO? Maybe I can help, I’ve been running it since nearly inception.
Well, you knew that was coming from the licensing changes.
On the plugins, I couldn’t say; I’ve not used those ones. I do use ones like GPodderSync, Recipes, and SnappyMail with no issues. I did try that Forms plugin and it was a bag of shit. Never had issues with the client; I’ve only used it on Windows once, every other system it’s on is Linux, but it’s been solid.
In the Docker All-in-One, the Collabora integration is flawless, and I have several people using it on my server. Performance is snappy, especially with a few recent updates. I highly recommend the AIO; after having used NC on bare metal, NextcloudPi, and plain Docker, it’s the least maintenance and best update experience by far.
Well, every project ends up finding things that aren’t as easy as they may have thought, or chooses after the fact to devote the time to other things. I could cherry-pick decade-old features from every long-lived project, like KDE or GNOME, and say that makes them worthless. They patently aren’t worthless, and anyone who wants to criticize is welcome to file a bug and follow through on the fix. Most bugs don’t get fixed because people won’t follow up.
I’m happy with where they’ve gone overall; it fits a lot of needs I’d otherwise have to use something like Google or Microsoft for. So every person who can’t be arsed to put in the time to get it working properly for the things it does well, shitting on it every. goddamn. time. its. name. shows. up, gets on my last nerve.
I have no issue with corporate funding. I have an issue when a company gets to make all the decisions. A lot of good software has gone to hell when the shareholders need profit now instead of seeing a long-term vision.
We’ll see, but I’ve been around this rodeo enough to just avoid it from the start and take some pain now instead of putting in effort that’s going to be wasted later.