Different phases of power? Did you have 3-phase run to your house or something?
You could get Starlink for a redundant internet connection. Load balancing / failover is an interesting challenge if you like to DIY.
You could possibly run AI Horde if they have enough RAM or VRAM. You could run bare-metal Kubernetes, or run it inside Proxmox.
Honestly I just moved back to local accounts. I’m interested in the other comments on this post for a good solution to move to.
Does that work with Gitea? I was able to get it working with Authentik but wasn’t able to get it working on Keycloak.
FYI, Docker Engine can use different runtimes, and there are lightweight VM runtimes like Kata or Firecracker. I hope one day Docker will default to that technology, as it would be better for the overall security of containers.
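For anyone curious, registering an alternative runtime looks roughly like this in `/etc/docker/daemon.json`. This is just a sketch assuming Kata Containers is installed and exposes an OCI-compatible binary at `/usr/bin/kata-runtime`; newer Kata releases lean on containerd shims, so the exact wiring may differ on your system:

```json
{
  "runtimes": {
    "kata": {
      "path": "/usr/bin/kata-runtime"
    }
  },
  "default-runtime": "kata"
}
```

Containers can then opt in per run with `docker run --runtime=kata ...`, or you can keep `default-runtime` set as above so everything uses it by default.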
I have a mixed-architecture cluster as well. It works great as long as you set your manifests up properly and either use public images that support both architectures, build your own, or set up node affinity to ensure each architecture-specific pod runs only on nodes with the correct architecture.
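For example, here’s a hedged sketch of pinning an amd64-only image to amd64 nodes using the well-known `kubernetes.io/arch` label (pod name and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: amd64-only-example        # hypothetical name
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: kubernetes.io/arch   # label set automatically by the kubelet
                operator: In
                values: ["amd64"]
  containers:
    - name: app
      image: registry.example.com/amd64-only:latest   # placeholder image
```

If you don’t need the full affinity syntax, a simple `nodeSelector: {kubernetes.io/arch: amd64}` does the same job.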
Other than k3s.io’s documentation and tailscale’s documentation, I don’t have any to share, but I don’t mind answering questions if you are stuck.
https://docs.k3s.io/installation
https://tailscale.com/kb/1017/install
Install Tailscale and k3s on the master node and the worker nodes. I have a setup like this and it works well; some of my nodes are in different physical locations from the master node and it works fine.
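Roughly, the steps look like this. It’s a sketch with placeholder addresses, assuming you want the nodes to talk over the tailnet; the exact k3s flags depend on your setup (you may also want `--tls-san` with the server’s Tailscale IP):

```sh
# On every node: install Tailscale and join the tailnet
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up   # or: sudo tailscale up --authkey=<key> for unattended joins

# On the master: install the k3s server, advertising its Tailscale IP
# (100.x.y.z below is a placeholder for the node's Tailscale address)
curl -sfL https://get.k3s.io | sh -s - server --node-external-ip=100.x.y.z

# On each worker: join using the master's Tailscale IP and the node token
# found in /var/lib/rancher/k3s/server/node-token on the master
curl -sfL https://get.k3s.io | K3S_URL=https://100.x.y.z:6443 K3S_TOKEN=<token> sh -
```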
Very insightful. I definitely need to check out cloud-init, as that is one thing you mentioned that I have practically no experience with. Side note: I hate other people’s Helm charts with a passion. There’s no consistency in what is exposed, and anything not cookie-cutter has you customizing the chart to the point where it’s probably easier to start with a custom template, which is what I started doing!
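For anyone else in the same boat, a minimal cloud-init user-data file looks something like this (user name, key, and packages are placeholders, not anyone’s actual config):

```yaml
#cloud-config
# Create a provisioning user, install a few packages, run a command on first boot.
users:
  - name: ansible
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... placeholder-key
package_update: true
packages:
  - curl
  - qemu-guest-agent
runcmd:
  - systemctl enable --now qemu-guest-agent
```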
You urge teams to stop using it [ansible?] as soon as they can? What do you recommend to use instead?
Dynamic inventory. I haven’t used it against a cloud API before, but I have used it against the kube API and it was manageable. Are you saying that through kubectl the node names are different depending on the cloud and aren’t uniform? Edit: Oh, you’re talking about the VMs, doh.
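For reference, the kube-API flavor I used was along these lines. This is a sketch using the `kubernetes.core.k8s` inventory plugin; the filename and namespaces are placeholders, and if I remember right the plugin wants the file to end in `k8s.yml`:

```yaml
# inventory.k8s.yml
plugin: kubernetes.core.k8s
connections:
  - kubeconfig: ~/.kube/config
    namespaces:
      - default
```

Then `ansible-inventory -i inventory.k8s.yml --list` shows you what it discovers.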
I’ve tried Ansible Vault and didn’t make it very far… I agree that thing is a mess.
Thank god I haven’t run into interpreter issues; that sounds like hell.
Ansible output is terrible, no argument there.
I don’t remember the name for it, but I use parameterized template tasks. That might help with this? Edit: include_tasks.
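Roughly what I mean, with made-up file and variable names: one task file gets included multiple times with different vars.

```yaml
# playbook.yml
- hosts: all
  tasks:
    - name: Deploy app A
      ansible.builtin.include_tasks: deploy_app.yml
      vars:
        app_name: app-a
        app_port: 8080

    - name: Deploy app B
      ansible.builtin.include_tasks: deploy_app.yml
      vars:
        app_name: app-b
        app_port: 8081
```

And the included file just consumes whatever vars it was handed:

```yaml
# deploy_app.yml
- name: Render manifest for {{ app_name }}
  ansible.builtin.template:
    src: app-manifest.yaml.j2
    dest: "/tmp/{{ app_name }}.yaml"
```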
I think this comes down to not having a very good IDE that can take in the whole scope of the playbook, which could be a condemnation of Ansible, or could just mean we need better abstraction layers for this complex thing we’re using to try to manage the unmanageable.
I have noticed very slow speeds with sshfs as well. I’ll have to give rclone mount over ssh a try. Thanks!
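For anyone else trying it, the rclone-over-SFTP setup is roughly this (remote name, host, and paths are placeholders):

```sh
# One-time: define an SFTP remote, or add the equivalent section
# to ~/.config/rclone/rclone.conf by hand
rclone config create mynas sftp host nas.example.com user me key_file ~/.ssh/id_ed25519

# Mount it; the VFS cache is what usually makes it feel snappier than sshfs
mkdir -p /mnt/nas
rclone mount mynas:/data /mnt/nas --vfs-cache-mode writes --daemon
```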
How do you do the sshfs mount, tracker and search queries? Is that over tailscale?
Care to share some war stories? I have it set up where I can completely destroy and rebuild my bare metal k3s cluster. If I start with configured hosts, it takes about 10 minutes to install k3s and get all my services back up.
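Not my actual playbook, but the shape of it is something like this: a sketch assuming inventory groups named `server` and `agents`, with `ansible_host` set for the server node.

```yaml
# rebuild.yml
- hosts: server
  become: true
  tasks:
    - name: Install k3s server
      ansible.builtin.shell: curl -sfL https://get.k3s.io | sh -
      args:
        creates: /usr/local/bin/k3s

    - name: Read the join token
      ansible.builtin.slurp:
        src: /var/lib/rancher/k3s/server/node-token
      register: node_token

- hosts: agents
  become: true
  tasks:
    - name: Join the cluster
      ansible.builtin.shell: >
        curl -sfL https://get.k3s.io |
        K3S_URL=https://{{ hostvars[groups['server'][0]].ansible_host }}:6443
        K3S_TOKEN={{ hostvars[groups['server'][0]].node_token.content | b64decode | trim }}
        sh -
      args:
        creates: /usr/local/bin/k3s
```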
What is seedbox? Is it part of the homelab or a service like the VPSs?
I have Traefik running on my Kubernetes cluster as an ingress controller and it works well enough for me after finagling it a bit. Fully automated through Ansible and templated manifests.
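For anyone wondering what routing through it looks like, here’s a generic Ingress sketch (hostname and service are placeholders, not my actual manifests):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
spec:
  ingressClassName: traefik
  rules:
    - host: whoami.home.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```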
I have a small cluster of Pis running k3s Kubernetes and hosting several services for my household. Yeah, they could all run on a single beefy server, but I had fun learning it all.
The Pi 3B has a dedicated bus for the SD card, but Ethernet and USB share bandwidth (Ethernet hangs off the USB 2.0 controller). Enable zram, disable all swap, and keep using the SD card.
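On Raspberry Pi OS that usually boils down to something like this (package and service names may differ on other distros):

```sh
# Disable the default swapfile
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo systemctl disable --now dphys-swapfile

# Compressed swap in RAM via zram-tools
sudo apt install zram-tools
# Size/algorithm live in /etc/default/zramswap (e.g. ALGO=zstd, PERCENT=50)
sudo systemctl restart zramswap
```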
What is the tmpfs for?