If it’s not available as an application, you should probably look into docker compose
What I’m using is Text Generation WebUI with an 11B GGUF model from Huggingface. I offloaded all layers to the GPU, which uses about 9GB of VRAM. With GGUF models, you can choose how many layers to offload to the GPU, so it uses less VRAM. Layers that aren’t offloaded use system RAM and the CPU, which will be slower.
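For reference, the same partial-offload idea with plain llama.cpp looks something like this (the model path is a placeholder and the exact binary name depends on your build, so double-check against the llama.cpp docs):

```shell
# -ngl (--n-gpu-layers) controls how many layers go to the GPU;
# a lower number means less VRAM used, with the rest running on CPU/RAM
./llama-cli -m ./models/my-11b-model.Q4_K_M.gguf -ngl 35 -p "Hello"
```

Setting `-ngl` high enough to cover all layers gives you full GPU offload like in my setup.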
In GNOME you just need to log in with your Nextcloud account in the system settings and it will show up in the file manager
There’s a project called Watchtower that automatically updates running Docker containers (including ones started with docker-compose) to the latest image
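Watchtower itself can run as a compose service. A minimal sketch (the poll interval is just an example value, check the Watchtower docs for the rest of the options):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # lets Watchtower talk to the Docker daemon to check and update containers
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - WATCHTOWER_POLL_INTERVAL=86400   # check for new images once a day
    restart: unless-stopped
```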
You can also literally tell from the cover what kind of show it’s going to be 😭
Absolutely. First few episodes, especially the first one, were actually really fun but I dropped it somewhere at the beginning of season 2 because it was just boring. It kinda felt like it just kept getting worse episode by episode.
It’s just that I only read manga on my phone anyway (even though that might be because it’s not synced between devices) and I’ve never had the issue that an online source went offline. I just thought that maybe there are other reasons, like how you can get way better quality when you self host something like Jellyfin for movies and shows.
A mini PC with an Intel N100 will be a little more expensive (I bought one for ~150€) but it’s about 5-6 times faster than the Pi and mine also came with 16 GB of RAM and a 500 GB SSD. It requires very little power and because of that, it’s also very quiet. AV1 decode is also great if you plan to run something like Kodi on it or you want to do transcoding from an AV1 video with Jellyfin (I haven’t migrated those to it yet, so I don’t know how well it works in practice). I’m not sure but it might not even be a lot more expensive than a Pi with 8 GB of RAM and an additional 500 GB SSD.
You just need the docker and docker-compose packages. You make a `docker-compose.yml` file and there you define all settings for the container (image, ports, volumes, …). Then you run `docker-compose up -d` in the directory where that file is located and it will automatically create the docker container and run it with the settings you defined. If you make changes to the file and run the command again, it will update the container to use the new settings. In this command, `docker-compose` is just the software that allows you to do all this with the `docker-compose.yml` file, `up` means it’s bringing the container up (which means starting it) and `-d` is for detached, so it does that in the background (it will still tell you in the terminal what it’s doing while creating the container).
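To make that concrete, a minimal `docker-compose.yml` could look like this (nginx is just a placeholder image here, you’d swap in whatever service you actually want to run):

```yaml
services:
  web:
    image: nginx:latest        # the image to pull and run
    container_name: web
    ports:
      - "8080:80"              # host port 8080 -> container port 80
    volumes:
      - ./html:/usr/share/nginx/html:ro   # mount local files into the container
    restart: unless-stopped    # bring it back up after reboots/crashes
```

From the directory containing this file, `docker-compose up -d` starts everything and `docker-compose down` stops and removes the container again.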
I have Home Assistant running with TTS and STT on a mini PC with an Intel N100 CPU and 16 gigs of RAM. Works great. LLMs and Stable Diffusion need way more processing power and RAM (or rather VRAM cause both are very slow without a GPU), so that mini PC wouldn’t be enough for that tho.