Hey :) For a while now I’ve been using gpt-oss-20b on my home lab for lightweight coding tasks and some automation. I’m not so up to date with the current self-hosted LLMs, and since the model I’m using was released at the beginning of August 2025 (which, from an LLM-development perspective, feels like an eternity to me), I wanted to tap the collective wisdom of Lemmy and maybe replace my model with something better out there.
Edit:
Specs:
GPU: RTX 3060 (12 GB VRAM)
RAM: 64 GB
gpt-oss-20b does not fit into VRAM completely, but with partial offloading it’s still reasonably fast (enough for me)


Just curious, what does “some automation” entail? I thought LLMs could only work with text, like summarizing documents and that sort of thing.
It’s done by software built around an LLM, not just a raw LLM. LLMs do only work with text, but you can get one to output the text “get_weather(mylocation)”, and instead of passing that straight to the user, the software running on top of the LLM runs a “get_weather” function that calls some weather API. The result of that function is then returned to the user.
Any time you see an “AI” taking “actions”, this is what happens in the background for every action.
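If you want to see the shape of it, here’s a rough sketch of that loop in Python, assuming a local OpenAI-compatible server (llama.cpp, Ollama and LM Studio all expose one). The endpoint URL, the model name and the get_weather function are placeholders for your own setup, not anything official:

```python
import json
from openai import OpenAI

# Assumption: a local OpenAI-compatible endpoint serving gpt-oss-20b.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def get_weather(location: str) -> str:
    # In reality this would call some weather API.
    return f"14°C and raining in {location}"

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

messages = [{"role": "user", "content": "Do I need an umbrella in Berlin?"}]
resp = client.chat.completions.create(model="gpt-oss-20b", messages=messages, tools=tools)
msg = resp.choices[0].message

if msg.tool_calls:
    # The model didn't answer directly; it asked us to run a function.
    call = msg.tool_calls[0]
    args = json.loads(call.function.arguments)
    result = get_weather(**args)

    # Feed the function result back so the model can phrase the final answer.
    messages += [msg, {"role": "tool", "tool_call_id": call.id, "content": result}]
    final = client.chat.completions.create(model="gpt-oss-20b", messages=messages, tools=tools)
    print(final.choices[0].message.content)
else:
    print(msg.content)
```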
Some examples
That’s probably just the basics. People have some clever uses for these things; it’s not just “summarize this document”.
That’s cool, it just… does those things? How does it connect to those apps? I can’t even get Gemini to set a reminder and that’s on a Google device.
Good question. Short answer: not quite.
The LLM is the reasoning layer. It reads your input, figures out intent, and outputs structured instructions. There’s a standard method for hooking that up to tools (MCP, more on that below).
Something else, like Home Assistant, n8n, a Python script, whatever you’ve set up, actually executes the actions. The LLM just talks to those things.
So for the calendar example: your email client triggers on a booking reply, passes the text to the LLM, the LLM extracts the date/time/location and outputs something structured, and then your automation tool creates the calendar event and sets the reminder. Once it’s set up, it looks and feels like one thing, because you interact with it via the LLM (or even better, you tell the LLM out loud. Yes, JARVIS).
So the LLM never “talks to” Google Calendar directly; it just does the bit that’s hard to do with traditional code, which is reading messy natural language and making sense of it.
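In code, that hand-off looks roughly like this. It’s only a sketch: the email text is made up, and create_calendar_event() is a stand-in for whatever actually writes to your calendar (n8n, a CalDAV script, Home Assistant…):

```python
import json
from openai import OpenAI

# Same assumption as before: a local OpenAI-compatible server.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

def create_calendar_event(date, time, location):
    # Stand-in for the automation layer (n8n webhook, CalDAV call, HA service...).
    print(f"Creating event: {date} {time} @ {location}")

email_body = "Hi! Confirming your table for Friday 19:30 at Luigi's, Main St 4."

resp = client.chat.completions.create(
    model="gpt-oss-20b",
    messages=[{
        "role": "user",
        "content": "Extract the booking from this email and reply with JSON only, "
                   'using the keys "date", "time" and "location":\n\n' + email_body,
    }],
)

# The LLM's only job was turning messy text into structured data.
booking = json.loads(resp.choices[0].message.content)
create_calendar_event(booking["date"], booking["time"], booking["location"])
```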
Same for Home Assistant. The LLM parses “turn the lights down a bit, it’s movie time, play something sci-fi” into a device + action + value, and HA does the actual switching.
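The Home Assistant half is then just a REST call once the LLM has produced the structured part. A sketch, assuming the LLM already turned “turn the lights down a bit, it’s movie time” into the dict below; the URL, token and entity_id are placeholders for your own install:

```python
import requests

# Pretend this dict came out of the LLM.
intent = {"domain": "light", "service": "turn_on",
          "data": {"entity_id": "light.living_room", "brightness_pct": 30}}

# Home Assistant's REST API: POST /api/services/<domain>/<service>
requests.post(
    f"http://homeassistant.local:8123/api/services/{intent['domain']}/{intent['service']}",
    headers={"Authorization": "Bearer YOUR_LONG_LIVED_TOKEN"},
    json=intent["data"],
    timeout=10,
)
```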
The secret sauce that makes this work is MCP (Model Context Protocol) - basically a standardised way for LLMs to talk to tools and services.
Instead of custom glue code for every integration, you wire up an MCP server once and the model knows how to use it.
There’s a growing library of them now: filesystems, calendars, browsers, databases, smart home, etc.
Anthropic open-sourced the spec, and most major local LLM frontends support it.
Think of it like hiring a translator who can manage your crew, rather than hiring someone who speaks every language and also has keys to every building and is also a plumber/electrician/contractor/interior designer, if that makes sense.
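For a feel of what “wiring up an MCP server” means in practice, here’s a minimal one using the official Python SDK (pip install "mcp[cli]"). The dim_lights tool and its body are made up; you’d point it at your own setup:

```python
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("home")

@mcp.tool()
def dim_lights(room: str, brightness_pct: int) -> str:
    """Dim the lights in a room to the given brightness percentage."""
    # Placeholder body: call Home Assistant (or whatever you use) here.
    return f"{room} lights set to {brightness_pct}%"

if __name__ == "__main__":
    # stdio transport by default; any MCP-capable client can now discover and call dim_lights.
    mcp.run()
```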
TL;DR: once you set up the stack, then the cool automation stuff can happen. Not a big ask, just a bit fiddly, like learning to program your VCR.
Super surprised Google’s AI doesn’t have the stack / harness inbuilt tho. They could afford to do a lot of the heavy lifting invisibly. I bet they actually do and it’s just … shit. Or a paid extra lol.
These days they can also chain tools together, keep a working memory, etc. Look at Claude Code if you’re curious. It’s come very far very quickly in the last 12 months.
OP said coding AND “some automation”, what is being automated?