I have my own Lemmy instance (Lemmy.emphisia.nl), but I had to take it offline: after running for a while it was using more than half of the system's memory (16 GB in total), causing the computer to crash, since a lot of other services also run on it. The problem appears to come from Postgres. In btop I can see it spawning a lot of processes, each taking up about 80 MB at first but growing as they run, as high as 250 MB per process. I have already tried adjusting the Postgres config to be optimized for less memory, but it seems to do nothing.
Is this normal, and if not, what is going wrong?
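For reference, the kind of memory-related settings I mean are along these lines; the values below are just illustrative for a shared 16 GB machine, not my exact config:

    # postgresql.conf (illustrative values, not a recommendation)
    shared_buffers = 1GB          # shared cache; per-process RSS in btop includes touched shared pages,
                                  # so individual backends can look bigger than they really are
    work_mem = 8MB                # per sort/hash operation, per connection, so it multiplies quickly
    maintenance_work_mem = 128MB
    max_connections = 30          # fewer connections means fewer backend processes
    effective_cache_size = 4GB    # planner hint only, allocates no memory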
Does your server have a big swap space?
It has zswap and a swapfile of about 8 GB, and it gets fully utilized.
Yeah, this is why I have a small swap on my servers. I'd rather the process get killed by the OOM killer and restarted automatically than have it run very slowly and thrash the entire server when it uses too much memory.
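If you're on the standard docker-compose setup, one way to get that behaviour is to cap the database container's memory and forbid it from using swap, so the kernel OOM-kills it instead of thrashing and Docker brings it back up. A rough sketch; the container name and limit are just examples:

    # Limit the Postgres container to 4 GB and disable swap for it
    # (setting --memory-swap equal to --memory means no swap inside the container).
    docker update --memory=4g --memory-swap=4g lemmy_postgres_1
    # Make sure Docker restarts it after the OOM killer takes it down.
    docker update --restart=unless-stopped lemmy_postgres_1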
Since my upgrade to 0.19 I have really struggled to keep my server online. It sounds like the same thing is happening to me. The whole server becomes unresponsive after the load goes to 100. After I kicked NextCloud off the server it only kept happening every couple of days. Let's see if this workaround helps to fix it. If not, I'll remove the swap.
Ever since this problem started happening months ago, I have been running a cron script to automatically restart the Lemmy backend, which in turn resets the Postgres memory use. For me 0.19.x actually made it less bad, but it is still an annoying issue.
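Roughly, such a cron entry can look like this (assuming the default docker-compose install; the container name here is just a placeholder):

    # Restart the Lemmy backend nightly; its pooled connections are dropped,
    # so the corresponding Postgres backend processes exit and free their memory.
    0 4 * * * /usr/bin/docker restart lemmy_lemmy_1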
Try limiting the database connection pool size too in lemmy.hjson. It helped a lot on my instance: I set mine to 30 on a small server with 8 GB of RAM, and you can set it even lower for lower Postgres memory consumption, since each pooled connection is a separate Postgres backend process holding its own memory. The relevant section looks like this:

    database: {
      host: dbhost
      user: "lemmy"
      password: "secret"
      database: "lemmy"
      pool_size: 30
    }