I also wrote mine, a bit of JavaScript that loads a JSON file and populates the page.
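A loader like that fits in a few lines. This is only a sketch, assuming a hypothetical `data.json` shape (`{ posts: [{ title, body }] }`) and a `#content` element; neither comes from the original:

```javascript
// Minimal sketch: turn a parsed JSON payload into HTML.
// The data.json shape and the #content element are assumptions
// for illustration, not the author's actual code.
function renderPosts(data) {
  return data.posts
    .map(p => `<article><h2>${p.title}</h2><p>${p.body}</p></article>`)
    .join("\n");
}

// In the browser, wire it to fetch():
// fetch("data.json")
//   .then(r => r.json())
//   .then(data => {
//     document.querySelector("#content").innerHTML = renderPosts(data);
//   });
```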
Shimitar
- 9 Posts
- 348 Comments
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What's the best Open-Source selfhostable Notion replacement? (English)
62 · 9 days ago
AFAIK Joplin is FOSS, but be aware that its Markdown format is not compatible with… Markdown. Funnily enough.
Yes, more redundancy is good and indeed worth having. Still, five 12 TB drives are probably more energy and heat efficient than ten 4 TB ones.
Even if I had ten 4 TB drives for free I wouldn't use them. Maybe a couple for backup or cold storage, but not active 24/7 in a domestic RAID environment.
I actually have four 6 TB HDDs that I retired in favor of the four 8 TB SSDs; I use two for local backup and keep two as spares to replace them when they fail.
Four 8 TB drives in RAID5 provide 24 TB of total space, which is far more than I need, and the risk of a double failure is mitigated by having a proper 3-2-1 backup strategy in place.
As for the higher I/O, frankly I never felt the need. A 1 Gbps home network is always the bottleneck anyway, and if you require that kind of disk throughput over your network, you are doing something wrong.
Even many 4K video streams would saturate your LAN before saturating your disks, unless you store uncompressed video.
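The claim checks out with a back-of-envelope calculation. The figures below are rough assumptions for illustration (about 25 Mbps for a compressed 4K stream, about 180 MB/s sequential read for one modern HDD), not measurements from the thread:

```javascript
// Back-of-envelope: how many 4K streams fit on a 1 Gbps LAN,
// versus what a single mechanical disk can deliver.
// Bitrates are rough assumptions, not measurements.
const lanMbps = 1000;    // 1 Gbps home network
const streamMbps = 25;   // typical compressed 4K stream
const hddMBps = 180;     // one modern HDD, sequential read

const streamsOnLan = Math.floor(lanMbps / streamMbps);
const hddMbps = hddMBps * 8;

console.log(`LAN fits ~${streamsOnLan} compressed 4K streams`);
console.log(`A single HDD (~${hddMbps} Mbps) already exceeds the LAN`);
```

So one spinning disk alone can outrun a gigabit network; a whole RAID array certainly does.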
10 × 4 TB = 40 TB of raw capacity can be matched with four 12 TB drives (36 TB usable in RAID5).
I doubt each of those 12 TB drives uses much more power than a 4 TB one, so the 28 €/month probably drops to about 14 €/month, and even that is an overestimate.
Over 120 months (10 years) of uptime, you should save enough to justify cutting down from 10 drives to 4.
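The savings figure above can be worked out directly, using the numbers from the thread (28 €/month for ten drives, the assumed generous 14 €/month for four):

```javascript
// Rough power-cost comparison over the 10-year uptime horizon,
// using the figures quoted in the thread. The 14 €/month value
// is the author's deliberately generous overestimate.
const months = 10 * 12;        // 120 months of uptime
const costTenDrives = 28 * months;
const costFourDrives = 14 * months;
const saved = costTenDrives - costFourDrives;

console.log(`10 drives: ${costTenDrives} €, 4 drives: ${costFourDrives} €`);
console.log(`Savings over 10 years: ${saved} €`);
```

That is roughly 1680 € saved, enough to pay for the four larger drives.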
I wouldn't use more than 4 or 6 disks in a home environment. Especially with mechanical drives, the 24/7 power consumption would get me very worried.
I run 4 × 8 TB SSDs: not cheap, but solid, low power AND low heat (even more important).
Consider also heat dissipation: at home you most likely don't have constant temperature and humidity, so many spinning disks can suffer from heat, and that will kill them faster.
Longevity… With so much space I would expect to keep it running a decade or more, so factor in 10 × 365 × 24 hours of operation: energy consumed, heat dissipated, and failure rate.
On top of that, whatever GPU and RAM you throw at it is meaningless; anything will work, even an Intel N100 NUC. Having enough cables and ports instead… Well.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
2 · 12 days ago
Yes, I was trying to be funny… Don't worry… ;)
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
2 · 12 days ago
Was it… Made with AI?
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
2 · 12 days ago
Thanks! Tomorrow I will see to uploading it to my wiki…
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
2 · 12 days ago
Ahahah, I like the “zero AI” logo idea; maybe I will use AI to create one… :)
Yes, I am that bad with graphics.
Anyway, check the main page of the wiki, where I explain why I did it.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
10 · 12 days ago
Yes it is, written by me by hand, 100%…
Zero AI too… Just old grumpy bashing
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What is Radicale and how do I use it? (English)
14 · 13 days ago
My experience with Radicale:
https://wiki.gardiol.org/doku.php?id=services%3Aradicale
I currently use it, from Android with DAVx⁵ (F-Droid).
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What's your self-hosting success of the week? (English)
2 · 15 days ago
NVIDIA Corporation GA104GL [RTX A4000] (rev a1)
From lspci
It has 16 GB of VRAM, not too much but enough to run gpt-oss 20B and a few other models pretty nicely.
I noticed that it's better to stick to a single model; I imagine that unloading and reloading the model in VRAM takes time.
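One way to avoid that reload cost is Ollama's `keep_alive` parameter on `/api/generate`, which keeps the model resident between requests. A minimal sketch; the model name and the 30-minute value are assumptions for illustration:

```javascript
// Sketch of an Ollama /api/generate request body that keeps a
// single model loaded in VRAM between calls, avoiding the
// unload/reload delay. Endpoint and fields follow Ollama's HTTP
// API; model name and keep_alive value are illustrative choices.
function buildGenerateRequest(prompt) {
  return {
    model: "gpt-oss:20b",  // stick to one model
    prompt,
    stream: false,
    keep_alive: "30m"      // keep the model in VRAM for 30 minutes
  };
}

// Usage, against a local Ollama instance:
// fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildGenerateRequest("Hello")),
// }).then(r => r.json()).then(j => console.log(j.response));
```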
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • What's your self-hosting success of the week? (English)
10 · 15 days ago
I plugged an NVIDIA GPU into my server and enabled Ollama to use it, diligently updated my public wiki about it, and am now enjoying real-time gpt-oss model responses!
I was amazed: response time cut from 3-8 minutes down to seconds. I have an Intel Core 7 with 48 GB of RAM, but even an oldish GPU beats the crap out of it.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • Thermostats compatible with selfhosted Home Assistant (English)
3 · 15 days ago
I use Zigbee and have lots of the Sonoff TRVs. I tried a few Chinese ones and definitely DO NOT recommend them.
Buy the Sonoff ones; they still get firmware updates after two years. Batteries last about a year in my experience, which is good too.
Keep it private, only for yourself? I do this and have never been rate limited.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • Opnsense, tailscale and headscale (English)
5 · 17 days ago
No experience with Tailscale, but I have OPNsense on a firewall appliance like yours and run two WireGuard networks: one from OPNsense itself and one from my home server, which is in the DMZ. They both work just fine…
They have different scopes and remote peers, but both use my VPS as entry gateway since I am behind CGNAT.
I run Continuwuity since day zero, actually since Conduwuit. Works great, super stable and very lightweight.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • Ideon: I'm building a self-hosted project cockpit on an infinite canvas (v0.5 update) (English)
3 · 21 days ago
I am with you my friend, all the way.
Shimitar@downonthestreet.eu to Selfhosted@lemmy.world • Ideon: I'm building a self-hosted project cockpit on an infinite canvas (v0.5 update) (English)
52 · 21 days ago
Refreshingly, not an AI-made thing…
Nothing bad about using AI, but:

An LLM is just a tool, and its usage will only increase over time.
There is nothing intrinsically bad in that: as with any tool, it's not bad per se; what matters is how we use it.
So push for ethical and proper usage of AI instead.
Projects will use AI more and more, and there is nothing bad in that, provided it's used properly: vetted, tested, verified and so on.