My guess is log files are being written to it? Might want to install a proper drive internally and redirect log storage there. With less write activity the USB drive shouldn't heat up anywhere near as much.
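If it's a systemd-based distro, a couple of ways to cut those writes (paths and sizes here are assumptions, adjust to your setup):

```
# Option 1: keep the systemd journal in RAM so it never touches the USB drive
# (add to /etc/systemd/journald.conf; note logs are lost on reboot)
[Journal]
Storage=volatile
RuntimeMaxUse=64M

# Option 2: once an internal drive is in, bind-mount /var/log onto it
# (add to /etc/fstab; /mnt/internal is an assumed mount point)
/mnt/internal/log  /var/log  none  bind  0  0
```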
Nothing too special, just had to do some fiddling to get the Apache reverse proxy working correctly. I believe they now have a pre-made example for it, but back then they only had one for nginx. I stick with Apache because that's still what I know. I might start learning nginx, but my main work isn't in web stuff.
Mine is nice and quick as far as the web interface and general functions go. However, I run it on a server at home and my upload speed isn't the best, so if I need to pull a larger file (Files On Demand enabled) then the transfer is understandably a bit sluggish.
Hosted on a VM with 16GB RAM and 4 cores, using the Nextcloud AIO Docker deployment option, all behind an Apache reverse proxy (I have a bunch of other services on another VM that all have reverse proxy access in place as well).
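For anyone wanting to do the same, the Apache side ends up looking roughly like this. This is just a sketch along the lines of what the AIO docs describe; the hostname, cert paths, and port 11000 (AIO's default web port) are assumptions for a typical setup, and you'd need mod_proxy, mod_proxy_http, mod_proxy_wstunnel, and mod_rewrite enabled:

```
<VirtualHost *:443>
    ServerName cloud.example.com

    SSLEngine on
    SSLCertificateFile /etc/letsencrypt/live/cloud.example.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/cloud.example.com/privkey.pem

    # Pass websocket upgrades through to the AIO container
    RewriteEngine On
    RewriteCond %{HTTP:Upgrade} websocket [NC]
    RewriteCond %{HTTP:Connection} upgrade [NC]
    RewriteRule ^/?(.*) "ws://localhost:11000/$1" [P,L]

    # Everything else proxies to the container's web server
    ProxyPreserveHost On
    ProxyPass / http://localhost:11000/
    ProxyPassReverse / http://localhost:11000/
</VirtualHost>
```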
In very basic terms, and why you want to pay attention to them:
Attack surface is the set of ports and services you expose to the internet. Keep this as small as possible to reduce the ways your setup can be attacked (a quick way to check what you're exposing is sketched after this list).
Network topology is the layout of your home network. Do you have multiple VLANs/subnets, or firewalls that restrict traffic between internal networks? A DMZ is probably the simplest approach, and it's available on some home-grade routers. The goal is that if your server gets breached, the damage that can be done to other devices on the network is minimised.
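For that attack surface check, something like nmap run from outside your network shows what's actually reachable (the IP below is a placeholder for your public address):

```
# Scan all TCP ports, skipping ping-based host discovery; run this from
# outside your network (a VPS, or a phone on mobile data) for a true picture
nmap -p- -Pn 203.0.113.10
```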
The first-year price is a "loss leader" discount. They get you in the door, then make a profit from you in future years.
Namecheap has a bit of a reputation (as you can see here, with a few people warning of poor support). Spaceship seems to be a bit of an offshoot/addition they have created, partly because it doesn't seem to be a 1:1 comparison with their existing offering, and maybe partly to get away from their existing reputation?
However, it's not an entirely bad idea to separate your registrar from your DNS provider: if one goes down, you still have access to the other to make changes. I used Namecheap in the past because it was cheap, and Cloudflare for DNS. If you're only using either as a registrar, it probably won't matter much at all, since you're probably not changing nameservers often, if at all, once they're set.
If you are going to use your desktop, I would suggest putting all of the self-hosted services into a VM.
This means if you decide you do want to move it over to dedicated hardware later on, you just migrate the VM to the new host.
This is how I started out before I had a dedicated server box (a refurbished office PC repurposed as a hypervisor).
Then host whatever/however you want to on the VM.
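As a rough sketch of that migration (assuming a libvirt/KVM hypervisor; the VM name and paths are made up), it's mostly just moving a disk image and a definition file:

```
# On the old host: shut the VM down, dump its definition, copy both across
virsh shutdown services-vm
virsh dumpxml services-vm > services-vm.xml
scp services-vm.xml newhost:~/
scp /var/lib/libvirt/images/services-vm.qcow2 newhost:/var/lib/libvirt/images/

# On the new host: register the definition and boot it
virsh define ~/services-vm.xml
virsh start services-vm
```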
I've been using Trilium (https://github.com/zadam/trilium). There are desktop clients but no mobile clients; however, the web interface works well enough for me that I don't mind. Notes update in near real-time across multiple machines when you make edits through the web app (assuming internet connectivity, of course).
If you're already self-hosting Nextcloud you might want to look at Nextcloud Notes as well.
If you move to Office 365, it is possible to create an email transport rule to handle this. Effectively, any non-existent address gets sent to the mailbox you specify.
Yes, they aren't the cheapest option, and it gets meme'd that it should be called Office 364, 363, etc., but it is a solid service.
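Rough sketch of that catch-all rule via Exchange Online PowerShell (all the names here are placeholders, and you'd need a group containing your real mailboxes for the exception):

```
# Requires the ExchangeOnlineManagement module
Connect-ExchangeOnline

# The domain has to accept mail for unknown recipients instead of rejecting it
Set-AcceptedDomain -Identity "example.com" -DomainType InternalRelay

# Redirect anything sent to the domain that isn't an existing mailbox
New-TransportRule -Name "Catch-all redirect" `
    -RecipientDomainIs "example.com" `
    -ExceptIfSentToMemberOf "all-mailboxes@example.com" `
    -RedirectMessageTo "catchall@example.com"
```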
Very loosely, it would act as a caching or proxy service, from what I understand.
My understanding is that when you subscribe to community "x" on server "y", your server "z" starts downloading all of the content from that community so it can serve it to you locally. I don't know how quickly the ActivityPub protocol fetches new posts/comments, whether it's real-time or some kind of intermittent pull or push.
Another vote for self-hosting a Vaultwarden (Bitwarden) setup.
I've run it in a Docker container for a while; it's solid, and the browser integration/desktop apps/web access mean my passwords are always close at hand.
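For anyone wanting to try it, the container side is about this simple (the host path and port are assumptions, and you'd still want HTTPS in front of it, e.g. via a reverse proxy):

```
# Official image; /srv/vaultwarden is an assumed host path for persistent data
docker run -d --name vaultwarden \
  -v /srv/vaultwarden:/data \
  -p 127.0.0.1:8080:80 \
  --restart unless-stopped \
  vaultwarden/server:latest
```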
Yes, it's a bad idea to do it this way. The most likely time for a RAID array to fail is during a rebuild, as that's a whole lot of sustained drive activity.
Better to perform a backup or copy, power down, remove all the old drives, install the new ones, power back up, configure a new array (most people recommend RAID 6 at a minimum, with no hot spare, so you have two-drive redundancy), then restore or copy the data back.
This way you can also keep the old drives as a cold backup of sorts, potentially reimporting the configuration if needed.
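If it happens to be Linux software RAID, the "configure a new array" step is basically a one-liner (device names here are assumptions, and mdadm is just one example; the same idea applies to hardware RAID or ZFS):

```
# Create a new 4-disk RAID 6 array: two-drive redundancy, no hot spare
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde
```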
If you have Docker containers and other stuff all on that USB drive, I'd really recommend getting it all off the USB (not just logging) and onto a proper drive of some kind. USB thumb sticks are not reliable long-term storage; you will wake up one day to find the drive failing, and there's a good chance you lose everything on it with little to no warning.
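Moving Docker's storage once a proper drive is in place is a small change (the mount point below is an assumption for wherever the new drive ends up):

```
# Point Docker's storage at the new drive via /etc/docker/daemon.json:
#   { "data-root": "/mnt/ssd/docker" }

# Then move the existing data across and restart Docker:
systemctl stop docker
rsync -a /var/lib/docker/ /mnt/ssd/docker/
systemctl start docker
```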