signing into cloud services and downloading apps is just so much easier to do!
This is actually true, but it doesn’t speak to why self hosting is “impossible” so much as to how the lack of education around computers has reached an inflection point.
There’s no reason why self hosting should be some bizarre concept. In another reality, we would all have local servers and firewalls that push our content out to the wider internet, and perhaps even intranet-based notes. Society as a whole would be better off if we had chosen to structure the internet that way instead of handing the keys to the biggest companies on the stock market.
I’ll give this podcast a listen, though, as it might be interesting. I think the reality is that some more Docker frontends might help casual users jump into the realm of self hosting – especially by setting up proxy managers and homepage sites (like Homarr) that work intuitively and never require you to enter ports and IPs (though fearing those is also an education problem, not a problem with the concept itself).
I have a kind of complicated system for organizing my music files – some of which admittedly requires way too much maintenance, but it might be of interest to some.
For my general “commercial” music collection, the folder structure is roughly
Music/%Release Artist | Band%/%Album%[%Year%]/%Track No.% - %Title%.%Format%
This is simple to maintain. I basically just use MusicBrainz Picard and set up appropriate paths.
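For anyone who prefers scripting over a tagger GUI, the pattern above can be expressed as a small function. This is just a hypothetical Python sketch of the same naming scheme – it isn’t Picard’s actual scripting language, and all names and tag values here are illustrative:

```python
from pathlib import Path

# Illustrative only: builds a path matching
# Music/%Artist%/%Album%[%Year%]/%Track No.% - %Title%.%Format%
def library_path(artist: str, album: str, year: int,
                 track_no: int, title: str, fmt: str) -> Path:
    return (Path("Music") / artist / f"{album}[{year}]"
            / f"{track_no:02d} - {title}.{fmt}")

print(library_path("Daft Punk", "Discovery", 2001, 1, "One More Time", "flac"))
```

In Picard itself you’d set the equivalent pattern once in the file-naming options and let it handle the renaming on save.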
For my soundtrack collection, it gets a bit more complicated. For Anime/Film/Whatever, I have it sorted basically the same way but in a different root folder. So something like:
Music/Anime/%Release Artist | Band%/%Album%[%Year%]/%Track No.% - %Title%.%Format%
Which is also easy to maintain since most of these also have commercial releases.
But games are sorted more strangely. To put it simply, I have a folder structure that puts the console or platform first, followed by the game name and then the loose files. Since some of these files are in emulated formats (.vgm, .nsf, .spc), I generally don’t bother renaming them; I keep them as-is and trust that the music program in question has tagging support. It also means that sorting by console is mostly beneficial for quickly finding emulated file formats, but YMMV and I have regretted the choice on occasion.
Obviously game soundtracks are spotty when it comes to releases. Some companies, like Square Enix, have reliable metadata you can get through MusicBrainz Picard, but others have no tagging at all or very incorrect tag values. Because of this, I generally use something like VGMDB, which is usually higher quality, though not always. I do have to resort to manually correcting files on occasion.
If anyone has a nice automated way to sort this stuff out, it would be a real benefit to me as well.
Spotify serves mp3s because it uses less bandwidth and most people can’t tell the difference on their 30€ Bluetooth headset.
I think this highlights a bigger issue when it comes to this discussion.
The issue isn’t the mp3 format – for the most part, any lossy encoder’s output can sound good with the right settings. The problem is that, unlike flac, lossy files are essentially untrustworthy audio formats: nothing in the file guarantees it was encoded well. So when people say mp3 sounds bad, it’s only a half-truth, in the same way that it’s a half-truth to say that people cannot tell a difference. You are putting trust in the person who encoded the audio to make the right choices, and the encoder is putting trust in the idea that the person consuming the media can’t tell the difference.
When it comes to skimping on bandwidth because most users can’t hear the difference, that’s a huge cop-out made on behalf of a company that can do better. While Apple is pretty notorious for making terrible decisions for arbitrary reasons, even they respect the user enough to let you opt into higher-quality audio formats. It’s decisions like these that cement Apple as the kings of the creative computer user.
As an update, I think this was a side effect of how WordPress / GitLab was set up, where it was expecting IP:PORT, which would force a redirect to the port specifically. Using the subdomain as the web URL setting seemed to resolve my problem. Thanks for the replies from everyone, as all of the advice here is still really useful!
Ahh crap.
What’s the best no nonsense alternative?
It won’t be able to do much, and even if you can do some things, you have to keep in mind that the energy efficiency would be poor enough that you’d still be better off with a cheap Pi from a cost perspective.
The original creator was only involved in the “first season”, all the way back in 2000. Two new seasons were made in the 2010s when Adult Swim decided they needed to “bring it back” with a completely new director and animation studio. It wasn’t good, and now they’re committing to two more new seasons.
Most of us agree that all the follow-up seasons are non-canon and a mess. They distract from what a good series the original 6-episode OVA was.
I always come up with a naming scheme and then immediately forget it either because I’m in a rush setting up a computer and forget to name the machine or because I get tired of trying to keep track of which machine is what.
I’m kind of glad, only because I was thinking about buying a NUC for Windows development purposes instead of using a VM or dual boot – so it looks like that option will be available for me in the future.
Kavita and Jellyfin both sold me on self hosting.
I no longer have to worry about transferring my media to every computer; it’s accessible now via the web browser, which is ideal.
You would think, of all the communities that would be comfortable with migration, it would be the folks from /r/selfhosted!
Fellow user from there, btw, nice to see we’ve got a decent pool of people on this board instead.
My understanding is that you’d need a combination of a reverse proxy and a general proxy manager. Nginx Proxy Manager handles a lot of these tasks for me on my website, though most of my use is simple redirects.
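Under the hood, a “simple redirect” in Nginx Proxy Manager boils down to an nginx server block roughly like this (hostnames are hypothetical, and NPM writes and manages the actual config for you):

```
server {
    listen 80;
    server_name old.example.com;
    # Permanent redirect, preserving the requested path
    return 301 https://new.example.com$request_uri;
}
```

The point of a proxy manager is just that you get this via a web UI instead of editing config files by hand.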
There is an internal Docker DNS; you can resolve containers by service name / container_name instead of by IP.
Yes, and you can also control that by messing with Docker network groups. I find networking into Docker services from the host to be super simple.
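As a sketch of what that looks like in a compose file (the service names and images here are just examples): two services placed on the same user-defined network can reach each other by service name, with no published ports or IPs needed between them.

```yaml
services:
  app:
    image: nginx:alpine
    networks: [backend]
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
    networks: [backend]

networks:
  backend:
    driver: bridge

# Inside "app", the database is reachable simply as the hostname "db"
# via Docker's internal DNS on the shared "backend" network.
```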
What I haven’t figured out yet is whether or not I can give my docker services their own IP on my router for access from another system on a fixed or reserved IP.
Consider using a USB 3 SSD as your boot drive if you want long-term usage from your Pi. The SD card is prone to failing relatively quickly on Raspbian, and it’s even worse on OSes that aren’t optimized for the Pi directly.