Note: I am not affiliated with this project in any way. I think it’s a very promising alternative to things like MinIO and deserves more attention.
S3 storage is simpler than running scp -r to a remote node, because you can copy files to S3 in a massively parallel way, while scp is generally sequential. The API is also very easy to protect, since it’s just HTTP (and on top of that, it’s significantly faster than WebDAV).
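For example, something like this pushes a whole site in parallel (the endpoint URL and bucket name are placeholders for whatever your Garage setup uses; the first line just raises the default concurrency):

# optional: allow more concurrent uploads than the default of 10
aws configure set default.s3.max_concurrent_requests 20
aws --endpoint-url https://garage.example.com s3 sync ./public s3://my-blog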
S3 goes beyond the scope you describe. You disqualify yourself with such statements
Clearly I mean Garage here when I write “S3.” It is significantly easier and faster to run
hugo deploy
and let it talk to Garage than to figure out where on a remote node the nginx k8s pod has its data PV mounted and scp files into it. Yes, I could automate that. Yes, I could pin the blog’s pod to a single node. Yes, I could use a stable host path for that and use rsync, and I could skip the whole kubernetes insanity for a static html blog. But I somewhat enjoy poking the tech, and yes, using Garage makes deploys faster and gives me a stable, well-known API endpoint for both data transfers and for serving the content, with very little maintenance required to make it work.
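For reference, the relevant bit of my Hugo config looks roughly like this (bucket name, endpoint and region value are placeholders; credentials come from the usual AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables):

[deployment]

[[deployment.targets]]
name = "garage"
# s3ForcePathStyle is what most self-hosted S3-compatible stores expect;
# the region value is arbitrary but has to be set to something
URL = "s3://my-blog?endpoint=https://garage.example.com&s3ForcePathStyle=true&region=garage"

After that, a plain hugo deploy diffs the local public/ directory against the bucket and only uploads what changed.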
I don’t follow. S3 is an AWS service that these tools emulate locally by providing the same API. But I’m happy to accept that there’s just some misunderstanding 😃
In the context of my comments here, any mention of “S3” means “S3-compatible” in the way that’s implemented by Garage. I hope that clarifies it for you.
Thank you
Nobody should be using SCP; use rsync instead.
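For a static site that’s usually something like (paths are just examples):

# -a preserves permissions/timestamps, --delete removes files that no longer exist locally
rsync -az --delete public/ user@host:/var/www/blog/

which only transfers the files that actually changed instead of recopying everything.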