• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 22nd, 2023


  • There’s also the option of setting up a Cloudflare Tunnel and only exposing Immich over that tunnel. The HTTPS certificate is handled by Cloudflare, and you’d need to use the Cloudflare name servers as your domain’s name servers.

    Note that this means Cloudflare will proxy to you and essentially become a man-in-the-middle: you --HTTPS--> Cloudflare --HTTP--> homelab Immich. The connection between Cloudflare and your homelab could be encrypted as well, but Cloudflare remains the man-in-the-middle and can see all data that passes through.
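
    If you want to double-check that requests really go through Cloudflare’s proxy (rather than hitting your homelab directly), a tiny sketch like this works (the hostname is a placeholder; the headers checked are ones Cloudflare adds to proxied responses):

    ```python
    # Sketch: confirm the public endpoint is served through Cloudflare's proxy.
    # "immich.example.com" is a placeholder hostname.
    import requests

    resp = requests.get("https://immich.example.com", timeout=10)
    print("Server:", resp.headers.get("Server"))   # "cloudflare" when proxied
    print("CF-RAY:", resp.headers.get("CF-RAY"))   # present when proxied
    ```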

  • It’s looping back to itself? The Location header is pointing back at the same URL.

    Is it possible your backend is sending an HTTP 301 redirect back to Caddy, which then forwards it to your browser?

    Possibly some old configuration on your backend left over from the Let’s Encrypt setup you had before? Can you check the logs from your backend and see what they’re sending back?

    I’m assuming the reverse proxy might be replacing the Host header with the IP, and that your Nextcloud backend is replying with a redirect to https://nextcloud.domain.com:443

    Edit: I think this is the most incoherent message I’ve written to date.

    I think your reverse proxy is forwarding the request to your Nextcloud, but replacing the Host header with the IP you specified as the reverse proxy target. As a result the request arrives at your Nextcloud with the IP as the Host.

    Your Nextcloud installation then sends back a 301 redirect to tell the client it should connect to https://nextcloud.domain.com. This arrives through Caddy at your browser and goes through the same loop until you’ve reached the max redirects.

    Have a look at your Nextcloud backend’s HTTP logs to see what requests are arriving there and which Host (HTTP header) they carry on that IP. You can also check it from the outside, as sketched below.
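
    A minimal sketch of that check (the IP is a placeholder for your backend; nextcloud.domain.com is from your setup): hit the backend directly, once with the IP as Host and once with the real hostname, with redirects disabled, and compare the Location headers you get back.

    ```python
    # Sketch: see what redirect the Nextcloud backend returns for different Host headers.
    # 192.168.1.10 is a placeholder for the backend address Caddy proxies to.
    import requests

    backend = "http://192.168.1.10"
    for host in ("192.168.1.10", "nextcloud.domain.com"):
        resp = requests.get(backend, headers={"Host": host},
                            allow_redirects=False, timeout=10)
        print(host, resp.status_code, resp.headers.get("Location"))
    ```

    If the IP-as-Host request comes back as a 301 to https://nextcloud.domain.com while the proper-Host request doesn’t, the proxy is most likely rewriting the Host header.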

  • If you created a new account you should have configured a root email address for it. That address should have received an email to log in and set the initial password, IIRC.

    You can get an estimate of what it’s going to cost by going to https://calculator.aws

    Uploading to AWS shouldn’t really cost much, unless you’re sending a lot of PUT API requests. Since these are backups I’m going to guess the files are large, will be uploaded as multipart uploads, and will therefore take multiple API calls each.

    My suggestion would be to upload to S3 and have it automatically transition to Glacier for you using a lifecycle rule (a sketch of both steps is below).
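
    A rough boto3 sketch, assuming made-up bucket, key, and part-size values (upload_file handles the multipart upload automatically; the lifecycle rule then moves objects under the prefix to Glacier after a day):

    ```python
    # Hedged sketch: bucket name, keys, and part sizes are placeholders.
    import boto3
    from boto3.s3.transfer import TransferConfig

    s3 = boto3.client("s3")
    bucket = "my-backup-bucket"  # placeholder

    # upload_file switches to multipart above the threshold; larger parts
    # mean fewer UploadPart API calls per file.
    cfg = TransferConfig(multipart_threshold=64 * 1024 * 1024,
                         multipart_chunksize=128 * 1024 * 1024)
    s3.upload_file("backup-2024-01.tar.gz", bucket,
                   "backups/backup-2024-01.tar.gz", Config=cfg)

    # Lifecycle rule: transition everything under backups/ to Glacier after 1 day.
    s3.put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={
            "Rules": [{
                "ID": "backups-to-glacier",
                "Status": "Enabled",
                "Filter": {"Prefix": "backups/"},
                "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
            }]
        },
    )
    ```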

    Cost Explorer would be your best bet to get an idea of what it’ll cost you at the end of the month, as it can do a forecast. There is (unfortunately) no way to see how many API requests you’ve already made, IIRC.

    Going by the S3 pricing page, PUT requests are $0.005 per 1,000 requests (N. Virginia).

    Going by a docs example:

    For this example, assume that you are generating a multipart upload for a 100 GB file. In this case, you would have the following API calls for the entire process. There would be a total of 1002 API calls. 
    

    https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html

    Assuming you’re uploading 10 × 100 GB following the upload scheme mentioned above, you’d make about 10,020 API calls, which would cost you roughly 10 × $0.005 ≈ $0.05 (the arithmetic is spelled out below).
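
    Back-of-the-envelope version of that calculation, using the prices and the docs example quoted above (your actual file sizes and part counts will differ):

    ```python
    # Rough request-cost estimate for 10 x 100 GB multipart uploads.
    put_price_per_1000 = 0.005        # USD per 1,000 PUT requests, N. Virginia
    calls_per_100gb_upload = 1002     # from the AWS multipart upload docs example
    uploads = 10

    total_calls = uploads * calls_per_100gb_upload          # 10,020 calls
    request_cost = total_calls / 1000 * put_price_per_1000  # ~0.05 USD
    print(f"{total_calls} API calls ≈ ${request_cost:.2f}")
    ```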

    Then there would be the storage cost on Glacier itself, plus the one day of storage on S3 before it transitions to Glacier.

    Retrieving the data will also cost you, as will downloading the retrieved data from S3 back to your device. If we’re talking about a lot of small files you might incur some additional costs for the KMS key you used to encrypt the bucket.

    I typed all this on my phone and it’s not very practical to research like this. I don’t think I’d be able to give you a 100% accurate answer even if I were on my PC.

    There are some hidden costs, which aren’t hidden if you know they exist.

    Note that (imo) AWS is mostly aimed at larger organisations, and a lot of things (like VMs) are often cheaper elsewhere. It’s the combination of everything AWS does and can do that makes it worthwhile. Once you have your data uploaded to S3 you should be able to see a decent estimate in Cost Explorer.

    Note that extracting all that data back from S3 to your on-prem (or anywhere else, should you decide to leave AWS) will cost you a lot more than what it cost you to put it there.

    Hope this helps!


  • Yup yup! I agree wholeheartedly! WordPress is indeed a whole other mess. Never been a big fan of the PHP CMSes (WordPress, Joomla, Drupal).

    I’ve seen firsthand the mess that can become (though I wasn’t working on the project).

    Go has piqued my interest as well (for Terraform modules and/or Kubernetes operators). I wonder if the ownCloud project is working out better (performance-wise). It aims at a different market segment than NC though. I think it was written in Go.

    It isn’t open source anymore, I think? (Didn’t google it; very possible that I’m wrong.)

    Best of luck with your Go project (if you decide to kickstart it). I’d contribute if I could, though you’d probably be better off code-quality-wise with somebody with more experience :D.


  • I mean, there are plenty of languages that have this overhead.

    A base Laravel or symfony installation shows a landing page in 30-50ms (probably).

    I’ve written (in a lightweight framework that no longer exists) a program to encrypt/decrypt strings using XML messages over HTTP requests.

    The whole call took 40-60 ms, and about 40-50% of that was loading the serializer. The thing was processing a few hundred requests per minute at peak, which is a lot more than the average Nextcloud installation. The server wasn’t really budging (and wasn’t exactly a beast either).

    I’m definitely not denying that the JIT compiler adds overhead. But as in my example above, it’s also not like it’s adding 100 ms or more per request.

    If you have such a high-performance app that you’re considering a different transport than HTTP because of throughput, you’re likely better off using something else.

    Circling back to the original argument, my feeling remains that the same codebase in Go or Rust wouldn’t necessarily perform a lot better, if you just account for PHP’s execution speed plus the overhead of the JIT compiler.

    If you optimised it in Rust/Go it would likely be faster. But I feel like the codebase could use some love/refactoring, and doing that is more difficult when you already have:

    • a large user base on various hardware
    • a large plugin community that would need to refactor all their plugins
    • a need for compatibility with all the stuff that is already there (files, databases, migrations)

    You don’t want to piss off your entire userbase. Now I feel like I’d like to try it myself and look at the source though :'). (I’m not saying I can do better; it’s been a couple of years.)


  • There are libraries which allow you to do stuff async in PHP (https://github.com/amphp/amp). It’s not all async by default like JavaScript. A lot of common corporate languages right now (Python, Java, C#, …) are synchronous rather than asynchronous by default, but allow you to run asynchronous code; a tiny Python illustration of that is below.
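
    A minimal sketch of that “sync by default, async when you opt in” model, in Python (purely illustrative, similar in spirit to what amphp adds to PHP):

    ```python
    # Synchronous by default: plain functions run one after another.
    # Async is opt-in via asyncio.
    import asyncio

    async def fetch(name: str) -> str:
        await asyncio.sleep(1)          # stand-in for a slow I/O call
        return f"{name} done"

    async def main() -> None:
        # Three I/O-bound tasks run concurrently instead of 3 x 1 s sequentially.
        print(await asyncio.gather(fetch("a"), fetch("b"), fetch("c")))

    asyncio.run(main())
    ```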

    It all has its place. I’m not saying making it async won’t improve some of the issues. Running a program that does 15 async processes might cause some issues on smaller systems like RPis that don’t have the compute capacity of a laptop/desktop with 20 cores.

    Having said that. I can’t back that up at all :D.

    Thanks for your insights though. I appreciate the civil discussion :)


  • I can follow that. I think most applications that keep running (like a Go webserver) are more likely to cache certain information in memory, while in PHP you’re more inclined to take a linear approach to development, as in “these are all the things I need to bootstrap, fetch and run before I can answer the query”.

    As a result the fetching of certain information wouldn’t be cached by default and over time that might start adding up. The base overhead of the framework should be minimal.

    You (Nextcloud) are also subject to whoever is writing plugins. Is Nextcloud slow because it is slow, or because the 20 plugins people install aren’t doing any caching and a single page load runs 50 queries? That would be unrelated to NC itself, but I have no idea if any plugin validation is done.

    Then again, I could be talking completely out of my ass. I haven’t done much with NC except install it on my RPi 4 and be a bit discontented with its performance. At the same time the browser experience on the RPi was also a bit disappointing, so I never dug into why it wasn’t performing very well. I assumed it was too heavy for the Pi. This was 4 years ago, mind you.

    My main experience with frameworks like Laravel and Symfony is that they’re pretty low-overhead. But the overhead is there (even if it is only 40 ms).

    The main framework overhead would be negligible, but if you’re dynamically building the menu on every request using DB queries, it quickly stops being negligible (see the sketch below).
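
    A toy illustration of that, assuming a made-up build_menu_from_db function: cache the expensive, rarely-changing result instead of recomputing it on every request.

    ```python
    # Toy sketch: cache a menu built from (simulated) DB queries instead of
    # rebuilding it for every request. Function names are made up.
    import time
    from functools import lru_cache

    def build_menu_from_db() -> list[str]:
        time.sleep(0.05)                    # stand-in for ~50 ms of DB queries
        return ["Files", "Photos", "Calendar"]

    @lru_cache(maxsize=1)
    def cached_menu() -> tuple[str, ...]:
        return tuple(build_menu_from_db())  # computed once, reused afterwards

    start = time.perf_counter()
    for _ in range(100):                    # simulate 100 requests
        cached_menu()
    print(f"100 requests took {time.perf_counter() - start:.3f}s")
    ```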


  • I think Nextcloud suffers more from carrying along legacy code than from PHP itself. There’s tons of stuff written in PHP that performs well.

    It’s definitely not the right tool for every job, but it’s also not the wrong tool for every job, which goes for most programming languages. I’ve seen it work fine in high-traffic environments. It also carries a legacy reputation from PHP 5 and before. I haven’t kept up with it much in the last few years though.

  • Which Nextcloud tasks do you think PHP is unsuited for? (Genuine question.)