

🤙


🤙


Anubis forces the site to reload when doing the normal PoW challenge! Meta Refresh is a sufficient means to block 99% of all bot traffic without being any more burdensome than PoW.
You’ve failed to demonstrate why meta-refresh is more burdensome than PoW and have pivoted to arguing the point I was making from the start as though it was your own. I’m not arguing with you any further. I’m satisfied that I’ve convinced any readers of our discussion.


You will have people complain about their anti-fingerprinting being blocked with every bot-management solution. Your ability to navigate the internet anonymously is directly correlated with a bot's ability to scrape. That has never been my complaint about Anubis.
My complaint is that the calculations Anubis forces you to do are an absolutely negligible burden for a bot to solve. The hardest part is just having a JavaScript interpreter available. Making the author of the scraper write custom code to deal with your website is the most effective way to prevent bots.
Think about how much computing power AI data centers have. Do you think they give a shit about hashing some values for Anubis? No. They burn more compute generating a single LLM answer than a thousand Anubis challenges would cost. PoW is a backwards solution.
Please think. Captchas worked because they were hard for a computer to solve but easy for a human. PoW is the opposite.
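To put rough numbers on "negligible", here's a sketch in TypeScript (Node) of solving an Anubis-style challenge. I'm assuming the scheme is "find a nonce such that SHA-256(challenge + nonce) starts with N zero hex digits" with N = 4, which is my understanding of the default; treat the exact parameters as my assumption, they don't change the conclusion.

```ts
// Back-of-envelope cost of an Anubis-style PoW challenge.
// Assumption (mine, not lifted from Anubis's source): find a nonce such that
// sha256(challenge + nonce) starts with `difficulty` zero hex digits.
import { createHash } from "node:crypto";

function solve(challenge: string, difficulty: number): number {
  const prefix = "0".repeat(difficulty);
  for (let nonce = 0; ; nonce++) {
    const digest = createHash("sha256").update(challenge + String(nonce)).digest("hex");
    if (digest.startsWith(prefix)) return nonce + 1; // hashes tried
  }
}

const start = Date.now();
const hashes = solve("example-challenge-string", 4);
// Expected work at difficulty 4 is ~16^4 ≈ 65k hashes; a single laptop core
// does this in tens of milliseconds, which is noise next to generating one LLM answer.
console.log(`${hashes} hashes in ${Date.now() - start} ms`);
```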
In the current shape Anubis has zero impact on usability for 99% of the site visitors, not so with meta refresh.
Again, I ask you: What extra burden does meta-refresh impose on users? How does setting a cookie and immediately refreshing the page burden the user more than making them wait longer while draining their battery before doing the exact same thing? Its strictly less intrusive.


And how do you actually check for working JS in a way that can’t be easily spoofed? Hint: PoW is a good way to do that.
Accessing the browser's APIs in any way is way harder to spoof than some hashing. I already suggested checking if the browser has graphics acceleration. That would filter out the vast majority of headless browsers too. PoW is just math and is easy to spoof without running any JavaScript. You can even do it faster than real JavaScript users with something like Rust or C.
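Roughly what I mean by a graphics-acceleration check, as a browser-side TypeScript sketch. The /bot-check endpoint and the software-renderer heuristic are my own placeholders, not anything Anubis ships:

```ts
// Heuristic probe for real graphics acceleration. A raw HTTP client can't
// answer this at all, and most headless setups fall back to a software
// renderer (SwiftShader, llvmpipe), which shows up in the renderer string.
function probeRenderer(): string | null {
  const gl = document.createElement("canvas").getContext("webgl");
  if (!gl) return null; // no WebGL context at all
  const ext = gl.getExtension("WEBGL_debug_renderer_info");
  // Unmasked renderer string, e.g. "ANGLE (NVIDIA, GeForce RTX 3060 ...)".
  // Some browsers refuse to expose it, so treat null as "masked", not "bot".
  return ext ? (gl.getParameter(ext.UNMASKED_RENDERER_WEBGL) as string) : "masked";
}

const renderer = probeRenderer();
const looksSoftware = renderer === null || /swiftshader|llvmpipe|softpipe/i.test(renderer);

// "/bot-check" is a hypothetical reporting endpoint for this sketch.
void fetch("/bot-check", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ renderer, looksSoftware }),
});
```

It's a heuristic, not proof, but it forces the scraper to ship a GPU-capable browser instead of a bare JS interpreter.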
Meta refresh is a downgrade in usability for everyone but a tiny minority that has disabled JS.
What are you talking about? It just refreshes the page without doing any of the extra computation that PoW does. What extra burden does it put on users?


LOL


You are arguing a strawman. Anubis works because most AI scrapers (currently) don’t want to spend extra on running headless Chromium
WTF, that’s what I already said? That was my entire point from the start!? You don’t need PoW to force headless usage. Any JavaScript challenge will suffice. I even said the Meta Refresh challenge Anubis provides is sufficient and explicitly recommended it.


Well, in most cases it would be Python requests, not curl. But yes, forcing them to use a browser is the real cost. Not just in CPU time but in programmer labor. PoW is overkill for that though.


Anubis is that it has a graded tier system of how sketchy a client is, changing the kind of challenge based on a weighted priority system.
Last I checked that was just User-Agent regexes and IP lists. But that’s where Anubis should continue development, and hopefully they’ve improved since. Discerning real users from bots is how you do proper bot management. Not imposing a flat tax on all connections.


Then there was a paper arguing that PoW can still work, as long as you scale the difficulty in such a way that a legit user
Telling a legit user from a fake user is the entire game. If you can do that you just block the fake user. Professional bot blockers like Cloudflare or Akamai have machine learning systems to analyze trends in network traffic and serve JS challenges to suspicious clients. Last I checked, all Anubis uses is User-Agent filters, which is extremely behind the curve. Bots are able to get down to faking TLS fingerprints and matching them with User-Agents.


It’s like you didn’t understand anything I said. Anubis does work. I said it works. But it works because most AI crawlers don’t have a headless browser to solve the PoW. To operate efficiently at the high volume required, they use raw HTTP requests. The vast majority are probably using the basic Python requests module.
You don’t need PoW to throttle general access to your site, and that’s not the fundamental assumption of PoW anyway. PoW assumes (incorrectly) that bots won’t pay the extra FLOPs to scrape the website. But bots are paid to scrape the website; users aren’t. They’ll just scale horizontally and open more parallel connections. They have the money.


I’ve repeatedly stated this before: Proof of Work bot-management is only Proof of JavaScript bot-management. It is nothing for a headless browser to bypass. Proof of JavaScript does work and will stop the vast majority of bot traffic. That’s how Anubis actually works. You don’t need to punish actual users by abusing their CPU. PoW is a far higher cost on your actual users than on the bots.
Last I checked, Anubis has a JavaScript-less strategy called “Meta Refresh”. It first serves you a blank HTML page with a <meta> tag instructing the browser to refresh and load the real page. I highly advise using the Meta Refresh strategy. It should be the default.
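For anyone who hasn’t seen it, the general pattern looks something like this; the refresh URL and token here are placeholders of mine, not Anubis’s actual endpoints:

```html
<!-- Minimal sketch of a meta-refresh challenge (illustrative, not Anubis's exact markup).
     The browser immediately follows the refresh URL, the server sets a clearance
     cookie there, then redirects back to the page you originally requested. -->
<!DOCTYPE html>
<html>
  <head>
    <meta http-equiv="refresh" content="0; url=/challenge/pass?token=PLACEHOLDER">
    <title>Loading…</title>
  </head>
  <body>Loading…</body>
</html>
```

No hashing, no JavaScript interpreter required, just a client that actually behaves like a browser, which is already enough to shake off plain HTTP scrapers.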
I’m glad someone is finally making an open source and self hostable bot management solution. And I don’t give a shit about the cat-girls, nor should you. But Techaro admitted they had little idea what they were doing when they started and went for the “nuclear option”. Fuck Proof of Work. It was a Dead On Arrival idea decades ago. Techaro should strip it from Anubis.
I haven’t caught up with what’s new with Anubis, but if they want to get stricter bot-management, they should check for actual graphics acceleration.
127.0.0.1? That should be 0.0.0.0.

Automatic Mapping
If a user already exists on one or more connected servers, they can log in directly with their existing Jellyfin credentials. Jellyswarrm will automatically create a local user and set up the necessary server mappings.
If the same username and password exist on multiple servers, Jellyswarrm will link those accounts together automatically. This provides a smooth experience, giving the user unified access to all linked servers.
Really should audit the implementation of that feature. So when you first log in, it automatically sends your credentials to every connected server?


I always thought this would make more sense to implement client-side in the media player. But it’s probably easier to implement this way.
Sounds like you don’t need the VPS then. Add a subdomain pointing at your home IP. Port forward 443 and 80 to the server. Run caddy to route the subdomain to localhost:8096. You will also need to tell jellyfin to accept requests on the new domain.
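The Caddy side is only a few lines. A sketch of the Caddyfile, with jellyfin.example.com standing in for whatever subdomain you point at your home IP:

```
# Hypothetical Caddyfile for the setup above; "jellyfin.example.com" is a placeholder.
# Caddy obtains and renews the TLS certificate for the domain automatically.
jellyfin.example.com {
    # Proxy everything to the Jellyfin instance running on this machine
    reverse_proxy localhost:8096
}
```

Then point the subdomain’s DNS A record at your public IP and the routing side is done.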
5 actually, because you can use minimal hardware. You can probably just port forward your router and run caddy on the same jellyfin server, but then you expose your home IP address.
Obscuring your home IP is the big one. You also don’t have to fiddle with opening ports on your router and maybe getting ISP attention for hosting on a residential network. But really, obscuring your home IP address is the main benefit.
Dirt-simplest solution is caddy on the same jellyfin server, with 443 and 80 port forwarded on your router to that host. Hopefully Let’s Encrypt will work without a domain, but I’m not sure.
But I ran into challenges getting my server safely accessible for users outside my LAN
FWIW:
Obviously not as trivial or seamless as Plex. Also I wouldn’t try to complicate this setup by using Docker for everything. But once it’s up you can basically host whatever you want on the WAN from your LAN.


Every service is doing this as expenses and interest rates go up. It’s the driving force of enshittification. All the VCs want internet startups to finally turn a profit.
That seems like a flight of stairs up.