When I first got into self-hosting, I originally wanted to join the Fediverse by hosting my own instance. After realizing I was not that committed to the idea, I went in a simpler direction.

Originally I was using Cloudflare’s tunnel service. Watching the logs, I would see traffic from random corporations and places.

Being uncomfortable with Cloudflare after pivoting away from social media, I learned how to secure my device myself and started using an uncommon port with a reverse proxy. My logs now only ever show activity when I am connecting to my own site.

Which is what led me to this question.

What do bots and scrapers look for when they come to a site? Do they mainly target well-known ports like 80 or 22, looking for vulnerabilities? Do they ever scan other ports for other common services that might be insecure? Is it even worth their time to scan for open ports?

Seeing as I am tiny and obscure, I most likely won’t need to do much research into protecting myself from such threats, but I am still curious about the threats that bots pose to other self-hosters or larger platforms.

  • Cyberflunk@lemmy.world

    Read up on shodan.io. Bot networks and scrapers can use its database as a seed to find open ports.

    The CLI tool masscan can (under reasonable conditions) scan the entire IPv4 address space for a single port in 3 minutes. It would take an estimated 74 years for masscan to scan all 64k ports across the entire IPv4 network.

    So using a seed like Shodan can complement scanners/scrapers by isolating IP addresses for further recon (see the sketch at the end of this comment).

    I honestly don’t know if this helps answer your question, and I don’t actually know how services in general deal with nonstandard ports, but I’ve written a lot of scanning agents (not AI, old-school agents) to do recon for red/blue teams. I never started with raw internet guesses; I always used a seed: Shodan, or other scan results.
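
    To make that seed-then-recon workflow concrete, here is a minimal Python sketch. It assumes the official shodan client (pip install shodan) plus the standard socket module; the API key, the query, and the port list are placeholders I made up for illustration, and it should only ever be pointed at hosts you are authorized to probe.

        import socket

        import shodan

        API_KEY = "YOUR_SHODAN_API_KEY"          # placeholder, not a real key
        SEED_QUERY = "product:nginx country:DE"  # illustrative query, adjust to taste
        EXTRA_PORTS = [8080, 8443, 3000, 9000]   # a few common "uncommon" service ports

        def tcp_open(ip: str, port: int, timeout: float = 1.0) -> bool:
            """Crudest possible recon step: a plain TCP connect check."""
            try:
                with socket.create_connection((ip, port), timeout=timeout):
                    return True
            except OSError:
                return False

        def main() -> None:
            api = shodan.Shodan(API_KEY)
            # Shodan already did the internet-wide sweep; we only ask for its results.
            results = api.search(SEED_QUERY)
            seeds = {match["ip_str"] for match in results["matches"][:25]}

            # Targeted follow-up: instead of sweeping the whole IPv4 space,
            # only probe the seeded hosts on a handful of extra ports.
            for ip in sorted(seeds):
                open_ports = [p for p in EXTRA_PORTS if tcp_open(ip, p)]
                if open_ports:
                    print(f"{ip}: {open_ports}")

        if __name__ == "__main__":
            main()

    The point of the sketch is the shape of the pipeline rather than the details: a pre-built index narrows the target list, and the scanner only spends time on hosts already known to be interesting, which is why a nonstandard port with nothing indexed against it tends to stay quiet.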