

Except I'm in rural Australia. Starlink is objectively the best option.
Starlink gives me an IPv6 address. It's not static as such, but a dynamic DNS can solve that issue. My ISP problem is that my mobile provider doesn't give me IPv6 at all, so I can't route to my home server without a gateway to proxy.
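If anyone wants the DDNS piece, it can live in the same compose stack. A rough sketch assuming something like qmcgaw/ddns-updater (image name, volume path and the PERIOD variable are from memory, so check its docs before relying on it):

services:
  ddns-updater:
    image: qmcgaw/ddns-updater     # assumption: a generic DDNS update container
    container_name: ddns-updater
    restart: unless-stopped
    volumes:
      - ./data/ddns:/updater/data  # provider credentials go in a config.json in here
    environment:
      - PERIOD=5m                  # how often to re-check the public address and push updates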
Here is my SearXNG docker compose:
services:
  redis:
    container_name: redis
    image: docker.io/valkey/valkey:7-alpine
    command: valkey-server --save 30 1 --loglevel warning
    restart: unless-stopped
    networks:
      - local_bridge
    volumes:
      - ./data/reddis:/data
    cap_drop:
      - ALL
    cap_add:
      - SETGID
      - SETUID
      - DAC_OVERRIDE
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"

  searxng:
    container_name: searxng
    image: docker.io/searxng/searxng:latest
    restart: unless-stopped
    networks:
      - local_bridge
      - proxy
    volumes:
      - ./data/searxng:/etc/searxng
    environment:
      - SEARXNG_BASE_URL=https://${SEARXNG_HOSTNAME:-localhost}/
      - SEARXNG_SECRET=${SEARXNG_SECRET}
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    logging:
      driver: "json-file"
      options:
        max-size: "1m"
        max-file: "1"

networks:
  local_bridge: # local bridge with IPv6 internet access
    driver: bridge
    enable_ipv6: true
  proxy:
    external: true
And my searxng settings:
searxng/data/searxng/settings.yml
# see https://docs.searxng.org/admin/settings/settings.html#settings-use-default-settings
use_default_settings: true

server:
  # base_url is defined in the SEARXNG_BASE_URL environment variable, see .env and docker-compose.yml
  limiter: false  # can be disabled for a private instance
  image_proxy: false

ui:
  static_use_hash: true
  query_in_title: true
  infinite_scroll: true
  default_theme: simple
  theme_args:
    # style of simple theme: auto, light, dark
    simple_style: dark

redis:
  url: redis://redis:6379/0

search:
  safe_search: 0
  autocomplete: 'duckduckgo'
  default_lang: "en"
  formats:
    - html
    - json

outgoing:
  # default timeout in seconds, can be overridden per engine
  request_timeout: 3.0

enabled_plugins:
  - 'Hash plugin'
  - 'Basic Calculator'
  - 'Self Informations'
  - 'Tracker URL remover'
  # - 'Ahmia blacklist'
  - 'Hostnames plugin'  # see 'hostnames' configuration below
  - 'Open Access DOI rewrite'
And the proxy network is just the Docker network that nginx is connected to. Here is my nginx conf: https://github.com/muntedcrocodile/nginxconf
OK let’s run through some debug steps.
Test whether Samba is working by using a named Docker volume instead of trying to mount a host file path.
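Something like this as a quick test. dperson/samba is just an example image and the share syntax is from memory, so check its README:

services:
  samba:
    image: dperson/samba                 # example Samba image, not a recommendation
    container_name: samba
    restart: unless-stopped
    ports:
      - "445:445"
    command: '-s "test;/share;yes;no;yes"'  # name;path;browseable;readonly;guest - syntax from memory
    volumes:
      - samba_test:/share                # named Docker volume instead of a host bind mount

volumes:
  samba_test: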
If that works, we can assume it's purely a file permission issue. You can check that by opening a shell inside the container (docker exec -it <container> sh) and investigating from there.
If you have permission issues from the container shell, you will probably need to use the Docker user parameter to make the container's user ID match your host user, or alternatively chown the filesystem to match the container's user (this will lock your server's own user out of ownership of those files).
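In compose terms that's just the user: key. The service name, image and 1000:1000 below are placeholders; use the output of id -u and id -g on your host:

services:
  myservice:                        # hypothetical service name
    image: example/image:latest     # placeholder image
    user: "1000:1000"               # run as the host UID:GID that owns the bind-mounted path
    volumes:
      - /srv/media:/data            # example host path, adjust to your setup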
If the container shell does have permission to do stuff in the mounted volume, then it's a Samba config issue. I've never done it myself, but I've heard that Samba is a pain to configure.
You can always just use the media stack Docker image; it practically does everything for you.
Nginx does not come with SSL configured by default, but the example I've uploaded has it. It's quite a simple setup and gives you far greater control and modularity.
I know this sounds like a Stack Overflow kind of thing to say, but why are you using Caddy and not nginx?
Edit: I’ve uploaded my nginx config if you would like to take a look https://github.com/muntedcrocodile/nginxconf
Why not just run Tailscale? You can self-host Headscale to keep it all first party. Tailscale is essentially just a fancy WireGuard wrapper.
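A rough Headscale sketch, assuming the official headscale/headscale image (the port and paths are from memory, so treat them as assumptions and check the docs):

services:
  headscale:
    image: headscale/headscale:latest   # self-hosted Tailscale control server
    container_name: headscale
    restart: unless-stopped
    command: serve
    ports:
      - "8080:8080"                     # clients point their login server at this
    volumes:
      - ./config:/etc/headscale         # expects a config.yaml in here
      - ./data:/var/lib/headscale       # state database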
Make a backup, then yolo it. So far the backup has not been necessary.
How long after getting an Oracle CEO did this take?
I've heard that using DDNS for mail gets you into all sorts of IP blacklisting issues. I don't even have a non-CGNAT IPv4, and I'm not sure if email can work IPv6-only.
Don't you need a static IPv4 or something? I looked into it a while back, even got to the point of deploying a Docker container, but the config was so awful I gave up.
I’ve never messed with it but I’ve heard mail servers are a pain in the ass.
Was gonna suggest just this. Most providers support the OpenAI API. Would also recommend checking out Open WebUI, as it provides API access to whatever models you are running.
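For reference, a minimal Open WebUI compose sketch (image tag, internal port and data path are from memory, so double check them against the project docs):

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main   # web front end for OpenAI-compatible APIs
    container_name: open-webui
    restart: unless-stopped
    ports:
      - "3000:8080"                              # UI on host port 3000
    volumes:
      - ./data/open-webui:/app/backend/data      # chats, users, settings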
I use Logseq, but I'm not entirely happy: automating processes is a pain in the ass, and mobile is buggy.
https://www.youtube.com/watch?v=BKCj6A4CHV4
Not necessarily Docker, but it gives a good feel for self-signed certs. Also, I don't see why you need encryption if you're only accessing your data over the local network (I presume via a VPN); it's unnecessary unless you're worried about someone snooping packets on your LAN.
Personally I have my services available to the internet with a Let's Encrypt cert for a domain, served via nginx, which serves my services at the relevant routes. SSL isn't really necessary unless you're transporting across an untrusted network (such as the internet instead of over a VPN).
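The per-service part of that nginx setup boils down to blocks like this. This is a trimmed sketch rather than a verbatim copy from my repo; the hostname, cert paths and upstream are just examples:

server {
    listen 443 ssl;
    server_name search.example.com;                                    # example hostname

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        proxy_pass http://searxng:8080;                                # container name on the shared proxy network
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}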
Ahh, so you can't install packages into the system Python unless you use apt. What you need to do is create a virtual environment (venv); then you can source that venv and install packages into it.
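Roughly like this (the .venv directory name is just a convention):

python3 -m venv .venv            # create the virtual environment in ./.venv
source .venv/bin/activate        # activate it for this shell session
pip install <package>            # installs into the venv, not the system Python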
Edit: Docker is simple, just use Docker Compose files. The compose file outlines how to run a prebuilt Docker image (think of it as a lightweight, pre-packaged environment; similar in spirit to a virtual machine, though containers share the host kernel).
Immich. It's awesome.
Depends how deep down the rabbit hole you wanna go.
I assume you're accessible via IPv4 (no CGNAT), otherwise you're in for a far bigger pain in the ass.
Simple: you can use Portainer, which makes it relatively easy. Otherwise you can use Docker Compose if you want more fine-grained control and are willing to learn a little more.
Dr GPT is usually pretty good at writing Docker Compose files given the application README.
I can't, unfortunately. The only feature I use is the fact that I can access my IPv6-only server via an IPv4-only network.