Putting a simple preseed file on a Debian install image is probably going to be your best bet. Assuming you can run a VM on your current machine, it shouldn’t be too difficult to test it until you’re happy with it.
It’s going to be a balance between the time spent getting an automated approach to work and the cost/effort of getting a monitor. Getting preseed working can be a bit fiddly, but it does mean you’ve learnt a new skill; getting a monitor sounds like it’ll be a pain, and you might only need it once.
Yes, that’ll work too. It does involve adding the disk to your machine temporarily though, so just be careful which disk you format when you do it. Please don’t ask why I say that, it brings back painful memories…
While I agree with most people here that finding a keyboard and screen would be the easiest route, you do have a couple of other options:
Use a preseed file: a preseed lets the installer run completely automatically, without user intervention. Get it to install a basic system with SSH and take it from there. You’ll want to test the install in a VM, where you can see what’s going on, before letting it run on the real server. More information here: https://wiki.debian.org/DebianInstaller/Preseed
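As a rough sketch of what one looks like (all values below are placeholders, and you’d still need partitioning entries for your hardware, so crib from the example preseed for your Debian release):

```
# preseed.cfg — minimal sketch; partman-auto/* partitioning entries omitted
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i netcfg/choose_interface select auto
d-i netcfg/get_hostname string headless
d-i netcfg/get_domain string lan

# Create a normal user you can SSH in as; disable direct root login
d-i passwd/root-login boolean false
d-i passwd/user-fullname string Admin User
d-i passwd/username string admin
d-i passwd/user-password password changeme
d-i passwd/user-password-again password changeme

# Base system plus an SSH server, nothing else
tasksel tasksel/first multiselect standard, ssh-server
d-i pkgsel/include string openssh-server
d-i grub-installer/only_debian boolean true
d-i finish-install/reboot_in_progress note
```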
Boot from a live image with SSH: take a look at https://wiki.debian.org/LiveCD, in particular ‘Debian Live’. It looks like SSH is included, but you’d want to check the service comes up on boot. You can then SSH to the machine and install to the hard drive that way. Again, test in a VM until you know the image works and you know how to run the install, then write it to a USB key and boot the target server from that.
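For example, once you’re in the live session in a VM (service and device names below are the usual Debian ones, but double-check on your image):

```
# Check sshd is running, and enable it if not (Debian's service is 'ssh')
sudo systemctl status ssh
sudo systemctl enable --now ssh
passwd   # set a password on the live user so you can actually SSH in

# Then write the image to a USB key. /dev/sdX is a placeholder:
# double-check the device with lsblk before running dd!
sudo dd if=debian-live.iso of=/dev/sdX bs=4M status=progress oflag=sync
```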
This all assumes the target server has USB or CD at the top of its boot order. If it doesn’t, you’ll have to change that first, either with a keyboard and screen, or via a remote management interface such as IPMI.
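If the machine does have IPMI, ipmitool can set a one-off boot device without touching the BIOS; something like this, where the BMC address and credentials are placeholders:

```
# One-shot boot override via the BMC, then power cycle to pick it up
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis bootdev cdrom
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret chassis power cycle
```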
If you don’t need external calling you don’t need a trunk, it’s just for connecting to the outside world. I found Asterisk (https://www.asterisk.org/) was a good place to start. The config is rather involved though, so there are various front ends for it.
You can, but I found it a bit laggy. It basically wraps your TCP stream in HTTPS, so I think the extra overhead was what was slowing it down.
Ah, ok. You’ll want to specify two AllowedIPs ranges on the clients, 192.168.178.0/24 for your network, and 10.0.0.0/24 for the other clients. Then you’re going to need to add a couple of routes:
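Something like the following, assuming the WireGuard interface is called wg0 on both ends (if you’re using wg-quick it will normally install these for you from AllowedIPs):

```
# On the VPS: send traffic for the home LAN down the tunnel
ip route add 192.168.178.0/24 dev wg0

# On the home machine: reach the other clients' VPN subnet via the tunnel
ip route add 10.0.0.0/24 dev wg0
```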
You’ll also need to ensure IP forwarding is enabled on both the VPS and your home machine.
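For example:

```
# Enable IPv4 forwarding immediately...
sudo sysctl -w net.ipv4.ip_forward=1
# ...and persist it across reboots
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-forwarding.conf
```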
Sort of. If you’re using wg-quick then it serves two purposes, one, as you say, is to indicate what is routed over the link, and the second (and only if you’re setting up the connection directly) is to limit what incoming packets are accepted.
It definitely can be a bit confusing as most people are using the wg-quick script to manage their connections and so the terminology isn’t obvious, but it makes more sense if you’re configuring the connection directly with wg.
The allowed IP ranges on the server indicate what private addresses the clients can use, so you should have a separate one for each client. They can be /32 addresses as each client only needs one address and, I’m assuming, doesn’t route traffic for anything else.
The allowed IP range on each client indicates what private address the server can use, but as the server is also routing traffic for other machines (the other client for example) it should cover those too.
Apologies that this isn’t better formatted, but I’m away from my machine. For example, on your setup you might use:
On the home server: Address 192.168.178.2, AllowedIPs 192.168.178.0/24
On the phone: Address 192.168.178.3, AllowedIPs 192.168.178.0/24
On the VPS: Address 192.168.178.1, with AllowedIPs 192.168.178.2/32 for the home server peer and AllowedIPs 192.168.178.3/32 for the phone peer
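Rendered as wg-quick config files, that might look like the sketch below (keys, ports and endpoints omitted):

```
# VPS: /etc/wireguard/wg0.conf
[Interface]
Address = 192.168.178.1/24
# PrivateKey = ..., ListenPort = 51820

[Peer]
# home server's PublicKey = ...
AllowedIPs = 192.168.178.2/32

[Peer]
# phone's PublicKey = ...
AllowedIPs = 192.168.178.3/32

# Home server: /etc/wireguard/wg0.conf (the phone is the same with .3)
[Interface]
Address = 192.168.178.2/24
# PrivateKey = ...

[Peer]
# VPS's PublicKey = ..., Endpoint = vps.example.org:51820
AllowedIPs = 192.168.178.0/24
```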
That’s what comes of late night posting, I’d meant to link you to PHPLDAPAdmin, not LDAPAdmin! It’s written in PHP, which isn’t lovely, but it does its job.
I confess I normally work from the command line, but I have set up phpLDAPadmin for projects where others needed to manage the directory, and it worked pretty well.
I use an LDAP server, as it’s pretty much designed for exactly this task. You can tell PAM to authenticate and authorise from it to manage logins to the physical machines, and web apps typically either have a straightforward way to use LDAP, or support ‘external’ auth, with your web server handling the authentication and authorisation for it.
OpenLDAP is a solid, easily self hosted server. If you like working from the shell it has everything you need. If you prefer a GUI there are a variety of desktop and web based management frontends available.
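As a rough sketch of working from the shell (the base DN dc=example,dc=org and the user details are placeholders for your own):

```
# user.ldif — a minimal POSIX account entry
dn: uid=jsmith,ou=people,dc=example,dc=org
objectClass: inetOrgPerson
objectClass: posixAccount
cn: Jane Smith
sn: Smith
uid: jsmith
uidNumber: 10001
gidNumber: 10001
homeDirectory: /home/jsmith
loginShell: /bin/bash

# Load it into the directory, then check it's there:
$ ldapadd -x -D "cn=admin,dc=example,dc=org" -W -f user.ldif
$ ldapsearch -x -b "ou=people,dc=example,dc=org" "(uid=jsmith)"
```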
Fair point about reminders, that’s not something I use it for, so I didn’t think of it. Canvas seems to be working now, and updates are pushed regularly, so one of those might have fixed it.
Have a look at Obsidian. It runs on a variety of devices, you can sync either with their system, or pretty much anything else, as it just stores your notes as markdown files, and you can arrange notes like that with the canvas system.
You really shouldn’t have something like SSHD open to the world, that’s just an unnecessary attack surface. Instead, run a VPN on the server (or even one for a whole network if you have several servers on one subnet), connect to that, then SSH to your server. The advantage is that a well set up VPN simply won’t respond to an invalid connection and, to an attacker, looks just like the firewall dropping the packet. WireGuard is good for this, and easy to configure. OpenVPN is pretty solid too.
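To tie the two together, you can also bind sshd to the VPN address only, so it never listens on the public interface at all (10.0.0.1 is an assumed WireGuard address on the server):

```
# /etc/ssh/sshd_config — only listen on the WireGuard address
ListenAddress 10.0.0.1

# then reload it (Debian's service name is 'ssh'):
# sudo systemctl restart ssh
```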
It’s a non-starter for me because I sync my notes, and sometimes a subset of my notes, to multiple devices and multiple programs. For instance, I might use Obsidian, Vim and tasks.md to access the same repository, with all the documents synced between my desktop and server, and a subset synced to my phone. I also have various scripts to capture data from other sources and write it out as markdown files. Trying to sync all of this to a database that is then further synced around seems overly complicated to say the least, and would basically just be using Trillium as a file store, which I’ve already got.
I’ve also been burnt by various export/import systems either losing information or storing it in an incompatible way.