Yeah, I’m planning to spin them down so infrequently that it shouldn’t matter in the long run.
I’ll consider this!
I’m possibly biased by the amount of initial fiddling with all the disks and PCIe cards, and hunting down where the noise was coming from. Will keep it in mind.
This could be an option I guess; however, the current case is an HP Z440, which is SO convenient for building in that I need an extra good reason to get rid of it. Zero screws, just latches. Carrying handles.


Huh.
There’s a time and place for a DIY solution and academia can well be like that sometimes.
The latest Mac mini can’t run Linux though: it’s M4, and Asahi doesn’t even support M3 chips yet. But if you get the previous model with an M1/M2, you can run Linux if desired. I might not attempt it, and just use the Mac as a server as-is; it’s not too different from Linux. Asking the duck for “how to xx on Mac” when you already know the Linux equivalents should make your life tolerable.
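For what it’s worth, the day-one translation is mostly systemd → launchd. A minimal sketch of the mapping (the service name and plist path here are hypothetical, not a real service):

```
# Linux (systemd): enable and start a service
sudo systemctl enable --now myservice

# macOS (launchd): load a daemon definition and start it
sudo launchctl load -w /Library/LaunchDaemons/com.example.myservice.plist

# check what's running
systemctl status myservice             # Linux
sudo launchctl list | grep myservice   # macOS
```

Homebrew’s `brew services` wraps most of the launchctl ceremony if you install your services through it.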


Get base Debian; you’ll have more options for desktop environment. Once you get past the installation hassle it should just work for the rest of time. MX has its place, but it’s specifically built to avoid systemd (it boots SysVinit by default), which may not be something a new user is looking for. It feels very opinionated, is what I’m trying to say. May be your thing of course, but I’d recommend reading more on its philosophy before picking.
8 years is probably not old enough to require lighter desktops if the machines were at least mid-range at the time. You should be able to run GNOME or KDE as you please. Nothing against Xfce in principle, but it can be a little clunky, especially on a laptop: no touch gestures, for example.


I’d expect so, but you’ll need to test how your exact router model behaves. Some have a ‘DMZ’ function you can use to pass all ports to a certain host. I use it to expose the WAN interface of my OPNsense router to the internet through the ISP router; then I can fine-tune the open ports further in OPNsense, which is better designed for that than the usual ISP box.


This post considers the situation where you expose ports to the internet at the edge of your residential network, for example by setting your router to forward requests on port 443 to a certain host on your network. In this case you do have a public IP address, and the configured port on your home server is now reachable from the internet. This is different from just exposing a port on a machine inside a residential network for local use.
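One way to sanity-check what you’ve actually exposed is to probe the forwarded port from outside your network; a quick sketch (the address `203.0.113.10` is a placeholder for your public IP):

```
# run these from a host OUTSIDE your network (phone on mobile data, a VPS, ...):
nmap -p 443 203.0.113.10        # is the forwarded port open at all?
curl -vk https://203.0.113.10/  # does the service behind it answer?
```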


Yeah, we’re gonna use like 1 tanker for that, 2 on a busy day. The other 28 are going somewhere else.


Oh yeah, and I did enable the Proxmox VM firewall for the TrueNAS VM; the NFS traffic goes over an internal interface. I wasn’t entirely convinced by NFS’s security posture when reading about it… at least restrict it to the physical machine 0_0 So now I need to intentionally pass a new NIC to any VM that will access the data, which is neat.
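For reference, on a plain Linux NFS server this restriction lives in `/etc/exports`; a minimal sketch with made-up values (TrueNAS manages the same setting through its UI instead):

```
# /etc/exports -- only the internal 10.10.10.0/24 subnet may mount this
/mnt/tank/data  10.10.10.0/24(rw,sync,no_subtree_check,root_squash)
```

Followed by `sudo exportfs -ra` to apply the change without restarting the server.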


A wrap-up of what I ended up doing:
I have achieved:
I have not achieved (yet…):
Quite happy with the setup so far. Looking to automate actual backups next, but this is starting to take shape. Building the confidence to use this for my actual phone backups, among other things.


Really good to know. I planned to keep to very mainstream LTS versions anyway, but this solidifies the decision. Maybe on a laptop I’ll install something more experimental, but that would be throwaway-style anyway.


Always a good reminder to test the backups; no, I would not sleep properly if I didn’t test them :p
Aiming to keep it simple: there are too many moving parts in the VM snapshots, and it’s hard to figure out best practices and notice mistakes without work experience in the area, so I’ll just back up the data separately and call it a day. But thanks for the input! I don’t think any of my services have in-memory DBs.


Right, thanks for the heads-up! On the desktops I have simply installed ZFS on root via the Ubuntu 24.04 installer. Then, as the option was not available in the server variant, I started to think maybe that is not something that should be done :p


Aight, thank you so much; this confirms I’m on the right path! It clarifies a lot, and I’ll keep the ext4 boot drive :)


Right, so my aversion to live backups comes initially from Louis Rossmann’s guide on the FUTO wiki, where he mentions it’s non-trivial to reliably snapshot a running system. After a lot of looking elsewhere as well, I haven’t seen much to suggest that’s bad advice, and I want to err on the side of caution anyway. The hypervisor is QEMU/KVM, so in theory it should be able to do live snapshots afaik. But I’m not familiar enough with the consistency guarantees to fully trust it. I don’t wanna wake up one day to a server crash, try to mount the backed-up qcow2 on a new system, find it doesn’t work, and realize I just lost data.
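For what it’s worth, the usual way to make QEMU/KVM live snapshots more consistent is to run the qemu-guest-agent inside the VM so libvirt can freeze guest filesystems first; a minimal sketch (the domain name `myvm` is made up):

```
# inside the guest: install and start the agent
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent

# on the host: disk-only snapshot with guest filesystems frozen
# (--quiesce asks the agent to fsfreeze around the snapshot)
virsh snapshot-create-as myvm backup-snap --disk-only --quiesce --atomic
```

This freezes filesystems, not applications, so anything holding state purely in memory can still come out inconsistent, which matches your caution.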
It won’t matter much though, as I’ll just place all the important data on the zpool and back that up frequently as a simple data store. The VMs can keep doing their nightly shutdown-and-snapshot thing.
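For the zpool side, a minimal sketch of the snapshot-and-replicate loop (the `tank/data` dataset and `backupbox` host are placeholders):

```
# take a dated snapshot of the data dataset
zfs snapshot tank/data@auto-$(date +%Y%m%d-%H%M)

# initial full replication to another machine
zfs send tank/data@auto-20250101-0200 | ssh backupbox zfs recv -u backup/data

# later runs send only the delta between two snapshots
zfs send -i tank/data@auto-20250101-0200 tank/data@auto-20250102-0200 \
  | ssh backupbox zfs recv -u backup/data
```

Tools like sanoid/syncoid or zrepl automate exactly this schedule-send-prune loop if you’d rather not script it yourself.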


Ok so, wrapping my head around this, what I think I need to be clear about is the separation between applications and data. Applications get the nightly VM-snapshot way of backing up, and data gets the frequent ZFS snapshots (and other backups). Kinda what I tried to do to begin with, so I’ll look more into how to do this separation for the applications I intend to use.
Still unsure if Samba is the way to go for linking it together on the same physical machine; a rough sketch of what that share could look like is below.
Should I just run Syncthing on the bare-metal host…? Will sleep on it.
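If Samba wins, the share itself is tiny; a minimal `smb.conf` sketch with made-up names:

```
# /etc/samba/smb.conf on the host that owns the zpool
[data]
    path = /tank/data
    valid users = myuser
    read only = no
```

Plus `sudo smbpasswd -a myuser` to create the credential and a reload of `smbd`. Since everything shares one physical machine, NFS over the internal interface (as you already do) avoids the extra credential layer.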


Thanks! Can I ask what your setup is like? ZFS on bare metal? Do you have VMs?


I personally know three, myself included, who are switching right now.
I guess there are still some people who think it’d be a good idea to try and get Karelia back from Russia. That aside, most fringe things are not that unique, I think.