

Proxmox is based on Debian and uses Debian under the hood…


Care to elaborate? Proxmox's paid tier is long-term support for their older releases, plus paid support. The main code is entirely free, with no features gated behind paywalls or anything like that.


I don’t see any mention of games so far.
A minecraft server is always a good time with friends, and there are hundreds of other game servers you can self host.


I don’t really understand why this is a concern with docker. Are there any particular features you want from version 29 that version 26 doesn’t offer?
The entire point of docker is that it doesn’t really matter what version of docker you have, the containers can still run.
Debian’s version of docker receives security updates in a timely manner, which should be enough.


You are adding a new repo, but you should know that the Debian repos already contain Docker (as the docker.io package) and docker-compose.


I use authentik, which enables single sign on (the same account) between services.
Authentik is a bit complex and irritating at times, so I would recommend voidauth or kanidm as alternatives for most self hosters.


Does it require docker installed and being in the docker group, with the docker daemon running?
Just an FYI, having the ability to create containers and do other Docker operations is equivalent to having root: https://docs.docker.com/engine/security/#docker-daemon-attack-surface
It’s not really accurate to say that your playbooks don’t require root to run when they basically do.
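If you want to see why, here's a minimal compose sketch (image and service name are just illustrative) that anyone who can talk to the docker daemon can run to get a root shell over the host's filesystem:
    # hypothetical compose file: docker group membership is all you need to run this
    services:
      root-shell:
        image: alpine
        stdin_open: true
        tty: true
        volumes:
          - /:/host                     # bind-mount the host's entire filesystem
        command: chroot /host /bin/sh   # now you're effectively root on the host
Run it with docker compose run root-shell and you can read or edit anything on the host, no sudo involved.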


Yeah. I'm seeing a lot of it in this thread tbh. People are styling themselves as IT admins or cybersec people rather than just hobbyists. Of course, maybe they do do it professionally as well, but I'm seeing an assumption from some people in this thread that it's dangerous to self host even if you don't expose anything, or they are assuming that self hosting implies exposing stuff to the internet.
Tailscale in to your machine and be done with it, and otherwise only have access to it via local network or VPN.
Now, about actually keeping the services secure, beyond just having them on a private subnet and not really worrying about them. To be explicit, this is referring to fully/partially exposed setups (like VPN access for a significant number of people).
There are two big problems IMO: Default credentials, and a lack of automatic updates.
Default credentials are pretty easy to handle. Docker compose YAML files put the credentials right there; just read them and change them. It should be noted that you should still be doing this even if you are using a GUI-based deployment.
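For example, a typical compose file ships with something like this (service and variable names vary per project; this is just a sketch):
    # hedged example: the defaults are sitting right in the YAML, waiting to be changed
    services:
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: changeme   # change this before first start
          POSTGRES_DB: app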
This is where docker has really held the community back, in my opinion: it lacks automatic updates. There do exist services like watchtower to automatically update containers, but things like databases or config file schemas don't get migrated to the next version, which means the next version can break things, and there is no guarantee of stability between two versions.
This means that most users, after they use the docker-compose method recommended by the software, are required to manually log in every so often and run docker compose pull and docker compose up to update. Sometimes they forget. Combine this with shodan/zoomeye (search engines for internet-connected devices) and you will find plenty of people who forgot, because docker punches stuff through firewalls as well.
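The firewall part is worth spelling out: Docker writes its own iptables rules, so a plain port mapping is reachable from outside regardless of ufw/firewalld. A rough sketch of the difference (image name is a placeholder):
    services:
      app:
        image: example/app              # placeholder
        ports:
          - "8080:80"                   # published on all interfaces, bypasses ufw/firewalld rules
          # - "127.0.0.1:8080:80"       # bound to loopback instead: only reachable locally or via VPN/reverse proxy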
GUIs don't really make it easy to keep up with this, either. Docker GUIs are nice, but now you have users who don't realize that Docker apps don't update themselves, and that they probably should be doing that. Same issue with Yunohost (which doesn't use docker, which I just learned today. Interesting).
I really like Kubernetes because it lets me do automatic upgrades of services (within limits). But this comes at an extreme complexity cost. I have to deploy another piece of software on top of Kubernetes to automatically upgrade the applications, and then another to automatically do some of the database migrations. And no GUI would really free me from this complexity, because you end up having to have such an understanding of the system that a pretty interface doesn't really save you.
Another commenter said:
20 years ago we were doing what we could manually, and learning the hard way. The tools have improved and by now do most of the heavy lifting for us. And better tools will come along to make things even easier/better. That’s just the way it works.
And I agree with them, but I think things kinda stalled with Docker, as its limitations have created barriers to making things easier. The tools that try to make things "easier" on top of docker basically haven't done their job, because they haven't offered auto updates, or reverse proxies, or abstracted away the knowledge required to write YAML files.
Share your project. Then you'll hear my thoughts on it. Although without even looking at it, my opinion is that if you have based it on docker and have decided to simply run docker-compose on YAML files under the hood, you've kinda already fucked up, because you haven't actually abstracted away the knowledge needed to use Docker, you've just hidden it from the user. But I don't know what you're doing.
Your service should have:
Further afterthoughts:
Simple in implementation is not the same thing as simple in usage. Simple in implementation also means easy to troubleshoot, as there will be fewer moving parts when something goes wrong.
I think operating tech isn’t really that hard, but I think there is a “fear” of technology, where whenever anyone sees a command line, or even just some prompt they haven’t seen before, they panic and throw a fit.
EDIT and a few thoughts:
Adding further thoughts to my second afterthought, I can provide an example: I installed an adblocker (uBlock Origin) for my mom. It blocked a link shortening site. My mom panicked and called me over, even though the option to temporarily unblock the site was right there, clear as day.
I think that GUI projects overestimate the skill of normal users, while underestimating the skill of those who actually use them. I know people who use a GUI for stuff like this because it's "easier", but when something under the hood breaks, they are able to go in and fix it in 5 minutes, whereas an actual beginner could spend two weeks on it with no progress.
I think a good option is to abstract away configuration with something akin to nix-gui. It's important to note that this doesn't actually make things less "complex" or "easier" for users. All the configs and dials they will have to learn and understand are still there. But for some reason, whenever people see "code" they panic and run away, yet when it's a textbox in a form or a switch they will happily figure everything out. And then when you eventually hit them with the "HAHA, you've actually been using this tool that you would have otherwise run away from all along", they will be chill, because they recognize all the dials to be the same, just presented in a different format.
Another afterthought: If you are hosting something for multiple users, you should make sure their passwords are secure somehow. Either generate and give them passwords/passphrases, or something like Authentik and single sign on where you can enforce strong passwords. Don’t let your users just set any password they want.


Not at all. In fact I remember the day my server was hacked because I’d left a service running that had a vulnerability in it.
Was this server on an internal network?


I like Incus a lot, but it's not as easy to create complex virtual networks as it is with Proxmox, which is frustrating in educational/learning environments.


This is untrue; Proxmox is not a wrapper around libvirt. It has its own API and its own methods of running VMs.


There are a few apps that I think fit this use case really well.
LanguageTool is a spelling and grammar checker that has a server-client model. LibreOffice now has built-in LanguageTool integration, where it can access a server of your choosing. I make it access the server I run locally, since Arch Linux packages LanguageTool.
Another is Stirling-PDF. This is a really good PDF manipulation program that people like, and it comes as a server with a web interface.
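Going back to LanguageTool: if you'd rather run the server in a container than from a distro package, something along these lines should work (the image name and port are the commonly used community image and its defaults, so double-check them before relying on this):
    # hedged sketch of a local LanguageTool server for LibreOffice to query
    services:
      languagetool:
        image: erikvl87/languagetool     # community image; verify the name and tag
        ports:
          - "127.0.0.1:8010:8010"        # this image listens on 8010 by default
Then point LibreOffice's LanguageTool server setting at http://127.0.0.1:8010.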


I think I have also seen socket access in Nginx Proxy Manager in some example now. I don’t really know the advantages other than that you are able to use the container names for your proxy hosts instead of IP and port
I don’t think you need socket access for this? This is what I did: https://stackoverflow.com/questions/31149501/how-to-reach-docker-containers-by-name-instead-of-ip-address#35691865
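The gist of that answer is that containers on the same user-defined network resolve each other by service name, no socket needed. A rough compose sketch (service, image, and network names are just examples):
    services:
      npm:
        image: jc21/nginx-proxy-manager   # Nginx Proxy Manager
        networks: [proxy]
      myapp:
        image: example/app                # placeholder
        networks: [proxy]                 # NPM can now use "myapp" as the forward hostname
    networks:
      proxy: {}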


I've seen three cases where the docker socket gets exposed to the container (perhaps there are more, but I haven't seen them):
Watchtower, which does auto updates and/or notifies people
Nextcloud AIO, which uses a management container that controls the docker socket to deploy the rest of the stuff nextcloud wants.
Traefik, which reads the docker socket to automatically reverse proxy services.
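For the Traefik case, the socket only needs to be mounted read-only, and the per-service routing lives in labels. Roughly (hostnames and router names here are made up):
    services:
      traefik:
        image: traefik:v3                 # verify the tag you want
        command:
          - --providers.docker=true
          - --providers.docker.exposedbydefault=false
          - --entrypoints.websecure.address=:443
        ports:
          - "443:443"
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:ro   # read-only socket access
      myapp:
        image: example/app                # placeholder
        labels:
          - traefik.enable=true
          - traefik.http.routers.myapp.rule=Host(`myapp.example.com`)   # hypothetical hostname
          - traefik.http.routers.myapp.entrypoints=websecure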
Nextcloud does the AIO because Nextcloud is a complex service, and it grows even more complex if you want more features or performance. The AIO handles deploying all the tertiary services for you, but something like this is how you would do it yourself: https://github.com/pimylifeup/compose/blob/main/nextcloud/signed/compose.yaml . Also, that example docker compose does not include other services, like Collabora Office, which is the Google Docs/Sheets/Slides alternative, a web-based office suite.
Compare this to the Kubernetes deployment, which, yes, may look intimidating at first. But actually, many of the complexities that the Docker deploy of Nextcloud has are automated away. Enabling Collabora Office is just collabora.enabled: true in its configuration. Tertiary services like Redis or the database are included in the Kubernetes package as well. Instead of making you configure the containers yourself, it lets you configure the database parameters via YAML, and other nice things.
For case 3, Kubernetes has a feature called an "Ingress", which is essentially a standardized configuration for a reverse proxy that you can either run separately, or get as part of the package. For example, the Nextcloud Kubernetes package I linked above has a way to handle ingresses in its config.
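To make that concrete, a values sketch for the Nextcloud helm chart might look roughly like this (exact keys depend on the chart version, so treat it as illustrative):
    nextcloud:
      host: cloud.example.com    # hypothetical hostname
    ingress:
      enabled: true              # the chart wires up the reverse proxy for you
    collabora:
      enabled: true              # bundled Collabora Office instead of hand-rolling a container
    postgresql:
      enabled: true              # chart-managed database
    redis:
      enabled: true              # chart-managed cache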
Kubernetes handles these things pretty well, and it's part of why I switched. I do auto upgrades, but only within each service's supported stable release, where upgrades are compatible and won't break anything. This lets me get automatic security updates for a period of time before having to do a manual and potentially breaking upgrade.
TLDR: You are asking questions that Kubernetes has answers to.


Many helm charts, like authentik or forgejo, integrate Bitnami helm charts for their databases, so that's why this is concerning to me.
But I was planning to switch to operators like CloudNativePG for my databases instead and disable the built-in Bitnami images. When using the built-in Bitnami images, automatic migration between major releases is not supported; you have to do it yourself manually, and that disappointed me.
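For reference, a minimal CloudNativePG cluster looks roughly like this (name and sizes are placeholders); the operator then owns upgrades and failover instead of the chart's bundled image:
    apiVersion: postgresql.cnpg.io/v1
    kind: Cluster
    metadata:
      name: authentik-db    # hypothetical name
    spec:
      instances: 2
      storage:
        size: 10Gi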


I’m on my phone rn and can’t write a longer post. This comment is to remind me to write an essay later. I’ve been using authentik heavily for my cybersecurity club and have a LOT of thoughts about it.
The tldr about authentik's risk of enshittification is that authentik follows a pattern I call "supportware". It's when extremely (intentionally/accidentally) complex software (intentionally/accidentally) lacks edge cases in its docs, because you are supposed to pay for support.
I think this is a sustainable business model, and I think keycloak has some similar patterns (and other Red Hat software).
The tldr about authentik itself is that it has a lot of features, but not all of them are relevant to your usecase, or worth the complexity. I picked up authentik for invites (which afaik are rare; also, the official docs about setting up invites were wrong, see supportware), but invites may not be something you care about.
Anyway. Longer essay/rant later. Despite my problems, I still think authentik is the best for my usecase (cybersecurity club), and other options I've looked at, like zitadel (seems to be more developer focused) or LDAP + an SSO service (no invites afaik), are a worse fit.
Sidenote: Microsoft Entra offers similar features to what I want from authentik, but I wanted to self host everything.


Straying away from utilities, games are always fun to host. I got started with self hosting by hosting a minecraft server, but there are plenty of options.


that all those CD tools were specifically tailored to run as workers in a deployment pipeline
That’s CI 🙃
Confusing terms, but yeah. With ArgoCD and FluxCD, they just read from a git repo and apply it to the cluster. In my linked git repo, flux is used to install “helmreleases” but argo has something similar.
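For the curious, a Flux HelmRelease is just another YAML file in the repo; a hedged sketch (names, namespaces, and the version range are placeholders):
    apiVersion: helm.toolkit.fluxcd.io/v2   # v2beta2 on older Flux versions
    kind: HelmRelease
    metadata:
      name: authentik
      namespace: authentik
    spec:
      interval: 1h
      chart:
        spec:
          chart: authentik
          version: "2024.x"    # semver range: patch/minor updates apply automatically
          sourceRef:
            kind: HelmRepository
            name: authentik
            namespace: flux-system
      values: {}               # chart values go here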


garden seems similar to GitOps solutions like ArgoCD or FluxCD for deploying helm charts.
Here is an example of authentik deployed using helm and fluxcd.
https://intronetworks.cs.luc.edu/