cross-posted from: https://programming.dev/post/39212874
I recently migrated my services from rootful `docker` to rootless `podman` quadlets. It went smoothly, since nothing I use actually needs to be rootful. Well, except for `caddy`: it needs to be able to attach to the privileged ports 80 and 443.

My current way to bypass that is `HAProxy` running as root, forwarding connections using the PROXY protocol. (I tried `firewalld`, but that makes the client IP opaque to `caddy`.) But that adds an extra layer, which means extra latency. It's perfectly usable, but I'd like to get rid of it, if possible.
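For reference, that workaround looks roughly like this (a sketch: the 8443 backend port is just a placeholder for whatever unprivileged port `caddy` actually listens on, and the `proxy_protocol` listener wrapper is built into recent `caddy` versions):

```
# haproxy.cfg (sketch): pure TCP passthrough, client IP preserved via PROXY protocol
frontend https-in
    mode tcp
    bind :443
    default_backend caddy_https

backend caddy_https
    mode tcp
    # 8443 is a placeholder for caddy's unprivileged listen port
    server caddy 127.0.0.1:8443 send-proxy-v2
```

```
# Caddyfile global options (sketch): accept the PROXY header so the real client IP survives
{
	servers {
		listener_wrappers {
			# must come before tls so the PROXY header is read first
			proxy_protocol {
				allow 127.0.0.1/32
			}
			tls
		}
	}
}
```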
I'm willing to run `caddy` in rootful `podman` if needed. But from what I understand, that means I can't have it in the same rootless network as my other containers. I really don't wanna open most of my containers' ports, so that's not an option.

So, I'm asking whether any of these three things are possible:
- Use `firewalld` to forward ports to `caddy` without obscuring the client's IP.
- Make rootful `caddy` share a network with other rootless containers.
- Assign privileged ports to `caddy` somehow, in rootless mode. (I know there's a way to make all these ports unprivileged, but is it possible to only assign these 2 ports as unprivileged?)
Or maybe there’s a fourth way that I’m missing. I feel like this is a common enough setup, that there must be a way to do it. Any pointers are appreciated, thanks.
You can use rootless caddy via systemd socket activation. Here's a basic setup:
- `rootless-caddy.service`

```
[Unit]
Description=rootless-caddy
Requires=rootless-caddy.socket
After=rootless-caddy.socket

[Service]
# a non root user here
User=El_Quentinator
ExecStart=podman run --name caddy --rm -v [...] docker.io/caddy:alpine

[Install]
WantedBy=default.target
```

- `rootless-caddy.socket`

```
[Socket]
BindIPv6Only=both

### sockets for the HTTP reverse proxy
# fd/3
ListenStream=[::]:443
# fdgram/4
ListenDatagram=[::]:443

[Install]
WantedBy=sockets.target
```

- `Caddyfile`

```
{$SITE_ADDRESS} {
	# tcp/443
	bind fd/3 {
		protocols h1 h2
	}
	# udp/443
	bind fdgram/4 {
		protocols h3
	}
	[...]
}
```

And that's it really.
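Assuming both units are dropped into `/etc/systemd/system/`, wiring it up is the usual:

```
sudo systemctl daemon-reload
sudo systemctl enable --now rootless-caddy.socket
```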
You can find a few more examples over here: https://github.com/eriksjolund/podman-caddy-socket-activation
Systemd socket activation has a few more interesting advantages on top of unlocking privileged ports:
- not binding any port on the container at all, which in turn allows using `--network none` while still being able to connect to it via the systemd socket. Pretty neat to expose some web app while completely cutting off its own external access (see the sketch after this list).
- getting rootful networking performance
- starting up your service only when something connects to it (and potentially shutting it back down after some idle time)
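For example, the service unit above only needs one extra flag for that fully isolated variant (a sketch, everything else unchanged):

```
[Service]
User=El_Quentinator
# no container network at all: the inherited socket fd is the only way in
ExecStart=podman run --name caddy --rm --network none -v [...] docker.io/caddy:alpine
```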
The drawbacks are that the file descriptor binding is a bit awkward and not always supported (caddy/nginx/haproxy do support it though), and that podman pods / kube do not support it (or at least not yet).
It seems that I'd still need to set `net.ipv4.ip_unprivileged_port_start=80` in sysctl, which I don't want to do. If I do it, the socket isn't even strictly necessary.

TBH I haven't played with passing caddy's podman network to other containers; mine is a simple reverse proxy to other standalone containers, not directly connected via `podman run --network` (or a quadlet network). In my scenario I can at least confirm that `net.ipv4.ip_unprivileged_port_start` doesn't need to be modified; the only annoyance is that I cannot use a systemd user service, even though the end process doesn't run as root.

EDIT: Actually, looking at the examples a bit more closely, I think the primary difference from my setup is that there the systemd socket is started with `systemd --user`, which thus requires the sysctl change, whereas I'm not using a systemd user service, relying instead on `User=some-non-root-user` to get rootless podman, but requiring root privileges to manage the systemd service.
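For reference, that sysctl is a single threshold, not a per-port allowlist, which is why it can't be scoped to just 80 and 443. Persisted, it would look like:

```
# /etc/sysctl.d/99-unprivileged-ports.conf
# lowers the privileged-port threshold for the whole host (not per port)
net.ipv4.ip_unprivileged_port_start = 80
```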
Rootless podman caddy doesn't need those privileged ports if you have your server behind a firewall device. You can map the ports on the firewall/router 80:8080, and then on the caddy container 8080:80. This way there's no need for privileged ports, and the traffic appears to arrive on port 80 (and 443 works the same way).
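A sketch of that double mapping (assuming the router forwards WAN :80/:443 to the host's :8080/:8443):

```
# router: WAN :80 -> host :8080, WAN :443 -> host :8443
podman run -d --name caddy -p 8080:80 -p 8443:443 docker.io/caddy:alpine
```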
I mentioned in the post that it seems to make the client IP opaque to `caddy`.

I've never used your exact setup, but I have had issues with a web server behind a WAF not getting the client IP (all user traffic showed up as the WAF's IP). In my case, the WAF was appending the client IP in a header, and I just had to tell the web app to use that header as the client IP instead of the actual one. Again, not sure if this helps since I've never used podman or caddy (this setup was with WordPress and an Azure Application Gateway), but the same principle might apply.
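In `caddy` terms, the closest equivalent would be trusting the forwarded header from the proxy via a global option, something like this (the 10.88.0.0/16 range is a placeholder for wherever the proxy actually sits):

```
# Caddyfile global options (sketch): trust X-Forwarded-For only from these source addresses
{
	servers {
		trusted_proxies static 10.88.0.0/16
	}
}
```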