• 2 Posts
  • 27 Comments
Joined 2 years ago
Cake day: December 12th, 2023

  • I have two systems that sort of work together.

    The first system involves a bunch of text files, one for each task: OS installation, basic post-install tasks, and a file for each program I add (like UFW, AppArmor, ddclient, Docker and so on). They basically look like scripts with comments. If I want to, I can just copy/paste everything into a terminal and reach the specific state I want to be at.
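
    One of those files looks roughly like this (the package and the exact commands here are just illustrative, not lifted from my real notes):

    # --- ufw: basic firewall ---
    apk add ufw
    # default deny incoming, allow outgoing
    ufw default deny incoming
    ufw default allow outgoing
    # rate-limit SSH on my custom port
    ufw limit 5025/tcp
    ufw enable
    # start on boot (OpenRC)
    rc-update add ufw default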

    The second system is a sort of “skeleton” file tree that only contains all the files that I have added or modified.

    Here's an example of what my server skeleton file tree looks like:
    .
    ├── etc
    │   ├── crontabs
    │   │   └── root
    │   ├── ddclient
    │   │   └── ddclient.conf
    │   ├── doas.d
    │   │   └── doas.conf
    │   ├── fail2ban
    │   │   ├── filter.d
    │   │   │   └── alpine-sshd-key.conf
    │   │   └── jail.d
    │   │       └── alpine-ssh.conf
    │   ├── modprobe.d
    │   │   ├── blacklist-extra.conf
    │   │   └── disable-filesystems.conf
    │   ├── network
    │   │   └── interfaces
    │   ├── periodic
    │   │   └── 1min
    │   │       └── dynamic-motd
    │   ├── profile.d
    │   │   └── profile.sh
    │   ├── ssh
    │   │   └── sshd_config
    │   ├── wpa_supplicant
    │   │   └── wpa_supplicant.conf
    │   ├── fstab
    │   ├── nanorc
    │   ├── profile
    │   └── sysctl.conf
    ├── home
    │   └── pi-user
    │       ├── .config
    │       │   └── ash
    │       │       ├── ashrc
    │       │       └── profile
    │       ├── .ssh
    │       │   └── authorized_keys
    │       ├── .sync
    │       │   ├── file-system-backup
    │       │   │   ├── .sync-server-fs_01_root
    │       │   │   └── .sync-server-fs_02_boot
    │       │   └── .sync-caddy_certs_backup
    │       ├── .nanorc
    │       └── .tmux.conf
    ├── root
    │   ├── .config
    │   │   └── mc
    │   │       └── ini
    │   ├── .local
    │   │   └── share
    │   │       └── mc
    │   │           └── history -> /dev/null
    │   ├── .ssh
    │   │   └── authorized_keys
    │   ├── scripts
    │   │   ├── automated-backup
    │   │   └── maintenance
    │   ├── .ash_history -> /dev/null
    │   └── .nanorc
    ├── srv
    │   ├── caddy
    │   │   ├── Caddyfile
    │   │   ├── Dockerfile
    │   │   └── docker-compose.yml
    │   └── kiwix
    │       └── docker-compose.yml
    └── usr
        └── sbin
            ├── containers-down
            ├── containers-up
            ├── emountman
            ├── fs-backup-quick
            └── rtransfer
    

    This is useful to me because I can keep track of every change I make. I even have it set up so I can use rsync to quickly chuck all the files into place after a fresh install or after adding/modifying files.
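
    The rsync side is something along these lines (the paths and hostname are placeholders):

    # push the skeleton tree onto the device, preserving permissions,
    # owners, timestamps, ACLs and extended attributes
    rsync -aAXv ./server-skeleton/ root@my-pi:/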

    I also created and maintain a “quick install” guide so I can install a fresh OS, rsync all the modified files from my skeleton file tree into place, then run through all the commands in the guide to get back to the same state in a minimal amount of time.


  • I experienced gatekeeping issues long before I got into self-hosting specifically. Years ago I wanted to learn C++ for Arduino and I was constantly talked down to for asking questions.

    “Why don’t you just do …” in response to a question feels very rude as a newcomer because it feels like I am being talked down to for not knowing what others already know. Even when I showed I was making an effort to learn on my own, I was still belittled.

    I’m all for hearing different ways of approaching my issue, but from the replies it often feels like other people insist there is only one true way to handle it.

    When I first got into self-hosting, people kept pushing Cloudflare on me. When I expressed concern over a large centralized corporation having that much control, and over how they might have service issues, I was mocked really hard. Half a year later there was a significant outage, and suddenly there was all this talk about how centralized the internet is and how bad that is.

    After that I took it upon myself to find alternative ways to protect myself without Cloudflare’s services, but every step of the way has been an isolating experience. Every step of the way has been full of people saying that my efforts are pointless and that the bots will win anyways so I shouldn’t bother.

    I decided to try to secure myself through multiple layers of obscurity, and every question in that direction has been met with people saying that obscurity is not security, the bots will find you anyways!

    I’ve stopped myself from asking too many questions now. I still keep learning in my own direction, and I feel like I’ve managed to find multiple solutions that both obscure and protect me. I’ve been checking my logs constantly for months now, and bot activity is less than I expected in the places I expected it, and completely zero in other places where I thought there would be some.

    I want to share what I have learned and my experiences but I know I will receive backlash for deviating from the norm.

    I’ve spent a lot of my self-hosting efforts trying to find ways to protect myself with minimal use of third-party services, documenting as much as I could, only to feel afraid to share what I have learned.

    This comment may not be about learning self-hosting as a beginner specifically, but the vibe has been pretty damn consistent throughout my learning of C++, self-hosting, Linux and shell scripting. All things I enjoy, but all so full of people ready to talk down to someone who wants to learn.


  • My web-facing server has just enough packages installed to (kinda securely) host Caddy and Kiwix Docker containers working with my domain name, and to make a comfortable work environment through SSH. My Pi for my Home Assistant Docker container has less because it’s locked down to just my local network.

    I also wrote my own install scripts so reinstalling everything and getting it back to a running state would take about 15 minutes for each device.

    And I also wrote my own backup/restore scripts that evolved over 3/4 of a year. I use them often so I have confidence in those scripts.

    I personally don’t really care too much. I have multiple ways of dealing with issues for something that’s a hobby to me. Which is why I stick to simplicity.

    I’m sure this is a thing for people to worry about when dealing with more complex setups. I just wanna vibe out in my tiny corner of the internet.




  • I agree with the last point. I only mentioned that because I don’t really know what other setting in my sshd config is hiding my SSH port from nmap scans. That just happened to be the last change I remember making before running an nmap scan again and finding that my SSH port no longer showed up.

    Accessing SSH still works as expected with my keys, and for my use case I don’t believe I need an additional passphrase. Self-hosting is just a hobby for me and I am very intentional about what I place on my web-facing server.

    I want to be secure enough but I’m also very willing to unplug and walk away if I happen to catch unwanted attention.


  • Thanks for the insight. It’s useful to know what tools are out there and what they can do. I was only aware of nmap before, which I use to make sure the only open ports are the ones I want open.

    My web-facing device only serves static sites and a file server with non-identifiable data I feel indifferent about being on the internet. No databases, and no stress if it gets targeted or goes down.

    Even then, I still like to know how things work. Technology today is built on so many layers of abstraction that it all feels like an infinite rabbit hole now. It’s hard to look at any piece of technology as secure these days.


  • I use a different port for SSH, and I also use authorized keys. My sshd is set up to only accept keys, with no passwords and no keyboard-interactive input. Also, when I run nmap on my server, the SSH port does not show up. I’ve never been too sure how hidden the SSH port really is beyond the nmap scan, but I just assumed it would be discovered somehow if someone was determined enough.
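
    For reference, the relevant sshd_config lines look roughly like this (the port number is a placeholder and the option names assume a reasonably recent OpenSSH):

    # non-default port
    Port 5025
    # keys only: no passwords, no keyboard-interactive prompts
    PubkeyAuthentication yes
    PasswordAuthentication no
    KbdInteractiveAuthentication no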

    In the past month I renamed my devices and accounts to things less obvious. I also took a suggestion from someone in this community and set up my TLS to use wildcard domain certs. That way my subdomains aren’t being advertised in the public Certificate Transparency logs that Certificate Authorities publish to. I simply don’t use the base domain name anymore.


  • I found BashWrite, which is a very simple static site generator written completely in Bash as a single-file script.

    The only dependency is an up-to-date sed command, which most systems should have. I use Alpine Linux, which ships a minimal BusyBox sed, so I had to install the full GNU sed through my package manager.
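
    On Alpine that's a single command (assuming the standard sed package, which provides GNU sed):

    # replace the BusyBox sed with the full GNU implementation
    apk add sed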

    It’s simple, basic, and supports the majority of Markdown formatting. There are some limitations due to it being written purely in Bash, but I am personally okay with that.

    I found it on this list of static site generators if you’re curious to see more options.



  • podman ps shows the following:

    CONTAINER ID  IMAGE                                 COMMAND               CREATED         STATUS         PORTS                                                         NAMES
    daae60bdcc65  docker.io/library/caddy-caddy:latest  caddy run --confi...  47 minutes ago  Up 47 minutes  0.0.0.0:80->80/tcp, 0.0.0.0:5050->443/tcp, 2019/tcp, 443/udp  caddy
    

    netstat -tunpl shows the following:

    Active Internet connections (only servers)
    Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
    tcp        0      0 0.0.0.0:5025            0.0.0.0:*               LISTEN      3270/sshd: /usr/sbi 
    tcp        0      0 0.0.0.0:5050            0.0.0.0:*               LISTEN      7342/conmon         
    tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      7342/conmon         
    tcp        0      0 10.89.0.1:53            0.0.0.0:*               LISTEN      7336/aardvark-dns   
    tcp6       0      0 :::5025                 :::*                    LISTEN      3270/sshd: /usr/sbi 
    udp        0      0 10.89.0.1:53            0.0.0.0:*                           7336/aardvark-dns 
    

    The only difference in the netstat output between Docker and Podman is that Podman shows entries for aardvark-dns and Docker does not, which is something I expect.



  • I sat down and managed to get wildcard certs working.

    I figured I would leave my Caddyfile here in case anyone in the future needs a working reference. This is based on the Caddyfile mentioned in the original post.

    Caddy Reference

    Caddyfile
    # GLOBAL ENCRYPTION - DESEC.IO
    {
            acme_dns desec {
                    token "DeSEC.io Token Number"
            }
    }
    
    *.samplesite.ca {
            # SITE WIDE ENCRYPTION
            tls {
                    dns desec {
                            token "DeSEC.io Token Number"
                    }
            }
            # SUB DOMAIN #1
            @files host files.samplesite.ca
            handle @files {
                    root * /srv
                    file_server {
                            hide misc
                            browse
                    }
            }
            # FALLBACK FOR UNHANDLED DOMAINS
            handle {
                    abort
            }
    }
    




  • I’ve been using Alpine Linux. I’ve always leaned towards minimalism in my personal life so Alpine seems like an appropriate fit for me.

    Since what is installed is intentional, I am able to keep track of changes more accurately. I keep a document for doing the complete setup by hand, then reduce that to an install script so I can get back to the same state in a minimal amount of time if needed.

    Since I only have a laptop and two Raspberry Pis, with no intention of expanding or upgrading, this works for me as a personal hobby.

    I’ve even gone as far as to use Alpine Sway as a desktop to keep everything similar as well.

    I wouldn’t recommend it for anyone who doesn’t have the time to learn. It doesn’t use systemd, and packages are often split, meaning you will have to figure out what additional packages you may need beyond the core one.

    I appreciate the approach Alpine takes because, from a security point of view, fewer moving parts means less surface area to exploit. In today’s social climate, who knows how or when I’ll become a target.



  • Sounds like what I’ve been doing manually for a while now as I learn more. For my desktop I have three scripts: one to install Alpine with full disk encryption, one for the initial setup up to the first required reboot, and the last for the remaining setup plus transferring files.

    I’ve been learning how to edit files with sed, cat, echo and tee commands to help automate everything from a fresh install.
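
    The kinds of edits look roughly like this (the paths and values are illustrative, not lifted from my actual scripts):

    # flip a setting in place with sed
    sed -i 's/^#Port 22/Port 5025/' /etc/ssh/sshd_config

    # append a line to a root-owned file with tee
    echo 'net.ipv4.ip_forward = 0' | tee -a /etc/sysctl.conf

    # write out a whole file with cat and a heredoc
    cat > /etc/motd <<'EOF'
    Fresh install complete.
    EOF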

    Similar process for my Pis, except I just copy-paste blocks of commands through a terminal instead of a script.

    To transfer files to all their proper directories, I have a whole system for that using rsync. I basically keep a bare-bones directory tree with only the files I have worked on. Then I have an rsync command to send all those files onto the Pi’s file system in a way that retains all the file and folder attributes.

    I wrote an rsync tool for myself to help me keep all these commands in files that I can neatly organize. I use that tool so much that it’s now my entire backup system. With a bunch of numbered files, I can automate the backup of my phone, two Pis and laptop to a partition on my laptop, then make an additional copy to my external SSD, all in one command. And I have very high confidence in my restores since I do them frequently while testing new stuff. I also failed a lot before getting that much confidence.

    I have issues with over organization if you couldn’t tell by now hahaha.


  • I personally use rsync since I do most of my work at the command line these days. It’s taken nearly half a year to really understand it, but it offers the flexibility I desire.

    I have a small network with only a handful of devices. I keep all my incremental backups on encrypted partitions and encrypted detachable SSDs which I manually decrypt. Rsync is set up to use SSH, so there’s some form of encrypted transfer, but that’s not actually a priority for me, just an added benefit.
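
    For context, the standard pattern for incremental snapshots with plain rsync hard-links each new snapshot against the previous one. My own setup differs in the details, and the paths and dates here are placeholders:

    # new snapshot directory, hard-linked against the previous one;
    # unchanged files take up no extra space
    rsync -aAX -e 'ssh -p 5025' \
        --link-dest=/mnt/backups/2025-01-01 \
        user@my-pi:/home/ /mnt/backups/2025-01-02/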

    I also use rsync to sync files and directories while maintaining additional system attributes across multiple systems. That is to say, what’s root or user accessible stays root or user accessible after the transfer is complete.

    If I desired more protection, I’d probably look into BorgBackup. Currently I just use encryption as an annoyance deterrent. I also stick to the base rsync command because every other option I tried brought complexities that have all failed me. I at least have a high level of confidence in my backup/restore process now.



  • I haven’t tried Arch at all. I used Linux Mint for a year, LMDE for a year, and only really started working with the command line last December. I think I chose to try Alpine because I wanted my web-facing devices to have the least amount of software installed. Security-wise it made sense to me to have less surface area to exploit.

    It took a bit of extra effort for me to learn how to use OpenRC as the init system, as well as to learn Linux from a bare-bones perspective.

    I actually found BusyBox ash interesting to work with, and that’s the only shell I currently use. I even wrote a whole script around rsync in a POSIX-friendly way because I liked the idea of portable scripting.

    If you’re interested, I can send you a link that contains the setup notes for my server. It’s about 85% of my setup process, the rest being files that are mostly customization, which I rsync into place towards the end. That can give you an idea of what Alpine on ARM is like.