Email Deliverability

Way back in 1996, I remember attending a Birds of a Feather session at the USENIX technical conference on email and spam. The people in the room railed at the spam problem, and it was clear that the leaders were taking spam as a personal attack. I sat quietly in the room, silently noting to myself that none of the proposed solutions, not even adding extensions to the SMTP protocol, were going to stop the growing commercialization of email as a medium. This is because any magic dust that you can sprinkle on email to mark it as trustworthy and not spam can be, and will be, ruthlessly adopted by commercial senders to increase their own deliverability.

Increasing deliverability

I just added DKIM signing to messages that come from vindaloo.com. I did this because I added a new domain to my mail server so I could support my wife’s LLC: moderncrc.com. Honestly, I might have been better off outsourcing this to Purely Mail, and if you are here trying to figure out how to set up mail for your own domain, I’d say that for 90% of people, outsourcing to someone like Purely Mail is the right way to go.

For self-hosters and smaller companies considering hosting their own email, understand that deliverability will be your biggest problem. Getting other people to accept mail from you, and not automatically treat it as spam to be quarantined rather than read, is the biggest hurdle you will have to get over. On the modern internet, achieving deliverability means jumping through a few hoops.

  • You need to get an IPv4 address that hasn’t been fouled by someone using it to send spam. When addresses get fouled, they get enumerated onto lists called RBLs, or real-time blackhole lists: DNS-based lists that say this IP address could be a source of spam. Finding a clean address isn’t generally difficult, but it means that you won’t ever be able to send SMTP mail from an end-user internet connection such as an Xfinity or FiOS account, and to be clear, I mean cable or fiber, business or residential. The best way past this hurdle is to set up your outgoing SMTP server on a VPS from someone like vultr.com. After this you’ll probably need to put in a support request to be allowed to send mail at all. Of course, this pretty much means that you need to know how to run a Linux server, with all that entails.
  • You’ll need to set up DNS for your domain: at least SPF and DMARC, but probably also DKIM. Microsoft, Google, and Yahoo all require DMARC and either SPF or DKIM before they will deliver your messages. SPF is simple: you just enumerate the IP addresses that are allowed to send email for your domain. DKIM is a little harder. You set up a public/private key pair; then, for each message you send, your mail server extracts a portion of the message and creates a signature over that portion using the private key. You publish the public key in your DNS. People receiving email from you can verify the signature, and if it all works, they know the message really came from you rather than from a spammer. The records end up looking something like the sketch after this list.
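For a rough sketch of what the three records look like in a zone file, see below; the domain, address, selector, and key are placeholders for illustration, not my real vindaloo.com records:

; SPF: list the addresses allowed to send mail for the domain
example.com.                      IN TXT "v=spf1 ip4:203.0.113.25 -all"

; DKIM: the public half of the signing key, published under a selector
selector1._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq...AQAB"

; DMARC: tell receivers what to do with mail that fails SPF/DKIM checks
_dmarc.example.com.               IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"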

Where we ended up

All of this generally works, but my frustration stems from the fact that it does very little to reduce spam. For years, over 80% of the spam that I receive has had valid SPF and DKIM, and I’m writing this today because yet another obvious phishing attempt was sent to me. Of course, it passed SPF and DKIM with flying colors.

Thus we end up in a world of unintended consequences. Rather than the internet as envisioned, a large group of equally participating networks, we are slowly moving toward a world where only Microsoft, Google, and Yahoo can deliver email.

Git mirroring

Not many people choose to run their own GitLab instance these days. My preference for self reliance means that I do. If you value self reliance, I have some recommendations:

  • Use Ansible, Chef, or Puppet to build your GitLab instance, because you are going to build two.
  • Build one GitLab server for your group’s consumption. Put this one in a data center close to your users for good performance.
  • Build a second GitLab server in a remote location, perhaps at your favorite cloud provider. Wherever the second GitLab instance is, you’ll want either one-way or bidirectional access via https or ssh between the two servers.
  • Follow the directions in your GitLab instance under Help -> User Documentation -> Mirror a Repository to mirror each repository from the primary to the secondary.

At this point, you’ve created a great plan B for disaster recovery in case something terrible happens to your GitLab. For me, GitLab stores the Terraform and Ansible code that I use to build my infrastructure. The goal is to be able to jumpstart that infrastructure from the mirror. I call the mirror my plan B because my plan A is to restore GitLab directly from a nightly backup.

Setting up the mirror

Setting up the mirror is well documented. In broad strokes, here are the steps:

  • On the mirror, create a group and project to hold the mirrored repository.
  • Choose Push or Pull mirroring. In Push, the primary will push updates to the secondary as you work. In Pull, the mirror will periodically poll the primary for changes. You’ll have to decide what works best for you.
  • Fill out the form and perform any needed setup. When using push over ssh, this means setting up the primary to push and then copying the ssh public key from the primary and adding it as an allowed key on the mirror user.

As you configure mirroring, remember that constructing the mirror URL can be tricky, especially if you want to use ssh as the transport. A typical git clone string looks like git@git.example.com:group/project.git, but the mirroring URL for the same repository is ssh://git@git.example.com/group/project.git. The difference lies between the server, git.example.com, and the path: when cloning, the separator is a colon, ‘:’; when mirroring, it’s a slash, ‘/’. Getting authentication right can also be tricky. Mirroring more than a few repositories over ssh becomes awkward because GitLab generates a new ssh key for each repository, and my concern is that the proliferation of keys will eventually cause git to fail with a “too many authentication attempts” error. This is one of the few places where I like git+https more than git+ssh, because with git+https you can create a single user token for all of your mirroring. That said, git+https is not without its pitfalls. If, like me, you have your own CA, you have the additional problem that git doesn’t do a good job of configuring curl’s CA. You have two choices here, both run on the box initiating the transfer: git config --global http.sslCAPath to use your CA directory, or git config --global http.sslCAInfo to point at your CA file, as sketched below.
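To make the distinction concrete, here are the two URL forms side by side along with the CA settings; the hostname, group/project, and CA paths are illustrative placeholders:

# clone syntax: a colon separates the host from the path
git clone git@git.example.com:group/project.git

# mirror URL syntax: ssh:// scheme, with a slash between host and path
ssh://git@git.example.com/group/project.git

# if you use git+https with your own CA, tell git (and thus curl) where to find it
git config --global http.sslCAPath my-ca-path        # a directory of trusted CA certs
git config --global http.sslCAInfo my-ca-file.pem    # or a single CA bundle file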

Once you’ve set up mirroring, you have a great plan B if the day ever comes that your GitLab server becomes unavailable: a working GitLab mirror that you can use in any way you please. You can even pull backups of the mirror server so you have a redundant, offsite backup.

When an Ansible task fails

It’s been a frustrating week. If it can break, it has broken, and lately I’ve been shining up my Ansible to fix it. So I find myself trying to use my shiny new playbooks to address problems and to get my machines all lined up. Today my ansible-playbook ... run hung on an ARM-based mini-NAS that I have in my vacation house. My first assumption was that Ansible was the problem. That was wrong. To find the real problem, I ran the playbook and then logged onto the machine separately. A quick ps alx gave me this little snippet:

1001 43918 43917  2  52  0  12832  2076 pause    Is    1       0:00.03 -ksh (ksh)
   0 43943 43918  3  24  0  18200  6916 select   I     1       0:00.04 sudo su -
   0 43946 43943  2  26  0  13516  2776 wait     I     1       0:00.02 su -
   0 43947 43946  2  20  0  12832  2024 pause    S     1       0:00.03 -su (ksh)
   0 51594 43947  3  20  0  13464  2572 -        R+    1       0:00.01 ps alx
   0 51578 51527  2  52  0  12832  1980 pause    Is+   0       0:00.01 ksh -c /bin/sh -c '/usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/Ansib
   0 51579 51578  3  52  0  13536  2552 wait     I+    0       0:00.01 /bin/sh -c /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkg
   0 51580 51579  3  40  0  36756 23668 select   I+    0       0:01.51 /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkgng.py
   0 51582 51580  0  52  0  21388  9048 wait     I+    0       0:00.04 /usr/sbin/pkg update
   0 51583 51582  1  52  0  21708 10104 ttyin    I+    0       0:00.19 /usr/sbin/pkg update

This is relevant because it traces the process tree from my ssh login all the way down to the process that’s hung up. Note well that the pkg update at PID 51583 is in a ttyin state. Running pkg update manually gave me this:

# pkg update
Updating FreeBSD repository catalogue...
Fetching packagesite.pkg: 100%    6 MiB   3.3MB/s    00:02
Processing entries:   0%
Newer FreeBSD version for package zziplib:
To ignore this error set IGNORE_OSVERSION=yes
- package: 1302001
- running kernel: 1301000
Ignore the mismatch and continue? [y/N]: 

The why of all this doesn’t really matter much. In this case the machine is running a stale copy of FreeBSD, 13.1, and pkgng is asking my permission to use a package repository built for FreeBSD 13.2. What’s important here is a basic debugging technique built on the question: how does Ansible actually work under the covers? The answer is that each Ansible builtin prepares a 100k or so blob of Python that it drops into …/.ansible/tmp on the remote machine, then runs that blob with the Python interpreter on that machine. The Python within the blob idempotently does the work. My blob needed to verify that the sudo package was installed on my box. For reasons that I don’t understand but also really don’t mind, it wanted to make sure that the local package collection was up to date first. It’s not normal for a box to hang on pkg update, but it’s not crazy either.
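One way past the prompt, until the OS itself gets upgraded, is something along these lines; IGNORE_OSVERSION is the knob pkg suggests in its own message, and the pkg.conf path is the stock FreeBSD location:

# answer the version-mismatch prompt non-interactively, just this once
IGNORE_OSVERSION=yes pkg update

# or make it stick until the OS catches up
echo 'IGNORE_OSVERSION = yes;' >> /usr/local/etc/pkg.conf

# the real fix, of course, is to bring the OS up to date
freebsd-update -r 13.2-RELEASE upgrade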

Nuke and Pave

I recently reinstalled MacOS on my work and home laptops and then brought back my working state using Time Machine on both. I’m always impressed by how much faster and better a computer is after you do this. My friend Matt Zagaja: https://zagaja.com calls this a “Nuke and Pave”, a term from here: https://www.macsparky.com/blog/2016/3/t0kcqkdxmkapwyo9eno0hv98ojd2kx and I love it. In my opinion, one of the bad side effects of MacOS’ success is that you don’t have to *Nuke and Pave* very often. I think I’d been carrying my working environment forward for more than 10 years without a refresh, and moving from High Sierra to Catalina added a bunch of unwanted quirkiness. This was probably because Apple has been deprecating a bunch of the tools that I used in 2012, and while I don’t use them today, they were still installing kernel extensions and other stuff that made my machine a little unstable. If you want to do your own *Nuke and Pave* on a Mac, you’ll need the following:

  1. The operating system you want to install. I used Big Sur 11.6. For MacOS, you want to download the OS and then use instructions like these: https://support.apple.com/en-us/HT201372 to create USB install media.
  2. If you use MacPorts, see the notes at the end to save a list of the ports that you run. You’ll need it when you rebuild.
  3. Backup media: If it’s important, you should have one or two backups of it. In this case you want a Time Machine backup. Disk-clone style backups would normally be quicker but don’t give you the granularity you need here. I use a USB-C to NVMe drive enclosure for speed. My second backup is on rotating rust.

The operation is pretty simple. You want to:

  1. Boot your Mac from the USB installer by shutting down completely, then booting and holding *Option* until your Mac presents you with a choice of boot media. It’s handy that newer Macs will boot on a keypress, so you can start this process by simply pressing and holding *Option*. If you are on Catalina or later, you have to boot to _recovery mode_ first by shutting down your Mac completely and using the utilities menu to enable booting from other media. If you have a firmware password on your Mac, you’ll need it to change this setting.
  2. Once you’ve booted from your install media, you need to erase and repartition the hard drive on your Mac. This is the point of no return so don’t take this step unless you trust your backups.
  3. Follow the install media instructions to reinstall MacOS on your computer. It will pause and ask you how to build users. What’s going on behind the scenes is that the Mac is using Migration Assistant to populate your home directory. Choose Time Machine backup, go into the menus, and trim out Applications, Settings, etc. You really only want to carry over data at this point. If you don’t migrate enough information, you can use Migration Assistant or Time Machine later to catch anything that you missed.
  4. Reinstall your apps using the App Store and whatever other sources you have. As a developer, I have a bunch of software installed that requires me to Control-click on the application and then give permission for it to run one time.
  5. Restore security permissions as needed. App Store packages generally won’t have this problem. Other packages will. I use Emacs as my main editor because I’ve been doing this for a while. That requires me to go into the System Preferences -> Security & Privacy -> Privacy pane and grant Emacs permission to read files from my specified locations.

That’s most of what you need. I did the operation overnight: I handled steps 1 through 3 and then went to sleep. When I woke up, I finished up 4 and 5.

A side note here for MacPorts or Homebrew users: you’ll want to restore your MacPorts/Homebrew environment as well. For MacPorts this isn’t hard. Basically, run sudo port list requested > ~/Desktop/ports-requested.txt. This will leave a copy of the ports you installed by hand in a text file. When you are rebuilding your machine, you’ll need to perform the prerequisites needed to run MacPorts; then you can use this output to reinstall the packages that you used, as sketched below. I don’t use Homebrew, but I imagine it has something similar.
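For the record, the round trip looks roughly like this; the awk field split assumes the port name is the first column of port list’s output, so eyeball the saved file before trusting it:

# before the wipe: save the ports you asked for by hand
sudo port list requested > ~/Desktop/ports-requested.txt

# after MacPorts is reinstalled: feed the names back in
awk '{print $1}' ~/Desktop/ports-requested.txt | xargs sudo port install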

OpenBSD on Raspberry Pi

 

I haven’t played with the Raspberry Pi quite enough. I’ll do a write-up on my garage door opener project at another time. But a really promising place for the Pi, in my opinion, is the role of a traveling router/access point. I don’t find the process of connecting to the WiFi in a hotel room particularly hard. It’s connecting back into my own network to access my services that is difficult. The methods that I have at my disposal are:

  • IPSec VPN
  • SSH/SCP to selected service
  • Direct access where things are configured for it

Running OpenBSD on a Raspberry Pi gives me a solid place to put an IPSec connection for the whole hotel room network. Getting there involves installing OpenBSD on the Pi in the first place.

Ingredients:

  • A Raspberry Pi 3B or 3B+; each model has plusses and minuses.
  • An OpenBSD Raspberry pi snapshot release available at this url.
  • OpenBSD does not support the Pi video yet, so the install console is serial. You need an Arduino/Raspberry Pi serial cable. The link points to a 4-pin style. It connects as follows:
    1. Black <-> Pi GND
    2. White <-> Pi TX0
    3. Green <-> Pi RX0
  • A fast USB stick. OpenBSD can’t run from a MicroSD card yet. This one works.
  • A WiFi adapter that you can live with. This is going to be a compromise because WiFi support has somewhat left the BSDs behind. These two, the CanaKit WiFi and the TP-Link WiFi, work.


SSL Everywhere? Maybe not cups

Last night I made the aggravating discovery that cups has gone SSL. The option to have cups protected by SSL is wonderful, but I’m not sure that SSL by default is a good thing for printing services. I discovered this because printing from my Apple machines was failing with no log messages on any of the Apple machines on my network. At first I thought this might be an IPv6 issue. Using tcpdump, I quickly determined that cups on my Mac was not only using IPv6 but was using the semi-random “private/temporary” address of my cups server. Continued debugging showed that IPv6 wasn’t the issue, and the private/temporary address wasn’t it either. Disabling encryption with the directive:

DefaultEncryption Never

did the trick. This is clearly not safe. What would be best would be cutting a certificate for my cups server. That’s problematic because two years from now, when the certificate expires, how long will it take me to figure out why printing has stopped working? Perhaps best would be to encrypt requests that need a password and allow cleartext communication for plain printing.
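CUPS appears to have a middle setting for exactly that; a minimal sketch of what I’d try in cupsd.conf, based on my reading of the documented values, is:

# cupsd.conf on the print server
# Never       - never encrypt (what I set above)
# IfRequested - encrypt only when the client asks, e.g. for authenticated requests
# Required    - always require TLS
DefaultEncryption IfRequested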

Turn off arp change noise on FreeBSD

If you run a FreeBSD server on a network with any Apple infrastructure (AirPort, Apple TV, etc.), then you are probably used to seeing lots of messages like this:

...
+arp: 169.254.124.133 moved from --- somewhere --- to - somewhere else- on em0
+arp: 169.254.124.133 moved from - somewhere else- to --- somewhere --- on em0
...

This is the Bonjour Sleep Proxy service in action. A device that provides a sleep proxy attempts to keep Bonjour services available on your network at all times by advertising the proxy’s IP address as the service destination while the true provider is sleeping. For example, if you have an older, non-networked shared printer connected to an iMac desktop, the sleep proxy will advertise its own address as the destination for your shared printer. If someone sends a print request to your printer, the sleep proxy intercepts the request, sends a wake-up packet to your iMac, and then the printing can actually go on.

This activity looks a lot like an ARP poisoning attack. If you want to check for that, look at the MAC addresses of the devices in question. You can look up the first three octets of a MAC address on Google; those are a manufacturer ID. If one or both of the devices is from Apple, it’s more likely that you have a Bonjour Sleep Proxy working on your network.

Over time these messages are disruptive on a FreeBSD server because they blow valid information out of the kernel’s dmesg buffer. You can still get the kernel’s boot dmesg by groveling through sysctl, but if you have a disk drive that’s misbehaving, that information will be lost in a day or two.
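As an aside, on a stock FreeBSD install the boot-time messages are also saved to a file at boot, which is usually easier than the sysctl route:

# boot-time kernel messages, captured by rc at boot
less /var/run/dmesg.boot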

To turn these messages off, do the following:

$ sudo su -
Password: 
# ## Fix this for this kernel boot session...
# sysctl -w net.link.ether.inet.log_arp_movements=0
net.link.ether.inet.log_arp_movements: 1 -> 0
# ## Fix this permanently.
# echo 'net.link.ether.inet.log_arp_movements=0' >> /etc/sysctl.conf
#

Ansible step zero

In my previous article I showed the steps to take to build an Ansible repository that you could grow to fit your existing infrastructure. The first step here is to set up the repository that you built so it can self-bootstrap. For that you’ll need to flesh out your inventory and build your first playbook.

Building Inventory

Ansible is driven off of an inventory. The inventory specifies the elements of your infrastructure and groups them, to make things easier to manage. Ansible supports three kinds of inventory: a static file in Windows-style .ini format, a yaml file, or a dynamic inventory. Dynamic inventory is the holy grail. I recommend starting with a yaml inventory.

Although both yaml and ini style inventories have roughly the same capabilities, I prefer yaml because if you work with Ansible, you’re going to become good friends with yaml no matter what. If you aren’t familiar with the yaml format, find some time to study it. Yaml is just a markup format that allows you to structure things. I didn’t really get yaml until I played with the Python yaml module. I realized that yaml, like json, allows you to write program variables into a file in a structured fashion: the Python yaml module can read a properly formatted yaml file and return a Python variable containing the contents of the yaml “document”, or it can take nearly any Python variable, a list, a dict, and so on, and write it out such that another program could read it. Yaml differs from json in that it is meant to be readable by human beings as well as parsers. If the consumer of your data is a program, use json. If a human is expected to read it, use yaml.

Your starting yaml inventory should look something like this:

---
all:
  children:
    maestro-test:
      vars:
        std_pkg:
          - ansible
          - terraform
          - git
        add_pkg:
          - emacs

      hosts:
        192.168.100.3:
          my_domain: mydomain.com
          my_host: maestro-test

This defines an inventory with one group: maestro-test. It includes one machine at IP address 192.168.100.3, and it defines some variables for the group. This should be stored in an appropriately named file:

base-maestro-inventory.yml

in the Ansible directory.
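Before writing the playbook, it’s worth confirming that Ansible parses the inventory the way you expect; ansible-inventory will dump the parsed structure back at you:

chris $ ansible-inventory -i base-maestro-inventory.yml --list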

The first playbook

With an inventory, you can build a playbook. The first playbook looks like this:

---
- hosts: maestro-test
  tasks:
    - name: Install standard packages
      package:
        name: "{{ item }}"
        state: latest
      with_items: "{{ std_pkg }}"

    - name: Install additional packages
      package:
        name: "{{ item }}"
        state: latest
      with_items: "{{ add_pkg }}"

This should be installed in a file named something like:

base-maestro-playbook.yml

in the Ansible directory. At this point, presuming that you have a machine, physical or virtual, at 192.168.100.3 into which you can ssh as root, you can bootstrap your maestro as follows:

chris $ ansible-playbook -i base-maestro-inventory.yml --user root base-maestro-playbook.yml

And that should install the correct packages onto your maestro test box. I’ll revisit this article later to add users.