Email Deliverability

Way back in 1996, I remember attending a Birds of a Feather session on email and spam at the USENIX technical conference. The people in the room railed at the spam problem and it was clear that the leaders were taking spam as a personal attack. I sat quietly in the room, silently noting to myself that none of the proposed solutions, not even adding extensions to the SMTP protocol, were going to stop the growing commercialization of email as a medium. This is because any magic dust that you can sprinkle on email to mark it as trustworthy and not spam can and will be ruthlessly adopted by commercial senders to increase their own deliverability.

Increasing deliverability

I just added DKIM signing to messages that come from vindaloo.com. I did this because I added a new domain to my mail server so I could support my wife’s LLC: moderncrc.com. Honestly, I might have been better off outsourcing this to Purely Mail and if you are here trying to figure out how to set up mail for your own domain, I say that for 90% of people, outsourcing to someone like Purely Mail is the right way to go.

For self-hosters and smaller companies considering hosting their own email, know that deliverability will be your biggest problem. Getting other people to accept mail from you, and not automatically quarantine it as spam rather than deliver it to be read, is the biggest hurdle you will have to get over. On the modern internet, achieving deliverability means jumping through a few hoops.

  • You need to get an IPv4 address that hasn’t been fouled by someone using it to send spam. When these addresses get fouled, they get enumerated onto lists called RBLs, or real-time blackhole lists. These are DNS-based lists that say this IP address could be a source of spam. Getting a clean address isn’t generally difficult, but it means that you won’t ever be able to send SMTP mail from an end-user internet connection such as an Xfinity or FiOS account. To be clear, I mean any end-user connection: cable or fiber, business or residential. The best way past this hurdle is to set up your outgoing SMTP server on a VPS from someone like vultr.com. After that you’ll probably need to put in a support request to be allowed to send mail at all. Of course, this pretty much means that you need to know how to run a Linux server, with all that that entails.
  • You’ll need to set up DNS for your domain: at least SPF and DMARC, but probably also DKIM. Microsoft, Google, and Yahoo all require DMARC and either SPF or DKIM before they will deliver your messages. SPF is simple: you just enumerate the IP addresses that you allow to send email for your domain. DKIM is a little harder. You set up a public/private key pair; then, for each message that you send, your email server extracts a portion of the message and creates a signature over that portion using your private key. You publish the public key in your DNS. People receiving email from you can verify the signature, and if it all works out, they know that you are the actual sender of the email rather than a spammer. A sketch of what these records might look like follows this list.
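
To make the DNS side concrete, here is a rough sketch of the records involved. Everything below is made up for illustration: example.com, the documentation-range IP address, the mail selector name, and the truncated key; your values will differ.

# SPF: only the listed address may send mail for the domain.
dig +short TXT example.com
"v=spf1 ip4:203.0.113.25 -all"

# DMARC: tell receivers what to do when SPF/DKIM fail, and where to send reports.
dig +short TXT _dmarc.example.com
"v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"

# DKIM: the public half of the signing key, published under a selector you choose.
dig +short TXT mail._domainkey.example.com
"v=DKIM1; k=rsa; p=MIIBIjANBgkq..."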

Where we ended up

All of this generally works, but my frustration stems from the fact that it does very little to reduce spam. For years, over 80% of the spam that I receive has had valid SPF and DKIM, and I’m writing this today because yet another obvious phishing attempt was sent to me. Of course, it passed SPF and DKIM with flying colors.

Thus we end up in a world of unintended consequences. Rather than the internet as envisioned, a large group of equally participating networks, we are slowly moving to a world where only Microsoft, Google, and Yahoo can deliver email.

Netgear Mifi MR5100

I’ve been carrying this device for a few years now. I think I bought it in the midst of the pandemic. It provides internet backed by the local cellular network. I’ve used MiFis of one form or another since the mid 1990s. In the beginning, the devices were dreadful and the service matched. This device isn’t bad at all. One of its most powerful features is the ability to piggyback its network on the wifi provided by your hotel. I use this feature to connect the Apple TV I carry with me in remote places.

The problem with Apple TVs on Hotel Wifi: T&C acceptance

Most hotel wifi requires you to accept terms and conditions on a splash screen before you can use the wifi. This T&C splash screen is a once-per-device occurrence, so blindly connecting an Apple TV means you would have to accept it there, but as far as I know, the Apple TV has no way to show this screen. If you are reading this and you know that I’m wrong, email me at chris / at / vindaloo / dot / com. I’m curious to know what works for you. I have gotten my Apple TV onto a hotel’s network by spoofing the Apple TV’s MAC address from another device and accepting the T&C on its behalf, so that the Apple TV is checked in as having accepted, but that’s the only other way that I know how to do this.

Apple TVs really want a private broadcast domain

Apple TVs heavily advertise their existence on the local network or broadcast domain. This is totally fine in your house, where you control the broadcast domain, but putting an Apple TV directly onto the hotel’s network is really asking for trouble. All the hotel’s guests can see it (ask me how I know), and any hotel guest with an Apple product can attempt to use your Apple TV as a broadcast device. I know for a fact that an Apple TV that’s directly connected to any modern television via HDMI can wake up, turn on the TV, and start playing video when asked to by another Apple device. Yes, this can be prohibited by setting a password on the Apple TV, but that’s a changeable setting. Finally, on a private broadcast domain this is a really useful and powerful feature.

The MiFi solves both of these problems. You check the T&C screen once for the MiFi, and since it’s using NAT, everything behind it just works. That NAT also creates a private, local broadcast domain. Furthermore, if you join your iPhone, iPad, or MacBook to the network created by the MiFi, you can use the Apple TV just as you would at home.

Setting up the MR5100

The feature that you want from the MiFi is called Data Offloading. You get to choose whether to rebroadcast internet from local wifi, or from ethernet if you are lucky enough to have working ethernet in your hotel room these days. With data offloading you choose the network you want to consume and off you go.

Data Offloading weirdness

So I’m actually writing this to document some quirks that I’ve discovered on a two week long jaunt through western Europe.

You need to bring some sort of wireless analyzer, because if you want to rebroadcast the hotel’s wifi, the MiFi’s wifi must be on and it must be set to use a different channel from the hotel’s. Further, if you are lucky enough to have ethernet in your room, you really just want to use your MiFi’s network backhauled over ethernet.

  • Configuring the MiFi / Apple TV against hotel wifi: String an ethernet cable between the MiFi and the Apple TV. Configure data offloading to use wifi and to connect to the hotel’s network.
  • Configuring the MiFi / Apple TV against hotel ethernet: Connect the MiFi’s ethernet port to the hotel. Configure data offloading to use ethernet. Connect your Apple TV to the MiFi using your own network name and credentials.

This combination is useful in other ways. At one of the hotels during our stay, the wifi was very weak except near the hotel room door. It turned out that I could connect the MiFi to power near the door. This created a local network that I could use anywhere in the room, with more reliable, though higher latency, connectivity to the internet than I could get from the hotel alone. Note well that in this hotel, where the wifi was strong it was awesome. In the lobby I clocked a speedtest of 200Mb/s up and down. I was able to download a 9Gb virtual machine image over dinner and drinks without a problem. In our room, though, the wifi analyzer showed nearly no 5GHz network activity and a middling-to-weak 2.4GHz signal, except for a spot on a table about 2m (6ft) from the door. Placing the MiFi on this table and using data offloading to broadcast a 5GHz network created usable signal throughout the room.

In conclusion

The Netgear MR5100 with data offloading is a useful device by itself. It’s essential if you want to carry an Apple TV on a long trip to keep up with your streaming. It can also help to fix wifi problems in hotel rooms where you get nothing but blank stares from the hotel staff. And if you are lucky enough to be able to feed it local ethernet, it really shines.

Adding the watchdog timer on a Raspberry Pi

I learned about the kernel watchdog timer when I ran my network appliances on hardware from Soekris Engineering. The appliances should always be running unless one has specifically turned them off, but that’s not the reality of the world. A watchdog timer is something that has to be frobbed (I’ve also seen “petted”) periodically, or the kernel will assume that all of userland is borked. When userland is borked this way, the kernel will reset the machine, hoping that a reset will kickstart enough of the userland infrastructure to put the system back into a workable state. On Soekris, you could program the kernel watchdog to a timeout of 60s. On the Pi, it looks like the maximum is 15s.

Raspberry Pis are weird. For small jobs, the best way to run them is on SD cards. But an SD card in a Pi will die from write exhaustion after running for somewhere between one and two years. This is my experience based on using SanDisk 8G SD cards without any consideration for write exhaustion. You can mitigate this and lengthen the lifetime in many ways:

  • Purchase a 16G card where you have an 8G need.
  • Purchase a Pi with more RAM than you need and perform write-heavy tasks on a memory-backed filesystem or ramdisk (see the sketch after this list).
  • Do all of your logging via syslog and push all your logs onto a remote log server.
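
Here is a minimal sketch of the ramdisk option; the mount point and size below are placeholders, so adjust them to your needs.

# Add a line like this to /etc/fstab:
#   tmpfs  /var/ramdisk  tmpfs  defaults,noatime,size=64m  0  0
# Then create the mount point and mount it:
sudo mkdir -p /var/ramdisk
sudo mount /var/ramdisk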

On the Pi devices that I use as cameras, I write the picture and movie output to a RAM disk. I have also noticed that the older SD devices I was buying in the past were more susceptible to this problem than the devices that I buy lately. This is probably a biased observation.
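
As for the watchdog itself, here is a minimal sketch of enabling it, assuming Raspberry Pi OS with systemd; the 15s value matches the hardware maximum mentioned above.

# Enable the SoC's hardware watchdog device:
echo "dtparam=watchdog=on" | sudo tee -a /boot/config.txt

# Have systemd pet /dev/watchdog, resetting the Pi if userland stops responding:
echo "RuntimeWatchdogSec=15" | sudo tee -a /etc/systemd/system.conf

# Reboot, then confirm the watchdog device exists:
ls -l /dev/watchdog*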

Facebook account locked…

I spent my birthday on a trip to Europe. It’s a combo trip: Jay’s moving on from college to his job, and I turned 61. If you know me, you know that means we took a cruise. I like cruising. I get to meet lots of interesting people and get a taste of lots of interesting places. Ask me about the Mosque in Casablanca if you want to hear me gush. Part of the way through the cruise I checked in on my Facebook account and found it locked. I’m very leery of Facebook. Their function is to reconnect you with your friends, but the way you pay for it is by sacrificing a large amount of your privacy. For most, the tradeoff is worth the sacrifice. That includes me, but I like to hedge a little by only using Facebook from a private/incognito mode browser. I only ever consume www.facebook.com from a browser, never through an app on a device. This keeps their tracking information to a minimum. I had assumed that this behavior poisoned the relationship to the point where Facebook could live without me. That may be true, but I don’t think that they would be so overt. The saying goes: never blame malice for an action that’s adequately explained by incompetence. I tried the process for unlocking the account, unsuccessfully, for three days. The process includes:

  • Upload a picture of your driver’s license that’s framed just so: at least 1500×1000 pixels, on a dark background.
  • Appeal to the Facebook security people for a code, write that code on a piece of paper by hand and make a video of yourself holding the paper. Make sure to move your head and the paper.

This morning I got up very early, because I’m still a little jet lagged, and decided to do what all good computer scientists do: I looked at the logs. Sometimes the emails from Facebook would reach me, but most of the time I just got what I thought was very frustrating radio silence. It turns out that, like Microsoft in January, Facebook has found their way onto a DNS-based realtime blocklist, or DNSRBL. I happen to use that blocklist in my email server, so Facebook’s emails were getting dropped on the floor. This is probably the root cause of the problem.

Each time you log into Facebook, it tries to put a piece of information in your browser, app, or device that says: “Facebook, you can trust this because it’s really Chris”. If you do this in an incognito mode browser, that token gets deleted when you close the browser window or tab. Thus, people like me don’t have a place where Facebook can record that it’s really me. Lacking that, they assume the worst. If I keep logging in from someplace near my house, it’s all good. But if I’m on a cruise ship in the harbor of Casablanca, that could be a hacking attempt. I’ll write a different post about 2FA and how it applies here later. When they assume the worst, they send you an email saying: “Hey, someone logged into your Facebook account using your password and your 2FA token, but they are in Morocco. Was this really you?” Now, if you receive that email and respond “yes, I’m on vacation,” the gears keep turning. But if that email gets dropped on the floor, you know the rest of the story.

So what can one do to fix this? I still want to hedge my bets, but Facebook has become a little too sensitive to the stream of brand new logins that they saw from me each time I fired up a new private tab and logged in. If you’re like me, you’ll still only consume www.facebook.com from a browser tab, but the next best thing to private mode is a separate profile. Profiles are supported in both Chrome and Safari. A profile is essentially a domain under which browser information, cookies, and the like are stored. In Chrome, each profile is a separate space: tracking information that you generate by browsing in one profile won’t cross into another one. I’m not happy to recommend Chrome, but in this case it gets the job done. I will note that this only works for me under Chrome, where I created a separate “social media” profile for Twitter / X and Facebook. Facebook just goes into a login loop when I try the same thing from a profile in Safari.

MacOS disk repairs

If anybody ever says that Apple is a lot better than Microsoft, they need to pay attention to the fact that both companies are guilty of the same problems. In this case I’m talking about both companies’ habit of letting long-standing regressions in their software languish, unaddressed, for long periods of time. Apple’s sin in this case is with Disk Utility. Apple has allowed a bug in macOS’s Disk Utility to go on, unaddressed, since about OS X 10.13, when they changed the structure of Time Machine backups to force an encrypted drive. I’ll admit that I’m not being completely fair. I’m running an older version of macOS on my laptop, so this bug may indeed be fixed by now, but it still sat in the software for a good 3 years.

In trying to make it easier to use an encrypted volume for backups, Apple has added a few steps to the process of checking these volumes for structural errors. The result is that the graphical Disk Utility frequently reports false positives, saying that your volume has a problem. The real issue is that Disk Utility hasn’t properly set things up for the volume check to happen. Back in the olden days, UNIX wouldn’t let you use a volume with structural problems because you couldn’t mount it with write allowed. Today you can mount broken volumes in write mode and cross your fingers that you’re not compounding an existing problem. Side note: here’s where I admit to being really, really old, because 99% of the time it’s actually okay. The upshot is that Disk Utility can’t properly check out your Time Machine volumes from a normal boot. To check one out, you need to take the time to boot your machine into recovery mode, where all of this shiny that makes users happy is disabled. In recovery mode, Disk Utility just works. Compounding the problem, when Apple does the check from a normal boot, it doesn’t detect its own bug and instead declares that your volume is dangerously corrupted and unreliable, so your best bet is to start from scratch.

This article shows how you can at least get some peace of mind by checking the state of the volume and repairing it from the command line in a terminal window. I would have liked to see a screenshot of the command-line session, but the author decided that figuring out which disk you need to check is too difficult and didn’t include one. That’s the responsible choice, since you are going to be running a lot of potentially destructive commands with sudo. I worked my way through the process on my own third Time Machine volume. I have this issue because this volume is connected to my docking station. It auto-mounts when I use my machine at my desk so I can have a full-sized monitor, and it’s easy to forget that the volume needs to be ejected cleanly and quiesced before I disconnect from the docking station. I’m cultivating the habit of ejecting this volume when my backup has completed.
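
For the record, here is a rough sketch of what that kind of command-line check looks like using Apple’s diskutil; the disk identifier below is a placeholder, so confirm the right one with diskutil list before running anything with sudo.

# Find the Time Machine volume's identifier; disk4s2 below is just a placeholder.
diskutil list

# Read-only structural check of the volume:
sudo diskutil verifyVolume disk4s2

# If the check reports problems, attempt a repair:
sudo diskutil repairVolume disk4s2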

Git mirroring

Not many people choose to run their own GitLab instance these days. My preference for self-reliance means that I do. If you value self-reliance, I have some recommendations:

  • Use ansible, chef, or puppet to build your GitLab instance, because you are going to build two.
  • Build one GitLab server for your group’s consumption. Put this one in a data center close to your users for good performance.
  • Build a second GitLab server in a remote location, perhaps at your favorite cloud provider. Wherever the second instance lives, you’ll want either one-way or bidirectional access via https or ssh between the two servers.
  • Follow the directions in your GitLab instance under Help -> User Documentation -> Mirror a Repository to mirror each repository from the primary to the secondary.

At this point, you’ve created a great plan B for disaster recovery in case something terrible happens to your GitLab server. For me, GitLab is storing the Terraform and Ansible that I use to build my infrastructure, and the goal is to be able to jumpstart whatever I need from the mirror. I call the mirror my plan B because my plan A is to directly restore GitLab from a nightly backup.

Setting up the mirror

Setting up the mirror is well documented. In broad strokes, here are the steps:

  • On the mirror, create a group and project to hold the mirrored repository.
  • Choose Push or Pull mirroring. In Push, the primary will push updates to the secondary as you work. In Pull, the mirror will periodically poll the primary for changes. You’ll have to decide what works best for you.
  • Fill out the form and perform any needed setup. When using push over ssh, this means setting up the primary to push and then copying the ssh public key from the primary and adding it as an allowed key on the mirror user.

As you configure mirroring, remember that constructing the mirror URL can be tricky, especially if you want to use ssh as the transport. A typical git cloning string looks like this: git@git.example.com:group/project.git, but the mirroring URL for the same repository is: ssh://git@git.example.com/group/project.git. The difference lies in the separator between the server, git.example.com, and the path. When cloning, the separator is a colon, ‘:’. When mirroring, it’s a slash, ‘/’. Authentication can also trip you up. Mirroring more than a few repositories over ssh gets awkward because GitLab generates a new ssh key for each mirrored repository, and my concern is that the proliferation of keys may eventually cause git to fail with a “too many authentication attempts” error. This is one of the few places where I like git+https more than git+ssh, because with git+https you can create a single user token for all of your mirroring. Finally, git+https is not without its pitfalls. If, like me, you also have your own CA, then you have the additional problem that git doesn’t do a good job configuring curl’s CA. You have two choices here. On the box initiating the transfer, run git config --global http.sslCAPath my-ca-path to use your CA directory, or git config --global http.sslCAInfo my-ca-file.pem to configure your CA file.
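
To put the URL shapes side by side, here is a sketch; the host, group, project, and token below are all placeholders.

# Normal clone string: a colon between host and path.
git clone git@git.example.com:group/project.git

# The same repository as a mirror URL over ssh: a scheme prefix and a slash instead of the colon.
ssh://git@git.example.com/group/project.git

# A mirror URL over https using one token for all mirroring (the token value is made up).
https://mirror-user:glpat-XXXXXXXXXXXXXXXXXXXX@git.example.com/group/project.git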

Once you’ve set up mirroring, you have a great plan B if the day ever comes that your GitLab server becomes unavailable. You’ll have a working mirror that you can use in any way that you please. You can even pull backups of the mirror server so you have a redundant, offsite backup.

When an ansible task fails

It’s been a frustrating week. If it can break, it has broken, and lately I’ve been shining up my ansible to fix it. So I find myself trying to use my shiny new playbooks to address problems and to get my machines to all line up. Today my ansible-playbook ... run hung up on an ARM-based mini-NAS that I have in my vacation house. My first assumption was that ansible was the problem. That was wrong. To find the real problem, I ran the playbook and then logged onto the machine separately. A quick ps alx gave me this little snippet:

1001 43918 43917  2  52  0  12832  2076 pause    Is    1       0:00.03 -ksh (ksh)
   0 43943 43918  3  24  0  18200  6916 select   I     1       0:00.04 sudo su -
   0 43946 43943  2  26  0  13516  2776 wait     I     1       0:00.02 su -
   0 43947 43946  2  20  0  12832  2024 pause    S     1       0:00.03 -su (ksh)
   0 51594 43947  3  20  0  13464  2572 -        R+    1       0:00.01 ps alx
   0 51578 51527  2  52  0  12832  1980 pause    Is+   0       0:00.01 ksh -c /bin/sh -c '/usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/Ansib
   0 51579 51578  3  52  0  13536  2552 wait     I+    0       0:00.01 /bin/sh -c /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkg
   0 51580 51579  3  40  0  36756 23668 select   I+    0       0:01.51 /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkgng.py
   0 51582 51580  0  52  0  21388  9048 wait     I+    0       0:00.04 /usr/sbin/pkg update
   0 51583 51582  1  52  0  21708 10104 ttyin    I+    0       0:00.19 /usr/sbin/pkg update

This is relevant because it traces the process tree from my ssh login all the way down to the process that’s hung up. Note well that the pkg update run at PID 51583 is in a ttyin state, waiting for terminal input. Running pkg update manually gave me this:

# pkg update
Updating FreeBSD repository catalogue...
Fetching packagesite.pkg: 100%    6 MiB   3.3MB/s    00:02
Processing entries:   0%
Newer FreeBSD version for package zziplib:
To ignore this error set IGNORE_OSVERSION=yes
- package: 1302001
- running kernel: 1301000
Ignore the mismatch and continue? [y/N]: 

The why of all this doesn’t really matter much. In this case the machine is running a stale copy of FreeBSD, 13.1, and pkg is asking my permission to update to a package repository built for FreeBSD 13.2. What’s important here is a basic debugging technique, and the important question is: how does ansible actually work under the covers? The answer is that each ansible builtin prepares a 100k or so blob of python that it spits into …/.ansible/tmp on the remote machine, then uses that machine’s python interpreter to run the blob. The python within the blob idempotently does the work. My blob needed to verify that the sudo package was installed on my box. For reasons that I don’t understand but also really don’t mind, it wanted to make sure that the local package collection was up to date first. It’s not normal for a box to hang on pkg update, but it’s not crazy either.
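
Once I knew where the hang was, unsticking it was straightforward. Here is a sketch of the quick fix, using the knob that pkg’s own prompt suggests; you could presumably also pass the same variable to the task via ansible’s environment keyword.

# Run on the box itself; accepts the newer package repository despite the stale kernel.
env IGNORE_OSVERSION=yes pkg update

# The longer-term fix, of course, is upgrading the stale 13.1 box to 13.2.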

On FreeBSD, git can’t find the certificate store

When I was playing with git checkouts of modules, I discovered that git doesn’t know how to set the certificate store for curl when it tries to retrieve a module via https. In general, I don’t recommend using git with https unless you have to; using git+ssh obviates a bucket of authentication issues. In this case, though, https was the better choice. To tell git where to look for certificates to verify an https website, I had to add the following to my ~/.gitconfig:

[http]
	sslCAPath = /etc/ssl/certs

The command that does this is: git config --global http.sslCAPath "/etc/ssl/certs". If your operating system uses a CA file rather than a CA directory, this is the setting: git config --global http.sslCAInfo "/etc/ssl/cert.pem". You can also make this work by setting an environment variable for curl in /etc/profile.
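
The environment-variable route looks roughly like this. GIT_SSL_CAPATH and GIT_SSL_CAINFO are git’s own overrides for the certificate store, and the paths are the same ones used above.

# Add to /etc/profile (or your shell profile) to point git at the system CA store:
export GIT_SSL_CAPATH=/etc/ssl/certs
# ...or, on systems that ship a single bundle file instead of a directory:
# export GIT_SSL_CAINFO=/etc/ssl/cert.pem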

Mirroring in GitLab

I normally strongly prefer git+ssh over git+https. But if you are mirroring between two gitlab-ce instances, git+https lets you handle the mirroring with a single authentication token.

Pip + git for development

I’m working on what should be a simple Raspberry Pi display project, and I came up with the need for a set of ad-hoc python modules that were installable from my GitLab server. It was a bit of a journey. Here are the broad steps:

  • Create a GitLab project for your python module. Since you will probably have a few of these, it might be good to make a group for them right now.
  • I think that you can use git+ssh://git@gitlabs.example.com... for this, but I chose to use a GitLab impersonation token instead, since ssh isn’t installed everywhere and sometimes the installation needs a bunch of hints in ~/.ssh/config.
  • A standard install is done with pip as follows: pip install git+https://{user}:{password}@git.example.com/example-group/example-project.git. If you created a GitLab impersonation token above, you can substitute it for the password here.

Sometimes I need to edit the installed package that I’m working on. The way to do this is to use the --editable flag to pip, and to do that you need to specify some extra information to git when checking out the project. This is the command line that I found works:

pip install --editable git+https://{user}:{token}@gitlabs.example.com/example-group/example-project.git#egg={module_name}

I think that the #egg={module_name} piece provides pip with the name of the module as installed. I found the documentation that explains this here: “https://pip.pypa.io/en/stable/topics/vcs-support/”. Assuming that you are doing this in a venv, and it doesn’t make sense not to, you’ll get a new directory called venv/src/{module_name} which holds a git checkout of your module, so you can edit it to suit this particular project’s needs.
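
Putting it together, a hypothetical end-to-end session looks something like this; the user, group, project, module name, and token are all placeholders.

# Work inside a venv; pip puts the editable checkout under venv/src/.
python3 -m venv venv
. venv/bin/activate
pip install --editable "git+https://deploy-user:TOKEN@gitlabs.example.com/example-group/example-project.git#egg=example_module"

# Edit the checked-out module in place; the project picks up the changes on the next run.
ls venv/src/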