Facebook account locked…

I spent my birthday on a trip to Europe. It was a combo trip: Jay’s moving on from college to his job, and I turned 61. If you know me, you know that means we took a cruise. I like cruising. I get to meet lots of interesting people and get a taste of lots of interesting places. Ask me about the Mosque in Casablanca if you want to hear me gush.

Part of the way through the cruise I checked into my Facebook account. I’m very leery of Facebook. Their function is to reconnect you with your friends, but the way you pay for it is by sacrificing a large amount of your privacy. For most, the tradeoff is worth the sacrifice. That includes me, but I like to hedge a little by only using Facebook from a private/incognito mode browser. I never consume www.facebook.com through an app or a dedicated device. This keeps their tracking information to a minimum. I had assumed that this behavior poisoned the relationship to the point where Facebook could live without me. That may be true, but I don’t think that they would be so overt. The saying goes: never blame malice for an action that’s adequately explained by incompetence. I tried using the process for unlocking the account, unsuccessfully, for three days. The process includes:

  • Upload a picture of your driver’s license that’s framed just so: at least 1500×1000 pixels, on a dark background,
  • Appeal to the Facebook security people for a code, write that code on a piece of paper by hand, and make a video of yourself holding the paper. Make sure to move your head and the paper.

This morning I got up, very early because I’m still a little jet lagged, and decided to do what all good computer scientists do: I looked at the logs. Sometimes the emails from Facebook would reach me, but most of the time I just got what I thought was very frustrating radio silence. It turns out that, like Microsoft in January, Facebook has found their way onto a DNS-based realtime blocklist (DNSRBL) for email. And I happen to use that blocklist in my email server, so Facebook’s emails were getting dropped on the floor. This is probably the root cause of the problem.

Each time you log into Facebook, it tries to put a piece of information in your browser, app, or device that says: “Facebook, you can trust this because it’s really Chris”. If you do this in an incognito mode browser, that token gets deleted when you close the browser window or tab. Thus, people like me don’t give Facebook a place to confirm that it’s really me. Lacking that, they assume the worst. If I keep logging in from someplace near my house, it’s all good. But if I’m on a cruise ship in the harbor of Casablanca, that could be a hacking attempt. (I’ll write a different post about 2FA and how it applies here later.) When they assume the worst, they send you an email saying: “Hey, someone logged into your Facebook account using your password and your 2FA token, but they are in Morocco. Was this really you?” Now, if you receive that email and respond “yes, I’m on vacation”, the gears keep turning. But if that email gets dropped on the floor, you know the rest of the story.
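
If you suspect the same thing is happening to you, the check is quick. This is a sketch that assumes a syslog-style maillog, that the notifications come from facebookmail.com, and that your server uses the Spamhaus ZEN list; substitute your own log path and whatever blocklist zones your server actually queries:

# look for Facebook notification mail being refused
$ sudo grep -i facebookmail.com /var/log/maillog | grep -iE 'reject|blocked'
# check a sending IP against an RBL: reverse the octets and query the zone;
# any answer in 127.0.0.0/8 means the IP is listed
$ dig +short 1.1.240.157.zen.spamhaus.org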

So what can one do to fix this? I still want to hedge my bets, but Facebook has become a little too sensitive to the stream of brand-new logins that they saw from me each time I fired up a new private tab and logged in. If you’re like me, you’ll still only consume www.facebook.com from a browser tab, but the next best thing to private mode is a separate profile. Profiles are supported in both Chrome and Safari. A profile is essentially a domain under which browser information, cookies, etc., are stored. In Chrome, each profile is a separate space: tracking information that you generate by browsing in one profile won’t cross into another one. I’m not happy to recommend Chrome, but in this case it gets the job done. I implemented this by creating a separate “social media” profile for Twitter / X and Facebook. I will note that this only works under Chrome; Facebook just goes into a login loop when I try to do it from a profile in Safari.
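
As an aside, you can jump straight into the dedicated profile from the command line on macOS. A sketch; the directory name “Profile 2” is whatever Chrome assigned when you created the profile, so check yours first:

# open Facebook in the separate "social media" Chrome profile
$ open -na "Google Chrome" --args --profile-directory="Profile 2" https://www.facebook.com/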

MacOS disk repairs

If anybody ever says that Apple is a lot better than Microsoft, one thing that they need to pay attention to is the fact that both companies are guilty of the same problems. In this case I’m talking about both companies’ habit of letting long-standing regressions in their software languish, unaddressed, for long periods of time. Apple’s sin in this case is with Disk Utility. Apple has allowed a bug in macOS’s Disk Utility to go on, unaddressed, since about OS X 10.13 or so, when they changed the structure of Time Machine backups to force an encrypted drive. I’ll admit that I’m not being completely fair: I’m running an older version of macOS on my laptop, so this bug may indeed be fixed by now, but it still stayed in the software for a good 3 years.

By trying to make it easier to use an encrypted volume for backups, Apple has added a few steps to the process of checking these volumes for structural errors. This means the graphical Disk Utility frequently false-positives, saying that your volume has a problem. The real issue is that Disk Utility hasn’t properly set things up for the volume check to happen. Back in the olden days, UNIX wouldn’t let you use a volume with structural problems because you couldn’t mount it with write allowed. Today you can mount broken drives in write mode; then you get to cross your fingers that you’re not compounding an existing problem. Side note: here’s where I admit to being really, really old, because 99% of the time it’s actually okay.

The result is that Disk Utility can’t properly check out your Time Machine volumes. To check one out you need to take the time to boot your machine into recovery mode, where all of the shiny that makes users happy is disabled. In recovery mode, Disk Utility just works. Compounding this problem, when Apple does the check from a normal boot, it doesn’t detect its own bug and declares that your volume is dangerously corrupted and unreliable, so your best bet is to start from scratch. This article shows how you can at least get some peace of mind by checking the state of the volume and repairing it from the command line in a terminal window. I would’ve liked to have seen a screenshot of the command line session, but the author decided that figuring out which disk you need to check is too difficult and didn’t include one. That’s the responsible choice, since you are going to be running a lot of potentially destructive commands with sudo.

I worked my way through the process on my own third Time Machine volume. I have this issue because this volume is connected to my docking station. It auto-mounts when I use my machine on my desktop so I can have a full-sized monitor, and it’s easy to forget that the volume needs to be ejected cleanly and quiesced before I disconnect from the docking station. I’m cultivating the habit of ejecting this volume when my backup has completed.
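
For the flavor of it, here’s roughly what that terminal session looks like. This is a sketch, not the article’s exact transcript: the device identifier disk4s1 is a placeholder (find yours with diskutil list first), and it assumes an APFS backup volume; for an HFS+ volume the analogous checker is fsck_hfs.

$ diskutil list
# unmount the volume so the checker can have it to itself
$ sudo diskutil unmount /dev/disk4s1
# check only, make no repairs (-n)
$ sudo fsck_apfs -n /dev/disk4s1
# if problems turn up, attempt the repair
$ sudo diskutil repairVolume /dev/disk4s1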

Git mirroring

Not many people choose to run their own GitLab instance these days. My preference for self-reliance means that I do. If you value self-reliance, I have recommendations:

  • Use Ansible, Chef, or Puppet to build your GitLab instance, because you are going to build two.
  • Build one GitLab server for your group’s consumption. Put this one in a data center close to your users for good performance.
  • Build a second GitLab server in a remote location, perhaps at your favorite cloud provider. Wherever the second GitLab instance is, you’ll want either one-way or bidirectional access via https or ssh between the two servers.
  • Follow the directions in your GitLab’s Help -> User Documentation -> Mirror a Repository to mirror the repository from the primary to the secondary.

At this point, you’ve created a great plan B for disaster recovery in case something terrible happens to your GitLab. For me, GitLab is storing the Terraform and Ansible that I use to build my infrastructure. The goal is to be able to jumpstart that infrastructure from the mirror. I call the mirror my plan B because my plan A is to directly restore GitLab from a nightly backup.

Setting up the mirror

Setting up the mirror is well documented. In broad strokes, here are the steps:

  • On the mirror, create a group and project to hold the mirrored repository.
  • Choose Push or Pull mirroring. In Push, the primary will push updates to the secondary as you work. In Pull, the mirror will periodically poll the primary for changes. You’ll have to decide what works best for you.
  • Fill out the form and perform any needed setup. When using push over ssh, this means setting up the primary to push and then copying the ssh public key from the primary and adding it as an allowed key on the mirror user.

As you configure mirroring, remember that constructing the mirror URL can be tricky, especially if you want to use ssh as the transport. A typical git cloning string looks like this: git@git.example.com:group/project.git, but the mirroring URL for the same repository is: ssh://git@git.example.com/group/project.git. The difference lies between the server, git.example.com, and the path: when cloning, the separator is a colon, ‘:’; when mirroring, it’s a slash, ‘/’ (see the side-by-side below). Getting authentication right can also be tricky. Mirroring more than a few repositories over SSH becomes awkward because GitLab generates a new ssh key for each repository, and my concern is that the proliferation of keys may eventually cause git to fail with a “too many authentication attempts” error. This is one of the few places where I like git+https more than git+ssh, since with git+https you can create a single user token for all of your mirror operations. That said, git+https is not without its pitfalls. If, like me, you also have your own CA, then you have the additional problem that git doesn’t do a good job configuring curl’s CA. You have two choices here. On the box initiating the transfer, run git config --global http.sslCAPath my-ca-path to use your CA dir, or git config --global http.sslCAInfo my-ca-file.pem to configure your CA file.
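
Side by side, the two forms look like this, using the same placeholder server, group, and project as above:

# scp-style string accepted by git clone:
git@git.example.com:group/project.git
# explicit ssh:// URL that the mirroring form expects:
ssh://git@git.example.com/group/project.git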

Once you’ve set up mirroring, you have a great plan B if the day ever comes that your GitLab server becomes unavailable. You should have a working GitLab mirror that you can use in any way that you please. You can even pull backups of the mirror server so you have a redundant, offsite backup.

When an ansible task fails

It’s been a frustrating week. If it can break, it has broken, and lately I’ve been shining up my Ansible to fix it. So I find myself trying to use my shiny new playbooks to address problems and to get my machines all lined up. Today my ansible-playbook ... run hung on an ARM-based mini NAS that I have in my vacation house. My first assumption was that Ansible was the problem. That was wrong. To find the problem, I ran the playbook and then logged onto the machine separately. A quick ps alx gave me this little snippet:

1001 43918 43917  2  52  0  12832  2076 pause    Is    1       0:00.03 -ksh (ksh)
   0 43943 43918  3  24  0  18200  6916 select   I     1       0:00.04 sudo su -
   0 43946 43943  2  26  0  13516  2776 wait     I     1       0:00.02 su -
   0 43947 43946  2  20  0  12832  2024 pause    S     1       0:00.03 -su (ksh)
   0 51594 43947  3  20  0  13464  2572 -        R+    1       0:00.01 ps alx
   0 51578 51527  2  52  0  12832  1980 pause    Is+   0       0:00.01 ksh -c /bin/sh -c '/usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/Ansib
   0 51579 51578  3  52  0  13536  2552 wait     I+    0       0:00.01 /bin/sh -c /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkg
   0 51580 51579  3  40  0  36756 23668 select   I+    0       0:01.51 /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-1694615369.904476-9336-34642038817669/AnsiballZ_pkgng.py
   0 51582 51580  0  52  0  21388  9048 wait     I+    0       0:00.04 /usr/sbin/pkg update
   0 51583 51582  1  52  0  21708 10104 ttyin    I+    0       0:00.19 /usr/sbin/pkg update

This is relevant because it traces the process tree from my ssh login all the way down to the process that’s hung up. Note well that the pkg update run at PID 51583 is in a ttyin state: it’s waiting for input on a terminal that Ansible never supplies. Running pkg update manually gave me this:

# pkg update
Updating FreeBSD repository catalogue...
Fetching packagesite.pkg: 100%    6 MiB   3.3MB/s    00:02
Processing entries:   0%
Newer FreeBSD version for package zziplib:
To ignore this error set IGNORE_OSVERSION=yes
- package: 1302001
- running kernel: 1301000
Ignore the mismatch and continue? [y/N]: 

The why of all this doesn’t really matter much. In this case the machine is running a copy of FreeBSD that’s stale, 13.1, and pkgng is asking my permission to update to a package repository built for FreeBSD 13.2. What’s important here is a basic debugging technique. The important question is: how does Ansible actually work under the covers? The answer is that each Ansible builtin prepares a 100k or so blob of Python that it drops in …/.ansible/tmp on the remote machine. Then it uses the local Python interpreter to run that blob. The Python within the blob idempotently does the work. My blob needed to verify that the sudo package was installed on my box. For reasons that I don’t understand, but also really don’t mind, it wanted to make sure that the local package collection was up to date first. It’s not normal for a box to hang on pkg update, but it’s not crazy either.
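
If you ever need to poke at those blobs yourself, Ansible can be told to leave them behind. A minimal sketch, assuming a playbook named site.yml and an inventory name of mini-nas; the explode subcommand unpacks the blob so you can read the module source:

# keep the generated AnsiballZ payloads on the remote host instead of deleting them
$ ANSIBLE_KEEP_REMOTE_FILES=1 ansible-playbook site.yml --limit mini-nas
# then, on the remote machine, unpack the payload that hung
$ /usr/local/bin/python3.9 /root/.ansible/tmp/ansible-tmp-*/AnsiballZ_pkgng.py explode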

On FreeBSD, git can’t find the certificate store

When I was playing with git checkouts of modules, I discovered that git doesn’t know how to set the certificate store for curl when it tries to retrieve a module via https. In general, I don’t recommend using git with https unless you have to; using git+ssh does away with a bucket of authentication issues. In this case, though, https was the better choice. To tell git where to look for the certificates it uses to verify an https website, I had to add the following to my ~/.gitconfig:

[http]
sslCApath=/etc/ssl/certs

The command that does this is: git config --global http.sslCAPath "/etc/ssl/certs". If your operating system uses a CA file rather than a CA directory, the setting is: git config --global http.sslCAInfo "/etc/ssl/cert.pem". You can also make this work by setting an environment variable for curl in /etc/profile, as sketched below.
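
For the environment variable route, these are git’s own variables (it passes the values through to curl). A sketch for /etc/profile; treat the paths as placeholders for wherever your OS keeps its CA store:

# point git at a CA directory...
export GIT_SSL_CAPATH=/etc/ssl/certs
# ...or at a single CA bundle file
export GIT_SSL_CAINFO=/etc/ssl/cert.pem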

Mirroring in GitLab

I normally strongly prefer git+ssh over git+https. However, if you are mirroring between two gitlab-ce instances, git+https lets you handle all of your mirroring with a single authentication token.

Pip + git for development

I’m working on what should be a simple Raspberry Pi display project, and I came up with the need for a set of ad-hoc Python modules that were installable from my GitLab server. It was a bit of a journey. Here are the broad steps:

  • Create a GitLab project for your Python module. Since you will probably have a few of these, it might be good to make a group for them right now.
  • I think that you can use git+ssh://git@gitlabs.example.com... for this, but I chose to use a GitLab impersonation token, since ssh isn’t installed everywhere and sometimes the installation needs a bunch of hints in ~/.ssh/config.
  • A standard install is done with pip as follows: pip install git+https://{user}:{password}@git.example.com/example-group/example-project.git. If you created a GitLab impersonation token above, you can substitute it for the password here.

Sometimes I need to edit the installed package that I’m working on. The way to do this is to use the --editable flag to pip, and to do that you need to give git some extra information when checking out the project. I found that this command line works:

pip install --editable git+https://{user}:{token}@gitlabs.example.com/example-group/example-project.git#egg={module_name}

I think that the #egg={module_name} piece provides pip with the name of the module as installed. I found the documentation that explains this here: “https://pip.pypa.io/en/stable/topics/vcs-support/”. Assuming that you are doing this in a venv, and it doesn’t make sense not to, you’ll get a new directory called venv/src/{module_name} which holds a git checkout of your module, so you can edit it to your needs for this particular project.
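
Putting the whole flow together, it looks roughly like this. The group, project, and module names are placeholders, and the exact venv/src directory name comes from pip normalizing the #egg name, so yours may differ slightly:

$ python3 -m venv venv
$ . venv/bin/activate
$ pip install --editable "git+https://{user}:{token}@gitlabs.example.com/example-group/example-project.git#egg=example_module"
# the editable checkout lands here; edit, commit, and push from it as usual
$ ls venv/src/example-module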

Nuke and Pave

I recently reinstalled macOS on my work and home laptops and then brought back my working state using Time Machine on both. I’m always impressed by how much faster and better a computer is after you do this. My friend Matt Zagaja (https://zagaja.com) calls this a “Nuke and Pave” (from here: https://www.macsparky.com/blog/2016/3/t0kcqkdxmkapwyo9eno0hv98ojd2kx) and I love the term. In my opinion, one of the bad side effects of macOS’s success is that you don’t have to *Nuke and Pave* very often. I think I’d been carrying my working environment forward for more than 10 years without a refresh, and moving from High Sierra to Catalina added a bunch of unwanted quirkiness. This was probably because Apple has been deprecating a bunch of the tools that I used in 2012; while I don’t use them today, they were still installing kernel extensions and other stuff that was making my machine a little unstable. If you want to do your own *Nuke and Pave* on a Mac, you’ll need the following:

  1. The operating system you want to install. I used Big Sur 11.6. I find that for macOS you want to download the OS and then use instructions like these: https://support.apple.com/en-us/HT201372 to create USB install media.
  2. If you use MacPorts, see the notes at the end to save a list of the ports that you run. You’ll need it when you rebuild.
  3. Backup media: if it’s important, you should have one or two backups of it. In this case you want a Time Machine backup. Disk-clone style backups would normally be quicker, but they don’t give you the granularity you need here. I use a USB-C to NVMe drive enclosure for speed. My second backup is on rotating rust.

The operation is pretty simple. You want to:

  1. Boot your Mac from the USB installer by shutting down completely, then booting and holding *Option* until your Mac presents you with a choice of boot media. It’s handy that newer Macs will boot on a keypress, so you can start this process by simply pressing and holding *Option*. If you are on Catalina or later, you first have to boot into _recovery mode_ by shutting down your Mac completely and using the utilities menu to enable booting from other media. If you have a firmware password on your Mac, you’ll need it to change this setting.
  2. Once you’ve booted from your install media, you need to erase and repartition the hard drive on your Mac. This is the point of no return so don’t take this step unless you trust your backups.
  3. Follow the install media instructions to reinstall macOS on your computer. It will pause and ask you how to build users. What’s going on behind the scenes is that the Mac is using Migration Assistant to populate your home directory. Choose your Time Machine backup, then go into the menus and trim away Applications, Settings, etc. You really only want to carry over data at this point. If you don’t migrate enough information, you can use Migration Assistant or Time Machine afterwards to catch anything that you missed.
  4. Reinstall your apps using the App Store and whatever other sources you have. As a developer I have a bunch of software installed that requires me to Control-click on the application and then give it permission to run, one time.
  5. Restore security permissions as needed. App Store packages generally won’t have this problem; other packages will. I use Emacs as my main editor because I’ve been doing this for a while. That requires me to go into the System Preferences -> Security & Privacy -> Privacy pane and grant Emacs permission to read files from my specified locations.

That’s most of what you need. I did the operation overnight: I handled steps 1 through 3 and then went to sleep. When I woke up, I finished up 4 and 5.

A side note here for MacPorts or Homebrew users: you’ll want to restore your MacPorts/Homebrew environment too. For MacPorts this isn’t hard. Run sudo port list requested > ~/Desktop/ports-requested.txt. This will leave a copy of the ports you installed by hand in a text file. When you are rebuilding your machine, you’ll need to perform the prerequisites needed to run MacPorts; then you can use this output to install the packages that you used. I don’t use Homebrew, but I imagine that there must be something similar; see the sketch below.
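
For Homebrew, my understanding is that the bundle subcommand plays the same role; a sketch, since I don’t use Homebrew myself:

# dump installed formulae and casks into a Brewfile before the nuke
$ brew bundle dump --file=~/Desktop/Brewfile
# after the pave, reinstall everything listed in it
$ brew bundle install --file=~/Desktop/Brewfile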

Git Sparse Checkout

At work we had a very large monorepo. I’m tempted to quote Douglas Adams here, but “very large” gets the point across. Checking out the whole thing runs the risk of confusing git status messages as a result of changes in other parts of the tree. These messages are a distraction, and dealing with them can consume large amounts of time. The best way to avoid them is to perform what’s called a sparse checkout: a checkout that only puts what you need into your working directory. In a normal checkout:

 $ git clone ...

You get the entire code base in your working directory. A sparse checkout is more complicated to perform:

 $ mkdir _target directory_
 $ cd _target directory_
 $ git init .
 $ git config core.sparsecheckout true
 $ echo "_your desired subdir_" >> .git/info/sparse-checkout
 $ ## Repeat the echo for each directory you need.
 $ git remote add origin https://git.neopost.com/PPT/IBMHSM.git
 $ git fetch
 $ git checkout master

It’s eight steps, but if you do it this way, you gain complete control over what’s in your working directory.
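
For what it’s worth, git 2.25 and later wrap the same idea in a sparse-checkout porcelain command that collapses most of those steps. A sketch, reusing the repository URL from above:

 $ git clone --no-checkout https://git.neopost.com/PPT/IBMHSM.git
 $ cd IBMHSM
 $ git sparse-checkout set _your desired subdir_
 $ git checkout master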

OpenBSD’s ksh adds configurable tab completion

I saw a configuration for bash tab completion a few years ago, and I’ve always wanted the same for the Korn shell. I use either the “true” ksh from AT&T, via David Korn, or one of the variants that sprang out of the pdksh project. OpenBSD’s ksh is a descendant of pdksh. In a recent release of OpenBSD, someone patched it to kludge configurable tab completion via shell arrays. The article is here: https://www.vincentdelft.be/post/post_20210102
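
The arrays look like this; the kill example is lifted straight from OpenBSD’s ksh(1) man page, and the ssh line is my own guess at how you’d extend the idea:

# complete the first argument of kill(1) with signal names
set -A complete_kill_1 -- -9 -HUP -INFO -KILL -TERM
# complete any argument of ssh with hosts you visit often
set -A complete_ssh -- jump.example.com nas.example.com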

This ksh is shells/oksh in FreeBSD.