It seems that I’m not alone: https://www.ntpsec.org/white-papers/stratum-1-microserver-howto/
Python argument parsing
Everyone should be using Python’s built-in argparse module. This is a good example: https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/list_datastore_info.py
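The pattern in that sample boils down to something small. This is a sketch in the same spirit, not the sample’s actual flags; the option names here are hypothetical stand-ins:

```python
import argparse

def build_parser():
    """Build a parser in the style of the pyVmomi community samples."""
    parser = argparse.ArgumentParser(
        description='List datastore info from a vSphere service.')
    parser.add_argument('-s', '--host', required=True,
                        help='vSphere service to connect to')
    parser.add_argument('-o', '--port', type=int, default=443,
                        help='Port to connect on (default: 443)')
    parser.add_argument('-j', '--json', action='store_true',
                        help='Emit the output as JSON')
    return parser

# Normally you call parse_args() with no arguments so it reads sys.argv;
# an explicit list is passed here so the example is self-contained.
args = build_parser().parse_args(['--host', 'vc.example.com', '--json'])
print(args.host, args.port, args.json)
```

You get `--help` output, type conversion, and required-argument checking for free, which is most of the argument-handling code people write by hand.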
Flask + SQLAlchemy instead of FlaskSQLAlchemy
https://towardsdatascience.com/use-flask-and-sqlalchemy-not-flask-sqlalchemy-5a64fafe22a4
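The gist of the linked article is that plain SQLAlchemy’s scoped_session already gives you the per-request session Flask-SQLAlchemy provides, without coupling your models to a Flask extension. A minimal sketch of the pattern, using a hypothetical User model and an in-memory SQLite database (requires the sqlalchemy package):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine('sqlite:///:memory:')
# scoped_session hands each thread (e.g. each Flask request) its own session.
Session = scoped_session(sessionmaker(bind=engine))

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

Base.metadata.create_all(engine)

# In a Flask app you would call Session.remove() from an
# @app.teardown_appcontext handler; here we just use the session directly.
session = Session()
session.add(User(name='alice'))
session.commit()
names = [u.name for u in session.query(User).all()]
print(names)
```

Because the models only depend on SQLAlchemy, you can reuse them in scripts, tests, and cron jobs that never import Flask at all.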
OpenBSD on Raspberry Pi
I haven’t played with the Raspberry Pi quite enough. I’ll do a write-up on my garage door opener project another time. But a really promising place for the Pi, in my opinion, is the role of a traveling router/access point. I don’t find the process of connecting to the WiFi in a hotel room particularly hard. It’s connecting back into my own network to access my services that is difficult. The methods I have at my disposal are:
- IPSec VPN
- SSH/SCP to selected service
- Direct access where things are configured for it
Running OpenBSD on a Raspberry Pi gives me a solid place to put an IPSec connection for the whole hotel room network. Getting there involves installing OpenBSD on the Pi in the first place.
Ingredients:
- A Raspberry Pi 3B or 3B+; each model has its pluses and minuses
- An OpenBSD Raspberry Pi snapshot release, available at this URL.
- OpenBSD does not support the Pi video yet, so the install console is serial. You need an Arduino/Raspberry Pi serial cable. The link points to a 4-pin style. It connects as follows:
- Black <-> Pi GND
- White <-> Pi TX0
- Green <-> Pi RX0
- A fast USB stick. OpenBSD can’t run from the MicroSD card yet. This one works.
- A WiFi adapter that you can live with. This is going to be a compromise, because WiFi support has somewhat left the BSDs behind. These two, the CanaKit WiFi and TP-Link WiFi adapters, work.
Getting out of git hell easily
Everyone has war stories about git. They almost always involve letting a feature/topic branch get far out of date from its parent branch. My friend Sebastian has figured out a quick way to get out of what I call git hell. The best way to avoid that situation is to incorporate git rebase and git rebase -i into your normal workflow. Basically, use git rebase periodically, and right before you submit your merge request, to make sure that your branch will cleanly replay onto its parent integration branch. And use git rebase -i and git push --force, if you need them, on your topic branch to keep a concise commit history as you build your topic deltas.
No matter what happens, you or someone on your team will end up in git hell where you are replaying a stack of commits so you can generate a clean commit history and publish your deltas. If you end up there, you need to understand that git merge is actually your friend. Sebastian suggests the following:
## Be safe, do this work on a test branch.
git checkout topic/branch
git checkout -b test-topic/branch
## Make sure the local copy of master is up-to-date.
git checkout master
git pull
## Go back to your test branch and merge in master.
git checkout test-topic/branch
git merge master
## Reset your state to be that of master. This leaves your changes as
## an unstaged blob against master.
git reset master
At this point your working directory should have all the changes from your topic branch unstaged against the HEAD of master.
git diff
should confirm this. Now you can add what you want and write a new commit that performs the changes you want, comfortable in the knowledge that you aren’t undoing upstream changes. You’ll use git add
and git commit
to accomplish this. The difference is that your new set of changes should apply cleanly to master. From here you can:
## When you are comfortable that your test branch captures your
## deltas.
git checkout test-topic/branch
git branch -D topic/branch
git checkout -b topic/branch
git push --force origin topic/branch
git branch -D test-topic/branch
Sebastian says “Have fun!”
SSL Everywhere? Maybe not cups
Last night I made the aggravating discovery that cups has gone SSL. The option to have cups protected by SSL is wonderful, but I’m not sure that SSL by default is a good thing for printing services. I discovered this because printing from my Apple machines was failing with no log messages on any of the Apple machines on my network. At first I thought this might be an IPv6 issue. Using tcpdump, I quickly determined that cups on my Mac was not only using IPv6 but was using the semi-random “private/temporary” address of my cups server. But continued debugging showed that IPv6 wasn’t the issue, and the private/temporary address wasn’t it either. Disabling encryption with:
DefaultEncryption Never
did the trick. This is clearly not safe. What would be best would be cutting a certificate for my cups server. That’s problematic because two years from now, when the certificate expires, how long will it take me to figure out why printing stopped working? Perhaps best would be to encrypt requests that need a password and allow cleartext communication for plain printing.
Turn off arp change noise on FreeBSD
If you run a FreeBSD server on a network with any Apple infrastructure, AirPort, AppleTV, etc., then you are probably used to seeing lots of messages like this:
...
arp: 169.254.124.133 moved from --- somewhere --- to --- somewhere else --- on em0
arp: 169.254.124.133 moved from --- somewhere else --- to --- somewhere --- on em0
...
This is the Bonjour Sleep Proxy service in action. A device that provides a sleep proxy attempts to make Bonjour services available on your network at all times by advertising the proxy’s IP address as the service destination while the true provider is sleeping. For example, if you have an older, non-networked shared printer connected to an iMac desktop, the sleep proxy will advertise its own address as the destination for your shared printer. If someone sends a print request to your printer, the sleep proxy intercepts the request, sends a wake-up packet to your iMac, and then the printing can actually go on.
This activity looks a lot like an ARP poisoning attack. If you want to check for that, look at the MAC addresses of the devices in question. You can look up the first three octets of a MAC address on Google; those are a manufacturer ID. If one or both of the devices is from Apple, it’s more likely that you have a Bonjour Sleep Proxy working on your network.
Over time these messages are disruptive on a FreeBSD server because they blow valid information out of the kernel’s dmesg buffer. You can still recover the kernel’s boot dmesg by groveling through sysctl, but if you have a disk drive that’s misbehaving, that information will be lost in a day or two.
To turn these messages off, do the following:
$ sudo su -
Password:
# ## Fix this for this kernel boot session...
# sysctl -w net.link.ether.inet.log_arp_movements=0
net.link.ether.inet.log_arp_movements: 1 -> 0
# ## Fix this permanently.
# echo 'net.link.ether.inet.log_arp_movements=0' >> /etc/sysctl.conf
#
Oh, that was really easy…
I just bought an Apple Magic Keyboard. My initial reaction: awesome. This is because of the ease of pairing with another Apple device. To pair it, you literally turn it on and then plug it into your computer with the supplied Lightning cable. No passcodes, no discovery mode; just plug it in and it works. Given that Bluetooth and USB go hand in hand these days, I really think that nearly anything that requires Bluetooth pairing should work this way.
Why buy the keyboard? I’m one of the many software developer/devops engineer/sysadmin guys who’ve avoided upgrading to the latest generation of Apple laptop, mainly because of the new keyboards:
- Forcing me to use the touch bar for the Esc key is honestly a complete non-starter.
- The reduced travel of the butterfly keyboard, combined with the fact that if you get a crumb in it you need to take it back to Apple to get it repaired, is another non-starter.
So, I’ve been slogging through life with the top-of-the-line 2015 15″ MacBook Pro for quite a few years. To pull me over the hump, a new MacBook Pro would have to be:
- Quad-Core i7 or better
- 32GB of RAM
- 15″ Display
If such a machine had the keyboard from the 2015 MacBook Pro, I would have already bought it.
But my current laptop is starting to show its age. I have to recondition the battery before a long flight to maximize battery lifetime. The current machine is dusty enough inside that the fans have lost some of their efficiency.
For $99.00, and even less from Amazon, I can try out the new mechanism and make a better evaluation of my ability to use the new laptop. I’m typing this blog post with the new machine and I have to admit that the new mechanism is nice. And, in the worst case, this would always be a good media center PC keyboard.
Ansible step zero
In my previous article I showed the steps to build an ansible repository that you can grow to fit your existing infrastructure. The first step here is to set up the repository you built to self-bootstrap. For that you’ll need to flesh out your inventory and build your first playbook.
Building Inventory
Ansible is driven off of an inventory. The inventory specifies the elements of your infrastructure and then groups them, which makes things easy to manage. Ansible understands three kinds of inventory: inventory specified as a Windows-style .ini-formatted static file, inventory specified in a yaml file, and inventory specified dynamically. Dynamic inventory is the holy grail. I recommend starting with a yaml inventory.
Although both yaml and ini style inventories have roughly the same capabilities, I prefer yaml, because if you work with ansible you’re going to become good friends with yaml no matter what. If you aren’t familiar with the yaml format, find some time to study it. yaml is just a markup format that allows you to structure things. I didn’t really get yaml until I played with the python yaml module. I realized that yaml, like json, allows you to write python variables into a file in a structured fashion. The python yaml module can read a properly formatted yaml file and return a python variable containing the contents of the yaml “document,” or it can take nearly any python variable, an array, a dict, a scalar, and write it such that another python program can read it back. Yaml differs from json in that it’s generally parseable and readable by human beings. If the consumer of your data is a program, use json. If a human is expected to read it, use yaml.
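For example, the round trip through the python yaml module (the PyYAML package) looks like this; the data structure here is just an illustration:

```python
import yaml  # the PyYAML package

# Any nested python structure of dicts, lists, and scalars round-trips
# cleanly through yaml; this is how ansible reads its own files.
inventory = {
    'all': {
        'children': {
            'maestro-test': {
                'hosts': {'192.168.100.3': {'my_host': 'maestro-test'}},
            },
        },
    },
}

text = yaml.safe_dump(inventory)   # python variable -> yaml markup
restored = yaml.safe_load(text)    # yaml markup -> python variable
print(restored == inventory)
```

Use safe_load and safe_dump rather than load and dump; the unsafe variants can construct arbitrary python objects from untrusted input.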
Your starting yaml inventory should look something like this:
---
all:
  children:
    maestro-test:
      vars:
        std_pkg:
          - ansible
          - terraform
          - git
        add_pkg:
          - emacs
      hosts:
        192.168.100.3:
          my_domain: mydomain.com
          my_host: maestro-test
This defines an inventory with one group: maestro-test. It includes one machine, at IP address 192.168.100.3, and it defines some variables for the group. This should be stored in an appropriately named file:
base-maestro-inventory.yml
in the Ansible directory.
The first playbook
With an inventory, you can build a playbook. The first playbook looks like this:
---
- hosts: maestro-test
  tasks:
    - name: Install standard packages
      package:
        name: "{{ item }}"
        state: latest
      with_items: "{{ std_pkg }}"
    - name: Install additional packages
      package:
        name: "{{ item }}"
        state: latest
      with_items: "{{ add_pkg }}"
This should be installed in a file named something like:
base-maestro-playbook.yml
in the Ansible directory. At this point, presuming that you have a machine, physical or virtual, at 192.168.100.3 into which you can ssh as root, you can bootstrap your maestro as follows:
chris $ ansible-playbook -i base-maestro-inventory.yml --user root base-maestro-playbook.yml
And that should install the correct packages onto your maestro test box. I’ll revisit this article later to add users.
Getting started with Ansible, et al
For admins, young and old, getting started with orchestration tools like ansible, I believe the wise first move is to create an orchestration workstation. This machine will have ansible, terraform, git, and your favorite editor. You are going to use this machine as the basis for infrastructure as code in your organization for the short-term future. Basically, you’ll stop using this machine for infrastructure as code once you get to the point where you can repeatably automate the creation and change management of things. At that point the role of this machine will be testing infrastructure changes, and there will be another machine exactly like this one that controls your production infrastructure.
The first thing that this machine should be able to do is replicate itself. That’s a simple task. In Unix terms you are looking at a box that:
- allows you to log in via ssh keys
- allows you to edit the ansible and terraform configurations, which
- are stored in git so that they are version controlled
That really specifies three users: you, ansible, and terraform. Also, as specified before, you need a handful of packages: ansible, terraform, git, and your favorite editor. The whole thing looks pretty similar to this:
chris $ mkdir Ansible
chris $ git init Ansible
chris $ cd Ansible
chris $ mkdir -p files/global group_vars host_vars roles/dot.template/{defaults,files,handlers,tasks,templates,tests}
chris $ find * -type d -exec touch {}/Readme.md \;
chris $ touch Readme.md
chris $ git add . && git commit -m 'Initial revision.'
That builds an ansible configuration as a git repository and checks in the first revision. It also populates the ansible repository with directories that roughly correspond to ansible best practices. This will be a working repository which you are going to build out to support your infrastructure. You’ll do this by adding inventory, playbooks and roles bespoke to your needs.