Apple’s Captive Network Assistant

In an attempt to make life easier, Apple added the Captive Network Assistant app to OS X; I think this addition was made sometime around Lion. Captive Network Assistant is a little app that displays the very simple web page you get when you connect to a WiFi network that sits behind a captive portal. These are the pages you get when you first log onto your coffee shop WiFi; they usually ask you to agree to some terms and conditions before you can use the network. In the case of hotels, resorts, and cruise ships they also tie into the site’s billing system so you can be charged if that’s appropriate. Lately I’ve started to get these pages on both my MiFi hotspot and, most recently, my home WiFi.

This article explains three major drawbacks to Apple’s approach here. The authors of these web pages will frequently embed logout information into the page when the captive portal mechanism is being used to track usage for billing. In that use case the app is a hindrance, because when it disappears it takes the logout link with it. Also, Apple triggers the app by attempting to fetch a known page over the web when your WiFi first connects; if it doesn’t get the response it expects, it knows it’s behind a captive portal. In my case, the Captive Network Assistant is displaying Apple’s static page, which indicates that you aren’t actually behind a portal at all.
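You can watch the probe yourself. The exact URL has changed across OS X releases, but the current probe host is captive.apple.com; when you aren’t behind a portal you get a tiny static success page back:

curl http://captive.apple.com/hotspot-detect.html
<HTML><HEAD><TITLE>Success</TITLE></HEAD><BODY>Success</BODY></HTML>

A portal that intercepts the request returns its login page instead, and that mismatch is what launches the assistant.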

When I started seeing the captive portal page on my home network, I decided to turn the thing off. To turn the Captive Network Assistant off, run this command in a terminal window:

sudo defaults write /Library/Preferences/SystemConfiguration/com.apple.captive.control Active -boolean false

To restore the old behavior, do this, again in a terminal window:

sudo defaults delete /Library/Preferences/SystemConfiguration/com.apple.captive.control Active

Other people, including the article linked above, recommend renaming the app. I’m not in love with that solution, mostly because two months from now I don’t expect to remember that I did it in the first place. My solution isn’t much better; one could argue it’s worse because it requires a terminal and sudo. It’s the one I went with though.

FreeBSD cross compiling or “Thanks Captain Obvious…”

It would be nice to manage my fleet of FreeBSD machines from one place. But I’ve diversified from i386 only to i386 and amd64 as I do more and more with virtual machines; single-purpose servers and less power usage, for the win. So the question comes up: do I need both build-i386.vindaloo.com and build-amd64.vindaloo.com? Nope:

# env TARGET=i386 MAKEOBJDIRPREFIX=/usr/obj/i386 make buildworld...
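The TARGET knob tells the build which architecture to produce, and MAKEOBJDIRPREFIX keeps the object trees for the two architectures from stomping on each other. A slightly fuller sketch of what a cross build and install might look like (the DESTDIR staging path here is made up for illustration):

# build an i386 world on the amd64 build box; objects land under /usr/obj/i386
env TARGET=i386 MAKEOBJDIRPREFIX=/usr/obj/i386 make buildworld
# install the result into a staging directory the i386 machines can reach
env TARGET=i386 MAKEOBJDIRPREFIX=/usr/obj/i386 make installworld DESTDIR=/staging/i386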

NFS – Old habits die hard

Old habits and myths die hard. Conventional wisdom asserts that UDP is better for NFS because it has lower overhead; then conventional wisdom suggests that you tune the buffer sizes to improve performance. On the face of things that would seem to work, but once the write size exceeds the maximum packet size, NFS has to deliver the request as multiple packets. Sending multiple packets triggers the real problem: dropping just one UDP packet means the whole buffer must be resent. Contrast that with TCP. Yes, the packet header is larger so less data can be sent, and yes, the receiving side has to ACK each packet. But with TCP, if a packet gets dropped, only that packet needs to be resent, and with a modern TCP stack the kernel will constantly adjust the window size to make the best use of the available bandwidth. In other words, NFS over TCP automatically tunes itself for the current conditions.
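On a FreeBSD client, picking TCP is just a mount option (the server name and paths here are hypothetical):

mount -t nfs -o tcp fileserver.vindaloo.com:/export/home /mnt/home

or the equivalent line in /etc/fstab:

fileserver.vindaloo.com:/export/home  /mnt/home  nfs  rw,tcp  0  0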

MySQL lovefest

Great, just discovered how easy it is to break things with MySQL views and stored functions. It turns out that to recreate a view from a dump, mysqldump first creates a temporary stand-in table for each view, then one by one drops those tables and creates the views in their place. This presents two potential problems: 1. a view can legally have more columns than a table is allowed to have, so the stand-in table can’t always be created, and 2. views can use stored functions to modify results, but stored functions aren’t part of the mysqldump output until after the views have been defined.

You might think the solution to the problem is: create the database, create all the stored procedures and functions, create the tables and views from mysqldump --no-data, then reload all the data. It’s not. It looks like the only way to do this is to use information_schema to make a list of the base tables to dump, follow that up with the routines, and then follow that up with the views.
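Something like this is what I have in mind; a rough sketch only, with a made-up database name and no error handling:

# dump the base tables (data and all), leaving the views out
TABLES=$(mysql -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='mydb' AND table_type='BASE TABLE'")
mysqldump mydb $TABLES > 1-tables.sql

# dump just the stored procedures and functions
mysqldump --routines --no-create-info --no-data --skip-triggers mydb > 2-routines.sql

# dump the view definitions last, once the functions they call exist
VIEWS=$(mysql -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='mydb' AND table_type='VIEW'")
mysqldump --no-data mydb $VIEWS > 3-views.sql

Reload the three files in that order and the view definitions never have to reference a function that doesn’t exist yet.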

bash / ksh / pdksh

For my new job I’ve decided to try not to be so old and crotchety and to use bash without complaining, rather than just changing my shell to pdksh. Today I needed to process options in a shell function, which I’ve done in ksh before. It turns out that you have to preface your option processing with OPTIND=1 if you are in a function. Dunno why yet, but I’ll find out.
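Here’s the shape of it, as a minimal sketch (the function and its -v option are made up):

# process a -v flag inside a function; without the OPTIND=1 reset,
# a second call to the function picks up where the last getopts left off
logmsg() {
    OPTIND=1
    verbose=0
    while getopts v opt; do
        case $opt in
        v) verbose=1 ;;
        esac
    done
    shift $((OPTIND - 1))
    if [ "$verbose" -eq 1 ]; then
        echo "verbose: $*"
    else
        echo "$*"
    fi
}

logmsg -v "first call"
logmsg "second call"

The usual explanation is that OPTIND is a shell global that getopts only resets when a new shell starts, so each function has to reset it for itself.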

py2exe + anydbm error

So, I tried something simple yesterday: adding a call to anydbm to a python program that I plan to distribute on Windows with py2exe. Doing so, I ran into this error:

ImportError: no dbm clone found; tried ['dbhash', 'gdbm', 'dbm',  'dumbdbm']

Turns out that py2exe’s dependency scanner misses the dependency anydbm has on a concrete db module, because anydbm picks its backend dynamically at import time. The moral of the story is that if you want to use anydbm and py2exe you need to do something like:


import anydbm, dbhash  # import dbhash explicitly so py2exe bundles a real dbm backend

f = anydbm.open("dbname.db", "c")
...

Finally sat down with sqlalchemy…

I’ve been meaning to sit down and play with sqlalchemy for a while now. As an old person, though, my needs are a little different. Most people use the ORM model to relieve themselves of the burden of dealing with SQL database engines. This is all fine and dandy if you have the luxury of using an SQL database solely as a repository for data with permanence. However, SQL databases can do much more than that. In this vein, the sqlalchemy tutorials don’t tell you much about database introspection, that is, figuring out what the layout of a table is from the information available in the database. Introspection is very important to me because I frequently create tables (and constraints and triggers, etc.) directly on the database. I also have a frequent need for an SQL feature called views, but that’s a story for a different day. I came up with this code:

#! /usr/bin/env python

''' ============================================================================================
Program: sqla.py -- A test of sqlalchemy's ability to introspect tables from a database.
============================================================================================ '''

from sqlalchemy import *
from sqlalchemy.orm import *

# bound MetaData: ties the schema objects to an engine so tables can autoload
metadata = MetaData("mysql://scott:tiger@mysql.example.com/test")

# reflect work_entry from the live database; only the primary key is
# spelled out here, the rest of the columns come from introspection
workEntryMeta = Table('work_entry', metadata,
                      Column('work_entry_id', Integer, primary_key=True),
                      autoload=True)

# empty class that mapper() will wire to the work_entry table
class WorkEntry(object):
    pass

def sqlToStr(c):
    ''' Render a column value, mapping SQL NULL (None) to the string NULL. '''
    if c is None:
        return "NULL"
    else:
        return str(c)

mapper(WorkEntry, workEntryMeta)  # classic-style mapping of the class to the table

session = create_session()
q = session.query(WorkEntry)
entries = q.all()

headers = None
for e in entries:
    if headers is None:
        # take the column names from the first mapped instance, skipping
        # sqlalchemy's private attributes such as _sa_instance_state
        headers = [ c for c in e.__dict__.keys() if c[0] != '_' ]
        print "\t".join(headers)

    d = [ sqlToStr(e.__dict__[c]) for c in headers ]
    print "\t".join(d)

This creates an ORM object for the work_entry table in the current python program and grabs all the rows from the database, printing each one as it goes.