Great, I just discovered how easy it is to break things with MySQL views and stored functions. It turns out that to recreate views from a dump, MySQL first creates a temporary stand-in table for each view, then one by one drops the tables and creates the views in their place. This presents two potential problems. 1. It’s possible to have a view with more columns than a table can have. 2. Views can use stored functions to modify results, but stored functions aren’t part of the mysqldump output until after the views have been defined.
The obvious solution would seem to be: create the database, create all the stored procedures and functions, create the tables and views from mysqldump --no-data, then reload all the data. It’s not. It looks like the only way to do this reliably is to use information_schema to make a list of base tables to dump, follow that with the routines, and then follow that with the views.
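The three-pass dump might be sketched like this (just a sketch, assuming a database named mydb and credentials coming from ~/.my.cnf; adjust to taste):

```shell
# 1. List the base tables (not views) via information_schema and dump them.
TABLES=$(mysql -N -B -e "SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'mydb' AND table_type = 'BASE TABLE'")
mysqldump mydb $TABLES > 1-tables.sql

# 2. Dump the stored procedures and functions, and nothing else.
mysqldump --routines --no-create-info --no-data --no-create-db \
    --skip-triggers mydb > 2-routines.sql

# 3. List and dump only the views; when these are restored last,
#    the functions they depend on already exist.
VIEWS=$(mysql -N -B -e "SELECT table_name FROM information_schema.tables
    WHERE table_schema = 'mydb' AND table_type = 'VIEW'")
mysqldump mydb $VIEWS > 3-views.sql
```

Restore in the same order: tables, routines, views.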
For my new job I’ve decided to try not to be so old and crotchety, and to use bash without complaining rather than just changing my shell to pdksh. Today I needed to process options in a shell function, which I’ve done in ksh before. It turns out that you have to preface your option processing with OPTIND=1 if you are in a function. Presumably this is because getopts keeps its scan position in the global OPTIND, which persists between calls, but I’ll find out for sure.
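Here’s a minimal sketch of the pattern (the function and its option are made up for illustration):

```shell
# A function that takes its own options. Without the OPTIND=1 reset,
# a second call would resume scanning where the first call left off.
greet() {
    OPTIND=1
    loud=0
    while getopts "l" opt; do
        case $opt in
            l) loud=1 ;;
        esac
    done
    shift $((OPTIND - 1))
    if [ "$loud" -eq 1 ]; then
        echo "HELLO, $1"
    else
        echo "hello, $1"
    fi
}

greet -l world   # prints "HELLO, world"
greet world      # prints "hello, world"
```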
So, I tried something simple yesterday: adding a call to anydbm to a Python program that I plan to distribute on Windows with py2exe. Doing so I ran into this error:
ImportError: no dbm clone found; tried ['dbhash', 'gdbm', 'dbm', 'dumbdbm']
It turns out that the way py2exe scans for dependencies, it misses the fact that anydbm loads one of the concrete db modules at runtime. The moral of the story is that if you want to use anydbm and py2exe you need to import a backend explicitly, something like:
import anydbm, dbhash  # importing dbhash explicitly lets py2exe bundle it
f = anydbm.open("dbname.db", "c")
I’ve been meaning to sit down and play with sqlalchemy for a while now. As an old person though my needs are a little different. Most people use the ORM model to relieve themselves of the burden of dealing with SQL database engines. This is all fine and dandy if you have the luxury of using an SQL database solely as a repository for data with permanence. However, SQL databases can do much more than that. In this vein the sqlalchemy tutorials don’t tell you much about database introspection, that is, figuring out what the layout of a table is from the information available in the database. Introspection is very important to me because I frequently create tables (and constraints and triggers etc.) on the database. I also have a frequent need for an SQL feature called views, but that’s a story for a different day. I came up with this code:
#! /usr/bin/env python
"""Program: sqla.py -- A test of sqlalchemy's ability to introspect tables from a database."""

from sqlalchemy import *
from sqlalchemy.orm import *

def sqlToStr(c):
    # Render a column value as a string, mapping NULL to the empty string.
    if c is None:
        return ''
    return str(c)

metadata = MetaData("mysql://scott:email@example.com/test")

# autoload=True tells sqlalchemy to introspect the remaining columns
# from the database itself.
workEntryMeta = Table('work_entry', metadata,
    Column('work_entry_id', Integer, primary_key=True),
    autoload=True)

class WorkEntry(object):
    pass

mapper(WorkEntry, workEntryMeta)

session = create_session()
q = session.query(WorkEntry)
entries = q.all()
headers = None
for e in entries:
    if headers is None:
        headers = [ c for c in e.__dict__.keys() if not c.startswith('_') ]
        print '\t'.join(headers)
    d = [ sqlToStr(e.__dict__[c]) for c in headers ]
    print '\t'.join(d)
This creates an ORM class for the work_entry table in the current Python program and grabs all the rows from the database, printing each one as it goes.
It looks like the Exodus XMPP/Jabber client has a new home here. I found out because a friend had signed up for a Gmail account and wanted to use a real chat client. As an old Jabber user I told her about Exodus.
Mod_python and Django have me going around in circles, because I don’t want to tax my old brain with learning either how to build Apache with the worker MPM or how to set up mod_wsgi.
This seems obvious, but it’s really handy to have my website set up with DAV access to the backend for administration purposes. I’ve recently set up one of my sites this way and it works out quite nicely.
I always have to look this up. To reload a zone on BIND 9 where the server is running split horizon you need: rndc reload zone class view. Class is almost always going to be “in”. In my case this works out to: rndc reload example.com in inside
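For the record, the full dance on a split-horizon server looks like this (the zone and the view names inside/outside are my illustration):

```shell
# Reload the same zone in each view of a split-horizon BIND 9 server.
rndc reload example.com in inside
rndc reload example.com in outside
```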
Looks like something burped on my mail server and my bogofilter wordlist got too big, probably something to do with limits. In any case I was looking for a way to recover from the issue and came across this pearl in the Bogofilter FAQ. Well, the advice is incomplete. If you really hose up the database then bogoutil -d will stop printing entries before the end of the database. The next recovery step is to use the Berkeley DB utilities db_dump and db_load to fix the database: db_dump -r (on FreeBSD, db_dump-&lt;version&gt;) salvages the database into a text file, and db_load rebuilds a database from such a text file. The problem is that the advice in the Bogofilter FAQ is out of date; it looks like there are some header parameters that have to be specified, and the salvage dump only has a default header, so loading it creates a broken database. My solution: also run db_dump without the -r, copy the header from that dump into a new text file, and then append the data from the db_dump -r output to it. Et voila!
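The whole dance looks something like this (a sketch, assuming the broken wordlist is wordlist.db; on FreeBSD substitute db_dump-&lt;version&gt; and db_load-&lt;version&gt;):

```shell
# A plain dump stops early on a broken database, but its header is good.
db_dump wordlist.db > plain.txt
# Salvage mode recovers the key/value data but emits a default header.
db_dump -r wordlist.db > salvage.txt

# Keep everything through HEADER=END from the plain dump, then append
# the salvaged data (everything after HEADER=END in the salvage dump).
sed -n '1,/^HEADER=END$/p' plain.txt > fixed.txt
sed -n '/^HEADER=END$/,$p' salvage.txt | sed 1d >> fixed.txt

# Rebuild a clean database from the combined text file.
db_load -f fixed.txt wordlist.new.db
```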
My Mother of all MiFi wishlist:
- Runs for 4 ~ 5 hours on rechargeable batteries. Preferably 4xAA NiMH cells, which I have in abundance.
- WPA encryption if possible, otherwise pre-auth by MAC address or live auth via authpf.
- Automatically connects to my lan using certificate based IPSec.
- Provides DNS locally.
- GUI configuration, but it can be a Python Tkinter or X11 GUI.
- 802.11b/g although given my experience last week 802.11n over 5GHz would be nice.
- SNMP configuration? That’s why I got an enterprise number from IANA.
- Put the Soekris Net4511 on my Kill-a-watt meter to see how much juice it really needs (and how efficient the power supply is.)
- Figure out how to get USB into the thing. The outside internet will be a Verizon or Sprint network dongle.
- Get a case and power supply for the 4511.
- Will OpenBSD provide WPA2 authentication?
- How hard is it going to be to get a USB jack into a 4511 case? (Bill Johnson?)
- How many people can I connect to it before it’s overloaded?
- 4521 Case? Automatically has room for batteries.