Archive for the 'Technical' Category

Recreating Foreign Keys in MySQL

Tuesday, October 20th, 2009

The short version of this story is that I had a test server that was inadvertently configured to use MySQL's MyISAM engine. MyISAM doesn't support foreign keys; it will quietly ignore your attempts to add them. I meant to use the InnoDB engine, which does support foreign keys. Of course, who hasn't done that? Am I right?

I fixed the engine problem quickly enough. Next I wanted to take a copy of the database that still had its foreign keys (production, dev, whatever) and export the necessary ALTER TABLE statements so I could add them to the fixed test database. I couldn't find an existing tool, so I whipped up this SELECT statement to generate a script, based on my limited understanding of MySQL. If it helps someone else, then great.

SELECT concat('ALTER TABLE `', table_name,
              '` ADD CONSTRAINT `', constraint_name,
              '` FOREIGN KEY (`', column_name,
              '`) REFERENCES `', referenced_table_name,
              '`(`', referenced_column_name, '`);')
FROM information_schema.key_column_usage
WHERE referenced_table_name IS NOT NULL
  AND constraint_schema = 'ourserverdb'
ORDER BY table_name, column_name;

This of course results in a whole bunch of rows of the form:

ALTER TABLE `licensekeys` ADD CONSTRAINT `FK_keysIssuerId__appuserId` FOREIGN KEY (`issuer_id`) REFERENCES `app_user`(`id`);
ALTER TABLE `subscription` ADD CONSTRAINT `FK_subscription_entity_group_id__entityGroupId` FOREIGN KEY (`entity_group_id`) REFERENCES `entityGroup`(`id`);
ALTER TABLE `user_role` ADD CONSTRAINT `FK_userRoleRoleId__roleId` FOREIGN KEY (`role_id`) REFERENCES `role`(`id`);

From there it's just a little copy / paste into the MySQL command prompt and I'm done. Incidentally, mysqldump with the --no-data flag didn't do quite what I wanted, since the foreign key creation sits in the middle of each CREATE TABLE statement. There are surely other ways to do this, but this is what worked for me.
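If you'd rather not copy / paste, the string assembly is easy to reproduce outside the server as well. Here's a sketch that fakes it with awk, feeding in one made-up row in the tab-separated shape that mysql -N -B -e would emit for the table, constraint, column, referenced table, and referenced column:

```shell
# Build an ALTER TABLE statement from a tab-separated key_column_usage row.
printf '%s\t%s\t%s\t%s\t%s\n' \
  licensekeys FK_keysIssuerId__appuserId issuer_id app_user id |
awk -F'\t' '{ printf "ALTER TABLE `%s` ADD CONSTRAINT `%s` FOREIGN KEY (`%s`) REFERENCES `%s`(`%s`);\n", $1, $2, $3, $4, $5 }'
```

(mysql's --batch output is tab-separated, and -N strips the column header, so piping the real query through a pipeline like this works the same way.)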


Shell Scripting Madness

Friday, October 16th, 2009

Every now and then I bask in the beauty of the simple things. I'm not talking about children smiling, flowers, or any of that other crap. Shell scripting, baby! Today I had to move some SQL statements from an XML document into a Java class. So I needed to change this (which I didn't write):

    WHEN primaryStartAge < 20  THEN ' 0 to 19'
    WHEN primaryStartAge BETWEEN 20 AND 29 THEN '20 to 29'
    WHEN primaryStartAge BETWEEN 30 AND 39 THEN '30 to 39'
    WHEN primaryStartAge BETWEEN 40 AND 49 THEN '40 to 49'
    WHEN primaryStartAge BETWEEN 50 AND 59 THEN '50 to 59'
    WHEN primaryStartAge BETWEEN 60 AND 69 THEN '60 to 69'
    WHEN primaryStartAge > 70 THEN '70 and up'
    END as "Primary Start Age Range",
    count(1) as "Count" FROM analyticsResults
    WHERE calculatorType like ?
    GROUP BY CASE
    WHEN primaryStartAge < 20  THEN ' 0 to 19'
    WHEN primaryStartAge BETWEEN 20 AND 29 THEN '20 to 29'
    WHEN primaryStartAge BETWEEN 30 AND 39 THEN '30 to 39'
    WHEN primaryStartAge BETWEEN 40 AND 49 THEN '40 to 49'
    WHEN primaryStartAge BETWEEN 50 AND 59 THEN '50 to 59'
    WHEN primaryStartAge BETWEEN 60 AND 69 THEN '60 to 69'
    WHEN primaryStartAge > 70 THEN '70 and up'
    END
    ORDER BY 1 ASC

to something like this (which I still didn't write):

            + "    WHEN primaryStartAge < 20  THEN ' 0 to 19' "
            + "    WHEN primaryStartAge BETWEEN 20 AND 29 THEN '20 to 29' "
            + "    WHEN primaryStartAge BETWEEN 30 AND 39 THEN '30 to 39' "
            + "    WHEN primaryStartAge BETWEEN 40 AND 49 THEN '40 to 49' "
            + "    WHEN primaryStartAge BETWEEN 50 AND 59 THEN '50 to 59' "
            + "    WHEN primaryStartAge BETWEEN 60 AND 69 THEN '60 to 69' "
            + "    WHEN primaryStartAge > 70 THEN '70 and up' "
            + "    END as \"Primary Start Age Range\", "
            + "    count(1) as \"Count\" FROM analyticsResults "
            + "    WHERE calculatorType like ? "
            + "    GROUP BY CASE "
            + "    WHEN primaryStartAge < 20  THEN ' 0 to 19' "
            + "    WHEN primaryStartAge BETWEEN 20 AND 29 THEN '20 to 29' "
            + "    WHEN primaryStartAge BETWEEN 30 AND 39 THEN '30 to 39' "
            + "    WHEN primaryStartAge BETWEEN 40 AND 49 THEN '40 to 49' "
            + "    WHEN primaryStartAge BETWEEN 50 AND 59 THEN '50 to 59' "
            + "    WHEN primaryStartAge BETWEEN 60 AND 69 THEN '60 to 69' "
            + "    WHEN primaryStartAge > 70 THEN '70 and up' "
            + "END "
            + "ORDER BY 1 ASC";

I could copy and paste and fix it manually, use a text editor with regex search and replace, or something equally bland. Since it was Friday, though, I decided to treat myself and do it from a Cygwin shell. This got me close enough and made me giddy with satisfaction:

getclip | sed -e 's/"/\\"/g' -e 's/^/"/g' -e 's/$/ " +/g' | putclip

This grabs the contents of the clipboard, replaces all quotes with escaped quotes, replaces the beginning of each line with a double quote, and replaces the end of each line with a space / double quote / space / plus combo. It then sticks it back into the clipboard. It's not fancy, it could be better, but it was a minor bright point. And thanks to Cygwin it happened in Windows. Sort of.
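getclip and putclip are Cygwin's clipboard helpers, so the middle of the pipeline is plain sed and easy to try on any input. For example, with a here-doc standing in for the clipboard:

```shell
# Same sed stage as above, fed from a here-doc instead of the clipboard.
sed -e 's/"/\\"/g' -e 's/^/"/g' -e 's/$/ " +/g' <<'EOF'
WHEN primaryStartAge < 20  THEN ' 0 to 19'
END as "Primary Start Age Range",
EOF
```

which prints:

"WHEN primaryStartAge < 20  THEN ' 0 to 19' " +
"END as \"Primary Start Age Range\", " +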


Ruby One Liner to Sort and Run Length Encode a String

Tuesday, December 30th, 2008

I'm not a Ruby programmer but I thought this was kind of cool. While poking around on Stack Overflow, the subject of storing letter frequency for words came up. While there may be a better solution, the idea of alphabetizing the word and then collapsing any run of 3 or more of the same letter into the count followed by the letter seemed like a passable one. For instance, "mississippi" alphabetizes to "iiiimppssss", and the runs reduce further to give "4impp4s". Seems simple enough, and in the case being discussed it would have very little impact on the storage mechanism or the code around it.

The whole thing turns out to be pretty easy as a Ruby one liner:

"mississippi".split(//).sort.join.gsub(/(.)\1{2,}/) { |s| s.length.to_s + s[0,1] }

That can probably be made a lot better by a Ruby expert. The regular expression finds any character followed by the same character two or more times, and passes the matching string to the block as the parameter s. The block returns the replacement string: the length of the matched string (the character count) followed by one of the characters from the match. This runs as a global substitution on the original string. Wha-bam!!! I wonder if there's an odd edge case where this breaks.
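Out of curiosity, the same trick translates to the shell: fold and sort do the alphabetizing, uniq -c counts the runs, and a bit of awk rebuilds the encoded string (a sketch, assuming single-byte letters):

```shell
# Alphabetize the letters, then encode any run of 3 or more as count+letter.
printf '%s' mississippi | fold -w1 | sort | uniq -c |
awk '{ if ($1 >= 3) printf "%d%s", $1, $2
       else for (i = 0; i < $1; i++) printf "%s", $2 }
     END { print "" }'
```

It prints 4impp4s, matching the Ruby version.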


Linux in the (Wannabe) Enterprise

Wednesday, December 10th, 2008

The footholds of Linux in small Windows shops are skunkworks projects and discarded hardware. Inevitably the old mail server or the equivalent is considered woefully underpowered and gets replaced. The old hardware sits in a corner of the server room and collects dust. That is until I need a "no money down" VMWare solution.

Of course the downside of this is that you will find yourself installing on frequently inadequate, old hardware that may or may not work; no one ever seems to be sure. When something goes wrong, it's Linux's fault. Such was the case when I had to install on an old Dell PowerEdge 600SC. Of course, the install didn't work right off the bat.

The install hung, with the last message being "Uniform CD-ROM driver Revision: 3.20". I randomly upgraded the BIOS, hoping it was some weird problem with the on-board IDE, and saw the same hang. Then I noticed that the CD-ROM was attached to the tertiary channel. I can't recall ever having that setup before, so I moved the CD-ROM drive from the tertiary to the secondary channel (by accident, as it happens, because the order of the IDE connectors from bottom to top of the motherboard appears to be secondary, primary, tertiary).

After the install, it appears that I can't get either DHCP or a static IP to work. Everybody assures me that it's not the IP address they gave me or our DHCP server. I try a different network card with the same result. Finally, I figure out that the network the IP address belongs to is in fact to blame (and our DHCP server seems to have crapped out at the same time, and no, it isn't running on Linux). But people stubbornly insist that it's Linux's fault until I waste my time proving otherwise. While I'm gathering evidence, they make a point of wandering by my desk and asking why I'm not just using Windows. When society collapses, they've got a special place on my post-apocalyptic TODO list.

I finally get it all working with a fresh install of VMWare 2.0 (hate the new management web app, by the way) and a VM migrated from my desktop, with a copy of Zenoss Core happily monitoring our new production environment on EC2. Everything in that setup is new from the point of view of this organization. Of course, while I'm patting myself on the back over a job well done, someone asks how to get to the desktop UI. Although it probably won't help them much, I go ahead and install GNOME, VNC, and Webmin on the box, even though I consider it a waste.

Now I get to sit back and eagerly await the opportunity to bask in the criticism the next time anything goes wrong with the box. I'm sure it'll be the fault of that darn Linux.


Clearing Cached Authentication Info in Windows

Tuesday, November 4th, 2008

This comes up every now and then for me, and I can never remember how to do it, so I'm sticking it here to make it easier for me to find. The problem happens when I'm using Windows Explorer to open or browse a Windows share / Samba share / SMB mount point / etc. Windows Explorer tends to cache the authentication information for the share, and it doesn't prompt for credentials again when the cached ones become invalid. This happened to me again today when the account I had used in the past had been disabled. You can find and clear the cached authentication(s) in either of the following ways:

Method 1:
Click Start, Run and type: control keymgr.dll
Remove the entries from the list

Method 2:
Click Start, Run and type: control userpasswords2
Click Advanced, then Manage Passwords

The information is also in the Registry but these worked well enough for me to not go poking around in that rat's nest.


Who Needs Milliseconds Anyway?

Thursday, October 9th, 2008

My latest bug adventure has to do with the fact that at work we're transitioning to MySQL from SQL Server, a move I fully support.

First, some detail on the way our application works. When our applet client syncs with the server, it copies the records into a local database, which is not MySQL. When you modify a record in the client, it gets persisted first to that local database. Anywhere from immediately to the nebulous "later", the client will sync again with the server. When this happens, a summary list of the data you can see is sent to the client, including when each record was last updated on the server. That time is compared with your local records and a sync occurs: local records with a later modified date get sent to the server, and remote records with a later modified date get pulled to the client.

I'm not wild about this setup, mainly because I don't trust the time on the client machine, since it's well outside of my control. We're also using the client-generated time on the server as the last modified time. I think at the very least we should use the server time (interestingly, that wouldn't have solved the problem in this case). Slightly more ideally, we should use an incrementing version field, which would also do a better job of detecting update conflicts. That aside, we found that when we moved our test systems to MySQL, the client was sending way too many records up to the server. Everything in the client-side database appeared to be newer.

It turns out that MySQL truncates timestamps and dates to second granularity. Anything finer than a second (millisecond, microsecond, whatever) is simply dropped. In the client, we're using a database that supports milliseconds. What this means is that if you modify a record at 11:52:27.421 it gets stored with that timestamp locally. When it gets stored in MySQL it is marked as last modified at 11:52:27. Therefore, your local record is almost always newer by literally a fraction of a second. Cool, huh?
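The effect is easy to demonstrate with a toy comparison, treating the timestamps as seconds since midnight (a sketch of the comparison, not our actual sync code):

```shell
# The local store keeps milliseconds; MySQL drops them on the floor.
local_ts='11:52:27.421'
server_ts='11:52:27'
awk -v l="$local_ts" -v s="$server_ts" 'BEGIN {
  split(l, a, ":"); split(s, b, ":")
  lsec = a[1]*3600 + a[2]*60 + a[3]   # a[3] is "27.421", coerced to 27.421
  ssec = b[1]*3600 + b[2]*60 + b[3]
  print ((lsec > ssec) ? "local looks newer" : "in sync")
}'
```

It prints "local looks newer": the client record wins the comparison by 421 milliseconds, so it gets re-sent on every sync.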

Luckily, there's already a bug report. Given that it was reported over 3 years ago, I'm confident it is very nearly fixed. I am still a bit amazed that a database so popular in the enterprise fails at this very basic level of functionality.

As always, there are workarounds to the problem, ranging from storing the sub-second values in a separate field to creating a user-defined type.


Total File Sizes by Extension

Tuesday, September 2nd, 2008

Every so often I have a brief love affair with awk. Today I got curious about the file sizes beneath a directory. In particular I wanted to see the totals by file extension. I did a quick search but came up with nothing. I decided that even if there is something out there to do the job, it'd be a lot more fun to do it myself. Tada:

ls -Rl | \
grep ^- | \
awk '{ split($9, e, ".")
       exts[e[length(e) == 1 ? 2 : length(e)]] += $5 }
     END { for (ext in exts) printf "%10d %s\n", exts[ext], ext }'
Yeah, there's an ugly hack in there to deal with file names that have either no extension or multiple dots in them.
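If you have GNU find handy (Cygwin does), you can skip parsing ls output and let -printf hand awk the size and basename directly. This is a variant sketch, not the original, and filenames with spaces will still confuse the field splitting:

```shell
# Total file sizes by extension: one "size basename" line per file, grouped
# on the text after the last dot ("(none)" when there is no dot).
find . -type f -printf '%s %f\n' |
awk '{ n = split($2, e, ".")
       ext = (n == 1) ? "(none)" : e[n]
       sizes[ext] += $1 }
     END { for (ext in sizes) printf "%10d %s\n", sizes[ext], ext }'
```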

As an added bonus, looking at the previous post on awk got me all pissed off about "smart quotes" in WordPress blogs and the problems they cause when copying and pasting code examples. So, out they go.


Interminably Long Timeouts for META-INF Under IIS

Wednesday, August 27th, 2008

How's that for a catchy title? To recap recent events: I'm working on speeding up a Java applet; sloppy code in the applet's libraries tries to load resources from the server, which then 404; you can avoid this by setting the codebase_lookup property to false in the applet tag; and finally, eliminating 23 megs of invisible data can help speed up downloading. Now that we're all caught up, let's turn to today's adventure: "deployment nightmares", or "why the hell doesn't my test environment match production?"

Applet Won't Load

I finally got the applet to the "good enough for government work" level of load-time performance. Understand that I don't even work on any of the code in the applet, I'm just trying to optimize what's there and how it's delivered from the server. Today was the day we decided to quietly deploy to production.

The first sign of a problem was when the Apple guys came into the office. The applet wouldn't load for them in Safari, Firefox 2, or Firefox 3. However, it worked fine on every server they tried except the production server. While trying to figure out what was going on, it turned out that the issue affected all machines using JRE 1.5, regardless of OS or browser, and they all worked against every server except production.

Differences Between Production and Test Environments

In production we have some sort of load balancer, Tomcat sits behind IIS, and it's on an external network. Nothing in our test environment has a load balancer in front of it, only one machine has IIS (and it works fine), and my external EC2 deployment is obviously off our network. I'm not sure why we don't mirror as much of this as possible in our test environment, but we don't.

Now back to the bug. Turning off the load balancer had no effect. Eventually, someone let their browser sit long enough to see that the applet did in fact load; it just took around 10 minutes. I finally noticed that the Java console would hang on various non-existent resources it tried to load from the server. I used cURL to retrieve one of those URLs and had to wait 2 minutes until it returned an empty reply. Requests for most non-existent resources returned immediately; only URLs that contained META-INF or WEB-INF would hang.

Various third-party libraries were trying to load odd things from the server, as I mentioned previously, and a few of those load attempts point at the META-INF directory. This only happens under 1.5, because of the codebase_lookup parameter I used in the tag. Tomcat, Apache in front of Tomcat, and our internal IIS server all return immediately: the first two serve a custom 404 page, while the IIS server sends an immediate empty reply.

WEB-INF and META-INF Protection

Both WEB-INF and META-INF are directories that you probably shouldn't be exposing. In fact, in most versions of the Tomcat Connector the connector will automatically 403 or 404 when any resource from those directories is requested. In our case, we were running an older version of the connector that just happened to have a bug that caused requests to either directory to take 2 minutes to timeout. A quick upgrade and an IIS service bounce fixed everything.

So the debugging lessons for the day are: use something like ngrep to watch your traffic, your test environment should mirror your production environment, applets under 1.5 suck, and check your version numbers on third-party libraries (and consider upgrading).


Optimizing PNG File Sizes

Thursday, August 21st, 2008

For the past few weeks I've been working on improving the load time of an applet at work. Someone here noticed a while back that we seemed to have a ridiculous number of image files in the applet. There are roughly 23 megabytes of images going into the final jar, which is around 14 megabytes in total size.

I opened a couple of the files in GIMP and resaved them, only to find that the resulting file was much smaller than the original. It turns out that the images were created using Fireworks, which by default puts some extra information in the PNG file, things like layers or palette information I believe. A few minutes of searching around on the internet turned up a wonderful tool named PNGOUT that losslessly optimizes the size of PNG files. I used Cygwin (I'm on Windows at work) to run all of the PNG files through the utility and waited a few minutes:

find . -type f -name '*.png' -exec pngout.exe {} \;

The end result was as follows:

Before:
Total size of images: 24,147,770 bytes
Total size of app jar: 14,086,540 bytes

After:
Total size of images: 1,026,186 bytes
Total size of app jar: 4,335,560 bytes

Yikes. Not bad for a few minutes worth of work.
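If you want to tally the savings yourself, totaling the PNG bytes before and after the run is another one-liner (a sketch; the -printf directive assumes GNU find, which Cygwin ships):

```shell
# Sum the sizes of all PNG files beneath the current directory.
find . -type f -name '*.png' -printf '%s\n' |
awk '{ total += $1 } END { printf "%d bytes in %d files\n", total, NR }'
```

Run it once before optimizing and once after, and subtract.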


More on applets and codebase_lookup

Friday, August 8th, 2008

As I mentioned in the last post, I'm farting around with applets for work. You may remember that the applet was hammering the server whenever it couldn't find a resource in the jars. Of course, everything the applet needs is already in the jars, so if it's not in them then it's not on the server either. It all comes down to the AppletClassLoader trying to load things from the codebase on the server whenever it fails to load them from the jars.
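For reference, the fix amounts to a single param in the applet tag. The class and jar names below are placeholders, but codebase_lookup itself is the documented Java Plug-in switch:

```html
<applet code="com.example.OurApplet" archive="app.jar"
        width="800" height="600">
  <!-- Don't fall back to the server codebase when a resource isn't in the jars -->
  <param name="codebase_lookup" value="false">
</applet>
```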

As part of the fix of setting codebase_lookup to false, I did some very quick, unofficial benchmarking. The test environment was an EC2 deployment I've been messing around with (the smallest instance). I timed the startup of the applet and counted the number of server hits during startup. To further minimize variability, I did this after the jars had already been cached, which roughly corresponds to the startup time for a visitor who has already successfully launched the applet. The results were as follows:

codebase_lookup – true (default setting)
Startup time: 35 seconds
Server hits: 442

codebase_lookup – false
Startup time: 7 seconds
Server hits: 34

When the applet does hit the server to look stuff up in the codebase, it doesn't just do it at startup. Some libraries don't cache failed attempts to load a resource, so they keep hitting the server. In our app it's XFire, with a lot of non-existent "aegis.xml" and "doc.xml" files, as well as a whole bunch of "BeanInfo.class" attempts courtesy of java.beans.Introspector. Each of these attempts takes a tiny bit of time and puts an annoying 404 in the access logs. It's hard to say how much distributed sluggishness all these little attempts add up to, but it's definitely non-zero. It also has an effect on server-side scalability when each client is potentially 13 times chattier with the server than it needs to be.

An additional concern is that some of this cavalier attitude toward resource loading in these libraries also happens on the server side. How many failed getResourceAsStream attempts am I not seeing, and what impact are they having on overall server performance? At current traffic levels it's probably insignificant, but the idea of that much inefficiency spread throughout the app kind of bothers me.