Archive for the 'Technical' Category

Why Install When I Can Dockerize?

Monday, March 2nd, 2015

I haven't posted in forever. This post is mostly so I can break that streak. As such, there may be a lack of super interesting things in it.

Software Rant

I'll avoid the tired "containers are just encapsulation, man" rant. I might normally make the whole "nothing new under the sun" point if it weren't for the fact that I'm currently mired in how wrong I think the "microservices are just SOA" crowd is. Just be glad that people are interested in the new thing and hope that they make it shinier.

I dislike having to install things on my laptop. Not everything, of course, but I don't want to install something for every little thing I want to play around with. Everything starts with installing Ruby, Python, Go, D, or Node.js, then letting them install who knows what (because everybody has to reinvent the package manager), who knows where, and create whatever inter-version incompatibilities they feel like. Then I wind up on StackOverflow trying to figure out the installation and debugging tricks those communities take for granted.

Sadly I still wind up with most of this shit on my box thanks to a handful of awesome tools like Ansible and Vagrant. I know it's ironic but I still have to draw the line somewhere.

WiFi Rant

So, I'm working at Google now (if you say congratulations I will punch you in the taint) and taking the bus to work. The bus has very shitty WiFi so to pass the time I either read e-books or watch downloaded content. Xfinity has the best DVR option for this since their Android app will let you download anything you've recorded for offline viewing. Fucking badass. Netflix and TiVo (and YouTube I believe) think that super awesome, streamy, creamy WiFi is everywhere. It's not, and companies that don't support offline viewing hate America, freedom, and teenage Jesus. You really don't hear enough about teen Jesus…

Then I thought it'd be great to download some of the blog post backlog I have in Feedly (my RSS reader currently because fucking Google shut down Reader). That'd let me catch up on some of the stuff that falls under the "sharpening the tools" category of time wasting. Unfortunately 95% of the stuff in there comes from InfoQ which, to its credit, has video AND slides synced in its presentations. This means simply downloading video alone isn't good enough. What's a boy to do?

"Someone Beat Me to it" Rant

Someone else already wrote something to download InfoQ presentations for offline viewing. Since it's written in Python I took it for a test drive in a VM to avoid unnecessary dependency baggage from invading my system. The sumbitch works like a charm. It pulls everything into a folder and creates an HTML page you can use for offline viewing.

Fine. So I don't get to do that as a fun mini-project. I could still do something cool with Docker. For this post I don't care if Docker (or Rocket) will save the world. I care that I can encapsulate a bunch of shit into a container that is lighter weight than a virtual machine and treat it like a command. The inspiration came in the form of someone else's post that takes it even further with X11 applications. Baby steps.

Well some motherfucker beat me to that, too. Luckily though he used a bad naming convention and I couldn't get his container to work. So I built my own anyway.

What Did I Accomplish?

I now have a bash script that wraps the slightly onerous Docker command to launch my container and download an InfoQ presentation for offline viewing. Yay! I also got to increase my public GitHub footprint (never a bad thing).

I learned that Docker handles host to container file permissions badly, Docker security has a long way to go, Docker Hub has some pretty nice continuous integration options for GitHub projects that have Dockerfiles, and finally that I think it's better to waste 458MB of disk space to avoid putting more Python stuff on my laptop. Not a bad Saturday afternoon.

Share

Splitting Files By Column Value Using Awk

Thursday, August 9th, 2012

At the day job a data fairy gives me a giant pipe delimited text file that contains data for a bunch of our customers. The customer ID is contained in one of the columns. Ideally I'd like to have one file per customer but it's usually very difficult to get data fairies to do the things you want.

For reference here's a reasonable facsimile of what the file looks like. Let's pretend this is some sort of interesting survey. Bonus points if you can figure out a question that would make sense for these answers.

FIELD1|FIELD2|CUSTOMER|FIELDN
"Once in college but it wasn't my idea."|3|"CUST1"|"blah blah"
"Like your mom."|14|"CUST2"|""
"Blame it on the dog."|15|"CUST1"|"Frankenberry"
"That wasn't chicken."|9|"CUST2"|"Definitely the mouth."
"Never professionally"|26|"CUST3"|"And then she stepped on the ball!"

What we want is three files: one for each customer. We drop the split file in a different directory for each customer to keep things a little neater and we name the file with the customer code prepended to the original file name. All nice and orderly.

As with many things involving text files this winds up being stupid easy using Awk. I'm showing it here mostly so I can find it again and because this type of command line file processing always makes me giddy. The comments should do a good enough job of explaining things.

#! /usr/bin/awk -f
BEGIN {
  if(CUSTOMER < 1) {
    print "Usage: split -v CUSTOMER=[column number] [files]";
    exit 1;
  }

  # Set the input and output field delimiters
  FS="|";
  OFS="|";
  system("mkdir -p split");
}

{
  # If this is the first line of a file...
  if (FNR==1) {
    # Grab the entire first row as the header
    header=$0;

    # Close open files from the previous file (if any)
    for(customer in customers) {
      close(customers[customer]);
    }

    # Clear the array of customers / output files
    delete customers;
  } else {
    # Grab the customer code and strip out the quotes
    customer=tolower($CUSTOMER);
    gsub(/"/, "", customer);

    # Store the output file name.  This is the customer code followed
    # by the original file name.
    outputFile="split/" customer "/" customer "_" FILENAME;

    # If this is the first time we've seen this customer code in this file...
    if(!(customer in customers)) {
      system("mkdir -p split/" customer);

      # Overwrite any previous output file and print the header
      print header > outputFile;

      # Track the fact that we've seen this customer code and store the output file
      customers[customer]=outputFile;
    }

    # Append the current line to the output file
    print >> outputFile;
  }
}

I'm sure someone could do this more succinctly and without some of the odd things I've done in there (maybe parameterize the delimiters or the output directory structure), but I kind of like it. It's already proved useful for a number of other cases for me. Also the fact that it's relatively tiny and super fast is all the answer I need if one of the co-workers asks why I didn't write it in Java.
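
Speaking of more succinct: if you don't need the header row or the per-customer directories, the core trick fits in a one-liner. Here's a throwaway sketch (the sample data and file names are made up):

```shell
# Route each row to a file named after the (lowercased, de-quoted) third column.
printf '%s\n' \
  'a|1|"CUST1"|x' \
  'b|2|"CUST2"|y' \
  'c|3|"CUST1"|z' > sample.txt

# Append each line to <customer>.txt based on column 3.
awk -F'|' '{c=tolower($3); gsub(/"/, "", c); print >> (c ".txt")}' sample.txt
```

After running that, cust1.txt holds the two CUST1 rows and cust2.txt holds the one CUST2 row.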

Share

He's Got This Ultimate Set of Tools

Sunday, May 6th, 2012

"Relax, all right? My old man is a television repairman, he's got this ultimate set of tools. I can fix it." If you don't remember your Fast Times at Ridgemont High quotes you're probably not alone. The scene is worth remembering because the context is ridiculous. So it is sometimes with software development. The cost and effort of fixing the existing implementation is sometimes just too great. The changes cut too deep. You're better off throwing out the current stuff and starting from scratch.

In software development you rarely understand your problem domain perfectly, if ever. You learn what your customers want through trial and error. Sometimes your organization has made such poor attempts at delivering the product people want that you can't help but throw away what you've currently got and try again with what you learned from your previous attempt.

Managers usually hate to hear such talk from developers. Developers always want to rewrite things. But in some rare cases they're absolutely right. Refactoring is great if you're even remotely close to what you want to do. But what if your product is built on bad assumptions of epic proportions?

Could CVS have been refactored incrementally to arrive at git? Could Windows have been refactored to create Linux? Could Mac OS have been refactored to create OS X? Could Internet Explorer have been refactored to create Chrome? When do you come to the realization that what you want, what you need, is so far away from what you have that you can't get there from here? When is the cost of making changes to your current product artificially inflated by the technical debt and faulty abstractions to the extent that it's better to throw it all away?

That's the advantage your competition has. You've shown them your near miss at a great product. If the people in your organization advocating a rewrite were magically transported into a competing startup that was creating a competing product from scratch would you be at all worried? If the answer is "yes" then you should use the advantages you have (those very same people plus a more intimate knowledge of the problem domain and where you went wrong) and do something about it. Plus if something in your product actually proves useful you can copy and refactor it into the new product.

There are certainly risks but the rewards are incredible.

Share

Autowiring Jackson Deserializers in Spring

Wednesday, May 2nd, 2012

Recently I was working in a Spring 3.1 controller for a page with a multi-select of some other entity in the system. Let's say an edit user page that has a User object for which you're selecting Role objects (with Role being a persistent entity with an ID). And let's further say that I'm doing some fancy in place editing of a user within a user list so I want to use AJAX and JSON to submit the user to the server, for whatever reason (probably because it's rad \oo/).

Okay now that we have our contrived scenario I want to serialize the collection of roles on a user so that they're a JSON array of IDs of said roles. That part is pretty easy. Let's just make all of our persistent entities either extend some BaseDomainObject or implement some interface with getId and then write a generic JSON serializer for Jackson:

package com.runningasroot.webapp.spring.jackson;

import java.io.IOException;
import org.codehaus.jackson.JsonGenerator;
import org.codehaus.jackson.JsonProcessingException;
import org.codehaus.jackson.map.JsonSerializer;
import org.codehaus.jackson.map.SerializerProvider;
import org.springframework.stereotype.Component;
import com.runningasroot.persistence.BaseDomainObject;

@Component
public class RunningAsRootDomainObjectSerializer extends JsonSerializer<BaseDomainObject> {

    @Override
    public void serialize(BaseDomainObject value, JsonGenerator jgen, SerializerProvider provider) 
            throws IOException, JsonProcessingException {
        jgen.writeNumber(value.getId());
    }
}

Awesome if that's what I want. We'll assume it is. Now if I submit this JSON back to the server I want to convert those IDs into real live boys, er, domain objects. To do this I need a deserializer that has access to some service that can find a domain object by ID. I'll leave figuring out ways to genericize this for multiple domain objects as an exercise for the reader because frankly that's not the part I'm interested in.
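
For the sake of illustration, the RoleListDeserializer I wire up later might look something like this sketch. `RoleService` and its `findById` are made-up names standing in for whatever lookup service you have; since `contentUsing` applies the deserializer to each element of the collection, it only has to turn one ID into one entity:

```java
package com.runningasroot.webapp.spring.jackson;

import java.io.IOException;
import org.codehaus.jackson.JsonParser;
import org.codehaus.jackson.JsonProcessingException;
import org.codehaus.jackson.map.DeserializationContext;
import org.codehaus.jackson.map.JsonDeserializer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

@Component
public class RoleListDeserializer extends JsonDeserializer<Role> {

    // Made-up service; substitute whatever loads your entities by ID.
    @Autowired
    private RoleService roleService;

    @Override
    public Role deserialize(JsonParser jp, DeserializationContext ctxt)
            throws IOException, JsonProcessingException {
        // Each array element is just a numeric ID; look up the real entity.
        return roleService.findById(jp.getLongValue());
    }
}
```

Being a Spring `@Component`, it only gets its dependencies injected if Jackson asks Spring for the instance, which is exactly the problem the rest of this post solves.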

So how do I control how Jackson instantiates deserializers and make sure that I can inject Spring beans into them? You would think it would be very easy and it is. Figuring it out turned out to be unnecessarily hard. The latest version of Jackson has a class for this and even says that's what it's for. So let's make us an implementation of a HandlerInstantiator that is aware of Spring's ApplicationContext. Note that you could do this entirely differently with an interface from Spring but who cares? Here's what I did:

package com.runningasroot.webapp.spring;

import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.HandlerInstantiator;
import org.codehaus.jackson.map.JsonDeserializer;
import org.codehaus.jackson.map.JsonSerializer;
import org.codehaus.jackson.map.KeyDeserializer;
import org.codehaus.jackson.map.MapperConfig;
import org.codehaus.jackson.map.SerializationConfig;
import org.codehaus.jackson.map.introspect.Annotated;
import org.codehaus.jackson.map.jsontype.TypeIdResolver;
import org.codehaus.jackson.map.jsontype.TypeResolverBuilder;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;

@Component
public class SpringBeanHandlerInstantiator extends HandlerInstantiator {

    private ApplicationContext applicationContext;

    @Autowired
    public SpringBeanHandlerInstantiator(ApplicationContext applicationContext) {
        this.applicationContext = applicationContext;
    }

    @Override
    public JsonDeserializer<?> deserializerInstance(DeserializationConfig config,
            Annotated annotated,
            Class<? extends JsonDeserializer<?>> deserClass) {
        try {
            return (JsonDeserializer<?>) applicationContext.getBean(deserClass);
        } catch (Exception e) {
            // Return null and let the default behavior happen
        }
        return null;
    }

    @Override
    public KeyDeserializer keyDeserializerInstance(DeserializationConfig config,
            Annotated annotated,
            Class<? extends KeyDeserializer> keyDeserClass) {
        try {
            return (KeyDeserializer) applicationContext.getBean(keyDeserClass);
        } catch (Exception e) {
            // Return null and let the default behavior happen
        }
        return null;
    }

    // Two other methods omitted because if you don't get the idea yet then you don't 
    // deserve to see them.  phbbbbt.
}

Great now we just need to hook up a custom ObjectMapper to use this thing and we're home free (extra shit that would probably trip you up as well included at no extra charge):

package com.runningasroot.webapp.spring;

import org.codehaus.jackson.map.DeserializationConfig;
import org.codehaus.jackson.map.HandlerInstantiator;
import org.codehaus.jackson.map.ObjectMapper;
import org.codehaus.jackson.map.SerializationConfig.Feature;
import org.codehaus.jackson.map.annotate.JsonSerialize;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationContext;
import org.springframework.stereotype.Component;
import com.fasterxml.jackson.module.hibernate.HibernateModule;

@Component
public class RunningAsRootObjectMapper extends ObjectMapper {

    @Autowired
    ApplicationContext applicationContext;

    public RunningAsRootObjectMapper() {
        // Problems serializing Hibernate lazily initialized collections?  Fix here.
        HibernateModule hm = new HibernateModule();
        hm.configure(com.fasterxml.jackson.module.hibernate.HibernateModule.Feature.FORCE_LAZY_LOADING, true);
        this.registerModule(hm);

        // Jackson confused by what to set or by extra properties?  Fix it.
        this.setSerializationInclusion(JsonSerialize.Inclusion.NON_NULL);
        this.configure(DeserializationConfig.Feature.FAIL_ON_UNKNOWN_PROPERTIES, false);
        this.configure(Feature.FAIL_ON_EMPTY_BEANS, false);
    }

    @Override
    @Autowired
    public void setHandlerInstantiator(HandlerInstantiator hi) {
        super.setHandlerInstantiator(hi);
    }
}

Now you just have to tell everything to use your custom object mapper. This can be found elsewhere on the web but I'll include it here in case of link rot:

package com.runningasroot.webapp.spring;

import javax.annotation.PostConstruct;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.converter.HttpMessageConverter;
import org.springframework.http.converter.json.MappingJacksonHttpMessageConverter;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter;

@Component
public class JacksonConfigurer {
    private AnnotationMethodHandlerAdapter annotationMethodHandlerAdapter;
    private RunningAsRootObjectMapper objectMapper;

    @PostConstruct
    public void init() {
        HttpMessageConverter<?>[] messageConverters = annotationMethodHandlerAdapter.getMessageConverters();
        for (HttpMessageConverter<?> messageConverter : messageConverters) {
            if (messageConverter instanceof MappingJacksonHttpMessageConverter) {
                MappingJacksonHttpMessageConverter m = (MappingJacksonHttpMessageConverter) messageConverter;
                m.setObjectMapper(objectMapper);
            }
        }
    }

    @Autowired
    public void setAnnotationMethodHandlerAdapter(AnnotationMethodHandlerAdapter annotationMethodHandlerAdapter) {
        this.annotationMethodHandlerAdapter  = annotationMethodHandlerAdapter;
    }

    @Autowired
    public void setObjectMapper(RunningAsRootObjectMapper objectMapper) {
        this.objectMapper = objectMapper;
    }
}

I think you can also perform this bit of trickery inside of an application-context.xml. But whatever works for you works. I think Yogi Berra said that.

Of course you still need to annotate your getters and setters with special Jackson annotations:

@JsonSerialize(contentUsing=RunningAsRootDomainObjectSerializer.class) 
public Collection<Role> getRoles() {
    ...
}

// Some deserializer with some hot Spring injection going on in the back end (if you know what I mean)
@JsonDeserialize(contentUsing=RoleListDeserializer.class)
public void setRoles(Collection<Role> roles) {
    ...
}

So there you have it: an example of a Spring Jackson JSON serializer that serializes the contents of collections of domain objects as an array of IDs and then deserializes JSON arrays of IDs into domain objects to be put into a collection. Say that three times fast.

Share

I Think We're Going to Need a Bigger Box

Tuesday, April 10th, 2012

I was reading this post on the Instagram buyout by Facebook today and it got me to thinking about the benefits of the cloud, DevOps, horizontal scalability (one of my favorites), and well thought out architectures and monitoring.

One of the more interesting things about the $1 billion purchase price is that Instagram has 13 employees and 35 million users. That's just so crazy to me. It also ends up being yet another argument against the "bigger box" method of solving scalability issues. Eventually you cannot simply add more RAM to fix things. Trying to solve your problems that way is like trying to solve world hunger by breeding a single, giant cow.

Share

Let's Just Burn It All Down and Start Again

Saturday, April 7th, 2012

All software sucks to some extent including everything you are working on right now. If you reexamine your code six months from now and don't think it sucks then it probably means you didn't learn anything in those six months. That's the downside of being a software developer. You feel like the code you're working around is some degree of horrible. For the most part you just accept it and try to make incremental improvements to things. If you're lucky you'll work on something that you think is magnificent (and then think it's shit in six months).

But what happens when the code is truly horrific? For example: you wrote your own FTP client, your own templating engine, you have mutating getters, there's database access in your pages and data objects, you cut and paste DDL statements into SQL clients and call it "upgrading the schema", etc. We can argue about whether some of those things are truly bad but from my perspective they're pretty rotten. Throw that into a 100k+ line code base with many active customers and too few developers and then you've got some real fun.

In these situations I can envision a more ideal code base pretty easily. Update the libraries and start using them, fix the schema that no longer matches the problem domain (if it ever did), start pushing things into neat little tiers, get rid of that shitty build, run a continuous integration build server, use Chef or Puppet to manage configuration, scale your shit horizontally and get all elasticy with the cloud, etc. Pretty soon I've built a shining city on the hill in my mind. The only problem is I'm still calf deep in shit and I need to go back to standing on my head just as soon as my lunch break is over.

My solution has always been to burn everything to the ground and start over. It's not a popular position even among software developers. "Let's just slowly fix everything that is wrong," they say. It sounds good but progress on paying down your massive technical debt always seems to take a backseat to a shiny new feature (with its own share of technical debt). Pretty soon you're not even paying the interest on that debt. Nope. Burn it all down. Or at least build a new bridge next to the old bridge and then blow the old bridge up. Maybe you can even be nice enough to divert traffic first.

The "fix in place" crowd always sounds like this to me: "I bought a new motorcycle. It's a Honda. I kind of want a Harley instead. Can you turn it into a Harley while I ride it around? Thanks. xxxooo"

At least I'll always have these rants before the void. Thanks for listening.

Share

Geek TGI Friday's Flair

Monday, September 19th, 2011

TGI Friday's walls are littered with "vintage" wall decor. Red Lobster has old lobster traps and fish photos all over their walls. Then it hit me: geek hangouts need their own brand of wall flair. Why not outdated tech books?

I've got a ton of books on technologies that aren't in widespread use any more. I'd donate them but even Goodwill doesn't want stuff like that. When you think about it, it makes sense. So where do they go? The landfill? I like to pretend I'm much more environmentally friendly than that.

Some hangout for geeks needs to step up and offer a free appetizer or something for anyone that brings in a tech book that was published before, say, 2000? That seems like a reasonable cutoff. Then all the geeky people can laugh at the titles lining the shelves above their tables. "PowerBuilder? Oh, shit! I wrote something in that once!" (Apologies to Sybase, but you really need to give up on that shit.)

Share

MP3s and Ratings

Friday, August 13th, 2010

Don't you hate when you put ratings on most of the songs in your massive music library only to find that you need to do it again when you switch players? On Ubuntu I use Banshee which allows you to save ratings to the ID3 tag right in the MP3 file. That means those ratings are available from any Banshee player. Nice.

The problem is that I'm working a contract gig that sort of requires Windows (well, they think they do at least) and I don't fully trust the port in progress of Banshee to Windows. So, I'm using iTunes (which I hate). I think it'd be nice if other players could use that same custom ID3 tag to use the ratings but I realize that many people have an issue with subjective information (the ratings) being stored in a repository meant to store common supposedly objective information about the song itself. Then there's the whole issue of standardizing on the custom tag. In a perfect world more stuff would use a plugin based design and you could simply write an extension to get the ratings from wherever you wanted.

A simple import / export to an agreed upon format could also sort of solve the problem but you can't get people to agree on things and you would then have some annoying synchronization issues. I think it'd be swell if something like last.fm acted as that song and ratings repository since they're a bit of a de facto standard supported by most MP3 players. It seems simple to stick the rating in there when you scrobble whatever you're listening to. Then it's just a hop, skip, and a jump to an import / export to get up and running. It also feels like it'd add some value to their existing service. Somebody get on that…

Share

The Tech That Should Not Be

Thursday, August 12th, 2010

I just read this post about a thing called the Espresso Book Machine that allows a bookstore to print a fully bound book in minutes. The idea is that they could print an out of stock book for you rather than ordering it.

I have mixed emotions about this. Nothing pisses me off more than going to an old fashioned bookstore in search of some instant gratification only to find that they don't have the book I'm looking for. "We can order it for you," they say. Well, I can order it for me too. Only, when I order it for me it comes to my house and not to your stupid little store (and I don't pay sales tax (or shipping fees usually (nested parens FTW))). This print on demand idea seems pretty boner inducing on the surface.

Unfortunately the kinds of technology that make this dream possible also instantly make it unnecessary. In a world where this machine can acquire and store the number of books required to make it useful it has already been replaced by the ability to instantly purchase, download, and read the book on an e-reader without leaving my precious home or touching any dirty, sweaty money. Sure it will probably still be successful but only because of the Luddite fetishists that insist on consuming their information the old fashioned way.

This whole thing reminds me of those stupid redbox DVD dispensaries. In any sane world they would have never existed. I have a relatively high speed Internet connection and an abundance of digital cash. Can't I just instantly stream those movies directly to my viewer of choice for the same reasonable price? Ah, the devil's in the (bold) details. I have a variety of ways to pseudo instantly watch movies but the only reasonably priced option is Netflix. Unfortunately their instant queue selection needs a little work. Knock down that barrier and the only benefit of redbox is to satisfy weirdos that reached for the technological dream and missed, coming up with a beer in one hand and their disc in the other.

I digress. To sum up, in a perfect world everything would be peer reviewed, indexed, searchable, remixable, and digitally available from the comfort of my own home. I could watch new movies on my own television without someone kicking the back of my seat or mistaking the theater for open mic night at the Laughatorium. "And I wanna be rich. You know, someone important … like an actor."

Share

Job Postmortem #2

Tuesday, August 3rd, 2010

About the Company

Now that I'm done with my current job it's again time to reflect on what I learned and what went wrong. I've changed the names to protect the innocent. I spent about 2 years at "Company V." They make a retirement planning tool. It allows you to do some nice "what if" scenarios to determine whether or not you're on track to do all those things you dream of someday doing after you retire. It's much more sophisticated than the crappy one or two question forms on the website of most financial planning companies.

It's a great idea in my opinion. It has a lot of potential. For the record, I like the people at Company V and I love the product idea. I just think things could be better.

Now for the lessons. I won't bother talking about the many issues I had about software development methodology at Company V. Instead I'll just talk about the product side of things.

Analytics, Stupid

The first is a simple one: collect some fucking analytics. Any discussion about how important a feature is, why people aren't signing up, which type of sign up button is more attractive are all bullshit if you don't have some way of collecting data about your visitors. We collected almost zero data about our visitors. What was our conversion rate? Fuck if I know. How many people abandoned the sign up process once they saw all of the data we required? Fuck if I know. That's the answer to every one of those questions because there's no goddamn data.

I can't talk about analytics as well as these two videos: Startup Metrics for Pirates and Web App Marketing Metrics. They're pretty short and definitely worth a few minutes of your time.

Multiple Masters

Company V has two very different target customers: home users and financial advisors. If you are serving two very disparate customer types you will wind up with some very serious conflicts. Each customer type is a reason not to do something for the other customer or a great way to more than double your effort in the rare case you actually get to work on a feature.

In the case of Company V, the conflict centered on a feature called "offline mode." This allowed financial advisors to take their laptop to locations where they don't have an Internet connection and sit down with a customer, going over their retirement plan. This was accomplished via a desktop application written in Java.

Getting Java working on someone's computer is an unnecessary hurdle and places without Internet connections only exist in movies. Offline mode is not useful to the home user. I would argue that it's not sufficiently useful to the financial advisor either. However, it was a feature that kept us from doing a lot of cool stuff because we had to have it. Yes, this feature could be accomplished in a better way but the need to keep the feature presented unnecessary overhead and complexity in my opinion.

Too Many Hurdles

There's just too much shit for someone to do before they can use the product. They have to sign up for an account, install the Java plug-in, download the application (or launch the applet) which is over 100 megabytes, and figure out how to use your product.

The more of those steps you can eliminate the better. Each one of those steps throws away half of your potential users. They just go bye-bye. The observant reader will realize that I just pulled that number out of my ass since Company V doesn't collect that kind of data. Prove me wrong.

The Things I'd Do

Short and sweet. Here's a list of things I would have done that I firmly believe would make for a better product for Company V.

Web App

Easy. Ditch the desktop application and make it a web application. Use something like GWT so you can get some good use out of your current Java development staff and have a relatively rich UI for your user. No installation on your computer, no downloading. Nice. You could even use Gears to get some workable solution for offline mode.

Use It Before You Register

If you have that nice web application, let people start making their retirement plan without even signing up. Just start using the product. Of course it would be nice if your product guided people through unfamiliar territory, but that's a given.

Once you've proven your value to them then you can try and get them to create an account if that's really your sort of thing.

Don't Even Register

Even better is to let them sign in with their Facebook, Google, Yahoo!, or OpenID login. Create an imperfect, incomplete profile off of whatever data you've got and bug them later to fill in the blanks. So what if you don't have their email address? Why the hell do you want to email them anyway?

Stop Emailing People

We collected email as part of the registration so we could bother our customers. Why? If you have a product announcement or a change in your training schedule why not just Tweet it? Or post an update on your product's Facebook page? Fine, let them put in their email if they want to be updated that way or need a password reminder (assuming they aren't using a 3rd party authentication mechanism), but don't demand it.

Be the Tool

With retirement planning there are a lot of financial advisors that blog about how cool they are and how huge their planning penises are. We should have helped them do that. Our web app should have allowed embedding of whole or partial plans into web pages. If you want to show the benefits of a 529 savings plan create a couple of portfolios and embed the relevant portions into your blog. Company V would have a teeny tiny link in there so they get a little free press and the financial planner gets a tool that makes displaying unwieldy information a little easier. It's one of those win-win things I hear so much about.

Be the Tool Part 2

If you go to a financial planner they need to ask you roughly 3500 questions (I made that up) to determine the current state of your financial clusterfuck. Company V helped them do this by creating a PDF that was 10 megabytes and 40 pages long. The advisor would email it to the potential customer, pray it doesn't bounce because it's fucking huge, the customer would print it out, fill out the relevant portions, take it to the financial advisor who then hands it off to some data entry monkey to type into our desktop application. Simple, no?

Yeah, to hell with that. Use the no registration web application to allow the financial advisor to email, host, whatever a guided process to determine the relevant data and collect it directly from the user and dump it straight into the Company V application. The advisor has access to it immediately and the end user doesn't see most of those irrelevant questions. Throw in some tracking codes so the advisor can see the ROI for different ad campaigns. Let the advisor create a special URL that they can include in every email signature or even print right on their business card that takes the potential customer right to where they need to go. You get the idea.

Nice Ideas, But…

In fairness Company V thought some of my ideas were good. They just weren't good enough to actually do. There was no shortage of excuses. We have to keep offline mode, there are more important features to work on, who's going to pay for the development, etc. I still think each of these is potentially a great idea in general and for Company V especially. My next task is to find a place to work that agrees with me.

Share