Archive for the 'Project Management' Category

Job Postmortem

Saturday, June 10th, 2006

The Big Sticky Ball

It's job switching time again. I've had a good time watching another software project that I don't work on struggle to grow, fail, then struggle to survive and reinvent itself. The extremely quick summary is that a system admin with a lot of domain knowledge converted his convenience scripts into a product and a company. More developers were brought on to continue to develop on top of the initial attempt. Like many projects they were very heavy on implementation but very light on planning and process.

At some point, someone realized that they had severe problems with their overall architecture. A product that is really just glorified sysadmin scripts didn't work well in large-scale customer installations, was difficult to install, and was a little buggy and unstable. Doesn't scale, hard to install, and buggy. Hey, just add that no one can tell you how much it costs and I think I just defined Enterprise software! In addition, since everything just kind of "grew," there weren't any real unit tests or, from what I hear, any real encapsulation into semi-independent submodules. That made fixing things a very dangerous proposition.

As I've mentioned previously, the idea of a system to which you cannot easily add new functionality has been called the big sticky ball problem by one of my managers.

A Man, a Plan, But No Canal

The best plan moving forward seemed to be trying to stabilize the existing product and make it easier to build and install (yeah, their build sucked too). While that was being done, another product would be created to address the many issues of the first product. The first product would initially serve as a good starting point for requirements (both what it did and didn't do) as well as feed data to the new product allowing some measure of scalability through additional instances of the collector. Later, maybe the data collection would be improved in the new product and the first product could go away completely.

The problem is this means you won't be pumping out features like your customers, or worse yet your sales people, are used to. If your revenue model is heavily skewed to getting existing customers to upgrade (rather than a healthy effort to acquire new customers) then your sales are going to drop off since there's nothing to which to upgrade. If this happens, upper management may panic and decide that it's not worth taking two steps back so you can take many steps forward. They'd rather just see something, anything, that denotes forward progress and a return to sales. Why sales couldn't adjust their model and temporarily stop sucking on the upgrade revenue teat, I have no idea. I'm not a sales guy.

And, of course, this is what happened. A drop in sales brought executive attention. The new project was killed because in their eyes it too closely resembled another product being developed. The idea of stabilizing things in the original product took a bit of a backseat to pumping out some features. Then at some point, I imagine, the word "legacy" will be associated with that product and it'll slowly go away after all the developers are either transferred elsewhere or laid off.

And What About Me

My project had a ton of other problems. Briefly, it's a pluggable module into another company's framework. We would sell to a subset of the customers of the other company's product. More accurately, the other company's sales people would sell our product for us.

From the beginning the project was plagued with a lack of resources (test equipment primarily). Then, amid some project killing and reorganizing, the person who had the original vision for the partnership voluntarily left the company, the tech lead on the project left, and the development manager left (not quite in that order). The management to which the project was transitioned works primarily on a different project and seems to feel that my project runs itself. Sales and marketing aren't selling the product directly so they don't feel they need to do anything in the way of requirements.

In a final blow, it looks like the new version of the framework does a better job of what our product is currently doing. There is still room for our product to change tack, but with no equipment, no requirements, and now one less developer (leaving them with one), I don't think it will happen.

And What Have We Learned

I don't know that I actually learned anything I didn't already know, but it's always nice to get confirmation on things.

From an architecture / code standpoint, you don't just tack on scalability. In order to do this you need to first concretely define your goal. "Must scale" is not a measurable quantity. Once you can measure it, make sure early on that you can hit it, and that nothing you do moving forward will jeopardize hitting it.

Unit tests don't just show you that your current implementation works. They show you that your refactored solution and all your new stuff didn't break anything that used to work. If you can't figure out how you could possibly test it, you better fucking rethink it.
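To make that concrete, here's a minimal regression-test sketch in Python. The `normalize_hostname` function is hypothetical, standing in for whatever piece you just refactored; the assertions pin down today's behavior so a refactor can't silently break it:

```python
# Hypothetical function standing in for code under refactoring.
def normalize_hostname(name: str) -> str:
    """Trim whitespace, lowercase, and drop any trailing dot."""
    return name.strip().lower().rstrip(".")

def test_existing_behavior_still_holds():
    # If a refactor changes any of this, the suite fails before a
    # customer ever sees the regression.
    assert normalize_hostname("  Server01.Example.COM. ") == "server01.example.com"
    assert normalize_hostname("db") == "db"
```

Run it with any test runner (pytest will pick up the `test_` function automatically); the point is that the old behavior is written down somewhere executable.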

If you ask someone to be the "build person" and they groan, your build sucks. Fix it and make it a one-click build. You get faster turnaround on builds to QA (you do have QA, right?) and fewer human errors in the process (oh shit, I forgot to manually add the doc directory to the top level, so this build is bad).
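As a sketch of what "one click" means: a single script that runs every build step in order and stops at the first failure, so nobody has to remember the doc directory by hand. The step commands here are placeholders, not a real project's build:

```python
import subprocess
import sys

# Placeholder build steps -- in a real project these would be your
# compiler, test runner, doc packaging, installer build, etc.
STEPS = [
    ["echo", "compiling"],
    ["echo", "running tests"],
    ["echo", "packaging docs"],  # the step people always forget
]

def build(steps=STEPS):
    """Run each step in order; stop and report failure at the first error."""
    for cmd in steps:
        result = subprocess.run(cmd)
        if result.returncode != 0:
            return False
    return True

if __name__ == "__main__":
    sys.exit(0 if build() else 1)
```

One command, same steps every time, no tribal knowledge required.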

Domain experts aren't necessarily good developers, and your developers shouldn't have to be domain experts. Domain experts may not be good customer advocates, either. There is a difference between knowing all the ins and outs of a domain and knowing what someone who works in it day to day (below the theory level) will want to accomplish. Your customers can tell you what they want to do but will have a limited idea of how they want to do it. For example, the customer wants to be able to mass delete, but they may not have a good idea of how to select what it is they want to delete.

Everyone needs to have skin in the game. If your own sales people don't sell your product, why would they ever contribute requirements (probably bad requirements, but requirements nonetheless)? If management feels their other project is more important, how can you expect them to devote the necessary time to get the developers the things they need? If developers never listen to how hard they've made things for the customer why would they ever change how they do things?

And finally, if a CEO ever tells you that if you don't like the way things are there are plenty of other places for you to get a job, you should probably listen. They're absolutely right.


Semi-Random User Interface Thoughts

Thursday, May 25th, 2006

More Tales of the Expert User

Another thing I noticed during the expert user presentation was how he used the information in the application. He leaned heavily toward using the charts and graphs in the application rather than the tables. From his viewpoint, the tables were useless. He didn't care about 99% of the individual incidents in the tables. He just wanted to see the overall trends. Multiple pieces of information were presented as different components of a single graph, with a vertical line cursor that allowed him to compare the timing of different events from different mini-graphs. When he found something out of the ordinary, he could click and drag to view a table version of a point in time. This is vastly superior to what I normally see: disconnected table and chart views, with the tabular format being the kitchen-sink view of things.

The point is that I normally witness user interface design happening in a complete vacuum. UI features are thrown in just like product features, just put everything in there because some of this is going to be useful to someone. Hopefully. Maybe. Now I haven't been here long enough to know how that UI was conceived, but this is certainly another example of the importance of mining your users for information. It's not only important to know what they need, but how they intend to use it.

Hey, Look at Me! I Read a Book

A case in point: I'm reading Things That Make Us Smart, and the author uses several really good examples of the representation of information affecting the ease with which certain tasks can be performed. One example from the book is the case of Roman numerals, ticks, and Arabic numerals.

Supposedly it is easier to perform addition with Roman numerals than with the other two systems. Read the Wikipedia explanation to see it in action. The system is apparently easier to learn, too. Fortunately, I'm too heavily indoctrinated into our current system to tell if it really is. A good thing, since multiplication and division would obviously make my wiener hurt.

Ticks are superior for counting or tallying things. This is because their representation of larger numbers is additive. You just add another tick. With Arabic numerals you have to completely change the symbol you're using. This means erase and redraw. Not very efficient for the use case of counting.

And Arabic pretty much kicks the ass of the other two in everything else. But the point is that each of these representations is superior to the others depending on what you are using them for (although I will never use Roman numerals, even if I'm only doing addition).

Give Them What They Didn't Even Know They Wanted

Apparently the king of information and interface design is Edward Tufte. I've only seen a few examples of his stuff, but it is very sweet. On top of this there are courses available to learn to improve your presentation of information. Everyone I've talked to that has attended raves about their sheer brilliance. I've decided I'm attending the very next chance I get. It's a step toward the final piece of the puzzle which is knowing how to not just get the information the user wants but to know how to present it to them in a manner that greatly facilitates their understanding and usage of it.

Now that's something that should impress customers and make you stand out from the competition…


Guest Speaker

Monday, May 15th, 2006

I finally had something useful happen at work. We had a consultant that uses one of our products during the course of his job come in and give a quick presentation on how he uses the tool. Unfortunately, it's not the product I work on, but it was still interesting for a number of other reasons. As quick background, the tool demo'ed is a storage monitoring tool.

Three Users of Enterprise Software

Nothing new here, but Enterprise software users typically fall into three categories: experts in a domain using the software to make their tasks easier, users largely ignorant of the domain that want an expert in a box to tell them what to do, and executives that want some charts to determine if things are running smoothly: the infamous "dashboard."

The person we had in was definitely an expert in the domain. Listening to how he uses the software to determine if a customer's installation is running smoothly was very educational. Often, features seem to get added to a product by people that don't really understand how those features are going to be used. It's put in just in case someone might find it useful or, worse yet, a customer demanded it without explaining why they wanted it.

This part interested me because, through his demo, he identified a ton of information that the "expert in a box" customer would drool over. Apparently the way he used different pieces of information wasn't just news to me, it was news to the current development team as well. They furiously scribbled while he showed how he used combinations of charts to find configuration errors, determine the need for new equipment, and a number of other things. He also suggested some features that would make his life easier and explained why he didn't use certain parts of the system at all.

Hired Guns

I, like most people, really value the input of a customer or, if that's not available, a customer advocate. The problem is that most product owners in a typical company don't seem to really understand what a customer needs. They pay more attention to what the competition is doing or at most what the squeakier wheels in their customer base say they want.

As early as possible you should post a position on one of the online job sites looking for an expert in whatever problem domain you're working in. Throw them a few thousand dollars or so to just come in and show you what it is they do. Do all the standard information mining stuff on the poor bastard. Make sure the developers are involved. Try to build a tool that is useful to him and at the same time captures some of his knowledge for the EIAB (expert in a box) user. After you build a few iterations of the product, ask another expert in to evaluate it. Repeat, get the EIAB user to check it out, etc etc. Did I mention that you should make sure the developers are involved?

Ship your product, sell your product, host a message board for customers of your product, make the developers participate on the board. Keep that interaction with the customer going. At some point, throw a few extra grand at an expert that uses your product and record a demo session to include in your product. Something to orient the expert customer and maybe teach the EIAB user to be more effective. It's not like anyone reads the manual. That training video can double as sales literature, too.

No Shit

Yeah, it's pretty obvious. A veritable no-brainer. I'm ashamed for even suggesting such an obvious idea. Now why, if it's so obvious, is it that I've never seen this done at any organization at which I've worked?


That Dirty, Dirty Software

Wednesday, March 1st, 2006

We Can Rebuild It

The dream of most of the software developers I know is to get the opportunity to rewrite a system they work on from scratch. I think as things move more toward test driven development (TDD) this desire may fade a bit, but never entirely disappear. TDD should allow developers to aggressively refactor an existing product rather than starting from scratch. There will be less fear of hopelessly breaking the system if you've got solid tests and good test coverage. Of course, even getting the opportunity to refactor may be rare in many cases. The product owners don't always care how dirty the implementation is if there are no visible effects in the final product (at least not right now anyway). The customer won't buy the product because it has pretty source code behind it. A damn shame.

The incessant desire of software developers to rewrite all code they encounter is perfectly natural in my opinion. It's the plight of the knowledge worker. Each day you collect new information and techniques. When you look at someone else's code (maybe even your own from 6 months ago) you can't help but feel that there is a better way of doing it. If you have to write new code within the constraints laid down by this less than ideal solution, you're eventually going to come to the conclusion that you should just rewrite the whole thing. You just don't like having to work with / around whatever it is you've been given. Maintaining someone else's software is like putting someone else's sweaty clothes on. That opportunity to refactor is crucial.

From Hacks to Good Software

I was talking with another of the developers on my team a few days ago about how we should fix a problem we're having. We're working under a deadline and may not have time to implement the more ideal solution. I proposed a couple of different stopgap solutions that would get us to our release date. Most of the ideas were met with the infamous developer credo, "but that's a hack." I definitely understand the sentiment, but unfortunately we have to ship some software. And in my experience, most good solutions started as a hack. If you properly encapsulate everything you can go back and improve that hack into a decent solution with minimal impact. But again, willingness to accept the less than ideal solution is much easier if you know you will be able / allowed to refactor it later.

Now, does the world work this way? Hell no. So far, for me anyway, TDD is hard. I seem to always find myself smack dab in the middle of a container or a framework that makes effective testing extremely difficult to do. That inevitably gets coupled with a product development cycle that allows no time to retro-fit tests and test harnesses into the existing product. On top of that, the product managers often don't see the value of refactoring unless they get a ton of new features with it.

Usually in a simple situation you just start doing TDD for all features moving forward and write a test for every bug you find. The idea is that eventually you'll get there. For cases where you can't do the initial work in small chunks I guess you're just screwed. Eventually you'll have to bite the bullet and do whatever big tasks are necessary to start you down the whole TDD road. If you can't do that, it's probably time to move on and try to make sure that the next place doesn't suffer from the same problems.

Any Questions?

In the back of my mind I keep filing away little tidbits like this for later use so that I can come up with a decent set of questions to ask any potential employers. In a typical interview I forget the importance of grilling the employer. I need a set of questions like the Joel Test. I think the first one I need to add is: Can I see a copy of your code coverage reports, preferably over the last 3 months?

Update

Matt Kinman rocks!


Feeling Co-location

Monday, February 13th, 2006

I'm working on a very small team at the moment and everyone I need to talk to is, at most, several cubicles away (don't get me started on how much cubicles suck, by the way). Today I started working on something completely new to me. Luckily, one of my nearby co-workers is intimately familiar with it and he sits 8 steps away. When I got started I asked him everything I thought I would need and went back to my desk. Several minutes later expectations weren't meeting reality and I had more questions. He answered those and asked some of his own about something on which he's working. About twenty minutes later he had another question about something in Windows and I had another question about my stuff.

I've worked on teams with members in different cities that were 1) in the same time zone, 2) in time zones a couple of hours earlier / later, and 3) with people half a world away. While number 3 sucked the most (over a 12 hour turnaround time on email, no hope of phone conversations) the inability to fire off a quick question or two without picking up the phone, the complete lack of feedback from facial expressions and body language, and the lack of being able to overhear a conversation related to what you're doing have all been irritating and wasteful.

I think a lot of higher ups are too concerned with using the resources they already have in different cities or with saving tons of money by offshoring a part of the project. From down here at ground level I would say that it is a mistake. The best and most efficient working situations I've been in involved everyone being less than 30 seconds of walking distance away from each other. Less time is wasted on trying to exhaustively and preemptively document / explain everything. You don't have to wait for return phone calls. It's harder to blow people off or hand off crappy work when you have to see them in the halls (not that I would ever do such things).

Although there are solutions to alleviate problems when it is just impossible to have a co-located team (video conferencing, teleconferencing, instant messaging, VNC / remote desktop, etc) I think most of the people that have to do the actual work would agree there is no real substitute. As such, I think I need to work this into my employer interview questions.


Diversification

Wednesday, February 1st, 2006

It's (Third) Party Time

A lot of computer products, particularly in the monitoring / administration domain, are centered around some third party piece of hardware or software. Usually a company finds that they can greatly enhance and simplify the admin and reporting on another company's flagship product. Presumably the original company is either unable or unwilling to make it easy to manage their stuff because they're concentrating on their core problem domain.

As an example, if you build backup servers, you're working on getting that backup stuff working smoothly, not necessarily on making it easy to manage or get information out of it. Sure, you include some out of the box support for doing some of the stuff, but ultimately it's an immature solution that feels like it was tacked on at the last second. It's particularly bad with hardware vendors. They usually can't design decent user software or APIs.

This creates a great opportunity for software companies to swoop in and carve out a spot in (or create a market for) the area of third party management software. If you can write a piece of software that makes the administration of Foo servers simpler, particularly in a large enterprise environment where the out of the box tools break down, then you'll be sitting pretty for a while.

You Mean, We Can Make Money Off This?

One of the problems occurs when the original vendor suddenly realizes that their software sucks. If they ever get off their ass and write some decent management stuff around their product and bundle it for free, then customers have much less incentive to go looking for your offerings in the first place. If other companies begin offering management solutions around the same product, you will see even more of a decline in sales.

The temptation seems to be to get into a feature war with everyone else. Once you've exhausted all of the useful features you start to include numerous niche features and one-off customer requests. Dump these onto a product sheet in no particular order and you look just as good, if not better than all of your competitors. A lot of these features aren't likely to matter very much to the people that use the software, so you need to get some marketing materials, product advocates, consultants, etc to push your product to the people that make the decisions (and don't actually use the product). For more on this you can see the ClearCase situation in another post.

Now you've lost your way. You started out simplifying the administration of another company's product and have created what is, more than likely, an overly complicated and hard to use product of your own.

Wider Is Better

Rather than freak out and get all feature crazy when the competition starts, it's better to diversify what it is your product simplifies. For example, if you've built a database tool for a particular backend, why not add support for another database backend before exhausting your laundry list of niche features? Of course this works better if you're simplifying or abstracting multiple offerings from vendors in the same problem domain that are likely to be found uneasily co-existing in large organizations.

By that I mean you want to do something like take three operating systems and present a common management interface for a problem domain (like security or backup) that sensibly abstracts away the subtle differences. This allows an admin to more easily administer the hodge-podge of systems he's found himself in charge of. And, since the same person is likely to have to deal with all three, your tool is quite handy. It's much less handy if three different people each deal with only one of the systems; then the fact that your tool acts as an abstraction for all three isn't particularly useful, especially since each of them will have their own preferred tool, very deep in functionality. No, I think the better target is to go wide and relatively shallow. It also better insulates you from the competition, whether from the original vendors (who usually don't care about working with products from other vendors) or from other third party companies (who may not have their act fully together).

And Your Point Is?

What's my point? Up to now, what I'm talking about is fairly obvious to most people. The issue comes in when you consider how your original, one vendor product came to be. I've seen a lot of software teams and companies paint themselves into corners by making the assumption that their product only works with one vendor product or by tightly coupling domain / business logic to the first product they support. When they finally get around to adding support to more products they often find out their "abstraction" doesn't play well with others. By then it's too late. It's all well and good to only build exactly what you need at the time, but you need to think about where you're going. This is something I haven't fully figured out with agile development. This "architectural escape velocity" doesn't seem to come easily in the bite size chunks that agile advocates. I'm not saying it can't happen, it's just very easy to get it very wrong.
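As a sketch of the kind of abstraction layer I mean, here's a hypothetical Python example. The vendor names and data shapes are invented; the point is that each backend adapts its own vendor-specific representation into one common shape, so the rest of the product never touches vendor details:

```python
from abc import ABC, abstractmethod

class BackupBackend(ABC):
    """Unified view of one vendor's backup product (hypothetical)."""

    @abstractmethod
    def list_jobs(self) -> list:
        """Return jobs in a common shape: {'name': str, 'ok': bool}."""

class AcmeBackend(BackupBackend):
    def list_jobs(self):
        # "Acme" reports status as an integer code; adapt it here.
        raw = [{"job": "nightly", "status": 0}]
        return [{"name": r["job"], "ok": r["status"] == 0} for r in raw]

class GlobexBackend(BackupBackend):
    def list_jobs(self):
        # "Globex" uses strings; same adaptation, different source shape.
        raw = [{"id": "weekly", "state": "SUCCESS"}]
        return [{"name": r["id"], "ok": r["state"] == "SUCCESS"} for r in raw]

def failed_jobs(backends) -> list:
    # Core product logic only ever sees the common shape.
    return [j["name"] for b in backends for j in b.list_jobs() if not j["ok"]]
```

Adding a third vendor is then a new adapter class, not a rewrite of the core, which is exactly the corner people paint themselves into when the first vendor's data model leaks everywhere.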

As early as you can, hopefully when you initially start your project, you might want to start asking these questions:

  • What other vendors' products could we manage?
  • How have the vendors represented this domain differently?
  • How can we create a unified view (abstraction) of this domain?
  • If something doesn't fit the abstraction, how can it still fit into our product?
  • How do we keep one-off customer requests from making our core product less maintainable?
  • How can customers get information into and out of our system?

That's just a start. I'm sure there are more (and probably better) questions to keep asking yourself. Answer them early and often (like you release) and let the answers affect your planning. Don't be tempted to just say, "we'll worry about that later," because by then it'll cost too much to change.


Words to Live By

Sunday, January 29th, 2006

Executing

Guy Kawasaki has a great post on the art of execution. There are many good points in it and, besides being a guide to organizations just starting out, it would serve as a great refresher to organizations that are losing their way. The problem can often be convincing companies that they're doing something wrong in the first place. Often, a company experiencing some level of success is very reluctant to admit that they're screwing something up ("We're making money, so we must be doing things right"). I'm very fond of the idea that one of the greatest impediments to progress is your current level of limited success. Executives also always seem to get unnecessarily defensive about too many suggestions of how to improve the status quo. I digress.

Although every point is relevant and valuable, from my experience the most powerful points (and the ones most organizations fail at) in the article are:

  • set achievable goals
  • communicate the goals
  • measure progress on a weekly basis
  • establish a single point of responsibility
  • reward the achievers
  • heed your “Morpheus”

Achievable Goals

Nothing is more demoralizing than having a set of impossible goals. Managers sometimes mistakenly believe that having these difficult goals will motivate people. It doesn't. Usually, after the initial planning / kickoff meeting, the people responsible for attaining those goals go back to their offices and immediately start telling each other how ridiculous the goals are. Then, throughout the rest of the release, they make only passing sarcastic remarks about the preposterous goals and accept the reality that they won't even come close to the goals set for them. They then work the normal amount (maybe even slightly less) and complain to each other in passing about their current work situation. Management is labelled as a bunch of out of touch idiots, and this serves as another reason for people to get another job.

Communicate the Goals

Management usually does a mostly adequate job of communicating goals. I mention this item here mainly as an opportunity to increase the transparency within an organization. Management should start a blog laying out the goals and post regular updates as to the progress toward those goals.

Measure Progress on a Weekly Basis

Despite being told that it's the ultra cool generation Y that likes getting regular reviews of their performance, I think everyone in the organization (regardless of their generation moniker) needs a regular update on progress. It's the only way to make the necessary course corrections along the way. It's also very agile, which is greatly in vogue at the moment (and rightfully so).

Establish a Single Point of Responsibility

This is probably one of the greatest failings of organizations of which I have been a part. I think it's essential to have a point person for any goal. Unfortunately, no one wants their ass in the fire when things go wrong. If that's the case, you probably have a company that engages in way too much CYA and blamestorming during a project. Accountability can be a good thing, but when people are held accountable for not meeting impossible goals with inadequate support / tools, then no one will want to be saddled with any kind of responsibility. And, if no one is willing to step up, nothing is going to get done.

Reward the Achievers

This mostly speaks for itself. Sometimes, though, rewards go to the wrong people or they don't trickle down from project managers to the people that really made things happen. I've also seen organizations that are scared to reward people because other employees will get jealous of those rewarded. If your reward policy is clearly defined and fairly applied, those people just need to get over it. If it's not, then of course you need to fix the system rather than "not reward" everyone equally.

Heed Your “Morpheus”

This is a reference to The Matrix and the fact that the character Morpheus gives Neo the pill that exposes the world for what it really is. These people are the ones in an organization that aren't "drinking the Kool-Aid." The risk is that they're simply labelled as bitchers and complainers, then ignored. As these people often have valid points, see also Cassandra. The role of the "monkey wrencher" is severely undervalued in many companies. Managers and fellow employees don't want to hear how things aren't going to work, especially if they're all powerless to fix things. You need to find a way to make this a constructive role and protect the person that's pointing out that the emperor has no clothes. Ignoring risks is not the same as managing them; when they're brought to the surface, make sure you heed the warning and don't shoot the messenger.

Do These Things

Do these things well and you increase your chances of success. You're also likely to be light years ahead of your competition, since most places don't apply these points very well, if at all.


What a Tool

Thursday, January 26th, 2006

Humble Beginnings

Around ten years ago I was working for a pretty backwards IT department. We wrote and maintained an internally deployed call center application in Foxpro. We didn't use any source control. The shared libraries for the multiple, customer specific versions of the application were on a shared network drive. When you ran the build script, a new version of the application was instantly put on another shared drive and almost as quickly wound up on a phone agent's desktop.

Over a weekend, one of the lead developers decided that we should have some sort of source control. Rather than use any of the many mature (and free) versions of source control, he decided to write his own–in Foxpro of course. His version of source control simply made everything on the shared library drive read only. You then ran his program (named "vc") on some component and it would log that you had it checked out. When you checked it in, it made the shared version writable, copied your version over it, and logged that you checked it in. That's it. There was no tracking of revisions, no versioning, not much of anything really.

When the department finally decided that this solution wasn't good enough (a couple of years later) several people recommended using CVS. However, now that they were intent on improving their process, the management felt that CVS just wasn't good enough for our unique needs. Someone from outside the company sold them on using ClearCase at several thousand dollars per developer. Of course, we didn't use or need any of the supposed features of the product. In my eyes, ClearCase was/is no better than CVS. From the management point of view it was better because we theoretically got support, it better fit our "complicated" development process, and someone outside the company told them it was better.

New Company, Same Issues

A couple of jobs and many years later I found myself in a much bigger, more mature IT department. The new problem wasn't source control, it was the process. We were using a painful waterfall development process. The management over the IT organization decided we should switch to a more agile process.

Most of the developers I worked with viewed this as a very positive change. Up to this point, all project management occurred in Microsoft Project and Word files. Now that we were changing things up, we could probably use a different set of planning tools. One of my co-workers (Matt Ray) had used XPlanner at a previous job and felt it was "good enough." Somewhere in the decision-making process it was decided we'd have Rally come in and teach us how to be more agile. We were also going to use their project management tool to organize our tasks. Being well outside the decision-making process on this, I'm not exactly sure how much money all of this cost us, but I suspect it was quite a bit. Was Rally's mentoring valuable? I think it was, but its primary value lay in the fact that it's always easier to listen to someone from outside your own company than to your own people. Is Rally's tool better than XPlanner? I think it is a better overall tool, but it doesn't really do anything significantly better. XPlanner really is good enough for what we needed, in my opinion.

The Tools Aren't the Thing

I'm now at another job in a relatively immature IT department. We've started using an agile development process and are using XPlanner to plan our stories and tasks for each iteration. A couple of days ago, during iteration planning, one of the developers was wondering if Rally's product had a feature they felt was missing from XPlanner. In this particular case, it didn't, but it made me think about people's love of tools.

It never seems to fail–an IT organization decides that they're doing something wrong and the solution is to pay someone from outside the company to tell them what expensive product will fix it. Usually these outsiders tell you what the more competent people in your organization already know. However, it gives the upper management great peace of mind to have someone that makes a living telling people how to do things give them the answer to their problems. I can almost understand this, but it still baffles me that, if you believe you've hired good people, you aren't listening to them. The much stranger thing is that so many people are willing to go from using a bad solution to shelling out big money for a tool with a set of features and support options they'll never use. They're convinced that one of the major problems with their process isn't the process itself–it's the tools. The same thing happens with people on a personal level. I could get buff if I had a gym membership, and I could get organized if I had a PDA. This is the easy "fix." Just buy a silver bullet to make the problem go away. People and organizations need to learn to address the root of the problem first, usually with free tools that are "good enough" (if not outright better than their commercial counterparts). Once you do that, you're free to buy some tools to eke out a few extra points of efficiency if you still feel the need. But most of what you need to do is purely behavioral and largely free to start solving–at least in terms of money.

Update

I just saw another tool named ScrumWorks that apparently recently became free to registered users. If XPlanner doesn't do it for you, you might give this tool a whirl. I personally haven't looked at it very much.


Stumble Stumble

Monday, December 19th, 2005

When I switched jobs recently, one of the reasons I cited was our frustratingly slow and/or incorrect adoption of an Agile methodology. I always have problems organizing my thoughts as to what we were doing wrong, but here's a good post that summarizes some common Agile stumbling blocks. I think we were having problems with variations on 2 through 5 primarily:

No Executive Sponsorship

Well, we sort of did. It was more of a lip service type of thing. Other developers' experiences certainly varied from mine, but my personal experience was, "if you see anything wrong please point it out so we can promptly ignore it." I think this is a consequence of the fact that it's a company that grew through acquisition (so there are a lot of turf related politics keeping things from getting done) and that it's such a large company. I was simultaneously amazed and disappointed at the rate of progress, if that makes any sense.

Offshore/Outsourced Developers

Well, we had some offshore QA team members in my case, not developers. But we did have developers in two cities. They were in the same timezone though, so I guess that's something. I feel that having everyone in the same office would have been a huge help. The notion of eliminating that wasteful communication and coordination layer by being able to meet face to face is very appealing. Of course, when you're a big company, you use all the resources you already have regardless of whether or not they're the best location/configuration for the job. Kind of like using WWII surplus supplies during the Korean "conflict." You see, watching all of those MASH episodes finally paid off.

Lingering Waterfall Elements

I think this is one of the biggest things we suffered from. Speaking only for the team I was on, we went into our Agile adoption with a strict set of deliverables, a hard deadline, and very little flexibility with regard to either. We also did not have shippable software at the end of each iteration, and we knew it. There also was no time in the schedule for going back to revisit/improve "completed" features after feedback from the product owner (who was never there).

No Customer

I would say this was a pretty big one in a lot of people's eyes. Our product manager / customer advocate was omni-absent and never really understood the product very well.

All's Well That Ends Well?

Now, the funny thing is, at my new company, I work as a third party developer using my previous company's software. I've told everyone I used to work with the same thing: when you work with this stuff away from the daily development issues, it's really pretty impressive. Especially so when you know the timeframe and numerous problems that were faced during its development.

That's where too much success can be a dangerous thing. I've been at several companies that had the attitude, "We're making money, so we must be doing something right." It's hard to quantify lost opportunity in the presence of existing success. However, I believe that if we had addressed the issues listed above, everything could have been better: software quality, employee happiness, turnover rate… We'll never know.


New Year's Resolutions

Friday, December 16th, 2005

While at lunch the other day, the subject of new year's resolutions came up briefly. I always have some grand idea of what I want to accomplish in the coming year, which ultimately falls well short of the reality. I think the problem is that I use a waterfall methodology when "planning" my coming year. I need to agile it up a bit. So, this year I think I'll list some big items and then try to set up some weekly or monthly milestones along the way, reviewing and adjusting as I go. One of the items I know will make the list is to increase the frequency with which I update my blogs, hopefully without lowering my already treacherously low quality of content. We'll see how it goes.
