Programmers in 2009...
Thursday, 11 June 2009
In case you don't know, by training and trade I am a programmer.  I've been doing it for more than 20 years, and I think I'm pretty good at it.  I am not saying that I write perfect code - if experience has taught me anything, it is that most code, mine included, is far from perfect.  But I do pride myself on doing two things:  sticking to the task at hand and testing what I create.

I've recently had the opportunity to look at a third-party piece of Christmas software, i.e., one that is not written by a lighting hardware manufacturer.  One small part of this software performs a task similar to something I was working on at the time:  converting a light show created in one software package to work in another.  This conversion is no simple task, since the hardware that runs each package is so different.  Unfortunately, this third-party program fails miserably at the conversion.  I don't think the problem stems from the ability of the programmer.  Instead, it stems from bloat.

In recent years, everything associated with computers (processing power, memory, storage) has become cheap.  Newly minted programmers, or more likely the owners or marketing departments of software houses, tend to overstuff their software with features that have limited appeal.  These 'Gee Whiz!' features make for great headlines, but they do little to help with the core function of the program.  I am not saying we need to go back to command-line programs that run in under 4K, but when is enough too much?

A car is a useful tool.  It transports you and some cargo from point A to point B on land.  Sure, it will have some nice additional features which are not directly related to driving:  a radio, cup holders, cruise control.  But all of these things are ancillary to the MAIN function of the car:  Get in.  Go to B.

Now, imagine if that car were also forced to perform other functions.  Let's even say that those functions have to be transportation-related:  Move on water.  Move UNDER water.  Fly in the sky.  Go to the moon.  Cut your lawn.  Can it be built?  Sure, but would it be practical?  So why should a single software program try to be all things to all people?  Shouldn't it do one, or at most a few CLOSELY RELATED, things well?

The second problem with bloat is its impact on the most important part of programming:  the testing.  Under the ever-increasing pressure to add more and more into a program, testing falls by the wayside.  Each time new code is added to a release, the chances of breaking something increase as well.  The growth in complexity, and thus in the amount of testing required, is not linear; it is exponential.  As the amount of time spent shoehorning in additional marginal functionality increases, the time for testing is reduced - at precisely the moment it needs to be expanding.
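To put rough numbers on that claim: if each of n independent features can be switched on or off, exhaustive testing would have to cover 2^n combinations, so every feature added doubles the space to verify.  A minimal sketch (the feature counts are made up for illustration):

```python
def configurations(feature_count: int) -> int:
    """Number of on/off combinations for a set of independent features."""
    return 2 ** feature_count

# Each added feature doubles the combinations a thorough test plan faces.
for n in (5, 10, 20):
    print(f"{n} features -> {configurations(n)} combinations")
```

Real test plans don't test every combination, of course, but the underlying growth is why "just one more feature" quietly demands far more than one more feature's worth of testing.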

This cycle continues until NO testing is being performed.  Instead, companies rely on 'user-testers'.  A limited release given to users for testing used to be called a 'beta test'.  Beta testing, as its name clearly points out, should not be the initial, primary testing of a system (beta is the SECOND letter of the Greek alphabet).

A properly coded system should go through MANY layers of testing before being released to a user:  unit testing (by the programmer), system testing (by a tester who was NOT the programmer of the application), regression testing (if it's an update to an existing system), and user acceptance testing/alpha testing.  Only after the package has passed all of these stages should a system be released as a 'beta'.
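Here is what even the very first of those layers might look like for a conversion routine like the one described earlier.  The function and its channel-offset rule are invented purely for illustration; the point is that the programmer writes checks like these before anyone outside the team ever sees the software:

```python
import unittest

def convert_channel(source_channel: int, offset: int = 16) -> int:
    """Map a channel number from one controller's numbering scheme
    to another's. (Hypothetical rule: shift by a fixed offset.)"""
    if source_channel < 0:
        raise ValueError("channel numbers cannot be negative")
    return source_channel + offset

class TestConvertChannel(unittest.TestCase):
    def test_basic_mapping(self):
        # Known inputs should map to known outputs.
        self.assertEqual(convert_channel(0), 16)
        self.assertEqual(convert_channel(7), 23)

    def test_rejects_negative(self):
        # Garbage in should fail loudly, not convert silently.
        with self.assertRaises(ValueError):
            convert_channel(-1)
```

Saved as, say, test_convert.py, this runs with `python -m unittest test_convert`.  The later layers (system, regression, acceptance) build on top of this foundation; they don't replace it.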

Companies that are not doing any testing will release a new version of their software and call it a 'release candidate'.  They will then promptly ask for feedback.  Here is the problem:  users expect a product to do at least roughly what it is advertised to do.  With no (or severely limited) testing, at best the product has major flaws or simply doesn't work, and at worst it crashes or produces completely erroneous output.  Many users will simply give up.  Those who don't will post their findings - just as requested.

All of this public scrutiny eventually wears on the company.  They begin to feel attacked - that the users are not there to help them, but in fact are cutting them down because they don't want to see the company succeed.  Newsflash:  If you put out crap, then ask people to identify the crap, don't be surprised if what they find is CRAP.

Now, you may be thinking that because I am a programmer, I know what to look for and am really digging to find errors.  But let me ask you this:  if you start up a program and something that's supposed to be green is displayed in red (and white is displayed as purple, and so on), can you tell me with a straight face that the company did ANY testing at all?  If a bug this easy to spot made it into a 'release candidate', should you have any confidence in the more complex aspects of the system?