Tuesday, December 02, 2008

Why Automated Testing is Broken

Having done automated software testing for almost 10 years now, I am stuck somewhere between intermediate and guru on the subject. As such, I take the liberty of voicing what I think, based on experience that has been wide and varied thanks to my consulting career.

There exists a chasm between the sales pitch for most automated testing tools and the reality they deliver. The chasm is wide mostly because the pressure to sell software is intense, and being brutally honest in the sales cycle is not conducive to closing deals. The vagaries of software sales are a subject better left to those more qualified than I, but suffice it to say that users are not prepared for what awaits them once the automation effort begins.

It all begins with the training class. The attendees have all made it to the QA department from various other pursuits in the business world, from customer service to accounting. Some come with technical skills that might include a single programming class; most have no scripting or coding experience at all. They feel quite comfortable for the first two days of training because it's all point and click, record and replay, and relatively easy subjects to digest.

Then something terrible happens - an eager student strays away from the "canned" training application and into the murky world of the "real" application they work with every day. They quickly discover that the script might record, but replay is sketchy if it works at all. The inevitable question arises for all the room to hear, and from there the situation gets tense. Does the instructor dodge the issue, hoping it fades into oblivion, or take aim and fire that terribly honest answer: "Recording is only the starting point..."? If the honest route is taken, the discussion gets heated, questions arise as to why this tool was purchased, and in some cases that will be the last time those users ever touch the automation tool.

This situation is all too often what I have experienced. The QA team expected the automation tool to be easy: plug and play, no programming necessary. The reality is that programming is exactly what is required to make an automated test work for more than a few iterations beyond the day it was created. This is true for one simple reason: the applications we are paid to test are complex. Complexity in design, in execution, and, most importantly, in the problems they were created to solve in the first place.
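To make that concrete, here is a minimal sketch of what "programming the test" means in practice. The tool and application details are my own assumptions for illustration (Selenium WebDriver's Python bindings, a made-up login page with made-up element IDs and test data); the point is that explicit waits, parameterized data, and a real assertion are what keep a script alive past day one, and none of that comes out of a recorder.

```python
# Sketch only: Selenium WebDriver (Python bindings) used purely as an
# illustration. The URL, element IDs, and test data below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


def login_and_check_balance(base_url, username, password, expected_balance):
    """Programmed test: waits for elements instead of relying on fixed
    timing, takes its data as parameters, and asserts on the outcome."""
    driver = webdriver.Chrome()
    try:
        driver.get(base_url + "/login")

        # Explicit wait: survives slow page loads that break raw recordings.
        wait = WebDriverWait(driver, timeout=15)
        wait.until(EC.presence_of_element_located((By.ID, "username")))

        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "login-button").click()

        # Wait for the post-login page rather than sleeping a fixed time.
        balance = wait.until(
            EC.visibility_of_element_located((By.ID, "account-balance"))
        ).text

        # A recording replays clicks; a test verifies a result.
        assert balance == expected_balance, (
            f"expected {expected_balance}, got {balance}"
        )
    finally:
        driver.quit()


if __name__ == "__main__":
    login_and_check_balance("https://example.test", "qa_user", "secret", "$1,250.00")
```

Nothing exotic there, but every line of it is programming, not recording.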

If we simply summed a few numbers each day, then a $5 desk calculator would be all the technology necessary to perform the task. Most modern companies have very complex problems to deal with, and therefore need software that performs those tasks reliably and quickly. That complexity shows in how much time it takes to bring a new team member up to speed on how the system works and how they are expected to test it. It follows that the instructions we must give an automated script are correspondingly complex and require good data in order to achieve their task.

In order to succeed with automated testing, the personnel who create the scripts must have the same skill set as the developers of the software. Their tasks are nearly identical to those of the programmers on any software development team. This means that automated tests, and the process of creating them, must be treated and managed as yet another software development life cycle (SDLC). Choose your method of development, whether waterfall, XP, RAD, or Agile (which is preferred here), and formalize it. The automation team should participate in design, code reviews, unit testing, and ultimately a release process.
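As a sketch of what that looks like, the hypothetical example below keeps the test data separate from the test logic so the logic itself can be designed, reviewed, and unit tested like any other module. Every name in it is made up for illustration; the in-file submit_order stub stands in for whatever driver layer would actually exercise the application under test.

```python
# Sketch only: a data-driven regression test organized like production code,
# so it can go through design review, code review, and a release process
# like any other module. All names below are hypothetical.
import unittest
from collections import namedtuple

OrderResult = namedtuple("OrderResult", ["total"])


def submit_order(quantity, unit_price):
    """Stand-in for a driver layer that would exercise the real application
    (UI, API, or database); included here only so the sketch runs on its own."""
    return OrderResult(total=round(quantity * unit_price, 2))


# Test data lives apart from test logic; adding a scenario is a data edit,
# not a new recording.
ORDER_SCENARIOS = [
    # (quantity, unit_price, expected_total)
    (1, 10.00, 10.00),
    (3, 9.99, 29.97),
    (0, 5.00, 0.00),
]


class OrderEntryRegression(unittest.TestCase):
    """Each scenario becomes a checked, reviewable case instead of a
    replayed click stream."""

    def test_order_totals(self):
        for quantity, unit_price, expected_total in ORDER_SCENARIOS:
            with self.subTest(quantity=quantity, unit_price=unit_price):
                result = submit_order(quantity=quantity, unit_price=unit_price)
                self.assertAlmostEqual(result.total, expected_total, places=2)


if __name__ == "__main__":
    unittest.main()
```

Structured this way, a reviewer can critique the test logic and the test data independently, which is exactly the kind of review a development team already knows how to run.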

In the last decade I have seen many automation efforts start up quickly only to be abandoned almost as quickly as they were started. The one common thread among these "failures" is unrealistic expectations of what automated testing will deliver and how quickly. In one case I was on a team that was expected to automate the testing of an enterprise system in about 6 months, yet the development of that same system took over 5 years to accomplish due to its vast complexity. How then could the development of a true regression suite happen in 1/10th the time? The answer is simple: it could not, and it did not. What could have been done is to automate some of the most tedious and repetitive tasks, freeing up the manual testers to do what they do best: find bugs. Over time, momentum builds as the foundation of the test effort becomes more and more automated and hands-off.
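As a small, hedged example of the kind of tedious, repetitive chore worth automating first, the sketch below checks that a list of test environments respond before the manual testers start their day. The environment names and URLs are placeholders I invented for illustration.

```python
# Sketch only: automating a repetitive pre-testing chore (checking that the
# test environments are up) so manual testers can spend their time finding
# bugs. The environment names and URLs are placeholders, not real endpoints.
import sys
from urllib.request import urlopen

ENVIRONMENTS = {
    "integration": "https://integration.example.test/health",
    "staging": "https://staging.example.test/health",
}


def smoke_check(environments, timeout_seconds=10):
    """Return a dict of environment name -> True/False for 'responded with 200'."""
    results = {}
    for name, url in environments.items():
        try:
            with urlopen(url, timeout=timeout_seconds) as response:
                results[name] = response.status == 200
        except OSError:  # covers URLError, connection failures, and timeouts
            results[name] = False
    return results


if __name__ == "__main__":
    results = smoke_check(ENVIRONMENTS)
    for name, ok in results.items():
        print(f"{name}: {'OK' if ok else 'DOWN'}")
    # Non-zero exit so a scheduled job can flag a down environment.
    sys.exit(0 if all(results.values()) else 1)
```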

I will explore other reasons why automated testing is broken, and how to fix it, in future posts. Suffice it to say that the proverbial root of the problem is not in the tools but in the users of the tools. Hiring at least a few skilled programmer types to jump-start the automation effort is mandatory for any lasting success. Failing to do so just creates tests that are fresh on day one but begin to rot over time and eventually become useless, sometimes in just a few months.
