Having done automated software testing for almost 10 years now, I sit somewhere between intermediate and guru on the subject. As such, I take the liberty of voicing what I think, based on experience that has been wide and varied thanks to my consulting career.
There is a chasm between the sales pitch for most automated testing tools and the reality they deliver. The chasm is wide mostly because the pressure to sell software is intense, and brutal honesty during the sales cycle is not conducive to closing deals. The vagaries of software sales are a subject better left to those more qualified than I, but suffice it to say that users are not prepared for what awaits them once the automation effort begins.
It all begins with the training class. The attendees arrived in the QA department from various other pursuits in the business world, from customer service to accounting. A few bring technical skills, perhaps a single programming class, but most have no scripting or coding experience at all. They feel quite comfortable for the first two days of training because it is all point and click, record and replay, and relatively easy material to digest.
Then something terrible happens: an eager student strays away from the "canned" training application and into the murky world of the "real" application they work with every day. They quickly discover that the script might record, but replay is sketchy if it works at all. The inevitable question arises for all the room to hear, and from there the situation gets tense. Does the instructor dodge the issue, hoping it fades into oblivion, or take aim and fire the terribly honest answer, "Recording is only the starting point..."? If the honest route is taken the discussion gets heated, questions arise as to why the tool was purchased, and in some cases that will be the last time the users ever touch the automation tool.
This situation is all too often what I have experienced. The QA team expected the automation tool to be easy: plug and play, no programming necessary. The reality is that programming is exactly what is required to make an automated test work for more than a few iterations beyond the day it was created. This is true for one simple reason: the applications we are paid to test are complex. Complex in design, in execution, and most importantly in the problems they were created to solve in the first place.
If all we did each day was sum a few numbers, a $5 desk calculator would be all the technology necessary to perform the task. Most modern companies have very complex problems to deal with, and therefore need software that performs those tasks reliably and quickly. This complexity shows in how much time it takes to bring a new team member up to speed on how the system works and how they are required to test it. It follows that the instructions we must give to an automated script are similarly complex and require good data in order to achieve their task.
In order to succeed with automated testing, the personnel who create the scripts must have the same skill set as the developers of the software. Their tasks are nearly identical to those of the programmers on any software development team. This means that automated tests, and the process of creating them, must be treated and managed as yet another software development life cycle (SDLC). Choose your method of development, whether waterfall, XP, RAD, or Agile (my preference), and formalize it. The automation team should participate in design, code reviews, unit testing, and ultimately a release process.
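As a rough illustration of what treating test automation as development can look like, here is a minimal sketch in Java with JUnit 4. The SearchUrlBuilder helper and its names are hypothetical and not tied to any particular tool; the point is that shared test logic lives in a small, reviewed class with unit tests of its own, rather than in a recorded script.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical shared helper: test code that is itself designed,
// reviewed, and unit tested like any other production code.
class SearchUrlBuilder {
    private final String baseUrl;

    SearchUrlBuilder(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // Builds the query URL the UI scripts will drive the browser to.
    String buildQuery(String term, int page) {
        if (term == null || term.trim().isEmpty()) {
            throw new IllegalArgumentException("search term is required");
        }
        return baseUrl + "/search?q=" + term.trim() + "&page=" + page;
    }
}

// A unit test for the helper -- the automation team's own design,
// code review, and unit testing step before the helper is used
// inside the real test scripts.
public class SearchUrlBuilderTest {
    @Test
    public void buildsAWellFormedQueryUrl() {
        SearchUrlBuilder builder = new SearchUrlBuilder("http://qa.example.com");
        assertEquals("http://qa.example.com/search?q=widgets&page=2",
                     builder.buildQuery(" widgets ", 2));
    }
}
```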
In the last decade I have seen many automation efforts start up quickly only to be abandoned almost as quickly as they were started. The one common thread among these "failures" is unrealistic expectations about what automated testing will deliver and how quickly. In one case I was on a team that was expected to automate the testing of an enterprise system in about 6 months, yet the development of that same system had taken over 5 years because of its vast complexity. How could a true regression suite be developed in one tenth the time? The answer is simple: it could not and it did not. What could have been done was to automate some of the most tedious and repetitive tasks, freeing the manual testers to do what they do best: find bugs. Over time, momentum builds as the foundation of the test effort becomes more and more automated and hands-off.
I will explore other reasons why automated testing is broken, and how to fix it, in future posts. Suffice it to say that the proverbial root of the problem is not the tools but the users of the tools. Hiring at least a few skilled programmer types to jump-start the automation effort is mandatory for any lasting success. Failing to do so just creates tests that are fresh on day one but begin to rot over time and eventually become useless, sometimes in just a few months.
Tuesday, December 02, 2008
Tuesday, November 18, 2008
No Fluff Just Stuff conference was inspiring
The Rocky Mountain Software Symposium, more aptly named "No Fluff Just Stuff," took place last weekend. This was my first time attending, and I was very impressed with the quality of the speakers and the material covered.
The majority of the topics centered on Java development and what is happening in that world. As a QA professional who does some development I was a little in over my head, but that only made me more inspired by what I saw and heard. Every one of the speakers was knowledgeable, prepared, and worth listening to. I only wish I could have attended the sessions I missed.
I decided to list here some of the juicy nuggets of knowledge I left the conference with.
First on the list was the talk on Test Driven Design (TDD) by Neal Ford (http://memeagora.blogspot.com/). Neal's presentation was very well done and opened my eyes to the way I should have been doing automated test development all along. (Automation in QA is really development in its own right, a subject I will write more about later.) The rhythm of writing a test first, watching it fail, then writing the code that solves the problem and watching the test pass is brilliant. I wish I had thought of it before now, or at least read up on it some time ago. Further into the presentation Neal showed how his code became simpler to read and execute while its methods became more focused and specialized. The process made each method self-documenting, because each was well named and did only one thing. I must read more about this as I integrate it into our process for automated test development. Neal is clearly knowledgeable and an excellent presenter.
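To make that red/green rhythm concrete, here is a tiny sketch of the cycle in Java with JUnit 4. It is my own toy example, not one from Neal's talk: the test class is written first and fails (it will not even compile without the production class), then just enough code is written to make it pass.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Step 1: write the tests first. With no DiscountCalculator yet, this
// does not even compile -- the "red" state that drives the design.
public class DiscountCalculatorTest {
    @Test
    public void ordersOfOneHundredOrMoreGetTenPercentOff() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(90.0, calc.priceAfterDiscount(100.0), 0.001);
    }

    @Test
    public void smallOrdersPayFullPrice() {
        DiscountCalculator calc = new DiscountCalculator();
        assertEquals(50.0, calc.priceAfterDiscount(50.0), 0.001);
    }
}

// Step 2: write only enough code to turn the tests green. The method
// stays small, well named, and does exactly one thing.
class DiscountCalculator {
    double priceAfterDiscount(double orderTotal) {
        return orderTotal >= 100.0 ? orderTotal * 0.9 : orderTotal;
    }
}
```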
The next most useful talk was by Ken Sipe (http://kensipe.blogspot.com) on Java memory, performance, and the garbage collector. I have been load testing systems that run on the JVM for many years, and the information Ken presented would have been incredibly valuable in that work. In truth, the application experts and Java pros should have known their JVM tuning, but they did not, and neither did I. Much guessing and voodoo went into improving performance instead of the accurate measurements Ken showed us. I will be implementing what Ken shared, and I expect to save a lot of time by either tuning the JVM or ruling it out as a performance bottleneck based on real data.
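As one hedged example of what gathering real data can look like (my own sketch, not Ken's slides), the standard java.lang.management API exposes heap and garbage collector counters that a load test harness could sample before and after a run, so tuning decisions rest on measurements instead of guesswork.

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;

// Minimal sketch: print current heap usage and garbage collector
// activity for the running JVM.
public class JvmSnapshot {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = memory.getHeapMemoryUsage();
        System.out.printf("heap used=%d MB, committed=%d MB, max=%d MB%n",
                heap.getUsed() >> 20, heap.getCommitted() >> 20, heap.getMax() >> 20);

        // One line per collector: how many collections, and how long in total.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```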
I attended another of Neal Ford's talks, this time on code metrics. Again his presentation was well organized, informative, and inspiring. For years we have been measuring our development work badly, but with good intentions. Neal presented some great tools and ideas, with an emphasis on not depending too heavily on any one measurement. For instance, cyclomatic complexity is tempting to apply across the board, but he showed that it does not tell the whole story. I especially liked what he said about "code smell," referring to overly complex routines, copy-and-paste code, and the lack of TDD.
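For readers new to the metric, here is a rough worked example of my own, not from the talk: cyclomatic complexity counts independent paths through a method, so each if, loop, and boolean operator adds one. The method below scores around five, yet the number by itself says nothing about naming, duplication, or whether the logic is even correct, which is why it cannot tell the whole story.

```java
public class ShippingRules {
    // Decision points: three ifs plus one && give a cyclomatic
    // complexity of roughly 1 + 4 = 5. A modest score, but the metric
    // alone says nothing about naming, duplication, or correctness.
    public double shippingCost(double orderTotal, boolean expedited) {
        if (orderTotal <= 0) {
            throw new IllegalArgumentException("order total must be positive");
        }
        if (orderTotal > 100 && !expedited) {
            return 0.0; // free standard shipping on large orders
        }
        if (expedited) {
            return 15.0;
        }
        return 5.0;
    }
}
```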
Ken Sipe's talk on Career 2.0 offered some very good food for thought. Among the things that stood out was his advice to increase your digital footprint with blogging, writing, and so on. That footprint works in your favor, advancing your career and exposing you to others who may be interested in what you have to share. The other concept that stood out to me was his statement that the company's managers are responsible for the company, but you are responsible for your career. In many cases advancing your career also helps the company, but even if it does not, it is still up to you to make sure you are becoming more valuable.
Last but not least, the introduction to Groovy by Jeff Brown (http://javajeff.blogspot.com) was so inspiring I could hardly sleep. I kept thinking about how I could speed up technical testing in the QA world by writing scripts very quickly. Beyond that immediate business need, Groovy promises to be useful in so many ways that I am still thinking of them days later. Jeff did a great job demonstrating the tool, and I felt I could download it and get started right away. Once I get a break I will be diving into Groovy.
I highly recommend the NFJS meetings to anyone working with Java on their projects. I plan to attend the next time it comes around.