Wednesday, December 31, 2008

Metrics or a liquid in a container?

I was responding to a thread on "productivity in software - when the term is relevant" on Test Republic and posted the following paragraph about quantification and metrics, which I thought would benefit a larger audience ....

When people (especially managers) try to make quantifiable the things that are not quantifiable (at least not easily, or not without changing the form of the task or thing), they end up changing the core of the thing .... that is goal displacement. I once heard a manager asking us to change the way we did testing because the method we were using was not quantifiable (it had more human elements etc.). We then changed the approach to testing to meet the quantification needs rather than the original needs of "information" and "evaluation". Following this, the manager was happy, metrics were available, everything was quantifiable ... but testing suffered. When that happened, the manager managed to shift the blame to something else (totally unrelated) and got away with it. He could do that and get away with it because metrics enabled him to ... metrics, being numbers, have no story or descriptive form of their own. Given a metric, you can tell any story that you want to tell and manipulate the world.

Software metrics are like gases or liquids: they have no shape or form of their own; they take the shape of the container in which they are placed. Be careful about this when making or dealing with things in software that are "quantifiable" ...

Shrini

Quantum Theory and Software Testing - Any connection?

Atomic physics, atomic structure, and the history of how all those brilliant scientists discovered (and are still discovering) the structure of the atom, the electron (with its dual wave-particle nature), etc. have always fascinated me – more so now than in my college days. I often dream of going back to college and debating with my teachers about things like atomic structure. It is the tester in me, curiously looking for answers to the questions that keep popping up in my mind.

I happened to pick up a book on quantum theory titled “In Search of Schrödinger’s Cat” by John Gribbin. What attracted me was a quote, attributed to the famous scientist Niels Bohr, that appeared on the back cover of the book.

“Anyone who is not shocked by quantum theory has not understood it.”

I immediately applied it to the current state of practice and perception of “software testing”, and I would say:

“Anyone who is not shocked by the popular practice and theory of software testing has not understood it (especially the human element of it).”

I am shocked when people make claims (what I call "popular perceptions") about software testing such as "all testing should be documented", "test cases are important for performing testing", "test cases must be traceable to requirements", "you cannot test without specifications", "the tester’s role is to find bugs", "testing assures the quality of the product", "testing needs to be more process oriented than person dependent", etc. I would say that those who are not shocked by such claims have not understood the software testing that relies on human elements (thinking, questioning, observation etc.).

Quantum theory is fascinating, and so is software testing …. I am looking for more connections between the two … Can you help me? Or am I just dreaming?

Shrini

Wednesday, November 19, 2008

Can Software ever get IT right?

Matt Heusser wrote this beautiful piece about software development practices – quoting another famous blogger, Joel Spolsky.

"... which has a programming method in which programmers code stories based on notes written by designers that are based on requirements documents created by analysts that are assessments of what the customer actually wants. It's practically designed to get everything wrong, to insure that, no matter how ignorant the analysts and architects are on an issue, they'll find someone who knows even less to write the actual code ..."

It is interesting that, with so many "loose" ends and human elements (thinking, questioning, modeling and analyzing), many still fancy the chances of "zero defect software", compare software to manufacturing, and glorify processes to fix the problems in software that is INHERENTLY designed to GO WRONG. If you look at the chain analyst -> designer -> developer -> tester -> customer, each one works with less information, or a totally different set of information, than all the others.

Notice what Matt has to say – to make sure that they get the final software wrong, they will find someone who probably knows the least to write the actual code !!!!

How can this (software) EVER GO RIGHT? Can it?

Read the entire blog post here.

Shrini

Perils of Quantification – what harm can metrics do to you?

This is a hurriedly written post (just to make sure that I do not lose the thought – a "fieldstone", in Jerry Weinberg's terminology) – I plan to use it as a placeholder for expanding ideas on this topic … Please bear with me for a while with this "being cooked" idea.

I stumbled on something that Michael Bolton said about metrics, in response to a Google group discussion thread. Michael mentions: "What you want to beware of, in particular, is turning rich information (stories about bugs, problems, risks, value) into impoverished data (numbers)."

I think that is a great (rather, an interesting) way to think about "software metrics". To me, software metrics are a great way to "squeeze", "heavily simplify" and "horribly trivialize" rich information about bugs, test ideas, problems, risks and value in software. While they provide a simple view of rich and often qualitative/subjective information, there is a huge danger of oversimplification and information loss.

Many people argue with me that "quantification" – describing something we are trying to understand in terms of numbers – is essential for science and engineering. Some even quote "you cannot improve anything that you cannot measure". I feel that the "urge" for measuring, the notion of being quantitative/objective, is simply "over emphasized". Let us consider the perils (ill effects) of quantification. Some entities lend themselves to quantification – say, counting: counting people, counting vehicles on a road, counting fruits on a tree, the marks a student scores in an exam. But many entities, especially those related to humans, are difficult to quantify – they tend to lose lots of information when quantification is attempted. This is very true of software.

Consider the following quantified information – what do you think? What do you lose when you quantify …

  1. One tsunami
  2. 1 billion Indians
  3. 1.3 billion people in the world below the poverty line of $1/day
  4. 8 million people affected with AIDS disease
  5. Software Quality of Six sigma
  6. In 2003 there were 6,328,000 car accidents in the US.

    Finally

    6300 bugs in Windows 2000 ..

Notice that each of these numbers carries rich information about loss of life, people's health, quality of life and so on. By squeezing rich information into a number, we lose the information. Numbers can be manipulated and argued any way you want; they hide information; you can be cheated by numbers. Numbers are single-dimensional, whereas the information they represent is often multidimensional.

"As proven by modern accounting scandals, you can make the numbers say whatever you want" – Mike Kelly

To be continued …

Shrini

Monday, November 17, 2008

A conversation on Automation ROI Part 1 …

When automation is required, either by contract or due to technical constraints, ROI computation may not be helpful. Intangible factors may constitute the bulk of the return, and thus arithmetic computations won't indicate the real value of automation. Fortunately in these situations we often aren't faced with questions about the value of automation because it must be employed regardless.

-Doug Hoffman


Here goes a conversation with a colleague of mine who wanted me to help him with some ROI calculation for an automation project.

Colleague: Do you have a formula or framework for calculating ROI from automation?

Me: I might … first let me understand what you are looking for.


Colleague: It is simple, man … There is a client who is looking to invest in automation, and she is interested in knowing the ROI so that she can take it to her boss with a business case.

Me: That is good. What are the elements of ROI you are interested in knowing now?


Colleague: What do you mean?

Me: To me, ROI has three elements – a notion of investment (effort, money, time, etc. – all of these can be interdependent in some way); a notion of "return" (call them benefits – some tangible, meaning quantified in terms of numbers; some intangible – soft benefits, qualitative measures); and finally a timeline, usually in direct measures like calendar months or work/effort months, OR indirect measures like the number of releases, number of test cycles, number of platforms covered, etc. Which one is of interest to you …?


Colleague: All three … of course!!!

Me: Then you have some hard work to do: gather information, data and expectations – some historical and some current.


Colleague: Well... I thought it was easy to find out the ROI … I was told that there are many freely available ROI calculators, especially ones catering to automation … are you aware of them?

Me: Yes, I have seen a few of them … and was not so impressed … One problem that I have with most (or all) of these calculators is that a) they use a highly simplified model of testing that is totally out of context, meaning you can just apply it to any project, any technology, any tool … and you will have some numbers coming out … that is too good to believe; and b) they equate automation to human testing, literally 1:1 … In my opinion, automation is a different kind of testing – remove the human being (to the extent possible) and introduce the machine (automation script) – then think (dream, pray and wish) that the program does EXACTLY what a human does.


Colleague: This is too confusing …. Let me try to explain my problem in a different way. The customer is investing x dollars in automation; she wants to know when she will be able to recover the investment and when she will start reaping benefits (possibly without investing anything incrementally). How can we help her?

Me: OK … that is fair … Here we come again to the same structure – x dollars (investment), when will she recover the investment (timeline), and when/what benefits she can expect without further incremental investment (returns). Let us attack them one by one … How does your client want to recover the investment?


Colleague: That is a silly question … she wants to save manual testing effort by automating all of it, or whatever is technically feasible. So, cycle time reduction is what she is looking at.

Me: So, the questions are – How much will the cycle time reduction be (assuming it is possible and worth pursuing)? By when will that reduction be realized? What are the incremental benefits till that point in time? Right? Anything else I am missing?


Colleague: Good … I think now you have understood my problem … what next?

Me: Are all cycles of the same size? What happens to the application under test across these cycles (meaning, does it undergo change or not)? What is the current (manual) test cycle time? What all happens in a cycle? What things are under the tester's control and what are not? When do you repeat a cycle (assuming that you do repeat cycles)?


Colleague: Oh!! My God … my head is spinning … I will have to get all that information and data... Are you sure these are required for the ROI calculation? Anything else?

Me: Yes, at least in my opinion, to give a reasonable picture of ROI where R = cycle time reduction, I would need these. There are some more things that I would require to complete the equation … but let us get started with these ….

BTW, what makes your client believe that a machine can replicate what humans do? There are things machines are good at, and there are things humans are good at. No matter what you do … I think that, in the context of test automation, a machine cannot do what a sapient human tester can do (unless humans, by design, behave as though they are brain-dead and emotionless).

Colleague: No …. No … not again … Do not try to confuse me … I will get you the details that you asked for … then let us fit an ROI formula. Please put the tester in you to sleep till then …

Me: (smiling) OK … Please bear in mind that "you cannot compare even one cycle of automated execution to the same cycle of manual execution".

While my colleague is out getting the data that I asked for … what do you think? What have been your experiences of calculating ROI for automation? How did you deal with the "improbable" yet simplistic model of treating automation execution as equivalent to what a human tester does – and hence talking about cycle time reduction etc.? What other returns (benefits) from automation have been successful with your clients? How did you quantify them?

I work in the IT services industry. Day in and day out, I hear people asking me such things. While I attempt to explain to them the hazards of the simplistic model of testing and automation used in ROI calculations, the need for a business case to push automation (one that requires numbers and quantified measures) makes me look for innovative ways to articulate what I want to say, but in a way that "business" people can agree with …
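
To make those hazards concrete, here is a minimal sketch (my own made-up arithmetic with invented numbers, not any real vendor's calculator) of the kind of computation these ROI tools perform. Notice what it silently assumes: identical cycles, a stable application, and a literal 1:1 equivalence between automated and manual execution.

```python
# A naive automation break-even model - a sketch of the simplistic
# arithmetic being critiqued here, NOT a recommended way to compute ROI.
# All numbers are invented; units are person-days.

def naive_breakeven_cycles(build_cost, maintenance_per_cycle,
                           manual_cost_per_cycle, automated_cost_per_cycle):
    """Cycles until automation 'pays for itself', assuming one automated
    cycle replaces one manual cycle, 1:1 (a big assumption)."""
    saving_per_cycle = manual_cost_per_cycle - (
        automated_cost_per_cycle + maintenance_per_cycle)
    if saving_per_cycle <= 0:
        return None  # under these numbers, automation never breaks even
    return build_cost / saving_per_cycle

# Example: 120 days to build the suite, 3 days of upkeep per cycle,
# 10 days per manual cycle vs 2 days per automated cycle.
print(naive_breakeven_cycles(120, 3, 10, 2))  # -> 24.0 cycles
```

The tidy answer is exactly what makes it suspect: nudge the maintenance figure (because the application changed) and the break-even point moves wildly, and nothing in the model captures what automation cannot do at all.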

To be continued ….

Shrini


Extras:

Here are three useful and well-written papers on "Automation ROI":

  1. ROI of Test Automation by Mike Kelly (2004)
  2. Cost Benefit Analysis by Doug Hoffman (1999)
  3. Bang for the Buck Test Automation by Elisabeth Hendrickson (2001)


Closing thought:

"… every time I hear "let's take a look at the ROI," or "it will increase your ROI," or "all we need to do is use the ROI calculator" some little part of me shrivels up and dies. It drives me insane. I refuse to believe that for products as complex and involved as automation and performance testing services (where you need to understand infrastructure, application architecture, business and use cases, deployment models, culture, risk tolerance, and the other aspects of the design, development, and testing taking place for a project) that you can so easily capture the ROI. If it were that easy you wouldn't be talking to me about it."

- Mike Kelly.

Thursday, November 13, 2008

2 Notorious “E”’s of Testing

Efficiency is doing things right; effectiveness is doing the right things – Peter Drucker

Let me take a dig at these two notorious and much-abused terms – testing effectiveness and efficiency. Raj Kamal has a post that discusses this aspect here.

To me, effectiveness carries a notion of the "degree to which something serves its purpose". For example, we can say "this measure taken to curb inflation has been effective" (meaning it appears to have served its purpose), or "this medicine is effective in slowing down the disease". So when talking about effectiveness with respect to testing, we should map the results to the mission of testing and say whether the techniques and approaches that we have deployed served their purpose or not. Remember, as testers we serve our stakeholders. Different stakeholders have different expectations of testing. Testers form their mission to suit those expectations.

So, in order to be effective in testing, we need to understand the possible stakeholders, their expectations, and which ones to focus on. That leads to the testing mission. Any testing that happens should serve the mission. Along the way, testers employ different approaches, techniques, tools and methods. A few of these will be "effective" in serving the mission, and hence serve the stakeholders with the information that they are interested in knowing; a few may not. Therefore, if you are thinking about articulating effectiveness in testing, think about stakeholders first, then their expectations, then the testing missions, then approaches, tools and techniques, and finally link all of them to the results that you produce. I am not sure that a simplistic metric or an equation that counts "reified" entities like bugs and does some math (like taking the cube root of the sum of all bugs and so on) can capture this. Bugs are not real things; they are the emotions and opinions of a frustrated "someone" who matters in your project context. Can you quantify frustration?

Also remember: since there can be multiple stakeholders (and hence multiple testing missions), your testing (approach, tools and techniques) cannot be effective for all missions. Accept this fact, and you don't have to feel guilty about it. This becomes very visible when there are contradicting expectations and hence contradicting missions. Learn to negotiate with the stakeholders, try to iron out conflicts, and state in your test strategy which missions you are focusing on and why.

Now, let me come to the term "efficiency". You might have heard people say "this vehicle is fuel efficient", "this equipment is energy efficient", "this worker is efficient". To me, the term efficiency is related to the notion of the degree of conversion of deployed input (human and machine capital) into desired outcomes. Let us take the example of an internal combustion engine to put the definition of efficiency in perspective: the ratio of useful work to energy expended. As with effectiveness, identifying the best way to convert the energy deployed for testing into useful results that our stakeholders value is never a simple task. There is no one right way to do things, either. To serve multiple stakeholders and testing missions, we as testers need to employ a diverse set of techniques, tools and methods. Hence there can be multiple ways to define "efficiency" with respect to software testing.

While Peter Drucker provides a simple framework for thinking about these terms, I would say it is too simplistic and rudimentary a model to apply to software testing. We neither have one "right" way of doing things nor a specific set of "right things to do". There are many right ways to do things, and there are many right things to do. Who defines the notion of "right"? Our stakeholders. Therefore, it is very important to align our work as testers with what stakeholders expect. The first step towards this is to identify our stakeholders. Have you done that for your project?

I would like to highlight another thing here. The notions of efficiency and effectiveness, as applied to software testing, are multi-dimensional and cannot be reduced to a simple set of numbers. Avoid the temptation to simplify these parameters into simple metrics defined in terms of entities like bug counts, test case counts, etc. Think broadly and deeply, and consider multiple stakeholders and testing missions.

In short, effectiveness deals with the "fitness of an approach/tool/technique for serving the mission", and efficiency deals with the "conversion rate of deployed capital (humans and machines) into intended output". In other words, effectiveness is about "how powerful your way of doing things is" and efficiency is about "how well you do things". Both of these parameters are important indicators of testing work, and both are multidimensional in nature.

Shrini

Friday, October 31, 2008

Exploratory Testing - the state of the art, Evening Talk

I am delivering a talk on "Exploratory Testing - The State of the Art" at the STeP-IN forum. The talk is happening at the Intuit campus in Bangalore on Nov 6th.

Find the announcement for this evening talk here.

I plan to cover mainly the advancements, tools and trends of the last few years in the field of ET, and to shed light on controversies and myths associated with exploratory testing. There will be discussions on SBTM, ET cheat sheets (Elisabeth Hendrickson), the thoughts of Jonathan Kohl (his analogy to music), Cem Kaner's thoughts on "ET after 23 years", and the works of James Lyndsay, among others.

Here is the most popular myth ... (can you beat this?)

"Exploratory testing is a technique"

Any suggestions or ideas are welcome...

See you there !!!

In the meantime, here are a few posts that you can read about exploratory testing ...

Exploratory Testing Shock

18 Myths associated with ET
ET challenged

Shrini

Saturday, October 18, 2008

Questioning Software - Bizarre?

Rex Black does not like James Bach’s definition of testing: “questioning a product in order to evaluate it”. I am not sure why. During the Test2008 conference, this issue was brought up in a discussion. Rex said (paraphrasing): “Questioning a lifeless thing like software is bizarre. I cannot question my dog.” I attempted to catch up with him later the next day to see if I could learn more about his views. When I managed to get a few minutes of his time, he took me to a pillar (painted red) and, pointing his finger at the pillar, said “Are you red?” He continued: “I am asking the pillar. Am I getting an answer? Questioning software is ridiculous and bizarre.” I thought I would get a chance to react to what he said, but Rex, being a busy man who did not have time for “bizarre/meaningless” debates, excused himself and went away.

Is questioning software really bizarre? I don’t think so. When Rex walked up to that red pillar and asked “Are you red?”, what was he doing? Questioning the pillar, right? He did question the pillar. Through his eyes, he could figure out that it was a red pillar. What is the problem then? Maybe he was referring to the inability of the “lifeless” pillar to answer him back in some human language that he could understand. Well, that is the answering part, not the questioning part. Let us apply this to software: everything that we do as part of testing can be thought of as a question that we ask (not necessarily in the same way as humans communicate), and the software does answer (unlike the pillar), in a subtle way. Thinking about testing as a questioning process is a strong and powerful way to organize one's thought process about testing. As testers we must develop the skills to question, to interpret the answers, to improvise our questioning, and to analyze the subtle answers given by the software.

What do you say Mr. Black?

[Update] More on questioning, meanings and various interpretations of "questioning" software, by Michael Bolton, is here.

Shrini

Sunday, September 14, 2008

Soft Part of Software Requirements ...

Michael Bolton, in response to a discussion on software requirements, mentioned this ...

"... There are many requirements that are matters of opinion, aesthetics, value, usability, compatibility that can't be subjected to a formula, can't be anticipated in advance, and which change over time "

Really true ... what we typically ignore about software requirements is that --

1. Requirements evolve over time, through human interactions and communications.

2. Formal languages and domain vocabularies have a place in eliciting requirements, but we should not expect them to provide completely unambiguous, clear, testable requirements.

3. Sometimes these formal languages and domain vocabularies are costly, time consuming, or simply not feasible.

4. A software specification should cater not only to the software program that is being developed, but also to the usage of, and all the human interaction with, that program.

What do you say?

Shrini

Friday, September 12, 2008

10 Commandments for Test Automation Outsourcing

Most of my test automation experience and learning comes from working with IT groups and GUI-based automation using COTS tools. Outsourcing of testing is at the top of today’s IT manager's agenda, and automation happens to be one of the popular and “most frequently discussed” items.

I am regularly asked to formulate, present and consult on automation projects in the IT space. Based on this experience, I am attempting to formulate some commandments – about 10 of them. I believe these are a few considerations for an IT manager who is thinking about outsourcing test automation work.

Automation is not an answer to your testing problems
Testing problems such as limited testing bandwidth, limited time to test, poor application quality, etc. are true testing problems – one way to address them is to first set your “manual testing” right, and then think of automation. If your manual testing is poor, throwing automation at it will only cause that “poor” testing to be completed quickly.

Automation is a white elephant
Automation has clear benefits only when it is treated carefully. Automation is not turnkey: as your application undergoes changes, the automation solution needs to be maintained. Plus there are recurring tool costs, training costs, and test case/project management overheads. Is your vendor telling you about this sufficiently? Is your vendor downplaying this aspect? Watch out …

Anything that is quickly creatable is quickly perishable
Do not believe claims of “script creation in minutes”, “automated script generation”, etc. Any serious enterprise-level automation is software development, and requires close attention and a methodical approach to design, create and maintain.

Your business users cannot create and own automation solutions
Let your business users/SMEs and domain experts do what they are best at – business support. Do not believe claims of business users creating and maintaining automation solutions. That requires a good amount of testing and automation knowledge.

Judge your vendor by the questions they ask about automation
Suggesting an automation solution requires a thorough study of the project context: release cycles, the current state of the project, the expected future life of the application, business expectations, the state of automation practice, etc. Vendors who propose a “standard”, “gold plated” automation solution without asking questions to probe the context are “simply” selling you something – be skeptical of such vendors.

The higher the % of offshoring in automation, the higher the investment to make
While moving work offshore will give cost benefits, when it comes to automation there are a number of factors to consider when deciding how much of the work can happen offshore. The key thing is the creation of the application environment offshore. If a local application environment, with tool licenses, can be created offshore, a higher degree of automation work can happen offshore. It is important to note that for GUI-centric/COTS-tool-based automation, both the automation tool and the application should reside on the same machine. Hence, pay attention to your application environment and the feasibility of creating a local environment at the vendor's offshore location before deciding the amount of work that can move offshore. Even when such a local environment is available offshore, certain activities like acceptance testing, demos to users, and certain types of test cases will need to be done onsite. Be sensitive to these factors. Make sure your vendor asks about this and suggests suitable alternatives.

Pay attention to dependencies and the quality of current test artifacts
Successful outsourced automation requires that project dependencies are well understood by all stakeholders. Access to the application from offshore, access to testers/developers/SMEs for questions about test cases, and access to reviewers and code acceptance people are a few dependencies that need to be tracked as part of the project. If automation is built on the basis of existing manual test cases, make sure that these are detailed enough and available in a form that can be sent across to the vendor's offshore team.

Decide acceptance criteria
Formulation of acceptance criteria is an item that is often given the least importance while planning outsourced automation projects. Identify an internal owner in your organization who will accept the code/solution delivered by the vendor. Make sure that this person is engaged in the project from the beginning and formulates the acceptance criteria along with the vendor's technical team. Failing to identify an automation acceptance person on your end and to get formal agreement on acceptance criteria can leave you “high and dry”, and leaves open space for the vendor to deliver any “working” automation code – but not necessarily code that will last.

Avoid linking an automation project (and its deliverables) to application release dates
One common mistake committed by IT groups outsourcing automation is to link the automation deliverables to the immediately ensuing product/application release and cut down the time/effort for manual testing. One IT manager said to his project team: “We are having a major release of application XXX in January and we have an automation solution coming from a vendor by December. Since the vendor has promised that 90% of manual testing will be automated, let us plan to allocate 2 weeks of testing instead of the 10 weeks planned earlier. As promised by the vendor, we can deploy automation and cut down testing by more than 50%.” What is the problem here? What if the automation delivery slips for reasons beyond everyone's control? What if development is delayed? What if, due to poor test case quality and a fast-changing application, the automation scripts give inconsistent results? This will result in a conflict: “believe the automation” or “get some good manual testing done”? Such conflicts can severely impact your releases and hence your business plans.

Record and playback (RP) automation is for kids
This point can never be overemphasized. Many IT managers still feel that the record-playback features of industry-standard automation tools can help them create automation quickly, and that thereafter their own resources should be able to record and create/maintain automation scripts. However, experience has shown time and again that the RP approach is not useful beyond learning how the automation tool works with the application, and cannot be used for real, sustainable automation. If a vendor proposes this as part of the solution, you should be alert and suspect the ability of this vendor to deliver an automation solution.

Do you agree with these commandments? Any different experiences?

[update]
Some additional tips:

What a vendor should ask you
- Questions about your manual testing practice
- Your objectives for automation, and your expectations
- Your readiness to take up automation in terms of test cases, application state and environment
- Your expectations on ROI

What a vendor should suggest to you
- Plan for automation maintenance after the supplier is gone.
- Automation may or may not reduce cycle time – that depends upon the nature of the tests, the application technology stack, the nature of the tool, etc.
- Automation may or may not reduce the cost of testing.

What to look for in an automation proposal
- Acceptance criteria
- Automation design details – how tests will be structured
- Environment related assumptions
- Pre-requisites about tool licenses, test cases, test data, access to developers, testers and business users (for clarifications about test cases)

You cannot automate testing – all of testing – as testing is intrinsically a human thinking and investigation activity. What you can realistically claim to automate is some portion of testing – namely the “test execution” of some selected test cases. Note that activities like test design, bug investigation and logging, test result verification, etc. are still to be done by a human tester. Automation will take you some places, but not all. Unfortunately, the places where it does not take you are the ones where the real problems lie – and there, only a skilled human tester can help.

Shrini

Tuesday, August 26, 2008

Automation reduces cycle time Part III

I wrote about it earlier here and here

“If automation cannot reduce cycle time, save testing cost and reduce manual testing errors, why do automation at all?” said a colleague of mine in a discussion over an email chain yesterday.

After all, automation and test cycle time (whatever the meaning and definition may be) are so “inseparable” that the theme often evokes deep emotional, business and technological thoughts. While I agree with my colleague on the last item he mentioned, “reduce manual testing errors”, for the others I would say they are true only under highly restrictive and idealistic conditions. This is similar to the practice in automobile engineering of specifying the mileage of an automobile under “standard road/test conditions”. Such claims are made by automobile manufacturers to push their brand as the fuel-efficient vehicle. Alongside goes a disclaimer that says “actual mileage may vary depending upon the prevailing conditions on the road, driving habits, etc.”

This is very true with respect to test automation too. While under ideal conditions, and for specific types of testing, automation can help in reducing “test execution” (not testing) time, in reality the actual benefits from automation vary depending upon the project context, the state of development and testing practice, and other factors.

I see a future where automation tool vendors are forced to add a disclaimer to protect themselves from getting sued by someone who just purchased the tool expecting the returns claimed in the sales and marketing materials of the automation software.

As I mentioned here, testing cycle time is a complex variable that depends upon several parameters, and many of these parameters are out of the control of automation, and even of testing. Establishing a straight linear relation between automation and testing cycle time is, in my view, a “horrible” and “unrealistic” simplification.

What is (are) the (fundamental) problem(s) in testing that automation is expected to solve?

I am afraid there are very few testers or test managers out there who begin their exploration into automation with this question. For many, the notion of automation appears to be well understood. I am still struggling to answer this question in a context-free sense. How can there be a universal solution to a vaguely or incompletely specified problem?

Dear reader, what do you think are the problems that automation is expected to solve?

Shorten test cycle time? Reduce manual errors? Help kill boredom and fatigue due to repetition? Save dollars of testing cost? Bring consistent testing results? Supplement human testing capabilities?

To be continued …

Sunday, July 27, 2008

Software - A game of questions and answers

"The most serious mistakes are not being made as a result of wrong answers. The truly dangerous thing is asking the wrong question."
— Peter Drucker

"Testing is a questioning process in order to evaluate software" - James Bach

"Computers are useless. They can only give you answers." - Pablo Picasso

"One who asks a question is a fool for five minutes; one who does not ask a question remains a fool forever." - A Chinese proverb

The other day I was discussing with one of my colleagues … somehow our discussion turned to interpreting, in simple terms, the whole “game” of software development and testing. Here is how we ended up agreeing on a “simple” model to describe software and the software life cycle … (not the SDLC but the SLC)


Software development is about coming up with answers (and demonstrating those answers with an example) to the questions raised by testers, end users and other stakeholders.

Software testers ask questions about the claims and capabilities of what the software is supposed to do, take the questions to developers and project managers, and ask for answers.

The project manager or project sponsors scan these questions, pick the ones they think are worth “answering”, prioritize them, and pass them on to the developers to provide answers and ways to demonstrate those answers. Before releasing the software to testers, developers do some question-and-answer sessions with buddy developers and leads (peer testing, unit testing and code reviews).

Developers then get on a mission to analyze the questions and develop/construct answers in the form of capabilities in the software, and “release” it to testers to check whether the answers are “satisfactory”. When developers do not find answers, or feel that it will take a relatively long time to find them, they turn to the project manager with their analysis of why an answer cannot be made available immediately. The project manager then takes the decision to “defer” those “unanswered” questions to future releases.

Testers, at times acting on behalf of end users and other stakeholders, verify those answers and check to see that they are OK … sometimes there will be follow-up questions or new questions (regression bugs/issues), and these are routed to the developers via the project manager. This cycle repeats as long as there are new questions to be answered by the developers.

So … as long as there are questions to be answered about the software … there will be a need for developers (who provide the answers) and a need for project managers (to prioritize and decide which questions need to be answered) – and hence a software development project …

Guess what … it is the software testers who drive the whole thing, by asking relevant and important questions about the software – about its claims and capabilities …

So … an important trait of a tester is to practice asking “good” questions …

Shrini

Wednesday, July 23, 2008

Software - A machine or an organism or ?

The first obvious difference between machines and organisms is the fact that machines are constructed, whereas organisms grow. …

Whereas the activities of a machine are determined by its structure, the relation is reversed in organisms - organic structure is determined by processes

This is how Fritjof Capra opens chapter 8 of his celebrated book “The Turning Point”:
http://www.mountainman.com.au/capra_1.html

How do we understand software …? As a machine or an organism? Does software grow? Do we understand software by its structure or by observing (!!!) its behavior? What are useful models of software that help us understand it?

What kind of thing is this software? One thing is sure … it is not just code …

A general system …!!!! It is code plus various connected systems … can you think of all those systems connected with the software system, or systems?

Shrini

Sunday, July 20, 2008

Are all best practices "worthless"? Testing Best Practices

The other day I was quoting the following, from Jerry’s new book on testing, to one of my colleagues who is a “best practice” proponent.

…..The risks in these two situations are vastly different, so do you think I recommended the same testing process I used for finding a personal web-writing application? Do you imagine I recommended that my client install random freeware pacemakers into live patients until he found something he liked, or didn't dislike? Why not?

I took the above sentences as a reference and told him: “Can you use the software testing strategy that one uses for a personal web-writing application for the embedded software in a heart pacemaker? Hence best practices are such a junk thing ...”

At that he was silent for a while, then answered: “I agree with your point that the test strategy or approach used for a web application cannot be applied to the embedded software in a pacemaker … but how about picking practices from the same field/domain – will that not save time, energy and effort for my client? Let us say I develop a list of practices for a given field (embedded software used in human bodies) and keep ‘selling’ them as best practices (a jump-start kit) to clients who deal with such software. What is your opinion? Would you still say best practices (in a context) are junk?”

I did not have a good answer for him …. Then we discussed “universal best practices” (I am not sure such a phrase should exist, as all best practices are universal in nature by default, and context-less??) such as “walking is good for health”, “test considering end user scenarios”, “do unit testing”, “do code reviews”, “aspirin is good for the heart”, “drunken driving leads to accidents”, “do meditation to calm your mind”, etc. I told him about at least 3 contexts for each of these best practices where following the practice can lead to harmful effects.

After listening to me … he said … “Shrini … you appear to be ‘making up’ all these contexts to prove your point … I want you to answer my question – are all generic best practice recommendations worthless or fake? When customers want something readymade that will help them jumpstart the work, they would like to see if I, as a consultant, can bring some ‘best practices’ from my previous similar experiences. Is that expectation unreasonable?”

I am thinking ... I don’t have a good answer for him … do you? I hope Jerry would have some answer …

Are there any "universal best practices", or are all best practices by default universal and context-free? Will a best practice cease to remain a best practice once it comes with a context?

[update] Quoting from Jerry's book again - "As humans - we are not perfect thinkers, we are affected by emotions and we are not clones. We are imperfect, irrational, value-driven, diverse humans - hence we test software and test our testing AND hence test the "best practices" that sales and marketing folks associate with software testing."

Shrini

Exploratory Testing SHOCK ....

A colleague of mine the other day expressed his struggle to make exploratory testing work for his team (scalability and making it a best practice !!!). He said: "Exploratory testing is HIGHLY person DEPENDENT - that is the biggest problem for me ... Do you have any process document for doing best exploratory testing? I will have that included in our testing process framework. BTW, that will help us earn some kudos from our CMMI level 5 assessment team."

I said: " ... that is true ... why just exploratory testing ... any good sapient testing is "person/human" dependent. Good testing requires a thinking and inquisitive human mind. Are you planning to get testing done by machines or robots, so that your person dependency goes away? If yes .. kindly ask the CMMI team to order a few robots for a testing POC (proof of concept)."

He could not answer for a while ... then responded in a low voice: "I knew you would say something like this ... but I have an answer … Automation!!!! I have raised a request to buy 10 licenses of the #1 tool in the test tools market .. Howzzzzzzzzzzzzzzzt?"

Now it is my turn to faint ....

Thursday, June 26, 2008

Side Effects of Metrics/Statistics

Jamie Dobson writes this piece of "reality" with respect to statistics/metrics and numbers.

“... that human beings will always work toward their defined success criteria."

True, and very revealing for all metrics enthusiasts. Just let people know what they will be measured on, and they will modify their work patterns and output to score positively on the measurement criteria. For example, if a test team is measured on the number of bugs the team logs, you will see more and more bugs; if a test team is measured on the number of test cases they execute, you will see testers executing ever-increasing numbers of test cases.

One thing that happens is what I refer to as "goal displacement" – the goal of doing the “required” work gets displaced by doing *that* work in *that* way, as described/interpreted by the measurement criteria. Can you see a problem here?

When working in a social/cultural setup involving human beings, the introduction of "monitoring/measuring" typically causes a "shift" in the overall behavior of the group towards "what is being measured" instead of "what is required". We tend to believe that people behave the same with or without a measurement system in place – and we are wrong.

This is the side effect that I am referring to when a metrics program is introduced in a software project setting ....

Are you aware of this? What steps can be taken to address the side effects?

Shrini

Tuesday, June 24, 2008

Software Testing certifications Part II

Dr. Cem Kaner posted a note to the Software Testing Yahoo group on the topic of “software testing certifications”. I thought it would add a lot of sense and value to a discussion about software testing certifications to share those views here.

Dr. Kaner quotes the following in his note:
http://www.channelinsider.com/c/a/Careers/VARs-IT-Certs-More-About-Marketing-Less-About-Skills/

Continuing my thoughts on certifications, here is something that I would like to add, on the basis of Dr. Kaner’s notes.

1. Certifications have value as a “marketing” aid, but most people confuse them with a means of gaining knowledge, experience or learning.
2. IT organizations and service providers use their “certified staff” as “proof” to their clients that their staff are well trained.
3. Certifications do have a place in hiring. Whether you like it or not, organizations still use certifications as a main filtering mechanism in hiring, just like a college engineering degree.
4. Certifications matter for those who are in the initial stages of their career, especially those who are looking to get a foot in the door of the testing field. Most of the time, certifications get them a call to the interview.
5. Certifications can get you an interview call, and might even get you a job, but thereafter it is your skill and work that “keep” you in the job. Do not mistake a certification for a lifetime warranty on the job.
6. One very common argument in favor of certification is that “certification helps in knowing the testing vocabulary”. This is true to some extent. But if going for certification with this objective, keep in mind that there are no universally accepted authorities that define and mandate testing terminology, and terms and practices vary across the board.
7. Certification enthusiasts claim that certifications are a means of learning and gaining knowledge in the subject. WRONG … there are better ways of studying and learning than going for certification.
8. With the help of the internet – thanks to Google and other search engines – information today is everywhere. Just look around: you can learn a lot and gain knowledge by effectively searching the web, reading blogs, writing a blog, and engaging in conversations with others in the community.

Now let me talk about those certifications that hold value in today’s world:

• CISSP (Certified Information Systems Security Professional) – To earn a CISSP, candidates must have five years of experience and an endorsement from a professional certified by (ISC)2, the organization that awards CISSP certifications.
• CCIE (Cisco Certified Internetwork Expert) – this one is especially my favorite, as it requires the candidate to demonstrate knowledge in a lab environment as part of the exam – e.g. fixing a faulty router.

To summarize: certifications tend to be of value to some (hiring managers and new entrants), and there are some examples of good certifications that test the skill of the candidate. Do not confuse certifications with “learning” and “knowledge” – most of the current software testing certifications are best used as “marketing” tools.

Friday, June 13, 2008

What if Automation finds bugs ....? Good thing or bad thing?

Ryan, in response to my post on "cycle time reduction and automation", mentions: "It is an accurate statement that automation will not improve cycle time if it finds bugs, where bugs would have otherwise gone undetected. However, I believe the claims automation vendors make is based on the fact that manual testing is also going to uncover those bugs."

So, when automation finds bugs, your cycle time increases. When people claim cycle time reduction, they "overlook" this fact. Why? Because "those bugs will be found by manual testing too"? That may or may not be the case. Bugs discovered by human testing and bugs discovered by automation tend to be of different types.

Now, let us follow the trail of what happens when a bug is discovered.

In automation, the situation can be a bit tricky, especially when the automation suites and logs are big. An error/bug reported by an automation run needs to be checked to see whether it is a bug in the automation code, a bug in the application, a bug in the data setup, or some timing or synchronization related problem (in a GUI automation scenario). Let us say you have a 5-7 page log file: you will have to scan/read through the log file and locate the bug. You might have to execute the failed automated test manually (with the corresponding data setup etc.).

In manual testing, a human tester can easily trace and follow the bug trail and document the bug. At a high level, the bug investigation and isolation effort tends to be relatively low.

Hence, when automation discovers a bug, things get really problematic.

If one were to cut down cycle time, by automation or otherwise, they HAVE to make sure either that "no bugs are discovered", or that "any discovered bugs are IGNORED", or that "bugs that are discovered, if fixed, are not tested again, and no other regression testing is done" ....

Can automation control or influence any of the above events – prevent bugs from being discovered, ignore bugs if they are accidentally discovered, or mandate that bug fixes will not be subsequently tested?

For the sake of argument, let us suppose that both a human test cycle and automation find the same number of bugs ... and take the "bugs" portion out of the test cycle. How can automation then save test cycle time? On what parameters does this cycle time reduction by automation depend?

The type of test – the nature of the interactions between the "test execution agent" (a human or an automated script) and the nature of the verifications (during and after execution):
  • GUI forms with lots of data input fields – can result in quick form-fill tests when automated (zero think time).
  • Tests that require long processing times cannot gain from automation, as automation cannot speed up the processing time.
  • Tests that require visual inspection – window titles, error messages, colors, tool tips and other GUI-centric elements – are better tested manually, as programmatic checks would mean lots of investment. Human testers are quicker and cheaper in such cases.
  • Result verifications that require detailed analysis, text file processing, database checks, etc. are good candidates for gaining cycle time.
Thus, there are parameters that are beyond the reach of automation ... hence the notion of cycle time reduction really, really has to be taken with "caution" – the toy model below illustrates why.
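
To see how these parameters interact, here is a toy model (the linear cost structure and every number in it are my own assumptions, purely for illustration) of a test cycle in which each discovered bug adds investigation overhead:

```python
# A toy model of test cycle time (in hours). All numbers are invented
# for illustration; real test cycles are not this linear.

def cycle_time(num_tests, exec_time_per_test, bugs_found,
               investigation_time_per_bug):
    """Total cycle time = execution time + bug investigation time."""
    return (num_tests * exec_time_per_test
            + bugs_found * investigation_time_per_bug)

tests, bugs = 200, 15

# Manual: slow execution, but a human isolates a bug quickly (say 0.5 h).
manual = cycle_time(tests, 0.25, bugs, 0.5)

# Automated: near-zero execution time per test, but each failure must be
# triaged (script bug? app bug? data? synchronization?) - say 2 h each.
automated = cycle_time(tests, 0.02, bugs, 2.0)

print(manual, automated)  # 57.5 vs 34.0; at 40 bugs it flips: 70.0 vs 84.0
```

With 15 bugs the automated cycle still wins; with 40 it loses. That is the point: cycle time depends on what the tests find, not just on how fast they execute.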

Shrini

A catalogue of Test Automation Benefits ...

The other day, someone asked me: “In your opinion, what are the real benefits of automation?” That triggered the idea for this post … Let me attempt to consolidate and list all the benefits people claim around “automation”.

Real: In my opinion, these are “real” and achievable benefits.

- Consistent and accurate test results (nearly free from human errors) – when are accuracy and accurate results important? – numerical calculations
- Untiring – can run for long hours without any loss of efficiency
- Quick – no think time while running tests. (A computer program does not think, so this is not useful in cases where you need to think as you execute. How does an automation program help you gain speed when thinking is required?)
- Supplements the human ability to spot software bugs
- Helps to run big numbers of data combinations – can test robustness
- Helps with multi-platform combinations (OS/browser/database and other application setups)
- Repeats the testing (test execution) done for configuration A in configuration B
- Hence, increased test coverage
- The following non-test-execution tasks:
- Generating and managing special sets of test data
- Large-volume test comparisons
- Automated workflows and alerts
- Environment setup


Fake: These benefits sit at a transition point – where the focus slowly starts drifting from “real” to “imaginary” and hence “fake”. These are false promises that are realistically not achievable.

- Improvement in application quality (automation cannot improve application quality – even testing cannot … only developers and business analysts can)
- Improvement in the test process (a sound test process is a pre-requisite for automation)
- Non-technical people can do automation – no programming knowledge required
- Improved test planning (not sure how)
- By executing a set of test cases a specified number of times, ROI from automation is realized – say after 10 executions the automation pays for itself, and after that it is “cost saving” all along


Conditional: These are benefits that are realizable and could be reasonable, but only under a very strict set of conditions – “once in a while” cases. When these benefits are stated, the conditions that must be fulfilled to realize them are mostly left unstated. This makes the benefits look as though they are real and universally applicable.

- Simply – saves time.
- Test effort reduction
- Improved Time to market
- Improved test productivity
- 24x7 testing possible
- Knowledge retention


Totally outrageous: Some claims are really outrageous and are more or less a typical sales pitch. People who make and believe such claims do not seem to understand the human side of testing, testing itself, or automation. They just believe that machines are better than human testers.

- An hour of automated test execution is equal to an hour of skilled human testing
- Automation can replicate human interactions
- Reduced dependency on human testers
- Solves the problems of resource crunch and limited time available for testing
- Reduced defect rate – fewer defects


Anything I missed?

Friday, June 06, 2008

What is a bug ... A new meaning ...

Let me give this one-liner a try ... (short post)

"A software bug is a reflection of the mind of a confused human user"

Analyse this statement .....

[Some updates]
The one-liner proposed above seems to have generated interest among some readers ... Let me clarify further ....

This one-liner of mine is the beginning of an effort to understand the human thinking process as he/she sees a bug. A human goes through a series of emotions while dealing with a software bug. A dominant emotion among these is "confusion" – a state of perplexity, chaos, uncertainty.

Let us say you are running a test, observing what is happening, and doing a quick comparison with what you were expecting ... Suddenly something "unexpected" happens. Your mind starts remodelling all that is happening: you were expecting "x" to happen, whereas you are seeing "a", "b" and "c" happening – that contradicts your model .... your heart beat goes up, your blood pressure goes up ... lots of physiological changes happen ... your brain tries to stabilize, create a new model on the fly, and comprehend the things on the ground, and after a while you calm down and begin to put up an explanation of what is happening and why. A bug report comes out at the end ....

So, Bhargavi – the life cycle of a bug begins with the "confused" mind of a tester. Confusion is a mental state in which you are not able to comprehend and reason about the situations and problems that you are subjected to (imagine a traffic cop in a traffic jam on the road, or a kid on the first day at a new school). It is the confusion that triggers the thinking in the mind of the tester; the tester follows the trail of that thinking and builds up a new model, and finally, when the mind is settled, comes up with an explanation. So when a bug gets nailed down, the mind becomes calm.

Even an obvious bug, when you look at it deeply, is the result of a thinking process triggered by the "confusion" between what is expected and what is observed. As a normal human reaction to a state of confusion, you reason out the things and make them clear – then the bug becomes obvious ...

If an obvious bug is at level "0", the confused state is at level "-1". Also, the "sense" of confidence is always followed by a subtle, at times quick, state of confusion. When you try to open your car door the wrong way, turn your cupboard key in the wrong direction, or try to open a door by pulling when it opens only in "push" mode ... your mind goes through a quick state of confusion ... you quickly notice the situation and regain your calm.

It is a deep cognitive process .... Psychologists would be able to explain it better ... the "Psychology of a BUG"?

Shrini

Monday, June 02, 2008

Goal of Testing and a quotable quote

Michael Bolton, in response to this post from Steve Rowe, mentioned this gem …

“Do our automated tests take into account the notion that different people might value different things, and that one of the tester's primary goals is to recognize different constituencies and the ways in which their values might be threatened?”

Many people in our community think that as testers our goal is:
  1. To find bugs
  2. To prove that the application "works" as per the specifications
  3. To run a bunch of tests and report the results
  4. To develop some automation to speed up the execution of tests
But our real goal is to “analyze” and “investigate” the value systems/perceptions of the different stakeholders of a software product and to explore the various possibilities where a stakeholder’s value is threatened. This involves, among other things, finding some bugs, running some tests, writing some automation, etc. Note that the end user or customer of the software is an important stakeholder.

This statement from Michael is an extension, or probably the logical conclusion, of the famous statement that is often associated with testing: “be the customer’s advocate” (or “think like the customer”). Michael seems to suggest that it is not only the customer whom we should consider; as testers we should think about all stakeholders and explore what each of these stakeholders values.

A stakeholder is a person who is affected by the success or failure of a project, or the actions and inactions of a product, or the effects of a service. – Cem Kaner

In one of the comments on the same post, I found another statement worth quoting, from Ben Walther:

“There will always be inputs into your system that violate the assumptions made in building it. A computer generated test will not be able to violate such assumptions.”

Very true … that is why testing is so challenging and exciting

Shrini

Thursday, May 29, 2008

Mission - Test Estimation model

It is very difficult to make a vigorous, plausible, and job-risking defense of an estimate that is derived by no quantitative method, supported by little data, and certified chiefly by the hunches of the managers.—Fred Brooks

Steve McConnell opens his famous book "Software Estimation: Demystifying the Black Art" with the above statement ... The development community is fortunate to have someone like Steve McConnell to help it solve this puzzle called "Estimation".

I am working on developing a test estimation model. I think that in today's testing world this is the biggest and most complex testing problem to be solved. As I read and research this topic and formulate my initial thoughts, I am thinking about testing, testing models, questioning, thinking, modeling, bug investigation, test-stop criteria, non-linearity, the size of testing work and so on.

Following is a list of challenges/questions that I am searching answers for …

1. Model of Testing: Testing is an evaluation activity, as against development, which is a construction activity. How should we think about estimating an "evaluation/analysis" based activity?

2. A sapient testing model that requires critical thinking, questioning and modeling is essentially a non-linear activity, whereas development can be a relatively bounded activity of going from a spec to working code. For example, if a particular testing task takes x hours, will 10 such units take 10x hours? We cannot make such an extrapolation, right?

3. Sizing the testing activity is still a big problem – what is the unit of a testing task? An atom? Or a molecule? What is the building block?

4. Tasks like bug finding, investigation and test design are difficult to quantify.

5. When do we stop testing? What are the exit criteria for a test cycle? These set the upper limit for the testing scope.

6. How many test cycles do we need? How do we estimate that?

7. What are the human factors involved in the process of test estimation?

8. How do we address the problem of "slipped bugs" in the case of plain and straightforward "scripted testing" (write test cases, execute them word by word, log the bugs and end the cycle)? What about bugs or tests that go beyond the specs – exploratory tests? (This is something we worry about heavily in the IT services industry – how much testing should we do? There are penalty clauses for missed bugs.)

9. What about the productivity numbers used? In estimation we first size the work, then apply productivity figures to arrive at effort, then split the effort into a schedule (a naive version of this arithmetic is sketched after this list). How do you deal with numbers like "10 test cases executed per person per day"? Should you believe them, or question what a test case is?

10. Is estimation guesswork?

11. Is estimation similar to the work of a fortune teller, a weather forecaster, an election analyst, a stock market analyst, a punter at a horse race, or a gambler? After all, in test estimation we tend to predict the future, right?
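
On question 9 – here is a minimal sketch, in Python, of the naive "size -> effort -> schedule" arithmetic. All the numbers and names are hypothetical illustrations of mine, not a recommended model; the point is how much the linearity assumption hides.

  def naive_test_estimate(test_cases, cases_per_person_day=10.0, team_size=4):
      """Linear estimate: assumes every test case costs the same effort."""
      effort_person_days = test_cases / cases_per_person_day   # sizing -> effort
      schedule_days = effort_person_days / team_size           # effort -> schedule
      return effort_person_days, schedule_days

  print(naive_test_estimate(400))   # (40.0, 10.0)
  # The model has no term for bug investigation, re-runs, thinking,
  # questioning or blocked environments - exactly the non-linear parts of
  # testing (question 2) that make such estimates go wrong.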

Any views? Are these questions important while arriving at a test estimation model?

Interesting quotes on Prediction:

"Those who have knowledge, don't predict. Those who predict, don't have knowledge.” - Lao Tzu
“Trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.” - Peter Drucker

Shrini

Tuesday, May 27, 2008

Can Automation reduce cycle time or improve time to market?


Continuing this discussion on test effort and manual testing – there is another popular variation, in which automation tool vendors claim "improved time to market" and/or "reduced cycle time". In this post, let me dissect these claims and see how true and credible they are.

First of all, let us freeze what one means by "time to market" and "cycle time". Let me define the terms as below, in the context of a traditional/waterfall type of software development.

Time to market is the time window between the moment you start development (requirements and so on) and the moment you ship the product to market for general consumption. Depending on whether you are doing a major or a minor release, this window may span from a few months to a few years (as in the case of Windows Vista).

Cycle time (when used without any qualification, "cycle" indicates a cycle of development and testing) is the time window for a software product under development. Cycle time can be divided into development time and testing time. A development cycle starts with the deliberation of requirements and the design of the features to be implemented. A test cycle starts when the development team "releases" the code for testing to begin, and ends when the test team completes the planned testing for that cycle. During this period the development team can fix the bugs reported. Hence, for all practical purposes, cycle time is the window between the start of requirements/design and the point at which the test team has completed its testing and the development team is ready to release the next build.

So it is apparent from the above definitions that cycle time is a subset of time to market.


Automation can reduce cycle time (the lesser of the two claims) only if all of the following hold (check how many of these items or situations "automation" can actually control; a back-of-the-envelope model follows the list):

  • Automated tests run without reporting ANY bugs in the software (a bug reported by automation means some investigation, plus confirmation by manual execution that the bug reported is indeed a bug in the application)
  • Automated tests DO NOT report any runtime errors (a runtime error in an automated test means some investigation and a re-run)
  • The development team INSTANTLY fixes any bugs reported by automation, and these fixes are so well done that no further verification (manual or automated) is required
  • The manual testing (a very small portion indeed) that happens in the cycle does not report any bugs; all the manual tests pass
  • If manual testing does report some bugs, those bugs are fixed INSTANTLY without requiring any further verification
  • Bug reporting time, triage and investigation (if any) are so small as to be negligible
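
To make this concrete, here is a back-of-the-envelope sketch (my own illustration, with made-up overhead figures) of the effective automated cycle time once human overheads enter the picture. In the vendor pitch, every overhead below is zero.

  def automated_cycle_hours(run_hours, bugs_reported, script_failures,
                            investigate_hours_per_issue=2.0,
                            fix_and_verify_hours_per_bug=4.0):
      """Effective cycle time = raw run time + human overheads."""
      investigation = (bugs_reported + script_failures) * investigate_hours_per_issue
      fix_and_verify = bugs_reported * fix_and_verify_hours_per_bug
      return run_hours + investigation + fix_and_verify

  # Vendor scenario: no bugs, no script failures -> pure run time.
  print(automated_cycle_hours(8, bugs_reported=0, script_failures=0))    # 8.0
  # A more realistic cycle: 12 bugs and 20 flaky script failures.
  print(automated_cycle_hours(8, bugs_reported=12, script_failures=20))  # 120.0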

Automation can reduce/improve time to market only if –

  • All the items mentioned under "cycle time" above hold, and
  • Business stakeholders do not worry about outstanding bugs. They take the decision to ship the product as soon as the automated test cycle is completed (because the automation cycle is NOT expected to report any bugs), so at the end of the automated test cycle, shipping the product is the logical thing to follow.


If you analyze these situations, you will notice that many of the factors that influence cycle time or time to market are not under the control of "test automation". These factors have to do with the development team, the quality of the code, the quality of the requirements, the number and nature of the bugs reported by both manual and automated test execution and, above all, the stakeholders' decisions about those reported bugs. One cannot claim that there will be a cycle time reduction or improved time to market JUST because x% of test cases are automated. That is a big and unrealistic generalization - one that only an automation tool vendor can afford to make.

So the next time someone says automation reduces time (either cycle time or time to market) – do quiz them and ask, "What do you mean?"

Bonus question: Can automation accelerate your testing? If yes, under what circumstances?

Next post: Can automation address the IT problem of limited (human) resources and tight deadlines?

Shrini

Saturday, May 17, 2008

When method dictates Goal – See Goal displacement in action - II

I wrote about goal displacement, part I, here. Surprisingly ... no comments yet ... :(

Here is example #2

Software Testing Certifications: A conversation between a certification enthusiast (CE) and a critic (CC – that is me)

CE: Do you know about this certification for software testing? It is very popular in Europe and the US.
CC: Yes, I do, but I am not sure it really helps in evaluating the skills of our testers.

CE: I think it does. From what I have heard about the exam, it seems pretty exhaustive and covers all aspects of testing.
CC: Umm … what do you think the certification is testing? What does it evaluate, and how does it evaluate the testing skill of a candidate?

CE: The certification tests the "knowledge" of the tester – familiarity with terms and definitions – and the experience of the candidate …
CC: Really? How? How does a certification test the knowledge and experience of a tester?

CE: Through carefully selected questions and evaluation by testing experts. The certification exam is based on a body of knowledge, and the questions are mostly objective type. I think one can rely on the exam.
CC: So … you are saying an objective (yes/no and multiple choice) question paper is used to test the skill and experience of a tester. Does the exam allow essay-type questions? Can the tester debate an issue? Does the exam involve putting the tester into a real testing situation in any form? Does it observe the tester in action?

CE: Come on, how is that possible? Certification bodies have their limitations; they cannot set up "practical exams" to watch a tester doing testing and then rate them. Do you want a certification body to set up an audio/video facility to allow for debates, questioning and real-time situation simulation?
CC: Don't you think that the right and reasonable way to assess the skill of a tester is not a fixed set of questions that emphasizes memory recall and reproduction of the text of study material?

CE: That is right … but look at the feasibility of having such exams … what about the cost of administering such tests? What about evaluation? It would be costly. That is why the certification bodies might have created a scheme of tests that are easier to evaluate and conduct on a mass scale. This enables a relatively cheaper exam and allows more people to take it. Right?
CC: Well … good point. But what is the goal of certification exams? What do they attempt to achieve?

CE: To evaluate and assess the skills and experience of a software tester.
CC: But your current exam seems to be structured in such a way that it is easy to administer and evaluate.

CE : Ummm… that is correct.
CC: See, this is what I call goal displacement. Certification bodies wanted to design, administer and evaluate an exam system that assesses the skill of a tester. But they seem to have taken the path of designing an exam system that is easier to administer.
The goal of software testing certifications is to act as a mechanism for evaluating skills in software testing and to provide a benchmark for the talent in that space. As the popularity of such certifications grows, so grows the need for a system of mass administration and evaluation. This forces changes in the examination pattern and evaluation mechanism to facilitate mass administration. Any part of the exam that is good from a skill-evaluation perspective will be dumped if it does not lend itself to easy evaluation of results.

Side Note:

Software estimation models (especially test estimation models) often suffer from goal displacement problems. The other day I told my colleague, "The reason we struggle with test estimates is that we use a simplified model of testing while estimating, but the reality is different; hence the estimates typically go wrong." To that he said, "OK ... we know that actual testing models are complex (non-linear, involving critical thinking/questioning etc). That is why we use simplified models of testing." ... How strange ..!!!!

This is another example of goal displacement, right there ... To make estimation possible - change the testing model itself ... use a simple one. So the goal of using a good and reasonable testing model that happens to be complex (and does not lend itself that easily to estimation) is replaced by the goal of finding a testing model that is simple and easy to estimate ...

What should be your goal - good testing, or estimation?

Shrini

Can Automation reduce human testing effort?

This post is for all those IT managers who are considering outsourcing testing and are listening to various vendor presentations on testing and automation.

I was in a conversation with a client the other day. He asked me, "Can you reduce the number of resources you have currently deployed for regression testing of this application – say by half – in the next six months or so, by using automation?"

I was not shocked to hear this since, for the past few years, I have seen many clients in the IT application testing space hold similar expectations of outsourced testing, and hence of automation. In the outsourced IT application testing domain, this is how a service provider positions automation – as a means to cut down the cost of testing (resources).

The client's expectation of a reduction in testing resources (with no reference to the scope of testing) through automation, within a specified time frame, stands on the following assumptions or beliefs.


- Regression testing is simply executing a set of test cases and reporting the results – nothing more than that.
- Regression testing gives the confidence that nothing is broken – all that was working previously is "intact".
- An automated test cycle means zero human involvement; hence the percentage of automation should directly result in a proportionate reduction in the human effort of testing. For example, if 50% of the test cases are automated, the manual test effort can effectively be cut to that percentage (the sketch after this list questions exactly this arithmetic).
- Automation is turnkey. Once you have an automation solution developed, you can keep using it an unlimited number of times without any extra cost or effort.
- Operationally, an automated test cycle ALWAYS takes less time (a fraction of the manual cycle) – with no additional effort required to investigate results and chase script failures.
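
Here is a minimal sketch (all figures hypothetical, chosen only for illustration) questioning the proportionality belief above. Automated tests still consume human effort: someone has to triage failures and maintain the scripts as the application changes.

  def human_effort_hours(total_cases, automated_fraction,
                         manual_hours_per_case=0.5,
                         triage_hours_per_100_automated=5.0,
                         maintenance_hours_per_cycle=30.0):
      """Human effort per cycle: manual execution + triage + script upkeep."""
      automated = total_cases * automated_fraction
      manual = total_cases - automated
      maintenance = maintenance_hours_per_cycle if automated else 0.0
      return (manual * manual_hours_per_case
              + automated / 100 * triage_hours_per_100_automated
              + maintenance)

  print(human_effort_hours(1000, 0.0))   # all manual: 500.0 hours
  print(human_effort_hours(1000, 0.5))   # 50% automated: 305.0 hours, not 250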


Let us look at the costs of creating and owning automation (note that the majority of these are recurring); a rough break-even comparison follows the list.

- Tool evaluation, proof of concept and other initial investigation costs
- Automation tool cost and recurring licenses/upgrade costs
- Resource Training costs
- Costs associated with automation environment (development and execution) – servers, applications, connectivity etc
- Costs associated with manual test case cleanup, rework, test data, clarifications and any other effort needed to facilitate automation
- Automation development/ testing/review and acceptance costs
- Costs of setting up and maintaining the automation execution environment
- Costs of investigation of automation results, failures
- Costs of re-runs (testing automation code fixes) and any required additional human testing
- Costs of maintaining automation code – the design and structure of the automation, the extent and frequency of application changes, automation tool changes (new versions etc) and changes in the application platform determine this cost component
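
To see how these recurring components decide the economics, here is a rough break-even sketch with purely hypothetical figures. Whether automation ever pays back depends entirely on how large the recurring costs are relative to a manual cycle.

  def cumulative_cost(cycles, upfront, recurring_per_cycle):
      """Total cost of ownership after a number of test cycles."""
      return upfront + cycles * recurring_per_cycle

  MANUAL_CYCLE = 10000          # cost of one fully manual regression cycle
  AUTOMATION_UPFRONT = 60000    # tool, training, initial script development ...
  AUTOMATION_RECURRING = 7000   # maintenance, triage, re-runs, licenses ...

  for cycles in (5, 10, 20, 40):
      manual = cumulative_cost(cycles, 0, MANUAL_CYCLE)
      automated = cumulative_cost(cycles, AUTOMATION_UPFRONT, AUTOMATION_RECURRING)
      print(cycles, manual, automated)
  # With these numbers automation breaks even only at cycle 20; if frequent
  # application changes push the recurring cost near the manual cycle cost,
  # it never breaks even.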

How do these cost components compare with the costs associated with human testing? Will an automated test cycle always be quicker than a human testing cycle? Is speed of execution all that matters to you? Can these two versions of a testing cycle even be compared? Can one hour of automated test execution be compared with one hour of skilled human testing?

Test automation, if not "appropriately" applied, can be highly expensive and painful. If you are dependent on automation for achieving project go-live timelines – be aware that you could be risking the release. When someone says they can reduce testing effort through automation – be sure they are selling you something, and that they do not fully understand what testing is and what automation is.

Test automation is a high-risk item in business. Before you invest money, or even before you make any business decision based on automation capability – make sure that you, as an IT manager, are aware of the darker side of automation.

If you are getting the sense that I am discouraging automation of regression testing – yes, I am. Considering the approach today's IT world is taking to automation – I am against such approaches to automation, but NOT against all the other forms of automation that can happen.

So the next time you sit in a vendor presentation and hear "test automation reduces manual test cycle time and effort" - please ask "HOW?" and "UNDER WHAT CONDITIONS?". I am sure the presenter will have a tough time answering, as these are not common questions for these "automation snake oil" sellers. In case you do get an answer, make sure to check the relevance of those parameters to YOUR project and application context.

Shrini