Friday, December 15, 2006

Why counting is a bad idea

Let us consider a typical test report - a report presented in a meeting for assessing the progress of testing, attended by key stakeholders and team members:

No. of test cases prepared: 1230
No. of test cases executed: 345
No. of test cases failed: 50
No. of bugs reported: 59
No. of requirements analyzed: 45
No. of requirements updated: 50
No. of transactions covered in performance testing: 24
No. of use cases tested: 233

Productivity

No. of test cases prepared per person per hour = 5
No. of test cases executed per person per hour = 15


What do you see here?

Managers love numbers. Numbers give objective information; numbers quantify observations and help in making decisions (do they?). Numbers simplify things; one can see trends in numbers.

You might have heard one or more of the above statements (mostly in review and progress meetings, right?). When it comes to testing, followers of the Factory approach to testing are comfortable just counting things like test cases, requirements, use cases, bugs, and passed and failed test cases, and making decisions about the "quality" of the product from those counts.

Why is counting (without qualification) a bad idea in testing? What are the disadvantages of such a practice? Let us briefly take a look at a few famous, frequently *counted* things.

Count requirements (as in "there are 350 requirements for this project")
Can we count them?
How do we count? Do we have a bulleted list of requirements? If not, what do we do?
How do we translate the given requirements into a "bulleted list"?
How do we account for information loss and interpretation errors while counting requirements?
Count test cases (as in "the test team has written (or designed, or prepared) 450 test cases in the last week")
Test cases are test ideas. A test case is only a vague, incomplete and shallow representation of the actual intellectual engagement that happens in a tester's mind at the time of test execution (Michael Bolton mentioned this in his recent Rapid Software Testing workshop at Hyderabad).
How can we count ideas?
Test cases can be counted in multiple ways - more often than not, in ways that are "manipulative" - so the count is likely to be misleading.
When used for knowing or assessing testing progress - it is likely to mislead the management.
Count bugs (as in "we have discovered 45 bugs in this cycle of testing so far")
The story or background of a bug is more interesting and valuable than the count of bugs (this again I owe to Michael Bolton - "Tell me the story of this sev 1 bug" would be a more informative and revealing question than "How many sev 1 bugs have we uncovered so far?").
When tied to a tester's effectiveness - counting is likely to cause testers to manipulate bug numbers (as in "Tester 1 is a great tester as he always logs the maximum number of bugs").
Let us face a fact of life in software testing - there are certain things in testing that cannot be counted the way we count the number of cars in a parking lot, the number of patients visiting a dentist's clinic or the number of students in a school.

Certain artifacts like test cases, requirements and bugs are not countable things, and any attempt to count them can only lead to manipulation and ill-informed decisions.

Wait --- are there any things at all in testing that we can count without losing the effectiveness and usefulness of the information that the counted number might reveal?

Shrini

Wednesday, December 06, 2006

How can a software tester shoot himself/herself in the foot?

Would you like to know about the self-destructive or suicidal notions of today's tester? Would you like to know how a tester can shoot himself/herself in the foot?

There are many ways – one of them is by "declaring" or "asserting" that -

Software testing is an act of (the whole or a part of) Software Quality Assurance.
A few variations of the above –

Software Testing = Software QC (quality control) + Software QA
Software Testing = Verification + Validation.


As an ardent follower or disciple of the Context-Driven school of testing – I swear by the following definitions and views:

• Quality – "value" to someone (Jerry Weinberg)
• Bug (or defect, or issue) – "something that threatens the value" (I am not sure about the source)
OR "something that bugs somebody" (James Bach)
• Whatever QA is – it is not testing – Cem Kaner
• Testing – the act of questioning a product with the intent of discovering quality-related information for the use of a stakeholder (James Bach / Cem Kaner – I attempted to combine definitions by both James and Cem)

OR
• An act of evaluation aimed at exploring the ways in which the value to stakeholders is under threat (this is my definition – which I discovered quite recently – open for criticism)

• Stakeholder – someone who will be affected by the success or failure, or by the actions, of a product or service (Cem Kaner)

• Management is the TRUE QA group in an organization (Cem Kaner)

Now let us see how notions that assert testing to be QA, or a combination of the QA and QC roles, are self-destructive, akin to shooting oneself in the foot …

1. The terms QA and QC appear to have been borrowed from the manufacturing industry – can you measure and assess the attributes of software the same way you do for a mechanical component like a piston or a bolt?
2. You cannot control or assure quality in software by testing.
3. It can be dangerous and costly to claim as a tester that "I assure or control quality by testing", as the claim can backfire when you don't.
4. Unless your position in the organizational hierarchy is very high – you as a tester can NOT take decisions about:
a. The resources and cost allocated to the project (budget)
b. The features that go into the product (scope)
c. The time when the product will be shipped out of your doors (schedule)
d. The operations of all the related groups - development, business, sales and marketing, etc.

When most or all of the above is not in your hands – how can you assure or control quality?

5. When you claim that you assure or control quality – others can relax – a developer can say, "I can afford to leave bugs in my code – I anyway have someone paid to do the policing job", or others will say, "Let those testers worry about quality, we have work to do" – Cem Kaner

6. You will become the scapegoat or victim when bugs (or issues, or defects) leak past you. One of the stakeholders may ask – "You were paid to do the job of assuring or controlling quality – how did you let these bugs into the product?"

An interesting and relevant reference is mentioned in Cem Kaner's famous article
The ongoing revolution in software testing

Johanna Rothman says (as quoted in Cem Kaner's article) -

Testers can claim to do "QA" only if the answer to the following questions is YES:
• Do testers have the authority and cash to provide training for programmers who need it?
• Do testers have the authority to settle customer complaints? Or to drive the handling of customer complaints?
• Do testers have the ability and authority to fix bugs?
• Do testers have the ability and authority to either write or rewrite the user manuals?
• Do testers have the ability to study customer needs and design the product accordingly?

Clear enough?

What is the way out ---?

Treat software testing as a service to Stakeholder(s) to help them conceptualize, build and enhance the *value* of a product or a service.

Be a reporter or a service provider – don't be the Quality Police or a quality inspector on an assembly line …

Thursday, November 30, 2006

Launching Indian Software Testing bloggers community ...

Are you someone from India or of Indian origin?
Do you work in/for Software Industry?
Do you do or have interest in Software Testing?
Do you read blogs on software Testing?
Do you blog?
Is software testing your passion?
Do you believe in sharing knowledge in software testing community in India?


Friends – if the answer to one or more of the above questions is "Yes" – please send me an email – I am launching the "Indian Software Testing Bloggers" community – a platform for all the passionate Indian bloggers out there. I need your support, energy and passion to build this community.


I have thought of one or two ideas about how to host the community on the web, what the charter of the community should be, etc. … please share your comments ….

Let us start with a small step – who knows, one day it might take the shape of a big "revolution" in Indian software testing.

I already have notable people like Pradeep S (who blogged recently, calling on Indian testers to start blogging here) with me, and the guidance and blessings of world-renowned visionaries like James Bach and Michael Bolton.

Here are my contact details –

Shrini Kulkarni
Email: shrink@gmail.com
Cell: 91-9945841931

"What topic in testing do you want to blog about today?"

Wednesday, November 29, 2006

Story of a Test case ....

A test case, or test, is an important entity that we as testers create, use and work with as part of our testing activities. What if a test case were to come alive, like a living thing or a ghost, and were to tell its story, its life cycle ---

Here is how it MIGHT go ...


• Born – in a Word document, an Excel sheet, some text file or the HTML form of a web-based test tool – very rarely just in the mind of a tester.
• Some form of reference document (requirements, design or functional spec) is considered to be one of my parents, while my other parent is the application behavior that I am supposed to check and verify.
• I exist to prove that both my parents are one and that they don't have conflicts.
• My body structure is such that when one of my parents changes its shape or form – I need to change too; otherwise I temporarily get dumped and cease to exist – I get invalidated.
• I live in different forms - manual, automated, semi-automated, documented, undocumented, versioned and non-versioned, in a test management tool, in informal documentation.
• I am a countable *thing*, though I differ from my siblings, cousins and friends in many ways.
• Named by a tester and pushed into some repository, I have an ID.
• I am referred to in many ways: test, test case, test idea, test spec, test procedure, test pack, test set.
• Sometimes I am so detailed that a school kid could execute me by following the steps, and sometimes I can be very tricky. Sometimes I become very lengthy and sometimes just a one-liner – "Verify this…."
• Sometimes I have hard-coded data and sometimes I have no expected results. Sometimes my internal parts contradict each other.
• I get classified as "simple, medium or complex" so that people can measure the time for creating, modifying, automating or executing me.
• I get a graduate degree when people start calling me a "regression test" – I need to pass every time I get executed.
• Developers hate me when I fail, and managers would like to see only "passed" tests.
• Sometimes I am called a unit test, and sometimes an end-to-end scenario.
• Someone will review me to confirm that I am indeed born of my logical parents.
• I tend to lose my identity when someone automates me and forgets that I ever existed.
• Some tester adopts me, uses me and abuses me, cruelly compares me against my parents, and declares that I pass or fail.
• I go into hibernation mode every now and then (when either or both of my parents change their form and shape).
• I am at the mercy of a tester to look for the changes that might be required to bring me back to life (from hibernation).
• When one or both of my parents change their form and shape, OR when one or both of them die (the feature is deleted from the application and the reference document) – that is the end of my life – I get deleted.

Interesting, right?

What is the story of your Test case?

Shrini

Some more interview tips - questions that you should ask the interviewer ...

Continuing from my last post on this topic, I would like to touch upon an interesting and important aspect of a job interview from the job aspirant's point of view.

1. Asking questions about the employer and the company
(Demonstrate that you have researched the company and (already) know any publicly available information)

i) Nature of business, size, office locations, company's history
ii) Company's achievements in the recent past
iii) Company's financials
iv) Organizational hierarchy, and where the position for which you are being interviewed fits
v) Information about competitors
vi) Customers, and the company's standing in its operating domain
vii) Company's future plans for consolidation, diversification, expansion, etc.


2. Other high-impact questions

i) What are the immediate challenges that you [the manager doing the interview] are going to face in the next 3 months? Are there ways that someone in the position you're hiring for could help address those challenges?

(Source: a blog post by Johanna Rothman)

ii) A year from now, how will you evaluate if I have been successful in this position? (Source : Louise Fletcher's Bluesky Resumes Blog)

iii) What is the next step? Where do we go from here?


Suggestions, views and comments are welcome.

Shrini

Thursday, November 09, 2006

Context Driven thinking in Testing ...

I have been discussing/arguing with BJ Rollison about the issue of "schools of testing" here ...

http://blogs.msdn.com/imtesty/archive/2006/10/20/end-segregation.aspx

BJ suggests ending the segregation of the four schools of testing - with which I strongly disagree.

James Bach blogged on this here http://www.satisfice.com/blog/archives/74

And look at this simple explanation (by James again) equating adapting to a context with "parenting" here ...

http://www.satisfice.com/blog/archives/60#comments

"There’s only one context that matters– the one that you are in at the moment. If that changes, then you adjust accordingly. It’s like parenting. You don’t have to figure out how to parent every child, just the ones that belong to you. The context-specific attitude says adapt to your children and then stop adapting. The context-driven attitude, taken to its logical conclusion, is like a child psychologist’s approach. Child psychologists need to know how to adapt to any given child (normal and strange) who walks in the door.


For the same reason, if you figure out how to report coverage on your project in a way that works for you, you can’t assume that the same method will work for me, nor do you need to worry about whether it would work for me. You can’t tell me “James, I have discovered the right way and you should do it my way, too.” What you say, instead, is “James, would you like me to describe some experiences I’ve had with coverage reporting? I feel good about how I do it, over here in my project.
"

Expect more on this in the coming days.

Shrini

Hola SPAIN - QA&TS international conference and me ...

Hola SPAIN….

I was in Bilbao, Spain, for the QA&Testing conference on embedded systems – Oct 18-20. I spoke about "Test case design and automation" – a topic that I have been working on for the last 3-4 months. My talk and a few others in the test automation track were the ones that focused on "non-embedded" software systems. This paper was an initial attempt to explore two big, complex and largely misunderstood concepts in software testing – "test case design" and "test automation". I will continue to work on this topic and explore it further. I was also given an opportunity to express my views about the "future of software testing and its challenges" at a round table discussion on one of the evenings of the conference.

I met a few nice and interesting people at the conference – Paul Jorgensen, Scott Barber, Ray Arell, Doron. I discussed with these people, at length, various topics ranging from automation and test design to testing and automation in the embedded systems space.

Paul Jorgensen (author of the book "Software Testing: A Craftsman's Approach"), with his depth of experience in testing (both from working in the telecom industry and from his university experience), was a pleasure to listen to and discuss with. He gave me a patient hearing for my bugging questions on topics related to test design. His presentation on "All pairs testing" was one of the thought-provoking papers of the conference.

Ray Arell of Intel was very lively and quickly mixed with the group; I never felt that we were meeting for the first time. His presentation on "How to expand and improve your test capabilities" was another great presentation of the conference. Ray is a highly experienced professional with about 21 years of experience and has to his credit a book on the topic "Change-Based Test Management". With his witty comments, Ray kept the participants hooked all the time. My discussions with him were related to testing and related practices in the microprocessor/semiconductor industries. We traveled together all the way from Bilbao to Bangalore.

Scott Barber (www.perftestplus.com) managed to be at the conference on the second day and was another interesting person to meet. Scott's proximity to people like James Bach, Michael Bolton and Cem Kaner especially made me spend more time with him and understand his current areas of interest. He is a performance testing guru – he gave me lots of good tips and hints on topics ranging from performance testing, test automation and the challenges of independent test consulting to future trends in test automation – which he prefers to call "computer-assisted testing" (and I agree with him). Thank you, Scott, for all those valuable suggestions.

One very pleasant side effect of my presence at this conference was "exploring" the beauty of the city of Bilbao. I am a nature freak and love to hang around greenery, water bodies, lakes, etc. A few locations in Bilbao provided me a perfect opportunity to be with nature. I struggled with the language on a few occasions, and also with the food – it was difficult to find the 100% veg food that I need. At every dinner and lunch, the conference organizers made a special effort to find the closest thing to veggie food for me. They did a great job with this conference. In all, it was a very enjoyable and educational trip for me. I definitely look forward to participating in the QA&Test Conference 2007 … bye Bilbao till then.

Monday, September 25, 2006

Bug or a Feature ?

Differentiating between a bug and a feature is, more often than not, the result of someone (typically a developer or a tester) trying to prove some other person (typically, again, a developer or a tester) wrong. Somebody says "See, this seems to be a bug to me - I am trying to be like a typical end user", whereas somebody else yells back "Look, this is as per this document, and nowhere is it mentioned that the feature should work like this".

In simple words, the difference between a bug and a feature, OR "desirable" and "undesirable", OR "expected" and "not expected" - is with respect to some REFERENCE. What is that reference? Who defined it? How credible is it? The moment all the concerned parties involved in the bug-or-feature conflict agree upon this reference - the distinction becomes very, very clear.

In most cases - a requirements document, a market survey or some expert opinion is considered to be the reference. The confusion, most of the time, is because of the lack of a reference (an oracle) and, still worse, the lack of knowledge that "there is no oracle".

If I find myself in such a situation - I simply say, "This is my observation about this feature. I think we should analyze and explore to see if anything is wrong here. I would be happy to participate in this exploration. I would not get into the issue of whether it is a bug or a feature - I am not the right person to decide that. I only report the observations that I make with respect to the features I test."

So next time you hear an argument like this --- just ask this question: "Can all of us first agree upon the reference?". I bet you would sound a lot more intelligent in the crowd ....

Shrini

Tuesday, September 19, 2006

Are you making most of Test automation?

Test automation is a beautiful and very handy concept. Making the computer do things while you watch for all those things that you would otherwise not observe – that can be a very powerful thing in testing. Deploying automation can magnify the reach of manual tests in lots of ways. Unfortunately, in lots of places automation has been deployed with the intention of replacing (not supplementing, enhancing or extending) human testing.

If you are using test automation for the following cases – you are making the BEST use of the investment made in creating and maintaining automation suites …

1. Those tests that have a high chance of human error; those features that undergo minute, frequent changes that a human eye is likely to miss.
2. The other extreme of (1) – routine and mundane tasks like installation and smoke tests.
3. Frequently repeated, high-volume repetitive tests.
4. Covering multiple platforms.
5. Those cases where it would be impossible to perform the test manually
a. e.g. simulating some behavior that exposes a risk, such as a memory leak


Less desirable or efficient uses of automation

1. Automating a set of low-power manual tests just to cut down the cycle time of running those tests manually – without evaluating those tests.
2. Automating every test that can be automated, aiming to cover the maximum of testing by automation – a notion that "more is better".

Shrini

Tuesday, September 12, 2006

Top burning issues in Software Testing ...

Top 10 Burning issues in Software testing ….

While thinking about a thought-provoking question that would make James Bach write about it, I got this idea. Here is a post that lists the top 7 (I want to reach 10) burning issues in software testing today. I solicit other issues and thoughts about how to address them … Write to me, and I will consolidate them and post back on this blog …

1. External Issues

a. Business pressure (cost reduction, quick time to market, proliferation of computers – hence growing complexity of software systems, and hence growing complexity in testing)
b. The tester's place in the overall software engineering ecosystem (conflicting roles and responsibilities with respect to:)
i. Developers
ii. Business analysts
iii. Sales and Marketing
iv. Stakeholders
v. End users
2. Project management
a. Predictability - questions like: When will you be done? How far is there to go? How much time will it take, and how much will it cost?
b. Dependency on business analysts (specs), developers (code delivery) and project managers' expectations (deadlines)
c. Accountability – what if a bug is missed in testing?
d. The impossibility of 100% testing
e. Testing resources are limited and added late in the project cycle
f. When development is delayed – testing time gets chopped – the deadline remains – and when testing misses a bug – heads roll in testing.

3. The justification-for-existence issue

a. Objective of testing
b. The "anyone can do it" notion
c. Testing as the quality gatekeeper
d. Awareness – how to make others, especially stakeholders, understand the value of testing.
e. Outsourcing

4. Hiring and Managing testers (Performance monitoring)

a. Skills: what is the important skill to look for in a good tester – technical knowledge, business domain knowledge, test process knowledge (all folklore and legacy), formal techniques and methodologies in testing, good learning and thinking capabilities, and so on …
b. Objective goals – how do you know a tester has done a good job?
c. Measuring the performance of testers by the number of bugs logged
d. The notion that "a tester needs to have deep technical knowledge (not mere process stuff) so that he can get respect from development"
e. The notion that "testers need to be business domain experts" – the modern-day interview questions: "Which domains have you worked in? Do you have any experience in the telecom (that too, billing) domain, or health care? Do you have knowledge of capital markets?" And so on…
f. The notion that "we need testers who can code"

5. Tools and the mechanical part of testing

a. Automated testing (as opposed to automated test execution)
b. Regression testing (the repeatability argument)
c. Is testing a branch of computer science? (yes or no?)


6. The philosophy-of-testing issue – what do you think a tester should do?

a. Handling process and metrics fanatics
b. Testing as a factory assembly line (test cases IN, results/metrics OUT)
c. Find bugs – more bugs – better testing
d. Prove that the software works
e. Quality assurance / quality control

7. Lack of education and skill development programs

a. General awareness among the community
b. Problems with certifications
c. Formal university programs
d. Research

Shrini

Thursday, September 07, 2006

Ask James Bach - a question on Testing...

James Bach has a post on his blog inviting questions on testing. This open invitation comes with a rider ... only *interesting* questions will be answered, the rest will be ignored, and the best ones will be *rewarded* with James writing a whole new blog post on them. I am still trying to come up with a question that will make James write a separate blog post - that would be a real question.

Just to remind you of the commenting policy on James' blog --- any comment that makes it onto his blog post (after moderation) is considered to be useful to the readers of the blog (as endorsed by James himself). In my opinion, it is like a "treat" to me when my comment makes it to the comments list.

So what are you waiting for ... Just grab the opportunity ... Ask James a nice question on Testing ...

Thursday, August 10, 2006

A good bug report ...

Read this bug report for the Mozilla browser ---

https://bugzilla.mozilla.org/show_bug.cgi?id=154589

What makes this bug report a special example?

1. It clearly explains the background of the bug - with an example.
2. It makes a strong case for why the bug is important from the user's perspective.
3. It is persuasive enough for a stakeholder to press for a fix.
4. It cites other sources that give supporting references.

As Cem Kaner puts it -- a good bug report makes developers want to fix the bug. If you have managed to draw the attention of developers, PMs and other stakeholders - you have made a strong beginning - make the bug report look appealing.

BTW, bugzilla.mozilla.org is one of the best places to learn:

0. It is an open bug database -- a huge knowledge repository.
1. Good bug reports - look for bug patterns - learn from them.
2. Learn about security vulnerabilities and browser issues.
3. Learn and brush up on the fundamentals of the Web and the standards that make up the "Internet".


Shrini

Sunday, August 06, 2006

Test automation - takes its toll on Microsoft testers ....

I read this old story (reported by The Seattle Times in and around Jan 2004) about Microsoft laying off 62 testers in the Windows group.

http://seattletimes.nwsource.com/html/microsoft/2002155249_mslayoffs20.html

Because (as reported by The Seattle Times - www.seattletimes.com):

1. They had automation, so testers were not required.
2. They needed to cut costs - either send jobs to India (the low-cost option) or aggressively automate...

It is pretty sad to note that a company like Microsoft (I am an ex-Microsoftee) is taking a step like this. Conventional wisdom and all classical/contemporary literature on test automation make it clear that "automation cannot replace human beings and the human part of testing". I am at a loss to understand why Microsoft (some groups in MS) thought that automation could replace testers.

This is a story published more than a year ago, and it is not an official communication from the Redmond-based software giant.

But The Seattle Times is the largest daily newspaper in Washington state and the largest Sunday newspaper in the Northwest. Well respected for its comprehensive local coverage and a winner of seven Pulitzer Prizes, it is also recognized nationally and internationally for in-depth, quality reporting and award-winning photography and design.

I am afraid it sends the wrong signal -- Microsoft should have (might have) done something to set this right...

Anyone listening?

Shrini

James Bach on automation ...

James Bach has posted the following blog post on manual tests and automation.

http://www.satisfice.com/blog/archives/58

It is pretty interesting stuff. Read the post and my comments on it.

Note especially the Rules of Automation as per James:

Test Automation Rule #1: A good manual test cannot be automated.

Rule #1B: If you can truly automate a manual test, it couldn’t have been a good manual test.

Rule #1C: If you have a great automated test, it’s not the same as the manual test that you believe you were automating.

Note the comment chain on that blog post --- really thought-provoking.

More on this later ...

Shrini

Sunday, July 23, 2006

W3C Markup Validator ...

Michael Bolton (www.developsense.com) mentioned this tool while we were having lunch together in Toronto. I was looking for ways to assess, report and hence improve the testability of web applications, and thereby improve their automatability. I found this to be close to what I was looking for.

http://validator.w3.org/

Using this tool, we can verify the web pages of an application against W3C and other web standards. In my opinion, conformance of web applications to standards like the W3C's is useful from the following aspects (a small scripted example follows after the list):

1. Application upgrades and future enhancements in the platform and core technologies will become less painful and will provide cost advantages. For example - using a new web/application server, supporting a new mobile platform, technology upgrades in J2EE and .NET.

2. Improved testability - this is a big issue in automation. If the applications that are candidates for automation are not built for testability (simple things like having unique IDs for GUI controls and windows, so that the automation tool can recognize them) - automation will be difficult and will require lots of custom code to be written, which in the end hurts both the development and the maintenance of automation solutions for web applications.
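If you want to fold such standards checks into an automated suite rather than pasting URLs into the validator's web form, the service can also be driven over plain HTTP. Here is a minimal sketch in Python; I am assuming the validator.w3.org/check endpoint and its X-W3C-Validator-* response headers, so verify against the current W3C documentation before relying on this:

```python
import requests  # third-party HTTP library (pip install requests)

def check_markup(page_url):
    """Ask the W3C validator to validate a public page and summarize the verdict."""
    resp = requests.get(
        "http://validator.w3.org/check",   # assumed legacy endpoint
        params={"uri": page_url},
        timeout=30,
    )
    resp.raise_for_status()
    # the validator reports its verdict in custom response headers (assumption)
    status = resp.headers.get("X-W3C-Validator-Status")  # e.g. "Valid" / "Invalid"
    errors = resp.headers.get("X-W3C-Validator-Errors")  # error count
    return status, errors

if __name__ == "__main__":
    status, errors = check_markup("http://example.com/")
    print(f"Markup status: {status}, errors: {errors}")
```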

In addition, you can also find other free tools at

http://www.w3.org/QA/Tools/

shrini

Friday, July 14, 2006

Some web security related stuff

Information about 3rd party cookies

http://www.mvps.org/winhelp2002/cookies.htm


Using the hosts file to block third-party cookies

http://www.mvps.org/winhelp2002/hosts.htm


General security issues in Windows and IE

http://www.mvps.org/winhelp2002/security.htm#Firewall

Thursday, June 08, 2006

Adaptive Automated Testing ....

I came across this white paper on automation:
http://www.aberrosoftware.com/aat.html

Most of the white paper looked like a sales pitch for the Aberro product, but this seems to be a new and interesting concept.

A brief review by me ----

Objective of automation

• Provide high coverage of the application under test, cost-effectively --- Accepted.
• Enable early deployment in the development cycle, when defects are less expensive to fix --- Does not fall under the traditional test automation that we know today. I would rather apply extensive, skilled manual testing and reviews to limit defects early in the cycle – not automation.
• Be fast and inexpensive for developing and maintaining tests --- Fast, yes, but "inexpensive to develop" will potentially remain on the "forever wishlist".
• Eliminate the requirement for programming skills --- Very incorrect advice, and a nearly impossible goal to achieve.
• Adapt well to changes in application functionality --- A very good one, and somewhat likely to be achieved.
• Enable fast, unattended test execution --- Accepted, and achieved by most of the tools available in the market today.
• Provide strong verification capability --- Verification capability is strongly related to the testability of the application. Most of the tools in the market today don't have this as key functionality.



The comparison of various automation techniques on page 11 is a good one.

What adaptive automation means –

1. No test authoring – no manual test cases are required for automation.
2. It can be adopted in any phase of the development cycle – even when the application is unstable.
3. It is almost insensitive to application changes – the tool adapts to the application.

I am not sure how it works; it appears to be interesting.


Finally, a few not-so-good points in the paper:

1. The paper's title has the keyword "Automated Testing" instead of "Test Automation" – there is a difference. In no uncertain terms, testing as an activity cannot be automated; only the execution part can be automated. Anything that talks even remotely about "automated testing" in the software world is suspicious and is a sales pitch.
2. The paper starts off selling "testing" by quoting a famous figure describing a 60-billion-dollar loss caused by software defects. Do we need to sell testing by talking about some survey done in 2002 and by talking about defects? I am sure there is a better way ….
3. It mentions that "manual testing is labor intensive, hence the cost" – so is quality. Even development is labor intensive – why is there no concept of "automated development" – one that sells? Saying manual testing is costly, hence go for automation, is a bad argument in favour of automation. Further, equating manual testing to "brute force" is to insult the craft of manual testing – I strongly object to this.
4. It mixes up QA and testing throughout the document.


Shrini

Wednesday, May 17, 2006

A note on Test estimation ...

I was discussing test case estimation with one of my colleagues, when a question was raised: "How do we handle creep in the number of test cases? Let us say we initially estimate x test cases, and at a later point this number becomes 2x."

My response was -

Estimation is always an iterative process. You typically make an estimation in terms of test cases early in the test cycle - that is, in the planning phase. Make it clear to all the stakeholders that "estimates are based on the current understanding of the application and the test requirements, and are likely to change". Have this as the main "disclaimer" in the test plan or estimation document.

This will give you the flexibility later in the cycle to ask for more resources and time. If your PM and other stakeholders crib or complain - tell them that as the test team progresses, it will gain more understanding, and the initial estimates are likely to (mostly will) change. Just say "I told you so" and show them that line in the test plan.

This is a diplomatic way of handling future uncertainties in test estimates.

The other side of this situation is that the jump in test case numbers, from 2000 to 4000 say, will happen in the test design phase. So you can write all those 4000 test cases if time permits, and execute only the important ones during the execution phase.

So far, as I have seen in this industry - test effort estimation happens by experience, manipulation and intelligent planning. It is more about negotiation and communication skills than about any science or proven method.

When development - in spite of having about half a dozen estimation techniques, international bodies of knowledge, and certifications like the PMP - fails more often than not at estimating development effort, we in test can, while estimating, start with some good value and keep it open for future updates.

Please spare the testing community from being subjected to scrutiny over estimation... We are learning.

Shrini

Wednesday, April 26, 2006

Windows Registry hacks ..

Here is a lazy post, but quite a useful one ....

Your one-stop shop for all registry hacks on Windows:

http://www.winguides.com/registry/

My favourite registry hack is blocking access to a specific hard drive (let us say C:) for unauthorised users ....

Another one is preventing right-click on the desktop.... A small sketch of the first hack follows below.
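For the curious, such a hack can also be applied programmatically instead of through a .reg file. Here is a minimal sketch in Python using the standard winreg module. NoViewOnDrive under the Explorer Policies key is the commonly documented value for this (drive letters map to bits: A=1, B=2, C=4, D=8, …), but treat the key and value names as assumptions - double-check them against the guide, and back up your registry, before running anything like this:

```python
import winreg  # Windows-only standard library module

# assumed policy key; verify against the registry guide before use
EXPLORER_POLICY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer"

def hide_drive_c():
    """Set the Explorer policy that blocks access to drive C: for the current user.
    Drive letters map to bits: A=1, B=2, C=4, D=8, and so on."""
    key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, EXPLORER_POLICY_KEY)
    winreg.SetValueEx(key, "NoViewOnDrive", 0, winreg.REG_DWORD, 4)  # bit for C:
    winreg.CloseKey(key)
    # Explorer typically needs a restart (or a log off/on) to pick up the policy

if __name__ == "__main__":
    hide_drive_c()
```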

Shrini

Wednesday, March 08, 2006

automationjunkies ....

Found this interesting site --

http://www.automationjunkies.com/index.shtml

It looks a bit outdated (it appears to have been last updated in 2004), but it is a good collection of resources related to automation...

Check it out ...

Shrini

Thursday, March 02, 2006

QA and Testing - Debate continues ...

Further to this discussion on QA vs. testing – Michael Bolton makes a very interesting statement about what QA can do that a tester cannot do, or is not empowered to do.

For myself, I don't like the term "quality assurance", and will do what I can to make sure that I'm called a tester. Unless I have authority and control over schedule, budget, staffing, product content, and product direction, I don't have the ability to assure quality in a product. I can report on it, though--and that's what a tester does, in my view.

Here is the complete Google Groups discussion thread on the topic:
http://groups.google.com/group/comp.software.testing/browse_thread/thread/71c357a1c4d1b342

Shrini

Wednesday, February 22, 2006

Estimation for Test Automation Part 1 ...

Test estimation itself is a mystery, or a magic tool that every test and project manager is trying to master these days - with very little success, I should say. Test estimation for automation gets a little more complicated, as it involves another piece of software - the automation tool. It is like estimating for a full-fledged application development and testing project. Treat it as nothing less. Ask your manager or his/her colleague PM how they do estimation for a complete project (dev and test) - take some leaves out of their experience.

Some thoughts to get you started off:

1. Like a software development project, automation also has its own, similar lifecycle:
a. Automation Planning
b. Test cases Analysis (= Requirements phase)
c. Automation Design (= Design Phase)
d. Coding and Unit Testing of Scripts
e. Automation builds
f. Source control and Testing, Defect management
g. Deployment on Test lab - try run - fix and re-run
h. Sign off
So be sure to factor in time for all of these - just focusing on test cases and their complexity surely leads to underestimation.

2. The following are common and assumed to be factored in by the PM - make sure you check whether your team is up there:
a. The required tool licenses
b. A team trained in the usage of QTP, with some development experience
c. References like coding guidelines, configuration management guidelines and other dos-and-don'ts kinds of documents
d. Setting up the test environment - this is very important. Mostly we assume that it is there, and when you start the project you will spend more than your estimated time getting the test environment up and running for the whole group. Take this point a little more seriously if you are running an "offshore-onsite" type of automation.
e. Framework - the supporting structure beyond the tool itself - decide whether you need one or not
f. Other tools and software requirements - VSS, a database, a shared drive to keep common stuff, etc.

3. Test cases - you need to look at test case complexity from a different angle when you are automating. It does not help to classify test cases as simple, complex, etc. You should look at it from the perspective of the overall development of common code. Take a set of logically related test cases and think about how many reusable functions you will require - for navigation, data input and verification. The more functions a test case needs, the more complex it will be to automate - and hence the more time it will take. When estimating, always consider a group of test cases, not individual ones. Within a test case, the items that need to be considered are the number of steps, the number of inputs, and the number and type of verification points. A rough sketch of this decomposition follows below.
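Here is a hypothetical skeleton (the function names and the scenario are mine, purely for illustration) of one logically related group decomposed into reusable helpers - it is the number of distinct helpers a test needs, not its apparent manual simplicity, that drives the automation estimate:

```python
# Hypothetical skeleton - all names below are illustrative placeholders.

def navigate_to_order_page(app):
    """Navigation helper shared by every test case in this group."""
    ...

def enter_order(app, order):
    """Data-input helper: fills the order form from a dict of values."""
    ...

def verify_order_total(app, expected_total):
    """Verification helper: compares the displayed total with the expectation."""
    ...

def test_bulk_discount(app):
    """One test case = a short composition of the reusable helpers above.
    A test needing many distinct helpers is 'complex' to automate,
    however simple it looks when executed manually."""
    navigate_to_order_page(app)
    enter_order(app, {"item": "X", "qty": 100})
    verify_order_total(app, expected_total=900.00)
```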

When you are asked for an automation estimate - ask for time to analyze the target test cases, and then make a judgment call. If you are asked to estimate in a quick-and-dirty way - shoot back by asking how much error in the estimate they are willing to accept - 20-30%? Tell them you will refine your estimates once you have had a complete look at the test cases. Just as in development you would revise your estimate after requirements (if allowed) - do it here after a thorough analysis of the test cases....

Rest in part 2 ...

Shrini

Monday, February 20, 2006

Pairwise Testing ...

This is an interesting topic in testing, related to testing a feature that is influenced by multiple variables. Here is a blog post from Apoorva Joshi - a single reference that points to several other notable ones - including the two famous Michaels of the testing community: Michael Hunter of Microsoft and Michael Bolton.

http://criticsden.blogspot.com/2005/02/pairwise-testing.html

Right now, I am too anxious to get this post out, so I will come back later with my comments ...

Here are my quick questions about pairwise testing:
1. Why are pairs important? What about triplets, or 4 variables at a time?
2. How are the concepts of "orthogonal arrays" or "Taguchi arrays" related to pairwise testing?
I will study these and come back ... Meanwhile, enjoy reading the above thread, and see the rough sketch below ...
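In the meantime, here is a rough illustration of the core appeal: a pairwise suite covers every pair of parameter values with far fewer tests than the full cross-product. This is a naive greedy sketch in Python, my own construction purely for illustration - real tools (orthogonal-array or AllPairs generators) are smarter:

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs generator: keep picking the candidate test
    that covers the most not-yet-covered pairs of parameter values."""
    names = list(params)
    values = [params[n] for n in names]
    # every (parameter-index pair, value pair) that must appear in some test
    uncovered = {((i, j), (a, b))
                 for i, j in combinations(range(len(names)), 2)
                 for a in values[i] for b in values[j]}
    candidates = list(product(*values))
    suite = []
    while uncovered:
        def gain(test):
            return sum(1 for (i, j), (a, b) in uncovered
                       if test[i] == a and test[j] == b)
        best = max(candidates, key=gain)      # test covering most new pairs
        suite.append(best)
        uncovered = {((i, j), (a, b)) for (i, j), (a, b) in uncovered
                     if not (best[i] == a and best[j] == b)}
    return suite

params = {"browser": ["IE6", "Firefox"],
          "os": ["XP", "Win2k", "Linux"],
          "db": ["SQLServer", "Oracle"]}
suite = pairwise_suite(params)
print(len(suite), "tests cover all pairs; the full product is", 2 * 3 * 2)
# typically 6-7 tests instead of 12; the saving grows fast with more variables
```

Triplets (3-wise) are the same idea with a larger covering set - each step up catches more interaction bugs at the price of more tests.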

Shrini

Server Virtualization

In computing, virtualization is the process of presenting a logical grouping or subset of computing resources so that they can be accessed in ways that give benefits over the original configuration. This new virtual view of the resources is not restricted by the implementation, geographic location or the physical configuration of underlying resources. Commonly virtualized resources include computing power and data storage.

A good example of virtualization is modern symmetric multiprocessing computer architectures that contain more than one CPU. Operating systems are usually configured in such a way that the multiple CPUs can be presented as a single processing unit. Thus software applications can be written for a single logical (virtual) processing unit, which is much simpler than having to work with a large number of different processor configurations.

Virtualization is about running an operating system (the guest OS) on top of another OS (the host OS). This technique enables running several virtual machines with different OSes at the same time on the same hardware. VMWare, MacOnLinux, and Xen are examples of virtualizer software. Virtualization requires the guest OSes to be built for the host machine's processor. It should not be confused with emulation, which does not have this requirement: when an OS runs on top of a virtualizer, its code runs unchanged on the processor, whereas an emulator has to interpret the guest OS code. MAME and Basilisk are examples of emulators. Binary compatibility is another, different feature: it is the ability of an OS to run applications from another OS. For instance, BSD systems are able to run Linux binaries.

Avenues for Virtualization software

1. MS Virtual Server 2005 - http://www.microsoft.com/windowsserversystem/virtualserver/default.mspx

2. VMWare – www.vmware.com
3. XenSource - http://www.xensource.com/ (from Xen Open Source community)


References:
http://www.virtualization.info/
A great article with an introduction to Virtualization [kernelthread.com]
Microsoft Virtual server Road Map :
http://www.entmag.com/reports/article.asp?EditorialsID=87


Advantages of virtualization

1. Increased utilization of existing server hardware
2. Easier maintenance
3. Helps in business continuity and disaster recovery initiatives


Shrini

Friday, February 17, 2006

Automation of Setup programs ..

Let me make a blanket statement - my first impression and view is that "setup programs" in general are not suitable for automation. They are best tested manually - unless you work for a company like InstallShield, whose products themselves help in creating setup programs. Setup programs that have 5-6 steps and take just a folder name as input are not going to provide a return on the investment in automation.

Suppose you decide to play devil's advocate, say "I don't agree with you", and insist on automation - here is one approach.

1. Identify how big the setup program is - how many steps are there? 10? 20? More than 20? How many variations are possible? 100+? If yes - automation will help you.

2. Now identify the most dense and cluttered screen/step in the setup - the one that takes the largest number of inputs. Automate that screen only, then proceed to automate, let us say, the top 5 critical screens with 5 scripts.

3. In my opinion - there is no need to automate the setup flow unless there are more than 20 steps and 100+ possible variations.

Look at the beauty and usefulness of such analysis - while trying to do automation, you ask so many questions in the process. You think up and create scenarios which would otherwise never have been explored had you followed a structured, scripted test plan. See the value here. In the end you may or may not automate all those scenarios, but while trying to automate, and trying to decide whether automation is the way to go - you have tested, and enriched, your test scenarios.

To quote Michael Bolton (www.developsense.com), a friend and mentor - "Often automation, in itself, may not lead to good testing or value directly, but during the course of preparation, whatever analysis you do and whatever questions you ask - 'How can I verify this?', 'Why should I automate this?', 'What can go wrong here?' - are VALUABLE and should be done."

How? By always playing devil's advocate - asking "why" and "why not". If you stop questioning and just accept what you are told - you stop learning and cease to be a tester...

Shrini

Sunday, February 12, 2006

[Photo: Shrini at STEP IN]

Tips for Developer Testing ....

Do stop by this blog post to get the Braidy Tester Michael Hunter's (Microsoft) tips for developer testing ...

http://blogs.msdn.com/micahel/archive/2006/01/25/TestingForDevelopers.aspx

Shrini

In quest of automation tools ...

As automation catches on like wildfire in the software testing field – people are frantically searching for cost-efficient ways to do automation. Some cash-rich companies are investing in industry-standard and proven automation solutions from leading tool vendors like Mercury, Rational, Compuware and Segue – other "not_so_rich" companies are scrambling for "open source" free tools. Some product companies, like Microsoft and CISCO, invest in developing their own in-house tools. So, broadly, we have three categories of automation tools in testing – commercial automation tools, open source tools and in-house tools. The first two are available to the testing community at large. The information about commercial tools is rather well known and is available at the respective websites:

Mercury - http://www.mercury.com

Compuware - http://www.compuware.com

Rational - http://www-306.ibm.com/software/awdtools/tester/functional/index.html
(It is quite surprising to see that the information about the "once highly popular" automation tool Rational Robot has been buried so deep in the IBM site - the link is hardly visible from the IBM main site.)

Segue - http://www.segue.com


Here are some free tools on the web (a partial list, based on my own search for such tools):

1. Watir – Web application testing in Ruby http://wtr.rubyforge.org/
2. Watir Web Recorder - http://www.mjtnet.com/watir_webrecorder.htm
3. Open source testing tools http://opensourcetesting.org/
(Note that these are “TESTING” tools not “AUTOMATION” tools)
4. Selenium- A test tool for web applications- http://www.openqa.org/selenium/
5. TestMaker – Framework for Test automation of web based applications and Web services. http://www.pushtotest.com/Downloads/

This is an ever-growing list – the good thing is that more and more people are investing in developing open source tools. This will build up pressure on commercial automation tool vendors to offer tools that are cheaper and richer in functionality.

Shrini

Further on the road to becoming the finest software tester ...

One of my blog readers asked for my views on the things a good tester should invest in, and on the personal traits and qualities of a good tester. I did write about it on my blog in this post. Here is a sequel to it.

1. The first important thing in becoming a great tester is to question. Question anything around you - be it a door knob, a watch, your vehicle, your access card, your rice cooker, gas stove, TV or cell phone. Get curious about everything around you. Find bugs in everything that you see. The world is a giant piece of software and is full of bugs. Find bugs there. Then finding them in software will become fun.

2. Powerful observation: can you observe things that others don't see? Can you notice that little bit of color fading on a billboard hoarding? Can you find an error or a mistake in the Prime Minister's reported speech? How about in the annual report of Infosys or IBM? Be aware of everything around you and see deep into it. Smart testers find bugs by observing carefully, and they are always curious about things - they never stop.

3. Memory and analytical skills. Take tests or training to improve your memory. Solve puzzles; play chess, Sudoku, jigsaw puzzles. A few of these tricks are mentioned in this blog post.

4. Becoming a good tester should be your goal - "logging tonnes of bugs" or "gaining expertise in automation" will come as side effects - they will be natural to you. Remember - there are no shortcuts - be a lifetime student of learning, questioning and observation.

Have you read the articles from James Bach and Michael Bolton on critical thinking? If not, they would be a good beginning. Don't expect immediate results. Depending upon how committed you are and how fast you get into the mode of questioning and observation - it could take about 1-2 years before you can call yourself a decent tester. That is my rough estimate.

Current industry trends focus on process and factory thinking, and want to make testing routine and predictable - but real testing is always interesting, instantaneous and dynamic. You cannot do good testing by following a "pre-laid-out list" - you need to think.


Shrini

Wednesday, January 25, 2006

Testing ideas for Database Stored Procedures ...

I happened to see a query in a Google group on software testing regarding the testing of stored procedures (as a part of database testing). Here are my views on this topic.

In one way, stored procedure (SP) testing is like testing an API. So apply to SPs all the rules that you would apply to an API. You can design test cases treating the stored proc as a black box - design all possible combinations of valid and invalid inputs and observe the outputs. Be sensitive to the fact that when an SP is tested like an API, you will supply certain inputs which are otherwise not likely to be fed to it when it is called by a middleware component. So a developer may reject a bug related to an SP, saying that the SP will never be called with such a strange set of inputs – the client/middleware layer will filter out all bad or unlikely inputs. A rough sketch of this black-box style follows below.
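Here is a minimal sketch of that black-box approach, assuming a SQL Server ODBC data source and the third-party pyodbc module; the DSN, procedure name, inputs and expectations are hypothetical placeholders:

```python
import pyodbc  # third-party ODBC bridge (pip install pyodbc)

# (name, input args, informal expectation) - hypothetical valid/invalid combos
cases = [
    ("valid id",    (42,),          "rows returned"),
    ("zero id",     (0,),           "empty result"),
    ("negative id", (-1,),          "error or empty result"),
    ("huge id",     (2**31 - 1,),   "empty result"),
]

conn = pyodbc.connect("DSN=testdb")  # assumed DSN for the test database
for name, args, expectation in cases:
    cur = conn.cursor()
    try:
        # ODBC call escape syntax for invoking a stored procedure
        cur.execute("{CALL usp_get_order (?)}", args)
        rows = cur.fetchall()
        print(f"{name}: {len(rows)} rows (expected: {expectation})")
    except pyodbc.Error as e:
        print(f"{name}: database error: {e} (expected: {expectation})")
```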

For structural testing (white box), the following things come to mind...

1. You might want to consider measuring the cyclomatic complexity of the code (of the loops and branches in it). I believe there are some tools that measure this. Here are a few links related to CC measurement - a tiny worked example follows after them. By the way, this is also referred to as "code complexity". The higher the code complexity, the higher the testing effort required to validate it.

http://www.sei.cmu.edu/str/descriptions/cyclomatic_body.html

http://www.linuxjournal.com/article/8035
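As a tiny worked example of the idea (a sketch using the common shortcut that, for a single structured function, cyclomatic complexity = number of decision points + 1, which equals the number of linearly independent paths to cover):

```python
def shipping_charge(order_total, is_member):
    """Two decision points (the two ifs) -> cyclomatic complexity = 2 + 1 = 3,
    so at least 3 test cases are needed to cover the independent paths."""
    if order_total > 1000:      # decision point 1
        charge = 0.0
    else:
        charge = 50.0
    if is_member:               # decision point 2
        charge *= 0.5
    return charge

# three tests, one per linearly independent path:
assert shipping_charge(1500, False) == 0.0
assert shipping_charge(500, False) == 50.0
assert shipping_charge(500, True) == 25.0
```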


2. Database products like MS SQL Server (Oracle - I am not sure) provide monitoring/profiling tools that help measure the time taken for SP execution and other runtime parameters. This gives a good idea of the SP's runtime performance. I have used the MS SQL Server Profiler tool to monitor SP execution.

3. You could also test the SP for security/access control.


Shrini