Saturday, November 18, 2017

Computer does what the programmer asks it to do: why are there bugs?

A colleague of mine said something about software bugs so extraordinary that I have never seen anyone talk about them that way.  The discussion was about how current technologies and advances in Big Data, machine learning and AI have changed or will change the way we do testing, and how they can help testers.  One of the underlying applications of these technologies is a two-fold approach: first mimic human action (vision, speech, hearing and thinking!) and then make predictions about what will happen next.

When it comes to prediction and testing, the obvious topic is "defect/bug prediction".  Bugs are the hardest things to predict due to their very definition and nature.  This colleague of mine said something that captures this sentiment very well: "There are no bugs in the sense that the computer (he wanted to say software... these days it has become fashionable to replace the word software with machine at every possible instance) does not malfunction on its own (barring hardware/power failures etc.). The computer does what the programmer wants it to do, or coded it to do. The problem then lies with the human programmer's mind (or brain) that gave the computer an incorrect instruction."

Where does this take us? It follows from my colleague's logic that the problem lies with the programmer's mind that gave the computer the "wrong" instruction. Predicting a bug would then mean predicting when a programmer gives a wrong instruction. This is a hopeless pursuit, as guessing when a human will make a mistake is an unsolvable puzzle - at most you have some heuristics.

Let us go back to the idea that a software bug occurs when the programmer gives a wrong instruction to the computer. This line of investigation is remarkable. First of all, how do we identify a wrong instruction?
It turns out that a wrong instruction cannot be identified using, say, an algorithm or a mathematical approach. An instruction (such as open a file, send a message to an inbox, save a picture) becomes "wrong" not by itself but through the context, logic, user need or requirement. This takes us straight to the mechanism by which we specify that context, need or logic. That is the realm of "natural language".
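To see how an instruction becomes "wrong" only against a requirement, here is a tiny Python illustration; the billing rule is hypothetical:

# Hypothetical requirement: "round amounts of half a cent and above UP".
# The programmer writes the obvious-looking instruction:
def round_amount(amount):
    return round(amount, 2)   # the computer will execute this faithfully

# Python's round() uses banker's rounding (ties go to the even digit),
# so 0.125 becomes 0.12, not the 0.13 the requirement asked for:
print(round_amount(0.125))    # 0.12 - flawless execution, failed translation

The instruction is not wrong by itself; it is wrong only against the natural-language requirement it was meant to translate.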

Software bugs happen when the programmer "wrongly" translates a requirement that lives in natural language into the world of a computer language.  If we were to predict bugs using the likes of machine learning or AI, we would need tools that can spot this incorrect translation.

Looks promising... right? The state of the art in Natural Language Processing (NLP) is about how closely computers (software, actually) can understand natural language. There are stunning applications of NLP already.

When NLP comes close to understanding human language at its fullest, we move a step forward in the puzzle of spotting incorrect translations of software requirements into computer instructions. I hope so...

But then nature (the human) leaps to the next puzzle for computers: the limits of human intelligence and the vastness of human communication. Even with the brightest of human testers, we often fail to spot bugs in software - how can an approximate and "artificial" system that mimics a portion of human capability do better at spotting bugs? An area to ponder...
BTW - was my colleague right in saying "the computer does exactly what the programmer has asked it to do"? Really?


Thursday, August 10, 2017

Machine learning and Software testing

Machines are learning - good for them. What about humans? The popular buzz these days is about machine learning and artificial intelligence. Never in the past, I think, have the terms intelligence and learning carried so much importance and received so much prime-time media coverage as now. Thanks, ironically, to the qualifiers attached to these words: Artificial and Machine. Nowadays more engineers are investing time in learning how machines learn (what a paradox), and intelligence that is fake... sorry, artificial... gets more funding and attention. Has the value and quality of human intelligence gone down, or has human learning stopped?

One common and popular illustration of machine learning is that a machine (a software program, actually) can now recognize a picture of a cat or an apple - several types of apples and cats - without being explicitly coded to do that. What's more? As this program "sees" more and more apples and cats, it "learns": it gets better at identifying objects accurately. That's a quick machine learning intro for you.
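For the curious, here is a minimal sketch of "learning from examples", using scikit-learn's bundled digits dataset as a stand-in for the cats and apples (assuming scikit-learn is installed):

from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()                  # ~1800 labelled images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.5, random_state=0)

model = SVC()                           # nobody writes rules for "what a 3 looks like"
model.fit(X_train, y_train)             # the model "learns" from labelled examples
print(model.score(X_test, y_test))      # accuracy - and it improves with more data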

When someone takes this idea of a machine identifying a cat or an apple and asks "why can't a machine identify a software bug?" - as this person does in the introduction of this video (at 1:09) - a paradigm shift is needed.

Let us face it - what is common between a program identifying a cat or an apple on the screen and some other program identifying a bug in software?

1. A program with its code and machine learning capability does its job with a relatively simple and formally defined model. There are rules and patterns in the model to assist the identification. Whereas when it comes to the form, shape and identifying marks of a software bug, you will really struggle to define it. A machine learning model that can recognize a software bug needs a far deeper and more complicated definition of a bug.

2. Even if you concede that you have managed to define a model that can recognize a software bug, the real challenge would be identifying it in real time, when the software is running.

Identifying a software bug, in a simple sense, would need the following:
- A mechanism to generate loads of inputs and configurations for the system under test
- A mechanism to operate the SUT with these data sets and observe a potentially large number of possible software behaviors
- Among the possible outcomes, a way to identify the buggy behavior (the oracle problem)

In short, these are hard problems of software testing in the first place. How can machine learning help?
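To make those three mechanisms concrete, here is a minimal, hypothetical Python sketch: random input generation, operating a toy SUT, and a reference-implementation oracle. The function names and the toy SUT are my own illustration, not from any real product.

import random

def sut_sort(items):
    # the toy "system under test" - stands in for real, complex software
    return sorted(items)

def oracle(inputs, output):
    # decide correct vs. buggy behavior; here we are lucky enough to have
    # a trusted reference to compare against - real oracles are rarely this easy
    return output == sorted(inputs)

for _ in range(10_000):                      # 1. generate loads of inputs
    data = [random.randint(-1000, 1000)
            for _ in range(random.randint(0, 50))]
    result = sut_sort(data)                  # 2. operate the SUT, observe behavior
    if not oracle(data, result):             # 3. identify buggy behavior (oracle problem)
        print("bug found for input:", data)

Even in this toy, the oracle is the fragile part: without a trusted reference, step 3 has nothing to lean on.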

I like what Paul Merrill says at the end of this talk on YouTube: "Machines are learning. Are we (testers)?"



Hard Problems in Software Testing (2017) - Part 1

When I set out to write the post with this title, I thought it must be the first of its kind. It turns out there is a book written on this subject. The authors of the book list a number of testing problems and a solution in an approach called "Testing as a Service". In this post, I approach the topic from a totally different starting point.

Let me reflect on the history of computing a bit to set the context for software, software testing and the topic of hard problems.  The word computing refers to the use of computers to solve, or to create systems that solve, a range of problems in areas such as math and information science. The term algorithm, named after the 9th-century Persian mathematician Al-Khwarizmi, gives a formal structure to this problem-solving approach. A step-by-step procedure or method to solve a problem is referred to as an "algorithm". A program (or software) implements an algorithm and solves the problem. Algorithms can be represented in multiple ways: natural language, pseudo-code, programming languages, flow charts, control tables and so on.

In the early 60's and 70's, when computers developed as advanced calculators, math and logic enthusiasts pounced on these new creations to see if their long-pending problems could be solved. A few wanted to find out whether a given number is prime or not, while others wanted to find the shortest route for a traveling salesman. In these implementations, the program would run in isolation (no network or internet in those days, and no auto-updates of the OS or any other software) with an input data set, and would compute the "Answer" or "Solution".
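As an illustration of that style of computation, here is a primality check in Python: one input in, one definite answer out, nothing else in the environment to worry about.

def is_prime(n):
    # trial division - the whole specification fits in one line of mathematics
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(97))   # True - the program halts with a single, checkable answer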

Modern business software, at its core, is built from algorithms performing computation and information processing. In word processors, web browsers and the camera app on your mobile phone, you will see the combined work of several algorithms operating in the background. These algorithms solve basic problems like storing, sorting and classifying information.

Another thing that sets the computational problems of the 70's apart from the business software of the 90's and early 2000's is the introduction of natural language (the likes of English) for specifications. The problems that algorithms solved in the 70's were represented in formal mathematical notation. With natural language at one end and high-level programming languages like COBOL, Fortran, Pascal, C, C++ and Java at the other, we created the problem of translating what is specified in natural language into a computer language. This created a division between those who understand the business domain (natural language) and those who understand the computer language (programmers). This is the first big problem of software development. As a natural consequence, validating that the program did what was specified in natural language also got complicated. Software testing, which branched off from programming as a distinct activity in the early 90's, has been trying to bridge the gap between programmers and business folks.

The field of computer science deals with solving computing problems through algorithms. The hard problems of the algorithm world are classified into the classes P and NP. This classification is based on whether a problem can be solved in time that is a polynomial function of the size of the input. P problems are those solvable in polynomial time; NP problems - Nondeterministic Polynomial time problems - are those for which a proposed solution can at least be verified in polynomial time, even though no polynomial-time way of finding a solution is known. (Whether an algorithm halts at all is a separate and even harder question - the halting problem.)

Where does software testing stand in this classification of P and NP problems? If an algorithm were to test a computer program, would it halt and produce an answer in polynomial time? How would an algorithm approach the problem of testing software?

Here is an attempt to list the problems that make software testing look like an NP problem.


Each problem listed here shows an aspect of testing that makes it hard to have an efficient, less error-prone and cost-effective solution. These problems are hard because the solutions we see in practice are sub-optimal and need constant refinement.

1. Problem of potentially infinite sets of Inputs
Unlike the programs/algorithms of the 70's, modern business software receives and processes a large set of variables and an equal or greater number of input values sent directly to the program. Also, modern software is not an isolated desktop program running on one computer, but a combination of several standalone components running on different computers connected over a network. A software under test, by virtue of this arrangement, continuously receives multiple implicit inputs that influence the outputs the software produces. Then we have the database - the sets of data elements managed by the software - whose state also influences the outcomes. And there are internal (to the software) configurations that allow the software to be configured in many different ways.

The task of generating all, or some "important", sets of direct inputs that are fed to the software while running, plus sets of all the indirect inputs (database, network, internal product configs), is one of the hard problems.
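A back-of-the-envelope Python sketch of why "all inputs" is hopeless; the parameter counts are invented for illustration:

import math

# Hypothetical SUT: 10 free-text fields of up to 20 printable-ASCII characters,
# 8 boolean feature flags, and 5 configuration options with 4 values each.
direct_inputs = (95 ** 20) ** 10        # the text fields alone
configurations = (2 ** 8) * (4 ** 5)    # flags times config combinations

total = direct_inputs * configurations
print(f"about 10^{math.log10(total):.0f} combinations")
# -> about 10^401, before counting database state, network timing or OS versions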


2. Problem of operating the software (and its dependencies) under test through sets of inputs
The largest chunk of testing time is spent operating the software once we have configured the software under test and its dependencies. A simple, single thread of this "operation" is part of a larger unit called a "test case" or "test", which additionally involves making observations and inferences about the outcomes. Given an infinitely large number of inputs (direct and indirect), there are an equally large number of ways of operating the SUT. This is a hard problem. How can we run these "tests" with finite time and resources? Who would run these tests? A human tester?

Then we have questions about how these tests should be specified, in what language, and in how much detail. We have attempted both natural language (manual test cases/scripts) and software language (a JUnit class). As for how to run these tests, we have tried the "interfaces" of the SUT. The most popular interface, the GUI, created an industry of test automation tools and the paradigm of "record and playback". Some geeky programmers used interfaces like web services to execute tests in a non-interactive way. Both approaches have met success to a degree but have left a lot to be desired.

The task of running tests - operating the software through a large set of inputs/flows - is a hard problem that we need to solve, and solve well.
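Here is a minimal sketch of the non-interactive, interface-driven style of running tests; the endpoint, payloads and expected statuses are hypothetical, and it assumes the imaginary SUT is running locally:

import json
import urllib.request
import urllib.error

CASES = [                                # each "test" = input + expected outcome
    ({"qty": 1,  "sku": "A1"}, 200),
    ({"qty": -5, "sku": "A1"}, 400),     # an invalid quantity should be rejected
]

for payload, expected in CASES:
    req = urllib.request.Request(
        "http://localhost:8080/orders",  # hypothetical web-service interface of the SUT
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"})
    try:
        status = urllib.request.urlopen(req).status
    except urllib.error.HTTPError as err:
        status = err.code                # 4xx/5xx responses arrive as exceptions
    print(payload, "->", status, "PASS" if status == expected else "FAIL")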

3. The problem of Observing direct and indirect outcomes/behaviors
While the programs of the 70's produced one or more distinct outcomes as the solution to a given problem, in today's world we need to observe software behaviors. It is funny that we apply the term "behavior" to an inanimate object like "software".

Like the direct and indirect inputs that the software takes while in operation, an important puzzle of software testing is observing "all possible" outcomes. How do we do that? Again, there is a human way and an automated way. Continuing with the task of running tests, you might argue that making observations on outcomes is an extension of executing tests. This is true by and large. The challenge is to specify what to observe, and how. An automated test might say: watch this space, or this folder, or look for this text message, and so on. But that is only part of the test. For a given test, the SUT shows many different behaviors, and capturing all of them is a hard problem. More than that - how do we know that our list covers all that we need to observe?
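A toy sketch of multi-channel observation; the SUT binary and log file here are hypothetical:

import pathlib
import subprocess

# One test step, several observation channels - each a deliberate choice of what to watch.
proc = subprocess.run(["./sut", "--import", "orders.csv"],   # operate the SUT
                      capture_output=True, text=True)

observations = {
    "exit_code": proc.returncode,        # direct outcome
    "stdout":    proc.stdout,            # direct outcome
    "stderr":    proc.stderr,            # easy to forget
    "log_tail":  pathlib.Path("sut.log").read_text().splitlines()[-5:],
    # still unobserved: database rows, emitted events, CPU/memory, files in /tmp ...
}
print(observations)

The dictionary keys are the honest part: everything not listed there is, by definition, unobserved.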

4. The problem of identifying correct and incorrect behaviors - problem of test oracles

Contrary to what we believe, it is often not very clear which software outcome is correct and which one is a bug. To help decide, we use a reference or mechanism that can determine the correct behavior. Requirements specifications give the first reference for what we should expect from the software - in natural language. Given infinite sets of inputs and the corresponding outcomes and behaviors, identifying the right and correct behavior requires a very large number of oracles.

More often than not, humans can and do act as live oracles: using their own experience and some given references, they can identify correct behaviors. At times, captured data and behaviors from previous versions of the application (assumed to be correct) are used as a test oracle.
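A minimal sketch of the "previous version as oracle" idea; both pricing functions are hypothetical stand-ins:

def price_v1(qty):
    # shipped for years, assumed correct - it becomes the oracle
    return qty * 9.99

def price_v2(qty):
    # the refactored version under test
    return round(qty * 9.99, 2)

for qty in range(1, 1000):
    old, new = price_v1(qty), price_v2(qty)
    if abs(old - new) > 0.01:            # the tolerance is itself an oracle decision
        print(f"disagreement at qty={qty}: {old} vs {new}")
# no output means the versions agree within tolerance - silence, too, is an inference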

5. Biggest of all - repeating all of the above, many times, when the software changes
Software is soft, and when it is changed, many things change that were not expected to change. This is referred to as regression. Over the life of software, it needs to be changed many times - updated, with new features and capabilities added. When such a change happens, it is not enough to test and validate the changed areas/features; often we need to confirm that the changes did not break other working parts of the software. This means a continued effort of testing the software (almost) completely every time there is a change. To make matters worse, you need to do this so-called "regression testing" even when external software (external to the SUT) changes. This is the biggest problem we need to solve in testing: the burden of continuous testing of the entire application and its dependencies.

6. Problem of defining and quantifying the value of Testing
Testing has no direct value for the customer or end user, who is interested in how and what features the product offers. The customer assumes that the delivered features work as expected. The value of testing in the performance of the product in the hands of the customer is folded into the larger work of the team - mainly the development team. The indirect nature of testing's contribution to the overall product makes it hard for testing to assert itself and ask for its due share in the success or failure of the product.

Our field is about half a century old now. How would we approach these problems of testing software if we were to start all over today?

To be continued .... in part 2

  • Problem of quantifying how much testing needs to be done and how much has been done
  • Problem of estimating the testing required for a given scope
  • Problem of skill/mindset
  • Problem of expectations from testing

Thursday, August 03, 2017

Testing Maturity - Dealing with a grown-up Kid

Several years ago, during my days as a software testing consultant (not a doer but a consultant), one idea that repeatedly came up was "Testing Maturity". Thanks to the likes of CMM, CMMI, TMM, TMMI, Six Sigma, TQM and others, the IT world was (and mostly still is) obsessed with knowing what it means to be "mature" about just about anything - testing being one of the most talked-about maturity targets.

I still remember my first experience with testing maturity models: when I searched the internet (about 10-12 years back), I did not find much "state of the art" stuff. Then, like many others, I set out to create my own "framework" for assessing testing maturity. Looking back, I see my attempt as very "immature". It looked pretty much like any other similar framework: it had levels of maturity, key focus areas and some kind of recipes to move from level 1 to level x, and so on. My bosses liked it. It made some buzz with the clients I worked with. Now I wonder why I created those things. I thought then that there must be a model by which a testing group can be called mature or immature. The word mature was equated with "good", "efficient", "desirable" etc. I understand now that maturity is not about good or bad - it is about the ability to sustain and adapt to change. No model I know of, including the ones I created, took this approach to maturity.

Another way to look at maturity is how we deal with people. When we say that someone is mature, we mean that the person can deal with adversity better, can behave and react with patience, and so on. We should apply the same idea to software testing.


Recently a friend of mine brought up this idea and rekindled my thinking. Hence I am writing this post.
The most valuable suggestion, when I was working on my testing maturity model, came from my mentor Michael Bolton, who said a remarkable thing about the idea of "maturity" in general. I am going to build my renewed model of testing maturity on this interpretation of maturity. Michael suggested that one useful way to define maturity for software (and testing) is to draw parallels with the idea of maturity in the biological sciences. Charles Darwin, in his theory of evolution, defines maturity as the ability of a species to tolerate and adapt to its changing surroundings. We are all familiar with the tag line of Darwinian theory: "survival of the fittest".
So my definition of testing maturity draws from this idea from the biological sciences: testing is considered mature if it successfully adapts to generations of changes happening in its environment (the business and market environment) and retains its relevance and importance. How do you identify such a testing practice? Stakeholders are willing to pay for it (challenge me if you find this statement problematic).
Let us now look deeper. I think the idea of testing maturity can be applied to a specific "testing team" (a group of people operating under a corporate structure) or to a function or task that needs to be done as part of software making (a simpler term than SDLC, which takes me into many detours I would like to avoid now). The software services industry, system integrators and big consulting companies would like to apply the term to a "Testing Practice". Though the term testing practice sounds very professional (the likes of Gartner and Forrester would love it) and appears to include both team and function, on the ground it mainly implies a team, a structure and some rule book. In most cases, software testing maturity is applied to "independent" testing groups - needless to say, these groups want the label "mature" so that they continue to live and get funding. Also note that the aspects of maturity as applied to a team/structure and to testing as a function are not mutually exclusive; there are some common elements. One reason I want to make this distinction is that many aspects of maturity take a different shape if I look at testing as a group or structure rather than as something that a specific team does. You know where I am hinting: yes, the Agile and DevOps world of software making.

Testing maturity as applied to team/structure
I look at testing team maturity in terms of leadership, doers and testing culture.
A mature testing leadership would ensure that the testing team responds to change in the ecosystem in which it operates and adapts itself to survive and succeed. A mature testing leadership brings about changes in the team as required and develops collaborative partnerships with developers, project managers, production support teams and stakeholders. A mature testing leadership would not hold its principles and policies as something cast in stone. A real test of the maturity of testing leadership comes when stakeholders question the very existence of testing as a service that a given team can provide. Most independent testing teams have faced this test. A mature testing leadership would be more than willing to break the corporate structure of the test team and be ready to be mixed or morphed into any other emerging structure of the organization - an act of self-sacrifice.  Call your testing leadership mature if it can dissolve itself (the team structure, mainly) in the larger interest of testing as a function.
Let us now come to the "doers" - I deliberately use this term to indicate the group of people who do testing, rather than the ones who "manage" or "coordinate" it. Mature testers (doers) focus on constant learning and do not identify themselves with any specific domain, technology, tool, process or the like. Mature testers understand the value of adapting to a changing ecosystem and work on acquiring skills to remain relevant in the emerging situation. A mature tester thus can operate effectively in any circumstances and be useful towards the goal the broader team is pursuing.

A combination of mature testing leadership and mature testers gives the ability of a "quick" yet thoughtful response to "change".  James Bach characterizes an expert tester (sorry, I just moved from a mature tester to an expert tester - stay on, I hope to establish the connection) as someone who can test under any circumstances of time and other resources.  This ability to test "well" under any circumstances is what gives testers and testing leadership a crucial edge and the ability to survive. Isn't that, then, a key aspect of maturity?
Finally, the culture. This is something that mature leadership and mature testers together demonstrate when they are in action. A mature testing culture does not whine about changes but strives to change itself to adapt. A mature testing culture manifests itself in beliefs, collective thinking and a set of written or unwritten rules about how testing should be conducted. On any question related to a tactical or strategic aspect of testing, the testing culture helps testers (and leads) with a "default" response. If you watch a team of testers in action, you can distinctly notice the "culture" - and if you cannot, then probably the culture has not set in yet.
As testing as a function continues to evolve and becomes something that needs to get done as part of software delivery, it would be appropriate to turn the focus to the "mature tester" - the individual. Here too, my definition of maturity is along the lines of "one who can continuously adapt to changes in the environment and evolve".  Are you a mature tester?

Saturday, April 01, 2017

Managed diseases and Failure of science

[off topic]

In my opinion, about 70-80% of the ailments or diseases treated by doctors using so-called evidence-based medicine fall into the category of "managed diseases", requiring the patient to take medicines lifelong, in addition to regular tests and medical consultations. A very small portion of diseases today is actually curable. This is in spite of the spectacular progress of science and technology. From knowing the super-fine structures inside the atom's nucleus to the genetic code, from nano-medicine particles to the mechanical heart, from feeling robots to cloned animals - we are at the peak of knowledge compared to any generation of the human race in the past. Yet more than three quarters of our diseases are incurable, and we have darkness at the heart of intense light.  Why are science and technology failing to lift the suffering of people with these "managed diseases"? Why has science come a poor second to nature and life?

I acknowledge the role of science and technology in dealing with threats to life from outside - like accidents, fire, suicide etc. The tools and methods of science have been life-saving. The probability of saving the life of an accident victim has increased significantly over the last 100 years. That is a really commendable job by science and the medical world.

Coming back to managed diseases: why should we go to a doctor if he cannot cure a disease that has come from inside the human body - the likes of diabetes, blood pressure, asthma, thyroid disorders and the deadly cancer and AIDS? The experience of those who meticulously follow doctors' prescriptions is not much better, barring a few edge cases. People lose money and mental peace and suffer through pain while blindly believing in modern science and evidence-based medicines. Doctors, on the other hand, blame the poor patient who could not keep up with the diet or exercise schedule. When a diabetic patient is about to lose a toe due to high sugar levels, doctors would say the patient did not keep the sugar levels under control. The poor patient did all he could - walked an hour a day, gave up sweets for life, no fried food, no non-veg, no alcohol - probably lived only on salads and veg stuff. Still he suffered all the consequences of this glorious managed disease.

The commercial angle of making money through these diseases - pharma, big hospitals, medical equipment manufacturers, doctors and the institutions that produce doctors, and all the connected ecosystems - is difficult to miss.

Making money is fine - but cure the disease.

When any alternative medicine or mystic claims some cure, the entire world of so-called intellectuals, rationalists and supporters of science/evidence-based medicine pounces on that method and finishes it off. The media plays hand in hand to portray anything other than "science" as essentially bad and unreliable.

How can we flip this ratio of managed diseases to curable (time-bound) diseases? Can science accept its defeat humbly and make way for unconventional methods or new thinking about life?

Sunday, March 05, 2017

There is no such thing as "Agile Testing" - Part II

My slides from my ATDAsia keynote on this topic are here.

Here are a few key points that I have developed since part 1 on this topic.

1. The problem with current "Agile" is that it is stuck, and dying its death, in rituals and ceremonies. So-called consultants and experts of "Agile" appear to be pushing rituals and ceremonies without explaining the context and meaning behind them. I find it very surprising to see people feel proud about following rituals in this rationalist, objective engineering discipline.  Don't you find the term "rituals" unacceptable in our field of software, which stands as an epitome of human knowledge?

What happens when you do not know the reason and purpose behind a ritual and simply follow it? One - you will apply it wrongly, or apply it (the ritual) correctly to the wrong situations. When you do something as a best practice, you forget the context in which the practice worked and how similar or different your context is. The aura of the best practice and the cult of the expert just blur your thinking, and you get hypnotized. That is where the problems start in an Agile implementation.

2. There are many good practices in Agile - sorry, practices that have emerged from the kitchen (not the factory) of Agile. These are excellent examples of how smart people have solved problems in their context. If you understand the context and how the problem and solution aligned with that context, you have a fair chance of learning, customizing and using the practice in your own context. I find practices like lean documentation, dev/test pairing, continuous integration, a focus on delivering working software, and an emphasis on the right distribution of automation across technology layers as good and worth studying. If you start asking for the best practice, the best tool, the best framework, you will miss the background and end up applying a practice wrongly.

3. Most people agree on one thing about Agile: "culture". If you want to make Agile work in your context, you need a cultural change, regardless of what your current culture is. This may sound counter-intuitive, but it is true. For Agile to work, you need culture change.

Here is my prophecy about Agile and culture: "The culture change you are seeking for Agile to work IS NOT GOING TO HAPPEN". What is the basis for this prophecy? I think culture is made up of people working in groups, following rituals while mostly setting aside rationality. Humans are lazy, unpredictable, fearful, greedy. Humans want to make profits continuously through software. While not fully understanding "intelligence", humans have set their eyes on "artificial" intelligence as the future. For problems in culture, humans seek solutions in processes, frameworks and tools.

If you want Agile to succeed, take these problematic humans out of the equation - and with them goes the need for this trouble of changing culture. Can you?

What do you think? Let me know.

Saturday, February 25, 2017

Coaching Testers: An approach for finding answers

Often, I get mails from testers and budding testers asking questions and seeking my answers. Some of them are questions about something I wrote on my blog. Most of the questions are of the form "what is xxx" or "how to do yyy".

Here is my advice/suggestion on how one should approach getting answers to the questions they have on a given topic (this applies to any quest to know something).

Before I answer a question, I will ask you: what do you think? How will you find out? What information or facilitation do you need to find the answer to this question?

This is how James Bach challenged me when I used to ask him questions in the beginning. As James kept pushing back, I realized I must do some homework before asking. In the process, I learnt to first find some hints or pointers to the question I have, and then seek help by saying: "Here is a question." "Here are my initial thoughts or pointers on this question." "Here is what I find contradictory or not fitting in." "Here are the sources of information that I used."

Most of the time, through this process of figuring out, you will get answers in 2-3 iterations without any external help. In this process of finding out, when you are stuck, ask yourself: what information do I need? How will I get that information?


Give it a try - you will learn to find the answers to your questions yourself. That would be a fascinating journey.

Saturday, February 18, 2017

Automation takes away Jobs - A reality check

I am not talking about "test automation" here. There is media hype sweeping across these days about jobs being lost, people being fired, people being retrained on "cutting edge" technologies or re-assigned to new technologies, etc. This Quora question is an example of people's interest in the subject.

Let me do a deep dive into this topic.

It's media hype and sponsored propaganda
If you read carefully into all such reports and media articles, and apply some logic and analysis, it becomes clear that there is hype, and that some group of people with vested self-interest has been spreading the news. Most of these articles conclude with a call for readers to do something to avoid "job loss" or similar harm happening to them due to automation. They might point to learning some so-called "new tool" or "technology", or taking up a (paid) course, or getting a certification. So the commercial interest is apparent. For the media, scaring people about some future danger has been a favorite tool to make ends meet. Be it healthcare, business or politics, spreading news about doomsday has worked well for the media to shape public opinion and even make the public take action. People rush to get vaccinated, buy a term insurance policy, get a health checkup, hit the gym (commercial interest again) or take a training course - all such actions have negative media propaganda in the background. As humans, through evolution, we have in our blood an affinity for negative or bad news. We are more likely to believe a prediction of bad news than a more compelling piece of good news. Media, sales and marketing folks exploit this. Can you see this in the tales about job losses through automation? They will scare you to the core. When one is scared, the rationality and judgment faculties of the human brain are at their lowest. Thus a bunch of scared folks first form opinions about a theme and then act almost exactly as the "scare-mongers" expect.

What kinds of jobs are in danger from automation?
Compared with factory and manufacturing assembly-line jobs needing human physical effort in addition to some cognitive effort and skill, IT/software jobs are (or were) considered white-collar, brainy jobs. In IT and software, jobs involve varying degrees of human elements and intervention. Geniuses in the IT services world, riding the outsourcing wave, invented so-called "low-risk", non-strategic tasks such as data entry and management.  These jobs were defined such that they merely required humans to follow some predetermined SOP (standard operating procedure) in a business process. When there is cost pressure, clients ask the service provider to bring in efficiency. How can one bring efficiency to such brain-dead jobs? Explore the option of removing humans from jobs that can be done more efficiently by a machine or a software program. Enter "automation". Look around your business or workplace: which jobs do not require human intelligence and empathy? If you find such jobs, you can see them going away, handed to robots of some sort.

On the software technologies side, people say older technologies are going away.  IT services companies providing outsourced technology services will need to support old technologies as long as the client pays for it. How long will the client stay with an old technology? That is a business and political question related to the client's business. Typically there is a huge cost to move from legacy tech to new tech; it is called a "migration" or "re-engineering" program. Since such a change involves new learning for the staff, new infrastructure and the cost of development/migration, businesses tend to stick with an old tech stack until the point where it becomes absolutely impossible to continue. When did businesses move from Windows XP to Windows 7 as the desktop operating system? Around 2013 or so, when Microsoft announced the end of support for Windows XP. This is an example of a technology upgrade. As an individual, if you are stuck with an outdated technology, watch out.


Is this new?
What do you understand by the term "digital"? In the early 90's, it would have meant anything done using a "computer". From the year 2000 onwards, it meant something done using the internet. In the last 6-8 years, it has meant "mobile". But at the core, in computing technology, the word "digital" contrasts with "analog". When did we last hear about "analog" computing devices? I had good fun the other day arguing with a colleague that the internet is as "digital" as mobile. She believed that the qualifier "digital" applies only to "mobile". What will happen if quantum computers make their way into mainstream computing - will those computers be called digital?

Going digital, for a business, means in a simple sense that a part or the whole of the business involves "mobile technology". This shift from desktop computers to the internet and now to mobile has caused many traditional jobs that were performed with earlier "digital" technology to go away. Just like the digital camera era killed the likes of photo-film maker Kodak.

Media propaganda makes one believe at first that such job losses are unprecedented and happening for the first time. In the past too, when computers first came, people who resisted them lost jobs, as in some sense the computer did the work better and cheaper than humans. Some intelligent ones immediately re-skilled themselves and embraced the change. These folks not only survived the technology-change wave; some even flourished like never before.  Like biological evolution, businesses constantly keep looking for ways to make more money with constant or reduced capital and resources.

Your career is your responsibility
A software job, fortunately or unfortunately, is not covered by an employee union (by and large; there might be exceptions). When your company fires you without proper justification, you cannot knock on some outside entity's door to get reinstated. Businesses worldwide that use so-called skilled and white-collar jobs can take the liberty of downsizing the workforce should the going get tough, with falling revenues and profits. While on the job, keeping oneself updated with skills in emerging areas of technology and business is the responsibility of the individual.

The Infosys-related Quora post above mentions that the affected people are trained in "cutting edge" technologies. I ask: why do people get stuck in "blunt" or old technologies in the first place? Why do these folks (if at all they do) want their companies to take care of their careers or skills? Why can't these folks keep improving their skills based on emerging market conditions? If a company displaces people working on a "blunt" technology due to low or no demand, should you blame the company? While keeping people working on an outdated technology might be a business imperative for companies, getting stuck in outdated technologies, knowingly or unknowingly, is detrimental at the individual level to one's career, and to society at large.


If you are happy with a 9-to-5 cool job that does not require any great application of skill or knowledge, be ready to have your job made redundant at any time. When jobs that do not require skills are lost, the media might make noise about it. Again, if you see the vested interests behind this, it becomes obvious that it is an attempt to push public opinion in one specific direction, away from reality. You cannot depend on your company to keep you in front-line tech or business work all the time. It is your job to be good at what is in demand, and then have the company keep you at the forefront.

When you hear "automation takes away jobs", ask "what kind of jobs?" and "what am I supposed to do?" Watch the reaction and share it with me. You should be able to smell the vested interest behind such a claim.  Would you?

Friday, February 10, 2017

Two important lessons for the success of Test Automation

James Bach wrote this great article on how not to think about test automation way back in 1999. Anyone starting out in automation, and those wanting to learn more about it, must read this article. First of all, automation is about testing. If you think narrowly about testing, your automation will be narrow.  Even today it is not uncommon for business leaders to say "we do not have time or resources for testing - do automation". I hope some business leaders in IT, software and testing are reading this post and will amend their views.

I would like to share two key lessons that I have learned over the years, which you can use to make the most of the money you are putting into automation.

If a test (case) can be specified like a rule - it MUST be automated
Automation code is software and thus, obviously, is built on some kind of specification. Most GUI automation (QTP, Selenium) is typically built from so-called "test cases" written in a human language (say English). It is the first question an automation guy will ask when starting automation: "where are the test cases?". In the dev world, automation takes on a different meaning. In TDD-style automation (if you call TDD tests automation), the test itself is the specification: a product requirement is expressed as a failing test to start with. The BDD approach takes this to the other boundary: specify tests in the form of expected behavior. So these automated tests are based on a specification that is in a human language, but expressed (mainly) in business terms and in a fixed format (Given-When-Then).
The key lesson here is: if a test can be specified like a rule, with a clearly defined inference to be drawn from it, it should be automated. Automating a test means creating a program to configure, exercise and infer the results of whatever the test is trying to validate. Michael Bolton calls such a test a check - a meaningful distinction. If a test has a human element in it, mostly for the inference, you cannot possibly automate the test in its full form.
How do you implement this lesson in your daily life as a tester? When designing a test, see if you can specify it like a rule. If you can, then explore ways to write a program for it; that test then becomes automated. In this way, as you build a suite of tests, some are specified in a way that makes them easy to automate, and some are specified in a way that requires a human tester to apply her intelligence to exercise them and infer the results.
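As a minimal sketch of a test "specified like a rule", here is a pytest example; the discount rule is invented for illustration:

import pytest

def order_total(qty, unit_price):
    # stand-in for the real SUT; hypothetical rule: 10+ items get a 5% discount
    total = qty * unit_price
    return total * 0.95 if qty >= 10 else total

@pytest.mark.parametrize("qty,expected", [
    (9,  9 * 2.0),              # below the threshold: no discount
    (10, 10 * 2.0 * 0.95),      # at the threshold: the discount applies
])
def test_discount_rule(qty, expected):
    # configure, exercise, infer - all three steps are mechanical once the rule is crisp
    assert order_total(qty, 2.0) == pytest.approx(expected)

Because the rule states exactly what inference to draw, the whole check can run without a human - Bolton's "check" in code form.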

Automated tests (checks) are like guards for product code
A child asks his father, "what is the use of the brake in a car?". "It helps to stop the car," says the father. The kid responds, "No... I guess the brake helps the driver drive the car as fast as he wants, because he has a means to stop when needed." On similar lines, having automated tests around a piece of code - literally guarding the code - empowers the developer to make changes to the code faster. More often than not, the bigger speed-breaker for development is the fear of breaking working code. Developers are mostly worried about large chunks of legacy code that are rarely understood fully. With automated tests as guards, a change in the code's behavior is flagged via a failing test. Armed with the support of guarded code, developers can make changes faster and can depend on the tests to tell them if any change has broken some other "working" code.

How do you implement this lesson? Work with developers and help them create tests that guard their code. These tests should work like "change detectors" (a tiny sketch follows). Writing such test automation requires knowledge of the product code and the principles of unit testing. Not for faint-hearted GUI QTP/Selenium folks.
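A tiny sketch of such a change detector; the legacy routine and its pinned outputs are hypothetical:

def legacy_shipping_fee(weight_kg):
    # imagine this is old code nobody fully understands anymore
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg < 2 else 5.0 + (weight_kg - 2) * 1.5

def test_shipping_fee_unchanged():
    # pinned, currently-observed outputs: any refactoring that alters one fails the test
    assert legacy_shipping_fee(1) == 5.0
    assert legacy_shipping_fee(2) == 5.0
    assert legacy_shipping_fee(4) == 8.0

The test does not claim these fees are "right"; it only guards that they do not change unnoticed.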