Software Testing is Dead, Long Live Software Testing

By Rajiv Jain (CEO, ThinkSys)

Forgive the rather corny headline (fans of The Phantom will know its origin) but as someone who has been in the software testing business for years now, I’ve grown more than a little weary of these periodic announcements of the death of software testing. I saw a quote from James Bach somewhere that went, “Pretty good testing is easy to do. That’s partly why some people like to say ‘testing is dead’ – they think testing isn’t needed as a special focus because they note that anyone can find at least some bugs some of the time. Excellent testing is quite hard to do.” Bach’s assessment is likely true, but I think it is also possible that these prophets of doom aren’t really pronouncing the death of testing, rather the end of testing “as they know it.” That is a statement I am much more likely to get behind. If you look at the testing world around you, it has changed significantly in even the last two to three years. Let me take this chance to highlight a few changes that have particularly struck me.

  1. Agile and DevOps:
    Let me take the liberty of lumping both these software product development movements together for the purpose of this post. Shortening development and release cycles, pushing into continuous development, continuous integration, and continuous delivery, have shaken the fundamentals of how we used to test software. These are the days of continuous testing – no more waiting for the end of a release, testing it in some kind of staging area, and then pushing it back to engineering with bug reports in tow. Apart from these changes to the way we work, one fundamental change this acceleration in release cycles has wrought is one that I am very happy about. Today, all around, we see testing becoming much more tightly coupled to the product and business goals, and at a much earlier stage in the product development cycle. Testing podcaster Trish Khoo has said, “The more effort I put into testing the product conceptually at the start of the process, the less effort I had to put into manually testing the product at the end because less bugs would emerge as a result.” Product owners too now recognize that if testing has to deliver in such a time-constrained environment, it has to be involved early, get visibility of the direction the product is likely to take, and the possible roadmap of development. This is great news!
  2. Developers Testing:
    I have expressed my opinion in the past on whether developers should test their own code. Well, my opinion is moot – the fact is that a blurring of boundaries between development and testing, and of who does what, has already taken place. DevOps teams are routinely staffed with development, operations, and testing skills as the product rolls out full-speed ahead. The need for greater code coverage in a shorter time has driven even greater use of test automation, and writing the code or the scripts for automating test cases is a task developers are well suited for. Joe Colantonio says, “I remember the days when QA testers were treated almost as second-class citizens and developers ruled the software world. But as it recently occurred to me: we’re all testers now.” No more are testers looked down upon as children of a lesser God, so to speak.
  3. The Role of Cloud:
    The outlook has turned positively cloudy over the last few years. More and more products, or apps as they choose to be called now, are SaaS-based and consumed on-demand over the “series of tubes” that make up the web all around us. Testing for these products (OK, apps) is pretty different, most specifically in the emphasis that now has to be placed on security and performance. Your app is only as secure as the last hack, so an almost unreasonable share of the testing focus is on identifying vulnerabilities and keeping the architecture and the internals of the app safe from all possible attacks. Then there is performance. The days of your enterprise application sitting on a dedicated server on your network seem to have gone for good – when the app is on multi-tenant infrastructure that is being accessed by hundreds, possibly thousands, of other users, it can’t show any strain from the load, and this becomes a key focus area for those testing it.
  4. Mobility:
    Mobiles have changed our lives, so can software testing be immune? Software products today have no choice but to have a mobile strategy. That customers will access these products from their tablets and smartphones is a given. To those tasked with testing these products, this means a load of added complexity. Now your test plans have to factor in devices of varying form factors and differing hardware and software capabilities. There is access speed and bandwidth to think about. The biggest change, though, is possibly in testing the user experience. The small screen presents differently and is used very differently, and the software products that expect to run on that small screen have to be tested with that in mind. All of these are paradigm shifts from the things we used to test for even a few years ago.

Let me end with another quote, this one by Mike Lyles, “While we may understand the problem, it is critical for us to change our thinking, our practices, and continuously evolve to stay ahead of the problems we face.” The fact is the situation around us is always going to change – as capable software testers, our role will always be to find a way through, and to help build better software in the bargain.

Regression Testing in Agile Scenario

An increasing number of software development teams today favor agile software development over other development approaches. There’s no need to introduce Agile, but let’s revisit it anyway! We know that the Agile development methodology is adaptive in nature. It focuses on building a product based on empirical feedback gathered from frequent testing, by breaking down the product development process into shorter development cycles, or sprints. The key is that since agile development is iterative, all aspects of product design are frequently revisited to ensure that change can be effected as soon as the need for it is identified. This is where we segue into testing – clearly, in this scenario, developers need to test more and test fast. To do that, curating a comprehensive test plan based on the agile methodology is essential.

In agile development, testing needs to grow with each sprint, and testers need to ensure that the newly implemented changes do not affect the other parts of the application and its existing functionalities – we know this type of testing as Regression Testing, and it holds a very important place in the agile development scenario.

Regression testing essentially checks if the previous functionality of the application is working coherently and that the new changes executed have not introduced new bugs into the application. These tests can be implemented on a new build for a single bug fix or even when there is a significant change executed in the original functionality. Since there can be many dependencies in the newly added and existing functionalities, it becomes essential to check that the new code conforms with the older code and that the unmodified code is not affected in any way. In agile development, since there are many build cycles, regression testing becomes more relevant as there are continuous changes that are added to the application.
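As a concrete, deliberately simplified sketch, a regression suite can be a set of automated checks that pin down existing behavior so that any change which breaks it fails fast. The `apply_discount` function below is a hypothetical stand-in for shipped functionality, not taken from any specific product:

```python
import unittest

def apply_discount(price, percent):
    """Shipped behavior: percentage discount, clamped so it never goes below zero."""
    discounted = price - (price * percent / 100)
    return max(discounted, 0.0)

class RegressionSuite(unittest.TestCase):
    """Pins down current behavior so later sprints cannot silently change it."""

    def test_standard_discount(self):
        self.assertAlmostEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_is_identity(self):
        self.assertAlmostEqual(apply_discount(50.0, 0), 50.0)

    def test_discount_never_goes_negative(self):
        # A bug fixed in an earlier sprint; kept here as a regression check.
        self.assertAlmostEqual(apply_discount(10.0, 150), 0.0)
```

Run with `python -m unittest` on every build; each new sprint appends new checks to the suite rather than replacing old ones.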

For effective regression testing in agile development, it is important that a testing team builds a regression suite right from the initial stages of software development and then keeps building on it as sprints add up. A few things to determine before a regression test plan is built are:

  • Identify which test cases should be executed.
  • Identify what improvements must be implemented in the test cases.
  • Determine when to execute regression testing.
  • Outline what needs to be automated in the regression test plan, and how.
  • Analyze the outcome of the regression testing.
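One way to approach the first of these points – deciding which cases to execute – is to tag each test case with the modules it touches and select the cases affected by the sprint’s changes. A minimal illustrative sketch in Python (the case names and modules are made up for the example):

```python
# Hypothetical regression inventory: each case is tagged with the modules it covers.
test_cases = [
    {"name": "login_flow", "modules": {"auth"},            "automated": True},
    {"name": "cart_total", "modules": {"cart", "pricing"}, "automated": True},
    {"name": "pdf_export", "modules": {"reports"},         "automated": False},
]

def select_for_regression(cases, changed_modules):
    """Execute every case that touches a module changed in this sprint."""
    return [c["name"] for c in cases if c["modules"] & changed_modules]

changed = {"pricing"}
print(select_for_regression(test_cases, changed))  # ['cart_total']
```

The same inventory answers the automation question from the list above: any selected case with `"automated": False` is a candidate for scripting before the next sprint.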

Along with this, the regression test plan should also take into account Performance Testing to ensure that the system performance is not negatively affected due to changes implemented in the code components.

In the agile environment, Regression Testing is performed under two broad categories:

  • Sprint level Regression testing: This regression test is focused on testing the new functionalities that are implemented since the last release.
  • End to End Regression testing: This test incorporates end-to-end testing of ‘all’ the core functionalities of the product.
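These two categories can be kept as separate, composable suites, so the fast sprint-level run happens on every build while the full end-to-end run happens less often. A small sketch using Python’s `unittest` (the checkout function and test cases are hypothetical):

```python
import unittest

def order_total(subtotal, gift_wrap=False):
    """Stand-in for product code; gift wrapping is this sprint's new feature."""
    return subtotal + (5 if gift_wrap else 0)

class NewFeatureTests(unittest.TestCase):
    """Sprint-level regression: only functionality added since the last release."""
    def test_gift_wrap_fee_is_added(self):
        self.assertEqual(order_total(100, gift_wrap=True), 105)

class CoreTests(unittest.TestCase):
    """Always part of the full end-to-end regression run."""
    def test_plain_order_total(self):
        self.assertEqual(order_total(100), 100)

def sprint_suite():
    return unittest.TestLoader().loadTestsFromTestCase(NewFeatureTests)

def end_to_end_suite():
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    suite.addTests(loader.loadTestsFromTestCase(NewFeatureTests))
    suite.addTests(loader.loadTestsFromTestCase(CoreTests))
    return suite
```

A CI job can run `sprint_suite()` on every check-in and reserve `end_to_end_suite()` for nightly or pre-release runs.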

Considering that the build frequency in agile development is accelerated, it is critical for the testing team to execute the regression testing suite within a short time span. Automating the regression test suite makes sense here to ensure that the testing is completed within a given time period and that the tests are error free. Adding the regression testing suite to the continuous integration flow also helps, as the correct working of the existing functionality is then automatically evaluated every time developers check in new code.

Regression testing can adopt the following approaches:

  1. The Traditional Testing Approach: In this method, each sprint cycle is followed by a sprint-level regression test. After a few successful sprint cycles, the application goes through one round of end-to-end regression testing.
    This method allows the team to focus on the functional validity of the application and gives testers the flexibility to decide on the degree of automation they want to implement.
  2. Delayed Week Approach: In this approach, the sprint-level regression test is not confined to a timeline and can spill over into the next week. For example, if the sprint-level regression test that is supposed to be completed in Week 2 is not completed, then it can spill over into Week 3. This approach works well at the beginning of the testing cycles, as the testing team at that time is still in the process of gaining an implicit understanding of the functionalities and the possible defects. Instead of parking bugs and addressing them at a later date, this continuous testing lifts the burden of the backlog that would otherwise build up before end-to-end regression tests.
  3. Delayed Sprint Approach: In this approach, a single common regression cycle is used, delayed by one sprint: the regression test cases run during the second sprint cover the functionality stories that were a part of the first sprint. Since the regression cycle is only delayed by a sprint, this approach removes the need for two separate regression test cycle types. It also avoids a longer end-to-end regression test cycle. However, this approach has two challenges:
  • Maintaining the sanctity of the regression tests is difficult.
  • Maintenance of automation efforts increases considerably.

Organizations need to decide on their approach to regression testing keeping the continuity of business functions in mind. Regression tests target the end-to-end set of business functions that a system contains and are conducted repeatedly to ensure that the expected behavior of the system remains stable. Our view is that having a zero-tolerance policy towards bugs found in regression tests makes agile development more effective and ensures that the changes implemented in the product do not disrupt the business functions. We are not among those who agree with Linus Torvalds in his assessment, “Regression Testing? What’s that? If it compiles it is good; if it boots up, it is perfect.”

5 Types of Performance Testing You Need to Know

By slowing down search results by only 4/10ths of a second, Google found that it would reduce the number of searches by 8,000,000 per day! By its own calculations, 1 second of slower page performance could cost Amazon $1.6 billion in sales each year, and for every 100ms of latency, it loses 1% of its sales.

People may not mind waiting in line in a real shop, but on the web, a delay of even 4 seconds makes 1 out of 4 customers abandon the web page. The importance of performance testing for your web application, therefore, cannot be emphasized enough.

In this blog, let us have a look at five different types of performance testing, the parameters to check for each and let us understand which type of testing is useful when.

  1. Performance Testing:
    The goal of performance testing is to measure the application against a set benchmark. The aim is not to find defects; instead, this testing validates the resource usage, responsiveness, scalability, stability, and reliability of the application. It is important to note that performance testing should be conducted on production hardware, using machines with the same configuration that end users will be using; otherwise, there can be a great degree of uncertainty in the results. Performance testing helps businesses in capacity planning and optimization efforts.
  2. Load Testing:
    Load testing is carried out to validate the performance of the application under normal as well as peak conditions. It is conducted to verify that the application meets the desired performance objectives. Load testing typically measures response times, resource utilization, and throughput rates, and identifies the breaking point of the application. It is important to note that load testing focuses on how the application behaves under load rather than on functional correctness in isolation. It is carried out to detect concurrency issues, bandwidth issues, load balancing problems, and functionality errors which could occur under load, and it helps in determining the adequacy of the hardware. Results of load testing help businesses determine the optimal load the application can handle before performance is compromised.
  3. Stress Testing:
    Stress testing validates the behavior of the application under peak load conditions. The goal of this testing is to identify bugs such as memory leaks or synchronization issues, which appear only under peak load conditions. Stress testing helps you find and resolve bottlenecks. The results of stress tests highlight the components which fail first, and these results can then help developers make those components more robust and efficient.
  4. Capacity Testing:
    Capacity testing is carried out to define the maximum number of users or transactions the application can handle while meeting the desired performance goals. The future growth in terms of users or data volume can be better planned by doing the capacity testing along with capacity planning. Capacity testing is extremely helpful for defining the scaling strategy. The results from capacity testing help capacity planners in validating and enhancing their models.
  5. Soak Testing/Endurance Testing:
    Soak testing is used to test the application’s performance and stability over time. It tests the application’s performance after it has endured a significant load over an extended period of time. It is useful for discovering the behavior of the application under repeated use. These tests help in tracking down memory leaks or corruption issues.

Some of the basic parameters monitored during performance testing include processor usage, bandwidth, memory usage, response time, garbage collection, database locks, hit ratios, throughput, maximum active sessions, disk queue length, etc. One run of a performance test can take anywhere from one day to one week, depending on the complexity and size of the application being tested. Apart from this, other aspects such as workload analysis, script designing, and test environment setup also need to be worked on.
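Several of these parameters – response time, throughput, concurrency – can be measured with very little code. The sketch below simulates concurrent users against a stand-in endpoint; `fake_endpoint` and its 10ms sleep are invented placeholders for a real request:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_endpoint():
    """Stand-in for a real HTTP request; the sleep simulates server work."""
    time.sleep(0.01)
    return 200

def run_load(concurrent_users, requests_per_user):
    """Drive the endpoint with N concurrent users and collect basic metrics."""
    timings = []  # list.append is thread-safe in CPython

    def user_session(_):
        for _ in range(requests_per_user):
            start = time.perf_counter()
            fake_endpoint()
            timings.append(time.perf_counter() - start)

    started = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        list(pool.map(user_session, range(concurrent_users)))
    elapsed = time.perf_counter() - started

    total = concurrent_users * requests_per_user
    return {
        "requests": total,
        "throughput_rps": total / elapsed,
        "avg_response_s": sum(timings) / len(timings),
        "max_response_s": max(timings),
    }

report = run_load(concurrent_users=10, requests_per_user=5)
print(report)
```

Dedicated tools (JMeter, Gatling, Locust and the like) do this at much larger scale, but the metrics they report reduce to the same measurements taken here.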

We hope this blog was useful for you. How do you use performance testing in your testing plan? Do share your ideas and experiences in the comments section below.

How To Improve Productivity And Efficiency With Test Automation?

With each passing day, as advancements in the IT industry go through the roof, so do the complexities involved in the working of software products. With new breakthroughs making headlines every day, the inherent intricacies of these applications are presenting never-before-seen challenges to the software testing community. It is safe to say that manual testing is fighting a losing battle against automation. We present seven reasons why automation testing has already won the race against manual testing:

  1. One Mistake And We Are Back To Square One:
    This reminds me of the aphorism that one bad apple can spoil the whole lot. Nowadays, mobile applications are built upon large, interlinked feature sets. Every fortnight or so, each one of us is flooded with requests to update our apps from the Google Play store. As the sophistication of everyday apps escalates, the requirement for new test cases increases with the same frequency. However, the reality is not so straightforward.

    Just a single update can cause the entire feature set to require a complete makeover. This also calls for the automated tests to be maintainable over the extended set of features affected after every new update. If they are not, all the tests will have to be rewritten with an enormously large number of modifications. Such a scenario is humanly impossible for manual testers to keep up with, and is evidently not a sustainable model of working.

  2. Sharpened Potency Of Test Cases:
    If test effectiveness is bluntly defined as the rate at which the testing methodology detects bugs in the development cycle of the product, automated testing comes out as a clear winner over manual testing. This enhanced effectiveness results in a profoundly better quality of the end product, thereby building the all-important customer satisfaction and expanding a loyal user base.
  3. Repeatability’s Cure Is Automation:
    Supporters of manual testing hide behind the banner of the low cost of testing. Though the situation is manageable for projects built on a small scale, the story is quite the opposite for applications with larger ambitions. Agreed, the use of customized automation tools is hard on the pocket, but a giant-sized project with its innumerable repetitive tests is a completely different proposition altogether.

    Judged on the effectiveness of reusable automated tests that can be rerun ‘n’ number of times at no additional cost, it is quite hard to overlook the tremendous return on investment your project gets.

  4. Efficiency Across Different Platforms:
    With each new release of, say, a mobile application, its quality and ease of use should be replicated consistently over varied hardware configurations. This requires the source code to be modified and a test run repeated each time. Manually completing these tasks for all development cycles would be cumbersome and would hurt efficiency across all variants – for example, porting WhatsApp from Android devices to BlackBerry.
  5. Humans vs. Machines:
    It’s a no-brainer that when it comes to delivering accurate results across countless test runs spanning painfully long cycles – something very common with huge projects – automated testing is unmatched. Manual testing, even with the best of expertise, cannot substitute for it in avoiding errors and missed fine details.

    Also, with sophistication being the norm for most apps, automated testing is some distance ahead of manual testing in uncovering hidden issues.

  6. Value Addition:
    Workers released from the task of carrying out repetitive manual testing can gainfully divert their creative energies towards building more robust test cases with even more innovative features. This will obviously lead to organizations reaping the twin benefits of improved product quality and the upskilling of their testers.
  7. Combination with DevOps:
    With broad coverage and quick scripting of test cases being the hallmarks of automated testing, it’s not difficult to see why it is a natural fit for a DevOps environment.
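Points 3 and 4 above – repeatability and efficiency across platforms – come down to writing one automated check and rerunning it across a whole device matrix. A small sketch using `unittest` subtests; the device matrix and the `render_login` layout function are hypothetical stand-ins:

```python
import unittest

# Hypothetical device matrix: the same checks rerun across every configuration.
DEVICE_MATRIX = [
    {"os": "Android 13", "screen": (1080, 2400)},
    {"os": "Android 10", "screen": (720, 1520)},
    {"os": "iOS 17",     "screen": (1179, 2556)},
]

def render_login(screen):
    """Stand-in for the app's layout logic: button must fit on the screen."""
    width, _height = screen
    return {"button_width": min(width - 40, 600)}

class CrossDeviceTest(unittest.TestCase):
    def test_login_button_fits_every_device(self):
        for device in DEVICE_MATRIX:
            with self.subTest(os=device["os"]):
                layout = render_login(device["screen"])
                self.assertLessEqual(layout["button_width"],
                                     device["screen"][0])
```

Adding a new device is a one-line change to the matrix; every existing check reruns against it at no extra authoring cost – the reuse argument from point 3 in miniature.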


With so many factors having a deep bearing on the overall efficiency and the subsequent cost benefits to an organization, it wouldn’t be incorrect to say that automated testing is now a veritable asset to have.

The Role of Software Testing in a DevOps World

By Rajiv Jain (CEO, ThinkSys)

Mark Berry said, “Programmers don’t burn out on hard work, they burn out on change-with-the-wind directives and not ‘shipping’”. I don’t know about the first but the desire to “ship” seems to be a powerful motivation for the push towards the adoption of DevOps practices among software development teams and companies everywhere. That and the business benefits obviously. I remember a Puppet Labs survey from a couple of years ago that showed that of the organizations surveyed, those that adopted DevOps shipped code up to 30 times more frequently with lead times of as little as a few minutes. They also experienced only half as many failures and had the ability to recover from those failures up to 12 times faster. These are not just numbers, I recall reading about one of the early DevOps adopters, Etsy. Apparently they update their site every 20 minutes without any service disruption to over 20 million users – that’s a phenomenal competitive edge to possess. Small wonder then that the DevOps tide is rising.

Across most SaaS-based product development efforts, the approach seems to be to bring Development and Operations together in a largely automation-driven effort to keep pushing releases out more or less continuously. Given the sheer number, it’s perhaps not even fair to term these as releases any longer! Coming from a company with a significant Software Testing practice, though, an interesting facet of this partnership between Development and Operations occasionally strikes me. It seems to me that a third, equally interested party – Testing and QA – may not be getting the attention it deserves in this discussion. In fact, the case has sometimes been made that the greater degree of automation and the souped-up release cycles imply a reduced role for software testing. I couldn’t disagree more.

Let’s go back to the Puppet Labs survey I quoted earlier – notice that among the key benefits stated were the reduced rate of failures and the ability to recover from those failures quickly. Clearly the aim is not just to ship code; it’s to ship market-ready product on a “Continuous” basis. To me it seems that would be impossible to achieve without an even greater emphasis on testing. Agile had already brought out the need to involve testing strategy at a very early stage of product planning – how else to keep up with the short release cycles? DevOps, with its Continuous Integration and Continuous Delivery, is in this context like Agile on steroids, and the need is thus even greater to get testing involved early, at the product definition and planning stage. In many ways, the approach now has to be to plan to prevent defects rather than detect them.

So, in DevOps, testing comes in early, plans for the way the product is going to pan out, and prepares itself accordingly, so that code gets tested as it starts rolling out. This is a significant change, at least of mindset. Earlier, development used to get done and then testing would start – today the two seem to have to go, more or less, in parallel. Many have called this Continuous Testing – a perfectly valid term.

Another significant change that this “Continuous” model is driving is the integration of functions. DevOps teams now tend to contain people from Development, Operations and, increasingly, Testing. Without that level of integration, getting quality code out in such short time frames would be impossible. This integration is also throwing up new roles for everyone, including testing. Carl Schmidt, CTO of Unbounce, explains it well: “I’m of the mindset that any change at all (software or systems configuration) can flow through one unified pipeline that ends with QA verification. In a more traditional organization, QA is often seen as being gatekeepers between environments. However, in a DevOps-infused culture, QA can now verify the environments themselves, because now infrastructure is code.”
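When infrastructure is code, the environment definition can be verified with the same kind of automated checks applied to application code. A minimal sketch of what “QA verifying the environment itself” might look like – the configuration schema and the rules are invented for illustration:

```python
import json

# Hypothetical environment definition, as it might be checked into the repo.
ENV_CONFIG = json.loads("""
{
  "name": "staging",
  "replicas": 2,
  "tls_enabled": true,
  "services": {"web": {"port": 443}, "db": {"port": 5432}}
}
""")

def verify_environment(config):
    """Treat the environment itself as a test subject, not just the app code."""
    problems = []
    if config["replicas"] < 2:
        problems.append("needs at least 2 replicas for zero-downtime deploys")
    if not config["tls_enabled"]:
        problems.append("TLS must be enabled in every environment")
    if config["services"]["web"]["port"] != 443:
        problems.append("web service must serve HTTPS on port 443")
    return problems

assert verify_environment(ENV_CONFIG) == [], "environment verification failed"
```

Checks like these can sit in the same pipeline as the functional tests, so a misconfigured environment fails the build exactly the way a broken feature does.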

That statement points to a third significant change for testing in the DevOps way – what to test? Now that Operations, i.e. the nuts and bolts that get the SaaS product out to millions of subscribers, is part of the effort of getting the product out, is it not necessary to test that too? There are also some subtle changes of emphasis that emerge here – functional testing is always important, but now there is a premium placed on testing for load conditions and for performance. The environment is now as much a part of the SaaS product as the code. There is a role here that testing can excel in – testers clearly know more than most about the difficulties of taking application code, deploying it in test environments, testing it, and then moving it out to production. The process may have become shorter and sharper, but the skills are the same.

I like a quote, sadly unattributed, that I read somewhere, it goes, “Software testers do not make software, they only make them better.” In the context of DevOps it would seem that finally they are getting an opportunity to not only make software but also to make it better!

Top 4 Mobile App Development Mistakes to Avoid

Take a look at some of these fun statistics:

  • There are 6.8 billion people on the planet. 5.1 billion of them own a cell phone. (Source: Mobile Marketing Association Asia)
  • It takes 90 minutes for the average person to respond to an email. It takes 90 seconds for the average person to respond to a text message. (Source:
  • 70% of all mobile searches result in action within 1 hour. (Source: Mobile Marketer)
  • 91% of all U.S. citizens have their mobile device within reach 24/7. (Source: Morgan Stanley)

Today, mobile browsing accounts for anything from one-third to one-half of all web traffic worldwide, and mobile users spend over 80% of this time in mobile apps. Be it for social networking, emailing, texting, gaming, or making purchases, having a great mobile app has become a business imperative. Hundreds of new apps hit the app stores every day. The question is: how can app developers ensure that their app is the best one to use? How can developers make sure that theirs is not some buggy app that the user deletes almost as soon as it is downloaded? In this blog, we talk about four mobile app development mistakes to avoid in order to develop great mobile apps.

  1. Platform
    Not paying attention to the platform choice for mobile app development can easily be the biggest mistake developers make. The right platform choice is a critical contributor to mobile app success, as it defines the approach to the development process. So should you go native or hybrid? A native app is developed specifically for a mobile operating system – for example, in Java for Android or Swift for iOS. These apps are known for their great user experience, since they are developed within a mature ecosystem with consistent in-app interaction. They are also easier to discover in the app store, have access to the device’s hardware and software, and enable users to learn the app fast.
    Hybrid apps, at heart, are websites packaged as native apps with a similar look and feel. Hybrid apps are built using technologies like HTML5 and JavaScript. They are then wrapped in a native container that loads the page’s information in the app as the user navigates across the application. Unlike native apps, hybrid apps do not download information and content on installation. A hybrid app has one code base, which makes it easier to port across multiple platforms, and it can access many hardware and software capabilities via plug-ins. These apps have lower initial costs and a faster initial speed to market. Since hybrid apps are platform agnostic, they can be released across multiple platforms without having to maintain two different code bases.
    Along with these, there are browser-based apps. These applications run within the web browser, with instructions written in JavaScript and HTML. Embedded in the web page, these applications do not require any downloading or installation and can run easily on a Mac, Linux, or Windows PC.
    So how do you decide between these options? If you need to get your app into the market within a very short period of time, or to check the viability of the market, then hybrid is the way to go. On the other hand, if you need an app that has a great user experience, then you should take the native direction. The platform decision thus rests purely on the business requirement and the time you have to move things to production.
  2. UI and UX
    Leaving app design as an afterthought can be a huge and costly mistake. High-quality design ensures higher engagement which translates to higher ROI. Strong design also makes sure that all future app updates can be done easily and have lower support costs.
    To build an app with great UI and UX design, developers first need to understand the behaviour of their target market to build the foundation of the app’s functionality, and then move to app design. The User Interface, or UI, defines how an app will look, while the User Experience, or UX, design needs to ensure that the app fulfills the emotional and transactional response it is supposed to generate. Developing the functionality of the mobile app before developing the UI and UX can be counterproductive, as this makes it difficult for mobile app developers to remain true to the overall UI/UX design and deliver great app experiences.
    To design a great mobile application, developers can design UX mock-ups first, without worrying about the UI design. This enables them to ensure that the app feels at home across platforms and form factors. Once this is done, developers can move to UI design to build the visual appeal and usability of the app.
  3. Developing Bug Free Apps
    Since mobile app users are picky and impatient, they have zero tolerance for slow, buggy apps. While developers need to keep their eye on the ball and develop a great app, no app is truly great if it has not gone through rigorous testing to identify any shortfalls and bug fixes it might need. Along with this, it is equally essential to test the right things at the right time. For example, functionality testing should happen early, during app development, and should examine, amongst other things, all the media components, script and library compatibility, the manipulations and calculations the app might require, submission forms, search features, etc. Performance testing should not only focus on load but also test the transaction processing speed of the app. Developers also need to focus on User Acceptance Testing: it should be part of the development process, not left until the last minute, so that feedback can be built in as it is received.
  4. It’s a mobile app NOT a mobile website
    Finally, a mobile application is not a mini website. Mobile apps are supposed to provide tailored, crisp, and easy mobile experiences, and hence need not have the same interface, look, and feel as a website. This means that developers have to focus on sharper interface design, offer better personalization, send out the right notifications, and also make the user experience more interactive and fun. Along with this, mobile apps should have the capability to work offline. While apps might need internet connectivity to perform some tasks, it is essential that they are able to offer basic functionality and content when offline. For example, a banking app might need internet connectivity for processing transactions, but it can offer basic functionalities such as determining loan limits and instalment calculations when offline.

With an increasing number of users turning away from websites and towards mobile applications, mobile apps act as gatekeepers of experience and engagement. App development thus has to be more than just building the app. Developers have to take a structured, strategic approach to ensure that the app is responsive and reliable, fits user expectations, and delivers on all the metrics required to further business goals.

Automation in Testing over Test Automation

Recent trends in the IT industry indicate that test automation is becoming ubiquitous, and that the skills which made testing an investigative and unique profession (often associated with manual testing) are on the wane. However, our aim here is not to dwell on what gives automated testing an edge over manual testing. There is a stark difference between automation in testing and test automation. How do the two affect the development of a software product, and what experiences do teams encounter with each? This article aims to answer and highlight some of these questions.

The global big shots in the IT business reason that their reputation is at stake and that they therefore have no alternative but to go all-in on automated test suites, which, they believe, will nullify the probability of product recalls, repairs, and the ensuing erosion of market share and user trust. After all, automation conveniently ticks the boxes of 100% coverage and fast scripting of test cases.

Companies also raise a genuine economic concern to justify their lack of faith in automation in testing. They suppose that even if they somehow pool together the right kind of skilled manpower to develop an automation suite, it will more or less become a product in itself, with all the attendant support and maintenance costs. What would then happen to the return on investment, they argue?

What the approach should be for Automation in Testing

  • When strategizing towards automating a test, there are a whole lot of preconceived notions about additional costs that we need to unlearn.
  • The whole thought process should revolve around questions like “How can I test faster?” and “How can I extend its reach for better coverage?”
  • The functionality should be targeted for testing from the lowest possible level.
  • To come up with a test tool that can cope with irregular tweaks in the system, part of the approach should be to break down the boundaries between development and testing.
  • Some of the tools and techniques that can be friends in our endeavour are data management, state manipulation, critical bug tracking, and log file parsing.
  • Documents like old user manuals can also aid us to a large extent.
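Of these, log file parsing is the easiest to demonstrate. The sketch below (the function name and log format are hypothetical) counts error messages in application logs so a tester can spot recurring failures:

```python
import re
from collections import Counter

def summarize_errors(log_lines):
    """Tally the message that follows each ERROR marker in a log."""
    pattern = re.compile(r"ERROR\s+(.*)")
    counts = Counter()
    for line in log_lines:
        match = pattern.search(line)
        if match:
            counts[match.group(1).strip()] += 1
    return counts.most_common()   # most frequent errors first

sample = [
    "2021-03-01 INFO  app started",
    "2021-03-01 ERROR db timeout",
    "2021-03-02 ERROR db timeout",
    "2021-03-02 ERROR null session",
]
print(summarize_errors(sample))   # → [('db timeout', 2), ('null session', 1)]
```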

A Jigsaw Puzzle Architecture:

Just as there are disparate parts to a jigsaw puzzle, all the aspects of automating a test case need to be considered as individual pieces of the same puzzle: test code that manages data, test code that drives the browser, code that manages user interactions on the page and, finally, code that handles the reporting of results. Putting all these pieces together, and scaling this approach up, can build the test tool we need. While we are at it, we need to carefully analyse the hurdles we encounter and dig deeper into them. For instance, if a test tool we have designed for API testing is taking too long, we need to delve into the causes impeding its response time.
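A minimal sketch of this puzzle-piece structure, with every name hypothetical and the browser stubbed out, might look like this:

```python
class DataManager:                     # piece 1: supplies test data
    def load(self):
        return {"expected_title": "Dashboard"}

class FakeBrowser:                     # piece 2: browser driver (stubbed here)
    def open(self, url):
        self.url = url
    @property
    def title(self):
        return "Dashboard"

class Reporter:                        # piece 3: collects and reports results
    def __init__(self):
        self.results = []
    def record(self, name, passed):
        self.results.append((name, "PASS" if passed else "FAIL"))

def check_dashboard(data, browser, reporter):
    """Glue code: a single check assembled from the three pieces."""
    browser.open("https://example.test/home")
    reporter.record("dashboard_title", browser.title == data["expected_title"])

reporter = Reporter()
check_dashboard(DataManager().load(), FakeBrowser(), reporter)
print(reporter.results)   # → [('dashboard_title', 'PASS')]
```

Because each piece can be swapped independently (a real browser driver for the stub, a database-backed data manager, a richer reporter), the same skeleton can scale up into a full tool.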

Where may test automation not be the best fit?

  • It doesn’t make sense to automate tests that will run only once or twice.
  • When the project requirements for an application are very dynamic. Since designing an automated test suite requires significant resources and labour, it goes against sound economic sense to build one for a platform that undergoes regular changes and tweaks.
  • Unless automation in testing is started in close co-operation with the development team at every stage of the development cycle, it is difficult to come up with a complete and effective test tool. In that case, why get into it in the first place?
  • Maintenance costs of these suites are quite significant and can be a dampener when gauging the overall economics of the project.
  • Automating tests can be time consuming, so it should be limited to high-priority tests.

Regression Testing vs. Retesting – Know the Difference

Regression testing vs. Retesting?

Is there a difference at all? Is regression testing a subset of re-testing? Testing teams quite often use these two terms interchangeably. However, there is a vast difference between these two types of testing. Let us have a look.


Regression Testing


Regression testing is a type of software testing which is carried out to ensure that the defect fixes or enhancements to the application have not affected the other parts of the application.


The purpose of regression testing is to ensure that the new code changes do not adversely affect the existing functionalities of the application.


Regression testing is carried out in one or any of the following circumstances:

  • Whenever there is a modification in the code based on the change in requirements.
  • A new feature is added to the application.
  • Defects are fixed in the application.
  • Major performance issues are fixed in the application.

Regression testing can be carried out in parallel with re-testing. Unlike re-testing, it is not tied to specific defects and is, in many cases, planned as a generic, suite-wide activity.

Testing Techniques:

Regression Testing can be carried out with one of the following four techniques-

  1. Re-execute all the test cases in the suite.
  2. Select a part of the test cases which need to be executed with every regression cycle and execute only those.
  3. Based on the business impact, feature criticality, and timelines, select the priority test cases and execute only those.
  4. Select a sub-set of test cases based on frequent defects, visible functionality, integration test cases, complexity of test cases, sample of successful and failure test cases and then execute only such sub-set.
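Technique 3, priority-based selection, can be sketched in a few lines; the case names, priorities, and time budget below are all hypothetical:

```python
def select_regression_cases(cases, budget_minutes):
    """Pick the highest-priority cases that fit the available time budget."""
    chosen, used = [], 0
    for case in sorted(cases, key=lambda c: c["priority"]):   # 1 = most critical
        if used + case["minutes"] <= budget_minutes:
            chosen.append(case["name"])
            used += case["minutes"]
    return chosen

suite = [
    {"name": "login",    "priority": 1, "minutes": 10},
    {"name": "checkout", "priority": 1, "minutes": 20},
    {"name": "reports",  "priority": 3, "minutes": 30},
    {"name": "settings", "priority": 2, "minutes": 15},
]
print(select_regression_cases(suite, budget_minutes=45))
# → ['login', 'checkout', 'settings'] — 'reports' would exceed the budget
```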
Test Cases:
  • Regression test cases are derived from the functional specifications, user manuals, tutorials, and defect reports related to the fixed defects.
  • Regression testing can include the test cases which have passed earlier.
  • Regression testing also checks for unexpected side-effects.
Role of automation:

Automation plays a very crucial role in regression testing because manual testing can be very time consuming and expensive.



Retesting

Retesting is a type of software testing which is carried out to make sure that the test cases which failed in the previous execution pass after the defects behind those failures are fixed.


The purpose of re-testing is to ensure that the previously identified bugs are fixed.

  • Whenever a defect in the software is fixed, re-testing needs to be carried out. The test cases related to the defect are executed again to confirm that the defect has indeed been fixed. This type of testing is also referred to as confirmation testing.
  • Re-testing has to be carried out prior to regression testing.
  • Re-testing is planned testing.
Testing Techniques:

Re-testing needs to ensure that the testing is executed in the same way as it was done the first time – using the same environment, inputs, and data.

Test Cases:
  • No separate test cases are prepared for Re-testing. In Re-testing, only the failed test cases are re-executed.
  • Re-testing does not include the test cases which have passed earlier.
  • Re-testing only checks for failed test cases and ensures that originally reported defects are corrected.
Role of automation:

Re-testing cannot be automated ahead of time on its own, since the set of cases to re-run depends on the defects found during the initial execution.
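That said, once the failed cases are known, re-running just that subset is mechanical; test runners such as pytest offer a similar re-run-only-failures option out of the box. A minimal sketch, with all names hypothetical:

```python
def retest(failed_cases, run_case):
    """Re-execute only the previously failed cases; return those still failing."""
    return [case for case in failed_cases if not run_case(case)]

# Hypothetical fixed build: the 'checkout' defect was fixed, 'upload' was not.
fixed_build = {"checkout": True, "upload": False}
still_failing = retest(["checkout", "upload"], lambda case: fixed_build[case])
print(still_failing)   # → ['upload']
```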

A Cloud Migration Checklist

An increasing number of enterprises today are migrating to the Cloud. A survey conducted by RightScale, a cloud automation vendor, confirmed this trend and revealed that:

  • 93% of respondents reported that they are adopting the cloud.
  • 88% of the respondents reported using the public cloud.
  • 63% of the respondents use the private cloud.
  • 58% of respondents use both private and public cloud.

Migrating to the cloud presents enterprises with some obvious benefits: increased availability, better performance, and clear cost advantages. Research conducted by agencies such as Gartner, Ovum, Forrester, and International Data Corporation (IDC) agrees: “the global SaaS market is projected to grow from $49B in 2015 to $67B in 2018, attaining a CAGR of 8.14%,” and by 2019 cloud applications will account for worldwide mobile traffic. Goldman Sachs also estimates that the “cloud infrastructure and platform market will grow at a 19.62% CAGR from 2015 to 2018, reaching $43B by 2018”.

However, when migrating to the cloud, enterprises have to ensure that their initial footprint is compatible with the technology stack of the cloud platform of their choice. They also have to ensure that the platform can scale comfortably to suit growing business and user requirements. Taking a strategic approach thus becomes an essential part of cloud migration: the enterprise needs to consider how it intends to do business and make that an inherent part of its cloud strategy.

Cloud migration is the process in which data, applications or other business elements are moved from onsite computers to a cloud infrastructure or are moved from one cloud infrastructure to another. In this post, we will shine the light on some essential components of a cloud migration checklist. We hope this will help enterprises looking to migrate to the cloud do so seamlessly and help them reap the real benefits of this move.

Network architecture:
To take complete advantage of the cloud, enterprises need to make sure that their network infrastructures are set up for this. Traditional network infrastructures may suffer poor application performance or even expose themselves to security vulnerabilities. Thus before making the move to the cloud enterprises need to make sure that their network is well-designed and cloud-optimized by ensuring routing optimization, reliability, and low latency in WAN performance, and ensuring device support. Taking a holistic approach to the network architecture thus, becomes the foundation of successful cloud migration.

Application architecture:
While moving applications to the cloud might look simple, in reality it takes a lot of careful planning to execute well. Before migrating applications to the cloud, architects need to evaluate whether legacy applications need to be replaced, and assess which applications will get the most out of the cloud investment by doing an inventory assessment and then planning the move. Typically, enterprises should avoid moving systems in large chunks and should ensure that the first-movers are not the most business-critical or trickiest systems. Mission-critical workloads, legacy applications, and sensitive data might not be the best first movers to a public cloud. Treating the cloud as a logical extension of the current landscape and assessing application dependencies thus becomes an essential part of the cloud migration checklist.

Business continuity plan:
Having a business continuity plan should also form an essential part of the cloud migration journey, as disruptions, natural or man-made (think the Japan earthquake or the Amazon outage in 2011), can sometimes halt business. Enterprises need to build diversity into their disaster recovery and business continuity systems and should be able to run on a number of different infrastructures. Evaluating options for business continuity, and designing systems and configurations that enable a high level of automation, should find a significant spot on a cloud migration checklist.

Evaluating costs:
So, should you opt for a private, public, or hybrid cloud? While cost efficiency is a big reason why enterprises move to the cloud, it is important to remember that the financial benefits differ from one application to another. Applications using legacy hardware can be more expensive to run in the cloud. Identifying the technical requirements, gathering performance data, and identifying any hidden expenses of migrating can help in planning the network and bandwidth costs, and in deciding which cloud flavor will best suit the enterprise.

Governance and security:
Since traditional on-premise systems will not work as-is in the cloud, enterprises have to re-evaluate their governance approaches. As much of the governance responsibility rests with the cloud provider once the move is complete, enterprises need to reshape their governance strategies to rely more on the cloud’s offerings than on their internal security. Assessing the cloud provider’s security certifications thus becomes important. Planning ahead for failovers, potential breaches, and disaster recovery also becomes a critical part of a cloud migration checklist.

As enterprises assess the benefits and risks of a move to the cloud, it is important to note that cloud migration does not have to be an ‘all or nothing’ proposition. With careful assessment, enterprises can begin by moving some applications and services to the cloud while continuing to operate the rest on-premise. Once all the boxes in this checklist have been ticked, enterprises can achieve rapid cloud transformation and embrace the power the cloud offers.

Role of Test Automation in Three Testing Areas

As technology becomes more embedded into business processes, testing is becoming an area of strategic importance. While manual testing cannot be completely done away with, test automation is the way to go in order to ensure the effectiveness, efficiency, and wider coverage of software testing. Gartner notes that in order to be agile, testing needs to be automated.

  • Unit Testing:
    Unit testing is a very important part of the software testing process, since it is here that small code fragments are tested. It involves checking pieces of the program code in isolation, independently of each other, to ensure that they work correctly. Since unit tests check the source code itself, the test suite should be part of the build process and scheduled to run automatically every day, so that testing happens early and often. Manually testing these pieces of code is resource- and time-intensive and can lead to a number of errors. By automating unit tests, testers can ensure that the source code within the main repository remains healthy and error-free, building a line of defence against bugs. While it might take 10-30% more time to complete a feature, automating unit tests ensures that:
    1. Problems are found early in the development cycle.
    2. The code keeps working, now and in the future.
    3. Developers worry less about changing code.
    4. The development process becomes more flexible.
    5. The ‘truck factor’ of the project improves.
    6. Development becomes easier, more predictable, and repeatable.
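To make this concrete, here is a minimal sketch of an automated unit test using Python’s unittest; the function under test and all figures are hypothetical, and the suite is run programmatically so any build step can schedule it:

```python
import unittest

def apply_discount(price, pct):
    """Code under test: a simple discount calculation."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    return round(price * (100 - pct) / 100, 2)

class DiscountTests(unittest.TestCase):
    def test_typical(self):
        self.assertEqual(apply_discount(200, 25), 150.0)
    def test_no_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
    def test_invalid_percentage(self):
        with self.assertRaises(ValueError):
            apply_discount(100, 150)

# Run the suite programmatically so a daily build can schedule it.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all unit tests passed:", result.wasSuccessful())
```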
  • Functional Testing:
    Functional testing is a test technique to cover the functionality of the software and ensure that the output meets the required specifications. It also has to ensure that test scenarios including boundary cases and failure paths are accounted for. While functional testing is not concerned with the internal details of the application, it tests in great detail what an application ‘does’. During functional testing, developers need to set benchmarks that are developer-independent in order to identify what they have not achieved. So, as soon as a function is developed it needs to be tested thoroughly, and this process has to continue until application development is complete. Since the user will ultimately run the application on a system alongside other applications, and the application has to endure different user loads, developers also need to make sure that every function of the application is crash resistant. Testing continuity, ensuring application-grade tests are as external as possible, white box testing, etc. are some of the areas that functional testing has to cover. Doing all of this manually is not only time consuming but also leaves room for error. A functional testing strategy should incorporate:

    1. Test purpose.
    2. Project creation.
    3. Test construction.
    4. Test automation.
    5. Test execution.
    6. Results check.

    Test automation makes it easier to perform powerful and comprehensive functional tests that will help develop a robust product.
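As a small sketch of what step 3, test construction, can look like, here is a table-driven functional check of a hypothetical signup form (all names and rules invented for illustration), covering the happy path, a boundary case, and a failure path:

```python
def validate_signup(form):
    """Function under test: minimal signup-form rules (hypothetical)."""
    errors = []
    if "@" not in form.get("email", ""):
        errors.append("invalid email")
    if len(form.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# Each row: (input form, expected validation errors).
cases = [
    ({"email": "a@b.com", "password": "12345678"}, []),                      # boundary: exactly 8 chars
    ({"email": "a@b.com", "password": "short"}, ["password too short"]),     # failure path
    ({"email": "not-an-email", "password": "12345678"}, ["invalid email"]),  # failure path
]
for form, expected in cases:
    assert validate_signup(form) == expected
print("all functional cases passed")
```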

  • Performance Testing:
    Performance testing assumes a big role in the testing and QA process, as it helps in identifying and mitigating performance issues that can derail the entire application. Performance testing is sometimes also associated with stress, load, or volume testing, as it aims to gauge the system’s ability to handle varying degrees of system transactions and concurrent users, along with assessing its speed, stability, and scalability. Automating the test process during performance testing is essential, as the goals of a performance test comprise several factors, including service level agreements and the volumetric requirements of the business. Testers need to exercise system transactions and user transactions concurrently, measure system response time against the non-functional requirements, and determine the effectiveness of a network, computer, program, device, or piece of software. During performance testing, testers need to measure the response time or the number of MIPS (millions of instructions per second) at which a system functions. They also need to evaluate the qualitative attributes of scalability, reliability, and interoperability to ensure the product meets the required specifications. Understanding the context to assess problem areas, building test assets, running tests, and analyzing the data to understand performance bottlenecks are some of the things testers need to do.
    Since performance testing is intensive and exhaustive, testers should leverage test automation once the test strategy has been defined.
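A minimal sketch of measuring response time against a non-functional requirement (the 50 ms SLA and the stubbed transaction are hypothetical) could look like this:

```python
import time

def p95_latency(transaction, runs=50):
    """Time repeated calls and return the 95th-percentile latency in seconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        transaction()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return samples[int(0.95 * len(samples)) - 1]

SLA_SECONDS = 0.05                 # hypothetical non-functional requirement

def fake_transaction():            # stand-in for a real system transaction
    time.sleep(0.001)

p95 = p95_latency(fake_transaction)
print("p95 within SLA:", p95 < SLA_SECONDS)
```

Real performance tools add concurrency and richer statistics, but the idea is the same: automate the measurement and compare it against the agreed thresholds.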

Taking a strategic approach to testing automation and viewing it as a part of the development process makes sure you have a product that is flawless and delivers on all the desired metrics. This ultimately leads to higher quality software, greater customer satisfaction, and higher ROI.

Choosing the Right Test Automation Outsourcing Partner

The bitterness of poor quality remains long after the sweetness of meeting the schedule has been forgotten.

The importance of software testing cannot be overstated. While testing is an essential part of software development, it is not the core competency of most companies that develop software. It involves a considerably wide range of activities such as test strategy definition, test case creation, test plan development, automation strategy, and, of course, execution. Outsourcing testing to niche companies with strong expertise in it provides multiple benefits, such as faster time to market, improved product quality, and reduced testing costs. It therefore does not come as a surprise that the worldwide software testing outsourcing market is expected to grow from $30 billion in 2010 to $50 billion in 2020.

Test automation is a further niche within the testing space, where very few companies seem to have mastered the skills and expertise. Choosing the right test automation partner takes a little more careful effort. Before you finalize your test automation outsourcing partner, here are a few things you need to carefully evaluate:

  • Business Understanding:
    Yes, test automation involves technology, but it is more of a business decision. Whether to go for automation, what the goal of automation should be, and how the strategy aligns with the business goals: one needs answers to these questions while formulating the test automation strategy. Your outsourcing partner should be able to work with you to help you achieve your business goals, and not just provide “testing experts who are ready to work for 10 hours a day”.
  • Project Understanding:
    “Judge a man by his questions rather than his answers.”– Voltaire
    Every test automation project is different, and it requires special consideration of various aspects such as the project context, the current state of the application, the future roadmap, the business goals, the current state of automation, etc. If a vendor comes back to you with a boilerplate strategy, you should be skeptical of that vendor. The right test automation outsourcing partner will ask you all the right questions to understand your project before suggesting a strategy.
  • Project Management and Processes:
    This might be a standard requirement for all outsourcing projects, but it is a crucial one for test automation outsourcing. Test automation requires proper planning and very streamlined execution to be successful. When you are looking for a vendor, look for someone who has well-established project management tools and processes, with teams trained on them. You will need periodic reviews, regular status updates, and acceptance cycles; the vendor needs to have the required setup in place to facilitate all of this.
  • Respect for Intellectual Property:
    Due diligence and respect for intellectual property is quite important for your testing outsourcing initiatives because you might share some testing artifacts, test cases, and sometimes, even the source code with the outsourced vendor. You must ensure that the vendor not only understands the importance of intellectual property protection but also has strict guidelines in place for adherence.
  • Experience:
    Nobody likes being a Guinea Pig. Evaluate the vendor for its experience of working on a variety of test automation projects and its experience of working with multi-cultural teams. We highly recommend asking for client references and case studies. Go for a vendor who has won the trust and praise of the well-known names in the industry. If you feel it necessary, go ahead and speak to their customers to understand their experience on delivery timelines, output quality, technology competency, costs etc.
  • Testing Experts:
    As mentioned earlier, test automation is a niche area, and unless testing is your core competency, there is little chance that you will do a good job of it. Test automation involves defining a strong strategy, business understanding, selecting the right tools and technology, having the right team in place, expertise in test case creation, and execution abilities. When you look for a test automation outsourcing partner, ensure that testing is the core competency of the company you choose; it cannot be “one of the many things they do”. Such focus helps in the development of the right processes, training, and team selection.
  • Scaling Up and Down:
    Instead of depending on one or two persons for the execution of test automation, you need a team which is well-trained on all the aspects of execution. Outsourcing needs to offer you the flexibility to scale up or scale down the team size at a short notice without a strong dependence on any particular people in the team.
  • Knowledge and Understanding of Tools and Technologies:
    Test automation requires knowledge and understanding of a variety of tools and technologies. Your test automation partner should be capable of selecting the right technology which is apt for “your requirements”. Just because they have expertise in one technology does not make that technology right for you. You must evaluate the expertise of the outsourcing vendor with various tools and technologies.
  • Know What NOT to Automate:
    More than “what to automate”, the real skill lies in deciding “what not to automate”. Look for a partner who can help you make these decisions. Sometimes, conducting certain tests manually makes more business sense. A knowledgeable outsourcing partner should help you make these decisions.

We strongly believe that the test automation outsourcing vendor cannot work in isolation. It needs to work with you as a partner to your company and needs to be invested in the project.

We hope these guidelines help you make the right decision in the selection of the test automation partner for your project.

Can test automation replace human testers?

The fact that software testing is integral to the development and overall success of any product in the IT industry puts software development managers worldwide in an all too familiar dilemma. What testing strategy should they pursue? Should they hire the services of a professional testing team for their product verification, or would they be better served using an automated tool? The debate over manual testing vs. automated testing has been raging for quite some time now. Before we judge the utility of either, it wouldn’t do any harm to briefly enumerate the pros and cons of both.

Automated testing

Test automation employs specialised software tools which run a set of repetitive tests against a predefined set of instructions, to compare a program’s expected and actual responses. If the two align perfectly, the product is considered bug-free and ready for shipping. If not, the software needs debugging and a re-run of the tests until all the glitches are rectified. Let’s take a look at some of the advantages of this approach:


  • Runs quickly off the mark: One of the biggest factors that gives automated testing an edge over manual testing is, of course, speed. The best part is the re-usability of these tests, which is a much-needed relief for those running regressions on a frequently changing code base.
  • Concurrency: Automated tests can run on different machines and operating system platforms concurrently.
  • Build verification: It is very effective for build verification testing.
  • Project size: Automated testing is apt for large-scale projects.

That said, automated testing has its disadvantages too:

  • It is of limited help with UI testing.
  • The initial cost of tools is high.
  • When it comes to considerations requiring a human touch, such as image contrast or text font size, automated testing cannot be trusted to give the most accurate verdict.

Top 10 Mobile app testing mistakes you must avoid

With India becoming home to the second-largest user base of smartphones, mobile apps have grown exponentially over the past few years. Since testing a mobile app is as essential as developing it, it is worth listing a few gaffes that are routine in mobile app testing.

1. The looks are important, but then so are the uses:

Mobile app testers often tend to focus more on the UI/UX aspect of the app and forget about the basic functionality for which the app was developed in the first place! An app is only as good as its use. Agreed, an eye-catching user interface always attracts more users, but if the app fails to provide solutions, then it’s nothing more than a dud. A balanced perspective, with equal focus on UI and functionality, is therefore the need of the hour.

2. Jumping to test without knowing the Sophistication involved:

More an attitudinal thing than a flaw: it’s common to find testers running through the testing process without arming themselves with a thorough understanding of the app’s basic working logic. A couple of tests on the UI, some feature tests here and there, and ta-da, the report is ready for submission. Before testing, it is important to have the requisite knowledge of the chief functionality, the user-end requirements, a vision for the changes expected from an update, the business requirements of the app, etc. This will eventually help in developing wide coverage of all the intricacies in need of testing.

3. Web and mobile testing are as different as chalk and cheese:

To someone familiar with the world of web page testing, the approach to mobile app testing may sound similar. However, the two are worlds apart. While an average web browser may require an update once a year at most, a mobile application requires a visit to the app store virtually every month. A mobile app tester therefore needs to adopt an approach that validates the functionality of the app in sync with its scheduled updates.

4. Online/offline plus networking issues:

While it is crucial to test that the app works properly whether the user is online or offline, a vital issue often overlooked is its ability to work across different bandwidths. This is especially relevant for users residing in areas with a history of intermittent network or wireless service.

5. Utility of Crash logs:

Crashes are major irritants for users and one of the biggest factors contributing to a lower rating of the app. Hence, it pays to maintain a detailed report of all crash occurrences, so that the tester can conveniently prioritise all the major bugs and prevent them from recurring post-launch.

6. We can test everything:

It goes beyond the realm of possibility to test, say, 100 case scenarios for an app across 10 different devices operating in 10 diverse working environments. The problem is compounded when the launch date of the app is ever so near. Common sense advocates prioritising a few critical test case scenarios based on recent market trends for the devices most in vogue, studying user behaviour, and making use of Google Analytics.

7. Inadequate reporting of bugs:

Due to miscommunication, inadequate knowledge, or time constraints related to launch deadlines, testers are sometimes unable to send a complete report of all the repetitive and critical bugs that hamper an app’s working; consequently, the developer team is unable to resolve all the issues that cropped up during testing. As a result, the app is released into the market replete with glitches.

8. Complexity in the name of sophistication:

Many users find it difficult to navigate their way around an app because of a congested or over-sophisticated user interface. It therefore pays to have an app that is easy on the eye for customers; moreover, the test cases for such interfaces are simpler and less time consuming.

9. One tester can do it all:

It is always beneficial to have professionals on your team with experience of working in different environments. By tapping into their knowledge, a greater number of potential flaws in the app can be detected, in half the time, before the pending launch.

10. Importance of customer feedback:

You have tested the app, it has been launched, and your work is over, right? Not quite. The tester should make it a point to go through user reviews of products similar to the ones they tested. These are readily available on app stores such as the Google Play store, Mobogenie, etc.

The ubiquity of the smartphone is testament to the fact that our lives will increasingly be shaped by mobile apps. The importance of mobile testing can thus never be overemphasized.

Does Test Automation Have a Role in the MVP Way of Life?

By Rajiv Jain (CEO, ThinkSys)

We veterans of the software product development game may be forgiven a bit of confusion these days. From the old days of the waterfall model of development, we are now faced with choosing between Agile development methodologies, DevOps and Continuous Delivery, or perhaps rushing to churn out a Minimum Viable Product. Of these, the last, the MVP approach, is particularly favoured by startups and by enterprises that seek to emulate the nimbleness of a startup. In a nutshell, the MVP is a kind of early alpha of the product that is built fast and released early with a view to gathering customer feedback, to either gain early market traction or fail fast enough to allow recovery. Steve Blank explains it well: “You’re selling the vision and delivering the minimum feature set to visionaries, not everyone.” The aim is to release, iterate, and release again in short cycles. I could not find any reliable stats, but anecdotal evidence suggests that this approach is rapidly gaining ground in product development. Frankly, I’m not surprised, considering how it gives maximum bang for the effort and money invested.
Among the MVP approaches, there are those that seem to focus on the “minimum” and are thus concerned only with seeking market validation of an idea through landing pages or crowdfunding campaigns. Ignoring those, let’s focus our attention on the approaches that emphasize the “viable” and release actual products into the market. These products are generally single-feature, window-dressed, or limited-feature versions of the vision. As such, there is software design, architecture, and development effort involved, and of course, where there is development, testing has to follow. An interesting question to ponder: does test automation have a role to play in this kind of development?

But before that, there is one, only slightly philosophical, question about the MVP itself. In many ways, the MVP itself is a kind of trial balloon, a test of an idea. The limited objective of the MVP is to seek customer validation, and only if it passes that initial test does it move on to the next stage. This suggests some strong factors in favour of test automation, and a couple against it too.

While early customers do expect some rough edges in an MVP, they are unlikely to forget or forgive poor quality, and such early impressions could kill the product before it really lives. This means testing and QA cannot be given short shrift even in the MVP. This then brings the speed aspect into focus. The product has to be released quickly, both at the start and in succeeding iterations as customer feedback piles up. Speed has always been one of the chief arguments for test automation. A natural fit, it would seem!

Well, not so fast. There are a couple of other important factors that may make you change your mind. For one, the MVP approach does not lend itself well to long-term planning. This approach, by definition, does not provide any great visibility into the final shape the product is likely to take, or even into upcoming versions. That’s not great news for those looking to build an effective test automation strategy, given the lag between defining what should be automated and deploying a well put-together automation suite. In the time it takes you to develop an automation suite, the test cases you are automating for may no longer exist! The other issue is applicability. The shape of the product changes quickly from version to version, so it would seem that many of the test conditions may not occur more than once. Clearly, there’s not much value in automating test cases that will be used only a handful of times. So the prospects of test automation in the MVP scenario don’t appear that bright.

That being said, though, I see one possible connection, and it could be a key one. If you look at the MVP approach as a series of empirical experiments intended to define the final shape of the product, then these experiments can become a wonderful early input into the automation strategy. In this approach, the product plans and, in turn, the test automation strategy can be much more “frozen”, since the likely changes and iterations can be identified and accounted for very early in the process. This gives the test automation team more elbow room – they can plan better, design better, and presumably execute better on those plans. This could be a big boost to the automation efforts.
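To make the “automate only what is likely to survive the iterations” idea concrete, here is a minimal, hypothetical sketch. Everything in it is invented for illustration – the `validate_signup` helper stands in for whatever core flow an MVP team has judged stable – but it shows the shape of a small automated smoke suite that can run on every release while the fast-changing surface of the product stays under manual, exploratory testing:

```python
# Hypothetical MVP core: signup validation, assumed to be the one flow
# unlikely to change between iterations, hence worth automating early.
import re

def validate_signup(email, password):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", email):
        errors.append("invalid email")
    if len(password) < 8:
        errors.append("password too short")
    return errors

# A small automated smoke suite over the stable behaviour only.
cases = [
    ("ada@example.com", "s3cretpass", []),
    ("not-an-email",    "s3cretpass", ["invalid email"]),
    ("ada@example.com", "short",      ["password too short"]),
    ("bad",             "short",      ["invalid email", "password too short"]),
]
for email, password, expected in cases:
    assert validate_signup(email, password) == expected
```

The design choice is the point: the suite is deliberately tiny and tied only to behaviour the experiments have “frozen”, so it survives the churn that kills broader automation in an MVP.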

So, my own feeling is that the process of building an MVP is perhaps too scrappy to allow for any great degree of automation, but that the end product the approach seeks to build could benefit tremendously from a better-planned test automation approach. Let me turn to the leading light of the Lean Startup and MVP movement, Eric Ries, for support. He said, “All innovation begins with vision. It’s what happens next that is critical.” Fair enough!

Should Developers Test Their Own Code?

“I don’t care if it works on your machine! We are not shipping your machine!” – Vidiu Platon

I can almost see the testing manager talking (negotiating?) with the developer while going over the bug report and making this rather exasperated statement. Those of us who have been in software development long enough are quite likely to have encountered such intense “developer vs. tester” situations somewhere along the way. Of late, the roles, or should I say battle lines, have blurred somewhat – don’t believe me? Think about how often you now come across the term Software Development Engineer in Test (SDET) – much more frequently than was the case even a couple of years ago, right? In the language of car companies, the SDET is a “crossover” – someone who spans the software development and testing worlds and has bits, pieces, and features of both in their job description. The SDET is a visible symbol of a discussion we are having quite often now – should developers test? And, by extension, should developers test their own code?
At one level – especially for smaller software development teams, start-ups, or larger companies where software is not the core function – the attraction of a smaller, self-contained team of developers who do their own testing and validation is obvious. Costs are contained, as is the management overhead, and the demands on their more limited internal technical bandwidth are less onerous. At the other end of the spectrum are larger software development efforts, where more development-friendly test functions like test automation demand the presence of software developers on the testing team. We have always made the case that, for test automation, the test strategy has to be much more closely coupled with the development strategy and architecture – again, a function where developers could play a key role.

That being said, I believe that testing is a specialized task in itself, one that needs a focused testing team to execute effectively. There are some key reasons why I think developers are not quite as well suited to the task of testing their own code.

  1. Mindset:
    Most people will agree that the job of a tester is to break the software. A quality tester will try every use case and condition possible to make the software fail. They look for complex situations, combinations of situations, repeated applications, and even heavier-than-expected load conditions to make the software break. In many ways, this is the complete opposite of the mindset with which the developer approaches the task – essentially, the developer is always trying to make it work, and that may not be the best approach for testing.
  2. Starting right:
    It’s not unusual to encounter development efforts that have been built on an inaccurate understanding of the initial requirements. A developer testing their own code would generally be less inclined to check whether the foundation itself is incorrect. Essentially, the “bug” may lie at the very point where the design started.
  3. Proprietary interest
    It’s a rare developer who doesn’t get attached to their code. It’s something they have sweated over and, as a result, to some extent they would start testing from the position that the code works. This is likely to lead to shortcuts, assumptions that things are simple or “trivial”, and a possible tendency to skip things that they “have fixed during coding”. Obviously, these are the kind of things that turn around and bite back later in the cycle.
  4. Big picture view:
    Most developers will have a reasonable view of the code they are working on – that unit, or that piece of the puzzle. But even if that piece works fine in itself, no software works in isolation. A tester will usually bring a wider scope to the test – looking at how the code behaves once the whole product is put together, or how it works in a simulated, or real, user environment.
  5. Tricks of the trade:
    This one is, of course, apparent. Testers have much more experience with the act of testing – the tools and techniques, how to log and report bugs, and even common faults and the reasons why they occur. A developer would have to reinvent that particular wheel to become as efficient in the role.
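
The “mindset” difference in point 1 above can be sketched in a few lines of code. Everything here is invented for illustration – a hypothetical `parse_quantity` helper – but it shows how a developer’s happy-path check and a tester’s break-it checks probe very different territory:

```python
def parse_quantity(text):
    """Parse a user-entered quantity string into a positive int."""
    value = int(text.strip())
    if value <= 0:
        raise ValueError("quantity must be positive")
    return value

# Developer-style check: confirm it works on the expected input.
assert parse_quantity("3") == 3

# Tester-style checks: actively try to break it with edge cases –
# empty input, non-numeric text, zero, negatives, floats, even None.
for bad in ["", "abc", "0", "-1", "2.5", None]:
    try:
        parse_quantity(bad)
        raised = False
    except (ValueError, TypeError, AttributeError):
        raised = True
    assert raised, f"expected a failure for input {bad!r}"
```

Note that the developer’s single assertion passes; it is the tester-style loop that surfaces whether the function fails loudly, quietly, or with the wrong exception type for each kind of hostile input.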

So what role can developers play in the testing process? Well, for starters, it is apparent that they are extremely well suited to unit testing and, at the very least, validation testing of their own code. Apart from that, there seems to be a good case for pairing developers and testers – the model works well in the Agile context, given the short sprints and multiple iterations. That, though, is the subject of another post. In closing, let me share what Brian W. Kernighan said: “Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it.” I’m not saying I agree – just that he’s the computer scientist, not me.
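
As a postscript, the unit-testing role mentioned above is worth one small sketch. The `slugify` helper and its tests are invented for illustration, but they show the kind of unit-level contract a developer can own for their own code, even while wider system and exploratory testing stays with a dedicated team:

```python
import unittest

def slugify(title):
    """Turn a post title into a URL slug (hypothetical helper)."""
    words = "".join(c if c.isalnum() else " " for c in title.lower()).split()
    return "-".join(words)

class SlugifyUnitTests(unittest.TestCase):
    # Unit-level checks on this one function's behaviour only.
    def test_basic_title(self):
        self.assertEqual(slugify("Should Developers Test?"),
                         "should-developers-test")

    def test_collapses_punctuation_and_spaces(self):
        self.assertEqual(slugify("  MVP -- and/or Automation  "),
                         "mvp-and-or-automation")

# Run the suite programmatically (e.g. inside a larger build script).
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyUnitTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```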