Considerations for Automating Mobile UI Testing

What is a successful application? In our view, a successful application is one that responds to the user in a functionally correct manner and satisfies the user's needs. At the same time, the application has to be simple, bug-free, and easy to use. Given that the affinity towards mobile apps is growing, apps with a killer user interface and easy usability are the ones surviving the mobile app tsunami.

This brings us to the topic of UI testing. UI testing of mobile applications verifies that the user interface of the application works in the desired manner. This type of testing has to make sure that the menu bars, icons, and buttons designed to make the app fun and easy to use behave correctly. Quite obviously, testing each of these aspects is a time-consuming, expensive, and sometimes even tedious process. This is where automation comes into play and presents an opportunity to eliminate the need to manually verify all aspects of the user interface and document all the errors noted.

Testing mobile applications, in any form, is a relatively more complex process than testing web or desktop applications. To begin with, desktop applications are usually tested against one dominant platform. In the case of mobile apps, there is no single dominant platform. Mobile apps, unless otherwise specified, have to be tested on iOS, Android (and its various versions) and now maybe Windows platforms too. Along with this, the different device form factors and device diversity make mobile app testing a far more extensive process. For example, the official Android device gallery has over 60 devices of various screen sizes, form factors, and resolutions. As mobile apps become more sophisticated, the need for deeper testing increases. So just like we automate unit tests and functionality tests to ensure that the app ‘functions’ like it should, we can also automate UI tests to ensure that the app ‘looks’ the way it should.

  1. UI validation
    A mobile UI needs continuous validation during the building stage. It is essential to check the size, colors, positions, etc. of every single artefact on every individual view. Doing this exercise manually can be an extremely tedious process. Sometimes it can be tempting to skip the views that seem simple and were developed a while ago, and move on to the most recent work. Automating the collection of snapshot preview images is very helpful for discovering UI mistakes and delivering greater value.
  2. Considerations for UI Test Automation
    There are certain aspects of UI tests that lend themselves easily to automation. However, it also has to be understood that describing tests at only the technical level can lead to brittle tests. Changes in workflows, requirement changes, etc. are to be considered when writing tests for mobile applications. Considering that business rules change quite frequently as per user requirements, it makes sense to write UI tests that are closer to the business rules, and to describe what has to be done with as much clarity and efficiency as possible.
  3. Selecting a UI Automation Tool
    Testers have to make sure that the test automation tool they employ tests not only the latest OS versions but also earlier versions. For example, when testing an Android application, the test automation tool should ideally test the app UI not only on the latest Android version but also on the earlier versions and sub-versions of Android. A test automation tool should allow testers to develop additional program modules that can be utilized during the later development cycles. Testers also have to look out for testing tools that do not have to deal with the source code, to reduce test complexity. Test automation becomes a tedious process during UI testing if testers have to write scripts for each individual device and the tests have to be adjusted each time the UI of the tested program is altered.
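The snapshot-collection idea in point 1 can be sketched in a few lines. This is a minimal illustration, not any particular tool's API: it assumes each view's screenshot is saved as a PNG under a per-build directory, and it flags any view whose rendering no longer matches its approved baseline (byte-for-byte, via a hash).

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """Return a SHA-256 digest of a screenshot file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def find_changed_views(baseline_dir: Path, current_dir: Path) -> list[str]:
    """Compare each current screenshot against its baseline.

    Returns the names of views whose rendering differs (or that have no
    baseline yet), so no view is silently skipped, however old or simple.
    """
    changed = []
    for current in sorted(current_dir.glob("*.png")):
        baseline = baseline_dir / current.name
        if not baseline.exists() or file_digest(baseline) != file_digest(current):
            changed.append(current.stem)
    return changed
```

A byte-for-byte comparison is deliberately strict; real snapshot tools usually add a pixel-diff tolerance, but the workflow of collecting and comparing every view stays the same.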

During UI testing, testers have to make sure that the automation tool they choose allows them to reproduce complex sequences of user actions. Clearly, selecting the right test automation tool is a great contributor to UI testing success. Using the right test automation tools will enable testers to focus on the testing system, spot defects and regression errors to ensure that new code does not break current functionality, and fix broken tests and adapt them to the testing system. It will also help them generate better, more detailed and informative reports and ensure that testers spend more time in ‘testing’ and less time in setting up the tests.
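To make "reproducing complex sequences of user actions" concrete, here is one tool-agnostic sketch. The `Action` and `replay` names are ours, not any vendor's API: a recorded sequence is replayed against whatever driver object the chosen tool (Appium, Espresso, XCUITest, etc.) provides, so one recorded flow can run unchanged on many devices.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str       # "tap", "type", or "assert_visible"
    target: str     # element identifier (id, accessibility label, ...)
    value: str = ""

def replay(actions, driver):
    """Replay a recorded user-action sequence against a driver object.

    The driver is any object exposing tap/type_text/is_visible; the real
    tool-specific client sits behind that interface, which keeps the
    recorded test independent of the device it runs on.
    """
    for a in actions:
        if a.kind == "tap":
            driver.tap(a.target)
        elif a.kind == "type":
            driver.type_text(a.target, a.value)
        elif a.kind == "assert_visible":
            assert driver.is_visible(a.target), f"{a.target} is not visible"
        else:
            raise ValueError(f"unknown action kind: {a.kind}")
```

Because the driver is injected, the same sequence can also run against a fake driver in unit tests, which is how the replay logic itself stays testable.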

Automating UI testing does require an initial investment. However, the ROI is realized when the same test is applied time and again at a negligible incremental cost. When testing the UI, testers thus need to ensure that features whose UI flow can change in the immediate future, that need real-time data from multiple sources, or that pose technical challenges should ideally not be considered for automation, as these add to the testing costs. Being judicious about what to test, considering connectivity options and target devices, and using Wi-Fi networks in combination with network simulation tools all work towards making automated mobile UI testing time-efficient and cost-effective.

Conclusion
All said and done, UI testing for mobile apps is challenging, but it is also clear that automation delivers real value when deployed selectively for specific purposes.

Why TechWell’s StarWest and StarEast Are Becoming My Favorite Conferences

By Rajiv Jain (CEO, ThinkSys)

“No great idea was ever born in a conference, but a lot of foolish ideas have died there.” – F Scott Fitzgerald

Talk about damning something with faint praise. I am not sure what conferences Fitzgerald attended – I myself have been to more than a few, many focused on software testing, and with some admittedly mixed results. This post, though, is not about those events that didn’t work; it’s about two events that most definitely do work – TechWell’s StarWest and StarEast conferences. I got back a couple of weeks ago from StarEast in Orlando, and now that I am done catching up with post-event follow-ups, it seemed right to put this down.
[Image: Rajiv Jain representing ThinkSys at StarEast]
First, as you know, there is no dearth of testing-focused conferences. Googling “software testing conferences” spits out over 67.3 million page results. Clearly, we software testing pros like talking shop. Among those results, you will find some good ones and some extremely nondescript ones, but conventional wisdom has been that the stars of the show are TechWell’s StarEast and StarWest (yes, I know I’m not the first one with that pun). So convinced are we of the utility of these events that ThinkSys elected to be a Platinum Sponsor at the most recent edition of both these conferences. So what drives this confidence?

1. The Speakers and the Sessions:
I can’t think of any other place that can boast of all-star line-ups including speakers like James Bach, Michael Bolton, and Joe Colantonio, among a host of other gurus and expert-level users. The range of topics covered is also staggering – everything from testing and test automation best practices to the emerging trends not yet fully visible as they come down the turnpike. There is so much to learn. This year I particularly liked Isabel Evans’ StarEast keynote, “Telling our Testing Stories”. Her thoughts on the miscommunication between IT and the rest of the world resonated with me. I agree that we have to present our work as a testing organization in words and stories that are understood outside of our group if we are to show the value of our work.

2. The people:
The range the conferences span is vast – everybody from a junior tester looking to build a shining testing career to those tasked with software quality in some of the biggest companies in the world attends. As a company, access to such a highly targeted audience is extremely helpful. Over the two events this year, the companies we met spanned all parts of the QA spectrum: from those focused on doing manual testing well to organizations seeking to make QA an integral part of the product planning process, involved from concept to delivery – a change we see happening more and more.

3. A bellwether:
This is the place to be if you want to get a sense of where the QA world is going in the near future and even beyond. The speaker sessions and even the conversations over coffee give you so much information to chew over. For instance, our meeting with the TechWell program chair, Lee Copeland, was quite interesting. He is seeing a change in the role of QA and in the skills needed today, driven by the adoption of Agile and continuous integration. We fully expect these trends to be transformational for testing. Testing could become a very different organization from what it is today.

4. A platform:
In many ways, this is where we get the real ROI as a sponsor. Our objective obviously is to get in front of our target audience and showcase who we are and what we have to offer. Over the two events this year we focused on our test automation tool, Krypton. I have to say that overall Krypton was quite well received. As a new tool, what was perhaps even more valuable for us was the feedback we got from those that saw what it could do. Many people liked our adoption of the Open Source initiative for this tool and many had valuable suggestions to offer on features, usability and so on. It does appear that they would like to see the tool made more visible and marketed more openly. On a personal note, I got an opportunity to talk about something close to my heart – what makes test automation projects fail. I have long believed that it is essential to know what could go wrong before you can work to ensure that everything will go right. The response to my talk on the subject was, frankly, quite overwhelming. At StarEast, the room was overflowing with attendees and the feedback during the talk and even later from people who stopped by the Thinksys booth was extremely gratifying. Clearly, this is something I should do more often!

Conclusion:

In closing, let me share something that Ryan Holmes said about SXSW, but which is accurate enough about our conferences in general too. “One of the ironies of a conference dedicated to all things digital and virtual is that the best ways to connect with people are surprisingly old-school. Social media tools can improve the odds of a serendipitous encounter at SXSW, but old-fashioned hustle, palm-pressing and – above all – creativity go a long way.” I have come to believe that in the software testing world the place to meet people is StarWest (& StarEast). Assuming you agree, will you be at Anaheim come October?

Things One Should Know About eCommerce Testing

On observing the fast-changing retail landscape, Jeff Jordan of Andreessen Horowitz said: “We’re in the midst of a profound structural shift from physical to digital retail.” Today, eCommerce is a $341 billion industry, growing at a rate of approximately 20% each year. According to Forrester, eCommerce sales are estimated to cross $414 billion in 2018.
While the popularity of eCommerce continues, the past two years have also been witness to many high-profile website glitches and crashes, especially during the high-traffic holiday season. From Walmart, Belk, and Tesco to the mighty Amazon, all have experienced major glitches at crucial times, resulting in heavy losses, disgruntled customers, and some serious media bashing.

The expectations that people have of these eCommerce websites are increasing. Today, eCommerce websites need to not only look great but also ensure that they are user-friendly, easily navigable, and quick to load. Thus eCommerce website testing has emerged as a crucial component of eCommerce business success. The need is to ensure that all the parts of the website function in harmony and that performance and security issues do not lead to bad press. For this to happen, testing cannot be treated as an afterthought and should ideally be built into the project from the very beginning. In this blog, we identify things to test when testing an eCommerce website.

  1. Product Page and Shopping Cart Behavior:
    While testing an eCommerce website, the product page is the equivalent of the shelving and goods available in a brick-and-mortar store. Thus, testing that the products display correctly on the site is a given. Considering that the product page displays a lot of information, such as the product image, description, specification, and pricing, it is critical that all this information and the associated value proposition display correctly whenever a customer logs in. Additionally, you have to check that the shopping cart is adding the products and the associated pricing correctly. Testers need to add multiple items to the shopping cart and then remove them to see if the price alterations during the changes are correct. You also have to ensure that special deals, vouchers, coupons, etc. process correctly all the way through the checkout. Further, testers need to ensure that the cart remembers the items that have been stored if the browser is closed suddenly and then restarted.
  2. Category page:
    Category pages have a lot to convey, so testers need to pay a great deal of attention to sorting, filtering, and pagination, which makes testing the search result page (SRP) essential for eCommerce success. Considering that the search form is present on most pages, testers must make sure that when a customer goes to the SRP the relevant products, product information, and number of products per page display correctly, that the remaining items appear on the next page, and that there are no duplications on the next page. Ensuring that the sorting parameters work correctly and that the sort order remains as chosen even when you paginate is important. Further, testers need to pay close attention to filtering options and ensure that filtering and pagination work harmoniously. Finally, testers have to check that if sorting and filtering options are both applied, they remain the same as you paginate or as new products are added.
  3. Forms and Account creations:
    Optimizing forms in eCommerce with the help of testing can help increase conversion rates. Since forms are a key touchpoint, whether it is to sign up for the site newsletter, to create an account, or at the checkout, testers need to make sure that these forms function correctly. Testers need to make sure that the information gathered in these forms is being stored, displayed, and used correctly. If the customer creates a new account, then testers need to check the login behavior and ensure that the customer is connected to the right account. Testers also need to check the login and logout sessions and the login redirects and, finally, check that the purchased items get added to the correct account. If the customer is proceeding to checkout as a guest, then testers need to make sure that the customer gets the option to create an account when the order is placed.
  4. Checkout and Payment Systems:
    Testing checkouts and payment systems is critical to the success of an eCommerce site. Lengthy checkout procedures and complicated payment systems can increase the chances of shopping cart abandonment. Testers need to ensure that the checkout and payments process is smooth and glitch-free. For this, they should take a close look at the checkout process to verify that the final amount to pay is correct after all other charges, such as VAT and delivery charges, are levied. Testers also need to check that final updates after alterations, such as changes in the products being ordered or a change of delivery address, reflect correctly. Testers need to check the payment systems using each payment method on offer: debit cards, credit cards, PayPal, and mobile payment options should all be checked individually to verify that the systems work correctly, and also to ensure that confirmation emails are sent correctly.
  5. Performance and Load testing:
    Testers need to place special emphasis on the performance and load testing of eCommerce sites. Almost 18% of shopping carts are abandoned because of slow eCommerce websites. For an eCommerce website that makes $100,000 per day, a 1-second page delay could potentially cost $2.5 million in lost sales each year. Testers also need to make sure that the website can handle high traffic, and assess how the website performs when, instead of two, 200 people log in simultaneously, without it slowing down.
    Apart from this, testers need to check web browser compatibility, ensure that cookies are audited and that there are no broken links. Further, they need to also check if the website has mobile device compatibility considering 7 out of 10 users access eCommerce websites from their mobile devices.
  6. Security Testing:
    Testers need to focus on security testing to safeguard customer data and ensure that the customers’ privacy is not compromised. Testers thus need to check penetration and access control, and check for insecure information transmission, web attacks, digital signatures, etc. They also have to ensure that the application handles incoming and outgoing data securely, using penetration tests to identify vulnerabilities that could cause a security breach and jeopardize client information.
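Several of the checks above (cart totals, coupons, VAT, and delivery charges in points 1 and 4) reduce to arithmetic that automated tests can pin down exactly. The sketch below is illustrative only: the `Cart` class, its 20% VAT rate, and its flat delivery fee are our own assumptions, not any real store's rules, but the shape of the assertions is what a pricing regression suite would look like.

```python
from decimal import Decimal

class Cart:
    """Minimal cart model exercising the pricing rules a tester checks:
    adding/removing items, coupons, VAT, and delivery charges."""

    def __init__(self):
        self.items = {}  # name -> (unit_price, quantity)

    def add(self, name, unit_price, qty=1):
        price, existing = self.items.get(name, (Decimal(unit_price), 0))
        self.items[name] = (Decimal(unit_price), existing + qty)

    def remove(self, name):
        self.items.pop(name, None)

    def total(self, coupon_off=Decimal("0"), vat_rate=Decimal("0.20"),
              delivery=Decimal("4.99")):
        # Coupon applies before VAT; total never drops below zero.
        subtotal = sum(p * q for p, q in self.items.values())
        discounted = max(Decimal("0"), subtotal - coupon_off)
        return (discounted * (1 + vat_rate) + delivery).quantize(Decimal("0.01"))
```

Using `Decimal` rather than floats is the key design choice here: money tests must assert exact amounts, and binary floating point would make "the final amount to pay is correct" impossible to verify to the cent.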

Conclusion:

Testing an eCommerce website requires careful planning, meticulous execution and an eye for detail. However arduous this task may seem, it is a critical element that contributes significantly to the success of an eCommerce website.

Software testing for Microservices Architecture

Over the last few years, microservices have silently but surely made their presence felt in the crowded software architecture market. The microservices architecture deviates from the traditional monolithic approach, where the application is built as a single unit. While the monolithic architecture is quite sound, frustrations around it are building, especially since more and more applications are being deployed in the cloud. The microservices architecture has a modular structure: instead of plugging together components, the software is componentized by breaking it down into services. Applications, hence, are built like a suite of services that are independently deployable and scalable, and that even provide the flexibility for different services to be written in different languages. Further, this approach also helps enable parallel development across multiple teams.
Quite obviously, the testing strategy that applied to monoliths needs to change with the shift to microservices. Considering that applications built in the microservices architecture are expected to deliver highly on functionality and performance, testing has to cover each layer of the service, and the interactions between layers, while at the same time remaining lightweight. However, because of the distributed nature of microservices development, testing can often be a big challenge. Some of the challenges faced are as follows:

  • An inclination of testing teams to use Web API testing tools built around SOA testing, which can prove to be a problem. Also, since the services are developed by different teams, the timely availability of all services for testing can be a challenge.
  • Identifying the right amount of testing at each point in the test life cycle
  • Complicated log extraction during testing and data verification
  • Considering that development is agile and not integrated, availability of a dedicated test environment can be a challenge.

Mike Cohn’s Testing Pyramid can help greatly in drawing up the test strategy and identifying how much testing is required. According to this pyramid, taking a bottom-up approach to testing and factoring in the automation effort required at each stage can help address the challenges mentioned above.

  1. Unit Testing
    The scope of unit testing is internal to the service, and tests are written around a group of related cases. Since unit tests are the most numerous, they should ideally be automated. Unit testing in microservices has to amalgamate sociable unit testing and solitary unit testing: checking the behaviors of the modules by observing changes in their state, and also looking at the interactions between the object and its dependencies. However, testers need to ensure that while unit tests constrain the ‘behavior’ of the unit under test, they do not constrain the ‘implementation’. They can do so by constantly questioning the value of each unit test in comparison with its maintenance cost or the cost of the implementation constraint.
  2. Integration Testing
    While testing the modules in isolation is essential, it is equally important to test that each module interacts correctly with its collaborators, and to test them as a subsystem to identify interface defects. This can be done with the help of integration tests. The aim of an integration test is to check how the modules interact with external components by checking the success and error paths through the integration module. Conducting ‘gateway integration tests’ and ‘persistence integration tests’ helps provide fast feedback by identifying logic regressions and breakages between external components, which ultimately helps in assessing the correctness of the logic contained in each individual module.
  3. Component testing
    Component testing in microservices demands that each component is tested in isolation by replacing external collaborators with test doubles and using internal API endpoints. This provides the tester with a controlled testing environment and helps them drive the tests from the customer’s perspective, allows comprehensive testing, improves test execution times, and reduces build complexity by minimizing moving parts. Component tests also identify whether the microservice has the correct network configuration and whether it is capable of handling network requests.
  4. Contract Testing
    The above three tests provide high test coverage of the modules. However, they do not check whether the external dependencies support the end-to-end business flow. Contract testing tests the boundaries of the external services to check the input and output of the service calls, and to test whether the service meets its contract expectations. Aggregating the results of all the consumer contract tests helps the maintainers make changes to a service, if required, without impacting the consumers, and also helps considerably when new services are being defined.
  5. End-to-End Testing
    Along with testing the services, testers also need to ensure that the application meets its business goals irrespective of the architecture used to build it, and test how the completely integrated system operates. End-to-end testing thus forms an important part of the testing strategy in microservices. Apart from this, considering that there are several moving parts for the same behavior in a microservices architecture, end-to-end tests identify coverage gaps and ensure that business functions do not get impacted during architectural refactoring.
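As a concrete illustration of contract testing (point 4), a consumer-driven contract can be as simple as "these fields, with these types, must be present in the provider's response". The sketch below is our own minimal form, not the Pact format or any specific tool's API: it checks one response against such a contract and reports every broken consumer expectation, which is exactly the feedback a provider team needs before changing a service.

```python
def check_contract(response: dict, contract: dict) -> list[str]:
    """Check a provider response against a consumer contract.

    `contract` maps each field the consumer relies on to the type it
    expects. Every missing or mistyped field is reported, so the
    provider team sees exactly which consumer expectation would break.
    Extra fields in the response are deliberately ignored: providers
    may add data without breaking existing consumers.
    """
    violations = []
    for field, expected_type in contract.items():
        if field not in response:
            violations.append(f"missing field: {field}")
        elif not isinstance(response[field], expected_type):
            violations.append(f"{field}: expected {expected_type.__name__}, "
                              f"got {type(response[field]).__name__}")
    return violations
```

Each consumer team contributes its own contract; running all of them against a candidate build of the provider is what lets maintainers change a service with confidence.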

Conclusion
Testing in microservices has to be more granular and yet, at the same time, avoid becoming brittle and time-consuming. For a strong test strategy, testers need to define the services properly and have well-defined boundaries. Given that the software industry is leaning greatly towards microservices, testers of these applications might need to change processes and implement tests directly at the service level. By doing so, not only will they be able to test each component properly, they will also have more time to focus on the end-to-end testing process when the application is integrated, and so deliver a superior product.

Role of DevOps in QA

DevOps, a portmanteau of development and operations, is an organisational strategy focusing on close collaboration and communication between the software development team and the professionals on the testing and release teams. The approach employs automated processes in a symbiotic environment, which ultimately results in building, testing, and releasing software with clockwork reliability.

How does QA benefit from DevOps:

Some ten years back, QA was seen as a group disparate from the development teams, with different skill sets, responsibilities, and management. Fast forward to the DevOps age, and things are quite different today. Here’s how we look at QA through the glasses of DevOps…

1. Automated Deployment:

The conventional approach of a software “release” is now passé, with DevOps facilitating the delivery of the product into the market on a monthly, weekly, and even hourly basis through automated processes. This has been made possible through a continuous cycle of improvement, with the developers, testers, and operations people all working in sync and moving in the same direction.
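The automated release loop described above can be reduced to a small gate: every build runs the test suite, only passing builds are deployed, and a failed deployment triggers an automated rollback. This is a sketch of the idea only; the `run_tests`, `deploy`, and `rollback` callables are placeholders for real pipeline steps, not any particular CI tool's API.

```python
def release(build_id, run_tests, deploy, rollback):
    """Promote a build through an automated pipeline gate.

    run_tests(build_id) -> bool decides promotion; deploy/rollback are
    the tool-specific deployment steps. No human "release" decision is
    involved, which is what makes hourly releases feasible.
    """
    if not run_tests(build_id):
        return False                 # failing builds never reach users
    try:
        deploy(build_id)
        return True
    except Exception:
        rollback(build_id)           # automated recovery path
        return False
```

Because the steps are injected, the gate itself can be unit-tested with stubs, which is in keeping with the DevOps habit of testing the pipeline as rigorously as the product.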

2. Environment is now a part of the product:

Traditionally, here’s how the flow used to go: you create software, get it verified in a testing environment by the QA team and, when the litmus test is over, so to speak, unleash it into the big bad world of the user. If anything then went wrong, it was the problem of the operations teams. Not any more. As is evident from the success of Google’s Unbounce, the QA team now verifies the environment as well, with their chief enabling infrastructure being the code itself. At the occurrence of any change or problem, the QA team initiates the requisite deploys, verifies that the intended change functions as expected, and moves over to the latest deployed code, with the option of a rollback if needed.

3. Prevention is better than discovery:

In a DevOps environment, the priority for QA is the prevention of faults and not just finding them. As opposed to, say, ten or fifteen years ago, the QA teams of today have the luxury of pushing the code on when it’s fully functional and rolling it back when things go awry. This has positive ramifications in that the QA team can continuously track the quality of the product. Thus the QA team has a profound influence on both the development and operational phases of the software.

4. Less of Human error:

DevOps enables more automated testing in QA, thus reducing the glitches due to the fatigue associated with manual testing. This also enables far greater code coverage and quick scripting of test cases.

5. Greater teamwork and rapport:

At the individual level, the testers and the operations team get a chance to be on the same page, and at the same level, as the developers. This improves coordination, ticking the boxes for higher market outreach and efficiency gains.

Software Testing is Dead, Long Live Software Testing

By Rajiv Jain (CEO, ThinkSys)

Forgive the rather corny headline (fans of The Phantom will know its origin) but, as someone who’s been in the software testing business for years now, I’ve grown more than a little weary of these periodic announcements of the death of software testing. I saw a quote from James Bach somewhere that went, “Pretty good testing is easy to do. That’s partly why some people like to say ‘testing is dead’ – they think testing isn’t needed as a special focus because they note that anyone can find at least some bugs some of the time. Excellent testing is quite hard to do.” Bach’s assessment is likely true, but I think it is also possible that these prophets of doom aren’t really pronouncing the death of testing, but rather the end of testing “as they know it.” This is a statement I am much more likely to get behind. If you look at the testing world around you, it has changed significantly even in the last 2-3 years. Let me take this chance to highlight a few changes that I have particularly been struck by.

  1. Agile and DevOps:
    Let me take the liberty of lumping both these software product development movements together for the purpose of this post. Shortening development and release cycles, pushing into continuous development, continuous integration, and continuous delivery, have shaken the fundamentals of how we used to test software. These are the days of continuous testing – no more waiting for the end of a release, testing it in some kind of staging area, and then pushing it back to engineering with bug reports in tow. Apart from these changes to the way we work, one fundamental change this acceleration in release cycles has wrought is one that I am very happy about. Today, all around, we see testing becoming much more tightly coupled to the product and business goals, and at a much earlier stage in the product development cycle. Testing podcaster Trish Khoo said, “The more effort I put into testing the product conceptually at the start of the process, the less effort I had to put into manually testing the product at the end because less bugs would emerge as a result.” Product owners too now recognize that if testing has to deliver in such a time-constrained environment, it has to be involved early, get visibility of the direction the product is likely to take, and the possible roadmap of development. This is great news!
  2. Developers Testing:
    I have expressed my opinion in the past on whether developers should test their own code. Well, my opinion is moot – the fact is that a blurring of boundaries has already taken place between development and testing and who does what. DevOps teams are routinely staffed with development, operations, and testing skills as the product rolls out full-speed ahead. The need for greater code coverage in a shorter time has driven even greater use of test automation, and writing the code or the scripts for automating test cases is a task developers are well suited for. Joe Colantonio says, “I remember the days when QA testers were treated almost as second-class citizens and developers ruled the software world. But as it recently occurred to me: we’re all testers now.” No more are testers looked down upon as children of a lesser God, so to speak.
  3. The Role of Cloud:
    The outlook has turned positively cloudy over the last few years. More and more products, or Apps as they choose to be called now, are SaaS-based and consumed on-demand over the “series of tubes” that make up the web all around us. Testing for these products (ok Apps) is pretty different but most specifically in the emphasis that now has to be placed on security and performance. Your App is only as secure as the last hack so an almost unreasonable amount of the focus of the testing is on identifying vulnerabilities and in keeping the architecture and the internals of the app safe from all possible attacks. Then there is performance. The days of your enterprise application sitting on a dedicated server on your network seem to have gone for good – when the app is on a multi-tenant infrastructure that is being accessed by hundreds, possibly thousands of other users it can’t show any strain from the load and this becomes a key focus area for those testing it.
  4. Mobility:
    Mobiles have changed our lives so can software testing be immune? Software products today have no choice but to have a mobile strategy. That customers will access these products from their tablets and smartphones is a given. To those tasked with testing these products, this means a load of added complexity. Now your test plans have to factor in devices of varying form factors and differing hardware and software capability. There is the access speed and bandwidth to think about. The biggest change possibly, though, is in testing the user experience. The small screen presents differently and is used very differently and the software products that expect to run on that small screen have to be tested with that in mind. All of these are paradigm shifts from the things we used to test for even a few years ago.

Conclusion:
Let me end with another quote, this one by Mike Lyles, “While we may understand the problem, it is critical for us to change our thinking, our practices, and continuously evolve to stay ahead of the problems we face.” The fact is the situation around us is always going to change – as capable software testers, our role will always be to find a way through, and to help build better software in the bargain.

Regression Testing in Agile Scenario

An increasing number of software development teams today favor agile software development over other development approaches. There’s no need to introduce Agile, but let’s revisit it anyway! We know that the Agile development methodology is adaptive in nature. It focuses on building a product based on empirical feedback gathered from frequent testing, by breaking down the product development process into shorter development cycles, or sprints. The key is that since agile development is iterative, all the aspects of product design are frequently revisited to ensure that change can be effected as soon as the need for it is identified. This is where we segue into testing – clearly, in this scenario, developers need to test more and test fast. To do that, curating a comprehensive test plan based on the agile methodology is essential.

In agile development, testing needs to grow geometrically with each sprint and testers need to ensure that the new changes implemented do not affect the other parts of the application and existing functionalities – we know this type of testing as Regression Testing and this holds a very important place in the agile development scenario.

Regression testing essentially checks if the previous functionality of the application is working coherently and that the new changes executed have not introduced new bugs into the application. These tests can be implemented on a new build for a single bug fix or even when there is a significant change executed in the original functionality. Since there can be many dependencies in the newly added and existing functionalities, it becomes essential to check that the new code conforms with the older code and that the unmodified code is not affected in any way. In agile development, since there are many build cycles, regression testing becomes more relevant as there are continuous changes that are added to the application.

For effective regression testing in agile development, it is important that a testing team builds a regression suite right from the initial stages of software development and then keeps building on it as sprints add up. A few things to determine before a regression test plan is built are:

  • Identify which test cases should be executed.
  • Identify what improvements must be implemented in the test cases.
  • Identify the time to execute regression testing.
  • Outline what needs to be automated in the regression test plan and how.
  • Analyze the outcome of the regression testing.

Along with this, the regression test plan should also take into account Performance Testing to ensure that the system performance is not negatively affected due to changes implemented in the code components.

In the agile environment, Regression Testing is performed under two broad categories:

  • Sprint-level regression testing: This regression test is focused on testing the new functionalities implemented since the last release.
  • End-to-end regression testing: This test incorporates end-to-end testing of ‘all’ the core functionalities of the product.

Considering that the build frequency in agile development is accelerated, it is critical for the testing team to execute the regression testing suite within a short time span. Automating the regression test suite makes sense here to ensure that the testing is completed within a given time period and that the tests are error-free. Adding the regression testing suite to the continuous integration flow also helps, as it ensures that every new check-in is automatically evaluated against the existing functionality before it is accepted.
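
The idea of a CI regression gate can be sketched in a few lines. This is a minimal, illustrative skeleton, not a real CI tool: the two test functions and the suite contents are made-up stand-ins for actual regression cases.

```python
def check_login():
    # Stand-in for a real regression case, e.g. "existing login still works"
    assert 2 + 2 == 4

def check_search():
    # Stand-in for another previously-passing functionality check
    assert "app".upper() == "APP"

REGRESSION_SUITE = [check_login, check_search]

def run_regression_gate(suite):
    """Run every case; return (passed, failures) so CI can fail the build."""
    failures = []
    for case in suite:
        try:
            case()
        except AssertionError:
            failures.append(case.__name__)
    return (len(failures) == 0, failures)

passed, failures = run_regression_gate(REGRESSION_SUITE)
print("gate passed" if passed else f"gate failed: {failures}")
```

In a real pipeline, this role is usually played by a test runner such as pytest or JUnit invoked by the CI server, with a non-zero exit code blocking the merge.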

Regression testing can adopt the following approaches:

  1. The Traditional Testing Approach: In this method, each sprint cycle is followed by a sprint-level regression test. After a few successful sprint cycles, the application goes through one round of end-to-end regression testing.
    This method allows the team to focus on the functional validity of the application and gives testers the flexibility to decide on the degree of automation they want to implement.
  2. Delayed Week Approach: In this approach, the sprint-level regression test is not confined to a timeline and can spill over into the next week. For example, if the sprint-level regression test that is supposed to be completed in Week 2 is not completed, then it can spill over to Week 3. This approach works well at the beginning of the testing cycles, as the testing team at that time is still in the process of gaining an implicit understanding of the functionalities and the possible defects. Instead of parking the bugs/defects and addressing them at a later date, continuous testing lifts the burden of backlogs that build up during end-to-end regression tests.
  3. Delayed Sprint Approach: In this approach, the regression cycle is shared across sprints: the regression test cases employed in the second sprint contain the functionality stories that were a part of the first sprint. Since the regression cycle is only delayed by a sprint, this approach discounts the need for having two separate regression test cycle types. This approach also avoids a longer end-to-end regression test cycle. However, this approach has two challenges –
  • Maintaining the sanctity of the regression tests is difficult.
  • Maintenance of automation efforts increases considerably.

Organizations need to decide on their approach to regression testing keeping the continuity of business functions in mind. Regression tests target the end-to-end set of business functions that a system contains and are conducted repeatedly only to ensure that the expected behavior of the system remains stable. Our view is that having a zero tolerance policy towards bugs found in regression tests makes agile development more effective and ensures that the changes implemented in the product do not disrupt the business functions. We are not among those who agree with Linus Torvalds in his assessment, “Regression Testing? What’s that? If it compiles it is good; if it boots up, it is perfect.”

5 Types of Performance Testing You Need to Know

By slowing down its search results by only four-tenths of a second, Google found that it would reduce the number of searches by 8,000,000 per day! By its own calculations, 1 second of slower page performance could cost Amazon $1.6 billion in sales each year, and for every 100ms of latency, it loses 1% of its sales.

People may not mind waiting in line in a real shop, but on the web, a delay of even 4 seconds makes 1 out of 4 customers abandon the web page. The importance of performance testing for your web application, therefore, cannot be emphasized enough.

In this blog, let us have a look at five different types of performance testing, the parameters to check for each, and when each type of testing is useful.

  1. Performance Testing:
    The goal of performance testing is to test the application against a set benchmark. The aim is not to find defects; instead, this testing validates the usage of resources, responsiveness, scalability, stability, and reliability of the application. It is important to note that performance testing should be conducted on production hardware, using machines configured the same way as those the end users will be using; otherwise, there can be a great degree of uncertainty in the results. Performance testing helps businesses in capacity planning and optimization efforts.
  2. Load Testing:
    Load testing is carried out to validate the performance of the application under normal as well as peak conditions. It is conducted to verify that the application meets the desired performance objectives. Load testing typically measures response times, resource utilization, and throughput rates, and identifies the breaking point of the application. It is important to note that load testing is designed to focus primarily on the speed of response under load. It is carried out to detect concurrency issues, bandwidth issues, load balancing problems, and functionality errors which could occur under load, and it helps in determining the adequacy of the hardware. Results of load testing help businesses determine the optimal load the application can handle before performance is compromised.
  3. Stress Testing:
    Stress testing validates the behavior of the application under peak load conditions. The goal of this testing is to identify bugs, such as memory leaks or synchronization issues, which appear only under peak load conditions. Stress testing helps you find and resolve bottlenecks. The results of stress tests highlight the components which fail first, and these results can then help the developers in making those components more robust and efficient.
  4. Capacity Testing:
    Capacity testing is carried out to define the maximum number of users or transactions the application can handle while meeting the desired performance goals. The future growth in terms of users or data volume can be better planned by doing the capacity testing along with capacity planning. Capacity testing is extremely helpful for defining the scaling strategy. The results from capacity testing help capacity planners in validating and enhancing their models.
  5. Soak Testing/Endurance Testing:
    Soak testing is used to test the application performance and stability over time. It tests the application performance after it endures a significant load over an extended period of time. It is useful for discovering the behavior of the application under repeated use. These tests help in tracking down memory leaks or corruption issues.
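
The two core load-testing measurements mentioned above, response time and throughput, can be sketched as follows. This is a toy illustration: the “service” is a simulated in-process handler with an artificial 10 ms delay, whereas a real load test would drive the actual application over the network with a tool such as JMeter or Locust.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request():
    time.sleep(0.01)          # pretend the server does ~10 ms of work
    return "OK"

def load_test(virtual_users, requests_per_user):
    """Drive the handler with concurrent virtual users; report latency and throughput."""
    latencies = []            # list.append is thread-safe in CPython
    def user_session():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handle_request()
            latencies.append(time.perf_counter() - start)
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=virtual_users) as pool:
        for _ in range(virtual_users):
            pool.submit(user_session)
    elapsed = time.perf_counter() - start   # pool waits for all sessions on exit
    total_requests = virtual_users * requests_per_user
    return {
        "avg_latency_ms": 1000 * sum(latencies) / len(latencies),
        "throughput_rps": total_requests / elapsed,
    }

print(load_test(virtual_users=5, requests_per_user=10))
```

Raising `virtual_users` until `avg_latency_ms` degrades is, in miniature, how the breaking point of an application is located.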

Conclusion
Some of the basic parameters monitored during performance testing include processor usage, bandwidth, memory usage, response time, garbage collection, database locks, hit ratios, throughput, maximum active sessions, disk queue length, etc. One run of a performance test can take anywhere from 1 day to 1 week, depending on the complexity and size of the application being tested. Apart from this, other aspects such as workload analysis, script designing, test environment setup, etc. also need to be worked on.

We hope this blog was useful for you. How do you use performance testing in your testing plan? Do share your ideas and experiences in the comments section below.

How To Improve Productivity And Efficiency With Test Automation?

With each passing day, as the advancements in the IT industry go through the roof, so do the complexities involved in the working of software products. With new breakthroughs making headlines every day, the inherent intricacies of these applications are presenting never-before-seen challenges to the software testing community. Safe to say, manual testing is fighting a losing battle against automation. We present 7 reasons why automation testing has already won the race against manual testing:

  1. One Mistake And We Are Back To Square One:
    This reminds me of the aphorism that one bad apple can spoil the whole lot. Nowadays, mobile applications are built upon ever-growing feature sets. Every fortnight or so, each one of us is flooded with requests from the Google Play Store to update our apps. As the sophistication of everyday apps escalates, the requirement for new test cases increases with the same frequency. However, the reality is not so straightforward.

    Just a single update can cause the entire feature set to require a complete makeover. This also calls for the automated tests to be maintainable over the extended coverage of features affected after every new update. If not, then all the tests will have to be rewritten with an enormously large number of modifications. Such a scenario is humanly impossible for manual testers to keep up with, and is evidently not a sustainable model of working.

  2. Sharpened Potency Of Test Cases:
    As test effectiveness is broadly defined as the rate at which the testing methodology detects bugs in the development cycle of the product, automated testing comes out as a clear winner over manual testing. This enhanced effectiveness results in a profoundly better quality of end product, thereby building on the all-important customer satisfaction and expanding a loyal user base.
  3. Repeatability’s Cure Is Automation:
    Supporters of manual testing hide behind the banner of the low cost of testing. Though the situation is manageable for projects built on a small scale, the story is quite the opposite with applications of larger ambitions. Agreed, the use of customized automation tools is hard on the pocket, but a giant-sized project with its innumerable repetitive tests is a completely different proposition altogether.

    Considering the effectiveness of re-usable automated tests that can be rerun ‘n’ number of times with no additional costs, it is quite hard to overlook the tremendous return on investment your project gets.

  4. Efficiency Across Different Platforms:
    With each new release of, say, a mobile application, its quality and ease of working should be replicated consistently over varied hardware configurations – for example, moving WhatsApp from Android devices to BlackBerry. This requires the source code to be modified and a consequent test run repeated each time. Manually completing these tasks for all development cycles would be cumbersome and hurt efficiency across all variants.
  5. Humans vs. Machines:
    It’s a no-brainer that when it comes to delivering accurate results with countless tests spanning painfully long cycles, something very common with huge projects, automated testing is unmatched. Even with the best of expertise, manual testing cannot match it in avoiding errors and catching crucial fine details.

    Also, with complex sophistication being the norm for most apps, automated testing is some distance ahead of manual testing in uncovering hidden issues.

  6. Value Addition:
    Workers released from the task of carrying out repetitive manual testing can gainfully divert their creative energies towards building more robust test cases with even more innovative features. This will obviously lead to organizations reaping the twin benefits of ameliorated product quality as well as the skill upgradation of their testers.
  7. Combination with DevOps:
    With cent percent coverage and quick scripting of test cases being the hallmarks of automated testing, it’s not so difficult to see why automated testing is a natural attribute of a DevOps environment.

Conclusion:

With so many factors having a deep bearing on the overall efficiency and the subsequent cost benefits to an organization, it wouldn’t be incorrect to say that automated testing is now a veritable asset to have.

The Role of Software Testing in a DevOps World

By Rajiv Jain (CEO, ThinkSys)

Mark Berry said, “Programmers don’t burn out on hard work, they burn out on change-with-the-wind directives and not ‘shipping’”. I don’t know about the first but the desire to “ship” seems to be a powerful motivation for the push towards the adoption of DevOps practices among software development teams and companies everywhere. That and the business benefits obviously. I remember a Puppet Labs survey from a couple of years ago that showed that of the organizations surveyed, those that adopted DevOps shipped code up to 30 times more frequently with lead times of as little as a few minutes. They also experienced only half as many failures and had the ability to recover from those failures up to 12 times faster. These are not just numbers, I recall reading about one of the early DevOps adopters, Etsy. Apparently they update their site every 20 minutes without any service disruption to over 20 million users – that’s a phenomenal competitive edge to possess. Small wonder then that the DevOps tide is rising.

Across most SaaS-based product development efforts, the approach seems to be to bring Development and Operations together in a largely automation-driven effort to keep pushing releases out, more or less continuously. Given the sheer number, it’s perhaps not even fair to term these as releases any longer! Coming from a company with a significant software testing practice, though, an interesting facet of this partnership between Development and Operations occasionally occurs to me. It seems to me that a third, equally interested party, Testing and QA, may not be getting the attention it deserves in this discussion. In fact, the case has sometimes been made that the greater degree of automation and the souped-up release cycles imply a reduced role for software testing. I couldn’t disagree more.

Let’s go back to the Puppet Labs survey I quoted earlier – notice that one of the key benefits stated was the reduced rate of failures and the ability to recover from those failures quickly. Clearly the aim is not just to ship code, it’s to ship market-ready product on a “Continuous” basis. To me it seems that would be impossible to achieve without an even greater emphasis on testing. Agile had already brought out the need to involve testing strategy at a very early stage of product planning – how else to keep up with the short release cycles? DevOps, with its Continuous Integration and Continuous Delivery, is, in this context, like Agile on steroids, and the need, thus, is even greater to get testing involved early in the product definition and planning stage. In many ways, the approach now has to be to plan to prevent defects rather than detect them.

So, in DevOps testing comes in early, plans for the way the product is going to pan out and prepares itself accordingly so that as code starts rolling out it gets tested. This is a significant change, at least of mindset. Earlier the development used to get done and the testing would start – today these seem to have to go, more or less, in parallel. Many have called this Continuous Testing – a perfectly valid term.
Another significant change that this “Continuous” model is engineering is the integration of functions. DevOps teams now tend to contain people from Development, Operations and, increasingly, Testing. Without that level of integration, getting quality code out in such short time frames would be impossible. This integration is also throwing up new roles for everyone, including testing. Carl Schmidt, CTO of Unbounce explains it well, “I’m of the mindset that any change at all (software or systems configuration) can flow through one unified pipeline that ends with QA verification. In a more traditional organization, QA is often seen as being gatekeepers between environments. However, in a DevOps-infused culture, QA can now verify the environments themselves, because now infrastructure is code.”

That statement points to a third significant change for testing in the DevOps way – what to test? Now that Operations, i.e. the nuts and bolts that get the SaaS product out to the millions of subscribers, is a part of the effort of getting the product out, is it not necessary to test that too? There are also some subtle changes of emphasis that emerge here – functional testing is always important, but now there is a premium placed on testing for load conditions and for performance. The environment is now as much a part of the SaaS product as the code. There is a role here that testing can excel in – clearly testers know more than most about the difficulties of taking the application code, deploying it in test environments, testing it, and then moving it out to production. The process may have become shorter and sharper but the skills are the same.
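
Since “infrastructure is code”, QA can write assertions against the environment definition itself. Here is a minimal sketch of that idea, assuming a hypothetical environment manifest represented as a plain dictionary; real teams would run such checks against Terraform, Kubernetes, or similar definitions with purpose-built tools.

```python
# Hypothetical policy the environment must satisfy (illustrative values).
REQUIRED = {
    "runtime": "python3.11",
    "min_instances": 2,       # no single point of failure
    "tls_enabled": True,
}

def verify_environment(manifest):
    """Return a list of violations; an empty list means the environment passes."""
    problems = []
    if manifest.get("runtime") != REQUIRED["runtime"]:
        problems.append("unexpected runtime")
    if manifest.get("min_instances", 0) < REQUIRED["min_instances"]:
        problems.append("too few instances for failover")
    if not manifest.get("tls_enabled", False):
        problems.append("TLS must be enabled")
    return problems

staging = {"runtime": "python3.11", "min_instances": 2, "tls_enabled": True}
print(verify_environment(staging))   # an empty list means the check passed
```

The point is the shift in what counts as a test subject: the deployment environment is now an artifact that can fail a QA check just like application code can.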

I like a quote, sadly unattributed, that I read somewhere, it goes, “Software testers do not make software, they only make them better.” In the context of DevOps it would seem that finally they are getting an opportunity to not only make software but also to make it better!

Top 4 Mobile App Development Mistakes to Avoid

Take a look at some of these fun statistics:

  • There are 6.8 billion people on the planet. 5.1 billion of them own a cell phone. (Source: Mobile Marketing Association Asia)
  • It takes 90 minutes for the average person to respond to an email. It takes 90 seconds for the average person to respond to a text message. (Source: CTIA.org)
  • 70% of all mobile searches result in action within 1 hour. (Source: Mobile Marketer)
  • 91% of all U.S. citizens have their mobile device within reach 24/7. (Source: Morgan Stanley)

Today, mobile browsing accounts for anything from one-third to one-half of all web traffic worldwide, and mobile users spend over 80% of this time on mobile apps. Be it for social networking, emailing, texting, gaming, or making purchases, having a great mobile app has become a business imperative. Hundreds of new apps hit the app stores every day. The question is, how can app developers ensure that their app is the best app to use? How can developers make sure theirs is not some buggy app that the user deletes almost as soon as it is downloaded? In this blog, we talk about four mobile app development mistakes to avoid in order to develop great mobile apps.

  1. Platform
    Not paying attention to the platform choice for mobile app development can easily be the biggest mistake developers make. The right platform choice for developing mobile apps is a critical contributor to mobile app success, as it defines the approach to the development process. So should you go native or hybrid? A native app is developed specifically for a mobile operating system, such as Java for Android or Swift for iOS. These apps are known for their great user experience, since they are developed within a mature ecosystem with consistent in-app interaction. They are also easier to discover in the app store, have access to the device hardware and software, and enable users to learn the app fast.
    Hybrid apps, at heart, are websites packaged as native apps with a similar look and feel. Hybrid apps are built using technologies like HTML5 and JavaScript. They are then wrapped in a native container that loads the information of the page in the app as the user navigates across the application. Unlike native apps, hybrid apps do not download information and content on installation. However, hybrid apps have one code base, which makes them easier to port across multiple platforms, and they can access several hardware and software capabilities via plug-ins. These apps have cheaper origination costs and a faster initial speed to market. Since hybrid apps are platform-agnostic, they can be released across multiple platforms without having to maintain two different code bases.
    Along with these are browser-based apps. These applications run within the web browser, with instructions written in JavaScript and HTML. Embedded in the web page, these applications do not require any downloading or installation and can run easily on a Mac, Linux, or Windows PC.
    So how do you decide between these options? If you need to get your app into the market within a very short period of time or check the viability of the market then hybrid is the way to go. On the other hand, if you need an app that has a great user experience then you should be taking the native direction. The platform decision thus rests purely on the business requirement and the time that you need to move things to production.
  2. UI and UX
    Leaving app design as an afterthought can be a huge and costly mistake. High-quality design ensures higher engagement which translates to higher ROI. Strong design also makes sure that all future app updates can be done easily and have lower support costs.
    To build an app with a great UI and UX design, developers first need to understand the behaviour of their target market to build the foundation of the app’s functionality, and then move to app design. The User Interface or UI defines how an app will look, while the User Experience or UX design needs to ensure that the app fulfills the emotional and transactional response it is supposed to generate. Developing the functionality of the mobile app before developing the UI and UX can be counterproductive, as this makes it difficult for mobile app developers to remain true to the overall UI/UX design and deliver great app experiences.
    To design a great mobile application, developers can design UX mock-ups first without worrying about the UI design. This enables them to ensure that the app feels at home across platforms and form factors. Once this is done, developers can move to UI design to build the visual appeal and usability of the app.
  3. Developing Bug Free Apps
    Since mobile app users are picky and impatient, they have zero tolerance for slow, buggy apps. While developers need to keep their eye on the ball and develop a great app, no app is truly great if it has not gone through rigorous testing to identify any shortfalls and bug fixes that the app might need. Along with this, it is equally essential to test the right things at the right time. For example, functionality testing should happen at the beginning, during app development, and should examine, amongst other things, all the media components, script and library compatibility, manipulations and calculations the app might require, submission forms, search features, etc. Performance testing should not only focus on the load but also test the transaction processing speed of the app. Developers need to focus on User Acceptance Testing to ensure that the development happens in a timely manner. For this, User Acceptance Testing should be a part of the development process and should not be left until the last minute, so that feedback is built in as it is received.
  4. It’s a mobile app NOT a mobile website
    Finally, a mobile application is not a mini website. Mobile apps are supposed to provide tailored, crisp and easy mobile experiences and hence need not have the same interface, look and feel as that of a website. This means that developers have to focus on sharper interface design, offer better personalization, send out the right notifications and also make the user experience more interactive and fun. Along with this, mobile apps should have the capability to work when offline. While apps might need internet connectivity to perform tasks, it is essential that they are able to offer basic functionality and content when in the offline mode. For example, a banking app might need internet connectivity for processing transactions. However, it can offer basic functionalities such as determining loan limit, instalment calculations etc. when offline.
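
The offline instalment example above is a good illustration of logic that needs no server round-trip. Here is a sketch using the standard EMI (equated monthly instalment) formula; the function name and the sample figures are illustrative, not from any real banking app.

```python
def monthly_instalment(principal, annual_rate_pct, months):
    """EMI = P*r*(1+r)^n / ((1+r)^n - 1), where r is the monthly interest rate."""
    r = annual_rate_pct / 100 / 12
    if r == 0:
        return principal / months          # zero-interest edge case
    factor = (1 + r) ** months
    return principal * r * factor / (factor - 1)

# Sample: a 500,000 loan at 9% annual interest over 10 years (120 months)
print(round(monthly_instalment(500_000, 9, 120), 2))
```

Because the calculation is pure arithmetic, it can be bundled into the app and offered in offline mode, with only the actual transaction deferred until connectivity returns.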

With an increasing number of users turning away from websites and turning towards mobile applications, mobile apps act as gatekeepers of experience and engagement. App development thus has to be more than just building the app. Developers have to take a structured and strategic approach to app development to ensure that the app delivers on all the metrics required to further business goals by being responsive and reliable and fit in with user expectations.

Automation in Testing over Test Automation

Recent trends in the IT industry are quite indicative of the well-known fact that test automation is becoming all too ubiquitous and that the skills which made testing an investigative and unique profession (also associated with manual testing) are on the wane. However, our aim is not to focus on things which give automated testing an edge over manual testing. There is a stark differentiation between automation in testing and test automation. How the two affect the development of a software product, what experiences are encountered, and the like... this article aims to answer and highlight some of these queries.

The global big shots in the IT business reason that the reputation of the company is at stake and hence they have no alternative but to blindly go for the automation test suites, which will completely nullify the probability of product recalls, repairs, and the ensuing erosion of market share and user trust. After all, it conveniently ticks the boxes of 100% coverage and fast scripting of test cases.

Companies also come up with a genuine economic concern for their lack of faith in putting automation into testing. They suppose that even if they somehow pool together the right kind of skilled manpower to develop an automation suite, it will more or less become a product in itself, with all its support and maintenance costs. What would then happen to the return on investment, they argue.

What the approach should be for Automation in Testing

  • When strategizing towards automating a test, there are a whole lot of preconceived notions related to additional costs that we need to unlearn.
  • The whole thought process should revolve around questions like “How can I test faster?” and “How can I extend its reach for better coverage?”
  • The functionality should be targeted for testing from the lowest possible level.
  • To come up with a test tool that can cope with irregular tweaks in the system, part of the approach should be to overcome the boundaries between development and testing.
  • Some of the tools/techniques which can be a friend in our endeavour are: data management, state manipulation, a critical bug tracker, and log file parsing.
  • Documents like old user manuals can also aid us to a large extent.
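
Of the techniques listed above, log file parsing is the easiest to sketch: scan the application’s logs for error signatures so a test run can fail on problems the UI never surfaces. The log format and messages below are entirely made up for illustration.

```python
import re

# A made-up application log fragment for demonstration purposes.
LOG = """\
2024-01-05 10:01:12 INFO  user login ok
2024-01-05 10:01:13 ERROR payment timeout after 30s
2024-01-05 10:01:15 WARN  retrying payment
2024-01-05 10:01:20 ERROR payment timeout after 30s
"""

def find_errors(log_text):
    """Return the message portion of every ERROR line in the log."""
    pattern = re.compile(r"^\S+ \S+ ERROR\s+(.*)$", re.MULTILINE)
    return pattern.findall(log_text)

errors = find_errors(LOG)
print(f"{len(errors)} error(s): {errors}")
```

A test harness can call such a parser after each run and fail the build whenever the error list is non-empty, turning logs into a first-class test oracle.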

A jigsaw puzzle Architecture:

Just as there are disparate parts to a jigsaw puzzle set, all the aspects of automating a test case need to be considered as individual parts of the same puzzle: for example, a test code that manages data, a test code that creates the browser session, a code for managing user interactions on the page, and, in the end, a code that handles reporting of results. Putting together all these test codes and scaling up this approach can build the test tool we need. While we are at it, we need to carefully analyse the hurdles we encounter and go deeper into them. For instance, if a testing tool we designed for API testing is taking too long, we need to delve into the causes which are impeding its quick response time.
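
The jigsaw idea can be sketched as follows: each concern is its own small piece, and the tool is just the pieces wired together. The class and method names here are illustrative, and the driver is a stub rather than a real browser automation layer.

```python
class DataManager:                       # piece 1: supplies test data
    def test_user(self):
        return {"name": "test-user", "email": "test@example.com"}

class Driver:                            # piece 2: drives the app (stubbed here)
    def submit_signup(self, user):
        # Pretend the application accepted the form if an email was supplied.
        return bool(user.get("email"))

class Reporter:                          # piece 3: records results
    def __init__(self):
        self.results = []
    def record(self, name, passed):
        self.results.append((name, passed))

def run_signup_test(data, driver, reporter):
    """Wire the pieces together into one test case."""
    user = data.test_user()
    reporter.record("signup", driver.submit_signup(user))

reporter = Reporter()
run_signup_test(DataManager(), Driver(), reporter)
print(reporter.results)
```

Because each piece has a narrow interface, the stubbed `Driver` can later be swapped for a real browser driver without touching the data management or reporting pieces, which is exactly the scaling-up the paragraph above describes.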

Where test automation may not be the best fit

  • It doesn’t make any sense to automate tests that require a single repetition or a couple of iterations.
  • When the project requirements for an application are very dynamic. Since the design of an automated test suite requires sufficient resources and enormous labour, it goes against sound economic sense to plan one for a platform that requires regular changes and tweaks.
  • Unless automation in testing is started in close co-operation with the development team at every stage of the development cycle, it becomes difficult to come up with a complete and effective test tool. In that case, why get into it in the first place?
  • Maintenance costs of these suites are quite significant and can be a dampener while gauging the overall economics of the project.
  • It can be time-consuming to automate tests, and therefore their usage should be limited to high-priority ones.

Regression Testing vs. Retesting – Know the Difference

Regression testing vs. Retesting?

Is there a difference at all? Is regression testing a subset of re-testing? Quite a few times, testing teams use these two terms interchangeably. However, there is a vast difference between these two types of testing. Let us have a look –


Regression Testing

Definition:

Regression testing is a type of software testing which is carried out to ensure that the defect fixes or enhancements to the application have not affected the other parts of the application.

Purpose:

The purpose of regression testing is to ensure that the new code changes do not adversely affect the existing functionalities of the application.

When:

Regression testing is carried out in one or any of the following circumstances:

  • Whenever there is a modification in the code based on the change in requirements.
  • A new feature is added to the application.
  • Defects are fixed in the application.
  • Major performance issues are fixed in the application.

Regression testing can be carried out in parallel with re-testing. In many cases, it is seen as a generic form of testing.

Testing Techniques:

Regression testing can be carried out using one of the following four techniques:

  1. Re-execute all the test cases in the suite.
  2. Select a part of the test cases which need to be executed with every regression cycle and execute only those.
  3. Based on the business impact, feature criticality, and timelines, select the priority test cases and execute only those.
  4. Select a subset of test cases based on frequently defective areas, highly visible functionality, integration test cases, test-case complexity, and a sample of successful and failed test cases, and then execute only that subset.
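Technique 3 above (priority-based selection) can be sketched in a few lines. The test-case structure and priority labels here are hypothetical, purely to illustrate how a regression cycle might filter its suite by business impact.

```python
# Hypothetical regression suite: each case carries a priority label,
# where 1 = business critical and larger numbers = less critical.
cases = [
    {"name": "login_flow",   "priority": 1},
    {"name": "export_pdf",   "priority": 3},
    {"name": "checkout",     "priority": 1},
    {"name": "profile_edit", "priority": 2},
]

def select_for_regression(cases, max_priority):
    """Keep only cases at or above the chosen priority cut-off."""
    return [c["name"] for c in cases if c["priority"] <= max_priority]

# A tight timeline might run only priority-1 cases in this cycle.
print(select_for_regression(cases, max_priority=1))  # ['login_flow', 'checkout']
```

In real suites the same filtering is usually expressed through the test runner's own tagging mechanism rather than hand-rolled lists.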
Test Cases:
  • Regression test cases are derived from the functional specifications, user manuals, tutorials, and defect reports related to the fixed defects.
  • Regression testing can include the test cases which have passed earlier.
  • Regression testing also checks for unexpected side-effects.
Role of automation:

Automation plays a very crucial role in regression testing because manual testing can be very time consuming and expensive.

Re-testing

Definition:

Retesting is a type of software testing which is carried out to make sure that the test cases which failed in the previous execution pass after the defects causing those failures are fixed.

Purpose:

The purpose of re-testing is to ensure that the previously identified bugs are fixed.

When:
  • Whenever a defect in the software is fixed, re-testing needs to be carried out. The test cases related to the defect are executed again to confirm that the defect has indeed been fixed. This type of testing is also referred to as confirmation testing.
  • Re-testing has to be carried out prior to regression testing.
  • Re-testing is planned testing.
Testing Techniques:

Re-testing needs to ensure that the testing is executed in the same way as it was done the first time – using the same environment, inputs, and data.

Test Cases:
  • No separate test cases are prepared for Re-testing. In Re-testing, only the failed test cases are re-executed.
  • Re-testing does not include the test cases which have passed earlier.
  • Re-testing only checks for failed test cases and ensures that originally reported defects are corrected.
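The selection rule above – re-execute only what failed last time – can be sketched as follows. The results structure and case names are hypothetical, chosen just to make the rule concrete.

```python
# Hypothetical results of the previous test execution.
previous_run = {
    "login_flow": "passed",
    "checkout": "failed",
    "export_pdf": "failed",
    "profile_edit": "passed",
}

def cases_to_retest(results):
    """Re-testing touches only the previously failed cases, never the passed ones."""
    return sorted(name for name, status in results.items() if status == "failed")

print(cases_to_retest(previous_run))  # ['checkout', 'export_pdf']
```

Test runners often support this pattern directly; for example, pytest can re-run only the failures from the previous session via its `--last-failed` option.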
Role of automation:

Re-testing on its own cannot be planned for automation in advance, since which test cases need re-execution depends on the defects found during the initial testing.

A Cloud Migration Checklist

An increasing number of enterprises today are migrating to the Cloud. A survey conducted by RightScale, a cloud automation vendor, confirmed this trend and revealed that:

  • 93% of respondents reported that they are adopting the cloud.
  • 88% of the respondents reported using the public cloud.
  • 63% of the respondents use the private cloud.
  • 58% of respondents use both private and public cloud.

Migrating to the cloud presents enterprises with some obvious benefits: increased availability, better performance and clear cost advantages. Research conducted by agencies such as Gartner, Ovum, Forrester, International Data Corporation (IDC) and others agrees – “the global SaaS market is projected to grow from $49B in 2015 to $67B in 2018, attaining a CAGR of 8.14%,” and cloud applications are expected to account for a growing share of worldwide mobile traffic by 2019. Goldman Sachs also estimates that the “cloud infrastructure and platform market will grow at a 19.62% CAGR from 2015 to 2018, reaching $43B by 2018”.

However, when migrating to the cloud, enterprises have to ensure that their initial footprint in the cloud is compatible with the technology stack present in the cloud platform of their choice. They also have to ensure that the platform can scale comfortably to suit growing business and user requirements. A strategic approach is thus essential: migration planning should consider how the enterprise intends to do business, so that this becomes an inherent part of the cloud strategy.

Cloud migration is the process in which data, applications or other business elements are moved from onsite computers to a cloud infrastructure, or from one cloud infrastructure to another. In this post, we will shine a light on some essential components of a cloud migration checklist. We hope this will help enterprises looking to migrate to the cloud do so seamlessly and reap the real benefits of the move.

Network architecture:
To take complete advantage of the cloud, enterprises need to make sure that their network infrastructure is set up for it. Traditional network infrastructures may suffer poor application performance or even be exposed to security vulnerabilities. Before making the move, enterprises therefore need to make sure their network is well designed and cloud-optimized: routing is optimized, WAN performance is reliable and low-latency, and device support is in place. Taking a holistic approach to the network architecture thus becomes the foundation of a successful cloud migration.

Application architecture:
While moving applications to the cloud might look simple, in reality it takes a lot of careful planning to execute well. Before migrating, architects need to evaluate whether legacy applications should be replaced, assess which applications will get the most out of the cloud investment by doing an inventory assessment, and then plan the move. Typically, enterprises should avoid moving systems in large chunks and should ensure that the first movers are not the most business-critical or tricky ones. Mission-critical workloads, legacy applications and sensitive data might not be the best first movers to a public cloud. Treating the cloud as a logical extension of the current landscape and assessing application dependencies thus becomes an essential part of the cloud migration checklist.

Business continuity plan:
Having a business continuity plan should also form an essential part of the cloud migration journey, as vulnerabilities, natural or man-made (think the Japan earthquake or the Amazon outage in 2011), can sometimes disrupt business. Enterprises need to build diversity into their disaster recovery and business continuity systems and should be able to run on a number of different infrastructures. Evaluating options for business continuity and designing systems and configurations that enable a high level of automation should find a significant spot on a cloud migration checklist.

Evaluating costs:
So, should you opt for a private, public or hybrid cloud? While cost efficiency is a big reason why enterprises move to the cloud, it is important to remember that the financial benefits differ from one application to another. Applications running on legacy hardware can be more expensive to run in the cloud. Identifying the technical requirements, gathering performance data, and uncovering any hidden expenses of migrating helps in planning network and bandwidth costs and in deciding which cloud flavor best suits the enterprise.

Governance and security:
Since traditional on-premise systems will not work as-is in the cloud, enterprises have to re-evaluate their governance approaches. As much of the governance responsibility rests with the cloud provider once the move is complete, enterprises need to reshape their governance strategies to rely more on the cloud's offerings than on their internal security. Assessing the cloud provider's security certifications thus becomes important. Planning ahead for failovers, potential breaches and disaster recovery is also a critical part of a cloud migration checklist.

Conclusion:
As enterprises assess the benefits and risks of a move to the cloud, it is important to note that cloud migration does not have to be an ‘all or nothing’ proposition. With careful assessment, enterprises can begin by moving some applications and services to the cloud while continuing to operate the rest on-premise. Once all the boxes in this checklist have been ticked, enterprises can achieve rapid cloud transformation and embrace the power offered by the cloud.

Role of Test Automation in Three Testing Areas

As technology becomes more embedded into business processes, testing is becoming an area of strategic importance. While manual testing cannot be completely done away with, test automation is the way to go in order to ensure the effectiveness, efficiency and wider coverage of software testing. Gartner notes that in order to be agile, testing needs to be automated.

  • Unit Testing:
    Unit testing is a very important part of the software testing process since it is the small code fragments that are tested here. It involves checking the pieces of the program code in isolation and independently to ensure that they work correctly. Since unit tests mean checking the source code, the test suite should be a part of the build process and scheduled to run automatically every day so that testing happens early and often. Manually testing these pieces of code can be resource and time intensive and can lead to a number of errors. By automating unit tests, testers can ensure that the source code within the main repository remains healthy and error free and there is a line of defence built against bugs. While it might take 10-30% more time to complete a feature, automating Unit tests ensure that:
    1. Problems are found early in the development cycle.
    2. Ensures that the code works now and in the future.
    3. Helps developers become less worried about changing codes.
    4. Makes the development process more flexible.
    5. Improves the ‘truck factor’ of the project.
    6. Makes development easier by making it predictable and repeatable.
  • Functional Testing:
    Functional testing is a technique that covers the functionality of the software and ensures that the output matches the required specifications. It also has to ensure that test scenarios, including boundary cases and failure paths, are accounted for. While functional testing is not concerned with the internal details of the application, it tests in great detail what an application ‘does’. During functional testing, developers need to set benchmarks that are developer-independent in order to identify what they have not achieved. As soon as a function is developed it needs to be tested thoroughly, and this process has to continue until application development is complete. Since the user will ultimately run the application on a system alongside other applications, and the application has to endure different user loads, developers also need to make sure that every function is crash-resistant. Testing continuity, keeping application-grade tests as external as possible, white-box testing, etc. are some of the areas that functional testing has to cover. Doing all of this manually is not only time consuming but also leaves room for error. A functional testing strategy should incorporate:

    1. Test purpose.
    2. Project creation.
    3. Test construction.
    4. Test automation.
    5. Test execution.
    6. Results check.

    Test automation makes it easier to perform powerful and comprehensive functional tests that will help develop a robust product.

  • Performance Testing:
    Performance testing assumes a big role in the testing and QA process as it helps in identifying and mitigating performance issues that can derail the entire application. Performance testing is sometimes also associated with stress, load or volume testing, as it aims to gauge the system’s ability to handle varying degrees of system transactions and concurrent users, along with assessing its speed, stability, and scalability. Automating the test process during performance testing is essential, as the goals of a performance test comprise several factors, including service-level agreements and the volumetric requirements of a business. Testers need to exercise system transactions and user transactions concurrently, measure system response time against the non-functional requirements, and determine the effectiveness of a network, computer, program, device or software. During performance testing, testers need to measure the response time or the number of MIPS (millions of instructions per second) at which a system functions. They also need to evaluate qualitative attributes such as scalability, reliability, and interoperability to ensure the product meets the required specifications. Understanding the context to assess problem areas, building test assets, running tests and analyzing the data to understand performance bottlenecks are some of the things testers need to do.
    Since performance testing is intensive and exhaustive, testers should leverage test automation once the test strategy has been defined.
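As a toy illustration of the performance-testing idea above – measuring response time under concurrent users – the sketch below runs a simulated transaction from many threads at once and reports the worst-case latency. The simulated operation, the user count and the threshold are all hypothetical; real performance tests use dedicated load-generation tools against the actual system.

```python
import threading
import time

def simulated_transaction(latencies, index):
    """Stand-in for one user's transaction; records how long it took."""
    start = time.perf_counter()
    time.sleep(0.01)  # hypothetical system work
    latencies[index] = time.perf_counter() - start

def run_load(concurrent_users):
    """Fire all 'users' concurrently and return the worst response time."""
    latencies = [0.0] * concurrent_users
    threads = [
        threading.Thread(target=simulated_transaction, args=(latencies, i))
        for i in range(concurrent_users)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return max(latencies)

worst = run_load(concurrent_users=20)
# Compare the worst-case response time against a hypothetical SLA of 1 second.
print(worst < 1.0)
```

The same shape – generate concurrent load, collect per-transaction timings, compare against the agreed service levels – is what dedicated performance tools automate at scale.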

Conclusion:
Taking a strategic approach to testing automation and viewing it as a part of the development process makes sure you have a product that is flawless and delivers on all the desired metrics. This ultimately leads to higher quality software, greater customer satisfaction, and higher ROI.