Node.js: A Great New Way to Build Web Apps

The past few years have seen Node.js explode onto the application development scene like a superhero. Despite its humble beginnings with Yammer and Voxer, Node.js established its authority quickly and is now seeing great mainstream adoption, with giants like Walmart and PayPal putting their trust in it; Netflix, too, moved its website UI from Java to Node.js. Once the underdog, Node.js has established its credibility and superhero status within the enterprise, and an increasing number of developers are adopting this open-source, cross-platform runtime environment to build fast and scalable web applications. Ever since its launch, Node.js has been seen as a cool and trendy server-side platform that attracts the developer community. A great thing about Node.js is that applications for it are written in JavaScript, making it a great choice for developing real-time applications. Apart from this, Node.js is packed with a host of other features that make it ideal for building web applications. So what makes Node.js so great?

  • Neutral Language
    Since Node.js runs JavaScript, the same language can be used on the front end as well as the back end, breaking down the boundary between front-end and back-end development and making the development process more efficient. Considering that JavaScript is used by a majority of developers, this saves them the trouble of translating code and helps in managing development time and cost. With the Node.js framework, developers do not need to re-express their logic in a separate server-side language, and they do not have to translate HTTP data from JSON (the data-interchange format) into server-side objects. Since Node.js is generally understood by both the Java and .NET camps, it is easy for developers to deploy it on both Unix and Windows infrastructures. Node.js uses Google's V8 engine, which also powers Chrome; V8 is written in C++ and delivers exceptional running speed because it compiles JavaScript directly into native machine code.
  • Scalability
    Node.js effectively solves the concurrency problems that plague developers universally. Concurrency problems in server-side programming languages often cause poor performance and limit the throughput and scalability of an application. With Node.js, developers get an event-driven architecture and a non-blocking I/O API that take care of these issues. Node.js is also built to handle asynchronous I/O from the ground up, which helps with many web development problems.
    Node.js can also split a single process into multiple processes, called workers, through the cluster module. This module allows developers to create child processes that function under the parent process, communicating with the parent Node process by sharing server handles and by using IPC (inter-process communication). Furthermore, applications in Node.js are easier to scale: developers write simple code and Node.js takes over from there. Instead of using processes and threads, Node.js uses a single event loop with defined callbacks; the server enters the loop after executing the input script and processes callbacks as events occur. Simply put, Node.js helps applications execute common tasks, like reading or writing to the filesystem, network connections or databases, with ease and speed, and makes applications capable of managing a large number of simultaneous connections with high throughput.
  • Built-in support
    Node.js has built-in support for package management through npm, the Node Package Manager, a tool that comes by default with Node.js installations. The public npm registry is an online repository of over 300,000 reusable packages with dependency and version management. This ecosystem is open to all and encourages sharing, giving developers more scope to create effective solutions by letting them update, share and reuse code with ease.
  • Great for real-time web applications
    Developing real-time web applications, such as chat and gaming apps, in Node.js is extremely easy. Developers do not need to concern themselves with low-level sockets and similar protocols. Node.js lets developers write JavaScript on both the server and the client side, and facilitates automatic data synchronization by sending data between client and server so that changes on the server are immediately reflected where required. Applications in Node.js are composed of small modules piped together, which ensures that, unlike monolithic applications, they do not creak under unseen weight and stress. This also makes adding new functionality much easier, as changes do not need to be made deep inside the codebase.

    Along with all this, Node.js can also be used as a proxy server if the enterprise does not have dedicated proxy infrastructure. It also supports true data streaming: instead of treating HTTP requests and responses as isolated events, it processes them as streams, which reduces processing time. Node.js applications are also capable of dealing with high loads: in 2013, Walmart put its entire Black Friday traffic through Node.js, and its servers did not exceed 1% utilization despite having over 200,000,000 users online.

    Node.js has the features that make it most appealing to the developer community and also render it enterprise-ready: it is easy to scale, secure and easy to learn. Its asynchronous input/output operations also take care of the low-latency issues that plague most tech companies. Adding it all together, it is clear that organizations can achieve more with Node.js, as products can be built with half the number of developers. It can also reduce the number of servers required to service clients and increase app performance, cutting load times by almost 50%. Given the industry's increasing confidence in Node.js, it is quite clear that its future is indeed bright.

Software Testing Life Cycle

We are all aware of a tree's life cycle, in which a small seed goes through distinct phases to gradually grow and develop into a large tree.

A similar concept of a life cycle is followed in the software engineering field, mainly in the development life cycle and the testing life cycle: the former covers the gradual development of business or functional requirements into a software application, while the latter covers the testing of that application from scratch through to the release of a quality product. Since this article is not concerned with the development life cycle, we will discuss the testing life cycle only.

What is Testing Life Cycle?

The development life cycle is followed by the testing life cycle. A testing life cycle comprises several phases and activities, aligned sequentially, to initiate, execute and terminate the testing process.

A software testing process can be initiated as soon as the development process begins and may be carried out in parallel with development activities. This can be understood through the V model of development (verification and validation), where a corresponding test activity is defined for each development phase.

Now, coming back to the testing life cycle, it mainly consists of the following phases, carried out in sequence.

Let's find out what each phase consists of and is responsible for.

1. Requirement Analysis:-

The very first phase of the software testing life cycle involves the study and analysis of the available requirements and specifications. Both functional and non-functional requirements are viewed and studied from the testing point of view, to find the testable requirements, i.e. those requirements which produce results when fed with input data.

When to go for it?

  • On the availability of requirements and specifications.
  • When the application architecture is available.


  • Brainstorming sessions for the requirement analysis and feasibility.
  • Identifying and sorting out the requirement priorities.
  • Creating the requirement traceability matrix (RTM).
  • Identifying the suitable test environment.
  • Identifying which requirements are suitable for automated testing and which for manual testing.


The requirement analysis stage involves the combined efforts of the QA team, project manager, test manager, system architect, business analyst, client and major stakeholders, so as to reach a greater understanding of the requirements and, subsequently, better outcomes.


  • Testable Requirements.
  • Requirement Traceability Matrix (RTM).
  • Automation feasibility report (if applicable).
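As a rough sketch of one deliverable above, a requirement traceability matrix can be as simple as a mapping from each testable requirement to the test cases that cover it (all IDs here are invented for illustration), which makes uncovered requirements easy to spot:

```javascript
// A minimal requirement traceability matrix (RTM): each requirement ID maps
// to the test cases that verify it. IDs are illustrative.
const rtm = {
  'REQ-001': ['TC-101', 'TC-102'],
  'REQ-002': ['TC-103'],
  'REQ-003': [],               // no coverage yet
};

// List requirements with no covering test case.
function uncovered(matrix) {
  return Object.keys(matrix).filter((req) => matrix[req].length === 0);
}

console.log(uncovered(rtm)); // [ 'REQ-003' ]
```

In practice the RTM usually lives in a spreadsheet or test-management tool, but the idea is the same: every requirement must trace to at least one test case, and the matrix makes gaps visible at a glance.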

2. Test Planning:-

With the information gathered about the requirements in the previous phase, the QA team moves a step ahead in planning the testing process. Basically, a strategy (or strategies) for the testing process and activities is defined and described.

When to go for it?

  • On the successful completion of the requirement analysis phase.
  • When testable, refined and clear requirements have been defined and specified, i.e. when the requirement documentation is available.
  • When there is a good understanding of the product domain.
  • When the automation feasibility report (if any) is available.


  • Scope and objectives are outlined.
  • Deciding the testing types to be performed along with the specific strategy for each of them.
  • Roles and Responsibilities are determined and assigned.
  • Identifying the resources and testing tools required for the testing.
  • Estimating the time and the efforts to carry out the testing activities.
  • Defining and detailing the test environment.
  • Defining the time schedules.
  • Entry and exit criteria, along with suspension and resumption criteria, are defined.
  • Planning the training activities and sessions required by the testers (if any).
  • Risk analysis is carried out.
  • The change management process is specified and described.


As per the requirements and availability, the QA Manager or QA lead is accountable for planning the testing process.


  • Test Plan documentation
  • Time and effort estimation documentation.

3. Test Case Design & Development:-

The requirements have been analysed and, accordingly, the QA team has come up with a test plan. Now it's time to do some creative work and give this test plan shape in the form of test cases. Based on the test plan and the detailed requirements, test cases are designed and developed for the purpose of verifying and validating each and every requirement specified in the documentation.


  • Test cases are designed, created, reviewed and approved.
  • Relevant existing test cases are reviewed, updated and approved.
  • Automation scripts (if any) are developed, reviewed and approved.
  • Relevant test data are generated or imported from the development environment.
  • Test conditions, along with the input data and expected outcome for each test case, are defined and specified.


Generally, the testers have the job of writing the test cases, under the supervision of the QA lead or QA manager. However, the testers may be assisted by the developers in generating effective automation test scripts.

When to prepare/create test cases?

  • On the availability of software requirement specification (SRS) and business requirement specification (BRS).
  • When the test plan is ready.
  • Automation feasibility report (if any) is available.


  • Test cases including automation scripts.
  • Test Coverage Metrics.
  • Test Data
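To make the deliverables concrete: a test case boils down to an ID, input data and an expected outcome, and the design work is enumerating those triples. A minimal, framework-free sketch (the username validator and its rules are invented for illustration):

```javascript
// Hypothetical function under test: validates a username, for illustration.
function isValidUsername(name) {
  return typeof name === 'string' && name.length >= 3 && name.length <= 12;
}

// Test cases as data: an ID, input data and the expected outcome.
const testCases = [
  { id: 'TC-101', input: 'alice', expected: true },
  { id: 'TC-102', input: 'ab', expected: false },           // too short
  { id: 'TC-103', input: 'a'.repeat(13), expected: false }, // too long
];

// Execute each case and compare the actual result with the expected one.
const results = testCases.map((tc) => ({
  id: tc.id,
  passed: isValidUsername(tc.input) === tc.expected,
}));

console.log(results.every((r) => r.passed) ? 'all passed' : 'failures found');
```

Writing cases as data like this also makes the automation-script deliverable a small step away: the same triples can be fed to whatever runner the team uses.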

4. Test Environment Setup:-

The software testing process needs an appropriate platform and environment, encompassing the necessary hardware and software, to create and replicate the conditions and environmental factors intended for the actual testing activities, i.e. the execution of the developed test cases against the software.

  • Test data is set up.
  • Test environment checklist is prepared and the required hardware and software are aggregated.
  • Test server is setup and network settings are configured.
  • Test Environment management and maintenance process is defined and described.
  • Smoke testing of the environment is done to check its readiness.
  • Testers are equipped with bug reporting tools.


The QA team, under the supervision of the QA manager, sets up the test environment.

When to set up Test Environment?

  • When test data is ready for use.
  • Test Plan documentation is available.
  • Needed resources such as hardware, software, testing tools & framework, server, etc. are available.

However, the test environment set up phase may be carried out concurrently with the test case design & development stage.


  • Test Environment is set up and ready to execute tests.
  • Smoke Test Results.

5. Test Execution:-

With the test cases, test data and a suitable test environment in place, the QA team is now ready for some actual testing activity. The test execution phase involves executing the developed test cases, with the help of the test data, in the test environment that has been set up.

  • Test Cases execution as per the test plan.
  • Comparison of actual results with the expected outcomes.
  • Identifying and detecting defects.
  • Logging the defects and reporting the identified bugs.
  • Mapping defects with the test cases and accordingly updating the requirement traceability matrix.
  • Re-testing, once a defect gets fixed or removed by the development team.
  • Regression testing (if required).
  • Tracking a defect to its closure.


Test Engineers are deployed to carry out the task of test case execution.

When to go for the test execution?

Equipped with the test strategy, test plans, test cases, test data, a properly configured test environment and the other necessary resources, the QA team can kick off the test execution process.


  • Test Status and results.
  • Bug or Defect Report.
  • Complete and updated Requirement Traceability Matrix (RTM).
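The execute-compare-log loop at the heart of this phase can be sketched in a few lines. The function under test here is invented and deliberately buggy, so one case fails and produces a defect entry of the kind that would feed the bug report and the RTM:

```javascript
// Function under test, with a deliberate bug for illustration:
// it should cap a percentage at 100 but caps it at 99 instead.
function capPercent(n) {
  return Math.min(n, 99); // bug: should be 100
}

const cases = [
  { id: 'TC-201', input: 50, expected: 50 },
  { id: 'TC-202', input: 150, expected: 100 }, // exposes the bug
];

// Execute, compare actual results with expected outcomes,
// and log a defect for each failure.
const defects = [];
for (const c of cases) {
  const actual = capPercent(c.input);
  if (actual !== c.expected) {
    defects.push({ testCase: c.id, expected: c.expected, actual });
  }
}

console.log(defects); // [ { testCase: 'TC-202', expected: 100, actual: 99 } ]
```

Each defect entry carries the test case ID, so mapping the bug back to its requirement through the traceability matrix, and re-testing once the fix lands, is straightforward.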

6. Test Closure:-

The completion of the test execution phase and delivery of the software product mark the beginning of the test closure phase. This phase involves meetings and discussions among the QA team members with respect to test execution and its results. Apart from the test results, other testing-related parameters are considered and reviewed, such as quality achieved, test coverage, test metrics, project cost, adherence to deadlines, etc.


  • Retrospection of the whole testing process.
  • Test life cycle exit criteria are evaluated, along with other essential aspects such as test coverage, quality achieved, fulfilment of goals and objectives, critical business goals, etc.
  • Any need to change the exit criteria, test strategy, test cases, etc. is discussed.
  • Test Results are analysed and reviewed.
  • All the test deliverables such as test plan, test strategy, test cases, etc. are collected and maintained.
  • The Test Closure Report and test metrics are prepared.
  • Defects are arranged severity wise and priority wise.


Generally, the QA lead or the QA Manager is responsible for preparing the test closure report.

When to perform test closure activities?

Generally, test closure activity begins after the completion of test execution activities and delivery of the software product. However, the closure task is not necessarily carried out only after delivery of the application; it may also be performed when testing activities close for other reasons, such as the achievement of targets, cancellation of the project, or the product needing an update.


  • Test Closure Report.
  • Test Metrics.
  • Lessons learned.


In a nutshell, it may be concluded that, like the development life cycle, the testing life cycle also consists of several phases, and each phase involves a large number of activities to carry out the testing process strategically and in an orderly, effective and efficient manner, subsequently ensuring maximum productivity and quality.

Keeping Your Testing and Automation Strategy Relevant

“Evolution is the secret for the next step” – Karl Lagerfeld

The need to change and the ability to adapt to change has been the reason why today, we have grown so much. Without going into the details of human evolution, which will not find relevance in this piece, we want to mention the evolution of technology and how phenomenally it has grown over the last few decades. This growth is only because someone, at some time, identified a ‘chance’ of growth…of doing something better. Bringing about all this change was not easy, yet it was essential and imperative in order to stay relevant.

Just like everything else, change is essential for a test automation strategy to stay relevant. In today's dynamic business environment, where technology changes and advancements are the norm, frequent product upgrades and product evolution are inevitable in order to stay relevant and ahead of the curve. Keeping this in mind, a strong testing strategy is a must for a high-performing, flawless product. The consumer does not have the time, energy or bandwidth to deal with a product riddled with bugs and slowed by poor performance, which makes testing all the more important. For testing professionals, this means building a testing suite that can enable this change.

  1. Much like taking stock of an inventory, testing professionals have to look at the overall test strategy as well as the detailed test plans and test cases, to identify which test plans will remain relevant in the long run and which will become obsolete. Taking this big-picture view, with an eye out for the details, on a regular basis is essential to release upgraded products that are bug-free while keeping development costs under control.
  2. When there is a product upgrade, changing the entire test strategy can become a problem that can snowball into a big expense. Much like product development, having a monolithic test plan with many interconnected parts only slows down the process, since if one test fails the entire testing suite comes to a halt. Having smaller and independent test cases addresses this problem and increases the efficiency of the testing suite.
  3. One more key element is the test data being used. As the product evolves, the conditions it operates under will change, and the testing has to address those changed conditions. Creating test data that reflects the new conditions, getting frequent data dumps from the production team, assessing how tests can be spread across other environments, and using production data dumps to derive relevant test data are some of the issues a test automation suite should cover to ensure pertinence.
  4. Then the big item: test automation! What must be considered is the level of automation incorporated into the overall testing strategy. There has to be a healthy balance of manual and automated testing. Test cases that have to be repeated continually, cases that are time-consuming to run manually and need speed of execution, or cases that are difficult to perform manually are ripe for automation.
  5. One way to ensure continued evolution of the automation suite, in line with the evolution of the product it is testing, is to consider the test automation suite itself a product: one that needs frequent iterations and upgrades. Selecting the right testing tool, designing the framework and its features, preparing the test bed, managing schedules and timelines, and iterating on the testing automation deliverables are some of the key contributors to a test automation strategy that stays relevant. Along with this, testers have to focus on the maintainability of the test automation suite, so that when the product changes the suite can adapt and deliver what is expected with minimal effort. Just as software needs to be maintained, a test automation suite, too, needs maintenance. Thus, treating your testing assets like any other piece of software becomes a critical contributor to the relevance of the test automation suite. Charting the life cycle of the testing suite, much like software maintenance, and identifying maintenance needs such as preventive, corrective and adaptive maintenance are important for the longevity of a test automation suite.
  6. Creating test automation suites that anticipate, or allow, changes in the UI also ensures that the suite can work with future versions of the product. To make this happen, testers can build test suites that divide tests into independent parts, allow keyword-driven testing and support multiple scripting languages, amongst other things. What testers need to bear in mind at all times is that, for dependable test automation suites, assessing the validity of the suite with each product iteration reduces the burden of test maintenance. Proactively adding the needed tests and removing redundant ones after each product release increases the life of the test automation suite.
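Point 2 above, small and independent test cases, can be sketched with a tiny runner: each test runs in isolation, so one failure is recorded without halting the rest of the suite. The tests themselves are invented for illustration.

```javascript
// A minimal runner for independent test cases: each test is a function, and
// a failure in one does not stop the others from running.
const tests = {
  'adds numbers': () => {
    if (1 + 1 !== 2) throw new Error('math is broken');
  },
  'always fails': () => {
    throw new Error('deliberate failure');
  },
  'joins strings': () => {
    if ('a' + 'b' !== 'ab') throw new Error('concat is broken');
  },
};

const summary = { passed: [], failed: [] };
for (const [name, fn] of Object.entries(tests)) {
  try {
    fn();
    summary.passed.push(name);
  } catch (e) {
    summary.failed.push(name); // recorded, but the loop keeps going
  }
}

console.log(summary.passed.length + ' passed, ' + summary.failed.length + ' failed');
```

Contrast this with a monolithic script in which one thrown error aborts the whole run: here the deliberate failure is logged and the remaining tests still execute, which is exactly the property that keeps a large suite useful as the product evolves.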


By building a strong test strategy that can remain relevant for a long time, testers provide developers the confidence to refactor legacy code and build solid and stronger products. Building a strong test strategy or a robust automation suite is a labor of love for testers that needs a lot of thought and nurturing. Once that is achieved, the test automation suite and the tester want nothing more than to live happily ever after.

Made For Each Other: How a Dating Site Leveraged the Power of Test Automation

Over the years, test automation has become an indispensable part of the product development strategy of most companies. In these days of extreme pressure to go-to-market faster, it seems no test strategy is complete without an automation component. This is the story of one of our key customers, a (or maybe THE) leading dating and matchmaking site in the game today, and how they gained from adopting a sustained, strategic and comprehensive test automation strategy – oh, and of how we helped them get there.

Our story starts when an updated version of the dating site was in the works. A consumer internet site like this operates under some fairly extreme conditions. Getting your product out into the hands of the target users at a pace faster than the competition is vital. Then there is the need to provide an incomparable, error and trouble-free user experience – in this social age, even the smallest problem could cause users to switch to alternatives. So the name of the game is fast, extremely high-quality product development.

When we entered the stage, our client was facing a dilemma because of the limited number of QTP licenses they had. The choice was either to take much longer to run their 8000+ scripts with the existing number of licenses or to buy additional licenses. In the first case, they stood to lose a possible market opportunity because of the extra time the release would take; the second meant a significant expenditure. Neither was a very palatable option.

We approached the problem differently and suggested a shift to Selenium. As an open-source framework for test automation with wide acceptance in the market, it certainly made sense from a cost point of view. Of course, opting for an open-source option like Selenium had its pros and cons, with some possible compromise on features and support, and possibly a loss of accumulated knowledge. The migration of 8,000+ QTP scripts to Selenium was also a massive task in itself. The major challenge was that there was very limited time to complete the conversion, and every day spent on the effort added costs. The capabilities of the manual testing team, a substantial part of their workforce, and the effective, efficient channelization of their efforts and time over the transition period were also big concerns.

Our solution was to leverage our own Krypton, a feature-rich, hybrid, scriptless test automation framework. A glimpse at some of its features: parallel test execution, a sure-shot time saver; support for all the browsers in the market; keyword-driven testing; automated reporting; and parallel recovery. Using Selenium as the base together with Krypton, the team managed to complete the migration and create the brand-new test automation framework.
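Keyword-driven testing, one of the features mentioned above, separates what a test does from how it is done: test steps are rows of keywords plus arguments, dispatched to small action functions, so non-programmers can author tests as data. The sketch below is a generic illustration of the idea, not Krypton's actual implementation; the keywords, selectors and URL are invented.

```javascript
// Generic keyword-driven sketch: each step names a keyword plus arguments,
// and a dispatch table maps keywords to implementations. In a real Selenium
// setup these actions would drive a browser; here they just record events.
const log = [];
const actions = {
  open:  (url)          => log.push('open ' + url),
  type:  (field, value) => log.push('type "' + value + '" into ' + field),
  click: (element)      => log.push('click ' + element),
};

// A test authored as data, the way a manual tester might write it.
const steps = [
  ['open', 'https://example.com/login'],
  ['type', '#user', 'alice'],
  ['click', '#submit'],
];

for (const [keyword, ...args] of steps) {
  actions[keyword](...args);   // dispatch each keyword to its action
}

console.log(log.length + ' steps executed');
```

Because the test is plain data, manual testers can write and maintain it without scripting, which is one reason keyword-driven frameworks suit teams transitioning from manual testing.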

Now that this phase of the effort has been put to bed we can look back at the results with pride. For one, the automation framework delivered on its promise of achieving greater code coverage in a relatively short span of time. This means a better-tested product going out into the market faster.

Krypton has been designed in such a way that it was relatively easy for the manual testers to understand. Analysis at the end showed that, after training, they were working with the tool within a very short span of 2-4 weeks, a crucial aspect in the timely delivery of the product.

By using this solution, the company was able to achieve tremendous savings in terms of license costs and the efforts of the manual testing resources – almost $3 million by their own estimates.

Today, the new framework is implemented as the preferred choice for automated testing. It has helped the customer increase the efficiency, effectiveness and overall coverage of the testing effort. What we are most thrilled about is the impact we managed to deliver in such a high-pressure situation. As far as this dating site is concerned, perhaps Krypton was that special one it had been waiting for all its life, and now, as a result, love is all around!

Strategies for Testing a Minimum Viable Product (MVP)


Creating a Minimum Viable Product gives entrepreneurs the opportunity to test a product idea and assess the validity of their business plan. The heart of the Lean Startup methodology, an MVP is little more than a rough draft, an outline sketch of a product. However, an MVP is under no circumstances a half-baked product. It is instead a process through which entrepreneurs assess what their customers actually demand in the product versus what they feel the product should do. Developing an MVP is about answering some rudimentary questions, from the theoretical inquiry "Should this product be built?" or "Can we build a sustainable business around this set of products and services?" through to the 'build-measure-learn' feedback loop that tests assumptions about the product by putting the rough draft in front of users. A great number of start-ups favor the MVP approach to software development, as they can communicate their product to their target audience, gather feedback fast and iterate the product according to that feedback.

Considering that the focus and aim of a Minimum Viable Product is to remain, well, 'minimum', companies developing such products are sometimes unlikely to give much emphasis to testing. Since MVPs have a limited objective, performing elaborate tests on them seems like a waste of time and resources. At the same time, however, for the product to gain the validation of the customer it has to pass from one test level to another. Thus, having a test plan for an MVP, too, is important.

A basic test plan could comprise both automated and manual tests. We have written in the past about how, since MVP development does not lend itself to long-term planning, dedicating time and resources to a strong test automation strategy can seem like a waste. Given that the aim of the MVP is to build the leanest possible feature set addressing the core demand of the final product, the final product might turn out quite different from what was initially envisioned, and the automated tests developed as part of the test suite might be rendered completely useless by these product iterations. So what should an MVP test strategy contain?

Writing elaborate unit tests for an MVP may not be required. Since the MVP is open to frequent iterations, exhaustively validating that each unit of the software performs as it should, to build confidence in the written code, is not required. However, we also cannot entirely dismiss unit testing for an MVP. Running a few unit tests once iterations have been made to the code, to see whether the change has introduced defects in functionality or usability, works in favor of the product.

Along with this, it makes sense to conduct some middle-tier tests to ensure that data is being delivered to the other tiers in the desired format. Since it is not essential to test individual components when developing an MVP, testing the module as a whole, to verify the expected output and check the usability of the product, makes better sense. A quick round of integration testing to verify and validate the end-to-end functionality of the connected components also helps in delivering a sound, yet basic, MVP.

UI testing is perhaps the most important test for an MVP. UI tests check how the application works for the user and assess whether all the functionalities of the product are understandable and easy to use. They also assess whether the user can navigate seamlessly through the product without stumbling upon bugs, and the possibility of errors in the various interactions that occur during product use. Considering that the average user is more concerned with the usability of the product than with its underlying structure, UI testing of an MVP becomes all the more important.

Both the developer and the user know that the MVP is a version put out solely for the purpose of market validation. At the same time, you need to put a relatively 'sound' product in front of your target audience to get feedback that holds value and will eventually lead to an elaborate, dependable product. To make sure this happens, startups, entrepreneurs and other organizations looking to develop MVPs have to put some focus on testing. Taking a more global approach to testing, and allocating designated time for it, will only help in developing a product in alignment with the initial vision: one that might be minimal, but in no way poor.

Ruby On Rails vs. PHP

PHP and ROR (Ruby on Rails) are two very widely used and in-demand technologies. Both are dynamic, flexible and fun, concept-driven and easy to learn. This means that you spend less time learning the details and more time on programming concepts, which eventually helps developers build applications faster. Both ROR and PHP are open source and have been around long enough to prove their stability. While PHP had not been very frequent with its upgrades, over the past two years it has had some major releases that have spiraled its popularity even further. Presently, PHP holds 20.1% of the market share while ROR stands at a close 18.91%. In 2016, however, ROR adoption picked up speed, jumping seven points from the previous year and securing its highest ranking ever in the TIOBE index.

Here, we try to take a close look at both to determine which is to be used and when.

PHP is a generic object-oriented programming language that is simple to learn and easy to use. It has a very large community of developers and users and provides extensive database support. A great number of extensions and source code samples are available for PHP, and it can be deployed on almost all web servers and works on almost all operating systems and platforms. PHP also allows for the execution of code in a restricted environment, and offers native session management and extension APIs. Deploying a CMS in a PHP application is phenomenally simple because of the sheer number of frameworks, libraries and resources at its disposal.

Deploying a PHP application is also a very simple process. You can simply FTP the files to a web server, or deploy equally easily using Git, without worrying too much about the web stack. When using frameworks like CodeIgniter, the entire framework directory can be copied straight onto the server and run.

PHP also has a huge web focus. While it is a server-side scripting language that can also be used as a general-purpose programming language, it seems born for the web. It has a high degree of extensibility, which makes it easy to customize in the web app development process. PHP has also addressed earlier issues like object handling, improving the basic object-oriented programming functionality in its upgrades. The latest PHP 7 release boasts explosive performance improvements, drastically reduced memory consumption and easier error handling, among other features.

Since programmers could manipulate the code to suit their requirements, the evolution of PHP led to a lot of bad code. As coding standards improved, the code became more verbose, making it suitable for enterprise usage. PHP is still the go-to language for web applications and web development because of its ability to interact with different databases, but it remains unsuitable for desktop applications. PHP is also a great resource for creating dynamic web pages and for creating internal scripting languages for projects. Some of the big names using PHP presently are Facebook, NASA, Zend and Google. Wikipedia says that PHP is installed on over 240 million websites and approximately 2.1 million web servers.

Ruby, the programming language that powers ROR (or just Rails), is heavily influenced by Perl, Eiffel, and Smalltalk. Rails is a full-stack web application framework that is object oriented, with a dynamic type system and automatic memory management. ROR is a mature framework that enables high-quality products which can be maintained easily. It works on multiple platforms, offers a Very High-Level Language (VHLL), has advanced string and text manipulation techniques and can easily be embedded into Hypertext Markup Language (HTML). The ROR framework is highly automated, which allows the programmer to focus on solving the business problem at hand instead of spending time working around the framework. The generators/scaffolding and plug-in assets accelerate the development process and make maintenance a lot easier compared to PHP. The ActiveRecord ORM in ROR is extremely straightforward to use. Additionally, Rails has integrated testing tools and is object-oriented from the ground up, with a concise and powerful coding structure. Rails also supports caching out of the box, which, contrary to popular belief, makes it easy to scale.

However, unlike PHP, ROR has a comparatively steep learning curve and is not easy to run in production mode. ROR is also less forgiving with errors: in Ruby, instead of throwing up an error message, the entire app can simply blow up.

Having said this, while ROR might not be easy to learn, it has better security features, a flexible syntax, a debugger, and comes off as a more powerful language than PHP. It would not be too far off the mark to say that, while learning Ruby can be difficult, this is a language meant for the ‘thinking developer’ and offers a superior toolset for application development.

ROR is being used by Airbnb, GitHub, Groupon, Shopify, Google SketchUp, Basecamp, SoundCloud, Hulu and others. Rails makes an excellent choice for web apps, highly scalable websites, enterprise applications, and projects that need rapid web development. However, for single-page applications, dynamic content and games, or high-traffic, high-usage platforms like chat rooms, ROR might not be the best choice.

So which one is better – PHP or ROR? To begin with, it wouldn’t be fair to compare the two directly, since Rails is a framework for Ruby while PHP is a language that itself has many frameworks. However, both ecosystems are efficient and powerful in their own right. Sometimes selecting one over the other comes down to personal preference, the availability of skills and the specific business case.

Key considerations on Big Data Application Testing

2016 is emerging as the year of Big Data. Those leveraging big data are sure to surge ahead, while those who do not will fall behind. According to the Viewpoint Report, “76% (of organizations) are planning to increase or maintain their investment in Big Data over 2 – 3 years”. Data emerging from social networks, mobile, CRM records, purchase histories etc. provides companies with valuable insights and uncovers hidden patterns that can help enterprises chart their growth story. Clearly, when we are talking about data, we are talking about huge volumes that run into petabytes, exabytes and sometimes even zettabytes. Along with this huge volume, this data, which originates from different sources, also needs to be processed at a speed that makes it relevant to the organization. To make this enterprise data useful, it has to be presented to users via applications.

As with all other applications, testing forms an important part of Big Data applications as well. However, testing Big Data applications has more to do with verification of the data rather than testing of the individual features. When it comes to testing a Big Data application, there are a few hurdles that we need to cross.

Since data is fetched from different sources, for it to be useful, it needs live integration. This can be achieved by end-to-end testing of the data sources to ensure that the data used is clean, that the data sampling and data cataloging techniques are correct and that the application does not have a scalability problem. Along with this, the application has to be tested thoroughly to facilitate live deployment.

For a tester, the most important thing in testing a Big Data application thus becomes the data itself. When testing Big Data applications, the tester needs to dig into unstructured or semi-structured data with changing schemas. These applications also cannot be tested via ‘sampling’, as data warehouse applications are. Since Big Data applications contain very large data sets, testing has to be done with the help of research and development. So how does a tester go about testing Big Data applications?

To begin with, testing Big Data applications demands that testers verify large volumes of data using clustering methods. The data can be processed interactively, in real time or in batches. Checking the quality of the data is also critically important: it must be examined for accuracy, duplication, validity, consistency, completeness etc. We can broadly divide Big Data application testing into three basic categories:

  • Data Validation:
    Data validation, also known as pre-Hadoop testing, ensures that the right data is collected from the right sources. Once this is done, the data is pushed into the Hadoop system and tallied with the source data to ensure that the two match and that the data is pushed into the right location.
  • Business Logic validation:
    Business logic validation is the validation of “MapReduce”, which is the heart of Hadoop. During this validation, the tester has to verify the business logic on every node and then verify it against multiple nodes. This is done to ensure that the MapReduce process works correctly, that data segregation and aggregation rules are correctly implemented, and that key-value pairs are generated correctly.
  • Output validation:
    This is the final stage of Big Data testing where the output data files are generated and then moved to the required system or the data warehouse. Here the tester checks the data integrity, ensures that data is loaded successfully into the target system, and warrants that there is no data corruption by comparing HDFS file system data with target data.
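
To make the business logic check concrete, here is a minimal sketch of the idea behind MapReduce validation, written in plain JavaScript rather than real Hadoop code: the arrays and Maps below are stand-ins for HDFS splits and reducers. The aggregation produced in a single run must match the result of reducing each partition (each "node") separately and merging the partial results.

```javascript
// Word-count style MapReduce: map emits [word, 1] pairs, reduce sums them.
const map = line => line.split(/\s+/).filter(Boolean).map(w => [w, 1]);

const reduce = pairs => {
  const totals = new Map();
  for (const [word, n] of pairs) totals.set(word, (totals.get(word) || 0) + n);
  return totals;
};

// Merge the partial totals produced by each simulated node.
const mergeTotals = maps => {
  const merged = new Map();
  for (const m of maps)
    for (const [word, n] of m) merged.set(word, (merged.get(word) || 0) + n);
  return merged;
};

const lines = ['big data big tests', 'data pipelines', 'big pipelines'];

// Single-node run: map everything, reduce once.
const singleNode = reduce(lines.flatMap(map));

// Simulated multi-node run: each "node" reduces its own split,
// then the partial results are merged.
const splits = [lines.slice(0, 1), lines.slice(1)];
const multiNode = mergeTotals(splits.map(split => reduce(split.flatMap(map))));

// The two runs must agree on every key-value pair.
for (const [word, n] of singleNode) {
  if (multiNode.get(word) !== n) throw new Error(`mismatch for "${word}"`);
}
console.log('single-node and multi-node aggregations match');
```

In a real Hadoop pipeline the same principle applies: verify the logic on one node, then across nodes, and compare the generated key-value pairs.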

Architecture Testing forms a crucial part of Big Data Testing as a poor architecture will lead to poor performance. Also, since Hadoop is extremely resource intensive and processes large volumes of data, architectural testing becomes essential. Along with this, since Big Data applications involve a lot of shifting of data, Performance Testing assumes an even more important role in identifying:

  1. Memory utilization
  2. Job completion time
  3. Data throughput

When it comes to performance testing, the tester has to take a very structured approach, as it involves testing huge volumes of structured and unstructured data. The tester has to identify the rate at which the system consumes data from different data sources and the speed at which the Map-Reduce jobs or queries are executed. Along with this, testers also have to look at sub-component performance and at how each individual component performs in isolation.

Performance testing a Big Data application requires testers to take a defined approach that begins with:

  • Setting up of the application cluster that needs to be tested.
  • Identifying and designing the corresponding workloads.
  • Preparing individual custom scripts.
  • Executing the test and analyzing the results.
  • Re-configuring and re-testing components that did not perform optimally.
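
As a simple illustration of the first three metrics, the sketch below times a workload and derives throughput from it. This is plain JavaScript, and `processBatch` is a hypothetical stand-in for a real Map-Reduce job or query, not part of any actual tool.

```javascript
// Hypothetical workload: parse each record and aggregate a field.
function processBatch(records) {
  return records.reduce((sum, r) => sum + JSON.parse(r).value, 0);
}

// Measure job completion time and derive throughput (records/sec).
function measure(records) {
  const start = process.hrtime.bigint();
  const result = processBatch(records);
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  return {
    result,
    elapsedMs,
    throughput: records.length / Math.max(elapsedMs / 1000, 1e-9)
  };
}

const workload = Array.from({ length: 10000 }, (_, i) =>
  JSON.stringify({ value: i % 10 }));
const stats = measure(workload);
console.log(`completed in ${stats.elapsedMs.toFixed(1)} ms, ` +
            `${Math.round(stats.throughput)} records/sec`);
```

A real performance test would run the same probe at increasing load levels and against each sub-component in isolation, as described above.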

Since testers are dealing with very large data sets that originate from hyper-distributed environments, they need to make sure that they can verify all this data quickly, which means automating their testing efforts. However, most automation testing tools are not yet capable of handling the unexpected problems that can arise during the testing cycle, and no single tool can perform the end-to-end testing. Automating Big Data application testing therefore requires technical expertise, strong testing skills, and domain knowledge.

Big Data applications hold much promise in today’s dynamic business environment. But to realize their benefits, testers have to employ the right test strategies, improve testing quality and identify defects in the early stages, to deliver not only on application quality but on cost as well.

Using AngularJs For Enterprise Apps

It is the hottest buzzword in the web application development space. It has been called the “superheroic JavaScript framework”. It is very popular amongst developers – it has the highest number of contributions in the GitHub community. It is built and maintained by Google engineers. No wonder AngularJS is creating a buzz in the industry today. In our earlier blog post, we cited 8 reasons behind the enormous popularity of AngularJS. Rapid development, easier code maintenance, development flexibility, mobility, and collaboration support are some of the many reasons why organizations have started seriously considering AngularJS for their web application needs.

One of the questions that keeps doing the rounds is whether AngularJS is suitable for large-scale enterprise applications that involve multiple pages, require faster loading, handle huge amounts of data, require fast performance, and need to be secured. Looking at the industry, quite a few popular enterprise applications have already leveraged this framework and are seeing good results. For example, Google’s DoubleClick platform, which serves millions of ads per day, uses AngularJS as a front end. IBM’s MobileFirst Platform Foundation (earlier known as IBM Worklight) also uses an AngularJS front-end interface to ensure that the look and feel is consistent between the web and mobile clients.

Of course, like any other technology framework, you need to use it smartly and appropriately to get the most from it. Here are a few things to keep in mind while developing enterprise applications using AngularJS –

Code Base Structure and Conventions
AngularJS does not impose very strict conventions. In their absence, each developer has the flexibility to adapt and use the framework in his or her own style. AngularJS is loosely based on the traditional MVC pattern. Since there is not much documentation on conventions, while building a large-scale enterprise application it is important that the development team defines and agrees on a style guide covering the conventions to follow. The style guide should give developers guidance on all aspects of development, such as splitting of controllers, placement of directives, reuse of services, management of libraries, and so on. One recommendation for designing a maintainable code structure is to follow a modular approach. Modules are identified on the basis of functionality, which helps developers both during development and during the maintenance phase of a project.

Understanding $scope
$scope is one of the most essential and important aspects of AngularJS. Building a large-scale application with AngularJS requires a detailed understanding of all the intricacies of the $scope life cycle. AngularJS’s two-way data binding allows every $scope to have $watch expressions. For enterprise applications, where performance is of paramount importance, it is imperative to keep a close eye on the $watch expressions and their linkages with the $scope. A few tips while working with $scope –

  • $scope should always contain necessary information that is required by the view.
  • Avoid unnecessarily filling up $scope, because doing so may degrade performance.
  • Remember that $scope follows prototypal inheritance, which may lead to conflicting information, so take special care while working with $scope.
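
To see why a bloated $scope hurts, here is a deliberately simplified model of Angular-style dirty checking. This is not AngularJS internals, just an illustration of the mechanism: every $watch expression is re-evaluated on each digest pass, so the work grows with the number of watchers and the number of passes needed for the model to settle.

```javascript
// A toy scope: $watch registers a getter + listener, $digest loops
// over all watchers until no value changes (the model "settles").
function makeScope() {
  const watchers = [];
  return {
    model: {},
    $watch(getter, listener) {
      watchers.push({ getter, listener, last: undefined });
    },
    $digest() {
      let evaluations = 0;
      let dirty = true;
      while (dirty) {           // keep looping until nothing changed
        dirty = false;
        for (const w of watchers) {
          evaluations++;
          const value = w.getter(this.model);
          if (value !== w.last) {
            w.listener(value, w.last);
            w.last = value;
            dirty = true;
          }
        }
      }
      return evaluations;       // total work: watchers × passes
    }
  };
}

const scope = makeScope();
scope.model.name = 'Ada';
scope.$watch(m => m.name, v => console.log('name is now', v));
const evals = scope.$digest();
console.log('watch evaluations this digest:', evals);
```

Even in this toy version, one watcher costs two evaluations per digest (one to detect the change, one to confirm the model settled); a $scope carrying hundreds of unnecessary watchers multiplies that cost on every digest cycle.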

Third Party Integrations and Ecosystem
Enterprise applications need to use several third-party libraries for a variety of functionalities. If not handled correctly, AngularJS can create a lot of problems during integration. With its own way of using the $digest loop, AngularJS, by default, is unaware of changes made to the DOM by third-party libraries. In such situations, developers need to take care of manually kicking off a $digest loop. On the other hand, AngularJS offers a lot of plugins, but those plugins and libraries are not easily accessible at a central location; developers therefore tend to reinvent the wheel and either build the plugin on their own or look for some third-party library. Creating and documenting a widget library collection that lists all the required widgets and libraries makes it easier to identify the right plugins and libraries and makes development smoother.

File Management
Since AngularJS does not have any defined conventions for file naming and storage, it is recommended that, while developing an enterprise application, the guidelines and conventions for file system organization and management be properly defined and finalized before development begins. While there are multiple ways of organizing the files, organizing them according to features is increasingly preferred by developers.

With both the hype and the adoption around AngularJS, enterprises today have no option but to adapt to the fast-moving changes within modern frameworks like AngularJS and move towards making their web applications powerful, scalable, usable, and yet simple.

Top 5 Secrets to Bug Hunting Success in Software Testing

What is a bug in software?

A bug is a defect present in the software that obstructs its desired performance. The defect could be a mistake in coding, an error in design, a faulty requirement and more. The presence of bugs continuously degrades software quality.

How to remove bugs?

A testing team is deployed to locate bugs so that they can be removed. Testers build test strategies and test cases to identify bugs. In the era of agile development, the tester’s role has become about more than catching bugs; their work is no longer limited to testing alone. Testers work closely with clients and development teams so as to attain the maximum quality of the software in the minimal time. They act as the representatives of end users and clients, developing a better and deeper understanding of the quality improvements those users actually need.

Testers with acquired knowledge, skills and experience know how to perform their job efficiently, but a good tester is always hungry for anything related to testing. Here are our top 5 secrets to achieving surefire success in the bug hunting process.

  • Explore beyond the rules: It is impossible to make 100% bug-free software, given the impracticality of covering all facets of the software. However, a good test strategy along with effective test cases keeps your target of bug-free software close to 100%. But a good tester should not always stick to the test strategy and follow it mechanically: this can make you the victim of inattentional blindness and narrow your range of thinking. Testers should try to explore more feasible scenarios by thinking beyond these strategies. Along with the test cases, a good tester should consistently try to explore more of the functionality under test.
  • Pattern study: Bugs are social in nature; they tend to reside in groups, affecting the same place, feature or functionality again and again. Regular monitoring of the bug-catching mechanism and reuse of past test ideas can assist in locating bugs by evaluating the pattern of occurrence of bugs affecting similar functionality. The information derived from this evaluation makes developers aware of, and helps them avoid, the mistakes that cause bugs.
  • Quick attacks: When you have little or no prior knowledge and understanding of the software, you cannot gather requirements, which are a prerequisite for the formal preparation of test strategies, plans, and documentation. Instead of waiting for the requirements, quick attacks may be made on the software by deliberately doing wrong or inappropriate things. Quick attacks can include:
    • Leaving blank a field that is mandatory to fill.
    • Typing words into a field that requires numbers only.
    • Typing very long strings or very large numbers into a field, to check how gracefully the software handles them.
  • Hit the bug hard: Discovering a bug may feel like a success. You may be tempted to record it in the bug-tracking report and move on, but wait – this is just the beginning, not the end. The presence of a bug indicates potholes in the software, confirming its unstable nature. Testers should take advantage of this instability and try to stress the software by feeding it infeasible inputs, cutting down its resources, and so on. This may reveal more harmful bugs.
  • Be open to taking help from colleagues: No one is perfect; everyone needs help. Even the most skilled tester cannot discover the bugs in every case alone. Where a tester working alone is unable to locate bugs, he or she should feel free to collaborate with colleagues and share ideas and views on bug hunting. This can generate several ideas and effective solutions for any tricky case.
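
The quick attacks listed above can even be scripted as data-driven checks. The sketch below is plain JavaScript; `validateAge` is a hypothetical stand-in for the input handling of the application under test, and each attack pairs a hostile input with the rejection we expect to see.

```javascript
// Hypothetical input handler for a mandatory numeric "age" field.
function validateAge(input) {
  const trimmed = input.trim();
  if (trimmed === '') return { ok: false, error: 'required' };
  if (!/^\d+$/.test(trimmed)) return { ok: false, error: 'not a number' };
  if (trimmed.length > 3) return { ok: false, error: 'too large' };
  return { ok: true, value: Number(trimmed) };
}

// Each quick attack and the rejection it should trigger.
const attacks = [
  { input: '',              expect: 'required' },     // blank mandatory field
  { input: 'abc',           expect: 'not a number' }, // words where numbers belong
  { input: '9'.repeat(500), expect: 'too large' }     // oversized input
];

for (const { input, expect } of attacks) {
  const result = validateAge(input);
  if (result.ok || result.error !== expect) {
    throw new Error(`attack "${input.slice(0, 10)}" was not rejected as ${expect}`);
  }
}
console.log('all quick attacks correctly rejected');
```

The point is not the validator itself but the habit: hostile inputs are cheap to enumerate and run long before formal requirements and test plans exist.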


Testing is a large domain that not only encompasses various approaches and strategies for delivering a bug-free product, but also empowers testers to use their skills and experience to reveal the defects in a software product in any possible way. Even so, a tester may consider the noteworthy points above to make the testing process easier and more meaningful.

Key Considerations in Cloud Application Testing

Cloud applications – applications developed ‘on’ the cloud or developed ‘for’ the cloud – are very different from traditional web applications. The biggest advantages of cloud applications are that they are cost efficient and scalable and are built using more modern technologies such as CSS3, HTML5, jQuery and JavaScript. At the same time, cloud applications have to be multi-tenant, highly configurable, secure and fault-tolerant, and must provide business advantage. This suggests that testing cloud applications is very different from testing traditional applications.

At a high level, testing cloud applications consists of validating the applications against data, business workflows, compliance, network/application security, performance, scalability and compatibility to build robust applications. Unlike web application testing, cloud testing remains relatively unaffected by versioning, server installation, multi-platform testing or backward compatibility. The focus here is more on security, SLA adherence, deployment, access, interfaces between components and failovers.

In this blog, we take a look at some key areas that testers have to give special consideration to when testing cloud applications, many of which stem from the infrastructural nature of the cloud.

  1. Performance Testing
    Since cloud applications run on shared hardware that testers have no control over, performance testing of the application and of the required scalability becomes essential. Running load tests on the application and the shared resources simultaneously thus becomes imperative, to evaluate whether the performance of the application is impacted in any way. Testers also need to evaluate response times, latency, response codes, errors, deviations etc. and isolate the issues that cause a performance dip under increasing loads or multi-user operations. Testing also has to take into consideration the number of concurrent users accessing the application from multiple geographical locations.
  2. Security Testing
    Since cloud applications share infrastructure and resources, testers need to perform a high degree of security testing to ensure data integrity and security. Testers thus need to implement security testing in the form of SQL injection tests, cookie testing, cross-site scripting checks, multi-tenant isolation, and access validation for roles and application data. They need to address accessibility concerns by performing multi-privilege tests and access control tests to ensure that one tenant’s data cannot be accessed by another. Security testing also plays a very important role in ensuring compliance with government standards. Considering that the infrastructure on which the application is hosted is owned and managed by someone else, testers need to create security tests within, across and outside the cloud infrastructure and the application itself to ensure the security of business data and the application.
    Testers also need to consider testing the network to control access, sensitive data flow, and encryption along with testing the network bandwidth to ensure data availability and its transfer from the cloud application to the network.
  3. Third-party dependencies
    Testers need to test third-party dependencies, since cloud applications are likely to consume external APIs and services to provide certain functionalities. Testers thus need to monitor and test these APIs as part of their own solution, to make sure the application functions in the manner that it should and to identify any associated performance issues.
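
As an illustration of the multi-tenant isolation checks mentioned under security testing, here is a minimal sketch of the property being asserted: a query issued as one tenant must never return another tenant's rows. This is plain JavaScript with an in-memory array standing in for the real data store; `fetchRecords` is a hypothetical, correctly scoped data-access function.

```javascript
// Stand-in "table" holding rows for two tenants.
const rows = [
  { tenant: 'acme',   id: 1, secret: 'acme-plan' },
  { tenant: 'acme',   id: 2, secret: 'acme-keys' },
  { tenant: 'globex', id: 3, secret: 'globex-plan' }
];

// Correctly scoped access: every query is filtered by the caller's tenant.
function fetchRecords(callerTenant) {
  return rows.filter(r => r.tenant === callerTenant);
}

// The isolation test: for every tenant, assert no foreign rows leak through.
function assertIsolation(tenants) {
  for (const tenant of tenants) {
    for (const row of fetchRecords(tenant)) {
      if (row.tenant !== tenant) {
        throw new Error(`tenant ${tenant} can read ${row.tenant}'s data`);
      }
    }
  }
  return true;
}

console.log('isolation holds:', assertIsolation(['acme', 'globex']));
```

In a real suite, the same assertion would be driven through the application's actual API with credentials for each tenant and role, which is exactly what the multi-privilege and access control tests described above do.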

Testing of cloud applications has to be a proactive process, considering the frequent upgrades and releases – especially live upgrades, interface upgrades etc. – that are made to the application. Testers need to ensure that new changes do not impact the existing functionality of the application, and that changes are validated promptly without causing performance bottlenecks. Since the software teams developing cloud applications move fast, testing needs to be more organized, documented and defined. Hence, a detailed testing plan that defines the scope of testing, the elements that need to be tested and the test definitions is essential to producing quality releases and delivering fool-proof applications.


There is no escaping cloud services in today’s business environment – more and more applications will be built with the cloud in mind, and testing services look set to change as a result. For those reading this post – how has the cloud impacted your testing practices?

Considerations for Automating Mobile UI Testing

What is a successful application? Our view is that a successful application is one that responds to the user in a functionally correct manner and satisfies his or her needs. At the same time, the application has to be simple, bug-free and easy to use. Given that the affinity towards mobile apps is growing, apps that have a killer user interface and easy usability are the ones surviving the mobile app tsunami.

This brings us to the topic of UI testing. UI testing of mobile applications verifies that the user interface of the application works in the desired manner. This type of testing has to make sure that the menu bars, icons, and buttons designed to make the app fun and easy to use behave correctly. Quite obviously, testing each of these aspects manually is a time-consuming, expensive and sometimes even tedious process. This is where automation comes into play, presenting an opportunity to eliminate the need to manually verify every aspect of the user interface and document all the errors noted.

Testing mobile applications – any form of testing – is a relatively more complex process than testing web or desktop applications. To begin with, desktop applications are usually tested against one dominant platform. In the case of mobile apps, there is no one dominant platform. Mobile apps, unless otherwise specified, have to be tested on iOS, Android (and its various versions) and now perhaps Windows platforms too. Along with this, the different device form factors and device diversity make mobile app testing a far more extensive process. For example, the official Android device gallery has over 60 devices of various screen sizes, form factors, and resolutions. As mobile apps become more sophisticated, the need for deeper testing increases. So just as we automate unit tests and functionality tests to ensure that the app ‘functions’ like it should, we can also automate UI tests to ensure that the app ‘looks’ the way it should.

  1. UI validation
    A mobile UI needs continuous validation during the building stage. It is essential to check the size, colors, positions etc. of every single artefact on every individual view. Doing this exercise manually can be extremely tedious. Sometimes it can be tempting to skip the views that seem simple or were developed a while ago and move on to the most recent work. Automating the process of collecting snapshots of the preview images becomes very helpful for discovering UI mistakes and delivering greater value.
  2. Considerations for UI Test Automation
    There are certain aspects of UI tests that lend themselves to automation easily. However, it also has to be understood that describing tests only at the technical level can lead to brittle tests. Changes in workflows, requirement changes etc. have to be considered when writing tests for mobile applications. Considering that business rules change quite frequently as per user requirements, it makes sense to write UI tests that sit closer to the business rules and describe what has to be done with as much clarity and efficiency as possible.
  3. Selecting a UI Automation Tool
    Testers have to make sure that the test automation tool they employ tests not only the latest OS versions but also earlier versions. For example, when testing an Android application, the test automation tool should ideally test the app UI not only on the latest Android version but also on the earlier versions and sub-versions of Android. A test automation tool should allow testers to develop additional program modules that can be utilized during the late development cycles. Testers should also look for testing tools that do not have to deal with the source code, to reduce test complexity. Test automation becomes a tedious process during UI testing if testers have to write scripts for each individual device and the tests have to be adjusted each time the UI of the tested program is altered.
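
The snapshot idea from point 1 can be sketched as follows: record each view's artefact properties (size, colour, position) once as a baseline, then diff later captures against it instead of eyeballing every screen. This is plain JavaScript, and the objects are hypothetical stand-ins for what a real UI driver would capture from the device.

```javascript
// Compare a freshly captured view against its stored baseline and
// report every artefact property that drifted.
function diffSnapshot(baseline, current) {
  const issues = [];
  for (const [artefact, expected] of Object.entries(baseline)) {
    const actual = current[artefact];
    if (!actual) {
      issues.push(`${artefact}: missing from current view`);
      continue;
    }
    for (const prop of Object.keys(expected)) {
      if (actual[prop] !== expected[prop]) {
        issues.push(`${artefact}.${prop}: expected ${expected[prop]}, got ${actual[prop]}`);
      }
    }
  }
  return issues;
}

// Hypothetical baseline captured when the view was signed off.
const baseline = {
  loginButton: { width: 120, height: 44, color: '#0088cc' },
  menuBar:     { width: 320, height: 56, color: '#ffffff' }
};

// Latest capture: the login button's height has silently regressed.
const current = {
  loginButton: { width: 120, height: 40, color: '#0088cc' },
  menuBar:     { width: 320, height: 56, color: '#ffffff' }
};

const issues = diffSnapshot(baseline, current);
console.log(issues);
```

Run on every build, a diff like this catches the "simple, done long ago" views that manual checking tends to skip.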

During UI testing, testers have to make sure that the automation tool they choose allows them to reproduce complex sequences of user actions. Clearly, selecting the right test automation tool is a great contributor to UI testing success. Using the right test automation tools will enable testers to focus on the testing system, spot defects and regression errors to ensure that new code does not break the current functionality, and fix broken tests and adapt them to the testing system. It will also help them generate better, more detailed and informative reports and ensure that testers spend more time in ‘testing’ and less time in setting up the tests.

Automating UI testing does require an initial investment. However, the ROI is realized when the same test is applied time and again at a negligible incremental cost. When testing the UI, testers need to ensure that features that may change in the immediate future in terms of UI flow, need real-time data from multiple sources, or have technical challenges are ideally not considered for automation, as these add to the testing costs. Being judicious about what to test, considering connectivity options and target devices, and using Wi-Fi networks in combination with network simulation tools all work towards making automated mobile UI testing time-efficient and cost-effective.

All said and done, UI testing for mobile apps is challenging, but it is also clear that there is real utility in deploying it selectively for specific purposes.

Why TechWell’s StarWest and StarEast are Becoming my Favorite Conferences?

By Rajiv Jain (CEO, ThinkSys)

“No great idea was ever born in a conference, but a lot of foolish ideas have died there.” – F Scott Fitzgerald

Talk about damning something with faint praise. I am not sure what conferences Fitzgerald attended – I myself have been to more than a few, many focused on software testing, and with some admittedly mixed results. This post, though, is not about the events that didn’t work; it’s about two events that most definitely do work – TechWell’s StarWest and StarEast conferences. I got back a couple of weeks ago from StarEast in Orlando, and now that I am done catching up with post-event follow-ups, it seemed right to put this down.
[Image: Rajiv Jain representing ThinkSys at StarEast]
First, as you know, there is no dearth of testing-focused conferences. Googling “software testing conferences” spits out over 67.3 million page results. Clearly, we software testing pros like talking shop. Among those results you will find some good ones and some extremely nondescript ones, but conventional wisdom has been that the stars of the show are TechWell’s StarEast and StarWest (yes, I know I’m not the first one with that pun). So convinced are we of the utility of these events that ThinkSys elected to be a Platinum Sponsor at the most recent edition of both conferences. So what drives this confidence?

1. The Speakers and the Sessions:
I can’t think of any other place that can boast such all-star line-ups, including speakers like James Bach, Michael Bolton, and Joe Colantonio among a host of other gurus and expert-level users. The range of topics covered is also staggering – everything from testing and test automation best practices to the emerging trends not yet fully visible as they come down the turnpike. There is so much to learn from. This year I particularly liked Isabel Evans’ StarEast keynote, “Telling our Testing Stories”. Her thoughts on the miscommunication between IT and the rest of the world resonated with me. I agree that we have to present our work as a testing organization in words and stories that are understood outside of our group if we are to show the value of our work.

2. The people:
The range the conferences span is vast – everybody from a junior tester looking to build a shining testing career to those tasked with software quality in some of the biggest companies in the world attends. As a company, access to such a highly targeted audience is extremely helpful. Over the two events this year, the companies we met spanned the entire QA spectrum – from those focused on doing manual testing well to organizations seeking to make QA an integral part of the product planning process, involved from concept to delivery – a change we see happening more and more.

3. A bellwether:
This is the place to be if you want to get a sense of where the QA world is going in the near future and even beyond. The speaker sessions, and even the conversations over coffee, give you so much information to chew over. For instance, our meeting with the TechWell program chair, Lee Copeland, was quite interesting. He is seeing a change in the role of QA and in the skills needed today, driven by the adoption of Agile and continuous integration. We fully expect these trends to be transformational for testing. Testing could become a very different organization from what it is today.

4. A platform:
In many ways, this is where we get the real ROI as a sponsor. Our objective, obviously, is to get in front of our target audience and showcase who we are and what we have to offer. Over the two events this year we focused on our test automation tool, Krypton. I have to say that overall Krypton was quite well received. As a new tool, what was perhaps even more valuable for us was the feedback we got from those who saw what it could do. Many people liked our decision to take the open source route with this tool, and many had valuable suggestions to offer on features, usability, and so on. It does appear that they would like to see the tool made more visible and marketed more openly. On a personal note, I got an opportunity to talk about something close to my heart – what makes test automation projects fail. I have long believed that it is essential to know what could go wrong before you can work to ensure that everything will go right. The response to my talk on the subject was, frankly, quite overwhelming. At StarEast, the room was overflowing with attendees, and the feedback during the talk, and even later from people who stopped by the ThinkSys booth, was extremely gratifying. Clearly, this is something I should do more often!


In closing, let me share something that Ryan Holmes said about SXSW, but which is accurate enough about our conferences in general too: “One of the ironies of a conference dedicated to all things digital and virtual is that the best ways to connect with people are surprisingly old-school. Social media tools can improve the odds of a serendipitous encounter at SXSW, but old-fashioned hustle, palm-pressing and – above all – creativity go a long way.” I have come to believe that in the software testing world the place to meet people is StarWest (and StarEast). Assuming you agree, will you be at Anaheim come October?

Things One Should Know While Testing an eCommerce Website

On observing the fast-changing retail landscape, Jeff Jordan of Andreessen Horowitz said: “We’re in the midst of a profound structural shift from physical to digital retail.” Today, eCommerce is a $341 billion industry growing at approximately 20% each year. According to Forrester, eCommerce sales are estimated to cross $414 billion in 2018.
While the popularity of eCommerce continues, the past two years have also been witness to many high-profile website glitches and crashes, especially during the high-traffic holiday season. From Walmart, Belk, and Tesco to the mighty Amazon, all have experienced major glitches at crucial times, resulting in heavy losses, disgruntled customers, and some serious media bashing.

The expectations that people have of eCommerce websites are increasing. Today, eCommerce websites need not only to look great but also to be user-friendly, easily navigable, and quick to load. Thus eCommerce website testing has emerged as a crucial component of eCommerce business success. The need is to ensure that all parts of the website function in harmony and that performance and security issues do not lead to bad press. For this to happen, testing cannot be treated as an afterthought and should ideally be built into the project from the very beginning. In this blog, we identify what to test when testing an eCommerce website.

  1. Product Page and Shopping Cart Behavior:
    While testing an eCommerce website, the product page is the equivalent of the shelving and the goods available in a brick-and-mortar store. Thus testing that products display correctly on the site is a given. Considering that the product page shows a lot of information, such as product image, description, specifications, and pricing, it is critical that all this information and the associated value proposition display correctly whenever a customer visits. Additionally, you have to check that the shopping cart adds products and computes the associated pricing correctly. Testers need to add multiple items to the shopping cart and then remove them to see that the price changes along the way are correct. You also have to ensure that special deals, vouchers, coupons, etc. process correctly all the way through checkout. Further, testers need to ensure that the cart remembers the stored items when the browser is closed suddenly and then restarted.
  2. Category page:
    Category pages have a lot to convey, so testers need to pay a great deal of attention to them and must ensure that sorting, filtering, and pagination work correctly – which makes testing the search result page (SRP) essential for eCommerce success. Considering that the search form is present on most pages, testers must make sure that when a customer lands on the SRP, the relevant products, product information, and the number of products per page display correctly, that all remaining items appear on the next page, and that there are no duplicates across pages. Ensuring that the sorting parameters work correctly, and that the sort order remains as chosen even when you paginate, is important. Further, testers need to pay close attention to filtering options and ensure that filtering and pagination work harmoniously. Finally, testers have to check that when sorting and filtering options are both applied, they remain in effect as the customer paginates or when new products are added.
  3. Forms and Account creations:
    Optimizing forms in eCommerce with the help of testing can help increase conversion rates. Since forms are a key touchpoint – whether signing up for the site newsletter, creating an account, or checking out – testers need to make sure that these forms function correctly and that the information gathered in them is stored, displayed, and used correctly. If the customer creates a new account, then testers need to check the login behavior and ensure that the customer is connected to the right account. Testers also need to check the login and logout sessions and the login redirects, and finally check that purchased items get added to the correct account. If the customer proceeds to checkout as a guest, then testers need to make sure that the customer gets the option to create an account once the order is placed.
  4. Checkout and Payment Systems:
    Testing checkouts and payment systems is critical to the success of an eCommerce site. Lengthy checkout procedures and complicated payment systems increase the chances of shopping cart abandonment. Testers need to ensure that the checkout and payments process is smooth and glitch-free. For this, they should take a close look at the checkout process to confirm that the final amount payable is correct after all other charges, such as VAT and delivery charges, are levied. Testers also need to check that updates after alterations – such as changes to the products being ordered or a change of delivery address – reflect correctly. Testers need to exercise the payment systems using each payment method on offer: debit cards, credit cards, PayPal, and mobile payment options should all be checked individually to verify that the systems work correctly and that confirmation emails are sent correctly.
  5. Performance and Load testing:
    Testers need to place special emphasis on performance and load testing of eCommerce sites. Almost 18% of shopping carts are abandoned because of slow eCommerce websites. For an eCommerce website that makes $100,000 per day, a one-second page delay could potentially cost $2.5 million in lost sales each year. Testers also need to make sure that the website can handle high traffic, and assess whether the website performs without slowing down when 200 people, rather than two, log in simultaneously.
    Apart from this, testers need to check web browser compatibility, ensure that cookies are audited, and verify that there are no broken links. Further, they need to check the website for mobile device compatibility, considering that seven out of ten users access eCommerce websites from their mobile devices.
  6. Security Testing:
    Testers need to focus on security testing to safeguard customer data and ensure that customers’ privacy is not compromised. Testers thus need to perform penetration testing, check access control, and look for insecure information transmission, web attack vectors, weak digital signatures, etc. They also have to ensure that the application handles incoming and outgoing data securely, using penetration tests to identify vulnerabilities that could cause a security breach and jeopardize client information.
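The cart and checkout arithmetic described in points 1 and 4 lends itself well to automated checks. Below is a minimal sketch in Python; the `Cart` class, its VAT rate, and its delivery fee are hypothetical stand-ins for illustration, not a real store’s API.

```python
class Cart:
    """Hypothetical shopping cart used to illustrate price checks."""
    VAT_RATE = 0.20        # assumed 20% VAT
    DELIVERY_FEE = 5.00    # assumed flat delivery charge

    def __init__(self):
        self.items = {}    # name -> (unit_price, quantity)

    def add(self, name, unit_price, quantity=1):
        _, qty = self.items.get(name, (unit_price, 0))
        self.items[name] = (unit_price, qty + quantity)

    def remove(self, name):
        self.items.pop(name, None)

    def subtotal(self):
        return sum(price * qty for price, qty in self.items.values())

    def total(self, coupon_discount=0.0):
        # total = (subtotal - coupon) + VAT + delivery
        discounted = max(self.subtotal() - coupon_discount, 0.0)
        return round(discounted * (1 + self.VAT_RATE) + self.DELIVERY_FEE, 2)


# The kind of checks a tester would automate:
cart = Cart()
cart.add("shirt", 20.00, 2)    # 40.00
cart.add("mug", 8.50)          # 8.50
assert cart.subtotal() == 48.50

cart.remove("mug")             # the price must update after removal
assert cart.subtotal() == 40.00

# 10.00 coupon, then 20% VAT on 30.00 = 36.00, plus 5.00 delivery = 41.00
assert cart.total(coupon_discount=10.00) == 41.00
```

The same pattern extends naturally to vouchers, multi-currency pricing, or address-dependent delivery charges: encode the expected arithmetic once and let the test suite re-verify it on every build.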


Testing an eCommerce website requires careful planning, meticulous execution and an eye for detail. However arduous this task may seem, it is a critical element that contributes significantly to the success of an eCommerce website.
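The load concern in point 5 above – that a site serving two users must also serve 200 – can be sketched as a simple concurrency smoke test. The `FakeEndpoint` below is a hypothetical stand-in for an HTTP request handler; a real load test would drive the live site with a dedicated tool such as JMeter or Locust.

```python
import threading

class FakeEndpoint:
    """Hypothetical stand-in for a product-page request handler."""
    def __init__(self):
        self.lock = threading.Lock()
        self.served = 0

    def handle(self):
        # Simulate serving one request; the lock keeps the counter accurate
        # under concurrent access.
        with self.lock:
            self.served += 1

endpoint = FakeEndpoint()
threads = [threading.Thread(target=endpoint.handle) for _ in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All 200 simultaneous "users" must be served, none dropped.
assert endpoint.served == 200
```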

Software testing for Microservices Architecture

Over the last few years, microservices have slowly but surely made their presence felt in the crowded software architecture market. The microservices architecture deviates from the traditional monolithic approach, where the application is built as a single unit. While the monolithic architecture is quite sound, frustrations around it are building, especially as more and more applications are deployed in the Cloud. The microservices architecture has a modular structure: instead of plugging together components, the software is componentized by breaking it down into services. Applications, hence, are built as a suite of services that are independently deployable and scalable, with the flexibility for different services to be written in different languages. Further, this approach enables parallel development across multiple teams.
Quite obviously, the testing strategy that applied to monoliths needs to change with the shift to microservices. Considering that applications built on a microservices architecture are expected to deliver on both functionality and performance, testing has to cover each layer of a service, and the interactions between layers, while remaining lightweight. However, because of the distributed nature of microservices development, testing can often be a big challenge. Some of the challenges faced are as follows:

  • The inclination of testing teams to use Web API testing tools built around SOA testing, which can prove to be a problem
  • Since the services are developed by different teams, timely availability of all services for testing can be a challenge
  • Identifying the right amount of testing at each point in the test life cycle
  • Complicated log extraction during testing and data verification
  • Considering that development is agile and teams work independently, the availability of a dedicated, integrated test environment can be a challenge

Mike Cohn’s Testing Pyramid can help greatly in drawing up the test strategy and identifying how much testing is required at each level. Following this pyramid – taking a bottom-up approach to testing and factoring in the automation effort required at each stage – can help address the challenges mentioned above.

  1. Unit Testing
    The scope of unit testing is internal to the service, and tests are written around a group of related cases. Since unit tests are the most numerous, they should ideally be automated. Unit testing in microservices has to combine sociable unit testing and solitary unit testing: checking the behavior of modules by observing changes in their state, and also looking at the interactions between an object and its dependencies. However, testers need to ensure that while unit tests constrain the ‘behavior’ of the unit under test, they do not constrain its ‘implementation’. They can do so by constantly weighing the value of each unit test against its maintenance cost or the cost of the implementation constraint it imposes.
  2. Integration Testing
    While testing the modules in isolation is essential, it is equally important to verify that each module interacts correctly with its collaborators, and to test them together as a subsystem to identify interface defects. This can be done with the help of integration tests. The aim of an integration test is to check how the modules interact with external components by exercising the success and error paths through the integration module. Conducting ‘gateway integration tests’ and ‘persistence integration tests’ provides fast feedback by identifying logic regressions and breakages between external components, which ultimately helps in assessing the correctness of the logic contained in each individual module.
  3. Component testing
    Component testing in microservices demands that each component is tested in isolation, replacing external collaborators with test doubles and using internal API endpoints. This gives the tester a controlled testing environment, helps them drive the tests from the customer’s perspective, allows comprehensive testing, improves test execution times, and reduces build complexity by minimizing moving parts. Component tests also verify that the microservice has the correct network configuration and is capable of handling network requests.
  4. Contract Testing
    The above three kinds of test provide high coverage of the modules. However, they do not check whether the external dependencies support the end-to-end business flow. Contract testing exercises the boundaries of external services to check the inputs and outputs of the service calls, and to verify that each service meets its contract expectations. Aggregating the results of all the consumer contract tests helps maintainers make changes to a service, if required, without impacting its consumers, and also helps considerably when new services are being defined.
  5. End-to-End Testing
    Along with testing the services, testers also need to ensure that the application meets its business goals irrespective of the architecture used to build it, and test how the completely integrated system operates. End-to-end testing thus forms an important part of the testing strategy in microservices. Apart from this, considering that there are several moving parts behind the same behavior in a microservices architecture, end-to-end tests identify coverage gaps and ensure that business functions are not impacted during architectural refactoring.
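The consumer-driven contract idea from point 4 can be sketched as a check that a provider’s response still contains every field, with the right type, that a consumer depends on. The contract, the service name, and the sample responses below are all hypothetical; real-world teams typically use a framework such as Pact for this.

```python
# Consumer-driven contract: the fields this consumer reads from the
# (hypothetical) order service, and their expected types.
ORDER_SERVICE_CONTRACT = {
    "order_id": str,
    "status": str,
    "total": float,
}

def satisfies_contract(response: dict, contract: dict) -> bool:
    """True if every contracted field is present with the expected type.
    Extra fields are allowed: providers may add data without breaking consumers."""
    return all(
        field in response and isinstance(response[field], expected_type)
        for field, expected_type in contract.items()
    )

# In a real setup these responses would come from the provider's test build.
good_response = {"order_id": "A-1001", "status": "shipped", "total": 41.00,
                 "warehouse": "east"}          # extra field is fine
bad_response = {"order_id": "A-1001", "status": "shipped"}  # 'total' was dropped

assert satisfies_contract(good_response, ORDER_SERVICE_CONTRACT)
assert not satisfies_contract(bad_response, ORDER_SERVICE_CONTRACT)
```

Running such checks in the provider’s build means a maintainer learns that a change breaks a consumer before deployment, not after.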

Testing in microservices has to be more granular and yet, at the same time, avoid becoming brittle and time-consuming. For a strong test strategy, testers need to define the services properly and give them well-defined boundaries. Given that the software industry is leaning heavily towards microservices, testers of these applications might need to change processes and implement tests directly at the service level. By doing so, not only will they be able to test each component properly, they will also have more time to focus on the end-to-end testing process once the application is integrated, and so deliver a superior product.

Role of DevOps in QA

DevOps, a portmanteau of development and operations, is an organizational strategy focused on close collaboration and communication between the software development team and the professionals on the testing and release teams. The approach employs automated processes in a symbiotic environment, which ultimately results in building, testing, and releasing software with clockwork regularity and reliability.

How does QA benefit from DevOps:

Some ten years back, QA was seen as a group disparate from the developers’ teams, with different skill sets, responsibilities, and management. Fast forward to the DevOps age, and things are quite different today. Here’s how we look at QA through the lens of DevOps:

1. Automated Deployment:

The conventional approach of a software “release” is now passé, with DevOps facilitating delivery of the product to market on a monthly, weekly, and even hourly basis through automated processes. This has been made possible through a continuous cycle of improvement, with developers, testers, and operations people all working in sync and moving in the same direction.

2. Environment is now a part of the product:

Traditionally, here’s how the flow used to work: you create software, get it verified in the QA team’s testing environment and, when the litmus test is passed, so to speak, unleash it into the big bad world of the user. If anything then went wrong, it was the operations team’s problem. Not anymore. Now QA verifies the environment itself, with the code being the chief enabling infrastructure. When any change or problem occurs, the QA team initiates the requisite deploys, verifies that the intended change functions as expected, and moves over to the latest deployed code, with the option of a rollback if needed.
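The deploy-verify-rollback loop described above can be sketched as follows. Everything here – the health check, the deploy and rollback hooks, and the version labels – is a hypothetical stand-in for real infrastructure, not a particular CI/CD tool’s API.

```python
def deploy_with_rollback(current_version, new_version, deploy, health_check, rollback):
    """Deploy new_version; if the health check fails, roll back to current_version."""
    deploy(new_version)
    if health_check(new_version):
        return new_version        # change verified: keep the new code
    rollback(current_version)     # something went awry: restore the old code
    return current_version

# Simulated infrastructure for illustration.
deployed = []                     # record of every deploy/rollback action
deploy = deployed.append
rollback = deployed.append
healthy_versions = {"v2"}         # pretend only v2 passes its health check
health_check = lambda v: v in healthy_versions

# A good release sticks...
assert deploy_with_rollback("v1", "v2", deploy, health_check, rollback) == "v2"
# ...and a bad one is rolled back automatically.
assert deploy_with_rollback("v2", "v3-broken", deploy, health_check, rollback) == "v2"
assert deployed == ["v2", "v3-broken", "v2"]
```

The point of the sketch is that the verify-and-rollback decision is code, so QA can exercise it like any other logic instead of relying on a human to notice a bad release.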

3. Prevention is better than discovery:

In a DevOps environment, the priority for QA is the prevention of faults, not just finding them. As opposed to, say, ten or fifteen years ago, the QA teams of today have the luxury of pushing code out when it’s fully functional and rolling it back when things go awry. The positive ramification is that the QA team can continuously track the quality of the product. Thus the QA team has a profound influence on both the development and operational phases of the software.

4. Less of Human error:

DevOps enables more automated testing in QA, reducing the glitches that arise from the fatigue associated with manual testing. It also drives higher code coverage and quicker scripting of test cases.
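The fatigue point can be illustrated with a table-driven test: a machine checks the hundredth case exactly as carefully as the first. The email validator below is a deliberately simple, hypothetical example (not an RFC-complete check) used only to show the pattern.

```python
import re

def looks_like_email(value: str) -> bool:
    """Very simple illustrative email check: one '@', a dot in the domain,
    no whitespace. Not RFC-complete -- for demonstration only."""
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value) is not None

# Table-driven cases: trivial to extend, tedious for a human to re-check by hand.
CASES = [
    ("user@example.com", True),
    ("user.name@example.co.uk", True),
    ("no-at-sign.example.com", False),
    ("two@@example.com", False),
    ("spaces in@example.com", False),
    ("", False),
]

for value, expected in CASES:
    assert looks_like_email(value) == expected, f"failed for {value!r}"
```

Adding a newly discovered edge case is a one-line change to the table, after which every build re-verifies it forever – exactly the kind of repetitive diligence humans are worst at.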

5. Greater teamwork and rapport:

At the individual level, the testers and the operations team get a chance to be on the same page, and at the same level, as the developers. This improves coordination, ticking the boxes for higher market outreach and efficiency gains.