Did We Get It Right? – A Review Of Our 2016 Predictions

“Science is not, despite how it is often portrayed, about absolute truths. It is about developing an understanding of the world, making predictions, and then testing these predictions.” – Brian Schmidt

Schmidt is an Australian educator of repute – so, in the spirit of heeding the advice of our teachers, let’s take a look back at what we predicted for the world of testing in 2016, and test just how on (or off) target we were.

  • Internet of Things:
    In many ways, this was an easy prediction to make, and it’s fair to say that we hit the mark: the market has clearly expanded dramatically. Zinnov estimated a 2016 market of USD 54 billion for IoT technology products, and Gartner estimated that 6.4 billion connected things were in use worldwide in 2016, a growth of 30% over 2015. We predicted that such growth in IoT products would call for a greater emphasis on usability and performance testing and a sustained emphasis on automation in testing. In usability, the focus last year was on facets like installation, interoperability, and the launch and usage experience; the performance factors in focus were load-bearing capability, speed, and scalability. Among the key features of the IoT world are “Over The Air” (OTA) updates, where the OS and firmware are updated frequently. Many releases call for increased regression testing – a natural fit for greater automation.
  • Mobile Testing:
    Digital transformation of enterprises, driven by the growing power of mobility, was one of the defining trends of the year gone by. We estimated that there would be a slew of new mobile apps focused on mCommerce and mobile payments. This panned out a shade slower than expected in the early part of the year, but with some tailwinds later on. Business Insider estimated US in-store mobile payment volume would reach $75 billion in 2016, suggesting some lingering resistance from consumers. Late in the year, though, a high-growth market like India witnessed a strong push towards digital payments. Our estimate had been that the growth of such mobile-enabled businesses would bring a greater emphasis on security and penetration testing of mobile apps – it’s fair to say this has panned out as expected. We had also predicted the rise of testing for voice commands on the back of Siri. In many ways, this trend has moved faster than our estimates, with the sudden advent of digital assistants like Amazon Alexa.
  • Agile Development / Continuous Delivery:
    These are trends that we really took to heart over the year. If you have been following our blogs, you will have seen numerous references to the changing role of testing and test automation in the Agile way of life and, most recently, to the DevOps approach and how testing has been impacted. Perhaps the most visible difference in software development due to Agile and DevOps has been the ever-shorter iterations and the increasing number of releases. The world of software testing has been impacted in multiple ways: testing gets involved at much earlier stages of the product lifecycle, is much more closely integrated into the product development and deployment process, and automation is playing a greater, more critical role – just as we expected.
  • Security Testing:
    Security testing has already found mention in the earlier sections on IoT and mobile testing. The appearance of threats like the Mirai botnet in 2016 only reinforced just how important security testing had become over the year. This applies across mobile apps, web apps, and desktop apps, and the need is for comprehensive security testing. It became fair to assume that any vulnerability in your code, or in the code of any underlying technology or product, would be open to exploitation, and this only drove up the emphasis on security testing. The “World Quality Report 2016”, jointly published by Capgemini, Sogeti and HP, reported that 65% of the QA executives surveyed found security to be their top concern. This was more or less in line with what we had predicted at the start of the year.
  • Focus on automation in testing over test automation:
    This was more a fervent appeal than a prediction: to make automation more strategic and more central to the process of creating high-quality products, so that the full benefits of the automation initiative shine through. To that extent we are happy that, at least in the interactions we have been having, the focus has shifted from achieving “fewer testers” to doing “better testing”, and from unattainable goals like “100% test automation” to “strategic impact”. We still believe that the role of automation is to support the testers, not to replace them, and more and more teams are coming around to that way of thinking – kind of like we predicted!

Niels Bohr said, “Prediction is very difficult, especially about the future”. We are in no position to disagree with a Nobel Laureate – so despite the reasonable accuracy of our 2016 predictions, we are in no rush to trade in our software development hats for a crystal ball!

Criteria for Selecting Mobile Application Testing Tools for Your Business

Mobile application testing is one of the more complex and strenuous testing activities, because of the many factors and conditions involved. A mobile application needs to be tested on every possible combination of the factors that affect its functioning, such as device, operating system, platform, network configuration and settings, and many other relevant parameters. Ensuring the specified testing coverage across every combination of device, OS, and platform, along with their different versions and variants, makes the mobile tester’s task hectic and complex.

However, the job of mobile testers can be made easier by bringing testing tools into the mobile application testing process, which may significantly reduce the time and effort needed to test a mobile application.

The market is flooded with a wide variety of mobile application testing tools, each advertising its proficiency and competency at testing mobile apps. The sheer number of tools and their appealing advertisements often confuse and mislead testers, who end up selecting an inappropriate or ineffective tool and wasting the money spent on it.

Here, we list some criteria that may be considered while selecting a mobile application testing tool, to meet the testing needs and requirements from both a technical and a business perspective.

  1. Targeted Platform: The testing tool should be selected with respect to the platform, and its different variants and versions, on which the mobile app is intended to run. Ideally, beyond the primary target platform(s), the tool should also support testing on other platforms. This enables cross-platform testing of the mobile application.
  2. Code and Build Privacy: Software source code and builds are sensitive with respect to privacy and security. The code or build should not be shared or exported outside the testing team’s boundaries or environment to any unknown or unauthorized entity. The selected tool should not compromise the privacy or security of the source code or build in any respect.
  3. Additional features: Besides automating mobile app tests, a testing tool should provide additional useful features. It should be able to deliver multiple functionalities, such as:
    • Logging and reporting defects.
    • Filtering logged defects by priority, time, type, and other relevant parameters.
    • Monitoring and tracking bugs.
    • Letting the QA or project manager view an overall, summarized status of the tests.
  4. Continuous testing: The automated testing tool should support continuous testing, so that the impact of any change or modification to the code is evaluated automatically. Changes in the code should be readily re-tested by the tool.
  5. Third-party bug tracking: The selected tool should be able to integrate with other, third-party bug-tracking systems.
  6. Team Management: Along with testing the mobile app, the tool should ideally help manage the testing team’s activities, including roles and responsibilities, the tasks assigned to each member, task status, and feedback and reviews.
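
As an illustration, the criteria above can be applied as a simple weighted scorecard. This is only a sketch of the idea: the tool names, weights, and ratings below are all invented, and a real evaluation would use your own priorities and hands-on trial results.

```python
# Hypothetical weighted scorecard for comparing candidate tools.
# Weights mirror the checklist above; all numbers are invented.
CRITERIA_WEIGHTS = {
    "platform_coverage": 0.30,   # targeted platforms, variants, versions
    "code_privacy": 0.25,        # source/build never leaves the team
    "extra_features": 0.15,      # defect logging, filtering, dashboards
    "continuous_testing": 0.15,  # re-tests on every code change
    "integrations": 0.10,        # third-party bug trackers
    "team_management": 0.05,     # roles, tasks, reviews
}

def weighted_score(ratings):
    """Combine per-criterion ratings (0..5) into one weighted score."""
    return round(sum(CRITERIA_WEIGHTS[c] * r for c, r in ratings.items()), 2)

candidates = {
    "Tool A": {"platform_coverage": 5, "code_privacy": 3, "extra_features": 4,
               "continuous_testing": 5, "integrations": 4, "team_management": 2},
    "Tool B": {"platform_coverage": 4, "code_privacy": 5, "extra_features": 3,
               "continuous_testing": 4, "integrations": 3, "team_management": 4},
}

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
print(best, weighted_score(candidates[best]))
```

A scorecard like this will not choose a tool for you, but it forces the team to make the business and technical trade-offs explicit before spending on licences.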

Conclusion:

The above are some general criteria for selecting a testing tool. A tester, drawing on experience and rational judgment, and with the help of the business team, may consider additional parameters to select the best tool for testing the mobile app.

Localization Testing: why, when and how?

Organizations, whether small or large, are moving towards globalization. The reason: a booming global economy is attracting every industry to operate beyond local boundaries and explore opportunities to grow and expand much faster on the global stage. With that context, let’s move to our topic.

While developing a software product, the focus is on incorporating every functionality and feature that may attract users and ensure a large audience for the software, i.e., efforts are directed towards developing a quality application that will be readily accepted by users worldwide, irrespective of geographical location. This is called globalization of the software product.

Globalizing your product is indeed a good move, and of utmost importance, but localizing it cannot be ignored either. Let’s see why localization of software is required.

Why is localization of software required?

Software needs to be localized to meet the needs and expectations of the local audience. A product that meets the needs and expectations of one territory, culture, or region may fail to meet the expectations of users from a different culture or region. Linguistic translation alone is therefore not enough to localize software: two cultures, countries, or regions may differ in style, conventions, standards, design, time zone, fonts, colours, and more. Users from different cultures and regions have different tastes and different ways of looking at and using a software product. To make a product globally recognized, it is preferable to first target local markets and then go for the global market; without localization, you cannot go for globalization of the product.

What is Localization Testing?

Localization testing is a testing methodology within the software quality assurance process that ensures a globalized software product adapts correctly to a particular locale: its culture, regional settings, and environment.

How to do localization testing?

The localization testing of a software product may comprise the following activities:

  • Verifying support for multiple character sets.
  • Evaluating UI issues such as truncated, missing, inappropriately translated, or incorrectly displayed text or content.
  • Checking whether the language and content describing the system’s functionality match the targeted country or locale.
  • Checking that the conventions, standards, and protocols implemented in the system are appropriate for the targeted region.
  • Checking consistency throughout the software documentation with respect to the targeted country’s language and settings.
  • Checking for grammatical mistakes.
  • Verifying that time zone, date, and currency formats match the targeted country.
  • Verifying the system’s adherence to the rules, regulations, laws, and agreements of the particular country.
  • Checking for appropriate layout, design, and placement of images and text.
  • Checking consistency in design, layout, and style.
  • Verifying screen resolution against the targeted devices’ resolutions.
  • Verifying proper encoding and decoding of characters.
  • Verifying correct and appropriate translation of content for the particular country or region.
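
A couple of the checklist items above (missing translations, and text likely to be truncated in the UI) can be automated directly. The sketch below assumes hypothetical message catalogs and an invented 20-character width budget; real projects would read catalogs from their translation files.

```python
# Hypothetical message catalogs: locale -> {message key: translated text}.
catalogs = {
    "en": {"greeting": "Welcome", "cart": "Shopping cart"},
    "de": {"greeting": "Willkommen", "cart": "Einkaufswagen"},
    "fr": {"greeting": "Bienvenue"},  # "cart" translation is missing
}

def missing_keys(catalogs, base="en"):
    """Report keys present in the base locale but absent elsewhere."""
    base_keys = set(catalogs[base])
    return {loc: sorted(base_keys - set(msgs))
            for loc, msgs in catalogs.items()
            if loc != base and base_keys - set(msgs)}

def too_long(catalogs, budget=20):
    """Flag translations likely to be truncated in a fixed-width UI element."""
    return [(loc, key) for loc, msgs in catalogs.items()
            for key, text in msgs.items() if len(text) > budget]

print(missing_keys(catalogs))  # reveals the missing French "cart" entry
print(too_long(catalogs))
```

Checks like these can run on every build, leaving the culturally sensitive judgments (tone, imagery, conventions) to human reviewers.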

Conclusion:

The above are just a few of the general activities that may be carried out during localization testing of a system. The exact activities and scope of localization testing will vary from team to team, depending on needs and requirements.

In Software, What To Automate Is As Important As How To Automate

If Shakespeare were a tester, in the early days of test automation adoption he would certainly have asked, ‘To automate or not to automate, that is the question’. As test automation has become an integral part of every testing strategy, this question has evolved just a little. Today, most testing teams recognize that they must incorporate test automation to keep up with the speed of development. Agile testing and newer software development methodologies such as Test Driven Development (TDD) place testing at the heart of software development; hence, the tests have to run as fast as the development process, and a failure to do so drives up costs through timeline overruns. While test automation carries the promise of great software quality, the fact remains that we cannot automate each and every test. Why? Simply because you want maximum returns from your test automation initiatives, and automating everything only drives up costs because of the time, resources, and complexity involved. At the same time, by automating the right tests, teams can increase test coverage, reduce the number of bugs, improve software quality, and eventually take their product to market much faster. The reality is that automated testing is not an ‘all or nothing’ proposition. Software testing still needs some amount of manual testing; the trick to testing success lies in identifying what to automate as much as in deciding how to automate.

When to use test automation?

For any automation initiative to be successful, the testing team must first identify the activities that are repetitive in the development cycle. Identifying the target environments and validating functionality across them becomes the starting point of any automation initiative. It’s best not to compare automated and manual testing, since the two serve different purposes. With test automation, you can increase test coverage, get faster feedback, find more bugs, and save time; automated checks essentially verify known facts. Manual testing, on the other hand, is a more investigative exercise, where tests are designed and executed simultaneously and the human brain is employed to spot failures in the system.

Automated tests take the pain out of testing by taking care of the tasks that are repeatable. In our experience, the tests mentioned below lend themselves beautifully to automation, increase test accuracy, and improve software quality.

Regression Testing

Even the smallest tweak in software code can make the product behave differently, and when you fix something, you run the risk of breaking something else. Regression testing ensures that any change or addition to the software code does not impact existing functionality. It also catches bugs that might have been unwittingly introduced into the system by an upgrade or a patch. During software development, regression tests are run frequently to confirm that even the smallest alterations, enhancements, or configuration changes in the application source code do not affect application functionality.
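
In its simplest form, an automated regression suite is a table of inputs and known-good outputs that is replayed after every change. The sketch below uses a hypothetical `apply_discount` function and invented expected values to show the shape of the idea.

```python
def apply_discount(price, percent):
    """Hypothetical function under test (imagine it was just 'fixed')."""
    return round(price * (1 - percent / 100), 2)

# (price, percent, expected) -- captured from known-good behaviour.
REGRESSION_CASES = [
    (100.0, 10, 90.0),
    (19.99, 0, 19.99),
    (50.0, 100, 0.0),
]

def run_regression():
    """Return the cases whose behaviour changed; empty list means no regressions."""
    return [(p, pct, exp, apply_discount(p, pct))
            for p, pct, exp in REGRESSION_CASES
            if apply_discount(p, pct) != exp]

print(run_regression())  # an empty list means the change broke nothing
```

Because the table is data, growing the regression suite after each bug fix costs one line, not one new test script.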

Functional Testing

Automating most, if not all, of the functional tests also enhances the performance of a testing team. Functional testing focuses on what the software ‘does’ and is not concerned with the internal details of the application. It thus becomes easier for testers to automate tests and set developer-independent benchmarks to assess whether the function being developed performs as expected and is crash-resistant under user load. Automating functional tests ensures that even the most inexperienced tester can perform powerful and comprehensive functional tests and contribute to a robust software product.
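
A functional test treats the code as a black box: it states expected behaviour purely in terms of inputs and observable outputs. The sketch below uses a hypothetical `signup` function; the point is that the checks never look at how the function works internally.

```python
def signup(username, password):
    """Hypothetical function under test."""
    if len(username) < 3:
        return {"ok": False, "error": "username too short"}
    if len(password) < 8:
        return {"ok": False, "error": "password too weak"}
    return {"ok": True, "user": username}

# Expected behaviour, stated only in terms of what the software *does*:
assert signup("alice", "s3cretpass") == {"ok": True, "user": "alice"}
assert signup("al", "s3cretpass")["error"] == "username too short"
assert signup("alice", "short")["error"] == "password too weak"
print("functional checks passed")
```

Because the assertions depend only on observable behaviour, the implementation can be rewritten freely and the same tests still judge it.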

Unit Testing

Unit testing is the testing of small code fragments to gain a deeper and more granular view of how the code is performing. The identified pieces of code are checked independently and in isolation to ensure that they behave correctly. Manual unit testing is time- and resource-intensive and can be error-prone. Automating unit tests helps keep the source code error-free, identifies errors early in the development phase, and gives confidence that the code works now and will keep working in the future.
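
Python’s standard `unittest` module is one common way to automate such checks. The sketch below tests a small hypothetical `slugify` helper independently and in isolation, with one test method per behaviour.

```python
import unittest

def slugify(title):
    """Small hypothetical unit under test: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # Each test checks the unit independently and in isolation.
    def test_lowercases(self):
        self.assertEqual(slugify("Hello"), "hello")

    def test_joins_words_with_hyphens(self):
        self.assertEqual(slugify("Hello Wide World"), "hello-wide-world")

    def test_empty_title(self):
        self.assertEqual(slugify(""), "")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Running the suite on every commit catches a broken unit minutes after the change, not weeks later in system testing.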

Integration Testing
Integration testing is performed to see how the software behaves when all the pieces are put together. Integration tests are performed wherever there is a coupling between two software systems; when a coupling is broken, the software does not perform as it should. Since integration testing spans all layers, testing it manually would mean re-executing the tests by hand each time, which hurts the build process because manual integration testing is extremely time-consuming and resource-intensive. If integration tests are automated instead, testers can catch bugs faster and ensure that the application performs optimally when all its pieces are put together.
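
An integration test exercises the coupling between components rather than each piece alone. The sketch below wires a hypothetical parser layer to a SQLite storage layer and verifies that data flows correctly through both.

```python
import sqlite3

def parse_order(line):
    """Layer 1 (hypothetical): parse 'customer,amount' into a tuple."""
    name, amount = line.split(",")
    return name.strip(), float(amount)

def save_order(conn, order):
    """Layer 2 (hypothetical): persist the parsed order."""
    conn.execute("INSERT INTO orders(customer, amount) VALUES (?, ?)", order)

def total_for(conn, customer):
    row = conn.execute("SELECT SUM(amount) FROM orders WHERE customer = ?",
                       (customer,)).fetchone()
    return row[0]

# Integration check: data flows end to end through both layers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders(customer TEXT, amount REAL)")
for line in ["alice, 10.50", "alice, 4.50", "bob, 3.00"]:
    save_order(conn, parse_order(line))
assert total_for(conn, "alice") == 15.0
print("integration check passed")
```

If the parser’s output format drifts away from what the storage layer expects, this test fails even though each layer’s unit tests might still pass.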

Smoke Testing
Smoke testing is a quick test conducted after a build is completed, to identify (and then fix) obvious defects in a piece of software. Smoke tests are usually non-comprehensive: they check only that the important functions work and assess whether the build is stable enough to proceed with further testing. Smoke testing is also called Build Verification Testing, and it should be automated when builds are frequent, as it exposes integration issues and identifies problems with the code early.
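
A smoke suite can be as simple as a fail-fast list of sanity checks over the build’s critical paths. The checks below are hypothetical stand-ins (a real suite would launch the app, load a page, attempt a login); the structure is what matters.

```python
# Each entry is one quick sanity check on a critical function (all hypothetical).
def app_starts():
    return True  # stand-in for: process launches and responds

def home_page_renders():
    return "<title>" in "<html><title>Shop</title></html>"

def login_works():
    return {"ok": True}["ok"]

SMOKE_CHECKS = [app_starts, home_page_renders, login_works]

def smoke_test():
    """Fail fast: the first broken check marks the build as unstable."""
    for check in SMOKE_CHECKS:
        if not check():
            return f"BUILD UNSTABLE: {check.__name__} failed"
    return "BUILD OK: proceed to full testing"

print(smoke_test())
```

Wired into the build pipeline, a verdict like this gates whether the slower, comprehensive suites run at all.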

Performance Testing
Performance testing of an application is an intensive and exhaustive process, as it involves identifying performance issues. Tests like load testing, volume testing, and stress testing fall within its purview, all targeted at identifying the factors that affect an application’s performance. Performance tests assess whether the application can manage varying volumes of system transactions and handle a large number of concurrent users without negatively impacting its speed, stability, and scalability. Performance testing covers several functional and non-functional components of the application, assesses the reliability of the product, and identifies the reasons behind performance bottlenecks (whether the problem is software or hardware).
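
The core mechanics of a small load test (simulate concurrent users, record per-request latency, report a percentile) can be sketched with the standard library alone. The handler and thresholds below are invented; real tools drive a real server and far larger loads.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(user_id):
    """Hypothetical request handler; returns its own latency in seconds."""
    start = time.perf_counter()
    sum(range(10_000))  # stand-in for real work
    return time.perf_counter() - start

# Simulate 200 requests from 20 concurrent "users".
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(200)))

latencies.sort()
p95 = latencies[int(len(latencies) * 0.95)]  # 95th-percentile latency
print(f"requests: {len(latencies)}, p95 latency: {p95 * 1000:.3f} ms")
```

Reporting a percentile rather than an average matters: the slowest 5% of requests are usually where users feel the bottleneck.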

Having established the case for test automation, it is important to note that usability checks, random (ad hoc) testing, device interface testing, and back-end testing are generally best conducted manually. Employing great manual testers is essential, especially for exploratory testing, since they have the ability and experience to question the system and see whether things behave differently. For complete testing success, it is essential to take a strategic approach to the testing initiative: find the balance between manual testing and test automation, and then find the right set of testing tools to aid the automation process so that it is cost-effective and delivers great returns.

How to Manage Test Data in End-to-End Test Automation?

Today’s software products, built with the latest and most advanced technologies, help people perform long, complex, heavy, and repetitive activities effectively and in very little time. However, most software products, if not all of them, cannot perform their intended functions entirely on their own.

These applications must associate or integrate with external applications, systems, and environment components to perform their intended functions smoothly and without interruption. This multiplies the already-present complexity of the software application and, with it, the probability of bugs and defects in the system.

The software QA process offers the end-to-end testing methodology, which not only verifies the integration of the software with the other systems required to execute its functionality, but also tests the completeness of the application, from beginning to end and at every level, ensuring the desired, appropriate, streamlined workflow and data flow throughout.

Automating the end-to-end testing process can be an efficient, productive, and time-saving approach, since this testing technique covers the whole software system, including its different interfaces, databases, and other relevant entities, along with the complexities of each. Furthermore, the large volume of test data needed to test the system thoroughly, and the consistency, accuracy, and integrity required throughout the testing schedule, effectively rule out a purely manual approach.

What is Test Data?

“Test data” is an umbrella term covering all the data inputs required to test a system’s functionality. It may include positive data, for verifying expected behaviour, and negative or invalid data, for exercising error- and exception-handling mechanisms.
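
The positive/negative distinction can be made concrete with a single data table that drives both kinds of check. The validator and data below are hypothetical; the pattern is what a test-data set typically looks like.

```python
import re

def is_valid_email(value):
    """Hypothetical validator under test."""
    return bool(re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", value))

TEST_DATA = [
    # positive data: expected functioning
    ("user@example.com", True),
    ("a.b@mail.co", True),
    # negative / invalid data: error-handling paths
    ("", False),
    ("no-at-sign.com", False),
    ("two@@example.com", False),
]

for value, expected in TEST_DATA:
    assert is_valid_email(value) is expected, f"unexpected result for {value!r}"
print("all test data cases passed")
```

Keeping both kinds of data in one table makes it easy to see, at a glance, whether the rejection paths are as well covered as the happy path.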

Test data plays a major role in the software testing process: it is what reveals both the qualities and the deficiencies of a system. Hence the need to create and maintain test data in end-to-end test automation, which is not an easy task for the testing team.

So, how should test data be managed in end-to-end test automation?

A QA team may adopt certain strategies and practices to manage the creation and use of test data in end-to-end test automation, as their needs and requirements dictate. A few of these are stated below.

  • Test data creation during test phase set up

To ensure correct and precise results for each testing phase, and for each functionality or module under test, it is preferable to create the test data for each testing activity in parallel with the other set-up activities for that phase. This approach ensures that appropriate test data inputs are available for each testing process. The data may be generated with insert operations on the database, or simply through the user interface of the software application.

However, creating test data as part of the run increases the time required to execute and complete a test phase. Setting up the test data also requires developing and executing additional scripts, adding to the cost of automation.
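
The database-insert variant of this set-up can be sketched with SQLite. The table, fields, and rows below are hypothetical; the point is that the phase’s set-up step inserts exactly the rows its tests expect to find.

```python
import sqlite3

def set_up_test_phase():
    """Create the schema and insert the rows this phase's tests rely on."""
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE users(id INTEGER PRIMARY KEY, name TEXT, active INTEGER)")
    conn.executemany("INSERT INTO users(name, active) VALUES (?, ?)",
                     [("alice", 1), ("bob", 0), ("carol", 1)])
    conn.commit()
    return conn

conn = set_up_test_phase()
active = conn.execute(
    "SELECT COUNT(*) FROM users WHERE active = 1").fetchone()[0]
print(f"active users available to the tests: {active}")
```

Because set-up is a script, every run of the phase starts from identical, known data rather than whatever the previous run left behind.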

  • Test data creation prior to test phase

Creating test data before actually executing the tests may prove more convenient and productive than creating it during the test phase, since it lets the testing team focus entirely on executing the automated test scripts rather than splitting attention between data creation and test execution.

Test data may be generated ahead of execution in the same ways as during the test phase, i.e., by applying insert operations on the database or through the user interface of the system. The drawback of this strategy is the uncertainty over whether the pre-created data will still be accurate and appropriate when the tests actually run.

  • Cleaning the test data

This approach refreshes the test data, restoring it to its original state after test execution (or before the next phase begins). To implement it, a backup of the test data repository or database is taken so that the original state can be restored, or the test data is cleared after execution. Rolling the used test data back to its original state maintains the repeatability of the tests along with the data. However, the approach requires thorough knowledge of the database model, and some databases provide little or no access to their internals, which may be reason enough to strike it off the list.
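
The backup-and-restore cycle can be illustrated with SQLite’s built-in backup API. The table and rows are hypothetical; a pristine in-memory copy plays the role of the backup, and each phase starts from a fresh clone of it.

```python
import sqlite3

# The "backup": a pristine copy of the test data, created once.
pristine = sqlite3.connect(":memory:")
pristine.execute("CREATE TABLE accounts(name TEXT, balance REAL)")
pristine.execute("INSERT INTO accounts VALUES ('alice', 100.0)")
pristine.commit()

def fresh_database():
    """Clone the pristine backup so every phase starts from the same state."""
    db = sqlite3.connect(":memory:")
    pristine.backup(db)  # sqlite3's Connection.backup copies source -> target
    return db

db = fresh_database()
db.execute("UPDATE accounts SET balance = 0 WHERE name = 'alice'")  # a test mutates data

db = fresh_database()  # next phase: original state restored from the backup
balance = db.execute(
    "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
print(f"balance after restore: {balance}")
```

For production-grade databases, the same idea appears as snapshot restores or per-test transactions that are rolled back, subject to the access limitations mentioned above.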

  • Visualizing and understanding the data layer

With the help of the various tools readily available in the market, a tester can visualize and step through each data layer to analyse and understand the data flow, which can prove beneficial in testing the system.

  • Cutting out the test data

Testing works best when it follows good practice, which is why experienced professionals are needed for such tasks. First, a systematic procedure should be established for testing the individual parts of the system, tracking bugs and the fixes they require. Applying regression testing while fixing those bugs will eventually lead to the right destination, since it verifies that each fix behaves the way the team intended.

Conclusion:

Besides the strategies stated above, a testing team may adopt other approaches that suit their needs and requirements in the time available. Managing test data in automation is a crucial task that directly impacts the productivity and results of the automation effort, since automation is all about repeated, large-scale use of test scripts, including their test data, to perform end-to-end testing of a software application.

The Growing Case of Angular JS for the Mobile Web

Angular JS, an open-source framework from Google, has gained a lot of traction in the world of web development today. It has come to be seen as a viable choice even for responsive mobile web application development, as it allows developers to create modern applications easily. Considering that most applications today are data-driven, Angular JS fits comfortably into the developer’s toolbox, enabling backend web services to interact with external data sources. The framework lets developers extend the HTML syntax to express application components succinctly, while allowing the use of HTML as the main template language. This blog takes a look at some considerations that make Angular JS great for developing mobile web applications.

  • Responsiveness:
    According to the Cisco Visual Networking Index, global mobile devices grew to 7.9 billion in 2015, up from 7.3 billion in 2014. According to this report, “the typical smartphone generated 41 times more mobile data traffic (929 MB per month) than the typical basic-feature cell phone (which generated only 23 MB per month of mobile data traffic).” Clearly, developers now need to create web applications that present themselves correctly on mobile devices. Since Angular JS is an open-source JavaScript MVC framework, it allows developers to create rich and responsive applications for both desktop and mobile environments from the same codebase. Additionally, these applications can run on any HTML5-compliant desktop or mobile browser.
  • Scalability and Maintainability:
    Modern web applications need a scalable architecture, so that upgrades, patches, and bug fixes can be implemented easily. Angular JS is a strong choice for building large, scalable applications: it features directives such as ng-class and ng-model, provides two-way data binding, and lets developers save data to the server in just a few lines. Applications built with Angular JS are also easily maintainable, as the framework encourages object-oriented design principles. Along with this, Angular JS allows developers to use either MVC or MVVM to separate presentation from business logic, further boosting maintainability.
  • Mobile features:
    A mobile web application has to ensure that all its features display correctly across browsers, and Angular JS has some excellent mobile components. Frameworks such as Ionic or Mobile Angular UI give developers the flexibility to add rich mobile components: user interfaces, overlays, sidebars, switches, swipe gestures, scrollable areas, and top and bottom navigation bars that do not bounce on scrolling when viewed on a mobile, along with options for push notifications and analytics. These Angular JS frameworks rely on robust libraries (Mobile Angular UI uses overthrow.js and fastclick.js) to provide a smooth, highly responsive, touch-enabled mobile experience. The two-way data binding capability ensures that when the framework detects a browser change it updates the necessary views immediately, providing a uniform viewing experience.

    Because Angular JS logic is reusable, web application logic can be reused across multiple devices and platforms, while developers retain the flexibility to customize the UI for each platform. Developers can thus keep the application’s functionality separate from its UI, which helps provide a uniform application experience.

  • Performance:
    Performance is critical for mobile web applications: a slow application is almost worse than no application at all. Angular JS takes good care of performance since it uses a declarative paradigm; instead of describing all the steps needed to achieve an end result, developers write lightweight code where only the end result is described. Angular JS also loads pages asynchronously, which decreases page load time and increases the speed of the application, thereby boosting performance.
  • Dependency Injection:
    Additionally, Angular JS allows developers to build applications from separate modules that can be interdependent or autonomous. Its built-in dependency injection mechanism independently identifies situations where additional objects are needed, then provides and binds them, making application development much easier. Because Angular JS uses the MVC structure and separates data from logic components, dependency injection makes it possible to bring server-side services to the client-side web application, reducing the burden on the server and contributing to improved application performance.
  • Security:
    To call a good application ‘great’, developers have to ensure that it has robust security features, and developers can tune the security of a responsive web application built with Angular JS accordingly. It communicates with servers over an HTTPS interface, whether a simple web service or a RESTful API. Angular JS also provides CSRF protection, supports strict expression evaluation, and allows strict contextual escaping. Additionally, to augment security, especially for enterprise applications, Angular JS lets developers bring in supplemental libraries such as ldapjs to implement single sign-on through interaction between those libraries and AngularJS.

Along with the above-mentioned benefits, Angular JS has a huge and active community behind it. Getting started with Angular JS is also quite easy, as it is not necessary to learn the entire framework to build an application. Angular JS is built with testing in mind and makes it easy to mock physical devices and situations such as GPS, Bluetooth, etc. This testing focus also makes test automation much easier to implement.

Conclusion

Given all these advantages, Angular JS is being adopted incredibly fast by developers across the globe; this means more add-ons, more high-quality libraries and additional support for Angular developers. Get set for more Angular JS in the mobile web!

Website Testing- Did you miss anything while testing?

Website testing is an extensive form of testing carried out to cover every quality aspect of a website. It is a broad term that encompasses numerous testing areas and activities, examining the website from many different perspectives.

The end purpose of website testing is to make the website easy for users to learn, understand, use and navigate, and it therefore includes almost all types of testing:

  • Functionality testing ensures the intended and appropriate functionalities and features of a website.
  • Usability testing ensures the user-friendliness of a website.
  • Compatibility testing looks after the compatibility of a website across multiple variants of browsers, operating systems, networks, hardware, software and devices involved in the functioning of the website.
  • Performance testing ensures the smooth performance of a website under both expected and adverse load, conditions and environments.
  • Database testing concerns the veracity, integrity, consistency and accuracy of the diverse range of data stored at the backend of the website.
  • Security testing ensures that no loopholes or security glitches are left undetected in the website, which could grant access to unauthorized or malicious users or leave the website open to other attacks.

Each of these testing types (and several more) targets multiple aspects of a website. There is thus a lot to test, and a tester can easily miss one or more critical elements that need to be considered. Below is a checklist of the essential items that generally need to be covered when testing a website.

1. Functionality Testing:

The following items usually need to be covered while performing a functionality test of a website.

  • Forms testing: Forms on a website are used by users to store or retrieve information. Test each form to ensure its consistency and integrity throughout the website. Form testing may include the following activities:
    • Validating each field of the form.
    • Validating each field with negative or invalid values.
    • Ensuring default values for each field.
    • Highlighting mandatory fields with an asterisk (*) symbol.
    • Testing the password field's ability to conceal the entered password.
  • Link testing: Testing the various links present on a website, which includes the following types:
    • External links: links that direct the user to a web page or website outside the site's domain.
    • Internal links: links directing one web page to another within the website's domain.
    • E-mail links: links providing direct access to the user's mail client with pre-filled information such as the recipient address.
    • Broken links: links that no longer lead to any page, internal or external; these are also known as dead links.
  • Validation testing: Website validation is done to ensure the website's adherence to certain specified and established standards, which also helps optimize the website at the SEO level. This includes validation of feeds, HTML, XHTML and CSS properties and tags.
  • Cookie testing
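
The form checks above can be sketched as a small automated test. The validation rule here (a required email field) is purely illustrative; real rules would come from the website's specification:

```javascript
// Sketch of automated form-field checks: validate a required e-mail field
// with valid, invalid and empty values (the rule itself is illustrative).
function validateEmailField(value) {
  if (value === undefined || value === null || value.trim() === '') {
    return { ok: false, error: 'Field is mandatory' };
  }
  // Simple format check; real validation rules would come from the spec.
  var emailPattern = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
  if (!emailPattern.test(value)) {
    return { ok: false, error: 'Invalid e-mail address' };
  }
  return { ok: true };
}

var valid = validateEmailField('user@example.com'); // positive case
var invalid = validateEmailField('not-an-email');   // negative/invalid value
var empty = validateEmailField('');                 // mandatory-field case
```

One function exercises three of the checklist items: field validation, negative values, and the mandatory-field rule.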

2. Database Testing:

Most websites are driven by their back end, i.e. the data provided and stored. Database testing is therefore crucial for a website and may include the following activities:

  • Correct and appropriate execution of database queries.
  • Verifying and validating the data integrity throughout the database in the event of addition, deletion and update of data.
  • Accurate retrieval of data for a given query.
  • Testing tables along with the triggers, stored procedures and views of the database.
  • Verifying all sorts of keys used in the database system.
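
The add/update/delete integrity checks above can be sketched against an in-memory stand-in for a table. A real test would of course run queries against the actual database; the table and functions here are illustrative:

```javascript
// In-memory stand-in for a "users" table, used only to illustrate the
// add/update/delete integrity checks (a real test would hit the database).
var users = {};

function insertUser(id, name) {
  if (users[id]) throw new Error('Duplicate primary key: ' + id);
  users[id] = { id: id, name: name };
}
function updateUser(id, name) {
  if (!users[id]) throw new Error('No such row: ' + id);
  users[id].name = name;
}
function deleteUser(id) {
  delete users[id];
}

insertUser(1, 'Ada');
updateUser(1, 'Ada Lovelace');
var afterUpdate = users[1].name;          // 'Ada Lovelace'
deleteUser(1);
var afterDelete = users[1] === undefined; // true: the row is gone
```

The point of the checks is that every mutation leaves the data in a consistent state and violations (duplicate keys, missing rows) are reported, not silently absorbed.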

3. Performance Testing:

In performance testing, certain specified parameters need to be evaluated to assess the website's performance under different conditions. Performance testing of a website generally consists of the following:

  • Stability of the website under different loads (in terms of users) and resource conditions, so that it works uninterruptedly without failing or crashing, for example:
    • Normal load, full resource utilization.
    • Normal load with cut in resources.
    • Heavy load, full resource utilization.
    • Heavy load with cut in resources.
    • Extreme load with full access to resources.
    • Extreme load with limited access to resources.

    Here, resources may include hardware, software (assisting applications), servers, memory (RAM), CPU, network speed and connection, and disk space. The above conditions may also include time criteria. Generally, load, stress, soak, spike and volume testing are performed to ensure the stability and reliability of the website.

  • Scalability of the website to accommodate the growing and changing requirements (load and resources).
  • Response time, throughput and speed of the website under different load and conditions as stated above.
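
The response-time and throughput measurements above can be sketched as follows. This is a synchronous stand-in purely for illustration; real performance tests use dedicated tools driving actual HTTP traffic:

```javascript
// Sketch of measuring average response time over increasing load.
// handleRequest is a simulated page handler (illustrative stand-in).
function handleRequest(work) {
  var s = 0;
  for (var i = 0; i < work; i++) s += i; // simulated server-side work
  return s;
}

function measure(requests, workPerRequest) {
  var start = Date.now();
  for (var i = 0; i < requests; i++) handleRequest(workPerRequest);
  var elapsedMs = Date.now() - start;
  return {
    requests: requests,
    elapsedMs: elapsedMs,
    avgMsPerRequest: elapsedMs / requests // response time under this load
  };
}

var normalLoad = measure(100, 10000);  // baseline load
var heavyLoad = measure(1000, 10000);  // ten times the requests
```

Comparing `avgMsPerRequest` between the normal and heavy runs shows whether response time degrades gracefully as load grows, which is the essence of the stability checks listed above.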

4. Usability Testing:

If a user lacks interest in using and navigating a website, this directly impacts traffic to the website. Usability testing is therefore an essential QA activity to ensure the user-friendliness of a website. In usability testing, the following points may be taken into account:

  • Testing design, layout and presentation of the website as per the user’s need & expectation.
  • Easy & smooth navigation & control between the web pages throughout the website.
  • Content also plays a major role in maintaining the user's interest. Content testing therefore needs to be performed on the website, covering spelling and grammatical errors, pictures, font size and style, colour, and other perceivable elements of the website.

5. Compatibility Testing:

Compatibility testing is done to ensure compatibility and subsequently the intended and appropriate functioning of the website across multiple variants of browsers, operating systems, devices, hardware or software, network configuration & settings, display resolution along with their different versions.

6. Security Testing:

Website or web security testing is done to uncover and correct or remove security vulnerabilities present in the website. The following activities may be carried out under web security testing:

  • Penetration testing of the website, i.e. attacking the system to detect security flaws and loopholes.
  • Accessing website using invalid or incorrect credentials (login & password) multiple times.
  • Checking log files located on the server, which contain information such as transactions, error messages and security breaches.
  • Hacking & cracking the password.
  • Verifying that confidential data and information are submitted over SSL (HTTPS).
  • Checking the website's resistance to SQL injection attacks.
  • Ensuring automatic termination of a session after a considerable period of user inactivity; further, a logged-out user should not be able to use the session again.
  • Checking for unauthorized access to confidential data, information, web log files and repositories.
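
The SQL-injection check above can be sketched by contrasting vulnerable and parameterised query construction. The functions are illustrative stand-ins, not any real database driver's API:

```javascript
// Sketch of an SQL-injection check (illustrative functions, not a real driver).
function buildQueryUnsafe(username) {
  // Vulnerable: user input is concatenated straight into the SQL text.
  return "SELECT * FROM users WHERE name = '" + username + "'";
}

function buildQuerySafe(username) {
  // Parameterised: the input travels separately from the SQL text.
  return { sql: 'SELECT * FROM users WHERE name = ?', params: [username] };
}

var attack = "' OR '1'='1";
var unsafe = buildQueryUnsafe(attack);  // injected condition ends up in the SQL
var safe = buildQuerySafe(attack);      // input stays confined to params

var unsafeIsInjected = unsafe.indexOf("OR '1'='1'") !== -1; // true: injection worked
var safeIsClean = safe.sql.indexOf('OR') === -1;            // true: SQL text unchanged
```

A security test feeds classic attack strings such as `' OR '1'='1` into every input and verifies they never alter the executed SQL.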

7. User Interface Testing:

Testing the user interface involves three components: the application, the database server and the web server. All three components need to be tested together to ensure proper interfacing along with an accurate and appropriate flow of data.

Conclusion:

The above is a general checklist that covers almost all kinds of websites and the important testing areas, along with the corresponding testing methods. A tester may expand this list based on the specified requirements and on his/her own judgment, in order to carry out more in-depth and thorough testing of the website so that nothing gets missed.

Top 90 QA Interview Questions Answers

  1. What is Software Quality Assurance (SQA)?
  2. Software quality assurance is an umbrella term for the planned processes and activities used to monitor and control the standard of the whole software development process, so as to ensure quality in the final software product.

  3. What is Software Quality Control (SQC)?
  4. With a purpose similar to software quality assurance, software quality control focuses on the software product itself rather than its development process, in order to achieve and maintain quality in the product.

  5. What is Software Testing?
  6. Software testing may be seen as a sub-category of software quality control; it is used to identify defects and flaws present in the software, whose removal subsequently improves and enhances product quality.

  7. Are software quality assurance (SQA), software quality control (SQC) and software testing similar terms?
  8. No, but the end purpose of all three is the same: ensuring and maintaining software quality.

  9. Then, what’s the difference between SQA, SQC and Testing?
  10. SQA is the broader term, encompassing both SQC and testing; it ensures quality in the software development process and consequently in the final product, whereas testing, which is used to identify and detect software defects, is a subset of SQC.

  11. What is software testing life cycle (STLC)?
  12. Software testing life cycle defines and describes the multiple phases which are executed in a sequential order to carry out the testing of a software product. The phases of STLC are requirement, planning, analysis, design, implementation, execution, conclusion and closure.

  13. How is STLC related to, or different from, the SDLC (software development life cycle)?
  14. Both the SDLC and the STLC depict phases to be carried out in sequence, but for different purposes. The SDLC defines each and every phase of software development, including testing, whereas the STLC outlines the phases to be executed during the testing process. It may be inferred that the STLC is incorporated in the testing phase of the SDLC.

  15. What are the phases involved in the software testing life cycle?
  16. The phases of STLC are requirement, planning, analysis, design, implementation, execution, conclusion and closure.

  17. Why are entry criteria and exit criteria specified and defined?
  18. Entry and exit criteria are defined to initiate and terminate a particular testing process or activity, respectively, once certain conditions, factors and requirements have been met or fulfilled.

  19. What do you mean by the requirement study and analysis?
  20. Requirement study and analysis is the process of studying and analysing the testable requirements and specifications through the combined efforts of the QA team, business analysts, the client and stakeholders.

  21. What are the different types of requirements required in software testing?
  22. Software/functional requirements, business requirements and user requirements.

  23. Is it possible to test without requirements?
  24. Yes. Testing is an art that may be carried out without requirements, with the tester relying on intellect, acquired skills and experience gained in the relevant domain.

  25. Differentiate between a software requirement specification (SRS) and a business requirement specification (BRS).
  26. An SRS lays out the functional and non-functional requirements for the software to be developed, whereas a BRS reflects the business requirement, i.e. the business need for the software product as stated by the client.

  27. Why are there bugs/defects in software?
  28. A bug or defect occurs in software for various reasons, such as misunderstanding of requirements, time restrictions, lack of experience, faulty third-party tools, dynamic or last-minute changes, etc.

  29. What is a software testing artifact?
  30. Software testing artifacts are the documents or tangible products generated throughout the testing process, used for testing itself or for correspondence within the team and with the client.

  31. What are test plan, test suite and test case?
  32. A test plan defines the comprehensive approach to testing the whole system, not a single testing process or activity. A test case, based on the specified requirements and specifications, defines the sequence of activities to verify and validate one or more functionalities of the system. A test suite is a collection of similar types of test cases.

  33. How to design test cases?
  34. Broadly, there are three different approaches or techniques to design test cases. These are

    • Black box design technique, based on requirements and specifications.
    • White box design technique based on internal structure of the software application.
    • Experience based design technique based on the experience gained by a tester.
  35. What is test environment?
  36. A test environment comprises the necessary software and hardware, along with network configuration and settings, to simulate the intended environment for the execution of tests on the software.

  37. Why is a test environment needed?
  38. Dynamic testing of software requires a specific, controlled environment comprising the hardware, software and other factors under which the software is intended to perform. The test environment thus provides the platform to test the software's functionality under the specified conditions.

  39. What is test execution?
  40. Test execution is the phase of the testing life cycle concerned with executing the test cases or test plans on the software product to ensure its quality with respect to the specified requirements and specifications.

  41. What are the different levels of testing?
  42. Generally, there are four levels of testing viz. unit testing, integration testing, system testing and acceptance testing.

  43. What is unit testing?
  44. Unit testing involves the testing of each smallest testable unit of the system, independently.

  45. What is the role of developer in unit testing?
  46. As developers are well versed with their own code, they are usually assigned the responsibility of writing and executing the unit tests.

  47. What is integration testing?
  48. Integration testing is a testing technique to ensure proper interfacing and interaction among the integrated modules or units after the integration process.

  49. What are stubs and drivers and how these are different to each other?
  50. Stubs and drivers are replicas of modules that are unavailable or have not yet been created; they work as substitutes in integration testing. The difference is that stubs are used in the top-down approach, while drivers are used in the bottom-up approach.
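
As a sketch of a stub in top-down integration testing: suppose the high-level order module is ready but the payment module below it is not, so a stub stands in. All module and function names here are illustrative:

```javascript
// High-level module under test: depends on a lower-level payment module.
function processOrder(order, paymentModule) {
  var receipt = paymentModule.charge(order.amount);
  return { orderId: order.id, paid: receipt.success };
}

// Stub replacing the unfinished payment module (illustrative names).
var paymentStub = {
  charge: function (amount) {
    // Returns a canned response instead of talking to a real gateway.
    return { success: true, amount: amount };
  }
};

var result = processOrder({ id: 7, amount: 99 }, paymentStub);
// result: { orderId: 7, paid: true }
```

A driver would be the mirror image: a small piece of throwaway code that calls a finished lower-level module from above when the real caller does not exist yet.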

  51. What is system testing?
  52. System testing is used to test the completely integrated system as a whole against the specified requirements and specifications.

  53. What is acceptance testing?
  54. Acceptance testing is used to ensure the readiness of a software product, with respect to the specified requirements and specifications, to be readily accepted by the targeted users.

  55. Different types of acceptance testing.
  56. Broadly, acceptance testing is of two types: alpha testing and beta testing. Further, acceptance testing can also be classified into the following forms:

    • Operational acceptance testing
    • Contract acceptance testing
    • Regulation acceptance testing
  57. Difference between alpha and beta testing.
  58. Both alpha and beta testing are forms of acceptance testing; the former is carried out at the development site by the QA/testing team, while the latter is executed at the client site by the intended users.

  59. What are the different approaches to perform software testing?
  60. Generally, there are two approaches to performing software testing: manual testing and automation. Manual testing involves the tester executing test cases on the software by hand, whereas automation uses frameworks and tools to automate the execution of test scripts.

  61. What is the advantage of automation over manual testing approach and vice-versa?
  62. In comparison to the manual approach, automation reduces the effort and time required to execute large numbers of test scripts repetitively and continuously over a long period, with accuracy and precision.

  63. Is there any testing technique that does not need any sort of requirements or planning?
  64. Yes, but only with the help of a test strategy using checklists, user scenarios and matrices.

  65. Difference between ad-hoc testing and exploratory testing?
  66. Both ad-hoc testing and exploratory testing are informal ways of testing the system without proper planning and strategy. However, in ad-hoc testing the tester is already well versed with the software and its features, whereas in exploratory testing he/she learns and explores the software during the course of testing, testing the system gradually as understanding grows.

  67. How is monkey testing different from ad-hoc testing?
  68. Both monkey and ad-hoc testing are informal approaches, but in monkey testing the tester does not require prior understanding or details of the software and learns about the product during testing, whereas in ad-hoc testing the tester already has knowledge and understanding of the software.

  69. Why is non-functional testing as important as functional testing?
  70. Functional testing tests the system's functionalities and features as specified prior to development; it only validates the intended functioning of the software against the specified requirements and specifications. How the system performs in unexpected circumstances and real-world conditions at the user's end, and whether it meets customer satisfaction, is assessed through non-functional testing. Non-functional testing thus looks after the non-functional traits of the software.

  71. Which is a better testing methodology: black-box testing or white-box testing?
  72. Both the black-box and white-box testing approaches have their own advantages and disadvantages. Black-box testing enables testers to test the system externally on the basis of the specified requirements and specifications, without any scope for testing the internal structure, whereas white-box testing verifies and validates software quality by testing its internal structure and workings.

  73. If black-box and white-box, then why gray box testing?
  74. Gray box testing is a third type of testing and a hybrid form of black-box and white-box testing approach, which provides the scope of externally testing the system using test plans and test cases derived from the knowledge and understanding of internal structure of the system.

  75. Difference between static and dynamic testing of software.
  76. The primary difference between the static and dynamic testing approaches is that the former does not involve executing code to test the system, whereas the latter requires code execution to verify and validate system quality.

  77. Smoke and sanity testing are used to test software builds. Are they similar?
  78. Both smoke and sanity testing are used to test software builds, but smoke testing is performed on initial, unstable builds, whereas sanity tests are executed on relatively stable builds that have already been through regression testing multiple times.

  79. When, what and why to automate?
  80. Automation is preferred when tests need to be executed repetitively over a long period and within specified deadlines. Further, an analysis of the ROI of automation is desirable to assess its cost-benefit model. Preferably, functional and regression tests may be automated. Tests that require accuracy and precision or are time-consuming may also be considered for automation, including data-driven tests.

  81. What are the challenges faced in automation?
  82. Some of the common challenges faced in the automation are

    • High initial cost, along with maintenance costs; a proper analysis is thus required to assess the ROI of automation.
    • Increased complexities.
    • Limited time.
    • Demands skilled testers with appropriate programming knowledge.
    • Automation training cost and time.
    • Selection of right and appropriate tools and frameworks.
    • Less flexible.
    • Keeping test plans and cases updated and maintained.
  83. Difference between retesting and regression testing.
  84. Both retesting and regression testing are done after a modification to the software. However, retesting validates that the identified defects have actually been removed or resolved after patches are applied, while regression testing ensures that the modification does not impact or affect the software's existing functionality.

  85. How to categorize bugs or defects found in the software?
  86. A bug or defect may be categorized on the basis of priority and severity: priority defines the need to correct or remove the defect from a business perspective, whereas severity states the need to resolve it from a software requirement and quality perspective.

  87. What is the importance of test data?
  88. Test data is used to drive the testing process: diverse types of test data are provided as inputs to the system to test its response, behaviour and output, which may be desirable or unexpected.

  89. Why is the agile testing approach preferred over the traditional way of testing?
  90. Agile testing follows the agile model of development, which requires little or no documentation, accommodates dynamic and changing requirements, and involves the client or customer directly, working on their regular feedback to deliver software in multiple short iterative cycles.

  91. What are the parameters to evaluate and assess the performance of the software?
  92. Parameters used to evaluate and assess the performance of the software and its testing include active defects, authored tests, automated tests, requirement coverage, number of defects fixed per day, tests passed, rejected defects, severe defects, reviewed requirements, tests executed and many more.

  93. How important is the localization and globalization testing of a software application?
  94. Globalization testing ensures that the software product's features and standards are fit to be accepted by users worldwide, while localization testing ensures that it meets the needs and requirements of users belonging to a particular culture, area, region, country or locale.

  95. What is the difference between verification and validation approach of software testing?
  96. Verification is done throughout the development phase on the software under development, whereas validation is performed on the final product, after the development process, with respect to the specified requirements and specifications.

  97. Do a test strategy and a test plan serve the same purpose?
  98. Yes, the end purpose of a test strategy and a test plan is the same, i.e. to work as a guide or manual for carrying out the software testing process, but the two still differ.

  99. Which is the better approach for performing regression testing: manual or automation?
  100. Automation provides a better advantage than the manual approach for regression testing, since regression tests are executed repeatedly.

  101. What is bug life cycle?
  102. Bug or Defect life cycle describes the whole journey or the life of a defect through various stages or phases, right from when it is identified and till its closure.

  103. What are the different types of experience based testing techniques?
  104. Error guessing, checklist based testing, exploratory testing, attack testing.

  105. Can a software application be 100% tested?
  106. No, as one of the principles of software testing states that exhaustive testing is not possible.

  107. Why is exploratory testing preferred and used in agile methodology?
  108. Agile methodology requires speedy execution of processes in small iterative cycles. Exploratory testing, being quick, not dependent on documentation, and carried out through the tester's gradual understanding of the software, therefore suits the agile environment best.

  109. Difference between load and stress testing.
  110. The primary purpose of both load and stress testing is to test the system's performance, behaviour and response under varied load. However, stress testing is an extreme form of load testing in which a system under increasing load is also subjected to unfavourable conditions, such as reduced resources or a short, limited time period for executing tasks.

  111. What is data driven testing?
  112. As the name suggests, data-driven testing is a type of testing, used especially in automation, in which testing is driven by defined sets of inputs and their corresponding expected outputs.
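
Data-driven testing can be sketched as one test procedure executed over a table of inputs and expected outputs. The function under test here (a member-discount rule) is purely illustrative:

```javascript
// Sketch of data-driven testing: one procedure, many data rows.
// The discount rule is an illustrative example, not from the article.
function discountedPrice(price, isMember) {
  // 10% member discount, rounded to the nearest whole unit.
  return isMember ? Math.round(price * 0.9) : price;
}

var testData = [
  { price: 100, isMember: true,  expected: 90 },
  { price: 100, isMember: false, expected: 100 },
  { price: 0,   isMember: true,  expected: 0 }
];

// The same steps run for every row; only the data varies.
var failures = testData.filter(function (row) {
  return discountedPrice(row.price, row.isMember) !== row.expected;
});
// failures.length === 0 when all rows pass
```

Adding a new scenario means adding a row to the data table, not writing a new test script, which is exactly why this style suits automation.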

  113. When to start and stop testing?
  114. Basically, the testing process starts when a software build becomes available. However, testing may start earlier, alongside the development process, as soon as requirements are gathered and available. Moreover, this depends on the software development model: in the waterfall model, testing is done in the testing phase, whereas in agile, testing is carried out in multiple short iteration cycles.

    Testing is a potentially endless process, as it is impossible to make software 100% bug-free. Still, there are certain conditions commonly used to stop testing, such as:

    • Deadlines
    • Complete execution of the test suites and scripts.
    • Meeting the specified exit criteria for a test.
    • High priority and severity bugs are identified and resolved.
    • Complete testing of the functionalities and features.
  115. Is exhaustive software testing possible?
  116. No

  117. What are the merits of using the traceability matrix?
  118. The primary advantage of using a traceability matrix is that it maps all the specified requirements to test cases, thereby ensuring complete test coverage.

  119. What is software testability?
  120. Software testability comprises various attributes that give an estimate of the effort and time required to execute a particular testing activity or process.

  121. What is positive and negative testing?
  122. Positive testing is the activity to test the intended and correct functioning of the system on being fed with valid and appropriate input data whereas negative testing evaluates the system’s behaviour and response in the presence of invalid input data.

  123. Brief out different forms of risks involved in software testing.
  124. Different types of risks involved in software testing are budget risk, technical risk, operational risk, scheduled risk and marketing risk.

  125. Why cookie testing?
  126. Cookies store small pieces of a user's data and preferences on the client side, and browsers send them back to the server when connecting to web pages; it is therefore essential to test that cookies are handled correctly.

  127. What constitutes a test case?
  128. A test case consists of several components. Some of them are test suite id, test case id, description, pre-requisites, test procedure, test data, expected results, test environment.

  129. What are the roles and responsibilities of a tester or a QA engineer?
  130. A QA engineer plays multiple roles and carries several responsibilities, such as defining quality parameters, describing the test strategy, executing tests, leading the team and reporting defects or test results.

  131. What is rapid software testing?
  132. Rapid software testing is a unique approach that strips out the need for documentation and encourages testers to use their thinking ability and vision to carry out and drive the testing process.

  133. Difference between error, defect and failure.
  134. In software engineering, an error is a mistake made by a programmer. A defect is the resulting flaw introduced into the software, which causes a deviation from the expected output. A failure is the system's inability to execute functionality due to the presence of a defect, i.e. a defect experienced by the user.

  135. Are security testing and penetration testing similar terms?
  136. No, but both testing types ensure the security mechanisms of the software. Penetration testing is a form of security testing in which the system is deliberately attacked, to verify not only its security features but also its defensive mechanisms.

  137. Distinguish between priority and severity.
  138. Priority defines the business need to fix or remove identified defect whereas severity is used to describe the impact of a defect on the functioning of a system.

  139. What is test harness?
  140. Test harness is a term used to collectively describe the various inputs and resources required to execute tests, especially automated tests, and to monitor and assess the behaviour and output of the system under varied conditions and factors. A test harness may thus include test data, software, hardware and similar items.

  141. What constitutes a test report?
  142. A test report may comprise of following elements:

    • Objective/purpose
    • Test summary
    • Logged defects
    • Exit criteria
    • Conclusion
    • Resources used
  143. What are the test closure activities?
  144. Test closure activities are carried out after the successful delivery or release of the software product. They include the collection of data, information and testware pertaining to the testing phase, so as to determine and assess the impact of testing on the product.

  145. List out various methodologies or techniques used under static testing.
    • Inspection
    • Walkthroughs
    • Technical reviews
    • Informal reviews
  146. Are test coverage and code coverage similar terms?
  147. No; code coverage measures the percentage of code exercised during software execution, whereas test coverage concerns whether the test cases cover the specified functionality and requirements.

  148. List out different approaches and methods to design tests.
  149. Broadly, there are several approaches, each with its own sub-techniques, to design test cases, as mentioned below:

    • Black Box design technique- BVA, Equivalence Partitioning, use case testing.
    • White Box design technique- statement coverage, path coverage, branch coverage
    • Experience based technique- error guessing, exploratory testing
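As a sketch of the black-box boundary value analysis (BVA) technique listed above, consider a field that accepts ages from 18 to 60. The rule itself is an illustrative assumption; BVA says to test at and just beyond each boundary rather than at arbitrary values:

```javascript
// Illustrative rule under test: ages 18-60 inclusive are valid.
function isValidAge(age) {
  return age >= 18 && age <= 60;
}

// BVA picks the boundaries and their immediate neighbours.
var boundaryCases = [
  { age: 17, expected: false }, // just below the lower boundary
  { age: 18, expected: true },  // lower boundary
  { age: 19, expected: true },  // just above the lower boundary
  { age: 59, expected: true },  // just below the upper boundary
  { age: 60, expected: true },  // upper boundary
  { age: 61, expected: false }  // just above the upper boundary
];

var allPass = boundaryCases.every(function (c) {
  return isValidAge(c.age) === c.expected;
});
// allPass === true
```

Equivalence partitioning would complement this with one representative value per partition (e.g. 40 for the valid range), since off-by-one defects cluster at the boundaries.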
  150. How is system testing different from acceptance testing?
  151. System testing is done with the perspective to test the system against the specified requirements and specification whereas acceptance testing ensures the readiness of the system to meet the needs and expectations of a user.

  152. Distinguish between use case and test case.
  153. Both use cases and test cases are used in software testing. A use case depicts and defines user scenarios, including the various paths the system may take under different conditions and circumstances to execute a particular task or functionality. A test case, on the other hand, is a document based on the software and business requirements and specifications, used to verify and validate the software's functioning.

  154. What is the need of content testing?
  155. In the present era, content plays a major role in creating and maintaining user interest. Quality content attracts the audience, convinces and motivates them, and is thus a productive input for marketing. Content testing is therefore essential to make your software's content suitable for its targeted users.

  156. List out different types of documentation/documents used in the software testing.
    • Test plan
    • Test scenario
    • Test cases
    • Traceability Matrix
    • Test Log and Report
  157. What are test deliverables?
  158. Test deliverables are the end products of the complete software testing process (before, during and after testing), used to communicate testing analysis, details and outcomes to the client.

  159. What is fuzz testing?
  160. Fuzz testing discovers coding flaws and security loopholes by subjecting the system to large amounts of random or malformed data, with the intent of making it break.
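A toy fuzzing loop, for illustration only (`parse_record` is an invented parser; real fuzzers such as AFL or libFuzzer are far more sophisticated about input generation and crash triage):

```python
import random
import string

def parse_record(text):
    """Invented parser under test: expects input of the form 'name:age'."""
    name, age = text.split(":")
    return name, int(age)

random.seed(0)
crashes = []
for _ in range(1000):
    # Generate a random string of printable characters of random length
    fuzz = "".join(random.choice(string.printable)
                   for _ in range(random.randint(0, 20)))
    try:
        parse_record(fuzz)
    except Exception as exc:
        # Record the input and exception type for later triage
        crashes.append((fuzz, type(exc).__name__))

print(f"{len(crashes)} of 1000 random inputs raised an exception")
```

Even this naive loop quickly surfaces unhandled input shapes (missing separators, non-numeric ages), which is exactly the class of defect fuzzing targets.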

  161. How is testing different from debugging?
  162. Testing is done by the testing team to identify and locate defects, whereas debugging is done by the developers to fix or correct those defects.

  163. What is the importance of database testing?
  164. A database is an integral component of a software application: it serves as the application's backend and stores many types of data and information from multiple sources. It is therefore crucial to test the database to ensure the integrity, validity, accuracy and security of the stored data.

  165. What are the different types of test coverage techniques?
    • Statement Coverage
    • Branch Coverage
    • Decision Coverage
    • Path Coverage
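A small illustration of why these criteria differ in strength: the single test below achieves 100% statement coverage of the (invented) function yet exercises only one of its two branches.

```python
def absolute(x):
    """Invented example function: absolute value via an if-branch."""
    result = x
    if x < 0:
        result = -x
    return result

# One negative input executes every statement (100% statement coverage),
# but only the True branch of the `if` is taken (50% branch coverage):
assert absolute(-5) == 5

# Branch (and decision) coverage additionally requires the False branch:
assert absolute(3) == 3
```

This is why branch and decision coverage are considered stronger criteria than statement coverage, and path coverage stronger still.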
  166. Why and how to prioritize test cases?
  167. An abundance of test cases relative to the available testing deadline creates the need to prioritize. Test prioritization involves reducing the number of test cases to execute by selecting and ordering them according to specific criteria, such as risk, business impact or frequency of use.
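One common, simple way to operationalize this (the cases and scores below are hypothetical) is to give each test case a risk score, e.g. business impact multiplied by likelihood of failure, and execute in descending order of risk:

```python
# Hypothetical test cases with impact and failure-likelihood scores (1-5 scale)
test_cases = [
    {"id": "TC-01", "impact": 5, "likelihood": 3},  # e.g. checkout flow
    {"id": "TC-02", "impact": 2, "likelihood": 5},  # e.g. flaky search filter
    {"id": "TC-03", "impact": 1, "likelihood": 1},  # e.g. static footer links
]

# Risk score: business impact x likelihood of failure
for tc in test_cases:
    tc["risk"] = tc["impact"] * tc["likelihood"]

# Execute highest-risk cases first; drop the tail if the deadline is tight
prioritized = sorted(test_cases, key=lambda tc: tc["risk"], reverse=True)
print([tc["id"] for tc in prioritized])  # → ['TC-01', 'TC-02', 'TC-03']
```

Under a tight deadline, the team runs cases from the top of this ordering until time runs out, so the riskiest functionality is always covered first.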

  168. How to write a test case?
  169. A test case should be effective enough to cover each feature and quality aspect of the software, and to provide complete test coverage with respect to the specified requirements and specifications. A well-written test case typically includes a unique ID, a description, preconditions, test steps, test data and the expected result.
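A minimal sketch of those elements expressed as an automated test (the `login` function, requirement ID, case IDs and test data are all invented for illustration):

```python
import unittest

def login(username, password):
    """Invented stand-in for the system under test."""
    return bool(username) and bool(password)

class TestLogin(unittest.TestCase):
    def test_tc101_empty_password_rejected(self):
        """TC-101 (hypothetical): login rejects an empty password.

        Traces to:    REQ-AUTH-3 (hypothetical requirement)
        Precondition: user 'alice' exists
        Test data:    username='alice', password=''
        Expected:     login fails
        """
        self.assertFalse(login("alice", ""))

    def test_tc102_valid_credentials_accepted(self):
        """TC-102 (hypothetical): login succeeds with valid credentials."""
        self.assertTrue(login("alice", "s3cret"))
```

Run with `python -m unittest`; the docstring carries the ID, preconditions, data and expected result, so the script doubles as the test-case document.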

  170. How to measure software quality?
  171. There are specified parameters, namely software quality metrics, which are used to assess software quality. These fall into three categories: product metrics, process metrics and project metrics.

  172. What are the different types of software quality models?
    • McCall's Model
    • Boehm Model
    • FURPS Model
    • IEEE Model
    • SATC's Model
    • Ghezzi Model
    • Capability Maturity Model
    • Dromey's Quality Model
    • ISO 9126-1 Quality Model
  173. What types of testing may be considered and used for testing web applications?
    • Functionality testing
    • Compatibility testing
    • Usability testing
    • Database testing
    • Performance testing
    • Accessibility testing
  174. What is pair testing?
  175. Pair testing is a type of ad-hoc testing in which a pair (two testers, a tester and a developer, or a tester and a user) is formed to test the same software product on the same machine.

Quality Assurance vs. Quality Control

Quality is of paramount importance, especially when the consumer has a plethora of options at his or her disposal. When it comes to software, the consumer of today has no time to spare for slow-performing, defective or bug-riddled applications. Clearly, quality issues are a huge business risk, because of which there has been an increased emphasis on Quality Assurance and Quality Control. However, in our conversations with clients we have often noticed that when discussing product quality and software testing, the terms Quality Assurance (QA) and Quality Control (QC) are used interchangeably. While QA and QC can be considered two sides of the same coin when it comes to managing quality, they are not one and the same thing. In this blog, we take a look at the main differences between QA and QC and the role each plays in the software development and testing process.

Quality Assurance and Quality Control – What do they mean?
To begin with, let’s understand the roles of QA and QC. Quality Assurance is a process that deliberates on ‘preventing’ defects while Quality Control deliberates on ‘identifying’ these defects. Given the adoption of new development methodologies such as agile, QA works towards improving and optimizing the development cycle by performing process audits, establishing process checklists for the project and establishing metrics to identify process gaps and ensure that the product works as per expectation.

QC, on the other hand, focuses on identifying any defects in the product after it has been developed. The QC department is responsible for testing the end product, verifying that there are no discrepancies between the product requirements and the final implementation, and confirming that the end product's performance is optimal.

To put it quite simply, Quality Assurance manages the quality of the product being developed, while Quality Control validates the quality of the output.

Strategy and orientation – Prevention v/s detection

QA is focused on a strategy of prevention and hence is more focused on planning, documenting and setting guidelines for product development to ensure the desired quality of the product. It is because of this that QA activities are undertaken at the inception of the project and must ensure that software specifications meet company and industry standards. Designing quality plans, conducting inspections, identifying defect-tracking tools and training the responsible team in the defined processes and methods all fall within the purview of Quality Assurance. This process is more proactive than reactive, since the aim of the QA team is to prevent defects from entering the development cycle in the first place and to mitigate all the risks identified in the specifications phase. Thus, everyone involved in product development shares responsibility for QA.

QC activities are more focused on defect detection and verifying the product output against the desired quality levels. This method is more reactive in nature as it identifies defects post the final production of the product. QC checks are conducted at different points in the development cycle to ensure that the final product meets and performs according to the agreed specifications.

Unlike QA, which may or may not involve executing the program or code, QC activities will always involve executing the final program or code to identify defects and implement the fixes in order to achieve the desired quality of the product.

Scope – Process v/s Product

The scope of QA lies more in the process of development than in the product itself. The aim of QA is to ensure that the development team is doing the right thing at the right time and in the right way. QA activities are verification-oriented; they relate to all products that will be created using that process.
The focus of QC, on the other hand, is only on the product and not necessarily on the development process. QC activities are primarily the responsibility of the testing team and are conducted once the QA activities are completed. QC is a line function that is product- or project-specific and involves activities such as testing and conducting reviews to identify defects in the final product.

Conclusion
Having listed out the key differences between QA and QC, it is also important to note that the two are equally important parts of a quality management system and ultimately share the same goal: a high-quality, optimally performing product. Using Quality Assurance and Quality Control in conjunction with one another, and developing a consistent feedback loop, can help identify the root causes of defects, allowing teams to develop strategies that eliminate these problems at the development stage itself and achieve high-quality products.

Qualities of Scrum Master

Scrum is a framework for managing the complex tasks involved in software development in an agile environment, enabling a team to collaborate on complex projects. Scrum is a team effort: the team is self-organizing, cross-functional and highly productive, with the intention of adding value to product development. Scrum enables teams to experiment, discover and share what they learn.


Who is Scrum Master?

A scrum master is the person responsible for monitoring the activities of the agile team to ensure that proceedings stay aligned with the project's goals.

A scrum project team is usually divided into three roles – product owner, scrum master and the team.

  1. Product Owner: manages return on investment by guiding the team on what to build and the sequence to be followed in the process.
  2. Scrum Master: facilitates communication between the customer and the team, provides solutions to problems and ensures maximum value to the end user.
  3. Team: the prime duty of the team is to build the product by applying agile practices.

A scrum master makes sure that all resources are utilized in the best possible way. Agile software development is incremental, iterative and collaborative in nature. Work is scheduled in fixed-length iterations, ideally 1 to 4 weeks, during which the team meets often to discuss the nitty-gritty of the project.

Qualities a Scrum Master must possess:

  • Possess great knowledge and be a consistent learner: An efficient scrum master needs deep knowledge of the Scrum framework and methodology. Ideally, a scrum master is aware of the niceties of the project along with the skills and strengths of his team members, is able to foresee challenges that may trouble the path of product development, and takes measures to eradicate them.
  • Responsible: A person entrusted with handling a team must take responsibility for supervising the process. The scrum master is responsible as well as accountable for his team's work.
  • Collaborative: A team's best output comes from a collaborative approach to a given project. A scrum master must try to create an atmosphere of open communication so that team members can freely discuss issues about the project, offer their personal opinions and reach consensus to solve a given problem.
  • Committed: The scrum master's commitment towards his team is of prime importance. He must dutifully carry out his task as team lead and guide the team towards accomplishment.
  • Influential: The personality of a scrum master, or any team lead in general, should personify dedication and diligence. That way the team is inspired to deliver the best of its potential.
  • Have great knowledge of the team: A team can only be managed efficiently if the scrum master knows a great deal about his team members in terms of their thought processes, their approach to a given situation, individual traits and so on.
  • Resolves team conflicts: A scrum master must cordially resolve team conflicts in order to deliver the best results: identifying what leads to conflict, introducing activities or practices to eliminate or reduce it, and taking measures to prevent further occurrences.
  • Introduces change: Change is inevitable. To grow productively, there must be a flexible approach towards any new change, whether adopting a new tool, admitting a new team member, introducing a new methodology of work and so on. A few engineering practices worth introducing:
    1. Automated builds and continuous integration, to reduce the total time and effort required by manual builds.
    2. An incremental approach to development, to maintain simplicity.
    3. Pair programming, to increase software quality.
  • Encourages initiative: A great quality of a scrum master is encouraging the team to take initiative regarding their activities, process and environment.
  • Shares experiences: Learning from past experiences is a great way to understand how various activities work and to apply that learning to ongoing work for better results.
  • Be a leader: A scrum master teaches like a teacher and acts as a leader. The scrum team should be led by a person with leadership qualities who can guide the team in the right direction.
  • Possess problem-solving skills: A scrum master must be able to inculcate a problem-solving approach among team members.

Conclusion:

A scrum master is required to play many roles at once. He must therefore possess well-balanced leadership and management skills to guide his team towards a common goal. The team, in turn, must endeavor to cooperate and coordinate tasks among themselves in order to deliver an end product that is robust and value-driven. Thus agile and Scrum together inculcate a habit of open communication and active participation by all team members equally.

Importance of Shift Left and Shift Right Testing Approaches

Gone are the days when testing was considered a separate phase carried out after the development of the software application. With the introduction of new and innovative approaches to software development, such as agile and DevOps, testers are now able to apply their efforts towards achieving quality in a software application right from its seeding.

Further, it is pertinent to mention that despite continuous and strenuous testing efforts from the dawn of the development phase until production, a software application may still lack some qualities from the user's point of view, given the user's expectations for the software's functionality and its performance in real-world conditions and environments. Thus, testing both at the start of and during development, and after production, is equally important for delivering the best-quality software products to users.

The approach of testing the software application from the initiation of its development and throughout development, and the approach of testing done post-development, are generally referred to as the shift-left and shift-right testing approaches respectively.

Shift Left Testing Approach:

The shift-left testing approach, as the name suggests, shifts the testing process to the left of the development phase, i.e. testing starts at the very beginning of development. Basically, the testing process begins with development and is carried out throughout the development phase, with the vision of preventing defects rather than encountering unexpected defects at a later stage.

Shift-left testing introduces testers at an early stage of software development to ease the developers' task of building a software application of acceptable quality based on the specified requirements and specifications. From those requirements and specifications, acceptance criteria for both the software and the business requirements are defined and created. Continuous testing from the beginning of the development phase and throughout development helps in identifying, locating and removing or correcting flaws at each level of development, which subsequently helps in meeting the acceptance criteria defined for the software. This approach may comprise different types of testing for a particular job, including regression testing to verify existing functionality after patches are applied.

Introducing the testing process early, rather than after software development, gives developers the flexibility to implement changes throughout development based on the continuous feedback, reviews and reports provided by testers, so that patches or corrections to remove bugs or defects can be applied as early as possible. This approach prevents or minimizes the presence of defects at the end of development, i.e. at the point of delivery or release.

Thus, the shift-left testing approach may be seen as an easy and economical way of avoiding the effort, time and cost of correcting the software at a much more complex stage, when it is completely developed and ready to release, and when implementing changes is almost technically and economically infeasible.

Shift Right Testing Approach:

If a software product of the desired quality can be developed economically and conveniently using the shift-left testing approach, why shift right? Well, the developed software adheres to the quality attributes specified in the business and software requirements, but what about its functioning and performance under real conditions in the real-world environment? Although the system is developed in full compliance with the stated requirements and specifications, how do we handle unexpected events or circumstances such as slow performance, crashes, failures and the like?

The shift-left testing approach is an essential requirement, but it is not by itself enough to certify the quality of the software: user experience and reviews are as important as the business/software requirements in determining software quality.

The shift-right testing approach initiates testing from the right, i.e. post-production. The software application at the right end, i.e. the completely built and functioning application, is tested to ensure its performance and usability. Feedback and reviews are gathered from targeted users about their experience of using and managing the software in real-world conditions, which in turn helps to improve quality further.

Although shift-left testing provides the scope for testing software early to prevent defects, the shift-right testing approach has its own unique advantages over shift left:

  • Automating a complete and stable software application is easier than automating an unstable (partial or incomplete) application.
  • Compared to shift left, the shift-right approach provides wider test coverage, as testers have access to the complete system for a much longer time.
  • As targeted users are involved in the approach, the shift-right strategy yields feedback and reviews from end users, which help to improve the quality of the software to a much greater extent.

Conclusion:

In view of the above, it may be concluded that the shift-left and shift-right testing approaches are equally important and deliver unique, complementary ways of testing the software application, together touching every aspect of the application to ensure the best possible quality.

4 Crucial Aspects Of Cloud Selection and Implementation

Cloud computing is a new way of doing business. A lot of companies, both big and small, are considering moving to the cloud to make their business more organized and efficient. This allows them to stop worrying about IT infrastructure and concentrate on other aspects that will help in business expansion in the future. But there are some factors that are absolutely important when you are considering this migration. The move will directly or indirectly affect your security and privacy issues, risk management practices, compliance, auditing and many other such issues. Here are four of the most important aspects you need to take into consideration to ensure that the move takes place smoothly:

  1. Reliability and Security:
    Shifting to the cloud is going to be a big change for the systems, data management and overall functionality of your organization. While the model for how IT services are delivered and consumed may be in for a change, the end objectives will be the same, hence it is very important that the new solutions support all the elements that are vital for the end users. Even small glitches in the cloud implementation could lead to major problems in functioning. Your cloud may be a test bed for new services and applications that developers are working on, or it may be running your payroll. No matter what the purpose of the cloud is, users expect it to function perfectly every minute of the day. Hence, it is important that you choose a cloud service that is completely available, reliable and secure. Ensure that it has the capability to continue operating and keep data intact in the virtual datacenter even if some component fails. Additionally, if the cloud architecture deals with a shared resource pool, security and multi-tenancy need to be integrated into all aspects of the process. Services must be demonstrably secure and reliable to win users' trust that their data and applications are safe.
  2. Selecting the right provider:
    Know what your exact cloud computing needs are so that you can dictate the type of services you choose from your specific provider. Aspects such as available data storage, pricing structure and accessibility need to be clearly defined before taking the first steps. For basic storage, software-based cloud offerings such as Dropbox may work best. If you are looking for more than basic data storage, with an IT infrastructure and on-demand access to virtual servers, vendors such as IBM SmartCloud Enterprise, GoGrid and Amazon Web Services would be best. If you want access to specific business solutions, you should be turning to Salesforce and the other SaaS providers of the world. In terms of pricing, do thorough research before making a decision and make sure that you are only paying for what you use. Ideally, the pricing scheme should come with options to add services as needed. Pricing for cloud implementation services varies significantly, from as low as $1 per month to $100 a month.
  3. Business advantages:
    You are taking the big step of cloud implementation, but what you want to focus on is how the move will affect your business and help expand it. In this case, you need to listen to what your service provider is promising you. Make sure that your provider is not focused only on technology outcomes; it may deliver excellent technology that is not relevant to your business. Opt for a provider whose solutions can help you with high customer retention or streamlined product delivery. For this, choosing a service provider that focuses on your specific vertical market could help. To gain maximum benefits from cloud implementation, try to communicate your business objectives clearly to your provider. That way they can be more involved in the process of business expansion and give their inputs when decisions are being made.
  4. Regulatory compliance:
    The cloud move can involve the transmission of data across uncontrolled internet connections that are susceptible to interception and monitoring. Most cloud-based services are secure and use different forms of encryption, either via web-based communications (e.g. SSL or TLS over HTTPS) or through secure applications. However, the effectiveness of the encryption may depend on a number of factors, and the actual algorithms may fall short of the Federal Information Processing Standards (FIPS) encryption requirements. At the same time, cloud services that make use of proprietary transmission software may require validation in order to meet government standards. Hence it is very important that the cloud provider you choose has all the provisions to maintain regulatory compliance, so that it meets any applicable industry regulations and allows your business to grow smoothly and flourish.

Conclusion:
Cloud computing is a big move. Data centers are now delivering highly reliable and highly scalable services to clients, but it is up to the enterprises making the move to pick the right service provider and the right set of features their business demands. Look at the factors of value to your business and make your choice. And if you have already made the move, then help the rest of us out: let us know how you went about making your choice.

What CEOs of eCommerce Companies are Thinking This Holiday Season?

“The reason it seems that price is all your customers care about is that you haven’t given them anything else to care about.” – Seth Godin

It’s that time of the year again. As Halloween slips by, CEOs of eCommerce and consumer internet-focused companies may be forgiven for having some scary visions of their own. There is so much at stake for these companies in the period between Thanksgiving and through to Christmas and the New Year that some may wonder why this is called the Holiday Season. Don’t believe me? Look at the numbers. The National Retail Federation has reported that retailers could make as much as 30% of their entire annual sales in this short period. The number for online sales could rise this year to touch $117 billion, according to the NRF. Clearly, you cannot afford to have any problems slowing you (or your site) down at this time; your whole year could be a wash if something like that happens.

So what are these CEOs with scary visions thinking of at this time? Based on the conversations I have had with people like this over the years, I can narrow it down to 4 areas that seem to be the top priority.

  • Performance and Scalability:
    This is a serious issue considering the site is going to be hit more frequently and in much larger numbers at this time of year than at any time before. There is a very real impact if the site slows down, underperforms or crashes at this time. Kissmetrics has reported that each 1-second delay in page response causes as much as a 7% reduction in conversions. The analysis is that for an eCommerce site selling $100,000 per day, this 1-second delay could cost a whopping $2.5 million in lost sales annually, not a hit you want to take. They also report that customers have high expectations of speed: 47% of them want a web page to load in less than 2 seconds, and as many as 40% will actually bail on the page if it takes more than 3 seconds to load.

    The CEO's concern, thus, is that the site should allow for a greatly larger number of users, potentially in concentrated bursts within a reasonably small time window. Performance and load testing of the site and every component on it thus becomes critically important. Testing that the site doesn't pack up under the pressure of the sharp scaling it will encounter at this time also becomes key.
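The Kissmetrics arithmetic quoted above is easy to verify:

```python
daily_sales = 100_000   # example site: $100,000 in sales per day
conversion_loss = 0.07  # 7% fewer conversions per extra second of delay

# Lost revenue per day, scaled to a full year
annual_loss = daily_sales * conversion_loss * 365
print(f"${annual_loss:,.0f} per year")  # → $2,555,000 per year, i.e. roughly $2.5 million
```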

  • Security:
    The enemy is at the gate, at least that's what consumers think. The highly visible coverage of credit card fraud, data loss and identity theft has made consumers wary, and this is impacting their buying behavior. A 2015 study by Bizrate Insights found that as many as 60% of consumers surveyed believed that online stores were just not doing enough to protect their card and personal information. This lack of confidence was reflected in 34% of them expressing a hesitation to buy online.
    Leading online retail companies have security front and center on their list of priorities. The cloud that hosts their site, the technologies their site is built on, the payment infrastructure, the individual components of the site, and even all the bells and whistles the site employs all have to be designed to be secure and rigorously tested to validate that they are, indeed, so.
  • Usability:
    We live in an age of busy people with short attention spans. A famous book on design principles actually propagated the maxim "Don't make me think." This is the age of the impatient consumer, and the "ease-of-use" factor of the website is an overriding concern. The Baymard Institute's "Ecommerce Checkout Usability" survey from last year found that over 1 in 4 consumers abandoned their shopping cart without completing the order because they found the checkout process overly long or complicated. It's not just the loss of revenue from lost sales that motivates such sites to improve usability; there's money to be made too. Defaqto Research has reported that 55% of the consumers in their survey wanted a better experience so much that they would be willing to pay more to get it. Taken together, this represents a powerful motivator for the CEOs of these eCommerce companies to invest time, money and design effort to make their sites more intuitive, easy to navigate, and friendly. Testing of the UI obviously plays a big role in that process.
  • Mobile:
    It is almost a foregone conclusion that your consumers are on mobile. Statista has reported that over 75% of US internet users access the internet from their mobile devices. These users are spending money while online too: already about 28% of total online spending comes from tablets and smartphones, projected to touch $200 billion in 2018. Then there's the consequence of not being on the mobile bandwagon: MoPowered found that 30% of consumers abandoned their transaction if the experience was not optimized for mobile. The challenge for the CEOs of these eCommerce and consumer internet companies is how to stay in front of this mobile game. There are so many mobile devices out there, multiple operating systems (well, at least 2), device capabilities, form factors, and other such factors to worry about. Testing whether your site performs well across all of these options has to form a significant part of the testing strategy.

Conclusion:
These CEOs of consumer internet and eCommerce companies obviously believe Seth Godin; that's why they worry about the entire consumer experience on their site and the value they have to deliver. Given the importance of the holiday shopping season, it's issues like those listed here that could well be occupying their hearts and minds on that quest.

Testing Strategies For The eCommerce Shopping Season

eCommerce is now a high-octane space, with almost all retailers vying to make a winning online presence. As the holiday season kicks into gear with Black Friday and Cyber Monday and continues on to Christmas, online retailers have to ensure that, just like their brick-and-mortar stores, their online store too is ready to service the heavy footfall in the weeks ahead. Large retailers such as Walmart and Target have also worked on their online stores to avert incidents like those they faced in 2015, when their eCommerce stores buckled under the pressure of heavy traffic on Black Friday and Cyber Monday. eCommerce companies, both big and small, are working to improve the digital experience provided by their eStore by improving reliability and increasing capacity to handle and manage high traffic. An Adobe Digital Insights report shows that approximately “35% of Americans are ready to shop right at the dinner table to ensure a good deal”. A Synchrony Financial survey shows “that more than half of holiday shoppers say the best deals are online, and 37% report they plan to do more of that this year, given the pretty much, anytime, anywhere convenience.” It thus becomes essential that online retailers provide a seamless and hassle-free shopping experience for greater profits in the holiday season.

While having a great digital strategy forms an essential part of increasing sales in the holiday season, one way eTailers can ensure that their online store performs optimally is by testing their website. In this blog, we take a look at a few testing strategies for the eCommerce shopping season.

  • Catalog and segment infrastructure – ease of use:
    Given that the number of items and related discounts increases considerably during the shopping season, eTailers must ensure that all these items are displayed correctly on the products page. Beyond that, it becomes imperative to test that all products and their associated discounts are reflected correctly, that products have been cataloged correctly, and that product browsing is easy. Testers also have to ensure that all search options work and display correctly, that the number of products shown on a given page is correct, that no product is duplicated on the next page, and that pagination and filtering options work in harmony so that users can browse the website with ease.
  • Load and Performance testing:
    Research from load balancing and cyber security solutions company Radware showed that slow eCommerce websites contribute to 18% of shopping cart abandonments. Testers therefore need to make sure that the website loads fast, especially when traffic is high. Testers should look at historical data to estimate the spike in traffic that can be expected. Along with the traffic, testers need to test the web application components, such as the hardware, the database, and the network bandwidth, to assess whether these can handle the anticipated load, and adjust the application’s performance profile accordingly. Additionally, testers need to assess how many concurrent requests the system can handle at maximum load, check that the response time for all test paths is acceptable, and identify the causes of poor website performance, such as large data sets or browser incompatibility. Extensive load testing will also determine whether the website needs to deploy more load balancers to eliminate the problem of refused connections, which ultimately lead to disgruntled customers.
  • Mobile testing:
    Pymnts.com estimated that mobile sales on Cyber Monday grew by almost 53% from 2014 to 2015 and accounted for USD 514 million in revenues. According to a report by Dynatrace, over 50% of millennials (the largest and fastest-growing demographic in the US) who use smartphones do more holiday shopping from their mobile devices than in-store. Testers therefore must not ignore performance testing for mobile, and must ensure that the mobile application or mobile website does not crash under peak pressure. Along with overall mobile performance testing, testers also have to address problems such as mobile latency and conduct mobile network speed simulations for optimal performance.
  • Shopping cart and payments:
    Testers have to make sure that all products in the shopping cart display correctly when the user proceeds to checkout. Given that people are pressed for time, they need to ensure that the checkout process is smooth and that all discount codes are applied correctly. Regression testing with all active and inactive codes thus becomes important. Testers also have to make sure that the discount codes are not putting an undue load on the database, as this, too, can impact the performance of the website. Finally, they need to check that all the payment systems in use function correctly even during peak traffic.
  • Security:
    eTailers must also account for the security of their customers. Data from ACI Worldwide reveals that while eCommerce transactions around Thanksgiving grew by 21% in 2015, fraud attempts grew by just 8%! Testers should make sure that the security layer is not compromised, by ensuring secure handling of incoming and outgoing data, conducting more penetration tests to identify vulnerabilities, and taking a multi-layered approach to security.
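
The concurrency checks described under load and performance testing can be sketched in a few lines of Python. This is a minimal illustration only: `simulated_request` is a hypothetical stand-in for a real HTTP call, and in practice a dedicated load tool (such as JMeter or Gatling) against a staging endpoint would replace it. The 500 ms p95 budget is an assumed example target, not a universal threshold.

```python
import concurrent.futures
import random
import time

def simulated_request(_):
    """Hypothetical stand-in for an HTTP call; a real test would hit a staging endpoint."""
    start = time.perf_counter()
    time.sleep(random.uniform(0.001, 0.005))  # simulated server processing time
    return time.perf_counter() - start

# Fire 200 "requests" through a pool sized to the expected peak concurrency,
# then check the 95th-percentile latency against the budget.
with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    latencies = sorted(pool.map(simulated_request, range(200)))

p95 = latencies[int(len(latencies) * 0.95)]
print(f"p95 latency: {p95 * 1000:.1f} ms")
assert p95 < 0.5, "p95 latency exceeds the 500 ms budget"
```

The same skeleton extends naturally to the other checks in the list above: swap the simulated call for a cart, search, or checkout request and assert on the percentile that matters for that path.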

Conclusion:
With eCommerce sales estimated to cross $414 billion by 2018, making sure that your eCommerce website performs according to customer expectations, especially during the holiday shopping season, becomes imperative. By taking a methodical and planned approach to eCommerce testing, eTailers can make sure that they unwrap the holiday season with profits.

Test Design – The Crucial Step to Test Automation

Recently, our test automation experts were having a conversation with an organization that was restarting its test automation project. The discussion started with the company narrating how its earlier test automation project had failed miserably after 13 months, incurring huge costs, a lot of lost time, and worst of all, the team’s loss of faith in test automation. The company had started its initiative with the purchase of expensive test automation technology. It then wrote automation scripts for most of the manual test cases and started running them. The end result? A huge number of automated test scripts that required high maintenance, needed human intervention to run, and were not useful for the product at all.

The problem we see in most scenarios is that discussions around test automation start with what to automate and what not to automate. Ideally, what needs to be defined first is what to test and what not to test. That is test design. Test design is a crucial phase of testing: it involves analyzing the product specifications and coming up with test cases to validate the product functionality. This is 100% human effort and cannot be automated. It requires domain experts, software development experts, and testing experts working together to prepare a test plan with great attention to detail. Test design is what makes or breaks the success of test automation.

“More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.”

-Boris Beizer, Software Testing Techniques

An effective test design involves defining the test cases that will exercise the software. The test cases should be created in such a manner that they can be easily read, written, and maintained. The objective of the test cases should be not only to find bugs in the product but also to improve the overall product experience for the user. Test case maintenance is a crucial aspect that is often overlooked. Test design needs to consider this aspect and make sure that test cases are designed in such a way that they are easily maintainable, even by people who did not create them in the first place. Especially in test automation projects, the test design should aim to reduce the maintenance costs of test development. The test design also needs to align with business goals such as faster time to market, greater test coverage, and increased team confidence.
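
One common way to keep test cases readable and maintainable is to separate the test data from the test logic. The sketch below illustrates the idea in Python; `checkout_total` and its cases are hypothetical examples invented for illustration, not drawn from any particular product.

```python
# Hypothetical function under test: order total with a flat shipping fee,
# waived for orders at or above a threshold.
def checkout_total(subtotal, shipping_fee=5.0, free_shipping_over=50.0):
    return subtotal if subtotal >= free_shipping_over else subtotal + shipping_fee

# Data-driven test cases: each row is (description, input, expected outcome).
# New cases are added as data rather than code, so the suite stays readable
# and maintainable even for people who did not write it.
CASES = [
    ("small order pays shipping", 10.0, 15.0),
    ("just below threshold",      49.99, 54.99),
    ("at threshold ships free",   50.0, 50.0),
    ("large order ships free",    120.0, 120.0),
]

for description, subtotal, expected in CASES:
    actual = checkout_total(subtotal)
    assert abs(actual - expected) < 1e-9, f"{description}: got {actual}"
print(f"{len(CASES)} test cases passed")
```

The boundary rows (just below, at, and above the free-shipping threshold) are exactly the cases a well-designed suite should document in plain language.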

Like a software development project, a test automation project needs to go through design, architecture, development, and maintenance. Test automation needs a product development mindset, and the test automation suite needs to follow the product roadmap.

Typically, a good test design involves –

  • Detailed thinking about what to test and how to test it, design of the tests, and an execution plan.
  • Test authoring using methods such as model-based testing, boundary value analysis, action-based testing, and error guessing, and defining the keywords and action words.
  • Test case design for bug identification, impact, and maintainability.
  • Having a standard set of guidelines for writing the test cases.
  • Grouping of test cases into small modules and suites.
  • Test case writing based on test objectives, steps, test data, and validation criteria.
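
Of the authoring methods listed above, boundary value analysis is the easiest to illustrate concretely. The sketch below generates the classic six boundary cases for a valid range; the quantity field accepting 1 to 10 items is a hypothetical example.

```python
def boundary_values(low, high):
    """Boundary value analysis: values at, just inside, and just outside
    each edge of a valid [low, high] range."""
    return [low - 1, low, low + 1, high - 1, high, high + 1]

def in_range(quantity):
    """Hypothetical validation rule: a quantity field accepting 1..10 items."""
    return 1 <= quantity <= 10

# Generate the boundary cases and record which the validator accepts.
cases = boundary_values(1, 10)          # [0, 1, 2, 9, 10, 11]
results = {q: in_range(q) for q in cases}
print(results)

# The values just outside the range must be rejected; those at and just
# inside the edges must be accepted.
assert results[0] is False and results[11] is False
assert results[1] is True and results[10] is True
```

The same generator feeds any range-valued input (prices, dates, string lengths), which is why boundary value analysis pairs well with the data-driven case tables mentioned earlier.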

Conclusion

In a nutshell, good test design is a crucial step in test automation: it achieves meaningful test coverage, finds defects in the software, and builds confidence in the testing team. It yields greater accuracy and effectiveness with lower maintenance. Contrary to common belief, good test design is not very hard to do. It just requires dedicated thinking, patience, domain knowledge, understanding of good testing practices, and knowledge of design guidelines. Go for it and you will see the returns very soon!