Is Microsoft A Secret Enterprise Mobility Challenger?

While announcing the 2015 numbers, Microsoft COO Kevin Turner singled out one product as the company’s “hottest” and predicted that it would be a “$1 Billion product in the future”. This product was the Enterprise Mobility Suite (EMS), only a year in the market at the time of Turner’s enthusiastic endorsement. At the time, Corporate VP Brad Anderson said, “As the value and necessity of EMM grows, we see customers evolving their approach, innovating, and bringing new needs and demands every day. On a really regular basis I see the traditional point solution MDM vendors, or the identity and access management vendors, struggling to keep up with these demands – customers are seeking more comprehensive and holistic solutions that are architected for (and can scale to) the cloud.”

In the time since that announcement, while EMS doesn’t yet seem to have hit that landmark number, Microsoft’s focus on the space is clearly visible. In line with Anderson’s observation, there also seems to be a clear recognition of the kind of customers to target. Organizations that appreciate comprehensive solutions, with robust architecture, the ability to scale, and a significant cloud story, seem to be in their sights – in other words, Enterprise customers. Does that mean that Microsoft could be a secret challenger in the Enterprise Mobility market?

First, though, perhaps Microsoft’s Enterprise focus shouldn’t come as a surprise – it has always been there. The revenues for the first quarter of the current financial year, as a case in point, showed the most significant growth in Office commercial revenue (up 5%), server revenue (up 11%) and other Enterprise-friendly businesses like the “intelligent cloud” (up 8%) and Azure (up a whopping 116%).

Microsoft thus has a ready opening with its Office suite of products – still a staple in most enterprises. It seems a natural extension for those enterprises to turn to Office 365 when they want to extend the reach of those productivity apps into the mobile workforce. Microsoft reported that Office has already been downloaded 340 million times on iPhones, iPads and Android devices. This may only be the tip of the app iceberg, though – there are a further 669,000 apps for phones, tablets, and desktops on the Windows Store. This signifies a clear attempt by Microsoft to build a comprehensive ecosystem for the Enterprise.

Another beachhead seems to have been established by Microsoft’s organic growth in the Enterprise segment with its new-found Cloud focus. Microsoft reported that 80% of the Fortune 500 were on the Microsoft Cloud. And it’s not only large Enterprises turning to the Microsoft Cloud – 40% of Azure’s revenue comes from ISVs and startups. This is significant because there is a natural coming together of the Cloud and Mobility all across the Enterprise, a potent combination that Microsoft is setting itself up to exploit.

One clear sign of Microsoft’s Enterprise interest is visible in how EMS has evolved. The product, earlier called the Enterprise Mobility Suite, is now named Enterprise Mobility + Security (still EMS) – a significant nod to the security concerns of Enterprises everywhere. A key part of the suite is Microsoft Intune, which has capabilities for managing mobile devices, apps, and even PCs, all from the Cloud. This allows employees to use corporate apps and data, on demand, from anywhere – while keeping that data secure. The suite also features Azure Rights Management, which makes securely sharing protected files inside and outside the organization very easy. Other significant inclusions are Azure Active Directory Premium for managing identity and access, Azure Information Protection for protecting information, and Microsoft Advanced Threat Analytics plus Microsoft Cloud App Security for identity-driven security. Together, these form a pretty formidable security shield designed for Enterprise acceptability.

This does not mean that it’s all smooth sailing, though. Microsoft’s mobile story has had its fair share of ups and downs, as Nokia and Windows Mobile will attest. That said, there is clearly some fresh thinking sweeping through the corridors of power at Redmond, WA, and that could well mean a new Enterprise Mobility generation built on a solid Microsoft foundation – stranger things have happened!

The Fundamentals of Continuous Integration Testing

Over the past decade, most organizations have adopted agile development methodologies to develop products faster and reduce their time to market. As agile methodologies have evolved, they have given rise to newer development practices that put testing at the heart of development. Continuous Integration is one such practice: it takes an architecture-based approach to give development teams more flexibility and to ensure high-quality software is produced even more frequently.

Testing and development expert Martin Fowler defines Continuous Integration as a practice where a development team integrates its work on at least a daily basis, making way for multiple daily integrations, each of which is verified by an automated build and test. This allows for faster error detection and significantly reduced integration errors, making the development process more efficient. Adopting Continuous Integration enables teams to develop software that is more cohesive and robust, even while releasing at extremely short intervals.

One of the main contributors to Continuous Integration success is continuous testing. Delivering high-quality software in extremely short timeframes is not possible unless each and every piece of code is tested thoroughly. Testing, thus, becomes one of the most important elements of Continuous Integration. The development team also has to build a comprehensive automated test suite, at both the unit and the functional level, to guarantee code quality.

Since the goal of Continuous Integration is to never break the build, it becomes imperative to take all measures to ensure that untested or broken code does not get committed and that strict version control policies are implemented. The entire environment has to be based on automation to ensure that any and every code addition to the application results in a releasable version. Additionally, it has to ensure that any version of the application can be built on demand.

Continuous Integration is a subset of Continuous Delivery, where the built application is delivered to the testing and production teams to ensure application reliability, generate faster user feedback and ensure continued high performance. Continuous Integration automates the manual stages of application development and hence makes the development and deployment process faster. Frequent and incremental builds are the hallmark of Continuous Integration, and this effectively eliminates the need for a separate, late integration-testing phase.

In Continuous Integration, the developers submit new code and/or code changes to a central code repository, where the release manager merges the code with the main branch and then pushes out the new release. It is the Continuous Integration system that monitors the version control system for changes and then launches the build after getting the source code from the repository. The server then runs unit tests and functional tests to check the functionality, validity, and quality of the product. The same package is then deployed for acceptance testing and, finally, to the production server.
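
To make the workflow above concrete, here is a deliberately minimal sketch of the poll–build–test loop a Continuous Integration server performs. The repository path, build command and test suite locations are placeholders; in practice a dedicated CI server (Jenkins, GitLab CI, TeamCity and the like) does this far more robustly.

```python
import subprocess
import time

REPO_DIR = "/path/to/checkout"     # placeholder checkout location
POLL_INTERVAL = 60                 # seconds between polls of the version control system

def run(cmd):
    """Run a shell command in the checkout and return True on success."""
    return subprocess.run(cmd, cwd=REPO_DIR, shell=True).returncode == 0

def latest_commit():
    """Ask git for the current HEAD revision."""
    out = subprocess.run("git rev-parse HEAD", cwd=REPO_DIR, shell=True,
                         capture_output=True, text=True)
    return out.stdout.strip()

last_built = None
while True:
    run("git pull")                # fetch new commits from the central repository
    head = latest_commit()
    if head != last_built:
        # A new commit triggers the build and the automated test stages.
        if run("make build") and run("pytest tests/unit") and run("pytest tests/functional"):
            run("make package")    # only a green build is packaged for acceptance testing
            print(f"Build {head[:8]} passed; package ready for acceptance testing")
        else:
            print(f"Build {head[:8]} failed; notifying the team")
        last_built = head
    time.sleep(POLL_INTERVAL)
```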

Automation plays a critical role in Continuous Integration, ensuring that all packages can be deployed on all the servers at the click of a button. To enable this, it becomes essential to maintain a single source repository, automate the build, keep the build fast and make it self-testing. Along with this, every commit has to be built on the integration machine, keeping the system transparent and easily accessible to the interested parties so they can test the product in a production-like environment. The Continuous Integration server has to reliably inform the respective teams of each successful build and alert the team in case of any failure. The team then has to ensure that the issue is fixed as early as possible.

It is clear that with Continuous Integration, testing, too, is continuous. Some key testing areas to focus on are:

  • Ensure that continuous regression tests run in the background and provide regular feedback, to minimize regression defects.
  • Carry out continuous performance tests to study the application’s response time and to identify changes in speed, reaction time, and consistency.
  • Conduct frequent load tests to verify that performance goals are met and the application is ready for use.
  • Ensure that load tests begin with smaller, incremental scenarios and culminate in one large end-to-end run.
  • Conduct continuous scalability testing to gauge throughput, network usage, and CPU and memory usage, to reduce business risks.
  • Run end-to-end functional tests to verify the functionality of the product in different scenarios.

Continuous Integration sets the stage for Continuous Delivery by reducing manual intervention. It also brings greater transparency to the entire development and QA process, which makes it faster to take fact-based decisions to improve efficiency. Since Continuous Integration testing aims to find bugs and code breaks faster with the help of more automation, the development process becomes more cost-efficient. Steve Brodie, CEO of Electric Cloud, aptly summarizes the importance of automation in a Continuous Integration and Continuous Delivery world: “You’re only as fast as the slowest element, the slowest phase in your software delivery pipeline”. Clearly, automation lies at the heart of the Continuous Integration process and keeps each increment of change smaller, more manageable and more efficient to implement.

Practical Ideas to Stay Creative in Software Testing

As software testers, or for that matter in any profession, we are frequently chastised for being monotonous in our thinking, lacking in ingenuity or, plainly speaking, being unimaginative altogether. To top it all, we are frequently reminded to think “out of the box” (as if that phrase itself sounds very imaginative ;P). However, creativity is not a trait sovereign to a select few and can be inculcated through minor attitudinal shifts. The following are not a set of guidelines to become the next Einstein, but they will certainly aid you in your endeavor to become the next torchbearer in the field of testing.

1. Old is Gold:

You are 5 years into your job as a hardened testing professional. You’d think your long-forgotten Java skills are about as useful to you today as a typewriter would be. Right?… WRONG. For you to grow into an A-league professional, a developer’s insight into debugging problems is worth its weight in gold. Sharpening those coding skills from college will help you adapt better and customise the use of a testing tool.

2. Brainstorm:

At the time of creating a test case, you are normally provided with an Excel sheet enumerating the various guidelines from the developer team and the requirements from the client. For once, chuck ’em out of the window (not literally, of course). Get your pencil and notebook out (yeah, just like the good ol’ school days). For five minutes, think of all the test case scenarios you can dish out. Jot them down and review. At the end of it all, what do we have? A plethora of test cases, without even glancing at the mundane list of client requirements.

3. The Programmer community:

It always bodes well to have an ear for the latest IN thing the developer team is discussing – the tools and languages they are deploying to make major efficiency gains and add value to their products. Be in the loop. You don’t have to stop with the development teams; maintain a good rapport with the designers, the DevOps engineers and the like. While you are at it, be on the lookout for any glaring gaps they are overlooking and the critical bugs they keep encountering regularly. Try to work out ways to rectify them.

4. Coding, a hobby, really?

Getting involved in app development contests as a pastime will throw you into an ocean of bugs that are in tune with the latest methodologies used to build the glitch-free software applications of today.

5. Peers and that theory of getting more by giving:

It’s essential to keep abreast of what your colleagues are up to. One way of effectively keeping tabs is to engage through social media and be part of conferences and forums. They say knowledge grows by sharing it. The point is to share your own solutions while, at the same time, picking the brains of others.

6. News, news and some more news:

Thanks to the tsunami of knowledge which has engulfed us today, finding information is not so hard. Cherry-picking what is relevant to our needs is the hard bit. Handy tools like RSS and Evernote can help us capitalise on our hunt for new ideas by keeping us up to date with current events in the field of software testing. Rummage through old, dusty manuals and forage through the history of testing problems which still hold relevance.

Creating your own ideas, reviewing them, and building them up with others’ will unleash THE CREATIVE tester inside that you always dreamt of being.

10 Essential Testing Stages for your Mobile Apps

2016 was truly the ‘year of the mobile’. Mobile apps are maturing, consumer apps are becoming smarter, and there is an increasing emphasis on the consumerization of enterprise apps. Slow, poor-performing and bug-riddled apps have no place on today’s smartphones. Clearly, mobile apps need to be tested thoroughly to ensure the features and functionalities of the application perform optimally. Given that almost all industries are leaning towards mobile apps (Gartner predicts over 268 billion mobile downloads in 2017, generating revenue of USD 77 billion) to make interactions with their consumers faster and more seamless, the demand for mobile testing is on the upswing.

Mobile app testing is more complex than testing web applications, primarily because apps need to be tested on different platforms. Unlike web application testing, where there is a single dominant platform, mobile apps need to be developed and then tested on iOS, Android, and sometimes more platforms. Additionally, unlike desktops, mobile apps must deal with several device form factors. Mobile app testing becomes more complex still because factors such as application type, target audience, distribution channels etc. need to be taken into consideration when designing the test plans and test cases.

In this blog post, we look at ten essential testing stages for mobile applications:

  1. Installation testing:
    Once the application is ready, testers need to conduct installation testing to ensure that the user can smoothly install or uninstall the application. Additionally, they have to check that the application updates properly and does not crash when upgrading from an older version to a newer one. Testers also have to ensure that all application data is completely removed when the application is uninstalled.
  2. Target Device and OS testing:
    Mobile testers have to ensure that the mobile app functions as designed across a plethora of mobile devices and operating systems. Using real devices and device simulators, testers can check the basic application functionality and understand the application’s behavior across the selected devices and form factors. Applications also have to be tested across all major OS versions in the present installed base to ensure that the app performs as designed irrespective of the operating system.
  3. UI and UX testing:
    UI and UX testing is essential to test the look and feel of the application. This testing has to be done from the users’ perspective to ensure that the application is intuitive, easy to use, and has industry-accepted interfaces. Testing is needed to ensure that language-translation facilities are available, menus and icons display correctly, and that application items are synchronized with user actions.
  4. Functionality Testing:
    Functionality testing verifies the functional behavior of the application to ensure that it is working according to the specified requirements. This involves testing user interactions and transactions to validate that all mandatory fields work as designed. Testing is also needed to verify that the device is able to multitask and process requirements across platforms and devices while the app is being accessed. Since functional testing is quite comprehensive, testing teams may have to leverage test automation to increase coverage and efficiency for best results (a minimal automated functional check is sketched after this list).
  5. Interrupt testing:
    Users can be interrupted by calls, SMS, MMS, messages, notifications, network outages, device power-cycle notifications etc. when using an application. Mobile app testers have to perform interruption testing to ensure that the mobile app can capably handle these interruptions by going into a suspended state and then resuming once the interruptions are over. Testers can use monkey tools to generate many possible interrupts, look out for app crashes, freezes, UI glitches, battery consumption etc., and ensure that the app resumes the current view after the interruptions.
  6. Data network testing:
    To provide useful functionality, mobile apps rely on network connectivity. Network testing covers simulating cellular networks and bandwidth constraints to identify connectivity problems and bottlenecks, and then studying their impact on application performance. Testers have to ensure that the mobile app performs optimally at varying network speeds and is able to handle network transitions with ease.
  7. Hardware keys testing:
    Mobile devices are packed with hardware and sensors that apps can use. Gyroscopes, proximity sensors, location sensors, touchless sensors, ambient light sensors etc., and hardware features such as the camera, storage, microphone and display, can all be used within the application itself. Mobile testers thus have to test the mobile app in different sensor-specific and hardware-specific environments to ensure consistent application performance.
  8. Performance Testing:
    The objective of performance testing is to ensure that the mobile application performs optimally under stated performance requirements. Performance testing involves testing load conditions, network coverage support, identification of application and infrastructure bottlenecks, response time, memory leaks, and application behavior when connectivity is only intermittently available.
  9. Load testing:
    Testers also have to test application performance in light of sudden traffic surges, and ensure that high loads and stress on the application do not cause it to crash. The aim of load testing is to assess the maximum number of simultaneous users the application can support without impacting performance, and to assess the application’s dependability when there is a surge in the number of users.
  10. Security testing:
    Security testing involves gathering all the information regarding the application and identifying threats and vulnerabilities using static and dynamic analysis of the mobile source code. Testers have to check and ensure that the application’s data and network security functionality is in line with the given guidelines and that the application only uses the permissions it needs.
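
As referenced in the functionality-testing point above, here is a minimal sketch of what an automated functional check on a device or emulator might look like, assuming Appium with its Python client. The capability values, app path and element accessibility IDs are hypothetical and depend on your own device lab and build; the call style follows the classic desired-capabilities form used by older client versions, so adjust it to the client version you run.

```python
# Hypothetical functional check with Appium's Python client (pip install Appium-Python-Client).
# Device name, app path and accessibility IDs below are placeholders.
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    "platformName": "Android",
    "deviceName": "Pixel_5_Emulator",      # placeholder device/emulator name
    "app": "/builds/myapp-latest.apk",     # placeholder path to the build under test
    "automationName": "UiAutomator2",
}

driver = webdriver.Remote("http://localhost:4723/wd/hub", caps)
try:
    # Functional scenario: logging in with valid credentials lands on the home screen.
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "username").send_keys("demo_user")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "password").send_keys("demo_pass")
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, "login_button").click()
    assert driver.find_element(AppiumBy.ACCESSIBILITY_ID, "home_screen").is_displayed()
finally:
    driver.quit()
```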

Mobile application testing begins with developing a testing strategy and designing the test plans. The added complexity of devices, OSs and usage-specific conditions places a special burden on the software testing function to ensure the most usable and best-performing app. How have you gone about testing your mobile apps to achieve this end?

A Simple Guide to Interoperability Testing

Interoperability testing is a type of non-functional testing that verifies the interoperability quality of software. You may have heard the term ‘interoperability’, but are you actually aware of what it means? Many of us interpret the word incorrectly. So, before discussing interoperability testing, let us first understand the correct and exact meaning of the word.

What is interoperability?

In general, interoperability is the ability of a system to work and interact with other systems and applications. It may be defined as the property or ability of a system to provide features to, and accept features from, another system or application. The interoperability quality gives a system the independence to interact with, and share and exchange data and information with, other systems without interrupting its intended functionality.

Consider the example of a banking application. A banking application needs to interact, exchange and share data and information with the application of another bank – or the same bank but a different branch – or with any third-party/merchant vendor, for the purpose of financial and business transactions.

Banking Application Interoperability
A user makes a financial transaction at XYZ bank to transfer an amount of money from his account to an account at ABC bank. The banking applications of both banks, built with interoperability in mind, interact with each other independently, without interrupting their intended functioning, and share and exchange data and information – account numbers, credentials, beneficiary name, bank branch, IFSC code, amount of money and other relevant details – to carry out the financial transaction, i.e. the transfer of money.
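
A test for this scenario essentially asserts that what one system sends is exactly what the other system records. The sketch below is self-contained and hypothetical – two tiny in-memory stand-ins take the place of the real banking applications, which in an actual test would be reached over their published interfaces (APIs, files, message queues):

```python
class BankSystem:
    """Minimal stand-in for a bank application that can send and receive transfers."""
    def __init__(self, name):
        self.name = name
        self.ledger = []

    def send_transfer(self, other, payload):
        # The sending system hands the agreed message format to the receiving system.
        other.receive_transfer(payload)
        self.ledger.append({"direction": "out", **payload})

    def receive_transfer(self, payload):
        self.ledger.append({"direction": "in", **payload})


def test_transfer_is_exchanged_without_data_loss():
    xyz, abc = BankSystem("XYZ"), BankSystem("ABC")
    payload = {
        "beneficiary": "A. Kumar",          # illustrative values only
        "account": "1234567890",
        "ifsc": "ABCD0001234",
        "amount": 2500.00,
    }
    xyz.send_transfer(abc, payload)

    sent, received = xyz.ledger[-1], abc.ledger[-1]
    # Interoperability check: every field sent by XYZ arrives unchanged at ABC.
    for field in ("beneficiary", "account", "ifsc", "amount"):
        assert sent[field] == received[field]
```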

Now, what is interoperability testing?

Interoperability testing is a form of non-functional testing used to achieve and maintain the interoperability traits of a system. It is done to ensure end-to-end functionality between two interacting systems based on their specified standards and protocols, i.e. irrespective of the standards and protocols each system follows to execute its intended function, the two systems interact independently to share and exchange data and information.

Further, interoperability testing is used to verify and validate that there is no data loss, no incorrect or unreliable operation, and no unreliable performance between the two systems.

How to perform interoperability testing?

Interoperability testing may be carried out through the following steps.

  • Step 1: In the first step, a proper plan and strategy need to be defined and described. This involves understanding each application present in the network – its behaviour, responses, functionalities, the inputs it takes and the outputs it generates. The network of applications is thus considered as one single unit.
  • Step 2: Implement approaches and techniques like the requirement traceability matrix (RTM) to map each requirement to a test case, thereby eliminating the chance of any requirement being missed or left out. Test plans and test cases are derived and developed. Further, essential non-functional attributes of the network of applications, such as security and performance, also need to be verified and validated before executing the interoperability tests.
  • Step 3: Execute the interoperability test cases, with the subsequent activities of logging defects, correcting them, retesting, and regression testing after patches are applied.
  • Step 4: Evaluate the test results against the RTM to ensure complete coverage of the requirements and that no requirement has been left out.
  • Step 5: Document and review the approaches, steps and practices used in the testing, to further improve the testing process and get accurate, quality results.

What challenges are faced in interoperability testing?

  • Testing all the applications together generates a large number of possible combinations, which are difficult to cover.
  • Differences between the environment where an application is developed and the environment where it is installed may affect testing, especially if either environment goes down.
  • These environment differences also demand a distinct test strategy that encompasses the needs and features of both environments.
  • The applications are connected over a network, and the added network complexity makes the task of testing even more difficult.
  • Root cause analysis is difficult when a defect is located.

Solutions to these challenges in interoperability testing:

  • Testing techniques and approaches like orthogonal array testing (OATS), cause-effect graphing, equivalence partitioning, boundary value analysis (BVA) and other similar approaches may prove beneficial in independently mapping the requirements to test cases so as to provide and ensure maximum test coverage.
  • Go through past information and data to study and analyse the conditions under which the system crashes or breaks down, and to estimate how quickly it recovers from failure.
  • Make use of the above study to prepare a proper plan and strategy.

Conclusion:

Interoperability testing is not an easy task to execute, but with proper planning and strategy, along with the information, data and experience gained from the past, it can assure the system’s interoperability quality – the ability to interact uninterruptedly and independently with other systems and applications.

My 2017 Software Industry Predictions

It’s that time of the year when we look into our crystal balls and make predictions for the year ahead. 2016 was a phenomenal year for the technology world. Technologies that emerged over the last few years, such as cloud, firmly planted their feet within the enterprise. Businesses adjusted their strategies to leverage their digital infrastructure and found new paths to engage with their customers and make their operations more efficient. What became increasingly evident over the past year was that the IT landscape had to change to accommodate business challenges, and that the enterprise was ready to adapt to the change brought about by technological innovation. Here’s a look at what the year ahead promises – in my view at least.

  • New technologies provide new business opportunities
    2016 witnessed the rise of technologies such as Augmented Reality, Virtual Reality, IoT, Machine Learning etc. Forrester Research believes that Augmented Reality will be one of the top five technologies that will completely change the world over the course of the next three to five years. Consumers have been receptive to these new technologies – look at the success of Pokemon Go if you need an example. As consumers become more open to adopting and experimenting with new technologies, it opens up new possibilities for organizations to create opportunities by bringing together data, mobile devices, and applications to understand customer journeys better. We can thus expect to see tech budgets focus more on business technology in the new year.
  • Mobile testing all the way
    The World Quality Report 2016-17 found that while a large number of organizations were taking advantage of mobile solutions, mobile testing skills were still relatively nascent in the development lifecycle. The lack of mobile testing experts and fragmented testing methodologies seem to have contributed to this. In 2017, however, as consumer and enterprise-grade mobile applications grow in demand and adoption, we can expect to see mobile testing strategies become more mature. Involving test engineers in the development process from the very beginning will be an assured way of improving business outcomes by delivering high-quality, optimally performing apps.
  • The future is cloudy
    IDC estimates that by 2020 “67% of enterprise IT infrastructure and software will be for cloud-based offerings.” We can expect to see more organizations move away from on-premise infrastructure and adopt the cloud. As the demand for agility grows, digital transformation accelerates and more companies go global, organizations will look towards adopting the cloud to drive innovation.
  • Test automation will become more mainstream
    To remain competitive, organizations will have to speed up their application development process. As the need for speedy deployments increases, 2017 will witness test automation becoming more mainstream. The focus on automation will be a great deal stronger, as automation and new levels of testing are needed to match the speed of development. Testing and application performance management tools will evolve further and provide organizations with a more holistic view of their application development process and allow them to test new features.
  • The rise of Performance Engineering
    2017 is also expected to witness a greater impetus placed on performance to deliver the best user experiences. To enable this, organizations will no longer depend on performance tests alone but will increasingly focus on performance engineering to deliver consistent and uniform application performance across diverse platforms, devices, and operating systems.
  • Shift in the enterprise application landscape
    We can expect to see greater consumerization of enterprise applications. Instead of clunky enterprise apps, 2017 will usher in the era of consumer-quality enterprise applications that have intuitive user interfaces and an easily navigable information architecture even in the most complex systems. As multi-device collaboration becomes more mainstream, accessing files and information will become seamless across devices.
  • Agile Outbreak
    One of the biggest trends of 2017, I believe, will be that the application of agile concepts will step out of the software/product development mould and be applied in a much wider organizational context. Agile principle derivatives will become increasingly common in areas such as design/merchandising strategy, design thinking, growth hacking etc. and forge interdisciplinary collaborations. Methodologies such as DevOps and Continuous Delivery will also lean on agile to improve outcomes and build products, as well as organizations, that can be said to be well tested and bug-free. This means integrating testing into the build model. At an organizational level, agile concepts will be implemented to improve quality by ensuring scalability, availability, easy maintenance and the simplification of complex systems. Agile concepts like transparency, inspection, continuous learning, process focus, flexibility and shorter feedback loops – concepts that can benefit every aspect of an organization – will see greater adoption.

It is certainly a very exciting time to be in this industry as we gear up to face another year full of technological potential and to usher in the ‘age of the customer’.

Testing in the DevOps World – Test Automation, the Sequel

“The most powerful tool we have as developers is automation.” – Scott Hanselman

In a previous blog about some stats that told the developing software testing story, we identified the impact on testing strategies of the growing adoption of DevOps as a key trend. Our CEO has also written in greater detail on how testing is changing due to DevOps. Looking back, though, it’s clear this story is still developing and that something more remains to be said. In other words, it deserves a sequel, and the central role in this continuing story is reserved for Test Automation.

You may ask, why does Test Automation deserve this star billing? Well, consider the DevOps way for a bit. This is a world with several, almost continuous, iterative releases, each within days, even minutes, of each other, all being pushed out to the final production environment into the demanding hands of paying customers. So many releases, so little testing time and so much pressure to deliver quality – has there ever been a theoretically more perfect case for automated testing? Let’s hope that puts the “Why” question to bed – now let’s move on to the “How” and “What”.

First, a look at the “How”. As was already apparent with Agile, with so many iterative releases following so close on the heels of each other, it is practically impossible to build your automation in parallel with the product under test. Thus, with DevOps, it becomes critically important to involve the test automation team at an early enough stage of product planning to anticipate, as much as possible, the direction the product is likely to take, and to automate early. This is also the time to plan the automation carefully. Factors to consider include: which conditions are most likely to remain reasonably unchanged, and which are likely to undergo frequent changes? How reusable can you make the components of the automation framework? This is also a good time to define the objectives you are looking to achieve with the automation – faster deployment? Better code quality? Greater confidence based on better regression tests? Essentially, start with the end in mind and measure as you go along to know if you are on the right track.
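
On the reusability question, one widely used way to keep automation components reusable is the page-object (or screen-object) pattern: locators and actions for a screen live in one class, so that when the UI changes only that class needs to change. The sketch below is illustrative only – the locators, screen names and driver are placeholders for whatever Selenium- or Appium-style driver your framework uses.

```python
# Illustrative page-object sketch; locators and screen names are hypothetical.
# Any Selenium/Appium-style driver object exposing find_element() would fit here.

class SearchPage:
    """Encapsulates the search screen so tests never touch raw locators directly."""
    SEARCH_BOX = ("id", "search_input")      # placeholder locator strategy and value
    SEARCH_BUTTON = ("id", "search_submit")

    def __init__(self, driver):
        self.driver = driver

    def search_for(self, term):
        # If the UI changes, only these lines change; the tests that call
        # search_for() stay exactly as they are.
        self.driver.find_element(*self.SEARCH_BOX).send_keys(term)
        self.driver.find_element(*self.SEARCH_BUTTON).click()
        return ResultsPage(self.driver)


class ResultsPage:
    RESULT_ROW = ("class name", "result_row")

    def __init__(self, driver):
        self.driver = driver

    def result_count(self):
        return len(self.driver.find_elements(*self.RESULT_ROW))


# A test then reads as a stable, intention-revealing flow:
#   results = SearchPage(driver).search_for("wireless headphones")
#   assert results.result_count() > 0
```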

So, on to the “What”. There is both the opportunity to create a comprehensive test automation framework and the threat that some of it could be rendered irrelevant by the pace of change in the product.

  1. That said, there is value in automating the unit tests, as there is a reasonable chance that several specific components will remain relatively stable over the course of the many iterations.
  2. The greatest value could well be in automating the regression testing – the maximum bang for the buck considering the sheer number of releases. Many DevOps models allow code to be delivered late in the cycle, and fixes can be applied right up until the time a specific release goes live. Automating the regressions allows you to test the entire code base after each such addition, making it far more likely that a high-quality, bug-free product goes out to the end customer.
  3. Among the central value propositions of the DevOps way is continuous deployment on the operational infrastructure. Continuous deployment means continuous integration of code, and of that code into the operational infrastructure. This is where automation can play a key role. An interesting approach followed by many is to run integration testing separately from, and in parallel with, unit testing – sometimes even before unit testing. The thinking is that integration testing is not concerned with the business logic, only with whether the product works on the deployment infrastructure, and automation helps test this quickly and relatively comprehensively (a minimal sketch of such a parallel run follows this list).
  4. There is also great value in automating the testing of the build–deployment stage, starting with the test environment itself. The objective is to run the tests in the test or development environment and ensure smooth deployment in the production environment.
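
As mentioned in point 3 above, here is a minimal sketch of running independent unit and integration suites in parallel and gating promotion of the build on both. The suite paths and commands are placeholders for whatever your pipeline actually invokes.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Placeholder suite commands; in a real pipeline these come from the CI configuration.
SUITES = {
    "unit": "pytest tests/unit",
    "integration": "pytest tests/integration",   # exercises deployment/infrastructure wiring only
}

def run_suite(name, cmd):
    """Run one suite and report whether it passed."""
    result = subprocess.run(cmd, shell=True)
    return name, result.returncode == 0

# Kick both suites off together instead of sequentially.
with ThreadPoolExecutor(max_workers=len(SUITES)) as pool:
    results = dict(pool.map(lambda item: run_suite(*item), SUITES.items()))

# Deployment proceeds only if both independent suites are green.
if all(results.values()):
    print("Unit and integration suites passed; promoting build")
else:
    failed = [name for name, ok in results.items() if not ok]
    raise SystemExit(f"Blocked: {', '.join(failed)} suite(s) failed")
```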

A quote we like about DevOps goes, “DevOps is not a goal, but a never-ending process of continual improvement”. While agreeing with Jez Humble, who said this, we would perhaps like to add that this continual improvement is driven by continual testing, which in turn is based on a solid test automation platform. What do you think?

Entry and Exit Criteria in Software Testing

Software testing, an essential part of the software development life cycle, is quite a vast and complex process that requires ample time and effort from testers to validate a software product’s quality and effectiveness. This process, though extensively helpful, often becomes tedious as it has to be executed a plethora of times across different platforms. Moreover, there are multifarious requirements that need to be considered and tested, which sometimes becomes a source of uncertainty for testers, mostly regarding where to commence and where to terminate testing. To avoid this confusion, specific conditions and requirements are established by the QA team before the inception of testing, and these help testers throughout the testing life cycle. These conditions are termed entry and exit criteria, and they play a crucial role in the software testing life cycle.


What Are Entry Criteria in Software Testing?

As the name specifies, entry criteria are a set of conditions or requirements that must be fulfilled or achieved to create suitable and favorable conditions for testing. Finalized and decided upon after a thorough analysis of the software and business requirements, entry criteria ensure the accuracy of the testing process, and neglecting them can impact its quality. Some of the entry criteria generally used to mark the beginning of testing are:

  • Complete or partially testable code is available.
  • Requirements are defined and approved.
  • Availability of sufficient and desired test data.
  • Test cases are developed and ready.
  • Test environment has been set up and all other necessary resources, such as tools and devices, are available.

Both the development and testing phases are used as sources to define the entry criteria for the software testing process:

  • The development phase/process provides useful information pertaining to the software – its design, functionality, structure, and other relevant features – which helps in deciding accurate entry criteria such as functional and technical requirements, system design, etc.
  • From the testing phase, the following inputs are considered:
    • Test Plan.
    • Test Strategy.
    • Test data and testing tools.
    • Test Environment.

 

Entry criteria are mainly determined for four specific test levels, i.e., unit testing, integration testing, system testing and acceptance testing. Each of these test levels requires distinct entry criteria to validate the objective of the test strategy and to ensure fulfilment of the product requirements.

Unit Testing:

  • Planning phase has been completed.
  • System design, technical design and other relevant documents are properly reviewed, analysed and approved.
  • Business and functional requirements are defined and approved.
  • Testable code or units are available.
  • Availability of test environment.

Integration Testing:

  • Completion of unit testing phase.
  • Priority bugs found during unit testing have been fixed and closed.
  • The integration plan and the test environment to carry out integration testing are ready.
  • Each module has gone through unit testing before the integration process.

System Testing:

  • Successful completion of integration testing process.
  • Priority bugs found during previous testing activities have been fixed and closed.
  • System testing environment is available.
  • Test cases are available to execute.

Acceptance Testing:

  • Successful completion of system testing phase.
  • Priority bugs found during previous testing activities have been fixed and closed.
  • Functional and business requirements have been met.
  • Acceptance testing environment is ready.
  • Test cases are available.

What Are Exit Criteria in Software Testing?

Exit criteria form an important document prepared by the QA team to adhere to the imposed deadlines and allocated budget. The document specifies the conditions and requirements that must be achieved or fulfilled before the end of the software testing process. With the assistance of exit criteria, the team of testers is able to conclude testing without compromising the quality and effectiveness of the software.

Exit criteria depend heavily on the by-products of the software testing phase, i.e. the test plan, test strategy, test cases, test logs, etc., and can be defined for each test level, right from test planning and specification through to execution. The exit criteria commonly considered for concluding the testing process are listed below (a small automated gate based on criteria like these is sketched after the list):

  • Deadlines met or budget depleted.
  • Execution of all test cases.
  • Desired and sufficient coverage of the requirements and functionalities under the test.
  • All the identified defects are corrected and closed.
  • No high-priority, high-severity or critical bug has been left open.
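
As mentioned above, some of these criteria lend themselves to an automated check in the release pipeline. The function below is a hedged, hypothetical sketch – the thresholds and the sources of the numbers (test runner results, defect tracker queries) would be whatever your own process defines:

```python
def exit_criteria_met(total_cases, executed, passed, open_critical_bugs,
                      required_pass_rate=0.98):
    """Return True only when the agreed exit conditions hold (illustrative thresholds)."""
    all_executed = executed == total_cases                      # every planned case was run
    pass_rate_ok = (passed / executed) >= required_pass_rate if executed else False
    no_blockers = open_critical_bugs == 0                       # no critical defects left open
    return all_executed and pass_rate_ok and no_blockers


# Example: 500 planned cases, all run, 497 passed, no open critical defects.
print(exit_criteria_met(500, 500, 497, 0))   # True: testing can be concluded
print(exit_criteria_met(500, 480, 470, 2))   # False: coverage and blockers both fail
```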

Similar to entry criteria, exit criteria are also defined for the different levels of testing. A few of them are:

Unit Testing:

  • Successful execution of the unit tests.
  • All the identified bugs have been fixed and closed.
  • Project code is complete.

Integration Testing:

  • Successful execution of the integration tests.
  • Satisfactory execution of stress, performance and load tests.
  • Priority bugs have been fixed and closed.

System Testing:

  • Successful execution of the system tests.
  • All specified business and functional requirements have been met.
  • Priority bugs have been fixed and closed.
  • System’s compatibility with supported hardware and software.

Acceptance Testing:

  • Successful execution of the user acceptance tests.
  • Approval from management to stop UAT.
  • Business requirements have been fulfilled.
  • No critical defects have been left out.
  • Sign-off on acceptance testing.

Conclusion:

Defining entry and exit criteria for a software testing process is essential, as it helps the testing team finish the testing tasks within the stipulated deadlines without compromising the quality, functionality, effectiveness or efficiency of the software.


Did We Get It Right? – A Review Of Our 2016 Predictions

“Science is not, despite how it is often portrayed, about absolute truths. It is about developing an understanding of the world, making predictions, and then testing these predictions.” – Brian Schmidt

Schmidt is an Australian educator of repute – and in the spirit of heeding the advice of our teachers, let’s take a look back at what we predicted for the world of testing in 2016, and test just how on (or off) target we were.

  • Internet of Things:
    In many ways, this was an easy prediction to make, and it’s fair to say that we hit the mark – the market has clearly and dramatically expanded. Zinnov estimated a 2016 market of USD 54 Billion for IoT technology products, and Gartner estimated that 6.4 billion connected things were in use worldwide in 2016, a growth of 30% over 2015. We predicted that such growth in IoT products would call for a greater emphasis on usability testing and performance testing and a sustained emphasis on automation in testing. In usability, the focus last year was on testing facets like installation, interoperability, and the launch and usage experience. Performance factors in focus were load-bearing capability, speed, and scaling ability. Among the key features of the IoT world are “Over The Air” (OTA) updates, where the OS and firmware get updated frequently. Frequent releases call for increased regression testing – a natural fit for greater automation.
  • Mobile Testing:
    Digital Transformation of enterprises, driven by the growing power of mobility, was one of the defining trends of the year gone by. We estimated that there would be a slew of new mobile apps focused on mCommerce and mobile payments. This seems to have panned out a shade slower than expected in the early part of the year, but with some tailwinds later in the year. Business Insider estimated US in-store mobile payment volume to reach $75 billion in 2016, indicating some lingering resistance from consumers. Late in the year, though, a high-growth market like India witnessed a strong push towards digital payments. Our estimate had been that with the growth of such mobile-enabled businesses would come a greater emphasis on security and penetration testing of mobile apps – it’s fair to say this has panned out as expected. We had also predicted the rise of testing for voice commands with the growing use of Siri. In many ways, this trend has moved faster than our estimates with the sudden advent of digital assistants like Amazon Alexa.
  • Agile Development / Continuous Delivery:
    These are trends that we really took to heart over the year. If you have been following our blogs you would have seen numerous references to the changing role of testing and test automation in the Agile way of life and, most recently, to the DevOps approach and how testing has been impacted. Perhaps the most visible difference in software development due to Agile and DevOps has been the ever-shorter iterations and the increasing number of releases. The world of software testing has been impacted in multiple ways – testing is getting involved at much earlier stages of the product lifecycle, it is much more closely integrated into the product development and deployment process, and automation is playing a greater and more critical role – just as we expected.
  • Security Testing:
    Even in the earlier sections on IoT and Mobile Testing, security testing has found mention. The appearance of threats like the Mirai botnet in 2016 only reinforced just how important security testing had become over the year. This applies across mobile apps, web apps, and desktop apps, and the need is for comprehensive security testing. It became fair to assume that any vulnerability in your code, or in the code of any of the underlying technologies or products, would be open to exploitation, and this only drove up the emphasis on security testing. The “World Quality Report 2016”, jointly published by Capgemini, Sogeti and HP, reported that 65% of the QA executives surveyed found security to be their top concern. This was more or less in line with what we had predicted at the start of the year.
  • Focus on automation in testing over test automation:
    This was more a fervent appeal than a prediction: to make automation more strategic and more central to the process of creating high-quality products, so that the full benefits of the automation initiative shine through. To this extent we are happy that, at least in the interactions we have been having, the focus has shifted from achieving “fewer testers” to doing “better testing”, and from unattainable goals like “100% test automation” to “strategic impact”. We still believe that the role of automation is to support the testers, not to replace them, and more and more people are coming around to that way of thinking – kind of like we predicted!

Niels Bohr said, “Prediction is very difficult, especially about the future”. We are in no position to disagree with a Nobel Laureate – so despite the reasonable accuracy of our 2016 predictions, we are in no rush to turn in our software development hats for a crystal ball!

Criteria for Selecting Mobile Application Testing Tools for Your Business

Mobile application testing is one of the more complex and strenuous testing activities for testers because of the multiple factors and conditions involved. Mobile application testing needs to be carried out on every possible combination of factors related to the functioning of the app – device, operating system, platform, network configuration and settings, and many other relevant parameters. This makes the task of mobile testers all the more hectic and complex: they must ensure coverage of the specified testing requirements on every possible combination of devices, OSs, platforms, etc., along with their different versions and variants.

However, the job of mobile testers can be made easier by using testing tools in the mobile application testing process, which may significantly reduce the effort and time spent in testing a mobile application.

The market is flooded with a wide variety of mobile application testing tools advertising their proficiency and competency in testing mobile apps. The sheer availability of these tools and their appealing advertising often confuses and misleads testers, who end up selecting an inappropriate or ineffective tool and incurring worthless expenditure on it.

Here, we list some criteria that may be considered while selecting mobile application testing tools, to fulfil the needs and requirements of testing from both the technical and the business perspective.

  1. Targeted Platform: The testing tool should be selected with respect to the platform – along with its different variants and versions – on which the mobile app is targeted and intended to function. It is preferable, however, that apart from covering one or two major platforms, the tool is able to test on other platforms as well. This ensures cross-platform testing of the mobile application.
  2. Code and Build Requirements: The software’s code and builds are a matter of concern with respect to privacy and security. The code or build should not be shared or exported outside the testing team’s boundaries or environment to any unknown or unauthorized entity. The selected tool should not compromise the privacy and security of the source code or build in any respect.
  3. Additional features: Besides automating the mobile app tests, a testing tool should be able to provide additional, useful features. It should be able to deliver multiple functionalities, such as:
    • Logging and reporting defects.
    • Filtering logged defects by priority, time, type and other relevant parameters.
    • Monitoring and tracking bugs.
    • Making it easy for the QA or project manager to view the overall, summarized status of the tests.
  4. Continuous testing: The automated testing tool should be able to deliver continuous testing, to evaluate the impact on the software of changes or modifications to the code. Changes made to the code should be readily testable by the tool.
  5. Third-party bug tracking: The selected tool should be able to support and integrate with third-party bug tracking systems.
  6. Team Management: Along with testing the mobile app, the tool should also help manage the activities of the testing team, which may include roles and responsibilities, tasks assigned to each member, the status of those tasks, feedback and reviews.

Conclusion:

Stated above are some general criteria for selecting a testing tool; however, a tester, based on his/her experience and rational thinking, and with the help of the business team, may consider additional parameters to select the best tool for testing the mobile app.

Localization Testing: why, when and how?

Organizations, whether small or large, are moving towards globalization. The reason: the booming global economy is attracting every industry to operate beyond local boundaries and explore more opportunities to grow and expand at a much faster pace on the global stage. With that context set, let’s move to our topic.

While developing a software product, the focus is on incorporating every functionality and feature that may attract users and ensure a large audience for the software; that is, efforts are directed towards developing a quality application that will be readily accepted by users worldwide, irrespective of geographical location. This is called globalization of the software product.

Globalizing your product is indeed a good move and of the utmost importance, but ensuring that it is properly localized cannot be ignored either. Let’s see why localization of software is required.

Why localization of software is required?

Software needs to be localized to meet the needs and expectations of the local audience. It is quite possible that a software product meeting the needs and expectations of a particular territory, culture or region is unable to live up to the expectations of users belonging to a different culture or region. Linguistic translation alone is therefore not sufficient to localize software, as two cultures, countries or regions can differ in many aspects, such as style, conventions, standards, design, time zone, fonts, colour, etc. Users from different cultures and regions have different tastes, perspectives and ways of looking at and using the software product. To make a software product globally recognized, it is preferable to first target the local markets and then go for the global market. Without localization, you cannot achieve globalization of the product.

What is Localization Testing?

Localization testing is one of the testing methodologies provided by the software quality assurance process to ensure that a globalized software product is readily adaptable to the settings and environment of a particular locale, culture or region.

How to do localization testing?

The localization testing of a software product may comprise the following activities:

  • Support for multiple character sets.
  • Evaluating UI features and issues, such as truncated or missing text or content, or content inappropriately translated or displayed.
  • Checking whether the language and content describing the system’s functionality are appropriate for the targeted country or location.
  • Checking that the conventions, standards and protocols implemented in the system match the targeted area or region.
  • Consistency throughout the software documentation with respect to the targeted country’s language and settings.
  • Checking for grammatical mistakes.
  • Time zone, date and currency formats used with respect to the targeted country or area (see the sketch after this list).
  • The system’s adherence to the rules, regulations, laws and agreements of the particular country.
  • Appropriate layout, design, placement and display of images and text.
  • Consistency in design, layout and style.
  • Screen resolution with respect to the targeted devices’ resolutions.
  • Proper encoding and decoding of characters.
  • Correct and appropriate translation of the content for the particular country or region.
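
As flagged in the time-zone/date/currency item above, locale-specific formatting checks are easy to automate. The sketch below is hypothetical and self-contained: format_price() is a toy stand-in for whatever formatting function your product actually exposes, and the expected strings encode each target locale’s conventions.

```python
import pytest

# Toy stand-in for the product's real formatter, just to keep the sketch runnable.
LOCALE_RULES = {
    "en_US": ("$", ",", ".", ""),
    "de_DE": ("", ".", ",", " €"),
    "en_IN": ("₹", ",", ".", ""),
}

def format_price(amount, locale):
    prefix, thousands, decimal, suffix = LOCALE_RULES[locale]
    whole, frac = f"{amount:,.2f}".split(".")
    whole = whole.replace(",", thousands)
    return f"{prefix}{whole}{decimal}{frac}{suffix}"

@pytest.mark.parametrize("amount, locale, expected", [
    (1099.99, "en_US", "$1,099.99"),
    (1099.99, "de_DE", "1.099,99 €"),
    (1099.99, "en_IN", "₹1,099.99"),
])
def test_currency_is_formatted_per_locale(amount, locale, expected):
    # Localization check: the same amount renders per each locale's conventions.
    assert format_price(amount, locale) == expected
```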

Conclusion:

Given above are just a few of the general activities that may be carried out during localization testing of a system. The exact activities and scope of localization testing may vary from team to team, depending on needs and requirements.

In Software, What To Automate Is As Important As How To Automate

If Shakespeare were a tester, in the initial days of test automation adoption he would certainly have asked, ‘To automate or not to automate, that is the question’. As the adoption of test automation has increased and become an integral part of every testing strategy, this question has evolved just that little bit. Today, most testing teams recognize that they have to incorporate test automation to keep up with the speed of development. Agile testing methodologies and newer software development approaches such as Test Driven Development (TDD) place testing at the heart of software development. Hence, the tests have to run as fast as the development process; a failure to do so drives up costs due to timeline overruns.

While test automation carries the promise of great software quality, the fact remains that we cannot automate each and every test. Why? Simply because you want to get maximum returns from your test automation initiatives. Automating everything only drives up costs because of the time and resources required and the level of complexity involved. At the same time, by automating the right tests, teams can increase test coverage, reduce the number of bugs, improve software quality and eventually take the product to market much faster. The reality is that automated testing is not an ‘all or nothing’ proposition. Software testing still needs some amount of manual testing – the trick to testing success lies in identifying what to automate as much as in deciding how to automate.

When to use test automation?

For any automation initiative to be successful, it is imperative that the testing team first identifies the activities that are repetitive in the development cycle. Identifying the development environments and validating functionality across those environments becomes the starting point of all automation initiatives. It’s best not to compare automated and manual testing, since the two activities serve different purposes. With test automation, you can increase test coverage, get faster feedback, find more bugs and save time. Manual testing, on the other hand, is a more investigative exercise, where tests are designed and executed simultaneously and the human brain is employed to spot failures in the system.

Automated tests take the pain out of testing by taking care of the tasks that are repeatable. In our experience, the tests below lend themselves beautifully to automation, increasing test accuracy and improving software quality.

Regression Testing

Even the smallest tweak in software code can lead to the product behaving differently; when you fix something, you run the risk of breaking something else. Regression testing ensures that changes and additions to the software code do not impact existing functionality. It also catches bugs that may have been unwittingly introduced into the system by an upgrade or a patch. During the course of software development, regression tests are run frequently to confirm that even the smallest alterations, enhancements, or configuration changes in the application source code do not affect the application's functionality.
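One common way to automate a regression check is to pin a previously reported defect with a test so that the same bug cannot silently return. The sketch below, using Node's built-in test runner, assumes a hypothetical applyDiscount helper and a hypothetical ticket number purely for illustration.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical function that once contained a rounding bug (illustrative only).
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Regression test pinning the previously reported defect: a 10% discount on 19.99
// used to come back as 17.990000000000002 before rounding was added.
test("bug #123 (hypothetical): discount is rounded to two decimal places", () => {
  assert.equal(applyDiscount(19.99, 10), 17.99);
});

// A neighbouring case, so the fix does not break existing behavior.
test("zero discount leaves the price unchanged", () => {
  assert.equal(applyDiscount(19.99, 0), 19.99);
});
```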

Functional Testing

Automating most, if not all, of the functional tests also enhances the effectiveness of a testing team. Functional testing focuses on what the software ‘does’ and is not concerned with the internal details of the application. It thus becomes easier for testers to automate tests and set developer-independent benchmarks to assess whether the function being developed performs as expected and remains stable under user load. Automating functional tests means that even an inexperienced tester can run powerful, comprehensive functional tests and contribute to building a robust software product.

Unit Testing

Unit testing is the testing of small code fragments to gain a deeper, more granular view of how the code is performing. The identified pieces of code are checked independently and in isolation to ensure that they behave correctly. Manual unit testing is time- and resource-intensive and can be error prone. Automating unit tests helps keep the source code error free, identifies errors early in the development phase, and ensures that the code works now and keeps working in the future.
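The sketch below, again using Node's built-in test runner, shows the isolation aspect: the unit under test (a hypothetical greeting function) receives a fake clock, so it can be checked independently of the real system time.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// A small unit under test (hypothetical): the greeting depends on a clock that we
// can fake, so the unit is exercised in isolation from the real system time.
type Clock = () => Date;

function greeting(clock: Clock): string {
  const hour = clock().getHours();
  return hour < 12 ? "Good morning" : "Good afternoon";
}

test("greets with 'Good morning' before noon", () => {
  const fakeClock: Clock = () => new Date(2016, 0, 1, 9, 0, 0); // 09:00
  assert.equal(greeting(fakeClock), "Good morning");
});

test("greets with 'Good afternoon' from noon onwards", () => {
  const fakeClock: Clock = () => new Date(2016, 0, 1, 14, 30, 0); // 14:30
  assert.equal(greeting(fakeClock), "Good afternoon");
});
```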

Integration Testing
Integration testing is performed to see how the software behaves when all the pieces are put together. Integration tests are performed wherever there is a coupling between two software systems; when a coupling is broken, the software does not perform as it should. Since integration testing spans all layers, testing it manually would mean re-executing the tests by hand every time, which hurts the build process because it is extremely time-consuming and resource-intensive. If integration tests are automated instead, testers can catch bugs faster and ensure that the application performs correctly when all its pieces are put together.
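As a small illustration, the hedged sketch below wires two hypothetical modules (a repository and a service that depends on it) together and tests them as a pair, so a broken coupling between them surfaces immediately.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Two hypothetical modules whose coupling we want to exercise together:
// a repository that stores orders and a service that depends on it.
class OrderRepository {
  private orders = new Map<string, number>();
  save(id: string, amount: number): void { this.orders.set(id, amount); }
  totalFor(id: string): number { return this.orders.get(id) ?? 0; }
}

class BillingService {
  constructor(private repo: OrderRepository) {}
  charge(id: string, amount: number): number {
    this.repo.save(id, amount);
    return this.repo.totalFor(id) * 1.2; // 20% tax, illustrative only
  }
}

// The integration test wires the real repository into the service, so a broken
// contract between the two modules surfaces immediately.
test("billing service and repository work together", () => {
  const service = new BillingService(new OrderRepository());
  assert.equal(service.charge("order-42", 100), 120);
});
```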

Smoke Testing
Smoke testing is a quick test conducted after a build is completed to identify, and then fix, obvious defects in a piece of software. Smoke tests are usually non-comprehensive: they focus on verifying that the most important functions work and on assessing whether the build is stable enough to proceed with further testing. Smoke testing is also called Build Verification Testing and should be automated if builds are frequent, as it exposes integration issues and identifies problems with the code early.
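A smoke suite can be as simple as touching a handful of critical routes on a fresh build and asserting that they respond. The sketch below assumes Node 18+ (for the global fetch) and an illustrative staging URL and route list.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical base URL of the freshly deployed build; adjust to your environment.
const BASE = process.env.SMOKE_BASE_URL ?? "https://staging.example.com";

// A smoke suite only asks "is the build alive?", so it touches a handful of
// critical routes and nothing more. The routes below are illustrative assumptions.
const criticalRoutes = ["/", "/login", "/api/health"];

for (const route of criticalRoutes) {
  test(`smoke: ${route} responds successfully`, async () => {
    const response = await fetch(new URL(route, BASE));
    assert.ok(response.ok, `${route} returned HTTP ${response.status}`);
  });
}
```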

Performance Testing
Performance testing of an application is an intensive and exhaustive process, as it involves identifying performance issues. Load testing, volume testing, stress testing, and similar tests fall within the purview of performance testing, since they all target the factors that affect an application's performance. Performance tests are conducted to confirm that the application can handle varying volumes of transactions and large numbers of concurrent users without compromising its speed, stability, or scalability. Performance testing covers several functional and non-functional aspects of the application, assesses the reliability of the product, and identifies the reasons behind performance bottlenecks, whether they lie in software or hardware.
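Dedicated tools such as JMeter, Gatling, or k6 are the usual choice for load and stress tests; purely to illustrate the idea of concurrent virtual users and latency percentiles, here is a minimal TypeScript sketch (Node 18+ for the global fetch) with an assumed target URL.

```typescript
// A minimal, hedged load-test sketch; not a substitute for a real load-testing tool.
const TARGET = process.env.PERF_TARGET_URL ?? "https://staging.example.com/"; // assumed URL
const VIRTUAL_USERS = 25;
const REQUESTS_PER_USER = 10;

async function virtualUser(): Promise<number[]> {
  const latencies: number[] = [];
  for (let i = 0; i < REQUESTS_PER_USER; i++) {
    const start = Date.now();
    const res = await fetch(TARGET);
    await res.arrayBuffer(); // drain the body so the timing is realistic
    latencies.push(Date.now() - start);
  }
  return latencies;
}

async function main(): Promise<void> {
  // Launch all virtual users concurrently and collect their latencies.
  const results = await Promise.all(
    Array.from({ length: VIRTUAL_USERS }, () => virtualUser()),
  );
  const all = results.flat().sort((a, b) => a - b);
  const median = all[Math.floor(all.length / 2)];
  const p95 = all[Math.floor(all.length * 0.95)];
  console.log(`requests: ${all.length}, median: ${median} ms, p95: ${p95} ms`);
}

main().catch((err) => { console.error(err); process.exit(1); });
```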

Having established the case for test automation, it is important to note that tests that check application usability, random testing, device interface testing, and back-end testing are generally best conducted manually. Employing great manual testers is essential, especially during exploratory testing, since manual testers have the ability and experience to question the system and notice when things behave differently. For complete testing success, it is therefore essential to take a strategic approach: find the balance between manual testing and test automation, and then find the right set of testing tools to aid the automation process so that it is cost-effective and delivers strong returns.

How to Manage Test Data in End-to-End Test Automation?

Modern software products, built with the latest technologies, help us carry out long, complex, and repetitive activities effectively and quickly. However, most software products (arguably all of them) cannot carry out their intended functions entirely on their own.

These applications must integrate with external applications, systems, and environment components to perform their intended functions smoothly and without interruption. This multiplies the application's already considerable complexity and, in turn, increases the probability of bugs and defects in the system.

The software QA process addresses this with end-to-end testing, which not only covers the integration of the software with the other systems needed to execute its functionality, but also tests the completeness of the application, from beginning to end and at every level, to ensure the desired, streamlined workflow and data flow throughout.

Automating end-to-end testing can be an efficient, productive, and time-saving approach, since this technique covers the whole software system, including its interfaces, databases, and other relevant entities, along with the complexities of each. Moreover, the large volume of test data needed to test the system thoroughly, while maintaining consistency, accuracy, and integrity throughout the testing schedule, effectively rules out a purely manual approach.

What is Test Data?

“Test data” is an umbrella term for all the data inputs required to test a system's functionality. It includes positive data to verify expected behavior as well as negative or invalid data to exercise error- and exception-handling mechanisms.

Test data plays a major role in the testing process, because it is what surfaces the qualities, and the deficiencies, present in the system. Hence the need to create and maintain test data for end-to-end test automation, which is not an easy task for the testing team.
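As a small illustration of positive and negative test data driving the same checks, the hedged sketch below feeds a single data table into a parameterized test of a hypothetical e-mail validator.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical unit under test: a simple e-mail validator (illustrative only).
const isValidEmail = (value: string): boolean => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(value);

// One table of test data covering both positive (expected functioning) and
// negative or invalid inputs (error handling), as described above.
const testData: Array<{ input: string; valid: boolean }> = [
  { input: "user@example.com", valid: true },          // positive data
  { input: "first.last@sub.example.org", valid: true },
  { input: "", valid: false },                          // negative data
  { input: "not-an-email", valid: false },
  { input: "spaces @example.com", valid: false },
];

for (const { input, valid } of testData) {
  test(`"${input}" is ${valid ? "accepted" : "rejected"}`, () => {
    assert.equal(isValidEmail(input), valid);
  });
}
```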

So how should test data be managed in end-to-end test automation?

A QA team may adopt specific strategies and practices to manage the creation and use of test data in end-to-end test automation, as their needs and requirements dictate. A few of these are described below.

  • Test data creation during test phase set up

To ensure correct and precise results for each testing phase and for each functionality or module under test, it is preferable to create the test data for each testing activity in parallel with the other activities carried out for that phase. This keeps appropriate, relevant test data inputs available for every testing process. The data may be generated with insert operations on the database, or simply through the application's user interface.

However, creating test data alongside the tests increases the time needed to execute and complete a test phase. Setting up the data also requires developing and running extra scripts, which adds to the cost of automation.
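A hedged sketch of the database route follows: test data is inserted in a setup hook just before the tests that need it run, and removed afterwards. The connection string, table, and columns are assumptions for illustration; node-postgres (pg) stands in for whatever database client the project uses.

```typescript
import { test, before, after } from "node:test";
import assert from "node:assert/strict";
import { Client } from "pg"; // node-postgres; any database client would do

// Connection details and schema below are assumptions for illustration only.
const db = new Client({ connectionString: process.env.TEST_DB_URL });

before(async () => {
  await db.connect();
  // Create the data this test phase needs, right before it runs.
  await db.query(
    "INSERT INTO customers (id, name, tier) VALUES ($1, $2, $3)",
    ["cust-001", "Test Customer", "gold"],
  );
});

after(async () => {
  // Remove the data again so later phases start from a known state.
  await db.query("DELETE FROM customers WHERE id = $1", ["cust-001"]);
  await db.end();
});

test("gold-tier customer is present for the checkout scenario", async () => {
  const result = await db.query("SELECT tier FROM customers WHERE id = $1", ["cust-001"]);
  assert.equal(result.rows[0].tier, "gold");
});
```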

  • Test data creation prior to test phase

Creating test data before the tests actually run can be a more convenient and productive option than creating it during the test phase, as it lets the testing team focus entirely on executing the automated test scripts rather than splitting their attention between data creation and test execution.

The data can be generated in the same ways as during the test phase, i.e. by applying insert operations on the database or by using the system's user interface. The drawback of this strategy is that there is less certainty about the veracity and appropriateness of data prepared up front for the tests that eventually run.

  • Cleaning the test data

This approach involves refreshing the test data, restoring it to its original state after test execution (or before the next phase begins). To implement it, a backup of the test data repository or database is taken so that the original state can be restored, or the test database cleared, once the tests have run. Rolling the used test data back to its original state preserves the repeatability of the tests along with the data. However, the approach requires thorough knowledge of the database model, and the database may offer limited or no access to its architecture, either of which may be reason enough to rule it out.
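The text above describes restoring from a backup; a lighter-weight variant with the same effect, where the database supports it, is to wrap each test in a transaction and roll it back afterwards. The sketch below assumes node-postgres and an illustrative table name.

```typescript
import { test, before, after, beforeEach, afterEach } from "node:test";
import assert from "node:assert/strict";
import { Client } from "pg"; // node-postgres; the orders table is an assumption

const db = new Client({ connectionString: process.env.TEST_DB_URL });

before(async () => { await db.connect(); });
after(async () => { await db.end(); });

// One way to "clean" test data: run every test inside a transaction and roll it
// back afterwards, so the database returns to its original state every time.
beforeEach(async () => { await db.query("BEGIN"); });
afterEach(async () => { await db.query("ROLLBACK"); });

test("order creation does not leak data into later tests", async () => {
  await db.query("INSERT INTO orders (id, amount) VALUES ($1, $2)", ["ord-1", 100]);
  const { rows } = await db.query("SELECT amount FROM orders WHERE id = $1", ["ord-1"]);
  // The row is visible inside this test; the rollback removes it afterwards.
  assert.equal(Number(rows[0].amount), 100);
});
```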

  • Visualizing and understanding the data layer

With the help of tools readily available in the market, a tester can visualize each data layer and walk through it to analyze and understand the data flow, which can prove beneficial when testing the system.

  • Cutting out the test data

Testing works best when it is grounded in good practice, which is why experienced professionals are needed for such tasks. It is first necessary to put a systematic procedure in place for testing the individual pages of the site, tracking bugs and the fixes they require within the system. Applying regression testing while fixing those bugs will eventually lead to the right destination, because it confirms that the system still behaves the way the team intended.

Conclusion:

Besides the strategies stated above, a testing team may adopt other approaches that suit and fulfil their needs and requirements in the time available. Managing test data in automation is a crucial task that directly impacts the productivity and results of the effort, because automation is all about the repeated, large-scale use of test scripts, and the test data behind them, to perform end-to-end testing of a software application.

The Growing Case of Angular JS for the Mobile Web

Angular JS, an open-source framework from Google, has gained a lot of traction in the world of web development today. It is seen as a viable choice even for responsive mobile web application development, as it allows developers to create modern applications easily. Considering that most applications today are data-driven, Angular JS fits comfortably into the developer's toolbox: it enables back-end web services to interact with external data sources. The framework lets developers extend the HTML syntax to express the application's components succinctly while using HTML as the main template language. This blog looks at some of the considerations that make Angular JS a good fit for mobile web applications.

  • Responsiveness:
    According to the Cisco Visual Networking Index, global mobile devices grew to 7.9 billion in 2015, up from 7.3 billion in 2014. According to the same report, “the typical smartphone generated 41 times more mobile data traffic (929 MB per month) than the typical basic-feature cell phone (which generated only 23 MB per month of mobile data traffic).” Clearly, developers now need to create web applications that present themselves correctly on mobile devices. Since Angular JS is an open-source JavaScript MVC framework, it allows developers to create rich, responsive applications for both desktop and mobile environments from the same codebase, and these applications run on any HTML5-compliant desktop or mobile browser.
  • Scalability and Maintainability:
    Modern web applications need a scalable architecture so that upgrades, patches, and bug fixes can be rolled out easily. Angular JS is a strong choice for building large, scalable applications. It features directives such as ng-class and ng-model, provides two-way data binding, and lets developers save data to the server in just a few lines of code. Applications built with Angular JS are also easy to maintain because the framework encourages object-oriented design principles, and it allows developers to use either MVC or MVVM to separate presentation from business logic, further boosting maintainability.
  • Mobile features:
    A mobile web application has to ensure that all of its features display correctly across browsers. Angular JS has some excellent mobile components. Frameworks such as Ionic or Mobile UI give developers the flexibility to add mobile components and offer rich user interfaces: overlays, sidebars, switches, swipe gestures, scrollable areas, and top and bottom navigation bars that do not bounce on scrolling when the application is viewed on a mobile device, as well as push notifications and analytics. These Angular JS frameworks rely on robust libraries (Mobile UI uses overthrow.js and fastclick.js) to provide a smooth, highly responsive, touch-enabled mobile experience. Two-way data binding ensures that when the framework detects a change in the browser it updates the necessary views immediately, providing a uniform viewing experience.

    Because Angular JS encourages reusable logic, web application logic can be reused across devices and platforms while developers retain the flexibility to customize the UI for each platform. Developers can thus keep the application's functionality separate from its UI, which helps provide a uniform application experience.

  • Performance:
    Performance is critical for mobile web applications; a slow application is almost worse than no application at all. Angular JS takes good care of performance because it uses a declarative paradigm: instead of describing all the steps needed to achieve an end result, developers write lightweight code that describes only the end result itself. Angular JS also loads pages asynchronously, which decreases page load time and increases the speed of the application, thereby boosting performance.
  • Dependency Injection:
    Additionally, Angular JS allows developers to build applications from separate modules that can be interdependent or autonomous. Its built-in dependency injection mechanism identifies where additional objects are needed, then provides and binds them, which makes application development much easier (a brief sketch follows this list). Because Angular JS uses the MVC structure and separates data from logic components, dependency injection makes it possible to bring server-side services into the client-side web application, which reduces the burden on the server and contributes to better application performance.
  • Security:
    To call a good application ‘great’, developers have to ensure that it has robust security features, and Angular JS helps them harden a responsive web application. It communicates with servers over HTTPS, whether through a simple web service or a RESTful API. Angular JS also provides CSRF protection, supports strict expression evaluation, and allows strict contextual escaping. Additionally, to augment security, especially for enterprise applications, developers can use supplemental libraries such as Idapjs with Angular JS to implement single sign-on through the interaction between those libraries and AngularJS.
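The sketch below illustrates two of the points from this list for AngularJS 1.x: a service injected into a controller through the framework's dependency injection, and an ng-model binding that keeps the view and the scope in sync. Module, service, and controller names are illustrative.

```typescript
// Assumes AngularJS 1.x is loaded on the page; `angular` is its global API.
declare const angular: any;

angular
  .module("demoApp", [])
  // A service registered on the module can be injected wherever it is needed.
  .service("greetingService", function (this: any) {
    this.greet = (name: string) => (name ? `Hello, ${name}!` : "Hello!");
  })
  // The controller declares its dependencies; AngularJS's injector supplies them.
  .controller("GreetController", [
    "$scope",
    "greetingService",
    function ($scope: any, greetingService: any) {
      $scope.name = "";
      $scope.message = () => greetingService.greet($scope.name);
    },
  ]);

/* Corresponding template. ng-model keeps the input and $scope.name in sync
   (two-way data binding), so the message updates as the user types:

   <div ng-app="demoApp" ng-controller="GreetController">
     <input type="text" ng-model="name" placeholder="Your name">
     <p>{{ message() }}</p>
   </div>
*/
```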

Along with the benefits mentioned above, Angular JS also has a huge, active community. Getting started is easy, since it is not necessary to learn the entire framework to build an application. Angular JS is built with testing in mind and makes it easy to mock physical devices and situations such as GPS and Bluetooth, which in turn makes test automation much easier to implement.

Conclusion

Given all these advantages, Angular JS is being adopted incredibly fast by developers across the globe, which means more add-ons, more high-quality libraries, and more support for Angular developers. Get set for more Angular JS in the mobile web!

Website Testing- Did you miss anything while testing?

Website testing is an extensive exercise carried out to cover every quality aspect of a website. It is a broad term spanning numerous testing areas and activities that examine the website from many different perspectives.

The end purpose of website testing is to make users comfortable learning, understanding, using, and navigating the website, and it therefore includes almost all types of testing, such as:

  • Functionality testing, to ensure the intended and appropriate functionality and features of the website.
  • Usability testing, to ensure the user-friendliness of the website.
  • Compatibility testing, which checks the compatibility of the website across multiple variants of browsers, operating systems, networks, hardware, software, and devices involved in its functioning.
  • Performance testing, which ensures the smooth performance of the website under both expected and adverse load, conditions, and environments.
  • Database testing, which concerns the veracity, integrity, consistency, and accuracy of the diverse range of data stored in the website's back end.
  • Security testing, which ensures that no loopholes or security glitches are left undetected that might grant unauthorized or malicious users access to the website or expose it to other attacks.

Each of the testing types stated above (and several more) targets multiple aspects of a website. There is, therefore, a lot to test, and a tester can easily miss one or more critical elements that need to be considered and tested. Here, we list some of the essential checks, in checklist form, that generally need to be carried out when testing a website.

1. Functionality Testing:

The following things usually need to be tested while performing a functionality test of a website.

  • Forms testing: Forms on a website are used to submit or retrieve information, so each form should be tested for consistency and integrity throughout the website. Form testing may include the following activities:
    • Validating each field of the form.
    • Validating each field with negative or invalid values.
    • Ensuring sensible default values for each field.
    • Checking that mandatory fields are highlighted with an asterisk (*).
    • Testing the password field's ability to conceal the entered password.
  • Link testing: Testing the various links present on the website (a small link-check sketch follows this list). Links fall into the following types:
    • External links: links that direct the user to a web page or website outside the site's own domain.
    • Internal links: links directing one web page to another within the website's domain.
    • E-mail links: links that open the user's mail client with pre-filled information such as the recipient address.
    • Broken links: links that do not lead to any page, internally or externally; these are also known as dead links.
  • Validation testing: Website validation is done to ensure adherence and conformance to specified, established standards, which also helps optimize the website at the SEO level. This includes validation of feeds, HTML, XHTML, and CSS properties and tags.
  • Cookie testing
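As promised above, here is a minimal, hedged link-check sketch (Node 18+ for the global fetch): it fetches one page, extracts its anchors naively, and reports links that do not resolve. A real suite would use a proper crawler or a dedicated link checker, and the page URL below is an assumption.

```typescript
const PAGE = process.env.PAGE_URL ?? "https://www.example.com/"; // assumed URL

async function checkLinks(pageUrl: string): Promise<void> {
  const html = await (await fetch(pageUrl)).text();

  // Naive href extraction; fine for a sketch, not for production HTML parsing.
  const hrefs: string[] = [];
  for (const match of html.matchAll(/href="([^"#]+)"/g)) {
    try {
      const url = new URL(match[1], pageUrl);
      if (url.protocol === "http:" || url.protocol === "https:") {
        hrefs.push(url.toString());
      }
    } catch {
      // Skip hrefs that are not valid URLs (e.g. template placeholders).
    }
  }

  for (const link of new Set(hrefs)) {
    try {
      const res = await fetch(link, { method: "HEAD" });
      if (!res.ok) console.log(`BROKEN (${res.status}): ${link}`);
    } catch {
      console.log(`BROKEN (no response): ${link}`);
    }
  }
}

checkLinks(PAGE).catch((err) => { console.error(err); process.exit(1); });
```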

2. Database Testing:

Most websites are driven by the back end of the system, i.e. the data provided and stored. Database testing is therefore crucial for a website and may include the following activities:

  • Correct and appropriate execution of database queries.
  • Verifying and validating data integrity throughout the database when data is added, deleted, or updated.
  • Accurate retrieval of data for the submitted query.
  • Testing tables along with the database's triggers, stored procedures, and views.
  • Verifying all the keys used in the database system.

3. Performance Testing:

In performance testing, certain specified parameters need to be evaluated to assess the website's performance under different conditions. Performance testing of a website generally covers the following:

  • Stability of the website under different loads (in terms of users) and resource conditions, so that it keeps working without failing or crashing, for example:
    • Normal load with full resource utilization.
    • Normal load with reduced resources.
    • Heavy load with full resource utilization.
    • Heavy load with reduced resources.
    • Extreme load with full access to resources.
    • Extreme load with limited access to resources.

    Here, resources may include hardware, supporting software, servers, memory (RAM), CPU, network speed and connectivity, and storage space. The conditions above may also include time criteria. Generally, load, stress, soak, spike, and volume testing are performed to ensure the stability and reliability of the website.

  • Scalability of the website to accommodate the growing and changing requirements (load and resources).
  • Response time, throughput, and speed of the website under the different loads and conditions stated above.

4. Usability Testing:

If users lose interest in using and navigating a website, traffic suffers directly. Usability testing is therefore an essential QA activity to ensure the user-friendliness of a website. The following points may be taken into account:

  • Testing the design, layout, and presentation of the website against users' needs and expectations.
  • Easy and smooth navigation and control between web pages throughout the website.
  • Content also plays a major role in keeping the user's interest, so content testing should cover spelling and grammatical errors, images, font size and style, colour, and the other perceivable elements of the website.

5. Compatibility Testing:

Compatibility testing is done to ensure compatibility, and consequently the intended and appropriate functioning of the website, across multiple variants of browsers, operating systems, devices, hardware and software, network configurations and settings, and display resolutions, along with their different versions.

6. Security Testing:

Website or web security testing is done to uncover and then correct or remove the security vulnerabilities present in the website. The following activities may be carried out as part of web security testing:

  • Penetration testing of the website, i.e. attacking the system to detect security flaws and loopholes.
  • Attempting to access the website with invalid or incorrect credentials (login and password) multiple times.
  • Checking the log files located on the server, which record information such as transactions, error messages, and security breaches.
  • Attempting to hack or crack passwords.
  • Verifying that confidential data and information are submitted over SSL/TLS (HTTPS).
  • Checking the website's resistance to SQL injection attacks (see the sketch after this list).
  • Ensuring that sessions terminate automatically after a considerable period of user inactivity, and that a logged-out user cannot continue to use the session.
  • Checking for unauthorized access to confidential data, information, web log files, and repositories.
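One of the checks above, resistance to SQL injection, can be sketched as follows: classic injection payloads are submitted to a login endpoint and the test asserts that they are not accepted. The endpoint, field names, payloads, and the exact "rejected" condition are assumptions; dedicated tools such as OWASP ZAP cover this far more thoroughly.

```typescript
import { test } from "node:test";
import assert from "node:assert/strict";

// Hypothetical login endpoint; adjust to your environment.
const LOGIN_URL = process.env.LOGIN_URL ?? "https://staging.example.com/login";

// A tiny, illustrative payload set; real checks use far more variations.
const injectionPayloads = [
  "' OR '1'='1",
  "admin'--",
  "\"; DROP TABLE users; --",
];

for (const payload of injectionPayloads) {
  test(`login rejects injection payload: ${payload}`, async () => {
    const res = await fetch(LOGIN_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ username: payload, password: payload }),
    });
    // The attempt must not be treated as a successful login. What "rejected"
    // looks like (status code, redirect, error body) depends on the application.
    assert.notEqual(res.status, 200, "injection payload appears to have been accepted");
  });
}
```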

7. User Interface Testing:

A website's user interface ties together three main components: the application, the web server, and the database server. These three components need to be tested together to ensure proper interfacing and an accurate, appropriate flow of data between them.

Conclusion:

The list above is a general checklist that covers most kinds of websites and the important testing areas, along with the corresponding testing methods. A tester may expand it based on the specified requirements and on his or her own judgement, to carry out more in-depth and thorough testing of the website so that nothing gets missed.